SQL and DB2 create command

I am going through a tutorial on how to use DB2 in a Linux environment.
I am supposed to connect to a database, create a table, and insert some data under the db2 shell:
db2 connect to c3421m
db2
db2 => update command options using z ON Assignment_0.txt
db2 => update command options using v ON
db2 => CREATE TABLE BAND_2015
This is where I get stuck: the CREATE TABLE BAND_2015 statement above gives an error.
Code given:
create table band_2015 ( \
band_no integer not null primary key, \
band_name varchar(25) not null, \
band_home varchar(25) not null, \
band_type varchar(10) check (band_type in ('concert','rock','jazz','military')), \
b_start_date date not null, \
band_contact varchar(10) not null )
So how do I create this table? I was told to copy it to a text editor (do I save it as band_2015.sql?). I am completely new to this, but I have a lot of experience in other programming languages...

The problem is the statement terminator. By default it is the carriage return (Enter), but for your tutorial you need to enter multi-line statements, so you change the terminator by specifying a different character.
For a semicolon:
db2 -t
select *
from table;
For the hash sign (#) or any other character:
db2 -td#
select *
from table #
With no terminator character (the default), each statement must fit on a single line:
db2
select * from table

By default the DB2 command-line processor does not let commands and statements span multiple lines, so it treats CREATE TABLE BAND_2015 as a complete statement, which of course it is not. In the code given to you, those backslashes appear for a reason: they tell the CLP that the statement continues on the next line.
Alternatively, you can start the CLP with the command-line option -t, which designates the semicolon, instead of the newline, as the statement terminator. You can then type the statement as you did, without the backslashes, and terminate it with ";".
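To answer the follow-up about the text editor: yes, saving the statement to a file works well. A minimal sketch, assuming you save the statement (without the backslashes, terminated by a semicolon) as band_2015.sql and run it from the same Linux shell session:
# -t: ';' terminates statements, -v: echo each statement, -f: read from file
db2 connect to c3421m
db2 -tvf band_2015.sql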

Related

PostgreSQL Inserting Null works using COPY but fails using INSERT

I have a bigint column named mycolumn. I execute SQL scripts using the psql command.
Using COPY command:
COPY public.mytable (myothercol, mycolumn) FROM stdin;
1 \N
\.
This works. But the following does not work:
EXECUTE 'insert into public.mytable (myothercol, mycolumn) values ($1,$2);' USING
1,NULL;
This gives me error:
column "mycolumn" is of type bigint but expression is of type text
Why does insert not work for null value, whereas COPY works?
It's best to tell PostgreSQL explicitly to convert the parameter to bigint:
EXECUTE 'insert into public.mytable (myothercol, mycolumn) values ($1,$2::bigint);'
USING 1,NULL;
The problem is that PostgreSQL does not automatically know what data type a NULL is, so it guesses text. COPY does not have to guess a data type.
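Alternatively (a sketch of the same idea), the cast can go on the parameter expression in the USING list instead of inside the query string:
EXECUTE 'insert into public.mytable (myothercol, mycolumn) values ($1,$2);'
USING 1, NULL::bigint;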

get the current date and set it to variable in order to use it as table name in HIVE

I want to get the current date as YYMMDD and then assign it to a variable so that I can use it as a table name.
Here is my code:
set dates= date +%Y-%m-%d;
CREATE EXTERNAL TABLE IF NOT EXISTS dates(
id STRING,
region STRING,
city STRING)
But this doesn't work; the assignment seems to be wrong. Any idea?
Hive does not evaluate variables; it substitutes them as-is, so in your case the table name would be exactly the string 'date +%Y-%m-%d'. Also, it is not possible to use a UDF such as current_date() in place of a table name in DDL.
The solution is to compute the variable in the shell and pass it to Hive.
In the shell:
dates=$(date +%Y_%m_%d);
hive --hivevar date="$dates" -f myscript.hql
In the script:
use mydb; create table if not exists tab_${hivevar:date} (id int);
Or you can execute the Hive script from the command line with hive -e, in which case the variable can be substituted by the shell:
dates=$(date +%Y_%m_%d);
hive -e "use mydb; create table if not exists tab_${dates} (id int);"

simple sql script within a bash fails

#!/bin/bash
mysql -h 172.17.0.1 -P 13306 -u root -p123<<MYSQL_SCRIPT
USE 1DB;
CREATE TABLE 111 (
ID TEXT,
TEST_CASE TEXT
);
INSERT INTO 111 (ID, TEST_CASE)
VALUES ("111", "111");
MYSQL_SCRIPT
when run this script, bash returns: ERROR 1064 (42000) at line 4: You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near '111 (
ID TEXT,
TEST_CASE TEXT
)' at line 1
Possibly look at this answer - Are you allowed to use numbers as table names in MySQL? - and more specifically the line 'Identifiers may begin with a digit but unless quoted may not consist solely of digits.'
You could try this, quoting the table name with backticks (the MySQL/MariaDB identifier quote character), not single quotes:
USE 1DB;
CREATE TABLE `111` (
ID TEXT,
TEST_CASE TEXT
);
INSERT INTO `111` (ID, TEST_CASE)
VALUES ("111", "111");
MYSQL_SCRIPT
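One extra caveat when putting the backticks inside the here-document of the original script: with an unquoted delimiter, bash treats backticks as command substitution, so quote the delimiter (or escape the backticks). A sketch of the full corrected script, assuming the same host and credentials:
#!/bin/bash
# the quoted delimiter stops bash from interpreting the backticks
mysql -h 172.17.0.1 -P 13306 -u root -p123 <<'MYSQL_SCRIPT'
USE 1DB;
CREATE TABLE `111` (
ID TEXT,
TEST_CASE TEXT
);
INSERT INTO `111` (ID, TEST_CASE)
VALUES ('111', '111');
MYSQL_SCRIPT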

sqlite - Insert data into blob

I'm trying to insert binary data into a blob using SQLite3's shell, which means regular SQL statements. Here's my table:
CREATE TABLE MYTABLE
(ID INTEGER,
BINDATA BLOB NOT NULL,
SOMEFK INTEGER REFERENCES OTHERTABLE(ID) NOT NULL,
PRIMARY KEY(ID)
);
And this is the kind of insert statement I'm trying:
INSERT INTO MYTABLE (BINDATA, SOMEFK)
VALUES (__READBINDATA('/tmp/somefile'), 1);
With __READBINDATA(file) being the function I am looking for. Is that possible?
There is no built-in or shell function to read a file into a blob.
However, with the help of the hexdump tool, it's possible to transform a file's contents into a blob literal:
echo "insert into mytable(bindata, somefk) " \
"values(x'"$(hexdump -v -e '1/1 "%02x"' /tmp/somefile)"', 1);"
This command can then be piped into the sqlite3 shell.
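For example (a sketch; mydb.sqlite and /tmp/somefile are placeholders):
echo "insert into mytable(bindata, somefk) " \
"values(x'"$(hexdump -v -e '1/1 "%02x"' /tmp/somefile)"', 1);" | sqlite3 mydb.sqlite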

How do you use script variables in psql?

In MS SQL Server, I create my scripts to use customizable variables:
DECLARE @somevariable int
SELECT @somevariable = -1
INSERT INTO foo VALUES ( @somevariable )
I'll then change the value of @somevariable at runtime, depending on the value that I want in the particular situation. Since it's at the top of the script it's easy to see and remember.
How do I do the same with the PostgreSQL client psql?
Postgres variables are created through the \set command, for example ...
\set myvariable value
... and can then be substituted, for example, as ...
SELECT * FROM :myvariable.table1;
... or ...
SELECT * FROM table1 WHERE :myvariable IS NULL;
edit: As of psql 9.1, variables can be expanded in quotes as in:
\set myvariable value
SELECT * FROM table1 WHERE column1 = :'myvariable';
In older versions of the psql client:
... If you want to use the variable as the value in a conditional string query, such as ...
SELECT * FROM table1 WHERE column1 = ':myvariable';
... then you need to include the quotes in the variable itself as the above will not work. Instead define your variable as such ...
\set myvariable 'value'
However, if, like me, you ran into a situation in which you wanted to make a string from an existing variable, I found the trick to be this ...
\set quoted_myvariable '\'' :myvariable '\''
Now you have both a quoted and unquoted variable of the same string! And you can do something like this ....
INSERT INTO :myvariable.table1 SELECT * FROM table2 WHERE column1 = :quoted_myvariable;
One final word on PSQL variables:
They don't expand if you enclose them in single quotes in the SQL statement.
Thus this doesn't work:
SELECT * FROM foo WHERE bar = ':myvariable'
To expand to a string literal in a SQL statement, you have to include the quotes in the variable set. However, the variable value already has to be enclosed in quotes, which means that you need a second set of quotes, and the inner set has to be escaped. Thus you need:
\set myvariable '\'somestring\''
SELECT * FROM foo WHERE bar = :myvariable
EDIT: starting with PostgreSQL 9.1, you may write instead:
\set myvariable somestring
SELECT * FROM foo WHERE bar = :'myvariable'
You can try to use a WITH clause.
WITH vars AS (SELECT 42 AS answer, 3.14 AS appr_pi)
SELECT t.*, vars.answer, t.radius*vars.appr_pi
FROM table AS t, vars;
Specifically for psql, you can pass psql variables from the command line too; you can pass them with -v. Here's a usage example:
$ psql -v filepath=/path/to/my/directory/mydatafile.data regress
regress=> SELECT :'filepath';
?column?
---------------------------------------
/path/to/my/directory/mydatafile.data
(1 row)
Note that the colon is unquoted, then the variable name itself is quoted. Odd syntax, I know. This only works in psql; it won't work in (say) PgAdmin-III.
This substitution happens during input processing in psql, so you can't (say) define a function that uses :'filepath' and expect the value of :'filepath' to change from session to session. It'll be substituted once, when the function is defined, and then will be a constant after that. It's useful for scripting but not runtime use.
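The same substitution works in script files run with -f, which is where it is most useful. A minimal sketch (script.sql, the accounts table, and the variable names are made up for illustration; in a reasonably recent psql, :"var" expands as a quoted identifier and :'var' as a quoted literal):
$ cat script.sql
SELECT * FROM :"tablename" WHERE username = :'uname';
$ psql -v tablename=accounts -v uname=alice -f script.sql regress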
FWIW, the real problem was that I had included a semicolon at the end of my \set command:
\set owner_password 'thepassword';
The semicolon was interpreted as an actual character in the variable:
\echo :owner_password
thepassword;
So when I tried to use it:
CREATE ROLE myrole LOGIN UNENCRYPTED PASSWORD :owner_password NOINHERIT CREATEDB CREATEROLE VALID UNTIL 'infinity';
...I got this:
CREATE ROLE myrole LOGIN UNENCRYPTED PASSWORD thepassword; NOINHERIT CREATEDB CREATEROLE VALID UNTIL 'infinity';
That not only failed to set the quotes around the literal, but split the command into 2 parts (the second of which was invalid as it started with "NOINHERIT").
The moral of this story: PostgreSQL "variables" are really macros used in text expansion, not true values. I'm sure that comes in handy, but it's tricky at first.
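For the record, a working version of the example above (a sketch; it leaves the semicolon off the \set line, lets psql add the quotes via :'var', and uses PASSWORD instead of the now-removed UNENCRYPTED PASSWORD):
\set owner_password 'thepassword'
\echo :owner_password
thepassword
CREATE ROLE myrole LOGIN PASSWORD :'owner_password' NOINHERIT CREATEDB CREATEROLE VALID UNTIL 'infinity';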
PostgreSQL (since version 9.0) allows anonymous code blocks in any of the supported server-side scripting languages:
DO '
DECLARE somevariable int = -1;
BEGIN
INSERT INTO foo VALUES ( somevariable );
END
' ;
http://www.postgresql.org/docs/current/static/sql-do.html
As everything is inside a string, external string variables being substituted in will need to be escaped and quoted twice. Using dollar quoting instead will not give full protection against SQL injection.
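For example, the same block written with dollar quoting (a sketch; foo is the table from the question):
DO $$
DECLARE
    somevariable int := -1;
BEGIN
    INSERT INTO foo VALUES ( somevariable );
END
$$;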
You need to use one of the procedural languages, such as PL/pgSQL, not the SQL proc language.
In PL/pgSQL you can use variables right in SQL statements.
For single quotes you can use the quote_literal() function.
I solved it with a temp table.
CREATE TEMP TABLE temp_session_variables (
"sessionSalt" TEXT
);
INSERT INTO temp_session_variables ("sessionSalt") VALUES (current_timestamp || RANDOM()::TEXT);
This way, I had a "variable" I could use across multiple queries that is unique to the session. I needed it to generate unique "usernames" without collisions when importing users with the same user name.
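Later statements in the same session can then read the value back, for example (a sketch; the users table is made up):
SELECT "sessionSalt" FROM temp_session_variables;
INSERT INTO users (username)
SELECT 'user_' || "sessionSalt" FROM temp_session_variables;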
Another approach is to (ab)use the PostgreSQL GUC mechanism to create variables. See this prior answer for details and examples.
You declare the GUC in postgresql.conf, then change its value at runtime with SET commands and get its value with current_setting(...).
I don't recommend this for general use, but it could be useful in narrow cases like the one mentioned in the linked question, where the poster wanted a way to provide the application-level username to triggers and functions.
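For illustration, the runtime side looks roughly like this (a sketch; myapp.username is a made-up setting name, and on PostgreSQL versions before 9.2 the myapp prefix has to be declared in postgresql.conf as the linked answer describes):
SET myapp.username = 'some_app_user';
SELECT current_setting('myapp.username');   -- returns 'some_app_user'
-- or limited to the current transaction:
BEGIN;
SET LOCAL myapp.username = 'another_user';
SELECT current_setting('myapp.username');
COMMIT;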
I've found this question and the answers extremely useful, but also confusing. I had lots of trouble getting quoted variables to work, so here is the way I got it working:
\set deployment_user username -- username
\set deployment_pass '\'string_password\''
ALTER USER :deployment_user WITH PASSWORD :deployment_pass;
This way you can define the variable in one statement. When you use it, single quotes will be embedded into the variable.
NOTE! When I put a comment after the quoted variable it got sucked in as part of the variable when I tried some of the methods in other answers. That was really screwing me up for a while. With this method comments appear to be treated as you'd expect.
I really miss that feature. The only way to achieve something similar is to use functions.
I have used it in two ways:
perl functions that use $_SHARED variable
store your variables in table
Perl version:
CREATE FUNCTION var(name text, val text) RETURNS void AS $$
$_SHARED{$_[0]} = $_[1];
$$ LANGUAGE plperl;
CREATE FUNCTION var(name text) RETURNS text AS $$
return $_SHARED{$_[0]};
$$ LANGUAGE plperl;
Table version:
CREATE TABLE var (
sess bigint NOT NULL,
key varchar NOT NULL,
val varchar,
CONSTRAINT var_pkey PRIMARY KEY (sess, key)
);
CREATE FUNCTION var(key varchar, val anyelement) RETURNS void AS $$
DELETE FROM var WHERE sess = pg_backend_pid() AND key = $1;
INSERT INTO var (sess, key, val) VALUES (pg_backend_pid(), $1, $2::varchar);
$$ LANGUAGE 'sql';
CREATE FUNCTION var(varname varchar) RETURNS varchar AS $$
SELECT val FROM var WHERE sess = pg_backend_pid() AND key = $1;
$$ LANGUAGE 'sql';
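Usage is the same for either version: the two-argument form sets a variable and the one-argument form reads it back (a sketch; the explicit cast keeps the anyelement parameter of the table version unambiguous):
SELECT var('answer', '42'::text);  -- set
SELECT var('answer');              -- returns '42'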
Notes:
plperlu is faster than perl
pg_backend_pid is not best session identification, consider using pid combined with backend_start from pg_stat_activity
the table version is also bad because you have to clean it up occasionally (without deleting variables for sessions that are still active)
Variables in psql suck. If you want to declare an integer, you have to enter the integer, then do a carriage return, then end the statement in a semicolon. Observe:
Let's say I want to declare an integer variable my_var and insert it into a table test:
Example table test:
thedatabase=# \d test;
Table "public.test"
Column | Type | Modifiers
--------+---------+---------------------------------------------------
id | integer | not null default nextval('test_id_seq'::regclass)
Indexes:
"test_pkey" PRIMARY KEY, btree (id)
Clearly, nothing in this table yet:
thedatabase=# select * from test;
id
----
(0 rows)
We declare a variable. Notice how the semicolon is on the next line!
thedatabase=# \set my_var 999
thedatabase=# ;
Now we can insert. We have to use this weird ":''" looking syntax:
thedatabase=# insert into test(id) values (:'my_var');
INSERT 0 1
It worked!
thedatabase=# select * from test;
id
-----
999
(1 row)
Explanation:
So... what happens if we don't put the semicolon on the next line, but on the \set line itself? Have a look:
We declare my_var without the new line.
thedatabase=# \set my_var 999;
Let's select my_var.
thedatabase=# select :'my_var';
?column?
----------
999;
(1 row)
WTF is that? It's not an integer, it's a string 999;!
thedatabase=# select 999;
?column?
----------
999
(1 row)
I've posted a new solution for this on another thread.
It uses a table to store variables, and it can be updated at any time. A static, immutable getter function is dynamically created (by another function), triggered by updates to your table. You get nice table storage plus the blazing-fast speed of an immutable getter.
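The linked thread has the full code; roughly, the idea is something like this (a sketch with made-up names, not the actual code from that answer):
CREATE TABLE app_vars (answer int NOT NULL);
-- rebuild an IMMUTABLE getter whenever the table changes
CREATE OR REPLACE FUNCTION refresh_answer_getter() RETURNS trigger AS $$
BEGIN
    EXECUTE format(
        'CREATE OR REPLACE FUNCTION get_answer() RETURNS int IMMUTABLE LANGUAGE sql AS %L',
        (SELECT 'SELECT ' || answer::text FROM app_vars LIMIT 1));
    RETURN NULL;
END
$$ LANGUAGE plpgsql;
CREATE TRIGGER app_vars_refresh
AFTER INSERT OR UPDATE ON app_vars
FOR EACH STATEMENT EXECUTE PROCEDURE refresh_answer_getter();
INSERT INTO app_vars VALUES (42);   -- creates get_answer()
SELECT get_answer();                -- 42, as cheap as a constant in later queries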