Query:
CREATE TABLE SRC(SRC_STRING VARCHAR(20))
CREATE OR REPLACE TABLE TGT(tgt_STRING VARCHAR(10))
INSERT INTO SRC VALUES('JKNHJYGHTFGRTYGHJ')
INSERT INTO TGT(TGT_STRING) SELECT SRC_STRING::VARCHAR(10) FROM SRC
Error: String 'JKNHJYGHTFGRTYGHJ' is too long and would be truncated
Is there any way to enforce length (other than for the COPY command) while inserting data from a longer column into a shorter one?
I'd recommend using the SUBSTR() function to pick the piece of data you want. The example below takes the first 10 characters (if available; if there were only 5, it would use those 5 characters).
CREATE OR REPLACE TEMPORARY TABLE SRC(
src_string VARCHAR(20));
CREATE OR REPLACE TEMPORARY TABLE TGT(
tgt_STRING VARCHAR(10));
INSERT INTO src
VALUES('JKNHJYGHTFGRTYGHJ');
INSERT INTO tgt(tgt_string)
SELECT SUBSTR(src_string, 1, 10)
FROM SRC;
SELECT * FROM tgt; --JKNHJYGHTF
Here's the documentation on the function:
https://docs.snowflake.com/en/sql-reference/functions/substr.html
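If you prefer, Snowflake also provides a LEFT() function, which is equivalent to SUBSTR() starting at position 1, so the same insert (a minimal variation on the example above) can be written as:
INSERT INTO tgt(tgt_string)
SELECT LEFT(src_string, 10)
FROM src;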
How can I insert data into a table from the result of a finished procedure, which was created using other scripts (the data are in the rows of the result)? This solution was necessary because I must concatenate coordinates.
One of the finishing steps is:
select concat ('insert into table_shop ([IU], [ODD]) values ', data1)as PasteDat
from #tmp_07
The data to upload is in the value data1.
When the scripts finish, the result is a lot of rows.
For example:
insert into table_shop ([IU], [ODD]) values ('A0001', 'D08')
insert into table_shop ([IU], [ODD]) values ('Agw44', 'D10')
insert into table_shop ([IU], [ODD]) values ('A5888', 'D18')
...
What I do now is copy the rows and paste them into another new query. Is there a more elegant way to do it in bulk?
Hope this helps
Use AdventureWorks2012
GO
Create Table #temp --table into which we need to insert
(
[DepartmentID] int,
[Name] varchar(50)
)
GO
Create PROCEDURE SP_ResultSet --SP which returns a result set
as
Select [DepartmentID]
,[Name]
from [HumanResources].[Department]
GO
Insert into #temp EXEC SP_ResultSet -- serves the purpose
GO
Select * from #temp order by [DepartmentID]
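Applied back to the original question, this means you can skip generating INSERT statements as text altogether and insert straight from the temp table. A sketch, assuming #tmp_07 holds the two values in separate columns (the column names iu and odd are assumptions, since only the concatenated data1 string was shown):
Insert into table_shop ([IU], [ODD])
Select iu, odd --hypothetical columns holding the two values
from #tmp_07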
I have the text file below, containing different words:
My aim is to insert only the 4-character words from the text file into a table variable called #temp, using the bcp command.
So, at the end, the table variable #temp will look like this:
Create a table where you will store the data coming from your file:
create table import(WORDS nvarchar(100))
Import data from file with bcp into the table created in the first step:
bcp [test].[dbo].[import] in d:\test.txt -c -T
Declare a @table variable:
declare @table table ([ID] int identity(1,1), WORDS nvarchar(100))
Insert into the @table variable only the words whose length is exactly 4:
insert into @table
select WORDS
from import
where len(WORDS) = 4
Now the @table variable contains this data:
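One caveat, and an assumption beyond the original answer: if test.txt has Windows (CRLF) line endings, bcp -c can leave a trailing carriage return on each imported word, which inflates len(). A variant that strips it first:
insert into @table
select replace(WORDS, char(13), '') --drop the trailing CR that bcp -c keeps from CRLF files
from import
where len(replace(WORDS, char(13), '')) = 4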
I have 2 csv files. In one file I have a phone number with prices and in the second file I have a phone number with the name of its owner.
First file: file1.csv
491732234332;30,99
491723427343;12,59
491732097232;33,31
Second file: file2.csv
01732/234332;Ben Jefferson
01723/427343;Jon Doe
01732/097232;Benjamin Franklin
My problem is that the phone number columns are formatted differently, and I cannot find a way to compare them.
Desired output is:
01732/234332;30,99;Ben Jefferson
01723/427343;12,59;Jon Doe
01732/097232;33,31;Benjamin Franklin
My SQL statement is:
create temp table FILETB1
(phonenr char(30),
price char(30)
);
create temp table FILETB2
(phonenr char(40),
owner char(60)
);
load from "file1.csv"
insert into FILETB1;
load from "file2.csv"
insert into FILETB2;
unload to "output.csv"
select FILETB1.phonenr, FILETB1.price, FILETB2.owner
from FILETB1, FILETB2
where FILETB1.phonenr = FILETB2.phonenr
How do I have to modify my WHERE clause to be able to compare both columns?
We are working on Linux with IBM INFORMIX-SQL Version 7.50.UC5, which does not make finding a working solution any easier since many functions are not supported...
Any help is highly appreciated!
Using just the facilities of ISQL, you can use:
CREATE TEMP TABLE FILETB1
(
phonenr CHAR(30),
price CHAR(30)
);
CREATE TEMP TABLE FILETB2
(
phonenr CHAR(40),
owner CHAR(60)
);
LOAD FROM "file1.csv" DELIMITER ';' INSERT INTO FILETB1;
LOAD FROM "file2.csv" DELIMITER ';' INSERT INTO FILETB2;
UNLOAD TO "output.csv" DELIMITER ';'
SELECT FILETB2.phonenr, FILETB1.price, FILETB2.owner
FROM FILETB1, FILETB2
WHERE FILETB1.phonenr[3,6] = FILETB2.phonenr[2,5]
AND FILETB1.phonenr[7,12] = FILETB2.phonenr[7,12];
Testing with DB-Access, I got:
$ dbaccess stores so-35360310.sql
Database selected.
Temporary table created.
Temporary table created.
3 row(s) loaded.
3 row(s) loaded.
3 row(s) unloaded.
Database closed.
$ cat output.csv
01732/234332;30,99;Ben Jefferson;
01723/427343;12,59;Jon Doe;
01732/097232;33,31;Benjamin Franklin;
$
The key is using the built-in substring [start,end] operator. You compare the two parts of the phone numbers that are comparable. And you select the number from file2.csv (table FILETB2) because that's the format you wanted.
For the sample data, of course, you could simply use Unix command line tools to do the job, but knowing how to do it inside the DBMS is helpful too.
You could also use the SUBSTR(col, start, len) function:
UNLOAD TO "output2.csv" DELIMITER ';'
SELECT FILETB2.phonenr, FILETB1.price, FILETB2.owner
FROM FILETB1, FILETB2
WHERE SUBSTR(FILETB1.phonenr, 3, 4) = SUBSTR(FILETB2.phonenr, 2, 4)
AND SUBSTR(FILETB1.phonenr, 7, 6) = SUBSTR(FILETB2.phonenr, 7, 6);
This produces the same output from the sample data.
If ISQL does not recognize the DELIMITER ';' clause to the UNLOAD (or LOAD) pseudo-SQL statements, then you can set the environment variable DBDELIMITER=';' before running the script and remove those clauses from the SQL.
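For example, with DBDELIMITER=';' set and exported in the environment before running the script, it reduces to (same query, DELIMITER clauses removed):
LOAD FROM "file1.csv" INSERT INTO FILETB1;
LOAD FROM "file2.csv" INSERT INTO FILETB2;
UNLOAD TO "output.csv"
SELECT FILETB2.phonenr, FILETB1.price, FILETB2.owner
FROM FILETB1, FILETB2
WHERE FILETB1.phonenr[3,6] = FILETB2.phonenr[2,5]
AND FILETB1.phonenr[7,12] = FILETB2.phonenr[7,12];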
My suggestion: for file2.csv, if you use tr you get:
[infx1210@tardis ~]$ cat file2.csv | tr '/' ';' > file.2
[infx1210@tardis ~]$ cat file.2
01732;234332;Ben Jefferson
01723;427343;Jon Doe
01732;097232;Benjamin Franklin
[infx1210@tardis ~]$
For file1.csv, if you know that the prefix is always 6 digits long, you can use:
[infx1210@tardis ~]$ cut -c7- file1.csv > file.1
[infx1210@tardis ~]$ cat file.1
234332;30,99
427343;12,59
097232;33,31
[infx1210@tardis ~]$
As you can see, the 1st field of file.1 can be crossed directly with the 2nd field of file.2.
Then you can execute:
CREATE TEMP TABLE filetb1(
phonenr CHAR(30),
price CHAR(30)
);
CREATE TEMP TABLE filetb2(
prefix CHAR(30),
phonenr CHAR(30),
owner CHAR(60)
);
LOAD FROM 'file.1' DELIMITER ';' INSERT INTO filetb1;
LOAD FROM 'file.2' DELIMITER ';' INSERT INTO filetb2;
UNLOAD TO 'output.csv' DELIMITER ';'
SELECT
TRIM(f2.prefix) || '/' || TRIM(f2.phonenr),
f1.price,
f2.owner
FROM
filetb1 f1, filetb2 f2
WHERE
f1.phonenr = f2.phonenr;
And you'll get the desired output:
[infx1210@tardis ~]$ cat output.csv
01732/234332;30,99;Ben Jefferson;
01723/427343;12,59;Jon Doe;
01732/097232;33,31;Benjamin Franklin;
[infx1210@tardis ~]$
If you're not sure that the prefix in file1.csv is always 6 digits long, leave that file alone and use LIKE:
CREATE TEMP TABLE filetb1(
phonenr CHAR(30),
price CHAR(30)
);
CREATE TEMP TABLE filetb2(
prefix CHAR(30),
phonenr CHAR(30),
owner CHAR(60)
);
LOAD FROM 'file1.csv' DELIMITER ';' INSERT INTO filetb1;
LOAD FROM 'file.2' DELIMITER ';' INSERT INTO filetb2;
UNLOAD TO 'output.csv' DELIMITER ';'
SELECT
TRIM(f2.prefix) || '/' || TRIM(f2.phonenr),
f1.price,
f2.owner
FROM
filetb1 f1, filetb2 f2
WHERE
TRIM(f1.phonenr) LIKE '%' || TRIM(f2.phonenr);
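The wildcard goes at the front because the full number loaded from file1.csv ends with the digits stored in f2.phonenr; the TRIM() calls keep the blank padding of the CHAR columns from spoiling the match.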
I'm trying to insert binary data into a blob using SQLite3's shell, which means regular SQL statements. Here's my table:
CREATE TABLE MYTABLE
(ID INTEGER,
BINDATA BLOB NOT NULL,
SOMEFK INTEGER REFERENCES OTHERTABLE(ID) NOT NULL,
PRIMARY KEY(ID)
);
And this is the kind of insert statement I'm trying:
INSERT INTO MYTABLE (BINDATA, SOMEFK)
VALUES (__READBINDATA('/tmp/somefile'), 1);
With __READBINDATA(file) being the function I am looking for. Is that possible?
There is no built-in SQL function to read a file into a blob (but see the note below about newer shell versions).
However, with the help of the hexdump tool, it's possible to transform a file's contents into a blob literal:
echo "insert into mytable(bindata, somefk) " \
"values(x'"$(hexdump -v -e '1/1 "%02x"' /tmp/somefile)"', 1);"
This command can then be piped into the sqlite3 shell.
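As a side note, recent versions of the sqlite3 shell provide a readfile() function (added by the shell itself, not by the SQLite library), which makes the hexdump detour unnecessary:
INSERT INTO MYTABLE (BINDATA, SOMEFK)
VALUES (readfile('/tmp/somefile'), 1);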