I'm using AgensGraph release 1.3, but I can't use a MATCH clause inside the IN operator of a WHERE clause. For example:
eya=# CREATE (a:person {name:['ljh','jhlee'], age:102});
GRAPH WRITE (INSERT VERTEX 1, INSERT EDGE 0)
CREATE TABLE t_name (
name text,
age int
);
CREATE TABLE
insert into t_name values ('ljh', '102');
INSERT 0 1
eya=# insert into t_name values ('khan', '101');
INSERT 0 1
eya=# SELECT *
FROM t_name
WHERE name IN (MATCH (a:person) WHERE a.age = 102 RETURN a.name);
ERROR: syntax error at or near "MATCH" LINE 3: WHERE name IN (MATCH
(a:person) WHERE a.age = 102 RETURN a....
You need a regexp workaround in the AgensGraph 1.3 release, because in that version automatic and explicit type casting does not work. So your query can be executed as below:
SELECT * FROM t_name WHERE name IN (SELECT regexp_replace(unnest(string_to_array(t1.name::text, ',')),'[^a-zA-Z0-9]', '', 'g')
FROM (MATCH (a:person) WHERE a.age = 102 RETURN a.name) t1);
name | age
------+-----
ljh | 102
I am having some trouble creating an update/insert query.
From a .CSV file, I create a temporary table (the heading and one data row as an example):
sku    | product_id | description_en     | description_ru     | description_lv
-------+------------+--------------------+--------------------+--------------------
EE1010 | 4633       | Description in Eng | Description in Rus | Description in Lat
I intend to iterate over each row and update/insert rows into another table with this query:
UPDATE ProductLocalized
SET FullDescription = (CASE
WHEN LanguageID = 7 THEN description_en
WHEN LanguageID = 12 THEN description_ru
WHEN LanguageID = 14 THEN description_lv
END)
WHERE LanguageID IN (7, 12, 14)
AND ProductID = product_id;
My problem is how to add the INSERT part when some of the languages are missing.
You can use an upsert (MERGE) in SQL to achieve this. Here is a quick example:
create table dbo.test_source
(
id int identity(1,1),
language varchar(50),
description varchar(100)
)
create table dbo.test_dest
(
id int identity(1,1),
language varchar(50),
description varchar(100)
)
Insert into dbo.test_source values ('English', 'British language')
Insert into dbo.test_source values ('Hindi', 'Indian language')
Insert into dbo.test_source values ('Chinese', 'China')
Insert into dbo.test_dest values ('English', 'British language')
Insert into dbo.test_dest values ('Hindi', 'NA')
SELECT * FROM dbo.test_Source
SELECT * FROM dbo.test_Dest
Result
id          language   description
----------- ---------- -----------------
1           English    British language
2           Hindi      Indian language
3           Chinese    China

id          language   description
----------- ---------- -----------------
1           English    British language
2           Hindi      NA
MERGE dbo.test_dest as MyTarget
USING
(
SELECT
ID,
Language,
Description
FROM dbo.test_source
) as MySource
ON MyTarget.Language = MySource.Language
WHEN MATCHED AND NOT
(
MySource.Description = ISNULL(MyTarget.Description, '')
)
THEN
UPDATE
Set MyTarget.Description = MySource.Description
WHEN NOT MATCHED BY TARGET
THEN
INSERT (Language, description)
VALUES (MySource.Language
,MySource.Description);
SELECT * FROM dbo.test_Dest
Result
id          language   description
----------- ---------- -----------------
1           English    British language
2           Hindi      Indian language
3           Chinese    China
We can see that the record with id 2 got updated with the source table description, and the record with id 3 got inserted because it did not exist in the destination table.
You can use IF EXISTS or IF NOT EXISTS statements to filter the records and then apply the INSERT or UPDATE commands.
Example:
IF NOT EXISTS ( condition ) THEN INSERT
IF EXISTS ( condition ) THEN UPDATE
An alternative way is:
IF EXISTS ( condition )
    UPDATE
ELSE
    INSERT
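A minimal sketch of that pattern against the dbo.test_dest table above, upserting one (language, description) pair at a time; the 'Chinese' row is just illustrative:
-- Update the row if the language already exists in the destination table,
-- otherwise insert it.
IF EXISTS (SELECT 1 FROM dbo.test_dest WHERE language = 'Chinese')
BEGIN
    UPDATE dbo.test_dest
    SET description = 'China'
    WHERE language = 'Chinese';
END
ELSE
BEGIN
    INSERT INTO dbo.test_dest (language, description)
    VALUES ('Chinese', 'China');
END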
After adding a new column, is it possible to insert values for that column using only one query?
TABLE:
id | category | level | new_column_name
---+-------------+-------+----------------
1 | Character | 3 |
2 | Character | 2 |
3 | Character | 2 |
4 | Character | 5 |
I'd need to do something like
INSERT INTO table_name
(new_column_name)
VALUES
('foo'), -- value for new_column_name : row1
('bar'), -- value for new_column_name : row2
...
;
I already tried a similar query in the PostgreSQL CLI to insert the values, but it fails because it uses NULL for the unset columns, one of which (id) is the PRIMARY KEY, so it can't be NULL.
This is the error it logs:
ERROR: null value in column "id" violates not-null constraint
DETAIL: Failing row contains (null, null, null, 1359).
EDIT:
What I'm doing is more an update than an insert, but what I'd like to be able to do is set all the different values at once.
For instance, if possible, I would avoid doing this:
UPDATE table_name
SET new_column_name = 'foo'
WHERE id = 1;
UPDATE table_name
SET new_column_name = 'bar'
WHERE id = 2;
--...
You can use VALUES to construct an in-line table containing the values to be updated:
UPDATE table_name AS v
SET new_column_name = s.val
FROM (VALUES (1, 'foo'), (2, 'bar')) AS s(id, val)
WHERE v.id = s.id
Demo here
You can use a huge CASE:
UPDATE table_name
SET new_column_name
= CASE WHEN id = 1 THEN 'foo'
WHEN id = 2 THEN 'bar'
END;
I have a schema as shown below, and I want to run a query that produces a column in the output for every row of the points table.
So for each usage row I want to multiply the usage amount by the amount of the referenced points_id, then sum that up and group by person. For the example data I'd want output that looks like this:
Name | foo | bar | baz
-------|------|------|------
Scott | 10.0 | 24.0 | 0.0
Sam | 0.0 | 0.0 | 46.2
Here's the schema/data:
CREATE TABLE points (
ident int primary key NOT NULL,
abbrev VARCHAR NOT NULL,
amount real NOT NULL
);
CREATE TABLE usage (
ident int primary key NOT NULL,
name VARCHAR NOT NULL,
points_id integer references points (ident),
amount real
);
INSERT INTO points (ident, abbrev, amount) VALUES
(1, 'foo', 1.0),
(2, 'bar', 2.0),
(3, 'baz', 3.0);
INSERT INTO usage (ident, name, points_id, amount) VALUES
(1, 'Scott', 1, 10),
(2, 'Scott', 2, 12),
(3, 'Sam', 3, 3.4),
(4, 'Sam', 3, 12);
I'm using PostgreSQL 9.2.8
The data is just a sample. There are thousands of rows in the real usage table and probably a dozen in the points table. The real intent here is that I don't want to hardcode all the points summations, since I use them in many functions.
select
t1.name,
sum(case when t2.abbrev='foo' then t1.amount*t2.amount else 0 end) as foo,
sum(case when t2.abbrev='bar' then t1.amount*t2.amount else 0 end) as bar,
sum(case when t2.abbrev='baz' then t1.amount*t2.amount else 0 end) as baz
from usage t1 inner join points t2 on t1.points_id=t2.ident
group by t1.name;
SQL Fiddle example: http://sqlfiddle.com/#!15/cc84a/6
Use the following PostgreSQL function for dynamic cases:
create or replace function sp_test()
returns void as
$$
declare
    cases         character varying;
    sql_statement text;
begin
    select string_agg(concat('sum(case when t2.abbrev=', '''', abbrev, '''',
                             ' then t1.amount*t2.amount else 0 end) as ', abbrev), ',')
      into cases
      from points;

    drop table if exists temp_data;

    sql_statement := concat('create temporary table temp_data as select
        t1.name, ', cases, '
        from usage t1 inner join points t2 on t1.points_id = t2.ident
        group by t1.name');

    execute sql_statement;
end;
$$
language plpgsql;
The function uses a temporary table to store the dynamically pivoted data.
Call the function in the following way to get the data:
select * from sp_test(); select * from temp_data;
I have the following table and data:
1) I want to show only the last part of the sentence.
2) Remove any single character from the end of the sentence.
3) Remove any special characters like -, #, ?, _ from the end of words.
create table t1 (id number(9) , words varchar2(20));
insert into t1 values(1,'hello UK');
insert into t1 values(2,'hello Eypt');
insert into t1 values(3,'hello ALL');
insert into t1 values(4,'hello I');
insert into t1 values(5,'hello USA');
insert into t1 values(6,'hello #');
insert into t1 values(7,'hello #');
insert into t1 values(8,'hello A');
insert into t1 values(9,'hello 20');
insert into t1 values(10,'hello 2-2-2010');
I have used this:
select REGEXP_SUBSTR(words, '\S+$') from t1;
expected results
id word
1 UK
2 EGYPT
3 ALL
5 USA
9 20
10 2-2-2010
MySQL version
SELECT id, SUBSTRING_INDEX(words, ' ', -1) AS word
FROM t1
WHERE LENGTH(SUBSTRING_INDEX(words, ' ', -1)) > 1;
OracleDB version (the one you must use)
SELECT id, SUBSTR(words, INSTR(words, ' ') + 1) AS word
FROM t1
WHERE LENGTH(SUBSTR(words, INSTR(words, ' ') + 1)) > 1;
Both return UK for the first row (id 1).
HERE is the explanation for SUBSTR used with INSTR
Good luck :)
Read your database manual; they all have functions that do string manipulation, and the exact syntax depends on the product. SUBSTRING(words, 7, DATALENGTH(words) - 6) would work in SQL Server, given the fixed six-character 'hello ' prefix.
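A minimal sketch of that SQL Server variant against the t1 table above, assuming every value starts with the fixed prefix 'hello ' and that single-character results should be dropped, as in the expected output:
-- Take everything after the 6-character 'hello ' prefix and keep only
-- results longer than one character.
SELECT id, SUBSTRING(words, 7, DATALENGTH(words) - 6) AS word
FROM t1
WHERE DATALENGTH(words) - 6 > 1;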
There is a table in Oracle with the columns:
id | start_number | end_number
---+--------------+------------
1 | 100 | 200
2 | 151 | 200
3 | 25 | 49
4 | 98 | 99
5 | 49 | 100
There is a list of numbers (50, 99, 150).
I want an SQL statement that returns all the ids where any of the numbers in the list is equal to or between the start_number and the end_number.
Using the above example, ids 1, 4 and 5 should be returned.
1 - 150 is between or equal to 100 and 200
2 - none of the numbers are between or equal to 151 and 200
3 - none of the numbers are between or equal to 25 and 49
4 - 99 is between or equal to 98 and 99
5 - 50 and 99 are between or equal to 49 and 100
drop table TEMP_TABLE;
create table TEMP_TABLE(
THE_ID number,
THE_START number,
THE_END number
);
insert into TEMP_TABLE(THE_ID, THE_START, THE_END) values (1, 100, 200);
insert into TEMP_TABLE(THE_ID, THE_START, THE_END) values (2, 151, 200);
insert into TEMP_TABLE(THE_ID, THE_START, THE_END) values (3, 25, 49);
insert into TEMP_TABLE(THE_ID, THE_START, THE_END) values (4, 98, 99);
insert into TEMP_TABLE(the_id, the_start, the_end) values (5, 49, 100);
The following is the solution I came up with based on the comments and answers below plus some additional research:
SELECT
*
from
TEMP_TABLE
where
EXISTS (select * from(
select column_value as id
from table(SYS.DBMS_DEBUG_VC2COLL(50,99,150))
)
where id
BETWEEN TEMP_TABLE.the_start AND TEMP_TABLE.the_end
)
This works too:
SELECT
*
from
TEMP_TABLE
where
EXISTS (select * from(
select column_value as id
from table(sys.ku$_vcnt(50,99,150))
)
where id
BETWEEN TEMP_TABLE.the_start AND TEMP_TABLE.the_end
)
Here is a full example:
create table #list (
number int
)
create table #table (
id int,
start_number int,
end_number int
)
insert into #list values(50)
insert into #list values(99)
insert into #list values(150)
insert into #table values(1,100,200)
insert into #table values(2,151,200)
insert into #table values(3,25,49)
insert into #table values(4,98,99)
insert into #table values(5,49,100)
select distinct a.* from #table a
inner join #list l --your list of numbers
on l.number between a.start_number and a.end_number
drop table #list
drop table #table
You'll simply need to remove the code about #table (create, insert and drop) and put your table in the select.
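For instance, adapted to the TEMP_TABLE from the question (with #list still holding the numbers), the final query would look roughly like this:
-- Sketch: join the list of numbers directly to the question's TEMP_TABLE.
SELECT DISTINCT a.*
FROM TEMP_TABLE a
INNER JOIN #list l --your list of numbers
    ON l.number BETWEEN a.THE_START AND a.THE_END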
It partly depends on how you are storing your list of numbers. I'll assume that they're in another table for now, as even then you have many options.
SELECT
*
FROM
yourTable
WHERE
EXISTS (SELECT * FROM yourList WHERE number BETWEEN yourTable.start_number AND yourTable.end_number)
Or...
SELECT
*
FROM
yourTable
INNER JOIN
yourList
ON yourList.number BETWEEN yourTable.start_number AND yourTable.end_number
Both of those are the simplest expressions, and work well for small data sets. If your list of numbers is relatively small, and your original data is relatively large, however, this may not scale well. This is because both of the above scan the whole of yourTable and then check each record against yourList.
What may be preferable is to scan the list, and then attempt to use indexes to check against the original data. This would require you to be able to reverse the BETWEEN condition to yourTable.start_number BETWEEN x AND y.
This can only be done if you know the maximum gap between start_number and end_number.
SELECT
*
FROM
yourList
INNER JOIN
yourTable
ON yourTable.end_number >= yourList.number
AND yourTable.start_number <= yourList.number
AND yourTable.start_number >= yourList.number - max_gap
To achieve this I would store the value of max_gap in another table, and update it as the values in yourTable change.
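A minimal sketch of that idea, using a hypothetical one-row helper table (the names range_stats and max_gap are illustrative, not from the original post):
-- Keep the widest range in a helper table and refresh it whenever yourTable changes.
CREATE TABLE range_stats (max_gap NUMBER NOT NULL);

INSERT INTO range_stats (max_gap)
SELECT MAX(end_number - start_number) FROM yourTable;

SELECT yourTable.*
FROM yourList
CROSS JOIN range_stats
INNER JOIN yourTable
    ON  yourTable.end_number   >= yourList.number
    AND yourTable.start_number <= yourList.number
    AND yourTable.start_number >= yourList.number - range_stats.max_gap;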
You will want to create a temporary table to hold your numbers, if the numbers aren't already in one. Then it becomes relatively simple:
SELECT DISTINCT mt.ID FROM MyTable mt
INNER JOIN TempTable tt --your list of numbers
ON tt.number Between mt.start_number and mt.end_number
To create the table based on an array of passed values, you can use table definitions in your procedure. I'm light on Oracle syntax and don't have TOAD handy, but you should be able to get something like this to work:
CREATE OR REPLACE PROCEDURE FindIdsFromList
AS
    -- Note: joining a PL/SQL collection with TABLE() may require the collection
    -- type to be created at schema level (CREATE TYPE ... AS TABLE OF NUMBER),
    -- depending on your Oracle version.
    TYPE NumberList IS TABLE OF NUMBER;
    myNumberList NumberList := NumberList(50, 99, 150);
    myIds        NumberList;
BEGIN
    SELECT DISTINCT mt.ID
    BULK COLLECT INTO myIds
    FROM MyTable mt
    INNER JOIN TABLE(myNumberList) nt --your list of numbers
        ON nt.COLUMN_VALUE BETWEEN mt.start_number AND mt.end_number;
END;