I'm trying to create a query with an IN clause that matches one table against another table that has a composite key. For reference, the first table maps a composite key (three columns) to OtherTable (which contains an OtherTableId object).
Example:
select t1
from Table t1
where t1.otherTable in :listOfOtherTables;
...
List<OtherTable> listOfOtherTables = Arrays.asList(new OtherTable(...), ...);
query.setParameter("listOfOtherTables", listOfOtherTables);
I searched for how to do this and it looked pretty straightforward. In fact, I already use this successfully with strings, like this:
select t1
from Table t1
where t1.states in :listOfStates;
...
List<String> listOfStates = Arrays.asList("A", "B");
query.setParameter("listOfStates", listOfStates);
When I run the first example (by the way, the project uses Spring 4.2 + WebLogic 12.1.2), the query is translated for execution in Oracle as:
select t1
from Table t1
where (NULL, NULL, NULL) IN (("value1", "value2", "value3"), ("value1", "value2", "value3"), ...);
Where the column names are supposed to appear, NULL shows up instead.
Can anyone help me?
PS: I have tried this also:
select t1
from Table t1
where t1.otherTable in (:otherTableObject1, :otherTableObject2, :otherTableObject3);
That doesn't work either.
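For reference, a workaround sketch that avoids the row-value IN entirely is to expand the list into an OR of AND groups, one per element (the col1/col2/col3 field names on the embedded OtherTableId are hypothetical here):
select t1
from Table t1
where (t1.otherTable.id.col1 = :c1_0 and t1.otherTable.id.col2 = :c2_0 and t1.otherTable.id.col3 = :c3_0)
   or (t1.otherTable.id.col1 = :c1_1 and t1.otherTable.id.col2 = :c2_1 and t1.otherTable.id.col3 = :c3_1)
Each list element then binds three parameters, so the disjunction has to be concatenated dynamically for a variable-length list.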
I have two tables, one of which contains rules for the other:
create table t1(id int, query string)
create table t2(id int, place string)
insert into t1 values (1,'id < 10')
insert into t1 values (2,'id == 10')
And the values in t2 are
insert into t2 values (11,'Nevada')
insert into t2 values (20,'Texas')
insert into t2 values (10,'Arizona')
insert into t2 values (2,'Abegal')
I need to select from the second table according to the rule stored in the first table's column, like:
select * from t2 where {query}
or
with x(query)
as
(select c2 from test)
select * from test where query;
but neither approach works.
There are a couple of problems with storing criteria in a table like this:
First, as has already been noted, you'll likely have to resort to dynamic SQL, which can get messy, and limits how you can use it.
It's going to be problematic (to say the least) to validate and parse your criteria. What if someone writes a rule of [id] *= 10, or [this_field_doesn't_exist] = blah?
If you're just storing potential values for your [id] column, one solution would be to have your t1 (storing your queries) include a min value and max value, like this:
CREATE TABLE t1
(
[id] INT IDENTITY(1,1) PRIMARY KEY,
min_value INT NULL,
max_value INT NULL
)
Note that both the min and max values can be null. Your provided criteria would then be expressed as this:
INSERT INTO t1
([id], min_value, max_value)
VALUES
(1, NULL, 10),
(2, 10, 10)
Note that I've explicitly listed which attributes we're inserting into, as you should also be doing (to prevent issues with attributes being added/modified down the line).
A null value on min_value means no lower limit; a null max_value means no upper limit.
To then get the results from t2 that meet your t1 criteria, simply do an INNER JOIN:
SELECT t2.*
FROM t2
INNER JOIN t1 ON
(t2.id <= t1.max_value OR t1.max_value IS NULL)
AND
(t2.id >= t1.min_value OR t1.min_value IS NULL)
Note that this returns the t2 rows that match at least one of your rules, and a row that satisfies several rules will appear once per match (so you may want SELECT DISTINCT). If you need more complex logic (for example, show records that meet Rules 1, 2 and 3, or meet Rule 4), you'll likely have to resort to dynamic SQL (or at the very least some ugly JOINs).
As stated in a comment, however, you want to have more complex rules, which might mean you have to end up using dynamic SQL. However, you still have the problem of validating and parsing your rule. How do you handle cases where a user enters an invalid rule?
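If you do end up with dynamic SQL, here is a minimal sketch (SQL Server syntax; it assumes every rule stored in t1.query is trusted and already valid T-SQL, so a rule like id == 10 would first need translating to id = 10):
DECLARE @sql NVARCHAR(MAX);

-- OR all the stored rules together (STRING_AGG needs SQL Server 2017+)
SELECT @sql = N'SELECT * FROM t2 WHERE '
            + STRING_AGG('(' + query + ')', ' OR ')
FROM t1;

EXEC sp_executesql @sql;
Again, this is only as safe as the contents of t1.query, which is exactly the validation problem described above.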
A better solution might be to store your rules in a format that can easily be parsed and validated. For example, come up with an XML schema that defines a valid rule/criterion. Then, your Rules table would have a rule XML attribute, tied to that schema, so users could only enter valid rules. You could then either shred that XML document, or create the SQL client-side to come up with your query.
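As a rough sketch of that idea (the XML format and the use of SQL Server's xml value() method are just one possible choice):
CREATE TABLE rules
(
    [id] INT IDENTITY(1,1) PRIMARY KEY,
    rule XML NOT NULL
);

INSERT INTO rules (rule) VALUES ('<rule field="id" op="lt" value="10"/>');

-- Shred the XML back into predicates you control
SELECT t2.*
FROM t2
INNER JOIN rules r
    ON (r.rule.value('(/rule/@op)[1]', 'VARCHAR(10)') = 'lt'
        AND t2.id < r.rule.value('(/rule/@value)[1]', 'INT'))
    OR (r.rule.value('(/rule/@op)[1]', 'VARCHAR(10)') = 'eq'
        AND t2.id = r.rule.value('(/rule/@value)[1]', 'INT'));
Because only known field/op/value combinations are ever turned into SQL, a malformed rule simply matches nothing instead of breaking the query.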
I found the answer myself, and I am putting it below.
I have used Python to do the job (as Snowflake does not support dynamic queries).
I believe one can use the same approach for other DBs (tedious, but doable).
Setting up the configuration to connect:
import json
import snowflake.connector

# Load connection settings from a JSON config file
CONFIG_PATH = "/root/config/snowflake.json"
with open(CONFIG_PATH) as f:
    config = json.load(f)

# Snowflake credentials
snf_user = config['snowflake']['user']
snf_pwd = config['snowflake']['pwd']
snf_account = config['snowflake']['account']
snf_region = config['snowflake']['region']
snf_role = config['snowflake']['role']

ctx = snowflake.connector.connect(
    user=snf_user,
    password=snf_pwd,
    account=snf_account,
    region=snf_region,
    role=snf_role
)
I used two cursors because, inside the loop, we don't want to open a second (recursive) connection:
cs = ctx.cursor()
cs1 = ctx.cursor()

# Fetch the stored rule expressions
query = "select c2 from test"
cs.execute(query)

# For each rule, build and run a dynamic query against the data table
for (x,) in cs:
    y = "select * from test1 where {0}".format(x.replace("'", ""))
    cs1.execute(y)
    for y1 in cs1:
        print('{0}'.format(y1))
And boom, done.
I have 2 BigQuery tables with nested columns. I need to update all the columns in table1 whenever table1.value1 = table2.value; both tables also hold a huge amount of data.
I could update a single nested column with a static value like below:
#standardSQL
UPDATE `ck.table1`
SET promotion_id = ARRAY(
SELECT AS STRUCT * REPLACE (100 AS PromotionId ) FROM UNNEST(promotion_id)
)
But when I try to reuse the same approach to update multiple columns based on table2's data, I get exceptions.
I am trying to update table1 with table2's data whenever table1.value1 = table2.value, across all the nested columns.
As of now, both tables have a similar schema.
I need to update all the columns in table1 whenever table1.value1=table2.value
... both tables are having a similar schema
I assume that by similar you meant the same.
Below is for BigQuery Standard SQL
You can use the query below to get the combined result and save it back to table1, either via a destination table or with the CREATE OR REPLACE TABLE syntax:
#standardSQL
SELECT AS VALUE IF(value IS NULL, t1, t2)
FROM `project.dataset.table1` t1
LEFT JOIN `project.dataset.table2` t2
ON value1 = value
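For example, to save the combined result back in place with CREATE OR REPLACE TABLE (a sketch; the project/dataset names are the same placeholders as above):
CREATE OR REPLACE TABLE `project.dataset.table1` AS
SELECT AS VALUE IF(value IS NULL, t1, t2)
FROM `project.dataset.table1` t1
LEFT JOIN `project.dataset.table2` t2
ON value1 = value;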
I have not tried this approach with UPDATE syntax - but you can try and let us know :o)
There is a DB2 database with two tables. The first one, table1, has an auto-increment column ID, which serves as the foreign key for table2.
I am writing an HTML generator for SQL queries. With some input parameters, it generates a query or multiple queries. It is not connected to the database.
What I need is to get that auto-increment field and use it in the next queries.
So basically, the scenario is:
insert into table1;
select autogenerated field ID;
insert into table2 using that ID;
insert into table2 using that ID;
...some more similar inserts...
insert into table2 using that ID;
And all of that SQL should be generated and then run as a single SQL script.
I was thinking about something like this:
SELECT ID FROM FINAL TABLE (INSERT INTO Table1 (t1column1, t1column2, etc.)
VALUES (t1value1, t1value2, etc.))
But I don't know how I can write the result into a variable so I could use it in the next queries like this:
INSERT INTO Table2 (foreignKeyCol, t2column1, t2column2, etc.)
VALUES ($ID, t2value1, t2value2, etc.)
I could just paste that select instead of $ID, but the second query can be used several times with the same $ID and different values.
EDIT: DB2 10.5 on Linux.
You can chain several inserts together using CTEs, like so:
WITH idcte (id) as (
SELECT ID FROM FINAL TABLE (
INSERT INTO Table1 (t1column1, t1column2, etc.)
VALUES (t1value1, t1value2, etc.)
)
),
ins1 (id) as (
SELECT foreignKeyCol FROM FINAL TABLE (
INSERT INTO Table2 (foreignKeyCol, t2column1, t2column2, etc.)
SELECT id, t2value1, t2value2, etc.
FROM idcte
)
),
-- more CTEs
SELECT foreignKeyCol FROM FINAL TABLE (
-- your last INSERT ... SELECT FROM
)
Essentially you will have to wrap each INSERT into a SELECT FROM FINAL TABLE for this to work.
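As a minimal concrete version of the same pattern (the parent/child tables and their columns are made up for illustration):
WITH new_parent (id) AS (
    SELECT id FROM FINAL TABLE (
        INSERT INTO parent (name) VALUES ('example')
    )
)
SELECT id FROM FINAL TABLE (
    INSERT INTO child (parent_id, detail)
    SELECT id, 'first child' FROM new_parent
);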
Alternatively, you can use a global variable to keep the ID value:
CREATE VARIABLE myNewId INT;
SET myNewId = (SELECT ID FROM FINAL TABLE (
INSERT INTO Table1 (t1column1, t1column2, etc.)
VALUES (t1value1, t1value2, etc.)
));
INSERT INTO Table2 (foreignKeyCol, t2column1, t2column2, etc.)
VALUES (myNewId, t2value1, t2value2, etc.);
DROP VARIABLE myNewId;
This assumes a recent version of Db2 for LUW.
I have two tables in my SQLite Database (dummy names):
Table 1: FileID F_Property1 F_Property2 ...
Table 2: PointID ForeignKey(fileid) P_Property1 P_Property2 ...
The entries in Table2 all have a foreign key column that references an entry in Table1.
I would now like to select entries from Table2 where, for example, F_Property1 of the referenced file in Table1 has a specific value.
I tried something naive:
select * from Table2 where fileid=(select FileID from Table1 where F_Property1 > 1)
Now this actually works... kind of. It selects a correct file ID from Table1 and returns entries from Table2 with this ID. But it only uses the first returned ID. What I need it to do is basically connect the returned IDs from the inner select with OR, so it returns data for all the IDs.
How can I do this? I think it is some kind of cross-table query, like what is asked here: What is the proper syntax for a cross-table SQL query? But those answers contain no explanation of what they are actually doing, so I'm struggling with any implementation.
They are using JOIN statements, but wouldn't this mix entries from Table1 and Table2 together while only checking for matching IDs in both tables? At least that is how I understand this: http://www.codeproject.com/Articles/33052/Visual-Representation-of-SQL-Joins
As you may have noticed from the style, I'm very new to using databases in general, so please forgive me if not everything is clear about what I want. Please leave a comment and I will try to improve the question if necessary.
The = operator compares a single value against another, so it is assumed that the subquery returns only a single row.
To check whether a (column) value is in a set of values, use IN:
SELECT *
FROM Table2
WHERE fileid IN (SELECT FileID
FROM Table1
WHERE F_Property1 > 1)
The way joins work is not by "mixing" the data, but by combining the two tables based on the key.
In your case (I am assuming the key field in Table 1 is unique), if you join those two tables on the primary key field, you will end up with all the entries in table2 plus all corresponding fields from table1. If you were doing this:
select * from table1, table2 where table1.fieldID=table2.foreignkey;
then, providing your key fields are set up right, you will end up with the following:
PointID ForeignKey(fileid) P_Property1 P_Property2 FileID F_Property1 F_Property2
The field values from table1 would be from matching rows.
Now, if you do this:
select table2.* from table1, table2 where
table1.fieldID = table2.foreignkey and F_Property1 > 1;
This would essentially get the same set of records, but will only show the columns from the second table, and only those rows whose referenced file satisfies the condition on the first one.
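The same query in explicit JOIN syntax (equivalent result, and arguably easier to read) would be:
select table2.*
from table1
inner join table2 on table1.fieldID = table2.foreignkey
where F_Property1 > 1;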
Hope this helps :)
If I understood your question correctly, this will get the job done:
Select t2.*
from table1 t1
inner join table2 t2 on t2.id = t1.id
where t1.Prop = 'SomeValue'
I'm attempting to select a table of data and insert it into another table with similar column names (it's essentially duplicate data). The current syntax is as follows:
INSERT INTO TABLE1 (id, id2, col1, col2)
SELECT similiarId, similiarId2, similiarCol1, similiarCol2
FROM TABLE2
The problem I have is generating unique key fields (declared as integers) for the newly inserted records. I can't reuse table2's keys, as table1 has existing data and the insert will fail on duplicate key values.
I cannot change the table schema, and these are custom ID columns, not generated automatically by the DB.
Does table1 have an auto-increment on its id field? If so, can you lose similiarId from the insert and let the auto-increment take care of unique keys?
INSERT INTO TABLE1 (id2, col1, col2) SELECT similiarId2, similiarCol1, similiarCol2
FROM TABLE2
As per your requirement, you need to write your query like this:
INSERT INTO TABLE1 (id, id2, col1, col2)
SELECT (ROW_NUMBER( ) OVER ( ORDER BY ID ASC ))
+ (SELECT MAX(id) FROM TABLE1) AS similiarId
, similiarId2, similiarCol1, similiarCol2
FROM TABLE2
What have I done here?
I added ROW_NUMBER(), which starts from 1, plus the MAX() of the destination table's ID, so the generated keys continue after the existing ones.
For a better explanation, see this SQLFiddle.
I'm not sure if I understand you correctly:
You want to copy all the data from TABLE2 but make sure that TABLE2.similiarId is not already in TABLE1.id. Maybe this is the solution for your problem:
DECLARE #idmax INT
SELECT #idmax = MAX(id) FROM TABLE1
INSERT INTO TABLE1 (id, id2, col1, col2)
SELECT similiarId + #idmax, similiarId2, similiarCol1, similiarCol2
FROM TABLE2
Now the insert will not fail with a primary key violation, because every inserted id will be greater than any id that was already there.
If the id field is defined as an auto-id and you leave it out of the insert statement, then SQL will generate unique ids from the available pool.
In SQL Server we have the function ROW_NUMBER, and if I have understood you correctly the following code will do what you need:
INSERT INTO TABLE1 (id, id2, col1, col2)
SELECT (ROW_NUMBER( ) OVER ( ORDER BY similiarId2 ASC )) + 6 AS similiarId,
similiarId2, similiarCol1, similiarCol2
FROM TABLE2
ROW_NUMBER returns the number of each row, and you can add a "magic value" to it to make those values different from the current max ID of TABLE1. Let's say your current max ID is 6; then adding 6 to each result of ROW_NUMBER will give you 7, 8, 9, and so on. This way you won't duplicate TABLE1's existing primary key values.
I asked Google, and it tells me that Sybase has the ROW_NUMBER function too (http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.help.sqlanywhere.12.0.1/dbusage/ug-olap-s-51258147.html), so I think you can try it.
If you want to make an identical table, why not simply use the (quick and dirty) SELECT INTO method?
SELECT * INTO TABLE2
FROM TABLE1
Hope this helps.
Make table1's ID an IDENTITY column if it is not a custom ID.
or
Create a new primary key in table1 and make it an IDENTITY column; you can keep the previous IDs as they are (but no longer as the primary key).
Your best bet may be to add an additional column on Table2 for Table1.Id. This way you keep both sets of keys.
(If you are busy with a data merge, retaining Table1.Id may be important for any foreign keys which may still reference Table1.Id - you will then need to 'fix up' foreign keys in tables referencing Table1.Id, which now need to reference the applicable key in table 2).
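In the question's direction (TABLE2 rows being merged into TABLE1), a sketch of that idea could look like this (the old_id column and the SQL Server syntax are assumptions):
-- Hypothetical extra column to retain each source row's original key
ALTER TABLE TABLE1 ADD old_id INT NULL;
GO

INSERT INTO TABLE1 (id, id2, col1, col2, old_id)
SELECT (SELECT MAX(id) FROM TABLE1) + ROW_NUMBER() OVER (ORDER BY similiarId),
       similiarId2, similiarCol1, similiarCol2,
       similiarId   -- original TABLE2 key, kept for fixing up references later
FROM TABLE2;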
If you need your 2nd table to keep the same values as the 1st table, then do not apply auto-increment to the 2nd table.
If you have a large ID range, want something quick and easy, and don't care about the ID values:
An example with CONCAT (MySQL):
INSERT INTO session SELECT CONCAT('3000', id) AS id, cookieid FROM `session2`;
but you could also use REPLACE.
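For instance (a sketch; REPLACE is MySQL-specific, assumes id is the primary key, and silently overwrites any row whose key already exists):
REPLACE INTO session
SELECT CONCAT('3000', id) AS id, cookieid FROM `session2`;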