I have two tables, Table1 and Table2. I need to replace a string, or a series of comma-separated strings, in Table1 using the lookup values held in Table2.
I tried the following query, but with no luck:
TableNew: Iif(Instr([Table1.ColumnX1],[Table2.ColumnY1],Replace([Table1.ColumnX1],[Table2.ColumnY1],[Table2.ColumnY2]),[Table1.ColumnX1])
What I wanted to achieve is this: in Table1, ColumnX1 contains:
A,B,C,1,2,3,4,D,E,F,5,6
Then in Table2 I have:
+----------+-----------+
| ColumnY1 | ColumnY2 |
+----------+-----------+
| A | Z |
| B | Y |
| C | X |
| D | W |
| E | V |
| F | U |
+----------+-----------+
After running the query, the result should be:
Z,Y,X,1,2,3,4,W,V,U,5,6
I would like this to run for each row in Table1.
Thanks in advance.
You can use a query such as the following to modify the values held by Table1:
update table1 inner join table2 on instr(1, table1.columnx1, table2.columny1) > 0
set table1.columnx1 = replace(table1.columnx1, table2.columny1, table2.columny2)
Note that the join used in the above query cannot be displayed by the MS Access query designer; however, it is valid SQL that the JET database engine used by MS Access will execute successfully.
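If you want to check the substitutions before committing them, a hedged sketch of a preview query (the same join condition, just as a SELECT) might look like this; note that a Table1 row matching several Table2 rows appears once per match, each result row showing only that one substitution applied:
select table1.columnx1,
       replace(table1.columnx1, table2.columny1, table2.columny2) as previewvalue
from table1 inner join table2 on instr(1, table1.columnx1, table2.columny1) > 0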
I'm honestly not sure how to title this - so apologies if it is unclear.
I have two tables I need to compare. One table contains tree names and nodes that belong to that tree. Each Tree_name/Tree_node combo will have its own line. For example:
Table: treenode
| TREE_NAME | TREE_NODE |
|-----------|-----------|
| 1 | A |
| 1 | B |
| 1 | C |
| 1 | D |
| 1 | E |
| 2 | A |
| 2 | B |
| 2 | D |
| 3 | C |
| 3 | D |
| 3 | E |
| 3 | F |
I have another table that contains names of queries and what tree_nodes they use. Example:
Table: queryrecord
| QUERY | TREE_NODE |
|---------|-----------|
| Alpha | A |
| Alpha | B |
| Alpha | D |
| BRAVO | A |
| BRAVO | B |
| BRAVO | D |
| CHARLIE | A |
| CHARLIE | B |
| CHARLIE | F |
I need to create an SQL query where I input the QUERY name, and it returns any TREE_NAME that includes all the nodes associated with that query. So if I input 'ALPHA', it would return TREE_NAME 1 & 2. If I ask for CHARLIE, it would return nothing.
I only have read access, and don’t believe I can create temp tables, so I’m not sure if this is possible. Any advice would be amazing. Thank you!
You can use group by and having as follows:
select t.tree_name
from tree_node t
join query_record q
  on t.tree_node = q.tree_node
where q.query = 'ALPHA'
group by t.tree_name
having count(distinct t.tree_node)
     = (select count(distinct q.tree_node) from query_record q where q.query = 'ALPHA');
Using an IN condition (a semi-join, which saves time over a join):
with prep (tree_node) as (select tree_node from queryrecord where query = :q)
select tree_name
from treenode
where tree_node in (select tree_node from prep)
group by tree_name
having count(*) = (select count(*) from prep)
;
:q in the prep subquery (in the with clause) is the bind variable to which you will assign the various QUERY values at runtime.
EDIT
I don't generally set up the test case on online engines; but in a comment below this answer, the OP said the query didn't work for him. So, I set up the example on SQLFiddle, here:
http://sqlfiddle.com/#!4/b575e/2
A couple of notes. For some reason, SQLFiddle thinks table names should be at most eight characters, so I had to change the second table name to queryrec (instead of queryrecord); I changed the name in the query too, of course. Second, I don't know how to supply bind values on SQLFiddle, so I hard-coded the name 'Alpha'. (Note also that in the OP's sample data this query value is not capitalized, while the other two are; text values in SQL are case-sensitive, so one should pay attention when testing.)
You can do this with a join and aggregation. The trick is to count the number of nodes in query_record before joining:
select qr.query, t.tree_name
from (select qr.*,
count(*) over (partition by query) as num_tree_node
from query_record qr
) qr join
tree_node t
on t.tree_node = qr.tree_node
where qr.query = 'ALPHA'
group by qr.query, t.tree_name, qr.num_tree_node
having count(*) = qr.num_tree_node;
Here is a db<>fiddle.
I know this might sound weird, but after executing a Teradata query I am getting a result like this:
+------+--------+
|  ID  |  Name  |
+------+--------+
| 1007 | Raj    |
|      |        |
| 1001 | Sanjib |
| 1008 | Suman  |
| 1004 | Mohan  |
+------+--------+
The second row is just blank. (Sorry, I could not format it properly, but I hope you get the point.) The query is pretty simple:
SELECT DISTINCT column_names
FROM table1
FULL OUTER JOIN table2 ON table1.ID = table2.ID;
I do not have access to its DDL statements.
There was also another scenario where this same table appeared in the output without any rows at all, just the column names.
I am using SQL WORKBENCH/J. Am I missing something?
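One thing worth checking (a hedged guess, not a confirmed diagnosis): a FULL OUTER JOIN returns the unmatched rows of each table with the other table's columns set to NULL, so if the selected columns all come from one side, the other side's unmatched rows show up as blank, and DISTINCT collapses them into a single blank row. A sketch to see which IDs exist on only one side, using the placeholder names from the query above:
SELECT table1.ID AS t1_id, table2.ID AS t2_id
FROM table1
FULL OUTER JOIN table2 ON table1.ID = table2.ID
WHERE table1.ID IS NULL OR table2.ID IS NULL;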
I am currently working with an H2 database and I have written the following SQL; however, the H2 database engine does not support NOT IN being performed on a multi-column sub-query.
DELETE FROM AllowedParam_map
WHERE (AllowedParam_map.famid,AllowedParam_map.paramid) NOT IN (
SELECT famid,paramid
FROM macros
LEFT JOIN macrodata
ON macros.id != macrodata.macroid
ORDER BY famid)
Essentially, I want to remove rows from AllowedParam_map wherever it has the same combination of famid and paramid as the sub-query.
Edit: To clarify, the sub-query is specifically trying to find famid/paramid combinations that are NOT present in macrodata, in an effort to weed out the allowedparam_map, hence the ON macros.id != macrodata.macroid. I'm also terrible at SQL so this might be completely the wrong way to do it.
Edit 2: Here is some more info about the pertinent schema:
Macros
| ID | NAME | FAMID |
|----|------|-------|
| 0  | foo  | 1     |
| 1  | bar  | 1     |
| 2  | baz  | 1     |
MacroData
| ID | MACROID | PARAMID | VALUE |
|----|---------|---------|-------|
| 0  | 0       | 1       | 1024  |
| 1  | 0       | 2       | 200   |
| 2  | 0       | 3       | 89.85 |
AllowedParam_Map
| ID | FAMID | PARAMID |
|----|-------|---------|
| 0  | 1     | 1       |
| 1  | 1     | 2       |
| 2  | 1     | 3       |
| 3  | 1     | 4       |
The parameters are allowed on a per-family basis. Notice how the AllowedParam_map table contains an entry for famid=1 and paramid=4, even though macro 0, aka "foo", does not have an entry for paramid=4. If we expand this, there might be another famid=1 macro that has paramid=4, but we can't be sure. I want to cull from the AllowedParam_map table any unused parameters, based on the data in the macrodata table.
IN and NOT IN can always be replaced with EXISTS and NOT EXISTS.
Some points first:
You are using an ORDER BY in your subquery, which is of course superfluous.
You are outer-joining a table, which should have no effect when asking for existence. So either you need to look up a field in the outer-joined table, in which case you should inner-join it, or you don't, in which case you should remove it from the query. (It is odd to join every non-related record with macros.id != macrodata.macroid anyway.)
At first you said that both famid and paramid reside in the table macros, in which case you could simply remove the outer join to macrodata from your query. But as you now say that famid is in the table macros and paramid is in the table macrodata, and you want to look up pairs that exist in AllowedParam_map but not in the aforementioned tables, you seem to be looking for a simple inner join:
DELETE FROM AllowedParam_map
WHERE NOT EXISTS
(
SELECT *
FROM macros m
JOIN macrodata md ON md.macroid = m.id
WHERE m.famid = AllowedParam_map.famid
AND md.paramid = AllowedParam_map.paramid
);
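If you want to see what would be removed before actually deleting anything, the same condition works as a plain SELECT (a hedged sketch; with the sample data above it should list only the famid=1 / paramid=4 row):
SELECT *
FROM AllowedParam_map
WHERE NOT EXISTS
(
    SELECT *
    FROM macros m
    JOIN macrodata md ON md.macroid = m.id
    WHERE m.famid = AllowedParam_map.famid
    AND md.paramid = AllowedParam_map.paramid
);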
You can use not exists instead:
DELETE FROM AllowedParam_map m
WHERE NOT EXISTS (SELECT 1
FROM macros LEFT JOIN
macrodata
ON macros.id <> macrodata.macroid -- I strongly suspect this should be =
WHERE m.famid = ?.famid and m.paramid = ?.paramid -- add the appropriate table aliases
);
Notes:
I strongly suspect the <> should be =. <> does not make sense in this context.
Replace the ? with the appropriate table alias.
NOT EXISTS is better than NOT IN anyway. It does what you expect if one of the values is NULL.
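The NULL pitfall is easy to demonstrate with a minimal sketch (this should run as-is in H2, no table needed):
-- 2 is clearly not among (1, NULL), yet NOT IN evaluates to UNKNOWN rather than
-- TRUE because of the NULL, so the row would be silently filtered out.
-- NOT EXISTS does not have this trap.
SELECT CASE WHEN 2 NOT IN (1, NULL) THEN 'kept' ELSE 'filtered' END;
-- returns 'filtered'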
Is this correct SQL:
UPDATE T1alias
SET T1alias.Row2 = T2alias.Row2
FROM
(
T1 AS T1alias
INNER JOIN
T2 AS T2alias
ON T1alias.Row1 = T2alias.Row1
)
This query seems to return the right results, but I don't understand why.
I mean, the FROM clause refers to a completely different data set than the table T1 that has to be updated.
For example:
T1                       T2
----------------------   ----------------------
| Row1 | Row2 | Row3 |   | Row1 | Row2 | Row3 |
----------------------   ----------------------
|  1   |  2   |  3   |   |  1   |  7   |  8   |
----------------------   ----------------------
|  4   |  5   |  6   |   |  9   |  10  |  11  |
----------------------   ----------------------
T1 INNER JOIN T2 ON T1alias.Row1 = T2alias.Row1
-------------------------------------------------------------
| T1.Row1 | T1.Row2 | T1.Row3 | T2.Row1 | T2.Row2 | T2.Row3 |
-------------------------------------------------------------
|    1    |    2    |    3    |    1    |    7    |    8    |
-------------------------------------------------------------
So how can I UPDATE T1 from the joined Table?
In my opinion these are completely different data sets.
I would understand the SQL query if it looked like this:
UPDATE T1alias
SET T1alias.Row2 = T2alias.Row2
FROM
(
T1 AS T1alias
INNER JOIN
T2 AS T2alias
ON T1alias.Row1 = T2alias.Row1
) AS T1T2JoinedAlias
WHERE T1T2JoinedAlias.Row1 = T1alias.Row1
Could someone explain this to me, please?
(I'm working on Microsoft SQL Server 2008 R2.)
If you look at the execution plan of your SQL statement, you will understand what is going on.
In my case, the Query Optimiser does a scan of both tables specified in the FROM clause and retrieves the rows that fulfil the inner join.
These rows are then passed along the chain to the Table Update physical operator, which is told to perform an update on T1 (you tell it to do this by writing "UPDATE T1alias" in your query above; you also tell it which field(s) to update with your SET clause).
The Query Optimiser tends to choose the best execution plan for your query after the algebrizer has parsed and bound it, so whether you get the same execution plan as me or not will depend on a number of factors, including whether you have indexes on the tables.
Hope this helps.
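As a quick sanity check, you can preview exactly which rows the UPDATE will touch by running the join as a plain SELECT first (a hedged sketch against the sample tables above):
SELECT T1alias.Row1,
       T1alias.Row2 AS CurrentValue,
       T2alias.Row2 AS NewValue
FROM T1 AS T1alias
INNER JOIN T2 AS T2alias
    ON T1alias.Row1 = T2alias.Row1;
-- With the sample data this returns a single row (1, 2, 7), so only the first
-- row of T1 has Row2 changed from 2 to 7 by the UPDATE.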
I have a table that contains URL strings, e.g.:
/A/B/C
/C/E
/C/B/A/R
Each string is split into tokens, where the separator in my case is '/'. Then I assign an integer value to each token and put them into a dictionary (a different database table), e.g.:
A : 1
B : 2
C : 3
E : 4
D : 5
G : 6
R : 7
My problem is to find those rows in the first table which contain a given sequence of tokens. An additional problem is that my input is a sequence of ints, i.e. I have
3, 2
and I'd like to find following rows
/A/B/C
/C/B/A/R
How can I do this in an efficient way? By this I mean: how should I design a proper database structure?
I use PostgreSQL; the solution should work well for 2 million rows in the first table.
To clarify my example - I need both 'B' AND 'C' to be in the URL. Also 'B' and 'C' can occur in any order in the URL.
I need an efficient SELECT; the INSERT does not have to be efficient. I do not have to do all the work in SQL, if that changes anything.
Thanks in advance
I'm not sure how to do this, but I'm just giving you some ideas that might be useful. You already have your initial table. You process it and create the token table:
+------------+---------+
| TokenValue | TokenId |
+------------+---------+
| A | 1 |
| B | 2 |
| C | 3 |
| E | 4 |
| D | 5 |
| G | 6 |
| R | 7 |
+------------+---------+
That's OK for me. Now, what I would do is create a new table (OrderedTokens) that matches each URL of the original table with the tokens from the token table; a sketch of how such a table could be populated follows the example below. Something like:
+-------+---------+---------+
| UrlID | TokenId | AnOrder |
+-------+---------+---------+
| 1 | 1 | 1 |
| 1 | 2 | 2 |
| 1 | 3 | 3 |
| 2 | 5 | 1 |
| 2 | 2 | 2 |
| 2 | 1 | 3 |
| 2 | 7 | 4 |
| 3 | 3 | 1 |
| 3 | 4 | 2 |
+-------+---------+---------+
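If it helps, here is a hedged sketch (PostgreSQL 9.4+) of how OrderedTokens could be built; the source table urls(urlId, url) and the Tokens(tokenValue, tokenId) names are assumptions, so adjust them to your schema:
-- Split each URL into tokens, keep their position, and map them to token ids.
CREATE TABLE OrderedTokens AS
SELECT u.urlId,
       t.tokenId,
       s.anOrder
FROM urls u
CROSS JOIN LATERAL unnest(string_to_array(trim(leading '/' from u.url), '/'))
     WITH ORDINALITY AS s(tokenValue, anOrder)
JOIN Tokens t ON t.tokenValue = s.tokenValue;
-- An index on tokenId helps the HAVING-based lookups below.
CREATE INDEX ON OrderedTokens (tokenId);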
This way you can even recreate your original table as long as you use the order field. For example:
select string_agg(t.tokenValue, '/' order by ot.anOrder) as OriginalUrl
from OrderedTokens as ot
join tokens t on t.tokenId = ot.tokenId
group by ot.urlId
The previous query would result in:
+-------------+
| OriginalUrl |
+-------------+
| A/B/C |
| D/B/A/R |
| C/E |
+-------------+
So, you don't even need your original table anymore. If you want to get Urls that have any of the provided token ids (in this case B OR C), you should use this:
select string_agg(t.tokenValue, '/' order by ot.anOrder) as OriginalUrl
from OrderedTokens as ot
join Tokens t on t.tokenId = ot.tokenId
group by urlid
having count(case when ot.tokenId in (2, 3) then 1 end) > 0
This results in:
+-------------+
| OriginalUrl |
+-------------+
| A/B/C | => It has both B and C
| D/B/A/R | => It has only B
| C/E | => It has only C
+-------------+
Now, if you want to get all Urls that have BOTH ids, then try this:
select string_agg(t.tokenValue, '/' order by ot.anOrder) as OriginalUrl
from OrderedTokens as ot
join Tokens t on t.tokenId = ot.tokenId
group by urlid
having count(distinct case when ot.tokenId in (2, 3) then ot.tokenId end) = 2
Include in the count all the ids you want to filter on, and then compare that count to the number of ids you added. The previous query will result in:
+-------------+
| OriginalUrl |
+-------------+
| A/B/C | => It has both B and C
+-------------+
The funny thing is that none of the solutions I provided results in your expected result. So, have I misunderstood your requirements or is the expected result you provided wrong?
Let me know if this is correct.
It really depends on what you mean by efficient. It will be a trade-off between query performance and storage.
If you want to efficiently store this information, then your current approach is appropriate. You can query the data by doing something like this:
SELECT DISTINCT
u.url
FROM
urls u
INNER JOIN
dictionary d
ON
d.id IN (3, 2)
AND u.url ~ (E'\\m' || d.url_component || E'\\m') -- parentheses keep the concatenation from binding to ~
This query will take some time, as it will be required to do a full table scan, and perform regex logic on each URL. It is, however, very easy to insert and store data.
If you want to optimize for query performance, though, you can create a reference table of the URL components; it would look something like this:
/A/B/C A
/A/B/C B
/A/B/C C
/C/E C
/C/E E
/D/B/A/R D
/D/B/A/R B
/D/B/A/R A
/D/B/A/R R
You can then create a clustered index on this table, on the URL component. This query would retrieve your results very quickly:
SELECT DISTINCT
u.full_url
FROM
url_components u
INNER JOIN
dictionary d
ON
d.id IN (3, 2)
AND u.url_component = d.url_component
Basically, this approach moves the complexity of the query up front. If you are doing few inserts, but lots of queries against this data, then that is appropriate.
Creating this URL component table is trivial, depending on what tools you have at your disposal. A simple awk script could work through your 2M records in a minute or two, and the subsequent copy back into the database would be quick as well. If you need to support real-time updates to this table, I would recommend a non-SQL solution: whatever your app is coded in could use regular expressions to parse the URL and insert the components into the component table. If you are limited to using the database, then an insert trigger could fulfill the same role, but it will be a more brittle approach.