I have a database table of my own that I am trying to construct a query for. It seems simple enough, but I feel like I am messing up somewhere, because the results are not what they should be.
I basically have a table that is like the following:
Table: Data
Columns:
Row | ID | Profile | Import ID | Field ID | Product
1   | 5  | Null    | 5         | 60       | Can
2   | 0  | Null    | 5         | 65       | Hat
3   | 0  | Null    | 5         | 70       | Box
4   | 6  | Null    | 6         | 60       | Fish
I basically want to take the word "Hat" in row 2 and place it into the "Profile" column of row 1, replacing the null value there. I am doing this for multiple rows.
For each of those rows I want to set the "Profile" column equal to the "Product" column, but only where the "ID" value matches the "Import ID" and the "Field ID" is 65 specifically. In the example above, the "ID" 5 matches the "Import ID" 5, so I want to take the "Product" value "Hat" from the row where the "Field ID" is 65 and place it into the "Profile" column of the row where the ID is 5. My table has over 9000 rows, and 600 would have to be changed in this way, with various IDs needing various products inserted.
The result I would like would be:
Row | ID | Profile | Import ID | Field ID | Product
1   | 5  | Hat     | 5         | 60       | Can
2   | 0  | Null    | 5         | 65       | Hat
3   | 0  | Null    | 5         | 70       | Box
4   | 6  | Null    | 6         | 60       | Fish
I pray that makes sense...
My query was this
UPDATE 'Data'
SET 'Profile'='Product'
WHERE 'ID'='Import ID' AND 'Field ID'=65;
I have also tried a subquery
UPDATE 'Data'
SET 'Profile'= (SELECT 'Product' FROM Data WHERE 'Field ID'=65)
WHERE 'ID'='Import ID';
This did not work either, and I am just wondering if there is some logic I am missing. Thank you to anyone who can help; I have been up for a while trying to understand this...
You need to join the data; something like:
UPDATE d1
SET d1.Profile = d2.Product
FROM [Data] d1 -- destination
INNER JOIN [Data] d2 -- source
ON d2.[Import ID] = d1.[ID] AND d2.[Field ID] = 65
(note that the two ID columns are swapped in the join condition: the source row's [Import ID] is matched against the destination row's [ID])
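If your database doesn't support the UPDATE ... FROM join syntax (MySQL and older SQLite builds don't, for example), the same logic can be phrased as a correlated subquery. A minimal sketch using Python's sqlite3, with underscores instead of spaces in the column names (my renaming, to sidestep the quoting issue):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE Data
               (Row_ID INTEGER, ID INTEGER, Profile TEXT,
                Import_ID INTEGER, Field_ID INTEGER, Product TEXT)""")
cur.executemany("INSERT INTO Data VALUES (?, ?, ?, ?, ?, ?)", [
    (1, 5, None, 5, 60, "Can"),
    (2, 0, None, 5, 65, "Hat"),
    (3, 0, None, 5, 70, "Box"),
    (4, 6, None, 6, 60, "Fish"),
])

# Correlated-subquery equivalent of the UPDATE ... FROM join:
# copy Product from the row whose Import_ID matches this row's ID
# and whose Field_ID is 65; the EXISTS guard leaves other rows alone.
cur.execute("""
    UPDATE Data
    SET Profile = (SELECT d2.Product FROM Data d2
                   WHERE d2.Import_ID = Data.ID AND d2.Field_ID = 65)
    WHERE EXISTS (SELECT 1 FROM Data d2
                  WHERE d2.Import_ID = Data.ID AND d2.Field_ID = 65)
""")

rows = cur.execute("SELECT Row_ID, Profile FROM Data ORDER BY Row_ID").fetchall()
print(rows)  # [(1, 'Hat'), (2, None), (3, None), (4, None)]
```

Only row 1 is touched: it is the only row whose ID has a matching Import ID with Field ID 65.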
A couple of things to keep in mind when learning SQL:
It isn't a good idea to have spaces in column names. Although they might be easier to read, they make your queries more difficult: many databases don't allow them unquoted, and those that do have different ways of escaping such columns in queries.
To work around your problem, you can enclose the column name in backticks (`) or in square brackets ([ ]).
In any case, instead of a space, please consider an underscore.
With that in mind, you should also remember not to put column names in single quotes. Something like
SELECT 'Product' FROM Data WHERE 'Field ID'=65
would not work, for two reasons:
a. Selecting quoted text returns that quoted text. So if the WHERE clause matched two rows, you would get the text 'Product' back twice.
b. Here your WHERE clause compares the text 'Field ID' with the number 65, which is always false.
Hope that helps.
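Both failure modes are easy to see in a quick Python sqlite3 session (the sample table is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Double quotes are the standard-SQL way to name a column with a space.
cur.execute('CREATE TABLE Data ("Field ID" INTEGER, Product TEXT)')
cur.executemany("INSERT INTO Data VALUES (?, ?)",
                [(60, "Can"), (65, "Hat"), (70, "Box")])

# Single quotes make a string literal, not a column reference:
# you get the text 'Product' back once per row.
literal = cur.execute("SELECT 'Product' FROM Data").fetchall()
print(literal)  # [('Product',), ('Product',), ('Product',)]

# Quoting the identifier properly returns the actual column value.
actual = cur.execute('SELECT Product FROM Data WHERE "Field ID" = 65').fetchall()
print(actual)   # [('Hat',)]
```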
I'm currently stuck on a SQL issue (well, mainly because I can't find a way to google it and my SQL skills do not suffice to solve it myself).
I'm working on a system where documents are edited. When the editing process is finished, users mark the document as solved. In the MSSQL database, the corresponding row is not updated; instead, a new row is inserted. Thus, every document that has been processed has (or at least should have) multiple rows in the DB.
See the following situation:
ID | ID2 | AnotherCondition | Steps | Process | Solved
1  | 1   | yes              | Three | ATAT    | AF
2  | 2   | yes              | One   | ATAT    | FR
2  | 3   | yes              | One   | ATAT    | EG
2  | 4   | yes              | One   | ATAT    | AF
3  | 5   | no               | One   | ABAT    | AF
4  | 6   | yes              | One   | ATAT    | FR
5  | 7   | no               | One   | AVAT    | EG
6  | 8   | yes              | Two   | SATT    | FR
6  | 9   | yes              | Two   | SATT    | EG
6  | 10  | yes              | Two   | SATT    | AF
I need to select the rows which have not been processed yet. A "processed" document has "FR" in the "Solved" column; sadly, other versions of the document exist in the DB with other codes in the "Solved" column.
Now: if there is a row with "FR" in the "Solved" column, I need to remove every row with the same ID from my SELECT statement as well. Is this doable?
In this example, I would have to remove the rows with the IDs 2, 4 (because the system sadly isn't too reliable, I guess) and 6 from my select statement. Is this possible in general?
What I could do is filter out the duplicates afterwards, in Python/JS/whatever. But I am curious whether I can exclude these rows directly in the SQL statement as well.
To rephrase it one more time: how can I make a SELECT statement which returns only (in this example) the rows with the IDs 1, 3 and 5?
If you need to exclude every "id" that has at least one "FR" row, you can use a subquery that collects those ids and filter them out with NOT IN:
SELECT *
FROM tab
WHERE id NOT IN (SELECT id FROM tab WHERE Solved = 'FR');
Edit. If you actually want to remove those rows from the table, the same subquery works in a DELETE statement:
DELETE FROM tab
WHERE id IN (SELECT id FROM tab WHERE Solved = 'FR');
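The anti-join shape can be sketched with Python's sqlite3, reduced to just the id and Solved columns (with 'FR' marking a processed document, per the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE tab (id INTEGER, Solved TEXT)")
cur.executemany("INSERT INTO tab VALUES (?, ?)", [
    (1, "AF"),
    (2, "FR"), (2, "EG"), (2, "AF"),
    (3, "AF"),
    (4, "FR"),
    (5, "EG"),
    (6, "FR"), (6, "EG"), (6, "AF"),
])

# Keep only the ids that never have an 'FR' row, even if
# some of their other rows carry different codes.
unsolved = cur.execute("""
    SELECT DISTINCT id FROM tab
    WHERE id NOT IN (SELECT id FROM tab WHERE Solved = 'FR')
    ORDER BY id
""").fetchall()
print(unsolved)  # [(1,), (3,), (5,)]
```

IDs 2, 4 and 6 each have at least one 'FR' row, so they drop out entirely, matching the desired result.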
I'm not sure I understand your question correctly:
...every document that has been processed has [...] multiple rows in the DB
I need to find out which documents have not been processed yet
So it seems you need to find the documents that have only a single row, which can be done using GROUP BY with a HAVING clause:
SELECT
Id
FROM dbo.TableName
GROUP BY Id
HAVING COUNT(*) = 1
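To see what the HAVING clause keeps, here is a quick sketch with Python's sqlite3 on just the Id column (the sample ids are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE TableName (Id INTEGER)")
cur.executemany("INSERT INTO TableName VALUES (?)",
                [(1,), (2,), (2,), (2,), (3,), (5,), (6,), (6,)])

# HAVING filters the groups after aggregation: only ids whose
# group contains exactly one row survive.
single = cur.execute("""
    SELECT Id FROM TableName
    GROUP BY Id
    HAVING COUNT(*) = 1
    ORDER BY Id
""").fetchall()
print(single)  # [(1,), (3,), (5,)]
```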
I am not sure this can be done; I have tried numerous searches but have no real result yet.
I have a SQL Server database with a table where I want to output results from a single table both horizontally and vertically. I realise this will be a complex SQL statement. I have managed part of the vertical using a UNION, but the horizontal eludes me.
The table has a field called "reference" that contains strings of characters such as "A03ACCEVEN18JS-SN1AA" or "A02ACCVCOM18JS-FN1AA". I want to create an output with a row for the count of references commencing "A02", then a row for "A03", "A04", etc., counting only references that also contain "18". Then I want to expand horizontally to count the references with different letters after the hyphen, i.e. "-S", "-F", etc. So the output would look like below,
S_Count | F_Count | J_Count etc
---------------------------------
A02 Row --> 58 | 23 | 16
A03 Row --> 22 | 43 | 53
A04 Row --> 7 | 31 | 23
etc
I managed to get one column so far with multiple WHERE clauses and UNIONs, like below, but the horizontal part still eludes me. Can this be done please?
SELECT COUNT(reference) FROM mytable
WHERE reference LIKE 'A02%' AND reference LIKE '%%18%%' AND PATINDEX('%-P%',
reference) <> 0
UNION
SELECT COUNT(reference) FROM mytable
WHERE reference LIKE 'A03%' AND reference LIKE '%%18%%' AND PATINDEX('%-P%',
reference) <> 0
UNION
SELECT COUNT(reference) AS TOTAL FROM mytable
WHERE reference LIKE 'A04%' AND reference LIKE '%%18%%' AND PATINDEX('%-P%',
reference) <> 0;
Let's do it all in one hit :)
SELECT
    LEFT(reference, 3) AS ao_number,
    SUM(CASE WHEN reference LIKE '%-S%' THEN 1 ELSE 0 END) AS s_count,
    SUM(CASE WHEN reference LIKE '%-F%' THEN 1 ELSE 0 END) AS f_count,
    SUM(CASE WHEN reference LIKE '%-J%' THEN 1 ELSE 0 END) AS j_count
FROM
    mytable
WHERE
    reference LIKE 'A0%18%'
GROUP BY
    LEFT(reference, 3)
Notes:
LEFT(reference, 3) pulls the A0x number off the start. Grouping by this gives one row per distinct A0x number, so if variations of A00 to A09 are all present, we'll get 10 rows.
You don't need to (and shouldn't) write WHERE reference LIKE 'A03%' AND reference LIKE '%%18%%' etc.; I just combined them into 'A0%18%'. Note that I didn't combine them into 'A03%18%', as that would restrict our data too much. Also, don't double up your percent signs when doing a LIKE.
The SUM performs a count: the CASE WHEN looks at the reference and returns 1 if it contains, e.g., an -S, else 0. Summing these effectively counts the matching reference patterns.
By the way, for future searching purposes, this type of query is called a PIVOT. Most databases have some proprietary syntax to carry out pivoting, but I tend to remember/utilize this pattern because it's a bit more flexible and is cross-db compatible.
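The conditional-aggregation pattern can be tried out with Python's sqlite3, which spells LEFT(x, 3) as substr(x, 1, 3); the sample references below are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE mytable (reference TEXT)")
cur.executemany("INSERT INTO mytable VALUES (?)", [
    ("A03ACCEVEN18JS-SN1AA",),
    ("A02ACCVCOM18JS-FN1AA",),
    ("A02ACCVCOM18JS-SN1AA",),
    ("A03ACCVCOM18JS-JN1AA",),
])

# One pass over the table: group by the A0x prefix, and let each
# SUM(CASE ...) count one letter-after-hyphen pattern per column.
pivot = cur.execute("""
    SELECT substr(reference, 1, 3) AS ao_number,
           SUM(CASE WHEN reference LIKE '%-S%' THEN 1 ELSE 0 END) AS s_count,
           SUM(CASE WHEN reference LIKE '%-F%' THEN 1 ELSE 0 END) AS f_count,
           SUM(CASE WHEN reference LIKE '%-J%' THEN 1 ELSE 0 END) AS j_count
    FROM mytable
    WHERE reference LIKE 'A0%18%'
    GROUP BY substr(reference, 1, 3)
    ORDER BY ao_number
""").fetchall()
print(pivot)  # [('A02', 1, 1, 0), ('A03', 1, 0, 1)]
```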
I have a table that looks like this with three columns From, To, and Symbol:
From To Symbol
0 2 dog
2 5 dog
5 9 cat
9 15 cat
15 20 dog
20 40 dog
40 45 dog
I was trying to write an SQL query that groups records in a way that produces the following result:
From To Symbol
0 5 dog
5 15 cat
15 45 dog
That is, if the From and To values are continuous for the same Symbol, one result record is created with the smallest From and the largest To values for that Symbol. In the above example table, the second record has a value of 5 in the To column, which is not the same as the From value in the next record with the same Symbol (15, 20, dog), so two result records are created for that Symbol (dog).
I have tried to join the table to itself, then group by. But I could not figure out how exactly that can be done. I have to do this in Microsoft Access. Any help would be greatly appreciated. Thanks!
Assuming the values have no overlaps and that gaps separate values, you can do this in MS Access with a trick. You need to identify the adjacent symbols that are the same. Well, you can identify them by counting the number of previous rows with different symbols (using a subquery). Once you have this information, the rest is aggregation:
select symbol, min(from) as from, max(to) as to
from (select t.*,
(select count(*)
from t as t2
where t2.from < t.from and t2.symbol <> t.symbol
) as grp
from t
) t
group by symbol, grp;
Gaps would make this problem much harder in MS Access.
Note: Don't use reserved words or keywords for column names. This code uses the names supplied in the question, but doesn't bother to escape them. I think that just makes it harder to understand the query.
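The same trick can be tested outside Access with Python's sqlite3; here the From/To columns are renamed FromVal/ToVal to sidestep the reserved-word problem the note mentions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (FromVal INTEGER, ToVal INTEGER, Symbol TEXT)")
cur.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    (0, 2, "dog"), (2, 5, "dog"),
    (5, 9, "cat"), (9, 15, "cat"),
    (15, 20, "dog"), (20, 40, "dog"), (40, 45, "dog"),
])

# grp counts the earlier rows with a different symbol: it stays
# constant within one run of a symbol and changes when the symbol
# changes, so (Symbol, grp) identifies each island.
islands = cur.execute("""
    SELECT Symbol, MIN(FromVal) AS FromVal, MAX(ToVal) AS ToVal
    FROM (SELECT t.*,
                 (SELECT COUNT(*) FROM t AS t2
                  WHERE t2.FromVal < t.FromVal AND t2.Symbol <> t.Symbol) AS grp
          FROM t)
    GROUP BY Symbol, grp
    ORDER BY FromVal
""").fetchall()
print(islands)  # [('dog', 0, 5), ('cat', 5, 15), ('dog', 15, 45)]
```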
I have a switch statement in a crosstab query:
Switch([Age]<20, "Under 20", [Age]>=20 And [Age]<=25, "Between 20 and 25")
AS Age_Range
The switch statement evaluates my row heading like this:
                    1  2  3  4  5   <-- Columns
Under 20            0  0  0  3  2
Between 20 and 25   1  2  0  4  0
Where the value =
Total: Nz(Count(Demo.ID))+0
Okay, all is good so far. However, I am trying to make a left join with the switch statement so ALL of the age ranges will show up, regardless of whether or not there is data. I know I need a table with all of the age ranges, but I am confused.
Here is what I have tried that is currently not working.
Joining the switch statement Age_Range to the table Age Range, where the correlating values in the table are the "Under 20" and "Between 20 and 25" strings in the switch. Not working.
Instead of putting the string values in the table, putting the conditions ([Age]<20, etc). However, this fails because in order to put the conditions in the table, it has to be a text field. There is a data mismatch.
Can someone please let me know if this can be done and how?
Thanks,
Make the crosstab a separate query. Then left join that query to your table of age-range strings (the "Under 20" and "Between 20 and 25" values from your first attempt).
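A sketch of that left-join shape using Python's sqlite3, with a CASE expression standing in for Access's Switch (the table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# The lookup table holds every label that must appear in the output.
cur.execute("CREATE TABLE AgeRange (Label TEXT)")
cur.executemany("INSERT INTO AgeRange VALUES (?)",
                [("Under 20",), ("Between 20 and 25",)])
cur.execute("CREATE TABLE Demo (ID INTEGER, Age INTEGER)")
cur.executemany("INSERT INTO Demo VALUES (?, ?)", [(1, 22), (2, 24)])

# Left join the full list of labels to the bucketed data so empty
# ranges still appear with a zero count (the Nz(...)+0 idea).
counts = cur.execute("""
    SELECT r.Label, COUNT(d.ID) AS Total
    FROM AgeRange r
    LEFT JOIN Demo d
      ON r.Label = CASE
                     WHEN d.Age < 20 THEN 'Under 20'
                     WHEN d.Age BETWEEN 20 AND 25 THEN 'Between 20 and 25'
                   END
    GROUP BY r.Label
    ORDER BY r.Label
""").fetchall()
print(counts)  # [('Between 20 and 25', 2), ('Under 20', 0)]
```

"Under 20" shows up with a count of 0 even though no row falls in that range, which is exactly what the inner-join version fails to do.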
I have a database in MS-Access which has field names as "1", "2", "3", ... "10".
I want to select column 1; then, when I click a button, column 2, then column 3, and so on. How can I do that?
Actually, I want to use this in a JSP project.
Those are impractical field names, but you can use brackets to use them in a query:
select [1] from SomeTable where SomeId = 42
A basic rule for designing databases is that data should be in the field values, not in the field names. You should consider redesigning the table so that you store the values in separate rows instead, and have a field that specifies which item is stored in the row:
select Value from SomeTable where SomeId = 42 and ValueType = 1
This will enable you to use a parameterised query instead of creating the query dynamically.
Also, this way you don't have empty fields if you use less than ten items, and the database design doesn't limit you to only ten items.
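A sketch of the row-per-value design with a parameterised query, using Python's sqlite3 (all names here are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE SomeTable
               (SomeId INTEGER, ValueType INTEGER, Value TEXT)""")
cur.executemany("INSERT INTO SomeTable VALUES (?, ?, ?)", [
    (42, 1, "first"), (42, 2, "second"), (42, 3, "third"),
])

# The "column number" is now plain data, so one parameterised query
# serves every button instead of building SQL like "select [1] ..." per click.
def value_for(some_id, value_type):
    row = cur.execute(
        "SELECT Value FROM SomeTable WHERE SomeId = ? AND ValueType = ?",
        (some_id, value_type),
    ).fetchone()
    return row[0] if row else None

print(value_for(42, 2))  # second
```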
Suppose I have a table like this
id name
1 name1
2 name2
3 name3
4 name4
5 name5
Now suppose I want to choose record 1 when button 1 is clicked, second record when button 2 is clicked and so on.
So I will write a query like
select * from MyTbl where id = #btnId .
Note:- #btnId will have the value 1 for Button 1, 2 for Button 2 etc.
Or you can use a CASE statement.
This is just an idea for accomplishing the work but as others mentioned, you should be more specific for an accurate answer.