SQL UPDATE SET interchanges values

I'm updating a view so that two columns get the same value, but instead of just setting the value, the update swaps the two values. My view UpdateADAuftrag2 (reduced for SO) is this:
SELECT dbo.CSDokument.AD1, dbo.UpdateAS400zuSellingBenutzer2.BenutzerNr
FROM dbo.AS400Auftrag
INNER JOIN dbo.CSDokument
    ON dbo.AS400Auftrag.Angebotsnummer = dbo.CSDokument.Angebotsnummer
INNER JOIN dbo.UpdateAS400zuSellingBenutzer2
    ON dbo.AS400Auftrag.AD = dbo.UpdateAS400zuSellingBenutzer2.SchluesselWert
    AND dbo.CSDokument.AD1 <> dbo.UpdateAS400zuSellingBenutzer2.BenutzerNr
WHERE dbo.AS400Auftrag.AD IS NOT NULL
The important part is dbo.CSDokument.AD1 <> dbo.UpdateAS400zuSellingBenutzer2.BenutzerNr.
AD1 is the user number for external workers, and BenutzerNr means user number. So, for example, Charlie Brown is an external worker with user number 31. When AD1 is 31, Charlie Brown is the external worker for this document (an order, in this case).
The UPDATE statement looks like this:
UPDATE [dbo].[UpdateADAuftrag2]
SET [AD1] = [BenutzerNr]
I have, for example, these values:
AD1 | BenutzerNr
31 | 54
99 | 384
112 | 93
After the update, the result is this:
AD1 | BenutzerNr
54 | 31
384 | 99
93 | 112
Why not this?
AD1 | BenutzerNr
54 | 54
384 | 384
93 | 93
Edit: UpdateAS400zuSellingBenutzer is also a view, but as far as I can see it includes only BenutzerNr and not AD1.

Firstly, you're never going to see your expected results in the view. Your UPDATE statement is effectively a DELETE statement (as far as the view is concerned). Rows only appear in the view if AD1 <> BenutzerNr, but you're setting them to be equal.
However, the documentation for updatable views states "Any modifications, including UPDATE, INSERT, and DELETE statements, must reference columns from only one base table." Your update statement references columns from more than one table.
https://msdn.microsoft.com/en-us/library/ms187956.aspx#Updatable Views
I'm not sure what you're trying to achieve here, but in my experience it's usually easier to issue the UPDATE statement against the base tables directly.
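For example, assuming AD1 lives in dbo.CSDokument (as the view's SELECT suggests), here is a sketch of the same update issued against that base table directly, using T-SQL's UPDATE ... FROM with the join conditions copied from the view definition:
UPDATE d
SET d.AD1 = b.BenutzerNr
FROM dbo.CSDokument AS d
INNER JOIN dbo.AS400Auftrag AS a
    ON a.Angebotsnummer = d.Angebotsnummer
INNER JOIN dbo.UpdateAS400zuSellingBenutzer2 AS b
    ON a.AD = b.SchluesselWert
WHERE a.AD IS NOT NULL
  AND d.AD1 <> b.BenutzerNr;
Only one base table (dbo.CSDokument) is modified here, so the one-base-table restriction on updatable views no longer comes into play.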

It turned out there were two bugs. Bug 1: the view UpdateAS400zuSellingBenutzer2 sometimes returned two results for one entry in CSDokument. Bug 2: there were two entries in the table AS400Auftrag, and the update alternated between them. So it only looked like the SET swapped the two values; it was just coincidence. Thanks for reading.

Related

django database design when you will have too many rows

I have a Django web app with a Postgres database; the general operation is that every day I have an array of values that needs to be stored in one of the tables.
There is no foreseeable need to query the individual values of the array, but I need to be able to plot the values for a specific day.
The problem is that this array is pretty big: if I were to store it in the db row by row, I'd have 60 million rows per year, but if I store each array as a blob object, I'd have 60 thousand rows per year.
Is it a good decision to use a blob object to reduce table size when you do not want to query the individual values?
Here are the two options:
Option 1: keeping all rows
group(foreignkey)| parent(foreignkey) | pos(int) | length(int)
A | B | 232 | 45
A | B | 233 | 45
A | B | 234 | 45
A | B | 233 | 46
...
Option 2: collapsing the array into a blob:
group(fk)| parent(fk) | mean_len(float)| values(blob)
A | B | 45 |[(pos=232, len=45),...]
...
So I do NOT want to query pos or length, but I do want to query group or parent.
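For concreteness, here is a minimal sketch of option 2 as PostgreSQL DDL; every name here is illustrative rather than taken from the real schema, and jsonb stands in for whatever blob encoding you choose. Only the columns that are actually filtered on get indexes:
CREATE TABLE daily_measurements (
    id        bigserial PRIMARY KEY,
    group_id  integer NOT NULL,   -- FK to the group table
    parent_id integer NOT NULL,   -- FK to the parent table
    mean_len  double precision,
    "values"  jsonb               -- e.g. [{"pos": 232, "len": 45}, ...]; never filtered on
);
CREATE INDEX ON daily_measurements (group_id);
CREATE INDEX ON daily_measurements (parent_id);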
An example of the kind of read query I'm talking about is:
SELECT * FROM "mytable"
LEFT OUTER JOIN "group"
    ON ("group"."id" = "mytable"."group_id")
ORDER BY "pos" DESC LIMIT 100
which is a typical django admin list_view page main query.
I tried loading the data and displaying the table in the Django admin page without doing any complex query (just a read query).
Once I get past 1.5 million rows, the admin page freezes. All it takes is a COUNT query on that table to crash the app, so I should definitely either keep the data as a blob or not keep it in the db at all and use the filesystem instead.
I want to emphasize that I used Django 1.8 as my test bench, so this is not a Postgres evaluation but rather a system evaluation of the Django admin together with Postgres.
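As an aside on the COUNT problem: Postgres keeps an approximate row count in its catalog, which can stand in for a slow COUNT(*) on a large table. A sketch, assuming the table is named mytable and statistics are reasonably fresh (i.e. autovacuum/ANALYZE has run):
SELECT reltuples::bigint AS approx_rows
FROM pg_class
WHERE relname = 'mytable';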

Access query, if two values exist in one column, omit one

I have a series of queries that generate reports that contain chemical data. There are two compounds A and B where A is the total amount and B is a speciated amount (like total iron and ferrous iron, for example).
There are about one hundred total compounds in the query result, and I need a criterion to filter the results such that if both compounds A and B are present, only compound B is displayed. So far I've tried adding a few IIf statements to the criteria section in the query builder with no luck.
Here is what I have so far:
SELECT Table1.KEY_ANLT
FROM Table1
WHERE (((Table1.KEY_ANLT)=IIf([Table1].[KEY_ANLT]=1223 And [Table1].[KEY_ANLT]=70,70,1223)));
This filters out Compound A but does not include the rest of the compounds. How can I modify the query to also include the other compounds?
So, to clarify some of the comments above, the problem here is you don't have (or haven't specified above) a way to identify values that go together. You gave 70 and 1223 as an example, but if you gave us a list of all the numbers, how would we be able to identify which ones go together? You might say "chemistry expertise", but that's based on another column with the compounds' names, right? So really, your query should use that column.
But then there's still the problem of how to connect associated names (e.g., "total iron" and "ferrous iron" might be connected because they both have the word "iron", but what about "permanganate" and "manganese"?). In short, you need another column to specify the thing in common between these separate rows, whether it's element, ion, charge, etc.
You would also need a column identifying which row in each "group" you would want to include in your query (or, which ones to exclude). For example:
+----------+-----------------+---------+---------+
| KEY_ANLT | Compound | Element | Primary |
+----------+-----------------+---------+---------+
| 70 | total iron | Fe | Y |
| 1223 | ferrous iron | Fe | |
| 1224 | ferric iron | Fe | |
| 900 | total manganese | Mn | Y |
| 901 | permanganate | Mn | |
+----------+-----------------+---------+---------+
Then, to get a query that shows just the "primary" rows, it's pretty trivial:
SELECT * FROM Table1 WHERE [Primary]='Y';
Without that [Primary] column, you'd have to decide how to choose each row. Perhaps you'd want the one with the smallest KEY_ANLT?
SELECT Table1.*
FROM
(SELECT Element, min(KEY_ANLT) AS MinKey FROM Table1 GROUP BY Element) AS Subquery
INNER JOIN Table1 ON
Subquery.Element=Table1.Element AND
Subquery.MinKey=Table1.KEY_ANLT
The reason your query doesn't work is that the WHERE clause operates row-by-row, and doesn't compare different rows to one another. So in your SQL:
IIf([Table1].[KEY_ANLT]=1223 And [Table1].[KEY_ANLT]=70,70,1223)
NONE of the rows will evaluate this as 70, because no single row has KEY_ANLT=1223 AND KEY_ANLT=70. Each row only has one value for KEY_ANLT. So then that IIF expression evaluates as 1223 for every row, and your condition will only return rows where KEY_ANLT=1223 (compound B).
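That said, if the only pairing you ever need to handle really is 70/1223, one way to express "omit compound A when compound B is present" without adding a column is a NOT EXISTS test. A sketch in Access SQL; it hard-codes the two keys, so it does not scale to a hundred compounds:
SELECT Table1.*
FROM Table1
WHERE NOT (Table1.KEY_ANLT = 70
    AND EXISTS (SELECT 1 FROM Table1 AS T2 WHERE T2.KEY_ANLT = 1223));
Every other compound passes the filter untouched; the row with KEY_ANLT = 70 is dropped only when a row with 1223 exists.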

Single record buffering in SAP ABAP

My table is stud.
+-----+------+-------+
| no | name | grade |
+-----+------+-------+
| 101 | naga | A |
| 102 | raj | A |
| 103 | john | A |
+-----+------+-------+
The query I'm using is:
SELECT * FROM stud WHERE no = 101 AND grade = 'A'.
If I am using single record buffering, how much data is stored in the buffer area?
This query doesn't do anything: there is no INTO clause, meaning nothing that is selected gets stored anywhere.
You are probably looking to do something like this:
SELECT * FROM stud INTO wa_stud WHERE no = 101 AND grade = 'A'.
  " processing of each single row is performed here
ENDSELECT.
Or perhaps something like this, where only one row (the first row, ordered by primary key) is selected:
SELECT SINGLE * FROM stud INTO wa_stud WHERE no = 101 AND grade = 'A'.
Or perhaps you want everything brought into an internal table, if no and grade do not make up the full primary key:
SELECT * FROM stud INTO TABLE it_stud WHERE no = 101 AND grade = 'A'.
This is from the ABAP keyword documentation in SE38:
SAP Buffer - Single Record Buffering
Only those rows in the table are buffered that are actually accessed. This requires less space in the buffer than when using generic or full buffering. On the other hand, more administration work is required and significantly more direct database accesses.
So, since your query returns a single record (based on the data you displayed), it should fetch just one row and hold it in the buffer.
I'd suggest looking at the SAP Help and Google. Also have a read about SELECT SINGLE with incompletely specified keys: there used to be a problem with the buffer being bypassed in some situations.

PostgreSQL query to get events occurring within microseconds of each other

Ruby on Rails' ActiveRecord's destroy_all method generates one SQL DELETE query per record. We have a page in a web application that allows a user to delete associated records. Since the deleted_at timestamps are off by a few microseconds, it becomes challenging to group all the records that were deleted at the same time. Here is some example data:
id | deleted_at
----+----------------------------
71 |
45 | 2014-04-29 18:35:00.676153
46 | 2014-04-29 18:35:00.685657
47 | 2014-04-30 21:11:00.73096
48 | 2014-04-30 21:11:00.738533
49 | 2014-04-30 21:11:00.745232
50 |
51 |
(8 rows)
So you can see there were two events here, one affecting 2 rows (ids 45 and 46) and one affecting 3 rows (ids 47, 48, and 49). My question is: how can I query this table so that each event is grouped into a single row? I've considered using extract(microseconds from r.deleted_at), but that would fail when a group of timestamps crosses a second boundary. I want a query, or even a function, that compares the records and groups those whose deleted_at values are within a certain threshold of each other.
Edit: I should mention that I can't use delete_all because I want callbacks to be run. We are using the paranoia gem to soft-delete records.
I found a workaround. Instead of depending on ActiveRecord callbacks, I can simply set the timestamp manually using update_all. So instead of Model.where(...).delete_all I do Model.where(...).update_all(deleted_at: Time.current). Now I can easily group by timestamp.
If anyone wants to answer the original PostgreSQL query question, I'll be glad to choose the correct answer for points.
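Since the question asks for a pure PostgreSQL solution, here is one way to do it: a gaps-and-islands query that starts a new group whenever the gap to the previous deleted_at exceeds a threshold. This is a sketch; the table name events and the 1-second threshold are assumptions to adjust:
WITH flagged AS (
    SELECT id,
           deleted_at,
           -- 1 marks the first row of a new burst; NULL lag (first row) also yields 1
           CASE WHEN deleted_at - lag(deleted_at) OVER (ORDER BY deleted_at)
                     <= interval '1 second'
                THEN 0 ELSE 1
           END AS is_new_group
    FROM events
    WHERE deleted_at IS NOT NULL
), numbered AS (
    SELECT id,
           deleted_at,
           -- running sum of the flags assigns a group number to each row
           sum(is_new_group) OVER (ORDER BY deleted_at) AS group_no
    FROM flagged
)
SELECT group_no,
       min(deleted_at)           AS event_time,
       array_agg(id ORDER BY id) AS ids
FROM numbered
GROUP BY group_no
ORDER BY event_time;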

Database design for a step by step wizard

I am designing a system containing logical steps, with some actions associated with each step (the actions are not part of the question, but they are crucial to each step in the list).
The thing is that I need a way to define all the logical steps in an ordered fashion, so that I can fetch the list with a query and also modify it later on.
Does anyone have experience with this kind of database design?
I have been thinking of a table named wizard_steps (or something similar) that uses a priority column for ordering, but for some reason I feel this design will fail at some point (items with the same priority, new items forcing the rest of the items to be rearranged, and so forth).
Another design I have considered is a "next item" column in the wizard_steps table, but that doesn't feel like the right approach either.
So, to summarize: I am trying to model a list (and the design should be open enough to support multiple lists) of elements where the order is crucial.
Any ideas on what the database should look like?
Thanks!
EDIT: I found this Yii component I will check out: http://www.yiiframework.com/extension/simpleworkflow/
It might be a good solution!
If I understand you correctly, your main concern is to create a schema that supports ordered lists and allows easy insertion/reordering of items.
The following table design:
id_list | item_priority | foreign_itemdef_id
--------+---------------+-------------------
      1 |             1 | 245
      1 |             2 | 32
      1 |             3 | 45
      2 |             1 | 156
      2 |             2 | 248
      2 |             3 | 127
coupled with an item-definition table, will be easy to query but difficult to maintain, especially for insertions.
That one:
id_list | first_item_id
--------+--------------
      1 | 45
      2 | 38
coupled to the linked list:
item_id | next_item | foreign_itemdef_id
--------+-----------+-------------------
     45 |       381 | 56
    381 |      NULL | 59
     38 |        39 | 89
     39 |        42 | 78
     42 |      NULL | 45
Will be both difficult to query and update (you should update the linked list inside a transaction, otherwise your linked list can get corrupted).
I would prefer the first solution for simplicity.
Depending on your update frequency, you may consider leaving large increments between item_priority values to make insertions easier:
id_list | item_priority | foreign_itemdef_id
--------+---------------+-------------------
      1 |          1000 | 245
      1 |          2000 | 32
      1 |          3000 | 45
      2 |          1000 | 156
      2 |          2000 | 248
      2 |          3000 | 127
      1 |          2500 | 46    -- late insertion
      1 |          2750 | 47    -- late insertion
EDIT:
Here's a query that will make room for an insertion: it increments the priority of all rows at or above the target position (>= rather than >, so the target slot itself is freed up):
$query_make_room_for_new_item = "UPDATE item_priority_table
    SET item_priority = item_priority + 1
    WHERE item_priority >= " . $new_item_position_priority . "
    AND id_list = " . $id_list;
Then insert your item with priority $new_item_position_priority.
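Putting it together, here is a minimal sketch of the first design as SQL DDL; the table name wizard_step and the composite primary key are assumptions, and the step-definition table is only referenced by id:
CREATE TABLE wizard_step (
    id_list            integer NOT NULL,  -- which list/wizard the step belongs to
    item_priority      integer NOT NULL,  -- ordering key; leave gaps (1000, 2000, ...)
    foreign_itemdef_id integer NOT NULL,  -- reference into the step-definition table
    PRIMARY KEY (id_list, item_priority)
);

-- Fetching one wizard's steps in order is then a single indexed query:
SELECT foreign_itemdef_id
FROM wizard_step
WHERE id_list = 1
ORDER BY item_priority;
One caveat of the composite key: reprioritizing rows rewrites part of the primary key, so run the make-room UPDATE inside a transaction; depending on the engine, the bulk increment may also need to be done in descending priority order to avoid transient duplicate-key errors.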