We have a Property table in our database that stores a counter in every row, in the NextVoucherNumber integer column. There are about 2000 rows.
ID      ... {other columns} ...      NextVoucherNumber
------------------------------------------------------
1                                    112
2                                    34
3                                    29
4                                    9456
...                                  ...
2000                                 233
We have an issue with concurrent access to this table.
To improve performance we would like to extract the counter into a separate table, PropertyVoucherNumbers, with a 1:1 relation between the rows:
ID      NextVoucherNumber
--------------------------
1       112
2       34
3       29
4       9456
...     ...
2000    233
Alternatively, we could maintain a sequence for every row:
Seq_VoucherNumber_1, Seq_VoucherNumber_2, ... Seq_VoucherNumber_2000.
It looks like that would need the same triggers, just with a little dynamic SQL.
Could you please describe what problems we would face with the second solution?
Can you suggest any better solution?
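For what it's worth, a minimal sketch of how the separate PropertyVoucherNumbers table could hand out numbers atomically, assuming SQL Server syntax (the question does not name the DBMS; Oracle's UPDATE ... RETURNING INTO can do the same job):

-- Claim the next voucher number for one property in a single atomic statement,
-- so two concurrent callers can never receive the same value.
DECLARE @PropertyID int = 1;
DECLARE @Claimed TABLE (VoucherNumber int);

UPDATE PropertyVoucherNumbers
SET    NextVoucherNumber = NextVoucherNumber + 1
OUTPUT deleted.NextVoucherNumber INTO @Claimed   -- the value before the increment
WHERE  ID = @PropertyID;

SELECT VoucherNumber FROM @Claimed;

Keeping the counters in a narrow table of their own means the row lock taken here never blocks readers of the wide Property rows.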
I have a Benchmarking table like this:
BMID    TestID    BMTitle    ConnectedTestID
---------------------------------------------
1       5         My BM1     0
2       6         My BM2     5
3       7         My BM3     5,6
4       8         My BM4     10,12,8
5       9         My BM5     0
6       10        My BM6     3,6
7       5         My BM7     8,3,12,9
8       3         My BM8     7,10
9       8         My BM9     0
10      12        My BM10    9
---------------------------------------------
To explain the table a little: TestID and ConnectedTestID are the columns that matter here. If the user wants all the benchmarks for TestID 3,
the query should return the rows where TestID = 3, plus any rows whose ConnectedTestID column contains that TestID among its comma-separated values.
That means if the user specifies 3 as the TestID, it should return:
---------------------------------------------
8       3         My BM8     7,10
7       5         My BM7     8,3,12,9
6       10        My BM6     3,6
---------------------------------------------
I hope it's clear how those three rows are returned: the first row because its TestID is 3, and the other two because 3 appears in their ConnectedTestID cell.
You should fix the data structure. Storing numeric ids in a comma-delimited list is a bad, bad, bad idea:
SQL Server doesn't have the best string manipulation functions.
Storing numberings as character strings is a bad idea.
Having undeclared foreign key relationships is a bad idea.
The resulting queries cannot make use of indexes.
While you are exploring what a junction table is so you can fix the problem with the data structure, you can use a query such as this:
select *
from Benchmarks        -- the question doesn't name the table; substitute yours
where TestID = 3
   or ',' + ConnectedTestID + ',' like '%,3,%'
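And for when the structure is fixed, a hedged sketch of what that junction table could look like (Benchmarks and BenchmarkConnectedTest are my own placeholder names, not from the question):

-- One row per (benchmark, connected test) pair instead of a comma-delimited list.
create table BenchmarkConnectedTest (
    BMID            int not null,   -- the benchmark row (placeholder FK to Benchmarks.BMID)
    ConnectedTestID int not null,   -- one connected test per row
    primary key (BMID, ConnectedTestID)
);

-- The original request then becomes an indexable query:
select b.*
from Benchmarks b
where b.TestID = 3
   or exists (select 1
              from BenchmarkConnectedTest c
              where c.BMID = b.BMID
                and c.ConnectedTestID = 3);

Both branches of that query can use indexes, which is the whole point of the junction table.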
I'm using MS Access 2013.
I need to display AND EDIT a grid of data based on three tables:
UnitID    UnitName
1         Unit1
2         Unit2
3         Unit3

ProdID    ProdName
1         Furniture
2         Food
3         Other

UnitID    ProdID    Forecast
1         1         10
1         2         20
1         3         30
2         1         40
2         2         50
2         3         60
3         1         70
3         2         80
3         3         90
so it looks like:
            Unit1    Unit2    Unit3
Furniture   10       40       70
Food        20       50       80
Other       30       60       90
Furthermore, the query must be editable (user should be able to enter his forecast data).
Any idea how to do this in Access 2010? I've looked into pivots and crosstab queries, but they use aggregate functions and thus aren't editable... In my case, though, the source of the data is unambiguous, so an editable option should exist. Does anyone have an idea how to get the data into an editable format?
Thanks!
Jur.
Create a temp table and fill it with the data from your crosstab query. Use that table as the record source for a form, which will be editable. In the form's BeforeUpdate event, add code to write the changes back to the original source table.
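For reference, a hedged sketch of the crosstab part in Access SQL, assuming the three tables are named Units, Products and Forecasts and the crosstab is saved as qryForecastCrosstab (all of those names are my own placeholders). The first statement is the saved crosstab; the second is a make-table query that fills the temp table from it:

TRANSFORM Sum(f.Forecast)
SELECT p.ProdName
FROM (Forecasts AS f INNER JOIN Units AS u ON u.UnitID = f.UnitID)
     INNER JOIN Products AS p ON p.ProdID = f.ProdID
GROUP BY p.ProdName
PIVOT u.UnitName;

SELECT * INTO tmpForecast FROM qryForecastCrosstab;

The form would then be bound to tmpForecast, and the BeforeUpdate code would write each edited cell back to Forecasts, keyed on UnitID and ProdID.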
Thanks all.
Distributing any kind of exe is not an option due to security measures in the client's environment (they can run Office and little else). So I'm going for the temp table option anyway... any pointers to a template solution I could adapt to my needs?
Thanks again!
Jur.
Possible Duplicate:
Oracle: how to “group by” over a range?
Let's say I have data that looks like this:
Item       Count
========   ========
1          123
2          1
3          47
4          117
5          18
6          466
7          202
I want to create a query that gives me this:
Count Start   Count End   Occurrences
===========   =========   ===========
0             100         3
101           200         2
201           300         1
301           400         0
401           500         1
Basically, I want to take a bunch of counts and group them into ranges for statistical rollups. I don't think I'm using the right keywords to find the answer to this. I am going against Oracle, though if there is an ANSI SQL answer I'd love to have it.
select
    a.mini,
    a.maxi,
    count(a.item) as occurrences
from
(
    select
        t.item,
        case
            when t.counter >= 0   and t.counter <= 100 then 0
            when t.counter >= 101 and t.counter <= 200 then 101
            when t.counter >= 201 and t.counter <= 300 then 201
            when t.counter >= 301 and t.counter <= 400 then 301
            when t.counter >= 401 and t.counter <= 500 then 401
        end as mini,
        case
            when t.counter >= 0   and t.counter <= 100 then 100
            when t.counter >= 101 and t.counter <= 200 then 200
            when t.counter >= 201 and t.counter <= 300 then 300
            when t.counter >= 301 and t.counter <= 400 then 400
            when t.counter >= 401 and t.counter <= 500 then 500
        end as maxi
    from
        your_table t   -- "table" is a reserved word, so substitute your real table name
) a
group by
    a.mini,
    a.maxi
-- note: ranges with no rows in them (e.g. 301-400) will not appear with this approach
One way is using CASE expressions. If you want it to be scalable, try keeping the ranges in a separate table and use a JOIN to count the occurrences.
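A hedged sketch of that scalable variant (count_ranges, your_table and the counter/item column names are placeholders):

create table count_ranges (
    range_start number not null,
    range_end   number not null
);

insert into count_ranges values (0, 100);
insert into count_ranges values (101, 200);
insert into count_ranges values (201, 300);
insert into count_ranges values (301, 400);
insert into count_ranges values (401, 500);

-- the left join keeps empty buckets (e.g. 301-400) with an occurrence count of 0
select r.range_start as count_start,
       r.range_end   as count_end,
       count(t.item) as occurrences
from count_ranges r
left join your_table t
       on t.counter between r.range_start and r.range_end
group by r.range_start, r.range_end
order by r.range_start;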
Here is the deal. I have a table T with many columns but two of interest: gen_ID, ordernumber.
Records in this table always come in groups of 5, with Gen_ID being the same and ordernumber being blank.
So in essence, it looks like this:
Gen_ID    ordernumber
233
233
233
233
233
234
234
234
234
234
Now I have a query Q that, when executed, randomizes the numbers 1, 2, 3, 4, and 5.
I want to update ordernumber with the random numbers of Q so it looks like this:
Gen_ID    ordernumber
233       3
233       4
233       1
233       2
233       5
234       4
234       5
234       3
234       2
234       1
Etc...
Any idea on how to do this using MS Access 2010 SQL?
An update query would be fine, but I cannot join the two since I don't have a common ID.
Any suggestions? Note that I can run this magic query as soon as a set of 5 records is created in the table (it doesn't need to handle more than one set at a time).
I don't think this can be achieved by SQL alone; it will need some VB running alongside. My approach would be to get your 1-5 numbers in a random order stored in an array; you can then open a recordset on T and step through it row by row, assigning a number from your array. You could also loop this process so it begins again whenever it detects a new Gen_ID in T, and thus populate the whole table in one pass.
I have a table containing hierarchical data. There are currently ~8 levels in this hierarchy.
I really like the way the data is structured, but performance is dismal when I need to know if a record at level 8 is a child of a record at level 1.
I have PL/SQL stored functions which do these lookups for me, each having a select * from tbl start with ... connect by... statement. This works fine when I'm querying a handful of records, but I'm in a situation now where I need to query ~10k records at once and for each of them run this function. It's taking 2-3 minutes where I need it to run in just a few seconds.
Using some heuristics based on my knowledge of the current data, I can get rid of the lookup function and just do childrecord.key LIKE parentrecord.key || '%', but that's a really dirty hack and will not always work.
So now I'm thinking that for this hierarchically-defined table I need a separate parent-child table which will contain every relationship: for a single chain going from level 1 to 8 that is 28 records (7 + 6 + ... + 1), associating 1 with 2, 1 with 3, ..., 1 with 8, then 2 with 3, 2 with 4, ..., 2 with 8. And so forth.
My thought is that I would need to have an insert trigger where it will basically run the connect by query and for every match going up the hierarchy it will insert a record in the lookup table. And to deal with old data I'll just set up foreign keys to the main table with cascading deletes.
Are there better options than this? Am I missing another way that I could determine these distant ancestor/descendant relationships more quickly?
EDIT: This appears to be exactly what I'm thinking about: http://evolt.org/working_with_hierarchical_data_in_sql_using_ancestor_tables
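For context, a hedged sketch of the kind of lookup function described above (tbl comes from the question; the id/parent_id column names and the function name are my own placeholders):

-- Returns 1 when p_child_id is a descendant of p_ancestor_id, else 0.
create or replace function is_descendant_of (
    p_child_id    in number,
    p_ancestor_id in number
) return number
is
    l_cnt number;
begin
    -- walk down from the supposed ancestor and see whether the child shows up
    select count(*)
      into l_cnt
      from tbl
     where id = p_child_id
     start with id = p_ancestor_id
     connect by prior id = parent_id;

    return case when l_cnt > 0 then 1 else 0 end;
end is_descendant_of;
/

Calling something like this once per row over ~10k rows is what makes the runtime blow up, since every call walks the hierarchy again.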
So what you want is to materialize the transitive closure. That is, given this application table ...
ID    | PARENT_ID
------+----------
1     |
2     | 1
3     | 2
4     | 2
5     | 4
... the graph table would look like this:
PARENT_ID | CHILD_ID
----------+---------
1         | 2
1         | 3
1         | 4
1         | 5
2         | 3
2         | 4
2         | 5
4         | 5
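With a graph table like this in place, a distant ancestor/descendant check becomes a single indexed lookup instead of a CONNECT BY walk; a minimal sketch, assuming the table is named tbl_graph:

-- returns a row exactly when :child_id is a (direct or indirect) descendant of :ancestor_id
select 1
  from tbl_graph
 where parent_id = :ancestor_id
   and child_id  = :child_id;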
It is possible to maintain a table like this in Oracle, although you will need to roll your own framework for it. The question is whether it is worth the overhead. If the source table is volatile then keeping the graph data fresh may cost more cycles than you will save on the queries. Only you know your data's profile.
I don't think you can maintain such a graph table with CONNECT BY queries and cascading foreign keys. Too much indirect activity, too hard to get right. Also a materialized view is out, because we cannot write a SQL query which will zap the 1->5 record when we delete the source record for ID=4.
So I suggest you read a paper called Maintaining Transitive Closure of Graphs in SQL by Dong, Libkin, Su and Wong. It contains a lot of theory and some gnarly (Oracle) SQL, but it will give you the grounding to build the PL/SQL you need to maintain a graph table.
"can you expand on the part about it
being too difficult to maintain with
CONNECT BY/cascading FKs? If I control
access to the table and all
inserts/updates/deletes take place via
stored procedures, what kinds of
scenarios are there where this would
break down?"
Consider the record 1->5, which is a short-circuit of 1->2->4->5. Now what happens if, as I said before, we delete the source record for ID=4? Cascading foreign keys could delete the entries for 2->4 and 4->5, but that leaves 1->5 (and indeed 2->5) in the graph table although they no longer represent a valid edge in the graph.
What might work (I think, I haven't done it) would be to use an additional synthetic key in the source table, like this.
ID    | PARENT_ID | NEW_KEY
------+-----------+--------
1     |           | AAA
2     | 1         | BBB
3     | 2         | CCC
4     | 2         | DDD
5     | 4         | EEE
Now the graph table would look like this:
PARENT_ID | CHILD_ID | NEW_KEY
----------+----------+--------
1         | 2        | BBB
1         | 3        | CCC
1         | 4        | DDD
1         | 5        | DDD
2         | 3        | CCC
2         | 4        | DDD
2         | 5        | DDD
4         | 5        | DDD
So the graph table has a foreign key referencing the relationship in the source table which generated it, rather than linking to the ID. Then deleting the record for ID=4 would cascade deletes of all records in the graph table where NEW_KEY=DDD.
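A minimal sketch of how that cascading foreign key could be declared (I'm assuming the source table is called tbl and the graph table tbl_graph; all names are illustrative):

-- NEW_KEY has to be unique in the source table before it can be referenced
alter table tbl
  add constraint tbl_new_key_uk unique (new_key);

create table tbl_graph (
    parent_id number       not null,
    child_id  number       not null,
    new_key   varchar2(10) not null
              references tbl (new_key) on delete cascade,
    constraint tbl_graph_pk primary key (parent_id, child_id)
);

Deleting the source row whose NEW_KEY is DDD then cascades away every graph row tagged DDD in one go, which is the behaviour described above.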
This would work if any given ID can only have zero or one parent IDs. But it won't work if it is permissible for this to happen:
ID    | PARENT_ID
------+----------
5     | 2
5     | 4
In other words, the edge 1->5 represents both 1->2->4->5 and 1->2->5. So what might work depends on the complexity of your data.