Looking up a control value at run time in a SQL query - sql

I have a query in MS Access whose WHERE clause is:
WHERE (((tb_KonzeptFunktionen.Konzept)=[Formulare]![frm_Fahrzeug]![ID]));
It takes a long time to run, but when I delete this WHERE clause the query finishes in less than a second.
Can I say that passing [Formulare]![frm_Fahrzeug]![ID] as a parameter is not efficient? Or is looking up the control value slowing it down? If so, how can I solve this problem?

The db engine should retrieve the control's value almost instantaneously. If that WHERE condition slows down your query significantly, it is more likely due to extra work the db engine must perform to retrieve the matching rows. You can check this assumption by temporarily substituting a static known value in place of the control's value.
WHERE tb_KonzeptFunktionen.Konzept=1;
If the version with the static value is equally slow, create an index on tb_KonzeptFunktionen.Konzept and try again.
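The effect of that index can be sketched with Python's sqlite3 module as a stand-in for the Access/Jet engine (the table name mirrors the question, but the data and index name are invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tb_KonzeptFunktionen (Konzept INTEGER, Funktion TEXT)")
con.executemany("INSERT INTO tb_KonzeptFunktionen VALUES (?, ?)",
                [(i % 50, f"f{i}") for i in range(10_000)])

query = "SELECT * FROM tb_KonzeptFunktionen WHERE Konzept = ?"

# Without an index, the engine must scan the whole table for every lookup.
plan_before = con.execute("EXPLAIN QUERY PLAN " + query, (1,)).fetchall()[0][-1]

# After indexing the filtered column, the engine can seek directly to the rows.
con.execute("CREATE INDEX idx_konzept ON tb_KonzeptFunktionen (Konzept)")
plan_after = con.execute("EXPLAIN QUERY PLAN " + query, (1,)).fetchall()[0][-1]

print(plan_before)  # a full-table SCAN
print(plan_after)   # a SEARCH using idx_konzept
```

The same principle applies in Access: the parameter lookup itself is cheap; it is the unindexed filter column that makes the WHERE clause expensive.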

Related

Is there a way to check if a query hit a cached result or not in BigQuery?

We are performance tuning some of our queries, both in terms of cost and speed, and the results we get are a little bit weird. First, we had one query that did an overwrite on an existing table; we stopped it after 4 hours. Running the same query into an entirely new table took only 5 minutes. I wonder if the 5-minute query maybe used a cached result from the first run; is that possible to check? Is it possible to force BigQuery not to use the cache?
If you run the query in the UI, expand Options and make sure Use Cached Result is set appropriately.
Also, in the UI, you can check Job Details to see whether a cached result was used.
If you run your query programmatically, use the respective attributes: configuration.query.useQueryCache and statistics.query.cacheHit

Oracle: Poor performance on smaller result sets?

I am running into a very strange bit of behavior with a query in Oracle. The query itself is enormous and quite complex, but it is basically the same every time I run it. However, it seems to execute more slowly when returning a smaller result set. The best example I can give is that if I set this filter on it,
and mgz_query.IsQueryStatus(10001,lqo.query_id)>0
which returns 960 of 12,429 records, I see an execution time of about 1.9 seconds. However, if I change the filter to
and mgz_query.IsQueryStatus(10005,lqo.query_id)>0
which returns 65 of 12,429 records, I see an execution time of about 6.8 seconds. When digging a bit deeper, I found that it seems the smaller result set was performing considerably more buffer gets than the larger result set. This seems completely counter-intuitive to me.
The query this is being run against is roughly 8000 characters long (Unless someone wants it, I'm not going to clutter this post with the entire query), includes 4 'Union All' statements, but otherwise filters primarily on indexes and is pretty efficient, apart from its massive size.
The filter in use is executed via the below function.
Function IsQueryStatus(Pni_QueryStateId in number,
                       Pni_Query_Id in number) return pls_integer as
  vn_count pls_integer;
Begin
  select count(1)
    into vn_count
    from m_query mq
   where mq.id = Pni_Query_Id
     and mq.state_id = Pni_QueryStateId;
  return vn_count;
End;
Any ideas as to what may be causing the smaller result set to perform so much worse than the large result set?
I think you are facing a situation where determining that something is not in the set takes a lot longer than determining if it is in the set. This can occur quite often. For instance, if there is an index on m_query(id), then consider how the where clause might be executed:
(1) The value Pni_Query_Id is looked up in the index. There is no match. Query is done with a value of 0.
(2) There are a bunch of matches. Now, let's fetch the pages where state_id is located and compare to Pni_QueryStateId. Ohh, that's a lot more work.
If that is the case, then having an index on m_query(id, state_id) should help the query.
By the way, this is assuming that the only change is in the function call in the WHERE clause. If there are other changes to get fewer rows, you might just be calling this function fewer times.
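The difference a composite index makes here can be sketched with Python's sqlite3 module as a stand-in for Oracle's plan output (m_query mirrors the table in the question; the data and index names are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE m_query (id INTEGER, state_id INTEGER, payload TEXT)")
con.executemany("INSERT INTO m_query VALUES (?, ?, ?)",
                [(i % 1000, i % 7, "x" * 100) for i in range(20_000)])

count_sql = "SELECT count(1) FROM m_query WHERE id = ? AND state_id = ?"

# Index on id alone: each matching entry still requires a table access
# to fetch state_id, i.e. the extra buffer gets described above.
con.execute("CREATE INDEX idx_id ON m_query (id)")
plan_single = " ".join(r[-1] for r in
                       con.execute("EXPLAIN QUERY PLAN " + count_sql, (1, 2)))

# Composite index on (id, state_id): the count is answered from the
# index alone, with no table access at all.
con.execute("CREATE INDEX idx_id_state ON m_query (id, state_id)")
plan_composite = " ".join(r[-1] for r in
                          con.execute("EXPLAIN QUERY PLAN " + count_sql, (1, 2)))

print(plan_single)     # SEARCH using idx_id only
print(plan_composite)  # SEARCH using a covering index
```

SQLite reports the second plan as using a "covering index"; Oracle would show the analogous effect as an INDEX RANGE SCAN with no TABLE ACCESS step.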

What state is saved between rerunning queries in LINQPad?

What state is saved between reruns of queries in LINQPad? I presumed none, so running a script twice should give the same results both times.
However, run the C# program below twice in the same LINQPad tab. You'll find that the first time it prints an empty list, and the second time a list with the message 'hey'. What's going on?
System.ComponentModel.TypeDescriptor.GetAttributes(typeof(String)).OfType<ObsoleteAttribute>().Dump();
System.ComponentModel.TypeDescriptor.AddAttributes(typeof(String),new ObsoleteAttribute("hey"));
LINQPad caches the application domain between queries, unless you request otherwise in Edit | Preferences (or press Ctrl+Shift+F5 to clear the app domain). This means that anything stored in static variables will be preserved between queries, assuming the types are numerically identical. This is why you're seeing the additional type description attribute in your code, and also explains why you often see a performance advantage on subsequent query runs (since many things are cached one way or another in static variables).
You can take advantage of this explicitly with LINQPad's Cache extension method:
var query = <someLongRunningQuery>.Cache();
query.Select (x => x.Name).Dump();
Cache() is a transparent extension method that returns exactly what it was fed if the input was not already seen in a previous query. Otherwise, it returns the enumerated result from the previous query.
Hence if you change the second line and re-execute, the query will run quickly, since the data will be supplied from the cache instead of having to re-execute the long-running query.
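Cache() itself is C#-specific, but the semantics described above can be sketched in Python: keep a cache keyed by the query's identity, enumerate the source only on a miss, and replay the stored result thereafter. All names here are invented for illustration:

```python
# Invented sketch of Cache()-style semantics: enumerate once, replay thereafter.
_cache = {}

def cache(key, make_query):
    """Return the stored result for key, running make_query only on a miss."""
    if key not in _cache:
        _cache[key] = list(make_query())  # enumerate and store the result
    return _cache[key]

calls = []

def long_running_query():
    calls.append(1)                    # record each real execution
    return (x * x for x in range(5))   # stand-in for an expensive query

first = cache("q1", long_running_query)
second = cache("q1", long_running_query)  # served from the cache, no re-execution
print(first, len(calls))
```

The key difference in LINQPad is that the cache lives in the preserved application domain, so it survives between separate runs of the query, not just within one run.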

Need for long and dynamic select query/view sqlite

I have a need to generate a long select query of potentially thousands of where conditions like (table1.a = ? OR table1.a = ? OR ...) AND (table2.b = ? OR table2.b = ? ...) AND....
I initially started building a class to make this more bearable, but have since stopped to wonder if this will work well. This query is going to be hammering a table of potentially 10s of millions of rows joined with 2 more tables with thousands of rows.
A number of concerns are stemming from this:
1.) I wanted to use these statements to generate a temp view so I could easily carry over the existing code base. The point here is that I want to filter the data down for analysis based on parameters selected in a GUI, so how poorly will a view perform in this scenario?
2.) Can sqlite even parse a query with thousands of binds?
3.) Isn't there a framework that can make generating this query easier other than with string concatenation?
4.) Is the better solution to dump all of the WHERE variables into hash sets in memory and then write a wrapper for my DB query object that calls next() until a row is encountered that satisfies all my conditions? My concern here is that the application generates graphs procedurally on scroll, so waiting to draw while calling query.next() 100,000 times might cause an annoying delay. Ideally I don't want to wait more than 30 ms for the next row that satisfies everything.
edit:
A new issue came to my attention: sqlite3 is limited to 999 bind values (host parameters) at compile time.
So it seems as if the only way to accomplish what I had originally intended is to
1.) Generate the entire query via string concatenation (my biggest concern being that I don't know how slow parsing such a large query inside sqlite3 will be)
or
2.) Do the blanket query method (select * from * where index > ? limit ?) and call next() until I hit valid data in my compiled code (including updating the index variable and re-querying repeatedly)
I did end up writing a wrapper around the QSqlQuery object that walks a table using index > variable and a limit.
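The "walking" approach described above can be sketched with Python's sqlite3 module (the table, column names, and page size are invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (idx INTEGER PRIMARY KEY, val TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(i, f"v{i}") for i in range(1, 1001)])

def walk(con, page_size=100):
    """Yield all rows in idx order, one page at a time (keyset pagination)."""
    last = 0
    while True:
        rows = con.execute(
            "SELECT idx, val FROM t WHERE idx > ? ORDER BY idx LIMIT ?",
            (last, page_size)).fetchall()
        if not rows:
            return
        yield from rows
        last = rows[-1][0]  # resume after the last key seen

seen = [idx for idx, _ in walk(con)]
print(len(seen))  # 1000
```

Each page is a cheap indexed seek, so the GUI can draw as rows arrive instead of waiting for one enormous filtered query to finish.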
Consider dumping the joined results without filters (denormalized) into a flat file and indexing it with FastBit, a bitmap index engine.
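Another way around the 999-host-parameter limit, independent of FastBit, is to load the candidate values into a temporary table and join against it, so the statement itself needs no bind placeholders for the value list at all. A sketch with sqlite3 (table names and data are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE big (a INTEGER, b TEXT)")
con.executemany("INSERT INTO big VALUES (?, ?)",
                [(i, f"b{i}") for i in range(50_000)])

# Thousands of candidate values: far past the limit if written
# as "a IN (?, ?, ...)" in a single statement.
wanted = list(range(0, 50_000, 7))

# Load them into a temp table (one bind per row, well under the limit)...
con.execute("CREATE TEMP TABLE filter_a (a INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO filter_a VALUES (?)", [(v,) for v in wanted])

# ...and filter via a join instead of a giant IN list.
rows = con.execute(
    "SELECT big.a, big.b FROM big JOIN filter_a USING (a)").fetchall()
print(len(rows))
```

This also sidesteps the cost of parsing a multi-thousand-term WHERE clause, since the SELECT stays the same size no matter how many values are in the filter set.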

What index would speed up my XQuery in X-Hive / Documentum xDB?

I have approx 2500 documents in my test database, and searching the XPath /path/to/@attribute takes approximately 2.4 seconds. Doing distinct-values(/path/to/@attribute) takes 3.0 seconds.
I've been able to speed up queries on /path/to[@attribute='value'] to hundreds or tens of milliseconds by adding a path value index on /path/to[@attribute<STRING>], but no index I can think of gets picked up for the more general query.
Anybody know what indexes I should be using?
The index you propose is the correct one (/path/to[@attribute]), but unfortunately the xDB optimizer currently doesn't recognize this specific case, since the 'target node' stored in the index is always an element and not an attribute. If /path/to/@attribute has few results, then you can optimize this by slightly modifying your query to: distinct-values(/path/to[@attribute]/@attribute). With this query the optimizer recognizes that there is an index it can use to get to the 'to' element, but it still has to access the target document to retrieve the attribute for the @attribute step. This is precisely why it will only benefit cases where there are few hits: each hit will likely access a different data page.
What you also can do is access the keys in the index directly through the API: XhiveIndexIf.getKeys(). This will be very fast, but clearly this is not very user friendly (and should be done by the optimizer instead).
Clearly the optimizer could handle this. I will add it to the bug tracker.