This one has got me stumped but good. Using VBA in MS Access, I sometimes get different results when running the same code against the same table. I can run the code 2, 5, 6, 10 times and get the same results, then run it again and get a different result. Sometimes I run the code twice and get the same results, and sometimes I run it twice and get different results - all with the same code against the same table.
The code is used to group trips so they can be billed correctly. I do this by taking the raw SQL data and putting it into an Access table, then, via several sorts and some cross-checking, I label each trip in the Access table with a GR or an ML in the last field of the table. The result set is all trips for the specified time frame, now labeled ML (multi-loaded), GR (Grouped), or blank (demand).
I have even tried putting in MoveLast/MoveFirst to make sure the table is fully loaded each time (per suggestion from others).
Here is a link to the code and data after 2 runs of the same code on the same data:
Code&Data
I removed the trip ID and client ID data for privacy concerns. The trip ID is unique, but the client ID will be used many times, depending on how many trips the client took during the time period.
Any and all help you can give to make this code produce the same results each time it is run is GREATLY appreciated. I don't want to have to go back to doing this report labeling by hand, and this is the smallest of 4 reports that must be done twice a month.
Thanks!
David R. Mohr
When opening t_BillableTrips, I do not think it is safe to assume that the data will be sorted the way you want; that could potentially change from run to run. I would suggest using a query with an explicit sort order instead of opening the table directly. My second suggestion is to use the Recordset's Clone method to get InTable2 and InTable3: the recordsets will share the same underlying in-memory data but can be positioned at different records.
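For example, a saved query along these lines would give the recordsets a deterministic order (just a sketch - the field names are my guesses, since the real schema isn't shown; the key point is that the ORDER BY ends with a unique field such as the trip ID, so ties cannot reorder from run to run):

SELECT *
FROM t_BillableTrips
ORDER BY ClientID, TripDate, TripID;   -- assumed field names; ending on a unique key makes the sort stable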
First of all, thanks to everyone who gave an answer. Your prompts guided me to find the solution.
The first problem was that I was directly editing the TABLE while under the impression that my MAKE TABLE ... ORDER BY command was actually creating a table in the order I specified - it only worked most of the time, and we can't have that.
So, after digging deeper, I found more and more evidence that trying to sort the actual table - especially with a MAKE TABLE command - is not good practice: it can give unpredictable results and generates a lot more overhead. I am now basing my positioning and updating on a QUERY of the table and not the actual table, i.e. I changed this:
Set InTable = dbsBilling.OpenRecordset("t_BillableTrips", dbOpenTable)
Set InTable2 = dbsBilling.OpenRecordset("t_BillableTrips", dbOpenTable)
Set InTable3 = dbsBilling.OpenRecordset("t_BillableTrips", dbOpenTable)
Set InTable4 = dbsBilling.OpenRecordset("t_BillableTrips", dbOpenTable)
to this:
Set InTable = dbsBilling.OpenRecordset("q_BillableTripsSort1B", dbOpenDynaset)
Set InTable2 = dbsBilling.OpenRecordset("q_BillableTripsSort1B", dbOpenDynaset)
Set InTable3 = dbsBilling.OpenRecordset("q_BillableTripsSort1B", dbOpenDynaset)
Set InTable4 = dbsBilling.OpenRecordset("q_BillableTripsSort1B", dbOpenDynaset)
So far, this seems to have fixed the problem and, of course, the proc runs much faster since it does not have to create the table twice to run/update for two different sorts.
Thanks in advance for putting up with me.
Pulling a 33,000-record recordset from the database took LESS execution time than using Count() in the SQL and just grabbing 20 rows.
How is that possible?
A bit more detail:
Before, we were grabbing the entire recordset yet only displaying 20 rows of it on a page at a time for pagination. That was cringeworthy and wasteful, so I redesigned the page to only grab 20 rows at a time and to simply use an index variable to grab the next page, and so on.
All well and good, but that lacked a record count, which our people needed.
So after the record query, I added (what I thought would be) a quick query just on the index of the table using the Count(index) function in Structured Query Language.
A side-by-side comparison of the original page and my new page indicates my new page takes roughly 10% longer to execute than the original! I was flabbergasted. I thought for sure it would be lightning fast, way faster than the original.
Any thoughts on why and what I might do to remedy that?
Is it because the script has to run two queries, regardless of the data retrieved?
Update:
Here is the SQL.
(Table names and field names are fictionalized in this post for security, but the structure is the same as the real page).
The main recordset select query contains:
SELECT TOP 21
    roster_id, roster_pplid, roster_pplemailid, roster_emailid, roster_firstname,
    roster_lastname, roster_since, roster_pplsubscrid, roster_firstppldone, roster_pmtcurrent,
    roster_emailverified, roster_active, roster_selfcanceled, roster_deactreason
FROM roster
WHERE
    roster_siteid = 22
    AND roster_isdeleted = false
ORDER BY roster_id DESC
The record count query contains:
SELECT COUNT(roster_id)
FROM roster
WHERE
    roster_siteid = 22
    AND roster_isdeleted = false
The first query runs, then the second. The second query's WHERE filter is always built dynamically to match the first.
I think I know why it is slower: I'm using GetRows to grab the recordset in the new page, which I wasn't using in the old page. That seems to be the slowdown. But I have to use it; otherwise I can't stop at the 21st record.
Nick.McDermaid: The SQL shown is selecting the TOP 21 rows; that is how it is grabbing just 20 rows (number 21 is just to populate the index for the "Next" page link).
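For clarity, the "next page" version of the main query just feeds that saved index value back into the WHERE filter, roughly along these lines (a sketch only - @lastSeenId stands in for however the index variable actually gets passed in):

SELECT TOP 21
    roster_id, roster_pplid, roster_pplemailid, roster_emailid, roster_firstname,
    roster_lastname, roster_since, roster_pplsubscrid, roster_firstppldone, roster_pmtcurrent,
    roster_emailverified, roster_active, roster_selfcanceled, roster_deactreason
FROM roster
WHERE
    roster_siteid = 22
    AND roster_isdeleted = false
    AND roster_id <= @lastSeenId   -- the 21st roster_id from the previous page, i.e. the first row of this page
ORDER BY roster_id DESC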
This is my first question here.
I struggled for days and days trying to find a solution everywhere, with no success.
Basically I have a standard stored procedure pulling out a report dataset in a few seconds (5-6 seconds).
It aggregates (GROUP BY and SUM) 23,000 rows.
Indeed, my final dataset comes out with 4 rows and 33 columns, executing, as I said, in 5-6 seconds.
Unfortunately, when I try to load it via ReportBuilder, it loads endlessly (querying SQL Server, the stored procedure remains stuck in a RUNNING status forever).
Everything in ReportBuilder (DB accesses, dataset, parameters, matrix...) is correctly configured: I was in fact able to load it until I added a few (4) additional fields.
The SQL dataset is basically something like:
PARAMETERS DECLARATION

SELECT
    FIELDS
FROM (
    SELECT
        FIELD A,
        SUMS
    FROM TABLE
    JOIN TABLES
    WHERE PARAMETERS MATCHING
    GROUP BY A
) AS B
ORDER BY FIELD
An "external layer" SELECT was needed to make some calculations on some FIELDS, also in some cases using some PARAMETERS.
That's it.
I'm used to working with huge datasets, sometimes pulling out 30,000 rows with 110 fields, but if something loads via SQL it has always loaded via ReportBuilder as well: this is the very first time it has behaved this way.
So I'm asking whether there are some strange SSRS/ReportBuilder limitations that I have never run into before.
Any help would be really really appreciated!
Thanks in advance to everyone who'll spend time :)
"meta/background about the use of code and person using it"
1.site built by professional that left company,
2.I am inexperienced but trying/ want to learn,
3.Customer support site for service reps,
................................................
What im trying to do exactly per stackoverflows parameters.
We have a drop-down box listing issues that the customer had, in a column labeled "issue_type". I can export the entire table via CSV, load it into Excel, and then give it to my boss for an overall review of what the issues were. However, the database has a "hide" column. Its function is that when a row is updated the record is kept, but the same job or call has only one viewable report on the site (the most recently updated one). "hide" is a boolean.
In conclusion, I want to export only the rows where the "hide" column's boolean status is 0, AND to export only the columns "customer" and "issue_type". I seem to be able to do only one or the other, and I have researched for a minimum of 4 hours to find the answer myself and cannot find a syntax that does both at the same time with phpMyAdmin.
I don't want an enormous dump of data that is mostly useless except for issue type and customer - or will I have to manually delete all the rows with hide = 1?
Thanks, anyone - this is my first attempt at a question, so sorry if it's not done correctly for Stack Overflow.
SELECT Customer, Issue_type FROM tickets WHERE hide = 0;
Elaborating on the above for anyone who may be looking for a similar answer: SQL supports the WHERE clause, with which, when properly written, you can filter many of your columns by their associated strings, booleans, and numbers so that they equal what you're looking for. Wildcards, which I found later for other uses, work as well.
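For example, the same kind of WHERE filter can be combined with a wildcard using LIKE (the '%printer%' pattern below is purely a hypothetical example, not a value from the real table):

SELECT Customer, Issue_type
FROM tickets
WHERE hide = 0
  AND Issue_type LIKE '%printer%';   -- wildcard match on the issue text (hypothetical pattern)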
Sorry about the self-answer, but hopefully someone finds this useful.
I have a few tables, as shown below.
Polls

PollId   Question   Option
1        What       1
2        Why        4

Updates

UpdateId   Text
1          Sleep
2          Play
Polls and Updates are just two sample tables (in reality there are more tables: photos, videos, links, etc.). When a user visits his home page (like the Facebook news feed) he must be shown data relevant to him (no such data is included in this example), i.e. I want to select data from all the tables with as few query executions as possible (I want to present a mixture of data: polls, photos, videos, etc.).
Currently, I'm fetching only the IDs and a type (i.e. which table they come from) from all of the tables, and I gather the further data while iterating through this result set (i.e. calling another SQL query from C#).
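The shape of that ID-and-type result, sketched against the two sample tables above (the type labels are made up for illustration), is what a UNION like this would return:

SELECT PollId AS Id, 'poll' AS ItemType FROM Polls
UNION ALL
SELECT UpdateId AS Id, 'update' AS ItemType FROM Updates;
-- ...plus one more SELECT per additional table (photos, videos, links, ...)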
Is there a way to query the data from all the tables at once (OUTER JOIN? UNION?)?
Or, simply:
How can I select different types of entities at once in a single SQL query?
You could write your query so that you have one long select list for everything you want, and it would all come back in one result set, but I suspect that wouldn't work too well, because you might have varying numbers of the different types of items per user.
If you really must have it all in one hit, then you can issue multiple queries in one go and get multiple result sets back. To handle this you can use an ADO.NET DataSet. See this SO example (but not the accepted answer - see Vikram Dibyal's answer, as that gives a very basic overview of what I think you're asking for).
I won't copy and paste the stuff from the linked thread, just head over and take a look.
I am experiencing a problem with mirrored datasets. The situation occurred because the data model was switched a few months ago, and I was only recently assigned to this project, which already had a new application and data model done.
I was tasked with importing all the data from the old MS Access application into the new one, and here's where the error has its source. The old data model was written in such a way that every dataset was also stored as its mirrored counterpart. Imagine a database table like this:
pk | A | B
1 | hello | world
2 | world | hello
I imported the data via a self-made staging process using Excel and VBA, and that worked fine. The staging was necessary because I wanted to create INSERT statements and therefore had to map all the old IDs, names, ... to the new ones.
While testing the application after the import was done, I realized that the GUI showed all datasets twice. (The reason it is shown twice, rather than once and then once more in mirrored form, is the way we fill the ListBox that shows the results.)
I found the reason for that error in the mirrored data and now would like to get rid of it. The first idea I had is rather long and probably over-complicated, which is why I am posting here, in the hope of finding a shorter solution.
So, my idea is as follows and would use solely VBA coding:
1. Fill a recordset with a SELECT * FROM mirroredDataTable.
2. For each record in the recordset from 1.), write a SQL statement and check whether the record count of that statement's result is > 1.
3. If the result count is > 1, write one of the IDs from that result into a new recordset or array.
4. Parse the recordset/array from 3.) again and create a DELETE statement for each ID in there.
5. ???
6. Profit.
Now, I already have an idea for the SQL statement in 2.), but before I begin I'd just like to make sure that there is no "easy" way that I haven't considered yet or have simply overlooked.
Would greatly appreciate any help/info/tips you can provide.
PS: It is NOT an option to redesign the whole data model or anything along those lines (not my decision).
Thanks to @Gord Thompson I was able to solve this issue on a purely SQL basis. See the answer in this subthread for the detailed solution: How to INTERSECT in MS Access?
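(Access has no INTERSECT keyword, so the usual workaround is a self-join or EXISTS on all the compared columns. Purely as an illustration of the idea - not the exact query from the linked answer - a sketch using the example table above could look like this:)

SELECT t1.pk
FROM mirroredDataTable AS t1
INNER JOIN mirroredDataTable AS t2
    ON (t1.A = t2.B AND t1.B = t2.A)
WHERE t1.pk < t2.pk;   -- returns one pk from each mirrored pair, which can then be deleted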