Hi,
I have some tables with a lot of records, and for a report I have to join these tables.
If I try to fetch all the rows I get a timeout error, so I used a paging query in SQL Server 2005 and can get the result page by page.
But I need to know the count of results, or the count of pages, for my query.
On a paged query, if I use count() I get the page size, not the total result count, and if I try count() over all the records I also get a timeout error.
Is there any method that can help find the page count of a query?
Thanks
Normally, page-aware select stored procedures (created, for instance, by the .netTiers CodeSmith template) return multiple result sets. The first result set is one page of data and the second is the total number of records.
It means you must have two SELECT statements in your SP, both with the same WHERE clause, so they apply the same filter over the rows of the query.
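A minimal sketch of such a paging SP (T-SQL that works on SQL Server 2005; the procedure, table, and column names here are assumptions, not taken from your schema):

CREATE PROCEDURE dbo.GetCustomersPaged
    @PageIndex int,                 -- zero-based page number
    @PageSize  int,
    @LastName  nvarchar(50)
AS
BEGIN
    SET NOCOUNT ON;

    -- Result set 1: one page of data.
    WITH numbered AS (
        SELECT ROW_NUMBER() OVER (ORDER BY CustomerId) AS rn,
               CustomerId, LastName, FirstName
        FROM dbo.Customers
        WHERE LastName LIKE @LastName + '%'
    )
    SELECT CustomerId, LastName, FirstName
    FROM numbered
    WHERE rn BETWEEN @PageIndex * @PageSize + 1 AND (@PageIndex + 1) * @PageSize;

    -- Result set 2: total record count, using exactly the same WHERE clause.
    SELECT COUNT(*) AS TotalRecords
    FROM dbo.Customers
    WHERE LastName LIKE @LastName + '%';
END

The client reads the first result set as the page and the second as the total, and the page count is then CEILING(TotalRecords / PageSize) computed on the client side.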
As you may (or may not) already know, SQLite does not provide information about the total number of results from a query. One has to wrap the query in SELECT count(*) FROM (original query); in order to get the row count.
This worked perfectly fine for me until one of my users created a custom SQL function (you can define your own functions in SQLite) that does an INSERT into another, unrelated table. Then he executes this query:
SELECT customFunction() FROM primaryTable WHERE primaryKeyColumnId = 1;
The query always returns exactly 1 row, that is certain. It turns out that customFunction() was called twice (and inserted 2 rows into that other table), because my application ran his query as usual and then ran count(*) on that query as a follow-up.
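To make the failure mode concrete, the two statements the application ended up issuing looked roughly like this (same names as in the example above):

-- 1. The user's query, executed to fetch the rows:
SELECT customFunction() FROM primaryTable WHERE primaryKeyColumnId = 1;

-- 2. The follow-up count wrapper, which re-runs the inner query and therefore
--    triggers customFunction()'s INSERT a second time:
SELECT count(*) FROM (SELECT customFunction() FROM primaryTable WHERE primaryKeyColumnId = 1);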
How should I approach this problem? How can I execute only the original query and still get a row count from SQLite?
I'm using the SQLite (3.13.0) C API.
You either have to remove such function calls from the query, or accept that you cannot get the row count without actually stepping through all the result rows.
In BigQuery I am running a query on tables exported from GA.
I cannot seem to get BigQuery to limit the results. Here is a sample query, quite basic:
SELECT * FROM [1111111.ga_sessions_20140318] LIMIT 20000
The result set returns, but with 7 million+ rows! I have tried this several different ways, e.g. writing out to a table, just returning the result set, using cached results, not using cached results, etc.
No matter which table I try to query, it always returns the entire table.
This is basically the same as the sample query BigQuery gives when clicking on the query table button, except I changed the limit value from 1000 to 20000.
Anyone have any insight?
As noted by the comment on the original question:
"Is it possible that the number of rows shown at the bottom of the
result set returned in big query is my 20000 main object records plus
all the nested records?"
The answer is yes: BigQuery applies the limit to the number of top-level rows, but if nested (repeated) records are involved, those are flattened in the output, so the flattened row count in the response can be far larger than the LIMIT.
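To see this on the GA export tables, a rough check in legacy BigQuery SQL (assuming the standard ga_sessions schema with the repeated hits record; run the two queries separately) is to compare the top-level row count with the flattened one:

-- Top-level rows (one per session):
SELECT COUNT(*) AS sessions FROM [1111111.ga_sessions_20140318]

-- Rows after flattening the repeated "hits" record (one row per session/hit pair),
-- which is roughly what the result grid reports when nested fields are returned:
SELECT COUNT(*) AS flattened_rows FROM FLATTEN([1111111.ga_sessions_20140318], hits)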
I have a scenario where I first run a query to get a total count and then pass that count as a variable to a similar query that fetches the paginated records. So basically I run the full query once, internally building the whole table just to get the count, and then use that count to display the same table 10 rows per page. What options do I have to avoid this sort of duplicate query?
Something like this, in pseudo-SQL:
SELECT COUNT(*) AS cnt FROM big_table;
SELECT * FROM big_table WHERE row_num BETWEEN @count AND @count + 10;
Is there a sensible way to get the COUNT variable in the same query?
I am wondering how Google would handle a search: would it first find all the records, or just fetch the records without tracking the number of pages? Page numbers can't be computed up front, as they depend on the variable sent by the user.
Edit: I have a similar question here https://dba.stackexchange.com/questions/161586/including-count-of-a-result-in-the-main-query
Regarding Google, they are likely to generate only the requested number of results (like 10) and to estimate the count. The estimated count is very imprecise.
You can't have SQL Server count all the results and return only a subset of them in the same query. There are 3 strategies to deal with this:
execute a counting query and a data query
execute an unlimited data query and discard all but ten results on the client
execute an unlimited data query into a temp table whose primary key is the row number. You can then count instantly (read the last row number) and select any subset by row number with a single seek (sketched below)
Counting the data can be significantly cheaper than retrieving it because SQL Server can use different indexes or discard joins.
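A rough sketch of the third option in T-SQL (the table, columns, and variables are illustrative, not from the original question):

-- Materialize the unlimited data query once, numbering the rows in the desired order.
CREATE TABLE #paged (
    rn        bigint   NOT NULL PRIMARY KEY,
    OrderId   int      NOT NULL,
    OrderDate datetime NOT NULL
);

INSERT INTO #paged (rn, OrderId, OrderDate)
SELECT ROW_NUMBER() OVER (ORDER BY o.OrderDate), o.OrderId, o.OrderDate
FROM dbo.Orders o
WHERE o.CustomerId = @CustomerId;

-- Total count: read the highest row number, a single seek on the primary key.
SELECT MAX(rn) AS TotalRows FROM #paged;

-- Any page: a single range seek on the primary key.
SELECT OrderId, OrderDate
FROM #paged
WHERE rn BETWEEN @Start AND @Start + 9
ORDER BY rn;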
I have a LINQ query which is working fine, but I have noticed that when I use the Take keyword with that query it does not return the same top selected records.
When I looked at the queries in SQL Profiler they are exactly the same except for the TOP keyword, so what may be the problem? One more thing I have noticed: when I pass a number greater than the number of records in the database, it works fine with Take as well.
I am attaching the query and the records it returns,
and when I apply TOP 10 to this query it shows these records.
What could be the problem? I'm using SQL Server 2008 R2.
Using the TOP keyword without ordering does not guarantee a repeatable result set.
From here
If a SELECT statement that includes TOP also has an ORDER BY clause,
the rows to be returned are selected from the ordered result set. The
whole result set is built in the specified order and the top n rows in
the ordered result set are returned.
Try forcing the query to order the records by using ORDER BY (or orderby in LINQ).
The default ordering may differ; try explicitly ordering by a column.
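In T-SQL terms the difference is roughly this (table and column names assumed):

-- Without ORDER BY, SQL Server is free to return any 10 rows, so the TOP query and
-- the unrestricted query may not agree on which rows come first.
SELECT TOP (10) * FROM dbo.Orders;

-- With an explicit (ideally unique) ordering column, the same top 10 come back every time.
SELECT TOP (10) * FROM dbo.Orders ORDER BY OrderId;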
I have an application where I build a big SQL query dynamically for SQL Server 2008. The query is based on various search criteria the user might supply, such as search by last name, first name, SSN, etc.
The requirement is that if the user gives a condition for which the resulting query would return a lot of rows (the maximum, N rows, is configurable), the application must instead send a message back telling the user to refine the search query, because the existing query would return too many rows.
I would not want to bring back, say, 5000 rows to the client and then discard that data just to show the user an error. What is an efficient way to tackle this issue?
Why not just show the first N rows AND the message? Limit the rows returned to N+1, and if the count of returned rows is > N then show the message :)
If you just want to check how many rows WOULD be returned by a query, then SELECT count(id) (or some column name) instead of SELECT *.
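A rough sketch of the N+1 idea for SQL Server 2008 (the table, columns, and @MaxRows are placeholders for whatever the dynamic query builder produces):

DECLARE @MaxRows int = 5000;   -- the configurable N

-- Fetch at most N+1 rows; getting N+1 back means the real result would exceed N.
SELECT TOP (@MaxRows + 1) PersonId, LastName, FirstName
FROM dbo.Persons
WHERE LastName LIKE @LastName + '%'   -- whatever the user's criteria produce
ORDER BY LastName;

-- In the application: if more than @MaxRows rows came back, show the
-- "please refine your search" message (optionally along with the first N rows).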