Display an endless amount of data in SQL [closed]

If I have an endless amount of data, can I display all of it in SQL?
I know there is obviously SELECT *, but then it will never complete.
Is there a command for this?

You can use TOP to select a subset of the total records:
SELECT TOP 100 * FROM table
This selects the top 100 records.
By using an ORDER BY clause, you can specify the basis on which the subset of records is returned.
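For example, a minimal sketch, assuming a table named Orders with an OrderDate column (both names are placeholders, not from the question):
-- Return the 100 most recent rows; Orders and OrderDate are assumed names
SELECT TOP 100 *
FROM Orders
ORDER BY OrderDate DESC;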
Now if you are asking about the limits of the SQL Server database management system, then please see this link - Maximum Capacity Specifications for SQL Server.
E.g. the maximum number of databases per instance of SQL Server (both 32-bit and 64-bit) is 32,767.

Usually, you will prefer to use some kind of paging, since you cannot actually show an "endless amount of data" in a user-friendly way in an application.
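For example, a minimal paging sketch, assuming SQL Server 2012 or later and a placeholder Orders table keyed by Id (neither name comes from the question):
-- Fetch page 3 at 50 rows per page; OFFSET/FETCH requires SQL Server 2012+
SELECT *
FROM Orders
ORDER BY Id
OFFSET 100 ROWS FETCH NEXT 50 ROWS ONLY;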

Related

How do you use the Max and Min expressions in MS Access SQL? [closed]

[Attached images: CrimeData table for 12 months; crime that took place in the easternmost location]
I need to find the following:
Q.7 What type of Crime takes place in the …
(a) Easternmost ………………..
(b) Westernmost ………………..
(c) Northernmost………………..
(d) Southernmost………………..
I tried to find the crime that took place in the easternmost location using the following SQL code:
SELECT Max(CrimeData.Easting) AS MaxOfEasting, CrimeData.Type
FROM CrimeData
GROUP BY CrimeData.Type;
but I got more than one crime and also other Easting values. Can you please tell me if there is a better way to find the solution?
Please see the attached pictures :)
Rather than using Max/Min, have a look at the TOP keyword in SQL. Some SQL might look like:
SELECT TOP 1 CD.*
FROM CrimeData CD
ORDER BY CD.Easting DESC;
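The other three directions work the same way. Here is a sketch; note that Northing is only an assumed column name for the north/south coordinate, guessed from the Easting column in the question:
-- Westernmost: smallest Easting
SELECT TOP 1 CD.*
FROM CrimeData CD
ORDER BY CD.Easting ASC;
-- Northernmost: largest Northing (assumed column name)
SELECT TOP 1 CD.*
FROM CrimeData CD
ORDER BY CD.Northing DESC;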

Resolve performance overhead for Oracle order by query [closed]

I have a query that fetches over a million records. With a regular select, I don't see any issue and it takes under 1 second to return the records.
My requirement is to get the top 10 rows after applying an order by clause. It takes around 1 minute even with the necessary indexes on the tables involved.
Could someone recommend a solution to get the top 10 rows after applying the sort?
Have you tried:
SELECT *
FROM (
    /* Your query goes here, with the ORDER BY part */
)
WHERE ROWNUM <= 10;
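If you are on Oracle 12c or later, the row-limiting clause can express the same top-N directly; my_table and created_at are placeholder names, not from the question:
-- Oracle 12c+ row-limiting clause; my_table and created_at are assumed names
SELECT *
FROM my_table
ORDER BY created_at DESC
FETCH FIRST 10 ROWS ONLY;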

Find Record in Recordset SQL Server [closed]

I am using classic ASP to produce a website and am using the SQL statement below; the database is SQL Server 2000.
SELECT * from dbo.PDBproductview where product LIKE '" & partnumbersearch &"%';"
However, we now also require forward and back buttons to move forwards and backwards by part number. I am not sure how to achieve this. My initial thought was to run another SQL query and somehow get the position of the part in the part table (product), then pick out the part before and after it. Is it possible to do this?
You mention SQL Server 2000, which does not support them, but if you are able to use SQL Server 2012 or higher, the LEAD and LAG functions will allow you to achieve what you want to do.
Here is a pretty good article you can use as a guide. Essentially it looks something like this:
SELECT LAG(p.FirstName)  OVER (ORDER BY p.BusinessEntityID) AS PreviousValue,
       p.FirstName,
       LEAD(p.FirstName) OVER (ORDER BY p.BusinessEntityID) AS NextValue
FROM Person.Person p
With the LEAD and LAG functions, you can indicate how far back or forward you want to look.
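Applied to the view from the question, and assuming product is the column you want to step through, a sketch might look like this (again, SQL Server 2012+ only):
-- Previous and next part number for each row; assumes dbo.PDBproductview exposes a product column
SELECT LAG(product)  OVER (ORDER BY product) AS PreviousPart,
       product,
       LEAD(product) OVER (ORDER BY product) AS NextPart
FROM dbo.PDBproductview;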

Replaces Nulls with 0 without using a function? [closed]

I want to replace null values in a table, but without using a function such as ISNULL, because the table holds a large amount of data and the function is slowing the query down.
Everywhere online says ISNULL or COALESCE, but is there any way to do it without using such functions?
I need this because of the expression
OPENING_OTHER + OPENING_FEE + OPENING_INT AS TOTAL_BALANCE
If one value is NULL then the total balance is always NULL.
No; how can you do something without doing anything?
You could permanently replace the NULL values with 0, but that would waste a lot of storage.
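If you did want to take that route, a one-off update per column is a rough sketch of it (YourTable is a placeholder name; the column names come from the question):
-- One-off cleanup; run once per nullable column (YourTable is an assumed name)
UPDATE YourTable SET OPENING_OTHER = 0 WHERE OPENING_OTHER IS NULL;
UPDATE YourTable SET OPENING_FEE   = 0 WHERE OPENING_FEE   IS NULL;
UPDATE YourTable SET OPENING_INT   = 0 WHERE OPENING_INT   IS NULL;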
Transforming your data in a SELECT statement is not terribly costly if you use built-in functions designed for that purpose.
The use of COALESCE will be the quickest, most efficient and expedient way to do this:
coalesce(OPENING_OTHER, 0) + coalesce(OPENING_FEE, 0) +
coalesce(OPENING_INT, 0) AS TOTAL_BALANCE
In fact, I'd suggest the actual cost of COALESCE is so small that it's hard to measure.

Write a query that extracts the number of requests for various fields [closed]

I want to write a query which would give two columns: 1) Type of Query, 2) Count.
The result set should have the following structure: the values in the first column are predefined, and the count has to be calculated. I want to check the Request column of the source table and look for a specific pattern; if that pattern is found, increase the count.
For example:
If the word "greenhopper" is found in the Request column, then that row belongs to the type GREENHOPPER, or if the word "gadgets" is found, it is of the type DASHBOARD, and so on.
So I want to analyze the usage of the various categories by using the log table. Finally, I can get the amount of usage and build a pie chart out of it.
SELECT 'Greenhopper' AS TypeOfQuery, COUNT(*) AS Cnt
FROM YourTable
WHERE Request LIKE '%Greenhopper%'
UNION ALL
SELECT 'Dashboard', COUNT(*)
FROM YourTable
WHERE Request LIKE '%gadgets%'
-- And so forth
You said they were predefined, right? So you'd have ~10 different statements UNION'd together.
WITH Requests AS
(
    SELECT
        CASE
            WHEN Request LIKE '%Greenhopper%' THEN 'GreenHopper'
            WHEN Request LIKE '%gadgets%' THEN 'Gadgets'
            -- and so on
            ELSE 'Misc'
        END AS RequestType
    FROM YourTable
)
SELECT
    RequestType,
    COUNT(*) AS RequestCount
FROM Requests
GROUP BY RequestType;
I have no data to test with, but I believe this approach will perform better because the table is scanned fewer times. Performance is never going to be ideal, though, because of the LIKE with a leading wildcard, which prevents index seeks.
Further explanation of why LIKE does not perform well here.
Having looked at the question again, you may be able to improve performance further by changing the search string so it only has a wildcard on the right,
e.g. LIKE 'GET /rest/gadget%' and so on.