I frequently do a static analysis of SQL databases, during which I have the luxury of nobody being able to change the data except me.
However, I have not found a way to 'tell' this to SQL in order to prevent running the same query multiple times.
Here is what I would like to do. First, I start with a complicated query that has a very small output:
SELECT * FROM MYTABLE WHERE MYPROPERTY = 1234
Then I run a simple query from the same window (mostly using SQL Server Management Studio, if that is relevant):
SELECT 1
Now I suddenly realize that I forgot to save the results from my first complicated (slow) query.
As I know the underlying data did not change (and even if it did), I would like to look one step back and simply get the result. At the moment, however, I don't know any trick to do this, so I have to run the entire query again.
So the question summary is: how can I automatically store, and later retrieve, the results of recently executed queries?
I am particularly interested in simple SELECT queries, and would be happy to allocate, say, 100 MB of memory for automated result storage. I would prefer a solution that works in SQL Server Management Studio with T-SQL, but other SQL solutions are also welcome.
EDIT: I am not looking for a way to manually prevent this from happening. In the cases where I can anticipate the problem it will not happen.
This can't be done in Microsoft SQL Server. SQL Server does not cache results; instead, it caches the data pages that were accessed by your query. This should make the query run a lot faster the second time around, so re-running it won't be as painful.
Other databases, such as Oracle and MySQL, do have a query caching mechanism that will let you retrieve the results directly the second time around.
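For example, Oracle (11g and later) exposes this through the RESULT_CACHE hint; a minimal sketch, assuming the instance's result cache is enabled, reusing the query above:
SELECT /*+ RESULT_CACHE */ *
FROM MYTABLE
WHERE MYPROPERTY = 1234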
I run into this frequently, I often just throw the results of longer-running queries into a temp table:
SELECT *
INTO #results1
FROM MYTABLE WHERE MYPROPERTY = 1234
SELECT *
FROM #results1
If the query is very long-running I might use a 'real' table. It's a good way to save on re-run time.
Downside is that it adds to your query.
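If you go the 'real' table route, a minimal sketch (the table name is hypothetical; remember to drop it when done):
-- persist the results in a permanent table so they survive closing the window
SELECT *
INTO dbo.results_myproperty_1234
FROM MYTABLE
WHERE MYPROPERTY = 1234

-- later, even from a new window or session:
SELECT *
FROM dbo.results_myproperty_1234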
You can also send query results to a file in SSMS, info on formatting the output is here: SSMS Results to File
The easiest way to do this is to run each query in its own SSMS window; the results will stay there until you close the window or run out of memory. Beyond that, I am not sure there is a way to accomplish what you want.
Once you close the SSMS window, I don't believe there is a way to get back 'cached' results.
This isn't a technical answer to your question. Having written queries and looking at results for many years, I am in the habit of saving the results in Excel, regardless of the database/query tool I'm using.
The format in Excel is rather methodical:
Each worksheet has the date. (Called something like "1 Jul".)
Each spreadsheet (workbook) contains one month. (Typically named with the month, like "work-201307".)
In the "B" column I copy and paste the query.
Underneath, in the "C" column, I copy and paste the results.
The next query starts a few lines below the previous one, and so on.
I put the queries in the "B" column so that, from the empty "A" column, keyboard navigation (e.g. Ctrl+arrow) jumps straight to the first row. I put the results in the "C" column so that the same navigation in the "B" column moves between queries.
I mostly do this so I can go back and see the work I did many months ago. For instance, someone sends an email from February and says "do this again". I can go back to the February spreadsheet, go to the day it was created, and see what I was doing at that time.
In your question, though, I realize that I now instinctively solve this problem, because "right-click on the grid, copy with column headers, alt-tab to Excel, alt-V" is a behavior that comes quite naturally to me.
I was going to suggest running each query from a script, with a counter (stored in a table) incremented each time the query executes (i.e. i++), and storing each result set in a temp table called "tmpTable" + i, but it sounds very complicated to manage. Am I right?
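For what it's worth, a rough T-SQL sketch of that idea (all object names are hypothetical; dynamic SQL is needed because the destination table name changes on every run):
-- one-row counter table, created once:
--   CREATE TABLE dbo.QueryCounter (counter int NOT NULL);
--   INSERT INTO dbo.QueryCounter VALUES (0);
DECLARE @i int;
UPDATE dbo.QueryCounter SET @i = counter = counter + 1;  -- bump and read atomically

DECLARE @sql nvarchar(max) =
    N'SELECT * INTO dbo.tmpTable' + CAST(@i AS nvarchar(10)) +
    N' FROM MYTABLE WHERE MYPROPERTY = 1234;';
EXEC sys.sp_executesql @sql;  -- a permanent (not #temp) table, so it outlives the batch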
Then I googled and found this tool pack. I didn't try it, but you could take a look:
http://www.ssmstoolspack.com/Features
Hope it helps.
EDIT: added the following link. There's an option to output to an XML file, and they mention SQL Server Integration Services as a possible solution too.
http://michaeljswart.com/2012/03/sending-query-results-to-others/#method5
SECOND EDIT: There's this DBMS-independent tool too; it sounds interesting:
http://www.sql-workbench.net/
I am not sure this is what you want, but check my answer anyway.
In SQL Server Management Studio you can open multiple tabs for executing queries. Open a new tab for each query; the result of each executed query then remains available under its tab.
After executing one query in a tab, don't reuse that tab for a new query; open a new tab for that job.
Have you considered using some kind of offline SQL client, such as Excel? Specifically, Excel will retrieve the results into the spreadsheet (using the Data ribbon/menus), where they are stored pretty much permanently as results. It will prompt you to refresh when necessary, or you can do it on demand.
As to whether this can be done in T-SQL or other databases: it depends on the database and its results cache, and even then caching is an option the query processor may use, not a guarantee for any individual query.
I have this report and need to add totals for each person (the area circled in red in the screenshots).
[screenshot: existing report]
[screenshot: new report]
I cannot change the existing report, so I export data from MS SQL to MS Access and create a new report there. I got it working for one employee, but I am having trouble with a query that would work for multiple employees.
This query extracts the data used as input:
SELECT [TIME].[RCD_NUM], [TIME].[EMP_ID], [TIME].[PPERIOD], [TIME].[PRUN], [TIME].[TDATE], [TIME].[PC], [TIME].[RATE], [TIME].[HOURS], [TIME].[AMOUNT], [TIME].[JOB_ID], [TIME].[UPDATED], [TIME].[UPDATED_BY], [TIME].[LOG_DATE], [TIME].[ORIGINAL_REC_NUM]
FROM [TIME]
WHERE ((([TIME].[EMP_ID])=376) And (([TIME].[TDATE])<=#12/31/2006# And ([TIME].[TDATE])>=#1/1/2006#) And (([TIME].[PC])<599));
This query populates the report:
SELECT *
FROM TIME1
WHERE RCD_NUM = (SELECT Max(RCD_NUM) FROM [TIME1] UQ WHERE UQ.PPERIOD = [TIME1].PPERIOD AND UQ.PC = [TIME1].PC);
The problem is that if I remove EMP_ID from the first query, like this:
SELECT [TIME].[RCD_NUM], [TIME].[EMP_ID], [TIME].[PPERIOD], [TIME].[PRUN], [TIME].[TDATE], [TIME].[PC], [TIME].[RATE], [TIME].[HOURS], [TIME].[AMOUNT], [TIME].[JOB_ID], [TIME].[UPDATED], [TIME].[UPDATED_BY], [TIME].[LOG_DATE], [TIME].[ORIGINAL_REC_NUM]
FROM [TIME]
WHERE ((([TIME].[TDATE])<=#12/31/2006# And ([TIME].[TDATE])>=#1/1/2006#) And (([TIME].[PC])<599));
then the second query doesn't work, and MS Access freezes when running it.
Any help or ideas, please?
Caveat: I won't pretend to know the precise cause of the problem, but I have had to repeatedly refactor queries in Access to get them working even though the original SQL statements are completely valid in regards to syntax and logic. Sometimes I've had to convolute a sequence of queries just to avoid bugs in Access. Access is often rather dumb and will simply (re)execute queries and subqueries exactly as given without optimization. At other times Access will attempt to combine queries by performing some internal optimizations, but sometimes those introduce frustrating bugs. Something as simple as a name change or column reordering can be the difference between a functioning query and one that crashes or freezes Access.
First consider:
Can you leave the data on SQL Server and link to the results from Access (rather than exporting/importing them into Access)? Even if you need or prefer Access for creating the actual report, you could use all the power of SQL Server for querying the data; it is likely less buggy and more efficient.
Common best practice is to create SQL Server stored procedures that return just what data you need in Access. A pass-through query is created in Access to retrieve the data, but all data operations are performed on the server.
Perhaps this is just a performance issue where limiting the set by [EMP_ID] selects a small subset, but the full table is large enough to "freeze" Access.
How long have you let Access remain frozen before killing the process? Be patient... like many, many minutes (or hours). Start it in the morning and check after lunch. :) It might eventually return a result set. This does not imply it is tolerable or that there is no other solution, but it can be useful to know if it eventually returns data or not.
How many possible records are there?
Are the imported data properly indexed? Add indexes to all key fields and those used in WHERE clauses (a sketch follows this list).
Is the database located on a network share or is it local? Try copying the database to a local drive.
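For the indexing point above, a minimal sketch in Access DDL (index names are hypothetical; the fields are the ones this report filters and joins on):
CREATE INDEX idx_time1_pperiod_pc ON TIME1 (PPERIOD, PC);
CREATE INDEX idx_time1_tdate ON TIME1 (TDATE);
CREATE INDEX idx_time1_emp_id ON TIME1 (EMP_ID);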
Other hints:
Try the BETWEEN operator for dates in the WHERE clause (a sketch follows the join example below).
Try refactoring the "second" query by performing a join in the FROM clause rather than the WHERE clause. In doing this, you may also want to save the subquery as a named query (just as [TIME1] is saved). Whether or not a query is saved or embedded in another statement CAN change the behavior of Access (see caveat) even though the results should be identical.
Here's a version with the embedded aggregate query. Notice how all column references are qualified with their source; some of the original query's columns did not have a source alias prefixing the column name. Remember the caveat: such picky details can affect Access behavior.
SELECT TIME1.*
FROM TIME1 INNER JOIN
(SELECT UQ.PPERIOD, UQ.PC, Max(UQ.RCD_NUM) As Max_RCD_NUM
FROM [TIME1] UQ
GROUP BY UQ.PPERIOD, UQ.PC) As TIMEAGG
ON (TIME1.PPERIOD = TIMEAGG.PPERIOD) AND (TIME1.PC = TIMEAGG.PC)
AND (TIME1.RCD_NUM = TIMEAGG.Max_RCD_NUM)
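As for the BETWEEN hint, a minimal sketch of the first query's date filter rewritten; the results should be identical, but it gives Access a simpler predicate:
SELECT [TIME].*
FROM [TIME]
WHERE [TIME].[TDATE] BETWEEN #1/1/2006# AND #12/31/2006#
AND [TIME].[PC] < 599;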
I have a Microsoft Access 2010 form with dropdown boxes and a checkbox which represent certain parameters. I need to run a query with conditions based on these parameters. It should also be possible to select no criteria in the dropdown boxes and checkbox in order to pull all data.
I have two working ways of implementing this:
I build a query with IIf statements in the WHERE clause, nesting statements until I have accounted for every combination of criteria. I reference the criteria in the SQL by using, for example, Forms!frmMyFrm!checkbox1, or by using a function FormFieldValue(formName, fieldName) that returns the value of a control given the form and control names (this is because of previous issues). I set this query to run on the press of the form's button.
I set a VBA sub to run on the press of the button. It checks the conditions and sets the query SQL to a predetermined SQL string based on the control criteria (referenced the same way as in the previous method). This also involves many If...Else statements, but it is a little easier to read than a giant query.
What is the preferred method? Which is more efficient?
I don't believe you would find one way is more efficient over the other, at least not noticeably. For the most part it is simply personal preference.
I generally use VBA to check the value of each dropdown/checkbox and build pieces of the SQL query, then put them together at the end. The issue you may run into with this method, though, is that with a large number of dropdowns and checkboxes the code is easy to get "lost" in.
If running time is key, though, you could always use some of the tips in "How do you test running time of VBA code?" to see which way is faster.
After a lot of experimentation, and a bit of new information indicating that a pre-built query is faster than SQL compiled in VBA, the most efficient and clear solution in the context of Microsoft Access is to build and save a number of dependent queries beforehand.
Essentially, build a chain of queries, each with an IIf dependent on a different criterion. Then you only need to run the final query. The only case where you would have to incorporate a VBA If...Else is if you need to query something more complicated than SELECT...WHERE(IIf(...)).
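A minimal sketch of one such saved query, reusing the form reference from the question (the table and the cboDept control are hypothetical):
SELECT *
FROM tblData
WHERE IIf([Forms]![frmMyFrm]![checkbox1], [Active] = True, True)
AND IIf([Forms]![frmMyFrm]![cboDept] Is Null, True, [Dept] = [Forms]![frmMyFrm]![cboDept]);
Each criterion collapses to True when its control is unset, so leaving everything blank pulls all data.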
This has a few advantages:
The SQL is already compiled in the saved query, speeding things up.
No more getting lost in code:
There is no giant, nearly-impossible-to-edit query with way too many IIfs.
The minimal VBA code is even easier to follow.
At least for me, who's not an expert in SQL, it's convenient that I can often use the MS Access visual query builder for each part.
I am changing the command text for a dataset inside the .rdl file:
I would like to know how I can update the resulting fields that are returned by the SELECT statement.
I know that these fields must be automatically generated, so I was wondering if it's possible to update them right after editing the SQL code inline?
Usually when someone wants to look at the data in the command text, they want it for reference to an end user (from what I have seen). You may want to amend it, but ultimately with reporting your first question should be: "What am I doing this for?" If your goal is dynamic creation at runtime, then I would avoid this and offer a few other suggestions:
Make it a stored procedure. If you have the know-how in SQL Server, a stored procedure is a convenient and fast way to get what you want, and you can optimize it with your SQL-fu to get good results. The downside is that if you work with multiple environments, you have to deploy the T-SQL code as well as the RDL file.
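A minimal sketch of that approach, mirroring the example further down (procedure, table, and column names are hypothetical):
CREATE PROCEDURE dbo.GetThings
    @Start datetime
AS
BEGIN
    SET NOCOUNT ON;
    -- return only what the report needs; the RDL dataset then just calls this proc
    SELECT thing
    FROM dbo.SomeTable
    WHERE thing >= @Start;
END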
Use an expression to build the dataset at runtime. In cases where I have been told that the query itself was not properly optimized, other developers have mentioned doing this. I myself do not always see the advantage of this versus just having your predicate construction work well with good indexing on the source engine. Regardless, you can build your dataset at runtime. It would be similar to hitting 'fx' next to the text and then putting in something like this (assuming you have a parameter named @Start):
="Select thing
from table
Where >= " & Parameters!Start.Value
Again I have not really seen if this is really that much faster than:
SELECT thing
FROM table
WHERE thing >= @Start
But it is there if you just want to build it dynamically.
You can try to build your expression dynamically with parameters as PART of the SELECT statement. SSRS is all about 'expressions' and what you can do with them. Once you jump in and learn how they apply to everything, you can go nuts, so to speak, on using them. A general rule, though: the more of them you use and rely on, the slower your reports will become.
I hope some of this helps. I would first ask whether the dynamic behavior is needed because the report must be event-driven, or whether it is performance-related.
I have a relatively simple select statement in a VB6 program that I have to maintain. (Suppress your natural tendency to shudder; I inherited the thing, I didn't write it.)
The statement is straightforward (reformatted for clarity):
select distinct
b.ip_address
from
code_table a,
location b
where
a.code_item = b.which_id and
a.location_type_code = '15' and
a.code_status = 'R'
The query returns a list of IP addresses from the database. The key column in question is code_status. Some time ago, we realized that one of the IP addresses was no longer valid, so we changed its status to 'I' (invalid) to exclude it from the query's results.
When you execute the query above in SQL*Plus, or in SQL Developer, everything is fine. But when you execute it from VB6, the check against code_status is ignored, and the invalid IP address appears in the result set.
My first guess was that the results were cached somewhere. But, not being an Oracle expert, I have no idea where to look.
This is ancient VB6 code. The SQL is embedded in the application. At the moment, I don't have time to rewrite it as a stored procedure. (I will some day, given the chance.) But, I need to know what would cause this disparity in behavior and how to eliminate it. If it's happening here, it's likely happening somewhere else.
If anyone can suggest a good place to look, I'd be very appreciative.
Some random ideas:
Are you sure you committed the changes that invalidate the IP address? Can someone else (using another DB connection/user) see the changed code_status?
Are you sure that the results are not modified after they are returned from the database?
Are you sure that you are using the "same" database connection in SQLPlus as in the code (database, user etc.)?
Are you sure that that is indeed the SQL sent to the database? (You may check by tracing on the Oracle server or by debugging the VB code). Reformatting may have changed "something".
Off the top of my head I can't think of any "caching" that might "re-insert" the unwanted IP. Hope something from the above gives you some ideas on where to look.
In addition to the suggestions that IronGoofy has made, have you tried swapping round the last two clauses?
where
a.code_item = b.which_id and
a.code_status = 'R' and
a.location_type_code = '15'
If you get a different set of results, then this might point to some sort of wrangling going on that results in dodgy SQL actually being sent to the database.
There are Oracle bugs that result in incorrect answers. This surely isn't one of those times. Usually they involve some bizarre combination of views and functions and dblinks and lunar phases...
It's not cached anywhere. Oracle doesn't cache results until version 11, and even then it knows to invalidate the cache when the answer may change.
I would guess this is a data issue. You have a DISTINCT on the IP address in the query, why? If there's no unique constraint, there may be more than one copy of your IP address and you only fixed one of them.
And your Code_status is in a completely different table from your IP addresses. You set the status to "I" in the code table and you get the list of IPs from the Location table.
Stop thinking zebras and start thinking horses. This is almost certainly just data you do not fully understand.
Run this
select
a.location_type_code,
a.code_status
from
code_table a,
location b
where
a.code_item = b.which_id and
b.ip_address = <the one you think you fixed>
I bet you get one row with an 'I' and another row with an 'R'
I'd suggest you have a look at the V$SQL system view to confirm that the query you believe the VB6 code is running is actually the query it is running.
Something along the lines of
select sql_text, fetches
from v$sql
where sql_text like '%ip_address%'
Verify that the SQL_TEXT is the one you expect and that the FETCHES count goes up as you execute the code.
I have 150+ SQL queries in separate text files that I need to analyze (just the actual SQL code, not the data results) in order to identify all column names and table names used, preferably with the number of times each column and table makes an appearance. Writing a brand-new SQL parsing program is trickier than it seems, with nested SELECT statements and the like.
There has to be a program, or code out there that does this (or something close to this), but I have not found it.
I actually ended up using a tool called SQL Pretty Printer. You can purchase a desktop version, but I just used the free online application: copy the query into the text box, set the output to "List DB Object", and click the Format SQL button.
It worked great on around 150 different (and complex) SQL queries.
How about using the Execution Plan report in MS SQL Server? You can save it to an XML file, which can then be parsed.
You may want to look into something like this:
JSqlParser
which uses JavaCC to parse and return the query string as an object graph. I've never used it, so I can't vouch for its quality.
If your application needs to do it, and has access to a database that has the tables etc., you could run something like:
SELECT TOP 0 * FROM MY_TABLE
Using ADO.NET. This would give you a DataTable instance for which you could query the columns and their attributes.
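On newer SQL Server versions (2012 and later) there is also a server-side way to describe a query's result columns without executing it, which may help for the simpler queries; a sketch:
-- lists the columns (names, types, nullability) the query would return
EXEC sys.sp_describe_first_result_set
    @tsql = N'SELECT * FROM MY_TABLE';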
Consider ANTLR: write a grammar and follow the steps given on the ANTLR site, and eventually you will get an AST (abstract syntax tree). For a given query, you can then traverse the tree and collect every table and column present in it.
In DB2 you can append your query with something such as the following; note that 1 is the minimum you can specify, and it will throw an error if you try to specify 0:
FETCH FIRST 1 ROW ONLY