What you can see in the image above is that I have a table: when I query the max value of a field from it, I get different results depending on a WHERE clause that the rest of the queries seem to rule out as irrelevant.
Back end is MSDE 2000, front end is application written in VB.NET 2008, verification performed using SSMS 2008R2 attached to MSDE instance over VPN.
It is a closed system as far as application development goes; however, if I could correct whatever is causing this, I believe both the DB and the application would resume normal operation.
The problem this causes is that when the application requests Max([record_index]) + 1 where [station_id] = 10, the value comes up as a record that already exists in that table, and the insert fails because of a unique constraint.
Rebuilding the PK index solved the problem and makes the Max([record_index]) queries above, with and without the WHERE clause, return the same numbers, as they should. So at this point index corruption is the only logical answer. The DB engine is 12 years old, and this is the only time it has ever happened to us; I guess I will just have to accept it.
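In case it helps anyone else who hits this, the check and rebuild amounted to something like the following; the table and index names are placeholders for my real ones, and DBCC DBREINDEX is the old SQL 2000 / MSDE syntax:
-- Check the table and its indexes for corruption
DBCC CHECKTABLE ('dbo.MyTable');
-- Rebuild the primary key index (pass '' as the index name to rebuild every index on the table)
DBCC DBREINDEX ('dbo.MyTable', 'PK_MyTable');
-- Afterwards the value the application asks for is consistent again
SELECT MAX([record_index]) + 1 FROM dbo.MyTable WHERE [station_id] = 10;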
I have this report and need to add totals for each person (the red circle)
(screenshots: existing report, new report)
I cannot change the existing report, so I export the data from MS SQL to MS Access and create a new report there. I got it working for one employee but have trouble with a query that would work for multiple employees.
This query extracts the data I use as input:
SELECT [TIME].[RCD_NUM], [TIME].[EMP_ID], [TIME].[PPERIOD], [TIME].[PRUN], [TIME].[TDATE], [TIME].[PC], [TIME].[RATE], [TIME].[HOURS], [TIME].[AMOUNT], [TIME].[JOB_ID], [TIME].[UPDATED], [TIME].[UPDATED_BY], [TIME].[LOG_DATE], [TIME].[ORIGINAL_REC_NUM]
FROM [TIME]
WHERE ((([TIME].[EMP_ID])=376) And (([TIME].[TDATE])<=#12/31/2006# And ([TIME].[TDATE])>=#1/1/2006#) And (([TIME].[PC])<599));
This query populates the report:
SELECT *
FROM TIME1
WHERE RCD_NUM = (SELECT Max(RCD_NUM) FROM [TIME1] UQ WHERE UQ.PPERIOD = [TIME1].PPERIOD AND UQ.PC = [TIME1].PC);
The problem is that if I remove EMP_ID from the first query, like this:
SELECT [TIME].[RCD_NUM], [TIME].[EMP_ID], [TIME].[PPERIOD], [TIME].[PRUN], [TIME].[TDATE], [TIME].[PC], [TIME].[RATE], [TIME].[HOURS], [TIME].[AMOUNT], [TIME].[JOB_ID], [TIME].[UPDATED], [TIME].[UPDATED_BY], [TIME].[LOG_DATE], [TIME].[ORIGINAL_REC_NUM]
FROM [TIME]
WHERE ((([TIME].[TDATE])<=#12/31/2006# And ([TIME].[TDATE])>=#1/1/2006#) And (([TIME].[PC])<599));
then the second query doesn't work and MS Access freezes when running it.
Any help or ideas, please?
Caveat: I won't pretend to know the precise cause of the problem, but I have had to repeatedly refactor queries in Access to get them working even though the original SQL statements are completely valid in regards to syntax and logic. Sometimes I've had to convolute a sequence of queries just to avoid bugs in Access. Access is often rather dumb and will simply (re)execute queries and subqueries exactly as given without optimization. At other times Access will attempt to combine queries by performing some internal optimizations, but sometimes those introduce frustrating bugs. Something as simple as a name change or column reordering can be the difference between a functioning query and one that crashes or freezes Access.
First consider:
Can you leave the data on SQL Server and link to the results in Access (rather than export/importing it into Access)? Even if you need or prefer to use Access for creating the actual report, you could use all the power of SQL Server for querying the data--it is likely less buggy and more efficient.
Common best practice is to create SQL Server stored procedures that return just what data you need in Access. A pass-through query is created in Access to retrieve the data, but all data operations are performed on the server.
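For example, a rough sketch of such a procedure for this data; the procedure name and parameters are placeholders and the column list is trimmed:
CREATE PROCEDURE dbo.GetTimeForYear
    @EmpID    int = NULL,   -- pass NULL to return all employees
    @FromDate datetime,
    @ToDate   datetime
AS
BEGIN
    SET NOCOUNT ON;
    SELECT RCD_NUM, EMP_ID, PPERIOD, PRUN, TDATE, PC, RATE, HOURS, AMOUNT
    FROM dbo.[TIME]
    WHERE (@EmpID IS NULL OR EMP_ID = @EmpID)
      AND TDATE BETWEEN @FromDate AND @ToDate
      AND PC < 599;
END;
The pass-through query in Access would then contain nothing but something like EXEC dbo.GetTimeForYear NULL, '2006-01-01', '2006-12-31'.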
Perhaps this is just a performance issue where limiting the set by [EMP_ID] selects a small subset, but the full table is large enough to "freeze" Access.
How long have you let Access remain frozen before killing the process? Be patient... like many, many minutes (or hours). Start it in the morning and check after lunch. :) It might eventually return a result set. This does not imply it is tolerable or that there is no other solution, but it can be useful to know if it eventually returns data or not.
How many possible records are there?
Are the imported data properly indexed? Add indexes to all key fields and those which are used in WHERE clauses.
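For example, in Access SQL an index covering the columns the aggregate subquery filters and groups on could look like this (the index name is just a suggestion):
CREATE INDEX idx_TIME1_PPERIOD_PC ON TIME1 (PPERIOD, PC, RCD_NUM);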
Is the database located on a network share or is it local? Try copying the database to a local drive.
Other hints:
Try the BETWEEN operator for dates in the WHERE clause.
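For example, the date filter from the first query could be rewritten like this; the column list is shortened to * just to keep the sketch brief:
SELECT [TIME].*
FROM [TIME]
WHERE [TIME].[TDATE] BETWEEN #1/1/2006# AND #12/31/2006#
AND [TIME].[PC] < 599;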
Try refactoring the "second" query by performing a join in the FROM clause rather than the WHERE clause. In doing this, you may also want to save the subquery as a named query (just as [TIME1] is saved). Whether or not a query is saved or embedded in another statement CAN change the behavior of Access (see caveat) even though the results should be identical.
Here's a version with the embedded aggregate query. Notice how all column references are qualified with their source. Some of the original query's columns do not have a source alias prefixing the column name. Remember the caveat: such picky details can affect Access behavior:
SELECT TIME1.*
FROM TIME1 INNER JOIN
(SELECT UQ.PPERIOD, UQ.PC, Max(UQ.RCD_NUM) As Max_RCD_NUM
FROM [TIME1] UQ
GROUP BY UQ.PPERIOD, UQ.PC) As TIMEAGG
ON (TIME1.PPERIOD = TIMEAGG.PPERIOD) And (TIME1.PC = TIMEAGG.PC)
AND (TIME1.RCD_NUM = TIMEAGG.Max_RCD_NUM)
I know this question has probably been asked before; I just can't manage to get mine going. I set up my SQL database to have two tables, but in this instance I will only be using one called 'Book'. It has various columns, but the ones I want to work with are called 'WR', 'Customer', 'Contact', 'Model', 'SN', 'Status', 'Tech', 'WDone' and 'IN'.
I want to enter text into an edit box called edtWR and I want the button btnSearch to search the 'WR' column until it has a match (all of the entries will be different). Once it has that, it must write 'Customer', 'Contact', 'Model', 'SN' and 'Status' to labels; let's call them lblCustomer, lblContact, lblModel, lblSN and lblStatus.
Once the person has verified that this is the 'WR' they want, they must enter text into two edit boxes and one memo, called edtTech, edtIN and mmoWDone, and click on btnUpdate. That should then update the record.
I have 3 ADO components: one called dtbOut, that's my ADOConnection1; tableOut, that's my ADOTable; and dataOut, that's my ADODataSet. dataOut's command text is Select * From Book, if it helps.
I can get the whole process to work perfectly on an Access database, but with almost no experience with SQL Server I need help. I will add the code for the Access database in case it is needed for reference.
procedure TFOut.btnSearchClick(Sender: TObject);
begin
  // Filter the dataset on the WR column and show the first matching record
  dataout.Filter := 'WR = ''' + 'WR ' + edtwr.Text + '''';
  dataout.Filtered := True;
  dataout.First;
  lblcustomer.Caption := 'Customer: ' + dataout.FieldByName('Customer').AsString;
  lblcontact.Caption := 'Contact: ' + dataout.FieldByName('Contact').AsString;
  lblSN.Caption := 'SN: ' + dataout.FieldByName('SN').AsString;
  lblModel.Caption := 'Model: ' + dataout.FieldByName('Model').AsString;
  lblstatus.Caption := 'Status: ' + dataout.FieldByName('Status').AsString;
end;

procedure TFOut.btnUpdateClick(Sender: TObject);
begin
  // Write the edited values back to the current record
  dataout.Edit;
  dataout.FieldByName('Tech').AsString := edtTech.Text;
  dataout.FieldByName('WDone').AsString := mmoWDone.Lines.GetText;
  dataout.FieldByName('IN').AsString := edtIN.Text;
  dataout.Post;
end;
Do I need any additional components on my form to be able to do this against SQL Server? What do I need, and how do I even start? I've read a lot of things and it seems like I will need an ADOQuery1, but when it comes to the ADOQuery1.SQL part I fall off the wagon. I have also tried it the Access way, and I can search, but as soon as I try to update I get an "Insufficient key column information for updating or refreshing" error, which I also have no idea how to address.
If I need to state the question differently, please explain how to make it clearer, and if I need to add anything to the explanation or the code, please let me know.
SO isn't really the place for database tutorials, so I'm not going to attempt one, but instead focus on one basic thing that it's crucial to understand and get right in your database design before you even begin to write a Delphi db app. (I'm going to talk about this in terms of Sql Server, not MS Access.)
You mentioned getting an error "Insufficient key column information for updating or refreshing" which you said you had no idea how to address.
A Delphi dataset (of any sort, not just an ADO one) operates by maintaining a logical cursor which points at exactly one row in the dataset. When you open a (non-empty) dataset, this cursor is pointing at the first row, and you can move the cursor around using various TDataSet methods such as Next & Prior, First, Last and MoveBy. Some, but not all, types of TDataSet implement the Locate method, which enables you to go to a row matching criteria you specify; others do not. Delphi's ADO components do implement Locate (btw, Locate operates on rows you've already retrieved from the server; it's not for finding rows on the server).
One of the key ideas of Sql-oriented TDataSets such as TAdoQuery is that you can leave it to automatically generate Sql statements to do Updates, Deletes and Inserts. This is not only a significant productivity aid, but it avoids coding errors and omissions when you try to do it yourself.
If you observe ADO doing its stuff against an MS Sql Server table using SS's Profiler utility, then with a well-designed database you'll find that it does this quite nicely and efficiently, provided the database design follows one cardinal rule, namely that there must be a way to uniquely identify a particular row in a table. The most common way to do this is to include in each table, usually as the first column, an int(eger) ID column, and to define it as the "Primary key" of the table. Although there are other methods to generate a suitable ID value to go in this column, Sql Server has a specific column property, 'Identity', which takes care of this on the server.
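For example, a minimal sketch of such a table in Sql Server terms; apart from the ID, the column names are taken from your question and the types are just guesses:
CREATE TABLE Book (
    BookID   int IDENTITY(1,1) NOT NULL PRIMARY KEY,  -- unique row identifier the ADO layer can key on
    WR       nvarchar(50)  NOT NULL,
    Customer nvarchar(100) NULL,
    Contact  nvarchar(100) NULL,
    Model    nvarchar(100) NULL,
    SN       nvarchar(100) NULL,
    Status   nvarchar(50)  NULL,
    Tech     nvarchar(100) NULL,
    WDone    nvarchar(max) NULL,  -- memo-style text
    [IN]     nvarchar(100) NULL   -- IN is a reserved word, hence the brackets
);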
Once a table has such a column, the ADO layer (which is a data-access layer provided by Windows that dataset components such as TAdoQuery sit upon) can automatically generate Sql statements to do Updates and Deletes, e.g.
Delete from Table1 where Table1ID = 999
and
Update Table1 set SomeCharField = 'SomeValue' where Table1ID = 666
and you can leave it to the AdoQuery to pick up the ID value for a newly-inserted row from the server.
One of the helpful aspects of leaving the Sql to be generated automatically is that it ensures that the Sql only affects a single row and so avoids affecting more rows than you intend.
Once you've got this key aspect of your database design correct, you'll find that Delphi's TDataSet descendants such as TAdoQuery and its DB-aware components can deal with most simple database applications without you having to write any Sql statements at all to update, insert or delete rows. Usually, however, you do still need to write Sql statements to retrieve the rows you want from the server, by using a 'Where' clause to restrict the rows retrieved to a sub-set of the rows on the server.
Maybe your next step should be to read up on parameterized Sql queries, to reduce your exposure to "Sql Injection":
https://en.wikipedia.org/wiki/SQL_injection
as it's best to get into the habit of writing Sql queries using parameters. Btw, Sql Injection isn't just about Sql being intercepted and modified when it's sent over the internet: there are forms of injection where a malicious user who knows what they're doing can simply type extra Sql statements into a field where the app "expects" them to specify nothing more than a column value as a search criterion.
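As a sketch, the two statements your Book form needs could look like this with parameters; the ? markers are the unnamed placeholders ADO expects, and the values are supplied from code (e.g. via the query's Parameters collection) rather than concatenated into the Sql text:
-- Search for one work request and return the fields shown on the labels
SELECT Customer, Contact, Model, SN, Status
FROM Book
WHERE WR = ?;

-- Write the technician's entries back to that row
UPDATE Book
SET Tech = ?, WDone = ?, [IN] = ?
WHERE WR = ?;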
I frequently do a static analysis of SQL databases, during which I have the luxury of nobody being able to change the data except me.
However, I have not found a way to 'tell' this to SQL in order to prevent running the same query multiple times.
Here is what I would like to do, first I start with a complicated query that has a very small output.
SELECT * FROM MYTABLE WHERE MYPROPERTY = 1234
Then I run a simple query from the same window (mostly using SQL Server Management Studio, if that is relevant):
SELECT 1
Now I suddenly realize that I forgot to save the results from my first complicated (slow) query.
As I know the underlying data did not change (or even if it did), I would like to look one step back and simply get the result. However, at the moment I don't know any trick to do this, and I have to run the entire query again.
So the question summary is: how can I (automatically store and) retrieve the results of recently executed queries?
I am particularly interested in simple SELECT queries, and would be happy to allocate, say, 100MB of memory for automated result storage. I would prefer a solution that works in SQL Server Management Studio with T-SQL, but other SQL solutions are also welcome.
EDIT: I am not looking for a way to manually prevent this from happening. In the cases where I can anticipate the problem it will not happen.
This can't be done in Microsoft SQL Server. SQL Server does not cache results; instead, it caches the data pages that were accessed by your query. This should make your query go a lot faster the second time around, so it won't be as painful to re-run it.
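You can see that page-level caching at work by turning on I/O statistics; MYTABLE and MYPROPERTY below are just the placeholder names from your question:
SET STATISTICS IO ON;

-- First run: expect physical reads as pages are pulled from disk
SELECT * FROM MYTABLE WHERE MYPROPERTY = 1234;

-- Second run: the same pages are now in the buffer pool, so the
-- Messages tab should show logical reads only
SELECT * FROM MYTABLE WHERE MYPROPERTY = 1234;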
In other databases, such as Oracle and MySQL, they do have a query caching mechanism that will allow you to retrieve the results directly the second time around.
I run into this frequently; I often just throw the results of longer-running queries into a temp table:
SELECT *
INTO #results1
FROM MYTABLE WHERE MYPROPERTY = 1234
SELECT *
FROM #results1
If the query is very long-running I might use a 'real' table. It's a good way to save on re-run time.
Downside is that it adds to your query.
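A sketch of the permanent-table variant, which survives closing the window; the table name is just an example:
SELECT *
INTO dbo.results_myproperty_1234
FROM MYTABLE
WHERE MYPROPERTY = 1234;

-- Re-read later without re-running the slow query
SELECT * FROM dbo.results_myproperty_1234;

-- Clean up when finished
DROP TABLE dbo.results_myproperty_1234;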
You can also send query results to a file in SSMS; info on formatting the output is here: SSMS Results to File
The easiest way to do this is to run each query in its own SSMS window, the results will stay there until you close it, or run out of memory - besides that, I am not sure there is a way to accomplish what you want.
Once you close the SSMS window, I don't believe there is a way to get back 'cached' results.
This isn't a technical answer to your question. Having written queries and looking at results for many years, I am in the habit of saving the results in Excel, regardless of the database/query tool I'm using.
The format in Excel is rather methodical:
Each worksheet has the date. (Called something like "1 Jul".)
Each spreadsheet contains one month. (Typically with the month name like "work-201307".)
In the "B" column I copy and paste the query.
Underneath, in the "C" column, I copy and paste the results.
The next query starts a few lines further down, and so on, one after the other.
I put the queries in the "B" column, so I can go to the "A" column and use keyboard navigation to get to the first row. I put the results in the "C" column, so I can go to the "B" column and use keyboard navigation to move between queries.
I mostly do this so I can go back and see the work I did many months ago. For instance, someone sends an email from February and says "do this again". I can go back to the February spreadsheet, go to the day it was created, and see what I was doing at that time.
In your question, though, I realize that I now instinctively solve this problem, because the "right click on the grid, copy with column headers, alt-tab to Excel, alt-V" routine is a behavior that comes quite naturally to me.
I was going to suggest running each query from a script, with a counter (stored in a table) increased each time the query is executed (i.e. i++), and storing each result in a temp table called "tmpTable" + i, but it sounds very complicated to manage. Am I right?
Then I googled and found this tool pack; I didn't try it, but you could take a look:
http://www.ssmstoolspack.com/Features
Hope it helps.
EDIT: added the following link. There's an option to output to an XML file, and they mention SQL Server Integration Services as a possible solution too.
http://michaeljswart.com/2012/03/sending-query-results-to-others/#method5
SECOND EDIT: There's this DBMS-Independent tool too, it sounds interesting:
http://www.sql-workbench.net/
I am not sure this is what you want. Anyway, check my answer.
In SQL Server Management Studio you can open multiple tabs for executing queries. Open a new tab for each query; the result of each executed query will then remain available under its tab.
After executing one query in a tab, don't use that tab for a new query; open a new tab for that job.
Have you considered using some kind of offline SQL client such as Excel? Specifically, Excel will retrieve the results into the spreadsheet (using the Data ribbon/menus), where they are stored pretty much permanently as results. It will prompt you to refresh when necessary, or you can do it on demand.
As to whether this can be done in T-SQL or other databases: it depends on the database and its results cache, and even then those are options the query processor can use, not guarantees for an individual query.
I've experienced a strange issue today. One of my projects is running .NET + SQL Server 2005 Express.
There is one query I use for some filtering.
SELECT *
FROM [myTable]
where UI = 2011040773395012950010370
GO
SELECT *
FROM [myTable]
where UI = '2011040773395012950010370'
GO
The UI column is nvarchar(256) and the UI value passed to the filter is always 25 digits.
In my DEV environment both queries return the same row and no errors. However, at my customer's site, after a few months of running fine, the first version started to return a type conversion error.
Any idea why?
I'm not looking for a solution - I'm looking for an explanation of why it works in one environment and not the other, and why all of a sudden it started to return errors instead of results. I'm using the same tools on both (SQL Server Management Studio Express and 2 different .NET clients).
The environments are more or less the same (W2k3 + SQL Server 2005 Express).
This is completely predictable and expected because of datatype precedence.
For this one, the UI column will be changed to decimal(25,0):
where UI = 2011040773395012950010370
This one is almost correct. The right-hand side is varchar and is changed to nvarchar:
where UI = '2011040773395012950010370'
This is the really correct version, where both types are the same:
where UI = N'2011040773395012950010370'
Errors will have started because the UI column now contains a value that won't CAST to decimal(25,0).
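A small repro sketch of that failure mode; the temp table and the second value are made up for illustration:
CREATE TABLE #myTable (UI nvarchar(256));
INSERT INTO #myTable VALUES (N'2011040773395012950010370');  -- the numeric-looking value from the question
INSERT INTO #myTable VALUES (N'NOT A NUMBER ANY MORE');      -- a later value that is not all digits

-- Datatype precedence converts every UI value to decimal(25,0),
-- so the second row raises a conversion error:
SELECT * FROM #myTable WHERE UI = 2011040773395012950010370;

-- Comparing nvarchar to nvarchar works regardless of the data:
SELECT * FROM #myTable WHERE UI = N'2011040773395012950010370';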
Some unrelated notes:
if you have an index on the UI column it would be ignored in the first version because of the implicit CAST required
do you need unicode to store numeric digits? There is a serious overhead with unicode data types in storage and performance
why not use char(25) or nchar(25) if values are always fixed length? Your queries use too much memory as the optimiser assumes an average length of 128 characters based on nvarchar(256)
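If you do switch to a fixed-length type, the change itself is a one-liner; this sketch assumes the column currently disallows NULLs and that every existing value really fits in 25 characters, so test it on a copy first:
ALTER TABLE myTable ALTER COLUMN UI nchar(25) NOT NULL;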
Edit, after comment
Don't assume "why does it works sometimes" when you don't know that it does work
Examples:
The value could have been deleted then added later
A TOP clause or SET ROWCOUNT could mean the offending value is not reached
The query was never run so it couldn't fail
The error is silently ignored by some other code?
Edit 2 for hopefully more clarity
Chat
gbn:
When you run WHERE UI = 2011040773395012950010370, you do not know the order of row access. So if one row does have "bicycle" you may or may not hit that row.
Random:
So the problem may not be in the row which I was trying to access, but in another one with a corrupted value?
gbn
different machines will have different order of reads based on service pack level, index and table fragmentation, number of CPUs, parallelism maybe
correct
and TOP even. That kind of stuff
As Tao mentions, it's important to understand that another, unrelated row can break the query even if this one is OK.
Data type precedence can cause ALL the data in that column to be converted before the WHERE clause is evaluated.
I have a relatively simple select statement in a VB6 program that I have to maintain. (Suppress your natural tendency to shudder; I inherited the thing, I didn't write it.)
The statement is straightforward (reformatted for clarity):
select distinct
b.ip_address
from
code_table a,
location b
where
a.code_item = b.which_id and
a.location_type_code = '15' and
a.code_status = 'R'
The query returns a list of IP addresses from the database. The key column in question is code_status. Some time ago, we realized that one of the IP addresses was no longer valid, so we changed its status to 'I' (invalid) to exclude it from appearing in the query's results.
When you execute the query above in SQL Plus, or in SQL Developer, everything is fine. But when you execute it from VB6, the check against code_status is ignored, and the invalid IP address appears in the result set.
My first guess was that the results were cached somewhere. But, not being an Oracle expert, I have no idea where to look.
This is ancient VB6 code. The SQL is embedded in the application. At the moment, I don't have time to rewrite it as a stored procedure. (I will some day, given the chance.) But, I need to know what would cause this disparity in behavior and how to eliminate it. If it's happening here, it's likely happening somewhere else.
If anyone can suggest a good place to look, I'd be very appreciative.
Some random ideas:
Are you sure you committed the changes that invalidate the ip-address? Can someone else (using another db connection / user) see the changed code_status?
Are you sure that the results are not modified after they are returned from the database?
Are you sure that you are using the "same" database connection in SQLPlus as in the code (database, user etc.)?
Are you sure that that is indeed the SQL sent to the database? (You may check by tracing on the Oracle server or by debugging the VB code). Reformatting may have changed "something".
Off the top of my head I can't think of any "caching" that might "re-insert" the unwanted ip. Hope something from the above gives you some ideas on where to look.
In addition to the suggestions that IronGoofy has made, have you tried swapping round the last two clauses?
where
a.code_item = b.which_id and
a.code_status = 'R' and
a.location_type_code = '15'
If you get a different set of results then this might point to some sort of wrangling going on that results in dodgy SQL actually being sent to the database.
There are Oracle bugs that result in incorrect answers. This surely isn't one of those times. Usually they involve some bizarre combination of views and functions and dblinks and lunar phases...
It's not cached anywhere. Oracle doesn't cache results until 11 and even then it knows to change the cache when the answer may change.
I would guess this is a data issue. You have a DISTINCT on the IP address in the query, why? If there's no unique constraint, there may be more than one copy of your IP address and you only fixed one of them.
And your Code_status is in a completely different table from your IP addresses. You set the status to "I" in the code table and you get the list of IPs from the Location table.
Stop thinking zebras and start thinking horses. This is almost certainly just data you do not fully understand.
Run this
select
a.location_type_code,
a.code_status
from
code_table a,
location b
where
a.code_item = b.which_id and
b.ip_address = <the one you think you fixed>
I bet you get one row with an 'I' and another row with an 'R'
I'd suggest you have a look at the V$SQL system view to confirm that the query you believe the VB6 code is running is actually the query it is running.
Something along the lines of
select sql_text, fetches
from v$sql
where sql_text like '%ip_address%'
Verify that the SQL_TEXT is the one you expect and that the FETCHES count goes up as you execute the code.