SQL query giving wrong result on linked server - sql

I'm trying to pull user data from 2 tables, one locally and one on a linked server, but I get the wrong results when querying the remote server.
I've cut my query down to
select * from SQL2.USER.dbo.people where persId = 475785
for testing and found that when I run it I get no results even though I know the person exists.
(persId is an integer, db is SQL Server 2000 and dbo.people is a table by the way)
If I copy/paste the query and run it on the same server as the database, it works.
It only seems to affect certain user ids as running for example
select * from SQL2.USER.dbo.people where persId = 475784
works fine for the user before the one I want.
Strangely I've found that
select * from SQL2.USER.dbo.people where persId like '475785'
also works but
select * from SQL2.USER.dbo.people where persId > 475784
brings back records with persIds starting at 22519, not 475785 as I'd expect.
Hope that made sense to somebody
Any ideas ?
UPDATE:
Due to internal concerns about doing any changes to the live people table, I've temporarily moved my database so they're both on the same server and so the linked server issue doesn't apply. Once the whole lot is migrated to a separate cluster I'll be able to investigate properly. I'll update the update once this happens and I can work my way through all the suggestions. Thanks for your help.

The fact that LIKE works is not a major clue: LIKE forces integers to strings (so you can say WHERE field LIKE '2%' and you will get all records that start with a 2, even when field is of integer type). Your incorrect comparisons would lead me to think your indexes are corrupted, but you say they work when not used via the link... however, the index selected might be different depending on how the table is accessed? (I seem to recall an instance where I had duplicate indexes and only one was stale, although that was too long ago to recall the exact cause.)
Nevertheless, I would try rebuilding your index using the DBCC DBREINDEX (table_name) command. If it turns out that doing so fixes your query, you may want to rebuild them all: there are scripts available for rebuilding every index in a database easily.
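A minimal sketch, run on SQL2 itself and assuming the remote database really is named USER and the table is dbo.people as in the query above:
-- Rebuild every index on the table; passing '' for the index name means "all indexes"
USE [USER]
DBCC DBREINDEX ('dbo.people', '', 0)  -- 0 keeps the existing fill factor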

Is dbo.people a table or a view? I've seen something similar where the underlying table schema had been changed and dropping and recreating the view fixed the problem, although the fact that the query works if run directly on the linked server does indicate something index-based.

Is the linked server using the same collation? Depending on the index used, I could see something like this perhaps happening if the servers were not collation compatible, but the linked server was set up with the 'collation compatible' option enabled (which tells SQL Server it can run the comparison on the remote server).
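A hedged sketch for checking and changing that option (SQL2 is the linked server name from the question):
-- sp_helpserver lists the options currently set on the link, including 'collation compatible'
EXEC sp_helpserver 'SQL2'
-- If the servers really do use different collations, turning the option off forces
-- SQL Server to evaluate the comparison locally instead of pushing it to the remote server
EXEC sp_serveroption 'SQL2', 'collation compatible', 'false'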

I would check the following:
Check your definition of the linked server, and confirm that SQL2 is the server you expect it to be (see the sketch after this list)
Check and compare the execution plans both from the remote and local servers
Try linking by IP address rather than name, to ensure you have the proper machine
Put the code into a stored procedure on the remote machine, and try calling that instead
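A quick sketch of the first and third checks, using only the names from the question:
-- Definition and options of the linked server as this machine sees them
EXEC sp_helpserver 'SQL2'
-- Ask the remote machine who it actually is
SELECT * FROM OPENQUERY(SQL2, 'SELECT @@SERVERNAME AS remote_name, @@VERSION AS remote_version')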

Sounds like a bug to me - I've read of some issues along these lines, but can't remember specifically what. What version of SQL Server are you running?
select * from SQL2.USER.dbo.people where persId = 475785
For a persId which fails, how does:
SELECT *
FROM OpenQuery(SQL2, 'SELECT * FROM USER.dbo.people WHERE persId = 475785')
behave?

Related

How to get the query displayed when a change is made to a table or a field in a table in Postgresql?

I have used MySQL for some projects and recently I moved to PostgreSQL. In MySQL, when I alter a table or a field, the corresponding query is displayed on the page. But such a feature was not found in PostgreSQL (kindly excuse me if I'm wrong). Since the query was readily available, it was very helpful for me to test something in the local database (without explicitly typing the query), copy the printed query and run it on the server. Now it seems like I have to do all of this manually. Even though I'm familiar with the query operations, at times it can be a pretty time-consuming process. Can anybody help me? How can I get the corresponding query displayed in PostgreSQL (like in MySQL) whenever a change is made to a table?
If you use SELECT * FROM ... there should not be any reason for your output to not include newly added columns, no matter how you get your results - would that be psql in command line, PgAdmin3 or any other IDE.
After you add new columns, it is possible that these changes are still in an open transaction in another window or SQL session - be sure to COMMIT such a transaction. Note that your changes to data or schema will not be visible to any other database clients until the transaction commits.
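A small illustration of that point, with placeholder table and column names:
-- In one session:
BEGIN;
ALTER TABLE people ADD COLUMN nickname text;
-- Other sessions still see the old column list at this point.
COMMIT;
-- Only now does SELECT * FROM people include nickname for other clients.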
If your IDE still does not show the changes, maybe you need to refresh the list of tables, or if that option is not available, restart your IDE. If that still does not work, maybe you should use a better IDE.
If you have used SELECT field1, field2, ... FROM ... then you must add new fields into your SELECT statement(s) - but this would be true for any other SQL implementation, MySQL included.
You could use the LISTEN / NOTIFY mechanism in PostgreSQL to notify your client on altering the database schema.
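A hedged sketch of that idea (PostgreSQL 9.3 or later; the channel and function names are made up for the example):
-- Fire a NOTIFY whenever a DDL command finishes
CREATE OR REPLACE FUNCTION notify_ddl() RETURNS event_trigger AS $$
BEGIN
    PERFORM pg_notify('ddl_events', TG_TAG);  -- TG_TAG holds the command tag, e.g. 'ALTER TABLE'
END;
$$ LANGUAGE plpgsql;

CREATE EVENT TRIGGER ddl_notify ON ddl_command_end
    EXECUTE PROCEDURE notify_ddl();

-- Any session that has run LISTEN ddl_events will then receive a notification on every DDL change:
LISTEN ddl_events;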

SQL Server reports 'Invalid column name', but the column is present and the query works through management studio

I've hit a bit of an impasse. I have a query that is generated by some C# code. The query works fine in Microsoft SQL Server Management Studio when run against the same database.
However when my code tries to run the same query I get the same error about an invalid column and an exception is thrown. All queries that reference this column are failing.
The column in question was recently added to the database. It is a date column called Incident_Begin_Time_ts.
An example that fails is:
select * from PerfDiag
where Incident_Begin_Time_ts > '2010-01-01 00:00:00';
Other queries that reference the column, like SELECT MAX(Incident_Begin_Time_ts), also fail when run in code because it thinks the column is missing.
Any ideas?
Just press Ctrl + Shift + R and see...
In SQL Server Management Studio, Ctrl+Shift+R refreshes the local IntelliSense cache.
I suspect that you have two tables with the same name. One is owned by the schema 'dbo' (dbo.PerfDiag), and the other is owned by the default schema of the account used to connect to SQL Server (something like userid.PerfDiag).
When you have an unqualified reference to a schema object (such as a table) — one not qualified by schema name — the object reference must be resolved. Name resolution occurs by searching in the following sequence for an object of the appropriate type (table) with the specified name. The name resolves to the first match:
Under the default schema of the user.
Under the schema 'dbo'.
The unqualified reference is bound to the first match in the above sequence.
As a general recommended practice, one should always qualify references to schema objects, for performance reasons:
An unqualified reference may invalidate a cached execution plan for the stored procedure or query, since the schema to which the reference was bound may change depending on the credentials executing the stored procedure or query. This results in recompilation of the query/stored procedure, a performance hit. Recompilations cause compile locks to be taken out, blocking others from accessing the needed resource(s).
Name resolution slows down query execution as two probes must be made to resolve to the likely version of the object (that owned by 'dbo'). This is the usual case. The only time a single probe will resolve the name is if the current user owns an object of the specified name and type.
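A quick sketch for checking the duplicate-table theory and for qualifying the reference explicitly:
-- More than one row here means PerfDiag exists under more than one schema
SELECT TABLE_SCHEMA, TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME = 'PerfDiag';

-- Schema-qualified, so name resolution is unambiguous
SELECT *
FROM dbo.PerfDiag
WHERE Incident_Begin_Time_ts > '2010-01-01 00:00:00';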
[Edited to further note]
The other possibilities are (in no particular order):
You aren't connected to the database you think you are.
You aren't connected to the SQL Server instance you think you are.
Double check your connect strings and ensure that they explicitly specify the SQL Server instance name and the database name.
In my case I restarted Microsoft SQL Server Management Studio and this worked well for me.
If you are running this inside a transaction and a SQL statement before this drops/alters the table you can also get this message.
I eventually shut-down and restarted Microsoft SQL Server Management Studio; and that fixed it for me. But at other times, just starting a new query window was enough.
If you are using variables with the same name as your column, it could be that you forgot the '@' variable marker. In an INSERT statement it will be detected as a column.
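A hedged illustration of that point; the PerfDiagArchive table and the variable name are made up for the example:
DECLARE @begin_ts DATETIME
SET @begin_ts = '2010-01-01'

-- Forgetting the '@' makes SQL Server resolve the name as a (non-existent) column,
-- i.e. "... WHERE Incident_Begin_Time_ts > begin_ts" fails with "Invalid column name 'begin_ts'".
INSERT INTO dbo.PerfDiagArchive
SELECT * FROM dbo.PerfDiag
WHERE Incident_Begin_Time_ts > @begin_ts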
Just had the exact same problem. I renamed some aliased columns in a temporary table which is further used by another part of the same code. For some reason, this was not captured by SQL Server Management Studio and it complained about invalid column names.
What I simply did was create a new query, copy/paste the SQL code from the old query into the new one, and run it again. This seemed to refresh the environment correctly.
In my case I was trying to get the value from the wrong ResultSet when querying multiple SQL statements.
In my case it seemed to be a weird caching problem. The solutions above didn't work.
If your code was working fine, you added a column to one of your tables and it now gives the 'invalid column name' error, and the solutions above don't work, try this: first run only the section of code that creates the modified table, and then run the whole code.
Including this answer because this was the top result for "invalid column name sql" on Google and I didn't see this answer here. In my case, I was getting Invalid Column Name, Id1 because I had used the wrong id in my .HasForeignKey statement in my Entity Framework C# code. Once I changed it to match the .HasOne() object's id, the error was gone.
I've gotten this error when running a scalar function using a table value, but the Select statement in my scalar function RETURN clause was missing the "FROM table" portion. :facepalms:
Also happens when you forget to change the ConnectionString and query a table that has no idea about the changes you're making locally.
I had this problem with a View, but the exact same SQL code worked perfectly as a query. In fact SSMS actually threw up a couple of other problems with the View, that it did not have with the query. I tried refreshing, closing the connection to the server and going back in, and renaming columns - nothing worked. Instead I created the query as a stored procedure, and connected Excel to that rather than the View, and this solved the problem.

Select * not returning all columns - Coldfusion / SQL Server 2008

I am getting some strange behavior involving database queries that I have never seen before and I'm hoping you can illuminate the issue.
I have a table of data called myTable with some columns; thus far everything involving it has been fine. Now I've just added a column called subTitle, and I notice that the SELECT * query that pulls in the data for a given record is not aware of that column (it says the returned query does not have a subTitle column), but if I explicitly name the column (SELECT subTitle) it is. I thought perhaps that the ColdFusion server might be caching the query, so I tried to work around it with cachedwithin="#CreateTimeSpan(0, 0, 0, 0)#" but no dice.
Consider the below code:
<cfquery name="getSub" datasource="#Application.datasourceName#">
SELECT subTitle
FROM myTable
WHERE RecordID = '674'
</cfquery>
<cfoutput>#getSub.subTitle#</cfoutput>
<cfquery name="getInfo" datasource="#Application.datasourceName#">
SELECT *
FROM myTable
WHERE RecordID = '674'
</cfquery>
<cfoutput>#getInfo.subTitle#</cfoutput>
Keeping in mind that record 674 has the string "test" in its subTitle column, the output of the above is
test
[[CRASH WITH ERROR]]
This doesn't make sense to me unless SQL Server 2008 has somehow cached the SELECT * query with the previous incarnation of the table, but the strange thing is, if I run the query right from within SQL Server Management Studio there are no problems and it shows all columns with the SELECT *.
Frankly this one has me baffled; I know I can get around this by explicitly naming all the desired columns in the select query instead of using * (which is best practice anyway), but I want to understand why this is occurring.
I've worked with SQL Server 2005 for many years and never had something like this happen, which leads me to believe it might involve something new in SQL Server 2008; but then again the fact that the query works fine inside of Management Studio doesn't jibe with that either.
===UPDATE===
Clearing the Template Cache in the CF admin will solve the issue
Yes, ColdFusion caches the <cfquery> SQL string. If the underlying table structure changes, the result might be an exception like the one you see.
Work-arounds:
Recommended solutions:
If you have the Development or Enterprise version you can view your query cache in the Server Monitor and clear only the queries there. (comment from @Dpolehonski, thanks)
Otherwise, click Clear Template Cache Now in the ColdFusion Administrator (under Server Settings/Caching).
This will invalidate all cached CFML templates on the server and CF will re-compile them when necessary.
Quick and dirty:
Subtly change the query SQL, for example add a space somewhere. When you are on a development machine it's the quickest way to fix the issue.
This will invalidate the compiled version of this query only and force a re-compile.
(Note that removing the subtle change will trigger the error again since the old query text will remain cached.)
Brute-force:
Re-start the ColdFusion server. Brutal, but effective.
Or the quick and super dirty method:
<cfquery name="getInfo" datasource="#Application.datasourceName#">
SELECT
*, '#createUUID()#' as starQueryCacheFix
FROM
myTable
WHERE
RecordID = '674'
</cfquery>
Don't leave it in production code though... it'll defeat all of the query caching ColdFusion does. I did say it was super dirty ;)

Empty XML Columns during SQL Server replication

We have a merge replication setup on SQL Server that goes like this: 1 SQL server at the office, another SQL server traveling around the world. The publisher is the SQL server at the office.
In about 1% of the cases, two of our tables with a column of XML data type (not bound to a schema) are replicated with rows containing empty XML columns. (This only happens when data is sent from the "traveling server" back home, but then again, data seems to be changed more often there.) We only see this in the production environment (WAN replication).
Things I have verified:
The row is replicated, as the last modification date on the row is refreshed but the xml column is empty. Of course it is not empty on the other SQL Server.
No conflicts are displayed in the replication conflicts UI.
It is not caused by the size of the data inside the XML Column as some are very small.
Usually, the problem occurs in batches. (The XML column of 8-9 consecutive rows will be empty.)
The problem occurs if a row was inserted OR updated. No pattern there.
The problem seems to occur, but this is pure speculation on my part when the connection is weaker. ( We've seen this problem happen more often when the server was far away as compared to when it was close by. )
Sorry if I have confused some things, I am not really a DBA, more of a DEV with knowledge of SQL, but since the application using the database keeps getting blamed for the problems (the XML column must not be empty!!) I have taken it to heart to try and find the problem instead of just manually patching the data each time (what's the use of replication if you have to do that?).
If anyone could help out with this problem, or at least suggest some ways of being able to debug / investigate this it would be greatly appreciated.
I did search a lot on Google and I did find this: Hot Fix. But we do have the latest service pack and the problem seems a bit different.
fyi: We have a replication setup locally here but the problem never occurs. We will be trying a WAN simulator on it as well to see if that can help.
Thanks
Edit: hot fix is now available for my issue: http://support.microsoft.com/kb/2591902
After logging this issue with Microsoft, we were able to reproduce the problem without a slow link ( Big thanks to the competent escalation engineer at Microsoft ). The repro is a bit different from our scenario, but highlights the timing issue we were getting perfectly.
Create 2 tables – one parent, one child, with a PK-FK relationship (sketched after this list)
Insert 2 rows in the parent table
Set up replication – configure merge agent to run ON DEMAND
Sync
Once all is replicated:
On the PUBLISHER: delete one row from the parent table
On the SUBSCRIBER: Insert 2 rows of data that references the parentid you deleted above
Insert 5 rows of data that references the parentid that will stay in the table
Sync, Merge agent will fail, Sync again, Merge agent will succeed
Missing XML data on the publisher on the 5 rows.
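A minimal sketch of the two tables from steps 1-2, with made-up names and an XML payload column like the one that goes missing:
-- Parent/child pair with the PK-FK relationship used in the repro
CREATE TABLE dbo.Parent (
    ParentId INT NOT NULL PRIMARY KEY,
    Payload  XML NULL
)
CREATE TABLE dbo.Child (
    ChildId  INT NOT NULL PRIMARY KEY,
    ParentId INT NOT NULL REFERENCES dbo.Parent (ParentId),
    Payload  XML NULL
)

INSERT INTO dbo.Parent (ParentId) VALUES (1)
INSERT INTO dbo.Parent (ParentId) VALUES (2)
-- Merge replication setup and the publisher/subscriber DML then follow the numbered steps above.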
Seems it is a bug that is in SQL Server 2005/2008 and 2008R2.
It will be addressed in a hot fix in 2008 and up. ( As SQL Server 2005 is no longer being altered )
Cheers.
You may want to start out by slapping a bandaid on this perplexing situation to buy some time to fully investigate and fix (or more likely get MS to fix it). SQL Data Compare is an excellent tool that might help.
Figured I'd put an update here as this issue gave me a few gray hairs, and I am somewhat closer to a solution now.
I finally had some time to work on this and managed to reproduce this issue in our test environment, using a WAN simulator and slowing down the link and injecting some random packet loss. ( to best simulate the production environment where the server is overseas on a really bad line ).
After doing some SQL tracing, and some verbose logging here are my conclusions:
When replicating a row with an XML column, the process is done in 2 steps. First an insert is done of the full row but with an empty string for the XML column. Right after, an update is done, this time with the XML column containing data. Since the link is slow, in some situations a foreign key violation occurred.
In this scenario, Table2 depends on Table1. After finishing replicating Table1, and starting to replicate Table2 (an enumeration of inserts/updates which takes time on a slow link), some entries were added to Table1 and Table2. Therefore some inserts on Table2 failed because the Table1 entries were not in the database yet and were only going to be replicated in the next batch. The next time the replication occurred, no more foreign key violations occurred; however, when it tried to insert the row that had previously failed in Table2 (the XML column row), the update part of it was missing (I could see that in the SQL Profiler) and that is why the row ended up, after all was done, with an empty XML.
Setting "Enforce for replication" to false on the foreign keys seems to address the problem, however I do still think that this whole process should work with the option set to true.
I logged a support call with Microsoft for this. I have sent the traces and logs to Microsoft and will see what they have to say.
I've read this article: http://msdn.microsoft.com/en-us/library/ms152529(v=SQL.90).aspx. But for me, setting this option to false is kind of a work around, no?
What do you guys think?
ps: Hope this is clear, tried to explain it the best I could. English is not my first language.

Problem with a MS Access query after a "Compact and repair" operation

I have an Access application that uses the classical front-end/back-end approach. Yesterday, the back end got corrupted for a reason I don't know. So I opened the back end with Access 2003 and Access asked me if I wanted to repair the file; I said yes and it seemed to work.
I can open the database see the tables contents and run most of the queries.
However there is an access query that doesn't work with a specific where clause.
Example :
// This works in the original DB, but not in the compacted one:
SELECT a, b, c
FROM tbl1 INNER JOIN tbl2 ON tbl1.d = tbl2.d
WHERE e = 3 AND tbl2.f = 1;
// This works in both the original and the compacted one:
SELECT a, b, c
FROM tbl1 INNER JOIN tbl2 ON tbl1.d = tbl2.d
WHERE e = 3;
When I try to run the queries, nothing happens. The Access process starts to use most of the CPU and the GUI stops responding. If I run the query from the query editor, I can use Ctrl+Break to stop the execution. I tried to give the query lots of time and it didn't help.
I've checked the execution plan in showplan.out and it seems correct (at least it should not take forever to execute).
I tried to compact the DB again. I tried to import the tables into a new DB. I even tried to import the tables and their data into an MDB file that was in a known good state (from a backup).
Anyone have an idea?
Sounds like an index was corrupted and when that happens, it's dropped during the compact. Check for a system table called MSysCompactErrors -- you'll have to show hidden objects and/or system objects in Tools | Options | VIEW.
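With hidden and system objects visible, that table can also be read directly; a one-line sketch:
SELECT * FROM MSysCompactErrors;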
Never compact a Jet MDB without making a backup beforehand. Because of that rule, the COMPACT ON CLOSE function is completely useless, as it's not cancellable, so you should always make sure it's turned off in all MDBs.
I don't know what type of metadata Access brings along when it imports a table from one database into another. If the metadata is corrupted, importing the table to another database wouldn't necessarily resolve the problem. If practical, you might try creating the tables from scratch in a brand new database and then just exporting and importing (or copying and paste-appending) the data into the new database.
I've never seen a table get corrupted like this in such a small database, although with Access anything is possible. Could there be something wrong with the data?
I'd try recreating the query fresh (new name, etc.), and see what happens.
You could even try copying it (even within the same DB or to a brand new one). If that works, the worst case scenario is you have to copy all the objects across to a new DB.
Is there an index on the field tbl2.f?
Also try going into that table in datasheet view, sort tbl2.f in ascending sequence and see if there is anything really strange in the first or last records.
Do you have access to a SQL Server installation? You could use the Upsizing Wizard under the Tools -> Database Utilities menu to copy the data to SQL Server, and see if you get the same problem there.