Issue with legacy Delphi code: I added one line of code to correct one error and created a new one.
The new error causes the same executable to yield different results on different servers (I switched the executable's pointer from the dev to the prod environment).
code:
sEscapedString:=stringreplace(sStringIn,'[','''+char(27)+''[',[rfReplaceAll]);
sEscapedString:=stringreplace(sEscapedString,']','''+char(27)+'']',[rfReplaceAll]);
sEscapedString:=stringreplace(sEscapedString,'''','''''',[rfReplaceAll]); // this line created the new bug
result:=' like ''' + Trim(sEscapedString) + '%'''+' escape char(27) ';
When running the code against dev, this query finds objects with the characters '[' and ']' in them.
Against prod, the query does not find those items.
The first thing I checked was the data: Exactly identical in both cases
The second thing I checked was SQL server versions (11.0.3128 on BOTH servers)
The third thing I am checking is settings on those servers:
DBCC USEROPTIONS; -- same on both
SELECT name, collation_Name FROM sys.databases -- same on both
select @@OPTIONS -- same on both.
Quoted identifiers are 'ON' for both servers
It comes down to the fact that I know one server is treating the escape character (chr(27)) differently than the other, but I do not know why.
Does anyone have a theory (or answer) as to why the two similar servers are treating the escape characters differently?
The goal here is getting the prod server to return values with '[' and ']', as setting up my system to work with the legacy code will take a LOT of additional time. I do have a fix for the code:
sEscapedString:=stringreplace(sStringIn,'[','[[]',[rfReplaceAll]);
But the faster option would seem to be getting the server to read the values the same.
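For reference, the bracket form from that code fix produces a pattern that needs no ESCAPE clause at all; a minimal sketch (the table and column names here are invented):

-- '[[]' tells LIKE to treat the bracket as a literal character,
-- so no separate escape character has to be declared.
SELECT name FROM items WHERE name LIKE '[[]%';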
Update: We found the root cause of the difference, and it was more mundane than we expected: it turns out the query was actually executed twice, and the second execution was missing the key piece on the production server.
The issue was resolved by moving the new line of code so that it executes first rather than last.
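For illustration, the clause the code is meant to generate looks roughly like the first statement below (table and column names are invented). With the quote-doubling replace running last, the quotes wrapping char(27) were doubled as well, so the concatenation ended up as literal text inside the pattern, as in the second statement:

-- Intended: char(27) is concatenated into the pattern and declared as the
-- escape character, so '[' is matched literally.
SELECT name FROM items WHERE name LIKE 'abc' + char(27) + '[def%' ESCAPE char(27);

-- Broken: the doubled quotes turn the concatenation into plain text inside the
-- string literal, so no real escape character precedes '['.
SELECT name FROM items WHERE name LIKE 'abc''+char(27)+''[def%' ESCAPE char(27);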
I would first try to find out whether this SQL causes different behaviour only when it is sent from the application, by sending the SQL from an interactive SQL client tool to both servers.
To make sure that the manually tested SQL is exactly the same as in the application, I would try to log or capture the exact SQL as sent from the application as a text file and then paste its content to the SQL client tool.
If the server is the culprit, then using the SQL from a different client tool should cause the same difference with the two servers. If the client tool shows the same (correct) result on both servers, then something is going on in the Delphi application.
p.s. upvoted, it is an interesting phenomenon
Related
Earlier we were using Sybase as the back end. Now we are migrating to SQL Server, while the front end remains the same, i.e. PowerBuilder.
Issue:
I have DataWindow code which takes two retrieval arguments, adt_from_date and adt_to_date, both of Date format. This works fine for the PB-Sybase combination, but throws error 37000 for SQL Server.
Here is our code. If I hard-code the dates, e.g. if I replace :adt_from_date with '20141001', the code works fine.
SELECT A.MEMBER_NO AS 'MEMBER_NO',
ROUND(SUM(TRANSACTION_CHARGES),2) as 'AMOUNT',
SUBSTRING(DATENAME(MM, :adt_from_date ),1,3) +'-'+substring(convert(varchar,datepart(YY, :adt_from_date )),3,2) as 'REASON'
FROM TRAN_SERVICE_TAX_DRV A,
MEMBER_MASTER B
WHERE A.MEMBER_NO = B.MEMBER_NO
AND A.TRADE_DATE BETWEEN :adt_from_date AND :adt_to_date
GROUP BY A.MEMBER_NO
Please suggest something on this.
I'd suggest you look at the error text being loaded into the transaction instead of just the number; it will probably be much more helpful. If we're talking about SQLState 37000, that doesn't narrow it down much.
If that doesn't shed some light on it, I'd look at tracing either on the PowerBuilder side, or on the database side. Starting on the PB side probably makes most sense. If you're just running this DataWindow from the IDE, the trace is just a checkbox on the connection properties. If in the app, replace your "DBMS='xxx'" with "DBMS='tra xxx'" (there are other options described in the Connecting to your Database manuals).
Good luck.
I'm developing an application that pulls information from a Firebird SQL database (accessed via the network) that sits behind an existing application.
To get an idea of what the most commonly used tables are in the application, I've run Wireshark while using the application to capture the SQL statements that are transmitted to the database server when the program is running.
I have no problem viewing what tables are being accessed via the application, however some of the query values passed over the network are not being displayed in the captured SQL packets. Instead these values are replaced with what I assume is a variable of some sort.
Here's a sample query:
select * from supp\x0d\x0aWHERE SUPP.ID=? /* BIND_0 */ \x0d\x0a
(I am assuming \x0d\x0a is used to denote a newline in the SQL query)
Has anyone any idea how I may be able to view the values associated with BIND_0 or /* BIND_0 */?
Any help is much appreciated.
P.S. The version of Firebird I am using is 1.5 - I understand there are syntactical differences in the SQL used in this version and more recent versions.
That /* BIND_0 */ is simply a comment (probably generated by the tool that generated the query), the placeholder is the question mark before that. In Firebird statements are - usually - first prepared by sending the query text (with or without placeholders) to the server with operation op_prepare_statement = 68 (0x44). The server then returns a description of the bind variables and the datatypes of the result set.
When the query is actually executed, the client will send all bind variables together with the execute request (usually in operation op_execute = 63 (0x3F)) in a structure called the XSQLDA.
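In other words, only the parameterised text is sent at prepare time; the value travels with the execute packet, so it never shows up in the captured statement text. Logically, with a bind value of 42 (an invented value), the executed statement is equivalent to:

-- What effectively runs once the bind value is supplied at execute time;
-- the value itself never appears in the statement text on the wire.
select * from supp WHERE SUPP.ID = 42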
Context
We're changing our install scripts to use Ant's "sql" task and JDBC rather than the proprietary SQL clients sqlplus (Oracle) and osql (Microsoft).
Updated: added more context. Our "base data" (seed data) consists of a collection of .sql files containing "vendor-neutral" (i.e. working in both Oracle and MSSQL) SQL statements.
The Problem
The scripts run fine, with one exception:
This SQL fails in Oracle. Specifically, something (Ant or the JDBC driver) treats the dashes/hyphens as "beginning of a comment", even though they are embedded in a string. Note that the same SQL works fine with ant/sql and Microsoft's JDBC driver.
INSERT INTO email_client (email_client_id,generated_reply_text) VALUES(100002,'----- Original Message -----');
Related Bug
This ant bug appears to identify the problem. As it's still open (after 8 years), I'm not hoping for a fix soon. However, because the problem appears only in oracle, it may lie with the driver.
The oracle driver: jdbc thin driver, version 10.2.0.1.0
The Question
Does anyone have a workaround which works in both mssql and oracle? (e.g. changing the offending lines to define an escape character? I don't see an 'escape' on the 'insert' sql92 syntax)
thanks
After viewing the 'SQLExec' source and turning on verbose logging, I found a workaround:
Workaround
If the SQL statement includes a string containing '--', place the delimiter (semi-colon) on the next line.
This Fails
INSERT INTO email_client (email_client_id,generated_reply_text) VALUES(100002,'----- Original Message -----');
This Succeeds
Note that semi-colon is on a separate line
INSERT INTO email_client (email_client_id,generated_reply_text) VALUES(100002,'----- Original Message -----')
;
Details
Turning on verbose logging, I saw that when Ant came across the offending SQL statement, it actually passed three SQL statements at once to the JDBC driver: the offending statement, the next statement (which also included an embedded '--'), and the subsequent statement (which did not).
I gave the Ant code a quick glance and didn't see any obvious errors. Since I wasn't planning to patch Ant, I looked for a workaround.
Experimenting with it, I found that if I simply moved the delimiter (semicolon) to the next line for statements with an embedded '--', the scripts executed successfully.
thanks everyone for weighing in
You could try this:
INSERT INTO email_client (email_client_id,generated_reply_text)
VALUES(100002,LPAD('-',5,'-') || ' Original Message ' || LPAD('-',5,'-'));
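Since LPAD and || are Oracle syntax, a SQL Server counterpart of the same idea might look like this sketch (only the table and column names come from the question; the rest is untested against the poster's scripts):

-- Build the dashes at runtime so the literal '--' never appears in the statement text.
INSERT INTO email_client (email_client_id,generated_reply_text)
VALUES(100002, REPLICATE('-',5) + ' Original Message ' + REPLICATE('-',5));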
I was tasked with migrating an ancient (2007) supposedly ASP.NET site from a "private" server in the office of the people who own it onto my company's hosting account on rackspace. Everything went smoothly until we switched the DNS. It turned out the original programmer had hardcoded references to files, specifically to the file that generates and formats the navigation menu. When we replaced the hardcoded references, it suddenly wasn't behaving at all like it should. I tracked down the query he used to generate the XML table for the menu.
SELECT
parent.id,
parent.title,
'/page.aspx?id=' + isnull(cast(parent.id as varchar),'') + '&name=' + parent.name url,
siteMapNode.id, siteMapNode.title,
'/page.aspx?id=' + isnull(cast(siteMapNode.id as varchar),'') + '&name=' + siteMapNode.name url,
siteMapSubNode.title,
'/page.aspx?id=' + isnull(cast(siteMapSubNode.id as varchar),'') + '&name=' + siteMapSubNode.name url
FROM page parent
right join page siteMapNode on siteMapNode.pageid=parent.id and siteMapNode.active=1 and siteMapNode.hidden=0
left join page siteMapSubNode on siteMapSubNode.pageid=siteMapNode.id and siteMapSubNode.active=1 and siteMapSubNode.hidden=0
where SiteMapNode.name <> 'home' and parent.menu = '1' and parent.active = 1 and parent.hidden <> 1
order by parent.orderby, siteMapNode.orderby
for xml auto
I had backed up the local db (also on that box in their office), "restored" the backup to my local testing db, and then imported into Rackspace's db from my testing db. (All this middleman stuff is to get around their firewalls.) So to all intents and purposes, the source code, tables, and queries used across all 3 servers are exact copies.
When I run that query in MSSQL, here's a short excerpt of the results I get:
Their server (Version unknown right now, I have to go through teamviewer to find out.) and my Server (MSSQL 2008 Server 10.0.2531 - I think maybe SP1)
Rackspace's Server (MSSQL 2008 server 10.0.4064 I think maybe SP2)
Any advice, hints, ideas on why the rackspace one acts so weirdly is greatly appreciated. It seems like it's obvious that it's something to do with the difference in servers but I can't tell if it's the version, the SP, a setting, or what. If anyone has ever seen something similar I'd love to hear what you learned from it. I'm just a humble programmer, definitely not an SQL expert.
EDIT: Here is the schema of the table: id is the primary key, and the poorly named pageid is actually more of a parent-page-id.
I've tried looking at it with and without for xml auto. When I take off for xml auto it returns the same results in a slightly different order, but when I change the 4th line of the query from siteMapNode.id to parent.pageid then the results show the same order. Adding xml auto back in shows the same results as the above images. I'll try experimenting with for xml path, thanks for the suggestion!
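For anyone trying the same thing, here is a minimal sketch of the FOR XML PATH form (the element names and the reduced column list are illustrative, not taken from the original site):

-- FOR XML PATH gives explicit control over element and attribute names,
-- instead of deriving the nesting from table aliases as FOR XML AUTO does.
SELECT
    parent.id          AS '@id',
    parent.title       AS '@title',
    siteMapNode.id     AS 'node/@id',
    siteMapNode.title  AS 'node/@title'
FROM page parent
JOIN page siteMapNode
    ON siteMapNode.pageid = parent.id
   AND siteMapNode.active = 1
   AND siteMapNode.hidden = 0
WHERE parent.menu = '1' AND parent.active = 1 AND parent.hidden <> 1
ORDER BY parent.orderby, siteMapNode.orderby
FOR XML PATH('page'), ROOT('menu')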
I've hit a bit of an impasse. I have a query that is generated by some C# code. The query works fine in Microsoft SQL Server Management Studio when run against the same database.
However when my code tries to run the same query I get the same error about an invalid column and an exception is thrown. All queries that reference this column are failing.
The column in question was recently added to the database. It is a date column called Incident_Begin_Time_ts.
An example that fails is:
select * from PerfDiag
where Incident_Begin_Time_ts > '2010-01-01 00:00:00';
Other queries like Select MAX(Incident_Begin_Time_ts); also fail when run in code because it thinks the column is missing.
Any ideas?
Just press Ctrl + Shift + R and see...
In SQL Server Management Studio, Ctrl+Shift+R refreshes the local cache.
I suspect that you have two tables with the same name. One is owned by the schema 'dbo' (dbo.PerfDiag), and the other is owned by the default schema of the account used to connect to SQL Server (something like userid.PerfDiag).
When you have an unqualified reference to a schema object (such as a table) — one not qualified by schema name — the object reference must be resolved. Name resolution occurs by searching in the following sequence for an object of the appropriate type (table) with the specified name. The name resolves to the first match:
Under the default schema of the user.
Under the schema 'dbo'.
The unqualified reference is bound to the first match in the above sequence.
As a general recommended practice, one should always qualify references to schema objects, for performance reasons:
An unqualified reference may invalidate a cached execution plan for the stored procedure or query, since the schema to which the reference was bound may change depending on the credentials executing the stored procedure or query. This results in recompilation of the query/stored procedure, a performance hit. Recompilations cause compile locks to be taken out, blocking others from accessing the needed resource(s).
Name resolution slows down query execution as two probes must be made to resolve to the likely version of the object (that owned by 'dbo'). This is the usual case. The only time a single probe will resolve the name is if the current user owns an object of the specified name and type.
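A quick illustration of the difference, using the table name from the question (the non-dbo schema mentioned in the comment is hypothetical):

-- Unqualified: resolved against the caller's default schema first, so this
-- may bind to someuser.PerfDiag rather than dbo.PerfDiag.
select * from PerfDiag where Incident_Begin_Time_ts > '2010-01-01 00:00:00';

-- Schema-qualified: always binds to the dbo table, whoever runs it.
select * from dbo.PerfDiag where Incident_Begin_Time_ts > '2010-01-01 00:00:00';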
[Edited to further note]
The other possibilities are (in no particular order):
You aren't connected to the database you think you are.
You aren't connected to the SQL Server instance you think you are.
Double check your connect strings and ensure that they explicitly specify the SQL Server instance name and the database name.
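One quick check you can run on the suspect connection itself (plain T-SQL, nothing assumed beyond a SQL Server target):

-- Confirms which instance and database the session is actually talking to.
SELECT @@SERVERNAME AS instance_name, DB_NAME() AS database_name;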
In my case I restarted Microsoft SQL Server Management Studio and this worked for me.
If you are running this inside a transaction and a SQL statement before this point drops or alters the table, you can also get this message.
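A related single-batch sketch (the column name is borrowed from the question; the scenario itself is illustrative, not the poster's exact one):

-- The whole batch is compiled before it runs, so the reference to the freshly
-- added column fails with "Invalid column name" even though the ALTER is fine.
ALTER TABLE PerfDiag ADD Incident_Begin_Time_ts datetime;
SELECT Incident_Begin_Time_ts FROM PerfDiag;

-- Splitting the work into separate batches avoids the problem.
ALTER TABLE PerfDiag ADD Incident_Begin_Time_ts datetime;
GO
SELECT Incident_Begin_Time_ts FROM PerfDiag;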
I eventually shut down and restarted Microsoft SQL Server Management Studio, and that fixed it for me. At other times, just starting a new query window was enough.
If you are using variables with the same name as your column, it could be that you forgot the '@' variable marker. In an INSERT statement it will be detected as a column.
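A sketch of the same kind of mistake in a SELECT (the variable name is invented; the table and column come from the question):

DECLARE @begin_time datetime = '2010-01-01';

-- Forgetting the marker makes the parser treat begin_time as a column of
-- PerfDiag, so this fails with "Invalid column name 'begin_time'".
SELECT * FROM PerfDiag WHERE Incident_Begin_Time_ts > begin_time;

-- With the marker, the variable is used as intended.
SELECT * FROM PerfDiag WHERE Incident_Begin_Time_ts > @begin_time;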
Just had the exact same problem. I renamed some aliased columns in a temporary table which is further used by another part of the same code. For some reason, this was not captured by SQL Server Management Studio and it complained about invalid column names.
What I did was simply create a new query, copy-paste the SQL code from the old query into the new one, and run it again. This seemed to refresh the environment correctly.
In my case I was trying to get the value from the wrong ResultSet when querying multiple SQL statements.
In my case it seemed to be a weird caching problem, and the solutions above didn't work.
If your code was working fine until you added a column to one of your tables and it now gives the 'invalid column name' error, and the solutions above don't work, try this: first run only the section of code that creates the modified table, and then run the whole code.
Including this answer because this was the top result for "invalid column name sql" on google and I didn't see this answer here. In my case, I was getting Invalid Column Name, Id1 because I had used the wrong id in my .HasForeignKey statement in my Entity Framework C# code. Once I changed it to match the .HasOne() object's id, the error was gone.
I've gotten this error when running a scalar function using a table value, but the Select statement in my scalar function RETURN clause was missing the "FROM table" portion. :facepalms:
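A minimal sketch of that mistake (the function, table, and column names are all invented):

CREATE FUNCTION dbo.GetLabel (@id int)
RETURNS varchar(50)
AS
BEGIN
    -- The FROM clause was accidentally left out, so 'label' and 'id' cannot be
    -- resolved and SQL Server reports "Invalid column name".
    RETURN (SELECT label WHERE id = @id);
    -- Intended: RETURN (SELECT label FROM widgets WHERE id = @id);
END;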
Also happens when you forget to change the ConnectionString and query a table that has no idea about the changes you're making locally.
I had this problem with a View, but the exact same SQL code worked perfectly as a query. In fact, SSMS actually threw up a couple of other problems with the View that it did not have with the query. I tried refreshing, closing the connection to the server and going back in, and renaming columns - nothing worked. Instead I created the query as a stored procedure and connected Excel to that rather than the View, and this solved the problem.