Oracle 9i index error

I have the following query:
select *
from gps_servicio ser
where ser.id in (select idserv from gps_agentes where idagen = 8073061);
This query works perfectly until I create an index on the gps_agentes table, on the idserv field (ascending). Once I do that, the query breaks and I get no results from it. Is this a bug?
Both gps_servicio.id and gps_agentes.idserv are number(10,0) fields, and I have a FK on gps_agentes.idserv that points to gps_servicio.id.
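For reference, the index is created with something along these lines (the index name here is made up; the real one is simply an ascending index on that column):
-- hypothetical name; the real index is just an ascending index on gps_agentes.idserv
create index idx_gps_agentes_idserv on gps_agentes (idserv asc);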
Thx for your time!

If the results of a query change when you create an index, that indicates a bug, yes. If you are encountering a bug, you would need to report that to Oracle Support to determine whether the bug you are encountering is already fixed by an existing patch or whether it is a new bug that no one has encountered before.
However, given that you say that you are using 9i, a version of the database that is at least 5 major releases old and has been out of primary support for many years, my wager is that you are running without a support contract and without access to Oracle Support. Are you at least running the latest patchset of whatever version of Oracle you are using ("9i" covers two major releases, 9.0.1 and 9.2)?
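If patching is not an option, one thing you can do in the meantime is check whether the wrong results follow the access path. This is only a diagnostic sketch (the index name is hypothetical, and rewriting the query is not a guaranteed fix for an optimizer bug):
-- the index name here is hypothetical; substitute the one you actually created
alter index idx_gps_agentes_idserv rebuild;

-- equivalent EXISTS form of the same query; if this returns rows while the
-- IN form does not, that points even more strongly at the access path
select *
from gps_servicio ser
where exists (select 1
              from gps_agentes age
              where age.idserv = ser.id
                and age.idagen = 8073061);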

Jasper Reports: How to use a SQL result in the title block and use it for subreporting?

In Jasper Reports, how do I use a SQL result in the title block and use it for subreporting?
I have previously used a SQL result in the title block successfully when there is only one result and I am not depending on it to spawn multiple subreport sections. In this instance I would like to do both: use the SQL result and have multiple subreports.
My current simplified query (Oracle SQL)
SELECT DISTINCT
LOC."NAME" AS LOCATION,
coalesce($P{input_date},($P!{default_date})) as ACTUAL_TIME
FROM
"CF"."LOCATION" LOC
WHERE
(LOWER(LOC."NAME") LIKE '%'||LOWER($P{input_location})||'%')
There are three locations, and each generates two subreports grouped by location.
default_date is defined as the result of an Oracle query:
"SELECT START_TIME FROM CF.LAST_OPERATIONAL_DAY"
Operational days do not run from midnight to midnight.
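Expanded by hand, the $P!{default_date} substitution means the main query effectively becomes the following when input_date is left null (just a sketch of the expansion, assuming CF.LAST_OPERATIONAL_DAY always returns a single row):
SELECT DISTINCT
LOC."NAME" AS LOCATION,
coalesce($P{input_date},
         (SELECT START_TIME FROM CF.LAST_OPERATIONAL_DAY)) as ACTUAL_TIME
FROM
"CF"."LOCATION" LOC
WHERE
(LOWER(LOC."NAME") LIKE '%'||LOWER($P{input_location})||'%')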
1) I have been using this default_date technique in most of the reports, hopefully easing maintainability. However, evaluating the actual time three times could cause the report to cover different days if it is run at the wrong moment.
2) If I put $F{ACTUAL_TIME} in my Title, it shows as null. (In our simpler reports, the primary query returns one value, and this field is usable in the Title.)
My searching has led me to believe that subqueries are not permitted except in a list, table, or subreport.
There is the possibility of reimplementing the Oracle function in Java to set a new parameter: if input_date is null, use that new parameter instead. But I fear the maintenance headache as we proceed.
There is the possibility of creating another layer of sub-subreport. That would make the few most complicated reports more complicated.
Is there a simpler option? Can queries be done at the parameter level? In the header? Is there some setting I'm missing?
We are using Jasper Server 6.1.1 and Jaspersoft Studio 6.1.1, and have the flexibility to upgrade if necessary. The database is Oracle 12c.

Error: Schema changed for Timestamp field (additional)

I am getting an error message when I query a specific table in my data set that has a nullable timestamp field. In the BigQuery web tool, I run a simple query, e.g.:
SELECT * FROM [reztrack.201401] LIMIT 100
The result I get is: Error: Schema changed for Timestamp field date
Example Job ID: esiteisthebomb:job_6WKi7ZhSi8D_Ewr8b5rKV-a5Eac
This is the exact same issue that was noted here: Error: Schema changed for Timestamp field.
I also logged this under https://code.google.com/p/google-bigquery/issues/detail?id=307, but I was unsure, since it said we should be logging everything on Stack Overflow.
Any information on how to fix this for this or other tables would be greatly appreciated.
Note: The original answer says to contact Google support, but Google support for BigQuery was moved to Stack Overflow. Therefore I assume that means opening it as a new question in the hope that the engineers will respond.
BigQuery recently improved the representation of its internal timestamp format (there had previously been a lot of cases where timestamps broke in strange ways, and this change should fix that). Your table was still using the old timestamp format, and you tickled a bug in the old format that shows up when schemas change (in this case, the field went from REQUIRED to OPTIONAL).
We have an automated process that coalesces tables to make their storage more efficient. I scheduled this to run over your table, and have verified that it has rewritten your table using the new timestamp format.
You should now be able to query this field of your table without further problems.

Why are dot-separated prefixes ignored in the column list for INSERT statements?

I've just come across some SQL syntax that I thought was invalid, but actually works fine (in SQL Server at least).
Given this table:
create table SomeTable (FirstColumn int, SecondColumn int)
The following insert statement executes with no error:
insert SomeTable(Any.Text.Here.FirstColumn, It.Does.Not.Matter.What.SecondColumn)
values (1,2);
The insert statement completes without error, and checking select * from SomeTable shows that it did indeed execute properly. See fiddle: http://sqlfiddle.com/#!6/18de0/2
SQL Server seems to just ignore anything except the last part of the column name given in the insert list.
Actual question:
Can this be relied upon as documented behaviour?
Any explanation about why this is so would also be appreciated.
It's unlikely to be part of the SQL standard, given its dubious utility (though I haven't checked specifically (a)).
What's most likely happening is that it's throwing away the non-final part of the column specification because it's superfluous. You have explicitly stated what table you're inserting into, with the insert into SomeTable part of the command, and that's the table that will be used.
What you appear to have done here is to find a way to execute SQL commands that are less readable but have no real advantage. In that vein, it appears similar to the C code:
int nine = 9;
int eight = 8;
xyzzy = xyzzy + nine - eight;
which could perhaps be better written as xyzzy++; :-)
I wouldn't rely on it at all, partly because it's not standard but mostly because it makes maintenance harder rather than easier, and because I know DBAs all over the world would track me down and beat me to death with IBM DB2 manuals, their weapon of choice thanks to its voluminous size and skull-crushing ability :-)
(a) I have checked non-specifically, at least for ISO 9075-2:2003, which defines the SQL:2003 language.
Section 14.8 of that standard covers the insert statement and it appears that the following clause may be relevant:
Each column-name in the insert-column-list shall identify an updatable column of T.
Without spending a huge amount of time (that document is 1,332 pages long and would take several days to digest properly), I suspect you could argue that the column could be identified just using the final part of the column name (by removing all the owner/user/schema specifications from it).
Especially since it appears only one target table is possible (updatable views crossing table boundaries notwithstanding):
<insertion target> ::= <table name>
Fair warning: I haven't checked later iterations of the standard so things may have changed. But I'd consider that unlikely since there seems to be no real use case for having this feature.
This was reported as a bug on Connect and despite initially encouraging comments the current status of the item is closed as "won't fix".
The ORDER BY clause used to behave in a similar fashion, but that one was fixed in SQL Server 2005.
I get errors when I try to run the script on SQL Server 2012, as well as on SQL Server 2014 and SQL Server 2008 R2, so you certainly cannot rely on the behavior you see on SQL Fiddle.
Even if this were to work, I would never rely on undocumented behavior in production code. Microsoft will include notice of breaking changes for documented features, but not for undocumented ones. So if this were an actual T-SQL parsing bug that was later fixed, the fix would break the malformed code.
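If nothing else, the documented form costs nothing extra to write and removes the risk entirely; a minimal sketch of the same insert without the bogus prefixes:
-- plain, documented form of the same statement
insert into SomeTable (FirstColumn, SecondColumn)
values (1, 2);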

IBM Informix SQL Syntax assistance

I'm wondering if anyone can help me with some SQL here. I'm tasked with retrieving some data from a legacy DB system; it's an IBM Informix DB running v7.23C1. It may well be that what I'm trying to do here is pretty simple, but for the life of me I can't figure it out.
I'm used to MS SQL Server rather than any other DB system, and this one seems quite old: http://publib.boulder.ibm.com/epubs/pdf/3731.pdf (?)
Basically, I just want to run a query that includes nesting, but I can't seem to figure out how to do this. So for example, I have a query that looks like this:
SELECT cmprod.cmp_product,
(stock.stk_stkqty - stock.stk_allstk) stk_bal,
stock.stk_ospurch,
stock.stk_backord,
'Current Sales Period',
'Current Period -1',
'Current Period -2',
cmprod.cmp_curcost,
stock.stk_credate,
stock.stk_lastpurch,
stock.stk_binref
FROM informix.stock stock,
informix.cmprod cmprod
WHERE stock.stk_product = cmprod.cmp_product
AND (cmp_category = 'VOLV'
OR cmp_category = 'VOLD'
OR cmp_category = 'VOLA')
AND stk_loc = 'ENG';
Now, basically, where I have placeholder values like 'Current Period -1' I want to include a nested field that runs a query to get the sales within a given date range. I can write those queries separately, but I can't seem to get the engine to accept the code when I try to execute it all together.
Probably something like (NB, this specific query is for another column, but you get the idea):
SELECT s.stmov_product, s.stmov_trandate, s.stmov_qty
FROM informix.stmove s
WHERE s.stmov_product = '1066823'
AND s.stmov_qty > 0
AND s.stmov_trandate IN (
SELECT MAX(r.stmov_trandate)
FROM informix.stmove r
WHERE r.stmov_product = '1066823'
AND r.stmov_qty > 0)
What makes things a little worse is I don't have access to the server that this DB is running on. At the moment I have a custom C# app that connects via an ODBC driver and executes the raw SQL, parsing the results back into a .CSV.
Any and all help appreciated!
In any case, Informix 7.23 is so geriatric that it is unkind to still be running it. It is not clear whether this is an OnLine (Informix Dynamic Server, IDS) or SE (Standard Engine) database. Either way, 7.23 was the version prior to the Y2K-certified 7.24 releases, so it is 15 years old or thereabouts, maybe a little older.
The syntax supported by Informix servers back in the days of 7.23 was less comprehensive than it is in current versions, so you'll need to be careful. You should have the manuals for the server — someone, somewhere in your company should. If not, you'll need to try finding them in the graveyard-manuals section of the IBM Informix web pages (start at http://www.informix.com/ for simplicity of URL; archaic manuals take some finding, but you should be able to get there from http://pic.dhe.ibm.com/infocenter/ifxhelp/v0/index.jsp by choosing 'Servers' in the LHS).
If you are trying to write:
SELECT ...
(SELECT ... ) AS 'Current - 1',
(SELECT ... ) AS 'Current - 2',
...
FROM ...
then you need to study the server SQL Syntax for 7.23 to know whether it is allowed. AFAICR, OnLine (Informix Dynamic Server) would allow it and SE probably would not, but that is far from definitive. I simply don't remember what the limitations were in that ancient a version.
Judging from the 7.2 Informix Guide to SQL: Syntax manual (dated April 1996 — 17 years old), you cannot put a (SELECT ...) in the select-list in this version of Informix.
You may have to create a temporary table holding the results you want (along with appropriate key information), and then select from the temporary table in the main query.
This sort of thing is one of the problems with not updating your server for so long.
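To make the temporary-table route concrete, here is a rough sketch. It is untested against anything as old as 7.23, the date range and temp-table name are invented for illustration, and the date literal format depends on your DBDATE/locale settings:
-- Step 1: materialise one period's sales per product into a temp table
SELECT r.stmov_product, SUM(r.stmov_qty) period_sales
FROM informix.stmove r
WHERE r.stmov_qty > 0
AND r.stmov_trandate BETWEEN '01/01/2013' AND '31/01/2013'
GROUP BY r.stmov_product
INTO TEMP t_cur_period;

-- Step 2: join the temp table into the main query in place of the
-- 'Current Sales Period' placeholder (inner join shown; products with no
-- sales in the period will drop out unless you use an outer join or NVL)
SELECT cmprod.cmp_product,
(stock.stk_stkqty - stock.stk_allstk) stk_bal,
t.period_sales,
cmprod.cmp_curcost
FROM informix.stock stock,
informix.cmprod cmprod,
t_cur_period t
WHERE stock.stk_product = cmprod.cmp_product
AND t.stmov_product = stock.stk_product
AND stock.stk_loc = 'ENG';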
Sorry to be blunt, but can you at least have mercy on us by shortening the syntax? Long table.column references can be shortened with table aliases (AS). Meanwhile, if you can't upgrade to a newer version of Informix, you will have to rely on one or more SELECT ... INTO TEMP table queries to achieve your objective, which, BTW, would also make your coding more portable across different versions. You also have to evaluate whether using TEMP tables implies unacceptable processing times.

Strange SQL Server type conversion issue

I experienced a strange issue today. One of my projects runs on .NET + SQL Server 2005 Express.
There is one query I use for some filtering:
SELECT *
FROM [myTable]
where UI = 2011040773395012950010370
GO
SELECT *
FROM [myTable]
where UI = '2011040773395012950010370'
GO
The UI column is nvarchar(256) and the UI value passed to the filter is always 25 digits.
In my DEV environment, both queries return the same row with no errors. However, at my customer's site, after a few months of running fine, the first version started to return a type conversion error.
Any idea why?
I'm not looking for a solution; I'm looking for an explanation of why it works in one environment and not in the other, and why it suddenly started returning errors instead of results. I'm using the same tools on both (SQL Server Management Studio Express and two different .NET clients).
The environments are more or less the same (W2k3 + SQL Server 2005 Express).
This is completely predictable and expected because of data type precedence.
For this one, the UI column will be converted to decimal(25,0):
where UI = 2011040773395012950010370
This one is almost correct. The right-hand side is varchar and is converted to nvarchar:
where UI = '2011040773395012950010370'
This is the truly correct version, where both types are the same:
where UI = N'2011040773395012950010370'
Errors will have started because the UI column now contains a value that won't CAST to decimal(25,0).
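A minimal sketch that reproduces the failure (the table and the 'bicycle' row are illustrative, not taken from the real system):
create table myTable (UI nvarchar(256));
insert into myTable values (N'2011040773395012950010370');
insert into myTable values (N'bicycle');   -- a non-numeric value creeps in

-- data type precedence converts UI to decimal(25,0), so this errors
-- as soon as the scan reaches 'bicycle'
select * from myTable where UI = 2011040773395012950010370;

-- comparing like with like avoids the implicit conversion entirely
select * from myTable where UI = N'2011040773395012950010370';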
Some unrelated notes:
If you have an index on the UI column, it would be ignored in the first version because of the implicit CAST required.
Do you need Unicode to store numeric digits? There is a serious overhead with Unicode data types in storage and performance.
Why not use char(25) or nchar(25) if values are always fixed length (see the sketch after these notes)? Your queries use too much memory, as the optimiser assumes an average length of 128 characters based on nvarchar(256).
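For illustration only (these names are invented, not from the original schema), a narrower fixed-length column with a matching literal sidesteps both the conversion and the memory-grant problem:
-- char(25) is enough for a fixed 25-digit key and avoids the Unicode overhead
create table myTableTuned (UI char(25) not null);
create index IX_myTableTuned_UI on myTableTuned (UI);

-- the literal stays a character type, so there is no numeric conversion,
-- and the optimiser's length estimate matches the real data
select * from myTableTuned where UI = '2011040773395012950010370';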
Edit, after comment
Don't ask "why does it work sometimes" when you don't know that it does work.
Examples:
The value could have been deleted then added later
A TOP clause or SET ROWCOUNT could mean the offending value is not reached
The query was never run so it couldn't fail
The error is silently ignored by some other code?
Edit 2 for hopefully more clarity
Chat
gbn:
When you run WHERE UI = 2011040773395012950010370, you do not know the order of row access. So if one row does have "bicycle" you may or may not hit that row.
Random:
So the problem may not be in the row I was trying to access, but in another row with a corrupted value?
gbn:
Different machines will have a different order of reads based on service pack level, index and table fragmentation, number of CPUs, maybe parallelism.
Correct.
And even TOP. That kind of stuff.
As Tao mentions, it's important to understand that another, unrelated row can break the query even if this one is OK.
Data type precedence can cause ALL the data in that column to be converted before the WHERE clause is evaluated.