We have a .NET application that stores data in PostgreSQL. The code works perfectly with PostgreSQL 8.3, but now we are trying to upgrade our PostgreSQL server from version 8.3 to 9.3 and the code breaks.
We connect to PostgreSQL using OLE DB.
The error we are getting is "Timestamp out of range". Looking through the logs, we see a weird timestamp value: "152085-04-28 06:14:51.818821".
Our application passes a value from .NET code to a PostgreSQL function parameter of type timestamp. Since we use OLE DB connections, we set the parameter type to OleDbType.DBTimeStamp and send a DateTime value from the .NET code. This works on PostgreSQL 8.3 but breaks on 9.3; the 9.3 logs show the parameter value arriving as "152085-04-28 06:14:51.818821".
We also tried executing the same function from sample .NET code using the Npgsql provider, passing a DateTime value with the parameter type NpgsqlDbType.TimestampTZ, and got correct results. In that case the PostgreSQL logs show the parameter value received by the function as "E'2014-01-30 12:17:50.804220'::timestamptz".
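Judging by the logs, the working Npgsql call is equivalent to invoking the function directly with an explicit cast ("save_reading" below is a placeholder for our actual function name):
-- Succeeds on 9.3: the value arrives as a text literal with a cast.
SELECT save_reading('2014-01-30 12:17:50.804220'::timestamptz);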
We tried other versions of PostgreSQL as well (9.1, 9.2, 9.3) and it breaks in all of them.
Any idea why this breaks in these newer versions of PostgreSQL when it works perfectly in 8.3?
Thanks
I have written a PySpark script that executes a SQL file. It worked perfectly fine on the latest Spark version, but the target machine has 2.3.1, and there it throws this exception:
pyspark.sql.utils.AnalysisException: u"Undefined function: 'array_remove'. This function is neither a registered temporary function nor a permanent function registered in the database 'default'
It seems these functions are not present in the older versions. Can anyone suggest something? I have searched a lot, but in vain.
The SQL piece that is failing is:
SELECT NIEDC.*, array_join(array_remove(split(groupedNIEDC.appearedIn,'-'), StudyCode),'-') AS subjects_assigned_to_other_studies
The array_remove and array_join functions were added in Spark 2.4. You can write a UDF and register it for use in the query instead; see the sketch below.
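A minimal sketch of that approach, assuming string arrays as in the failing query. Note that the built-in concat_ws already exists in 2.3 and accepts array&lt;string&gt;, so it can stand in for array_join:
from pyspark.sql import SparkSession
from pyspark.sql.types import ArrayType, StringType

spark = SparkSession.builder.getOrCreate()

# Backport of array_remove for Spark 2.3: drop every element equal to
# `value` from the array (None-safe).
def array_remove_compat(arr, value):
    if arr is None:
        return None
    return [x for x in arr if x != value]

spark.udf.register("array_remove", array_remove_compat, ArrayType(StringType()))

# The original expression then becomes, for example:
spark.sql(
    "SELECT concat_ws('-', array_remove(split('S1-S2-S3', '-'), 'S2')) AS kept"
).show()  # -> S1-S3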
I'm working on integration tests for a JPA project. The tests run on an embedded H2 database. However, I'm getting an error from H2 during Hibernate schema generation when I use
@Column(columnDefinition = "INTERVAL HOUR TO MINUTE")
The error is org.h2.jdbc.JdbcSQLException: Unknown data type: "INTERVAL";
The h2 documentation indicates that INTERVAL is supported:
http://www.h2database.com/html/datatypes.html#interval_type
I am using h2 version 1.4.197
Stepping away from JPA and working directly in the H2 console, I tried the following script, which also generates the "Unknown data type" error:
CREATE TABLE test_interval (id INTEGER, test_hours INTERVAL HOUR TO MINUTE);
I have tried other variations of the INTERVAL type, all of which generate the same error.
I cannot find any discussion of this issue anywhere.
You need to use a more recent version of H2. H2 has supported the standard INTERVAL data type since 1.4.198, but 1.4.198 is a beta-quality version; use a more recent one, such as 1.4.199 or 1.4.200.
The online documentation applies only to the latest release, currently 1.4.200. If you use an older version, you have to use the documentation bundled with its distribution.
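As a quick check once a newer jar is on the classpath, both the original DDL and a standard interval literal should be accepted (a minimal sketch, assuming H2 1.4.199 or later):
-- Fails with "Unknown data type" on 1.4.197; works on 1.4.199+.
CREATE TABLE test_interval (id INTEGER, test_hours INTERVAL HOUR TO MINUTE);
INSERT INTO test_interval VALUES (1, INTERVAL '2:30' HOUR TO MINUTE);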
Updated question:
I am working with a scenario where the source Oracle schema does not store a field, say "Date of Birth", in encrypted format, but when using a SELECT statement I want the output to be encrypted. I do not know the Oracle version, so I cannot look up the appropriate function in the documentation.
I have worked with MySQL and am familiar with its PASSWORD() function. I am looking for a similar function for Oracle, in SQL rather than PL/SQL, as I cannot use PL/SQL in my application; it has to be a one-line query that fetches the results. I tried DBMS_CRYPTO per the documentation (https://docs.oracle.com/cd/B19306_01/appdev.102/b14258/d_crypto.htm#i1004271), but my application gets an error fetching the data; it is possible that the DB version does not support DBMS_CRYPTO.
Any other suggestions for a function that can display a non-encrypted field in encrypted form in a SELECT query over an Oracle (thin) connection?
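If the database turns out to be 12c or later, one option that needs no PL/SQL and no extra grants is STANDARD_HASH; table and column names below are hypothetical. Note that, like MySQL's PASSWORD(), this produces a one-way hash rather than reversible encryption:
-- SHA-256 hash of the formatted date; "persons"/"date_of_birth" are placeholders.
SELECT STANDARD_HASH(TO_CHAR(date_of_birth, 'YYYY-MM-DD'), 'SHA256') AS dob_masked
FROM persons;
-- On versions before 12c, ORA_HASH(date_of_birth) is a built-in
-- (but much weaker) alternative that also works in plain SQL.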
I have migrated a Sybase database to SQL Server 2008.
The main application that uses the database tries to set some datetime2 columns with data like 1986-12-24 16:56:57:81000, which gives this error:
Conversion failed when converting date and/or time from character string.
Running the same query using a dot (.) instead of a colon (:) as the millisecond separator, like 1986-12-24 16:56:57.81000, or limiting the milliseconds to 3 digits, like 1986-12-24 16:56:57:810, solves the problem.
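The difference is easy to reproduce in a query window:
-- Fails with the conversion error: colon before the fractional seconds
SELECT CAST('1986-12-24 16:56:57:81000' AS datetime2);
-- Works: dot as the fractional-second separator
SELECT CAST('1986-12-24 16:56:57.81000' AS datetime2);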
NOTE:
1. I don't have access to the application source to fix this issue, and there are lots of tables with the same problem.
2. The application connects to the database using ODBC.
Is there any quick solution, or should I write lots of triggers on all the tables to fix it using the solutions above?
Thanks in advance
As Gordon Linoff said:
A trigger on the current table is not going to help because the type conversion happens before the trigger is called. Think of how the trigger works: the data is available in a "protorow".
But there is a simple answer!
Using a SQL Server Native Client connection instead of the basic SQL Server ODBC driver handles everything; only the driver name in the connection string changes (see the example after the notes below).
Note:
1. Since I use SQL Server 2008, version 10 of SQL Server Native Client works fine, but not version 11 (that one is for SQL Server 2012).
2. The "Use Regional Settings" option causes some other conversion problems, so don't enable it if you don't need it.
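For reference, only the Driver keyword differs between the two connection strings (server and database names below are placeholders):
Driver={SQL Server};Server=myServer;Database=myDb;Trusted_Connection=yes;
Driver={SQL Server Native Client 10.0};Server=myServer;Database=myDb;Trusted_Connection=yes;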
SELECT REPLACE(GETDATE(), ':', '.')
But this implicitly converts the date to a string, and the result will not convert back into a datetime.
Why would you need triggers? You can use update to change the last ':' to '.':
-- STUFF replaces the single character at position 20 (the colon before
-- the milliseconds in 'YYYY-MM-DD hh:mm:ss:nnnnn') with a dot.
update t
    set col = stuff(col, 20, 1, '.');
You also mistakenly describe the column as datetime2. That uses an internal date/time format. Your column is clearly a string.
EDIT:
I think I misinterpreted the question (assuming the data is already in a table). Bring the data into staging tables and do the conversion in another step.
A trigger on the current table is not going to help because the type conversion happens before the trigger is called. Think of how the trigger works: the data is available in a "protorow".
You could get a trigger to work by creating views and building a trigger on a view, but that is even worse. Perhaps the simplest solution would be:
Change the name and data type of the column so it contains a string.
Add a computed column that converts the value to datetime2.
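A minimal sketch of those two steps, assuming hypothetical table/column names and the fixed 'YYYY-MM-DD hh:mm:ss:nnnnn' layout from the question:
-- 1. Keep the incoming value as a string: rename the column, then retype it.
EXEC sp_rename 'dbo.MyTable.event_time', 'event_time_raw', 'COLUMN';
ALTER TABLE dbo.MyTable ALTER COLUMN event_time_raw varchar(27);

-- 2. Expose the parsed value through a computed column; STUFF swaps the
--    colon at position 20 for a dot before the conversion.
ALTER TABLE dbo.MyTable ADD event_time AS
    CONVERT(datetime2, STUFF(event_time_raw, 20, 1, '.'));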
Unfortunately we are using the Advantage Database Server Torture Edition Version 8.1.
After I had finished my project, I heard that the database is configured to be case-sensitive. So I changed the table structure, converting all Char data types to CIChar, a case-insensitive character field. But I get this error while executing my program:
Advantage.Data.Provider.AdsException: Error 7200: AQE Error: State = HY000; NativeError = 2214; [Extended Systems][Advantage SQL Engine] Invalid coercion: Result of expression is an ambiguous character type.
I found that ISNULL(myciChar, '') is causing this problem, but I don't understand why.
How can I solve this problem? Are there other known issues with the CIChar data type?
Any help will be appreciated. Thanks.
[update]
I found the reason for this error. There were two points to clarify.
The database server is version 8.1, but the Data Architect is version 7.1, and in local mode it uses the Data Architect's engine, version 7.1. That means it's a v7.1 issue.
In version 7.1, the second parameter of the ISNULL function defaults to a case-sensitive collation, while my column mytext is CIChar, and that combination is the ambiguous character type. So if someone has the same problem: it will work in v7.1 with the COLLATE declaration.
works in v7.1:
select myid, isnull(mytext, '-' COLLATE ads_default_ci) as mytext from mytable
error in v7.1:
select myid, isnull(mytext, '-') as mytext from mytable
We would probably need to see more of your expression to tell you why you are getting this error. It has less to do with the ISNULL itself than with the expression the ISNULL is used in. The engine likely encountered a comparison and couldn't automatically determine whether you wanted it to be case-sensitive or not. See the documentation, which describes case-insensitive precedence and coercion.
If it is not used in an expression, it might just be a bug (I just tried with 10.1 and it worked). Verify you are using the latest version of 8.1, which is 8.10.0.38.
I don't think version 7 supports the CIChar data type; version 6 certainly doesn't. I was using version 6 and went straight to 8, and found that everything worked with the version 6 client driver until I used the CIChar data type, and then it was game over for Advantage. I then had to update all the clients to the same version as the server. It is best to keep all the versions the same.