UNDEFINED data type when reading SQL database from Lotus Notes using ODBC: nvarchar

This is the second time this has happened to me, and before modifying a 3rd-party database structure I wanted to know if anyone knows a better solution.
I'm accessing a MS SQL Server 2008 database from a Lotus Notes agent (Notes 7) to retrieve some data. I use LSXODBC and my "Select" statement works perfectly... except that my agent cannot "understand" Nvarchar SQL field types. Every other data type works fine (I can get the values from number and date fields without a problem).
It took me a while to figure it out, and I couldn't find a solution (other than changing the field types on the SQL table from Nvarchar to Varchar).
I could replicate this in both MS SQL 2005 and 2008.
My last "elegant" solution was to create an SQL view -instead of modifying the table structure- with varchar types instead of nvarchar. It works fine, but I have to create a view for each table I'm retrieving data from.
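A minimal sketch of that kind of view, in case it helps anyone else (the table and column names here are made up for illustration):

    -- Hypothetical view: expose NVARCHAR columns as VARCHAR so the ODBC
    -- driver used by the Notes agent sees a type it understands.
    CREATE VIEW dbo.vw_Customers_ForNotes AS
    SELECT
        CustomerId,                                          -- numeric/date columns pass through unchanged
        CAST(CustomerName AS VARCHAR(100)) AS CustomerName,  -- NVARCHAR(100) -> VARCHAR(100)
        CAST(City         AS VARCHAR(50))  AS City           -- NVARCHAR(50)  -> VARCHAR(50)
    FROM dbo.Customers;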
I tried to set the field type using the FieldExpectedDataType method, but that didn't work either; I still got DB_TYPE_UNDEFINED.
Could this be a configuration issue? Or maybe I'm using an old Lotus Notes version / ODBC driver version?
Any hint would be greatly appreciated.
Thank you in advance.
Diego

An old ODBC driver may not support Unicode; it was not added until SQL Server 2000 (I'm fairly sure).

MS Access BigInt SQL Server Issues

I have an Access database that we use for simple reporting solutions; it pulls data from a remote database through an ODBC link. The data warehouse provider has recently added a new field to all of their tables which is formatted as a 'BIGINT'.
Access now shows all records as deleted because it cannot deal with the BIGINT column in the linked table.
As the data warehouse will not change their tables, is there any way I can get MS Access to display the data correctly and ignore the 'BIGINT' field in the table linking?
At the moment I am working around this by copying the entire data warehouse, minus this column, to a MySQL DB daily, which is far from ideal...
I cannot for the life of me work this out.
Instead of using a linked table, just write a pass-through query in Access. If necessary, CONVERT your BigInt into a string or integer, depending on the contents.
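For example, the pass-through query's SQL could look something like this (just a sketch: the table and column names are made up, and the query's ODBC connect string still has to be set in Access):

    -- Hypothetical pass-through query text, executed verbatim by SQL Server.
    -- The BIGINT column is returned to Access as text so the link does not break.
    SELECT OrderId,
           CONVERT(varchar(20), WarehouseRowId) AS WarehouseRowId,
           OrderDate
    FROM dbo.WarehouseOrders;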
This link suggests loading the data into a local table with a data type of string:
http://social.msdn.microsoft.com/Forums/office/en-US/fb6f99ec-2ed7-487b-ba39-0777a0b44d5f/the-bigint-problem?forum=accessdev
Perhaps consider that MS Access's usefulness is limited here and it may pay to use SQL Server in the future, as you will continue to run into these kinds of problems. Is there any reason you can't use the data warehouse directly?
You may also wish to consider using an .ADP (a file type of MS Access) which has a native OLE DB connection to the SQL Server database (no ODBC flimflammery) but also all the usual forms and reports.
ADPs are deprecated, but I have had great success with them.
This is an old thread, but you can create a view and cast the bigint as int, and then Access will link to it.
Greg
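Something along these lines on the SQL Server side (illustrative names only; see the caveat in the next answer about values too big for an int):

    -- Hypothetical view for Access to link to instead of the base table.
    -- Note: CAST(... AS int) raises an error for values above 2,147,483,647.
    CREATE VIEW dbo.vw_WarehouseOrders_ForAccess AS
    SELECT OrderId,
           CAST(WarehouseRowId AS int) AS WarehouseRowId,
           OrderDate
    FROM dbo.WarehouseOrders;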
This is an old thread, but:
Casting your bigint as an int, as someone has suggested, isn't going to work if any of the values in your bigint column are bigger than the maximum value for an int (and if none of them are bigger than the max value for an int, it makes you wonder why a bigint is being used in the first place).
MS Access (from Access 2000 onward) does have a Decimal data type, which is good for numbers up to the maximum size of a SQL Server bigint and more. So if you make your MS Access field of type Decimal, it can handle anything a SQL Server bigint can throw at it. In your process of taking the data from the SQL Server database into your MS Access database, you would need something done programmatically along the way to slurp your bigint values from SQL Server and squirt them into MS Access as decimal.
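One way to realise that on the SQL Server side (same made-up names as above) is to cast to a decimal wide enough for any bigint, which Access can then link as a Decimal field:

    -- Hypothetical variant of the view above: decimal(20,0) covers the
    -- full bigint range, so no values are lost to overflow.
    CREATE VIEW dbo.vw_WarehouseOrders_ForAccess AS
    SELECT OrderId,
           CAST(WarehouseRowId AS decimal(20,0)) AS WarehouseRowId,
           OrderDate
    FROM dbo.WarehouseOrders;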

Oracle Database Users in synced database

OK, so I have a little problem...
In my project we have an Oracle database server. In the database I have access to some of another user's tables:
Tables:
  |-bla
  |-bla
Users:
  |-otherUser (let's just call him that)
    |-Tables:
      |-aTable
In Oracle, to access the aTable table I use SELECT * FROM otherUser.aTable
Now, we also have an MS SQL CE database to which I sync the data from the Oracle DB using the MS Sync Framework. In the CE db, after the sync, I get a table called otherUser.aTable. That sounds good: even though CE doesn't have the user concept, it just adds a table with the same name.
BUT the problem is that when I run the same SQL query on CE as on Oracle, I get a "The table name is not valid" error. If I want to get the content of the table, the two ways I have found to work are surrounding otherUser.aTable with either [] or "".
However, neither of them seems to work with Oracle. The [] seems to be an illegal name, and the "" seems to search for a table called exactly that (not another user's table).
So why don't I just use the one way on Oracle and the other on CE? Well, I also use NHibernate as an ORM, and it kind of needs the same table name for both databases...
Is there a third way to quote the table name that works with users in Oracle and also works in CE? Or do you have any other ways to fix this issue?
I have no experience with MS SQL, but it seems like a problem that might be solved with synonyms on the Oracle side.
Try creating a synonym named "otherUser.aTable" for otherUser.aTable in Oracle.
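A minimal sketch of that idea (assuming you have the CREATE SYNONYM privilege; quoted identifiers in Oracle may contain a dot, so the synonym's name can literally be otherUser.aTable):

    -- Hypothetical: one quoted name that resolves on both databases.
    CREATE SYNONYM "otherUser.aTable" FOR otherUser.aTable;

    -- Oracle (via the synonym) and SQL CE (via its table name) should then
    -- both accept the same quoted identifier:
    SELECT * FROM "otherUser.aTable";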

How to insert LONG BINARY from SQL Server to Oracle

I need to get a copy of a SQL Server 2008 table into an Oracle RDBMS. I have a database link to the SQL Server, and the source database has a table which contains a LONG BINARY column.
When I issue
create table test_ora as select * from mssqltable@dblink
I get the error
Can't convert LONG
I tried to_lob, to_char, hextoraw and a raft of other Oracle conversion functions but still haven't beaten the issue. Do you have any ideas?
P.S. I'm not at work right now, so I can't give the exact ORA- error number.
There is a way to do that with an undocumented Oracle package:
http://tonguc.wordpress.com/2008/08/28/how-to-transfer-long-datatype-over-dblink/
I would recommend a tool called Pentaho Data Integration. It is a free, small, and superb ETL tool.
Download page: community.pentaho.com
It will recreate all tables and types for you. How to do it:
pldwh.blogspot.co.uk/2013/03/pentaho-data-integration-create-tables_1.html

Conversion failed when converting the nvarchar value '' to data type int

While this may sound like a beginner 101 problem, I think it is a bit more complicated.
I have two instances of SQL Server; one is a log-shipped, read-only standby copy of the master database that is used for reporting purposes.
They are both 64-bit SQL 2005, SP3.
The LogShipped Instance is: 9.00.4035.00 (Standard Edition)
The Original Instance is: 9.00.4035.00 (Enterprise Edition) in an Active / Passive Cluster.
Server collation is Latin1_General_CI_AI on both and they both run on Server 2003 64 bit.
I have a query that runs and executes fine on the master database server, but it fails on the standby / read-only copy with an nvarchar-to-int conversion error.
The code is identical and I've copied and pasted it from the main instance's query window just to double check.
Is there a bug in SQL Server somewhere? I can paste the query if needed (it's a bog-standard select with some in-line tables).
I just don't understand why it works on the one instance yet the log-shipped copy fails.
Any pointers are much appreciated.
-- Edit
I have found the culprit... the log-shipped standby database contains invalid data that isn't in the primary database. Quite why they are out of sync I do not know yet, as the transaction log shipping is still working and I have no errors in the job history.
Just a few orphaned, invalid records that are not in the primary DB... how odd.
Are you sure the data is the same?
If you try
select convert(int, char(10))
you will probably get a similar error, so your query will fail if the value you are converting is a char(10) (a line feed), which may not be obvious when viewing the data.
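A quick way to see this in action (the table and column names in the second query are hypothetical):

    -- CHAR(10) is a line feed: it looks like blank/empty data in the results
    -- grid, but converting it to int fails with this same kind of error.
    SELECT CONVERT(int, CHAR(10));

    -- Hunt for rows whose "numeric" text actually contains hidden whitespace:
    SELECT *
    FROM dbo.SomeTable
    WHERE SomeColumn LIKE '%' + CHAR(10) + '%'
       OR SomeColumn LIKE '%' + CHAR(13) + '%';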
Also check whether any trigger is updating a view (one with int for the specific column) based on your table.

Is there an Access equivalent of the SQL Server NewId() function?

I have written SQL statements (stored in a text document) that load data into a SQL Server database. These statements need to be repeated daily. Some of the statements use the NewId() function to populate a keyed field in the database, and this works fine.
While I'm in the process of writing an application to replicate these statements, I want to use Access queries and macros instead of copying and pasting queries into SQL Server, thus saving me time on a daily basis. All is working fine, but I can't find any function that will replace the SQL Server NewId() function. Does one exist, or is there a workaround?
I'm using SQL Server 2005 and Access 2007.
On top of matt's answer, you could simply use a pass-through query and just use your existing, working queries from MS Access.
A solution would be to insert the stguidgen() function into your code, which you can find here: http://trigeminal.fmsinc.com/code/guids.bas (archived copy: https://web.archive.org/web/20190129105748/http://trigeminal.fmsinc.com/code/guids.bas)
The only workaround I can think of would be to define the column in your Access database as type "Replication ID" and make it an AutoNumber field. That will automatically generate a unique GUID for each row, and you won't need to use newid() at all. In SQL Server, you would just make the default value for the column "newid()".
Again, there seems to be confusion here.
If I'm understanding correctly:
You have an Access front end.
You have a SQL Server 2005 back end.
What you need is the ability to generate the GUID in the SQL Server table. So, answers that suggest adding an AutoNumber field of type ReplicationID in Access aren't going to help, as the table isn't a Jet table but a SQL Server table.
The SQL can certainly be executed as a passthrough query, which will hand off everything to the SQL Server for processing, but I wonder why there isn't a default value for this field in SQL Server? Can SQL Server 2005 tables not have NewId() as the default value? Or is there some other method for having a field populate with a new GUID? I seem to recall something about using GUIDs and marking them "not for replication" (I don't have access to a SQL Server right at the moment to look this up).
Seems to me it's better to let the database engine do this kind of thing, rather than executing a function in your SQL to do it, but perhaps someone can enlighten me on why I'm wrong on that.
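For what it's worth, SQL Server 2005 does accept NEWID() (or NEWSEQUENTIALID()) as a column default, so the engine can generate the GUID on insert. A minimal sketch with made-up table and column names:

    -- Hypothetical table: the GUID key is generated by SQL Server itself,
    -- so an Access pass-through INSERT never has to supply it.
    CREATE TABLE dbo.DailyLoad (
        RowGuid  uniqueidentifier NOT NULL
                 CONSTRAINT DF_DailyLoad_RowGuid DEFAULT NEWID(),
        LoadDate datetime        NOT NULL,
        Payload  nvarchar(255)   NULL
    );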