I need to connect to an MS SQL server and send/receive information from it in my Lotus Notes app using @formula in real time (I can connect using an agent, but I need to use inline code for this).
The commands themselves seem pretty straightforward, but setting up the configuration seems to be a topic with scarce documentation. Apparently I need to install an ODBC driver. Where would I find that, and do I install it onto the server or onto the workstations that will run this app?
If any Lotus gurus could step me through setting this up, it would be greatly appreciated.
Thanks
You'll need to install the ODBC driver on the workstations that run this app if the users will be triggering the ODBC connections. If at all possible, I highly suggest setting this up on the server side and having it run via an agent. That'll save you from a few headaches, including having to maintain the ODBC connections on each workstation and worrying about whether each workstation has access to the data and the server.
You first just want to make sure your ODBC setup is correct. You'll need the appropriate driver, of course, and the connection information; any standard walkthrough on creating a data source in the Windows ODBC Data Source Administrator will give you an idea of how to set up an ODBC database connection.
If you have MS Access you can use it to test querying from the ODBC data source. Once you've tested that the connection works, you'll just refer to the data source name (DSN) in your @DbColumn, @DbLookup, or @DbCommand formulas.
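For example, a lookup against the DSN might look something like this (a minimal sketch; the DSN name "SalesDSN", the table, the columns, and the key are all hypothetical, and the two empty strings are the user ID and password arguments):

@DbLookup("ODBC" : "NoCache" ; "SalesDSN" ; "" ; "" ; "Customers" ; "CompanyName" ; "CustomerID" ; 1001)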
Back to my suggestion of setting this up on the server side: that would mean you'd keep a copy of the data you're querying within the Notes database itself, and users would then be interacting with read-only data in Notes. You could schedule regular server-side updates of that read-only data and effectively create a cache of the data in your Notes environment. That data would then replicate around to other replicas of the database, removing the need for the ODBC connection everywhere.
If you need realtime data, though, that solution is out the window and you'll have to go with a local solution. In that case, you might want to look at the LCConnection class or using an ADODB.Connection from script, as both will allow you to create DSN-less connections to data sources. You'd then save the trouble of requiring ODBC data sources on each workstation, and only have to worry about whether they can access the server from their workstation.
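For instance, a DSN-less connection from LotusScript via ADO could look like this (a sketch only; the server, database, table, and key are placeholders, and it assumes the ADO/MDAC libraries are present on the workstation):

Sub Click(Source As Button)
    Dim con As Variant
    Dim rs As Variant
    Set con = CreateObject("ADODB.Connection")
    ' DSN-less connection string: nothing to configure on the workstation
    Call con.Open("Provider=SQLOLEDB;Data Source=MyServer;Initial Catalog=MyDb;Integrated Security=SSPI;")
    Set rs = con.Execute("SELECT CompanyName FROM Customers WHERE CustomerID = 1001")
    Do While Not rs.EOF
        Print rs.Fields("CompanyName").Value
        Call rs.MoveNext()
    Loop
    Call rs.Close()
    Call con.Close()
End Sub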
I would add another option to Ken's list. It involves having the server do the queries of the external database (so you are only setting up ODBC on the server; you don't have to deal with it on the workstations). You create an agent that is launched on the server using the 'run on server' technique. When the workstation needs to query the external data, the code creates a throw-away document in the database, puts the query criteria into the temporary document, saves the document, then calls the 'run on server' agent, passing a reference to the temporary document. The server launches the agent, reads the criteria from the temporary document, does the query, and writes the results back to the temporary document. The workstation can then read the query results from the temporary document. A scheduled agent can delete the temp docs on a regular basis.
It sounds complicated, and it all has to be done in script, but I've done this in many applications and it is fast, flexible, easy to administer, and gives your applications a lot of power. Note that end users must have the ACL rights to create a document in the db (the temp doc) in order for this to work.
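A bare-bones sketch of the client side of that pattern (the form, agent, and field names are made up; the server agent itself would read the criteria back via NotesAgent.ParameterDocID, run the ODBC query, and write the results onto the same document):

Sub RunExternalQuery(criteria As String)
    Dim ses As New NotesSession
    Dim db As NotesDatabase
    Dim doc As NotesDocument
    Dim agent As NotesAgent
    Set db = ses.CurrentDatabase
    ' create the throw-away document that carries the query criteria
    Set doc = db.CreateDocument
    doc.Form = "QueryRequest"
    doc.Criteria = criteria
    Call doc.Save(True, False)
    ' launch the server agent, passing the temp doc's note ID
    Set agent = db.GetAgent("RunQueryOnServer")
    Call agent.RunOnServer(doc.NoteID)
    ' re-fetch the temp doc to pick up the results the agent wrote
    Set doc = db.GetDocumentByID(doc.NoteID)
    Print doc.Results(0)
End Sub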
Good luck!
Using the following:
MS Access 2016, Office 365
SQL Server 2012
I have 100+ SQL Server tables and views linked into an Access database via ODBC connection. All of these SQL Server objects are from two SQL Server databases that reside on the same server. All of these connections have been set up using the Access user interface and re-linked via the Linked Table Manager.
I've been experiencing a number of Access database issues lately, so I'm going through everything with a fine-tooth comb. I noticed that the connection strings for all my SQL Server objects have a number of inconsistencies (see below). There does not seem to be any pattern in terms of when these objects were created and the format of the connection string, either.
ODBC;DSN=Database1;Description=Database1;Trusted_Connection=Yes;APP=Microsoft Office 2010;
ODBC;DSN=Database1;Description=Database1;Trusted_Connection=Yes;APP=Microsoft Office 2010;DATABASE=Database1
ODBC;DSN=Database1;Description=Database1;Trusted_Connection=Yes;APP=Microsoft Office 2010;DATABASE=Database1;Network=DBMSSOCN
ODBC;DSN=Database1;Description=Database1;Trusted_Connection=Yes;APP=Microsoft Office 2016;
ODBC;DSN=Database1;Description=Database1;Trusted_Connection=Yes;APP=Microsoft Office 2016;DATABASE=Database1
ODBC;DSN=Database2;Description=Database2;Trusted_Connection=Yes;APP=Microsoft Office 2010;DATABASE=Database2
ODBC;DSN=Database2;Description=Database2;Trusted_Connection=Yes;APP=Microsoft Office 2016;DATABASE=Database2
Is it problematic that there are so many variations of the connection string? I've done some research (i.e., Googling), but I don't have much experience in this area of databases. Some connections have a "Network" specified, but others don't. Per connectionstrings.com (https://www.connectionstrings.com/define-sql-server-network-protocol/), "Network=DBMSSOCN" specifies a Winsock TCP/IP connection, which I believe is the appropriate choice for my network setup. Is it problematic that this parameter is excluded from several of the connection strings?
I would most certainly "harmonize" all of the linked tables to the same connection string.
You can use the Linked Table Manager to do this, but code is likely better (there's a sketch of the code approach below).
You need to select all tables in the Linked Table Manager, and MAKE sure you click on "always prompt for new location". This will force you to create (or select) an existing DSN. In fact, I would select all tables from the "one" given database, and then click on the "always prompt for new location" option. When you do this, all of them will possess the SAME link and connection string.
There are a good number of reasons for this; one good reason is that Access will cache the connection for you. So if you have "different" connections for the same database, you will have multiple caches of those connections. This likely will not affect performance "much", but it's still a good idea.
And if you are NOT using a trusted connection, then your connection strings in fact do NOT need to include the uid/password. (However, for the cached uid/password to be reused, the connection strings must match exactly, minus the uid/password.) With this approach, you can execute a "one time" logon on application start-up, and then all linked tables (without the uid/password) will work. However, you are using trusted connections here, so this tip and its issue don't matter.
In your example, you are using trusted connections, so these issues are "much" less of a worry or problem.
I also STRONGLY suggest that when you launch the ODBC manager from Access, you ALWAYS but ALWAYS use a FILE dsn. The reason for this is that Access will then convert the connections to DSN-less ones for you.
This means that you can now deploy the front end application to any workstation, and you don’t need to setup or have any DSN connection copied, or even setup on the workstation.
So I would in fact select all of the tables for one given database (check the prompt for new location), and then create a FILE dsn (they are the default anyway). Once you link, do the same for all the other tables that point to the other database. Again, re-link.
The result will be a DSN-less connection, and thus your application will work on any workstation on the network, and do so without having to set up a DSN of any kind on each workstation.
So yes, you don't have to, but it seems that over time some tables were linked using a different DSN, and they should be harmonized. And if you ever introduce some automatic linking code, you will want to be able to distinguish between the two databases, and your code will have a rather difficult time doing this with a "hodge-podge" of differing connections.
So you can use the Linked Table Manager to harmonize the connections; just ensure you select all tables from a single given database, and then re-link with a FILE dsn. The result will be a DSN-less connection (Access will ONLY use the DSN during the linking process; after that, Access doesn't care, nor does it use or even look at the DSN, or even whether it exists).
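Here's a sketch of the code approach (the DRIVER/SERVER/DATABASE values are placeholders; since you have two source databases, you'd run a variation of this per database, filtering on table name or on the existing Connect string):

Public Sub RelinkAllTables()
    ' point every ODBC-linked table at one common DSN-less connection string
    Const strCon As String = "ODBC;DRIVER={SQL Server};SERVER=MyServer;DATABASE=Database1;Trusted_Connection=Yes;"
    Dim db As DAO.Database
    Dim tdf As DAO.TableDef
    Set db = CurrentDb
    For Each tdf In db.TableDefs
        ' only touch ODBC links, not local tables or other link types
        If Left$(tdf.Connect, 5) = "ODBC;" Then
            tdf.Connect = strCon
            tdf.RefreshLink
        End If
    Next tdf
End Sub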
Having said all of the above, it's not clear whether this issue is related to your errors or the instabilities in your application. (A good idea is to always distribute a compiled version of your application: an accDE as opposed to an accDB.)
I am trying to find the best procedure to get data from our SQL Server at headquarters to update apps running on local machines in various locations not connected to our network. Our current data and application are in FoxPro, where you simply copied the data file, so I am not very familiar with using SQL databases.
The field app uses LocalDB, and users don't save anything to the database. When the app opens, it checks a web site for updates. I tried detaching our HQ .mdf and .ldf, downloading them and overwriting them on the local machine, but LocalDB would not attach to the new file (same name). I thought LocalDB closes and detaches when the application closes, but maybe I am wrong. I also wonder if I need the log file, since no changes are made and I don't need to roll back anything. I have searched for a good article on this topic but haven't found anything. This must be a fairly common scenario in many companies.
You want to look into using replication, probably snapshot replication. This allows you to distribute one or more tables, or other objects, to off-site SQL Server instances on whatever schedule is applicable. You can use HTTP to send the data.
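To give a rough feel for what a snapshot publication looks like on the publisher in raw T-SQL (a sketch only; the database, publication, and table names are placeholders, a distributor must already be configured, and most people drive this through the Management Studio wizards instead):

-- enable the HQ database for publishing
EXEC sp_replicationdboption @dbname = N'HQDatabase', @optname = N'publish', @value = N'true';
-- create a snapshot publication
EXEC sp_addpublication @publication = N'FieldAppData', @repl_freq = N'snapshot', @status = N'active';
-- add a Snapshot Agent job for the publication
EXEC sp_addpublication_snapshot @publication = N'FieldAppData';
-- publish one table as an article
EXEC sp_addarticle @publication = N'FieldAppData', @article = N'Customers', @source_object = N'Customers';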
I have been using MS Access databases via DAO for many years, but feel that I ought to embrace newer techniques.
My main application runs on end user PCs (no server) and uses a shared database that is created and updated on-the-fly. When the application is first run it detects the absence of a database and creates a new empty one.
Any local user running the application is allowed to add or update records in this shared database. We have a couple of other shared databases, that contain templates, regional information, etc., but these are not updated directly by the application.
Updates of the application are released from time to time and each new update checks the main database version and if necessary executes code to bring the database up to the latest specification. This may involve the creation or deletion of tables and/or columns. New copies of the template databases are also included as part of the update.
Our users are not required to be computer-literate and should not need to run any sort of database management software beyond those facilities provided by the application.
It all works very nicely with DAO/Access, but I'm struggling to find how to do it with SQL Express. The databases seem to be squirrelled away in locations that are user-specific and database creation and update seems at best awkward to do by program code alone.
I came across some references to "Xcopy deployment" that look like they could be promising, but there seem to be references to "user instances" that sound suspiciously like something that's not shared. I'd appreciate advice from anyone who has done it.
It sounds to me like you haven't fully absorbed the fundamental difference between the Access Database Engine (ACE/Jet) and SQL Server:
When your users launch your Access application it connects to the Access Database Engine that has been installed on their machine. Their copy of ACE/Jet opens the shared database file (.accdb or .mdb) in the network folder. The various instances of ACE/Jet work together to manage concurrent updates, record locking, and so on. This is sometimes called a "peer-to-peer" or "shared-file" database architecture.
With an application that uses a SQL Server back-end, the copies of your application on each user's machine connect over the network to the same instance of SQL Server (that's why it's called "SQL Server"), and that instance of SQL Server manipulates the database (which is stored on its local hard drive) on behalf of all of the clients. This is called "client-server" or "server-based" database architecture.
Note that for a multi-user database you do not install SQL Server on the client machines, you only install the SQL Server Client components (OleDb and ODBC drivers). SQL Server itself is only installed in one place: the machine that will act as the SQL... Server.
re: "database creation and update seems at best awkward to do by program code alone" -- Not at all, it's just "different". Once again, you pass all of your commands to the SQL Server and it takes care of creating the actual database files. For example, once you've connected to the SQL Server if you tell it to
CREATE DATABASE NewDatabase
it will create the database files (NewDatabase.mdf and NewDatabase_log.LDF) in whatever local folder it uses to store such things, which is usually something like
C:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\DATA
on the server machine.
Note that your application never accesses those files directly. In fact it almost certainly cannot do so, and indeed your application does not even care where those files reside or what they are called. Your app simply talks to the SQL Server (e.g. ServerName\SQLEXPRESS) and the server takes care of the details.
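From code, the whole exchange is just a connection and a command. A minimal .NET sketch (the instance name and the use of integrated security are assumptions):

Imports System.Data.SqlClient

Module CreateDbDemo
    Sub Main()
        ' connect to the master database on the (assumed) local SQL Express instance
        Using cn As New SqlConnection("Server=.\SQLEXPRESS;Database=master;Integrated Security=True")
            cn.Open()
            ' the server decides where the .mdf/.ldf files go; the client never sees them
            Using cmd As New SqlCommand("CREATE DATABASE NewDatabase", cn)
                cmd.ExecuteNonQuery()
            End Using
        End Using
    End Sub
End Module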
Just to update on my progress. Inspired by suggestions here and this article on code project:
http://www.codeproject.com/Articles/63147/Handling-database-connections-more-easily,
I've created a wrapper for the ADO.NET methods that looks quite similar to the DAO stuff that I am familiar with.
I have a class that I can use just like a DAO Database. It wraps ADO methods like ExecuteReader, ExecuteNonQuery, etc. with overloads that can accept a SQL parameter. This allows me to directly replace DAO Recordsets with readers, OpenRecordset with ExecuteReader and Execute with ExecuteNonQuery.
Each method obtains and releases the connection from its parent class instance. These in turn open or close the underlying connection as required depending on the transaction state, if any. So a connection is held open for method calls that are part of a transaction, but closed immediately for a single call.
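A stripped-down sketch of that kind of wrapper (hypothetical names; the real class would also track the transaction state described above rather than opening a fresh connection per call):

Imports System.Data.SqlClient

Public Class Database
    Private ReadOnly _connStr As String

    Public Sub New(connStr As String)
        _connStr = connStr
    End Sub

    ' DAO-style Execute: runs an action query, returns the affected row count
    Public Function Execute(sql As String, ParamArray params() As SqlParameter) As Integer
        Using cn As New SqlConnection(_connStr)
            cn.Open()
            Using cmd As New SqlCommand(sql, cn)
                If params IsNot Nothing Then cmd.Parameters.AddRange(params)
                Return cmd.ExecuteNonQuery()
            End Using
        End Using
    End Function

    ' DAO-style OpenRecordset: returns a DataTable loaded from a data reader
    Public Function OpenRecordset(sql As String, ParamArray params() As SqlParameter) As DataTable
        Using cn As New SqlConnection(_connStr)
            cn.Open()
            Using cmd As New SqlCommand(sql, cn)
                If params IsNot Nothing Then cmd.Parameters.AddRange(params)
                Dim dt As New DataTable()
                dt.Load(cmd.ExecuteReader())
                Return dt
            End Using
        End Using
    End Function
End Class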
This has greatly simplified the migration of my program since much of the donkey work can be done by a simple "find and replace". The remaining issues are then relatively easy to find and sort out.
Thanks, once again to Gord and Maxwell for your advice.
This answer is too long to write down... but go to the Microsoft page, where they explain how to do it: http://office.microsoft.com/en-us/access-help/move-access-data-to-a-sql-server-database-by-using-the-upsizing-wizard-HA010275537.aspx
I hope this helps you!!
I'll try and keep this simple. I have worked on projects in the past whereby we use either Oracle or MS SQL Server as the data store with Access as the front-end. Rather than linking in the tables, I tend to use an ADO connection to the respective database in order to open my recordsets, as in most cases this is faster: the query is executed against the server and then the results are returned, rather than the work being done on the local PC.
My question, now I've finally got there, is this: if I place an Access .mdb file on a server machine with more processing power than my local PC and then run queries from it using an ADO connection (like Oracle/MS SQL), will it provide better performance due to the .mdb being on the server? Or, as it's Access, will the work automatically still be done by the local PC, since Access is a file-type database rather than a database server?
No, it will be slower: the queries will still run client-side, and you will have network activity on top.
Access applications always run on the client side. Locking takes place by using Windows filesystem byte range locks on the LDB file, to allow multiple instances of Access to modify the same MDB file.
All the code runs on the client, and you will have to send the data across the network. The only work the server will be doing with an MDB file is acting as a file server.
Just use SQL Server Express if Access is not fast enough. Since SQL Server is a client-server system, putting it on a fast server will help.
Is it possible to monitor what is happening to an Access MDB (ie. what SQL queries are being executed against it), in the same way as you would use SQL Profiler for the SQL Server?
I need logs of actual queries being called.
The answer depends on the technology used from the client which uses the MDB. There are different tracing settings which you can configure in HKEY_LOCAL_MACHINE\Software\Microsoft\Jet\4.0\Engines\ODBC (see http://office.microsoft.com/en-us/access/HP010321641033.aspx). If you use OLEDB to access the MDB from SQL Server, you can use DBCC TRACEON (see http://msdn.microsoft.com/en-us/library/ms187329.aspx). I could continue, but before anything else you should define exactly which interface you use to access the MDB.
An MDB is a file without any active components, so the tracing can be done not by the MDB itself, but only by the DB interface.
UPDATED: Because you use DAO (Jet Engine) and OLE DB from VB, I recommend you create a JETSHOWPLAN registry key with the value "ON" under HKEY_LOCAL_MACHINE\SOFTWARE\MICROSOFT\JET\4.0\Engines\Debug (the Debug subkey you have to create). This key is described, for example, in https://web.archive.org/web/1/http://articles.techrepublic%2ecom%2ecom/5100-10878_11-5064388.html and http://msdn.microsoft.com/en-us/library/aa188211%28office.10%29.aspx, and per http://support.microsoft.com/kb/252883/en it allows tracing of OLE DB queries. If this output is not enough for you, you can additionally use TraceSQLMode and TraceODBCAPI from HKEY_LOCAL_MACHINE\Software\Microsoft\Jet\4.0\Engines\ODBC. In my practice, JETSHOWPLAN gives perfect information for me. See also the SHOWPLAN command.
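Merged as a .reg file, the Jet 4.0 key looks like this (a sketch; set the value back to "OFF" when you're done, since the SHOWPLAN.OUT log grows quickly):

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Jet\4.0\Engines\Debug]
"JETSHOWPLAN"="ON"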
UPDATED 2: For more recent versions of Access (like Access 2007), use a key like HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Office\12.0\Access Connectivity Engine\Engines. The tool ShowplanCapturer (see http://www.mosstools.de/index.php?option=com_content&view=article&id=54&Item%20%20id=57; download at http://www.mosstools.de/download/showplan_v9.zip, also in English) can also be helpful for you.
If you're accessing it via ODBC, you can turn on ODBC logging. It will slow things down a lot, though. And it won't work for any other data interface.
Another thought is using Jet/ACE as a linked server in SQL Server, and then using SQL Profiler. But that's going to tell you the SQL that SQL Server processed, not what Jet/ACE processed. It may be sufficient for your purposes, but I don't think it would be a good diagnostic for Jet/ACE.
EDIT:
In a comment, the original poster has provided this rather crucial information:
The application I am trying to monitor is compiled and at a customer's premises. I am trying to monitor what queries it is attempting against an MDB. I cannot modify the application. I am trying to do what SQL Profiler would do for a SQL Server.
In that case, I think that you could do this:
rename the original MDB to something else.
use a SQL Server linked server to connect to the renamed MDB file.
create a new MDB with the name of the original MDB and link to the SQL Server with ODBC.
The result will be an MDB file that has the same tables in it as the original, but they are not local tables; they are links to the SQL Server. In that case, all access will go through the SQL Server and can be viewed with SQL Profiler. The linked-server piece of this is sketched below.
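The linked-server step boils down to something like this on the SQL Server (a sketch; the linked-server name and file path are placeholders, you'd use the Jet 4.0 provider instead of ACE on older setups, and the chosen OLE DB provider must be enabled on the server):

-- register the renamed MDB as a linked server
EXEC sp_addlinkedserver
    @server = N'MDBLINK',
    @provider = N'Microsoft.ACE.OLEDB.12.0',
    @srvproduct = N'Access',
    @datasrc = N'C:\Data\Original_renamed.mdb';

-- query it with a four-part name to verify
SELECT * FROM MDBLINK...SomeTable;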
I don't have a clue what this would do to performance, or if it would break any of the data retrieval in the original app. If that app uses table-type recordsets or SEEK, then, yes, it will break. But this is the only way I can see to get logging.
It shouldn't be surprising that there is no logging for Jet/ACE, given that there is no single server process managing access to the data store.
Keep in mind that the file sitting on your hard drive is simply a Windows file. So there is a big difference between a server-based system and a simple text file, or a PowerPoint file, or in this case an mdb file just sitting on the drive.
However, you can get the Jet engine to display its query optimizing via SHOWPLAN.
How to do this is explained here:
http://www.databasejournal.com/features/msaccess/article.php/3658041/Queries-On-Steroids--Part-IV.htm
The above article also shows how to access the Jet disk-read statistics, which I also find extremely useful for optimizing things.
Just remember to turn off that data-engine logging system when you're not using it, as it creates huge log files…
You could write your own profiler, based on a "transaction" object that centralizes all instructions sent to the database. You'll end up somewhere with a "transaction.execute" method, and a transaction table in your Access db. This table can then be used to collect each transaction's instructions, start time, end time, the user sending the instruction, etc.
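A bare-bones sketch of that idea in VBA (as a procedure rather than a full class, and with made-up table and field names; every instruction your own code sends would be routed through this one entry point):

Public Sub LoggedExecute(sql As String)
    Dim db As DAO.Database
    Set db = CurrentDb
    ' record the instruction, the time, and the user before running it
    db.Execute "INSERT INTO tblTransactionLog (Instruction, StartTime, UserName) " & _
               "VALUES ('" & Replace(sql, "'", "''") & "', Now(), '" & Environ("USERNAME") & "')", dbFailOnError
    ' then run the real instruction
    db.Execute sql, dbFailOnError
End Sub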
I'd suggest upsizing the tables to SQL Server. There is a tool from the SQL Server group that is better than the Upsizing Wizard that is included with Access.
SQL Server Migration Assistant for Access (SSMA for Access)
Also see my Random Thoughts on SQL Server Upsizing from my Microsoft Access Tips page