I have a Lotus Notes application that has a document with a Number field and a description field.
Users reserve a series of numbers, at which point documents are created. They then fill in the description and a few other things.
Once this is done they go into another application (Qpulse, not a Notes application) that stores its data in a SQL database.
They will create documents in that system with matching numbers.
Once the documents are created in that system (Qpulse) I'd like to update the description field in Notes.
How I've done it in the past is to have a Notes agent that runs a query, loops through the results, and finds and updates the Notes documents.
Is there a better way of doing this? It would be nice to have it update automatically.
Using an agent like the one you already have is, IMHO, the simplest solution.
Another thought comes to mind, though, if you have the database running on a Domino server. It could be fairly simple to POST the update to the database using HTTP. You'd still have to write some code within Notes (an agent to receive the POST), but it might be simpler to make an HTTP call after the Qpulse application is updated.
You would send over some unique identifier (that Number field, I suppose?) and the description in the POST; the agent would receive it via the DocumentContext object and could find and update the document. This process would be initiated by some trigger on the Qpulse side.
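If you go the HTTP route, the receiving agent could look something like the following Java agent. This is only a minimal sketch: the item names ("Number" and "Description"), the POST parameter names, and the idea that the body arrives as a simple URL-encoded string are assumptions for illustration, not anything from your setup.

    import lotus.domino.*;

    // Minimal sketch of an HTTP-triggered Domino Java agent (invoked via ?OpenAgent).
    // Assumes the POST body looks like "number=1234&description=..." and that the
    // Notes documents store the key in a "Number" item and the text in "Description".
    public class JavaAgent extends AgentBase {
        public void NotesMain() {
            try {
                Session session = getSession();
                AgentContext agentContext = session.getAgentContext();
                // For a web-invoked agent the DocumentContext carries the CGI
                // variables; the raw POST body is in the Request_Content item.
                Document context = agentContext.getDocumentContext();
                String body = context.getItemValueString("Request_Content");

                String number = null, description = null;
                for (String pair : body.split("&")) {
                    String[] kv = pair.split("=", 2);
                    if (kv.length < 2) continue;
                    String value = java.net.URLDecoder.decode(kv[1], "UTF-8");
                    if (kv[0].equals("number")) number = value;
                    if (kv[0].equals("description")) description = value;
                }
                if (number == null || description == null) return;

                // Find the document with the matching number and update it.
                Database db = agentContext.getCurrentDatabase();
                DocumentCollection matches = db.search("@Text(Number) = \"" + number + "\"");
                Document doc = matches.getFirstDocument();
                if (doc != null) {
                    doc.replaceItemValue("Description", description);
                    doc.save(true, false);
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

The Qpulse side then only needs to fire a plain HTTP POST at http://yourserver/yourdb.nsf/agentname?OpenAgent (whatever you name the agent) whenever a record is saved.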
This isn't simple, but it does avoid writing lots of code outside of Notes, and avoids trying to connect to Notes as a datasource externally.
Trying to integrate with relational database systems is a relatively common problem, but a meaningful and straightforward example seems elusive. I would recommend trying JDBC. There are a few tutorials out there. Perhaps this or this will help get you started.
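For what it's worth, here is roughly what the JDBC route can look like from a scheduled Domino Java agent. Treat it as a sketch: the JDBC URL, credentials, and the QpulseDocs table and column names are made up for illustration, and the Notes item names "Number" and "Description" are assumed.

    import java.sql.*;
    import lotus.domino.*;

    // Sketch of a scheduled Domino Java agent that pulls descriptions from the
    // Qpulse SQL database and copies them onto the matching Notes documents.
    // The connection string, login, and table/column names are placeholders.
    public class JavaAgent extends AgentBase {
        public void NotesMain() {
            Connection con = null;
            try {
                Database db = getSession().getAgentContext().getCurrentDatabase();

                Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver");
                con = DriverManager.getConnection(
                    "jdbc:sqlserver://sqlhost;databaseName=Qpulse", "user", "password");
                Statement st = con.createStatement();
                ResultSet rs = st.executeQuery(
                    "SELECT DocNumber, Description FROM QpulseDocs");

                while (rs.next()) {
                    String number = rs.getString("DocNumber");
                    String description = rs.getString("Description");

                    // Locate the Notes document with the same number.
                    DocumentCollection matches =
                        db.search("@Text(Number) = \"" + number + "\"");
                    Document doc = matches.getFirstDocument();
                    if (doc != null && description != null &&
                        !description.equals(doc.getItemValueString("Description"))) {
                        doc.replaceItemValue("Description", description);
                        doc.save(true, false);
                    }
                }
                rs.close();
                st.close();
            } catch (Exception e) {
                e.printStackTrace();
            } finally {
                try { if (con != null) con.close(); } catch (SQLException ignored) {}
            }
        }
    }

You would need the SQL Server JDBC driver jar available to the agent (attached to it or dropped into the server's jvm/lib/ext directory).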
You could try the Lotus Connectors LotusScript Extension Classes. There is API help documentation in your Domino Designer client. I have also found this IBM Lotus Connectivity Redbook useful, despite it being 10 years old.
Related
I am currently creating a VB.NET program in which users upload a song file to the program and it is then saved within the program's files. I have set up the actual saving of the files, but I would also like to store some metadata about each one in a SQL database within my program.
I have looked online, and although I now understand the basics of SQL, I'm still a little fuzzy on how you actually implement this within VB.NET. I have already added the library (Imports System.Data.SqlClient) but failed to work out how to begin coding in SQL.
The basics of what I'm trying to achieve is an if statement that will determine whether or not a SQL database has been created in a specific location, and if it hasn't, it should create it.
All constructive answers appreciated, thanks.
There are a number of different database engines available. The namespace that you have chosen contains the ADO.NET client classes for Microsoft SQL Server. You would use a connection string to specify how to connect to the database. This would often contain connection information such as server name, user name, password, etc., but it sounds like you want to store data locally.
There is a local version of SQL Server called LocalDB, but I think you would still need quite a lot of the SQL Server components installed for that to work. Although you can package these with your application, they may be too large for you, so you may want to look at SQL Server Compact Edition, which is much smaller, allows you to package the whole engine as part of your application, and is useful for storing data locally. Compact Edition doesn't have quite all of the features that LocalDB does, so you may want to compare the features available for each.
Although you can use the ADO.NET objects to connect to a database, I think most people these days would use a layer on top which transfers data back and forwards between objects in memory and the database. This also allows you to use Linq to query the database in most cases. I personally use Entity Framework. You might want to look into that. There are different ways of configuring EF so you may want to look at a tutorial. Once you have it set up, you will probably find it much easier and safer to work with than writing SQL manually though.
I have a Java agent synchronizing Notes documents with rows in a MS SQL table. New documents are inserted and changed documents are updated, but I have not found a good way to handle deleted documents. I use JDBC.
The way I do it now is to make a list of UNIDs on both sides and delete the rows in the SQL table that do not exist in Notes.
If I could have access to a list of deletion stubs, it would be much easier and would perform better. The agent runs daily.
The table contains 500,000 rows and there are 50-100 deletions in each run.
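For reference, this is roughly what that set-difference pass looks like as a Domino Java agent; the view name, JDBC connection details, and the mirror table and column names below are just placeholders for illustration.

    import java.sql.*;
    import java.util.HashSet;
    import lotus.domino.*;

    // Sketch of the daily set-difference pass: collect the UNIDs still present in
    // Notes, then delete the SQL rows whose UNID no longer exists on the Notes side.
    // "($All)", the JDBC URL, and the NotesMirror table/Unid column are placeholders.
    public class JavaAgent extends AgentBase {
        public void NotesMain() {
            Connection con = null;
            try {
                Database db = getSession().getAgentContext().getCurrentDatabase();

                // 1. Gather every UNID still present in Notes.
                HashSet<String> notesUnids = new HashSet<String>();
                View all = db.getView("($All)");
                Document doc = all.getFirstDocument();
                while (doc != null) {
                    notesUnids.add(doc.getUniversalID());
                    Document next = all.getNextDocument(doc);
                    doc.recycle();
                    doc = next;
                }

                // 2. Any UNID in SQL that is missing from Notes was deleted there.
                con = DriverManager.getConnection(
                    "jdbc:sqlserver://sqlhost;databaseName=Sync", "user", "password");
                java.util.ArrayList<String> toDelete = new java.util.ArrayList<String>();
                Statement st = con.createStatement();
                ResultSet rs = st.executeQuery("SELECT Unid FROM NotesMirror");
                while (rs.next()) {
                    String unid = rs.getString("Unid");
                    if (!notesUnids.contains(unid)) toDelete.add(unid);
                }
                rs.close();
                st.close();

                // 3. Remove them in one batch.
                PreparedStatement del = con.prepareStatement(
                    "DELETE FROM NotesMirror WHERE Unid = ?");
                for (String unid : toDelete) {
                    del.setString(1, unid);
                    del.addBatch();
                }
                del.executeBatch();
                del.close();
            } catch (Exception e) {
                e.printStackTrace();
            } finally {
                try { if (con != null) con.close(); } catch (SQLException ignored) {}
            }
        }
    }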
The only 100% foolproof way to be sure you will capture all deletions is to use the Notes C API Extension Manager feature to build a server add-on that handles EM_BEFORE events for NSFNoteDelete calls. You could use the add-on code in the OpenNTF TriggerHappy project to simplify this, and to allow you to write most of your actual logic in Java instead of C. Your code would simply have to capture the information you need from the document that is about to be deleted (i.e., the UNID if that's all you need) and record it in another database that you can query via JDBC. You could also clean up that database via JDBC after you've done your synchronization.
Using JDBC you can't.
What you may try to do is write an agent that builds, in a separate DB, the list of all the NoteIDs of deleted documents, and then get this list via JDBC.
I think you can get only the NoteID (not the UNID), so I suggest you also store NoteIDs in MS SQL.
Look at http://www-10.lotus.com/ldd/nd6forum.nsf/55c38d716d632d9b8525689b005ba1c0/8210fa46540ecbbf852572b40044bb3e?OpenDocument for a start to build your Domino side agent.
So, it looks like I'm gonna have to replicate a couple of reference tables from my SS2k5 db to SP2k7 in order to do dropdown boxes on my document library. Small tables, maybe a hundred entries, and not often updated. The SP server is not the SS server.
I know how to build triggers, but how do I reference the SP table to update it from the SS trigger, and what are the authentication issues?
Anybody do this before?
I know there is a thing called the Business Data Catalog or something like that, but I don't have full privs on this SP site, so I'm likely not able to get to that, and I've never used it before, hence the trigger idea.
Does it really need to be real time via a trigger? Or can it be delayed and processed via an ETL job? If the latter is acceptable, I recommend taking a look at Extracting and Loading SharePoint Data in SQL Server Integration Services. I have used this adapter on past projects to transfer data between SQL Server and SharePoint.
P.S. I would not recommend writing directly to the SharePoint content database. Making changes directly to a content database is not supported and is not considered a best practice.
I often find myself writing one-off queries to either answer someone's question or troubleshoot something, and I would like to be able to quickly expose the on-demand, refreshable results of the query graphically so that I can share these results with others without having to go through the process of creating an SSRS report and publishing it to a Reporting Services server.
I have thought about using Excel to do this, or maybe running a local SSRS server, but both of these options are still labor intensive, and I cannot justify the time they would take since no one has officially requested that I turn this data into a report.
The way I see it, the business I work for has invested money in me creating these queries, which often return potentially useful data that other people in the organization might want. But since the results aren't exposed in any way, I don't know whether this data is something they want, and they may not even realize they want it, so the potential value of the query is not realized. I want to increase the company's return on investment on all these one-off queries that I and other developers write by exposing their results graphically, so that they can be browsed by others and then potentially turned into more formalized SSRS reports if they provide enough value to justify the development of the report.
What is the fastest way for me to take a query and turn it into a refreshable graph of the results set?
Why don't you simply use what you may already have: Excel. You can import data via an ODBC / Oracle / SQL connection (Get Data), and bam, you can run the query, format it right in the spreadsheet, and provide sorting, etc. All you need to supply is the database name, user name, and password to connect to the db.
JonH is right regarding Excel's built-in ODBC support, but I have had tons of trouble with this. In my case, the ODBC connection required the client software to be installed so that it could use the encryption methods, etc. Also, even if that were not the case, the user (I believe) would still have to manually install and set up an ODBC connection.
Now if you just want something on your machine to do the queries and refresh them, JonH's solution is great and my caveats are probably irrelevant. But if you want other users to have access, you should consider having a middle-man app (basically a PHP script, assuming a web server is an option for you) that does a query, transforms the results into XML, and outputs it as "report-xyz.xml". You can then point anybody running a newer version of Excel to that address and they can very easily import the data into Excel with no overhead (basically a kind of web service).
Keep in mind, I don't think you should have a web script that lets users run arbitrary queries against your database server! You would have some admin page where you pass the query in, and a new XML file with the results gets made. So my idea is based on the assumption that you want to run the same queries over and over without any specifics passed in. (If you did need parameters passed in, I'd look into finding a pre-built web services bridge for your database that already has security features built in. Then you could let users make the limited changes allowed.)
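To keep the code samples here in one language, here is the same middle-man idea sketched in Java rather than PHP: a small program you run on a schedule (or behind an admin page) that executes one fixed query and writes the results out as report-xyz.xml for Excel to pick up. The connection string, query, and output path are placeholders.

    import java.io.PrintWriter;
    import java.sql.*;

    // Sketch of the "middle-man" idea in Java rather than PHP: run one fixed query
    // and dump the result set as an XML file under the web server's document root.
    // The JDBC URL, login, query, and output path are all placeholders.
    public class ReportExporter {
        public static void main(String[] args) throws Exception {
            Connection con = DriverManager.getConnection(
                "jdbc:sqlserver://sqlhost;databaseName=Sales", "user", "password");
            Statement st = con.createStatement();
            ResultSet rs = st.executeQuery(
                "SELECT Region, SUM(Amount) AS Total FROM Orders GROUP BY Region");
            ResultSetMetaData meta = rs.getMetaData();

            PrintWriter out = new PrintWriter("C:\\inetpub\\wwwroot\\report-xyz.xml", "UTF-8");
            out.println("<?xml version=\"1.0\" encoding=\"UTF-8\"?>");
            out.println("<rows>");
            while (rs.next()) {
                out.println("  <row>");
                for (int i = 1; i <= meta.getColumnCount(); i++) {
                    String name = meta.getColumnLabel(i);
                    String value = rs.getString(i);
                    if (value == null) value = "";
                    // Escape the characters XML cares about.
                    value = value.replace("&", "&amp;").replace("<", "&lt;");
                    out.println("    <" + name + ">" + value + "</" + name + ">");
                }
                out.println("  </row>");
            }
            out.println("</rows>");
            out.close();
            rs.close();
            st.close();
            con.close();
        }
    }

Anyone on a newer version of Excel can then import that file's URL as XML data and refresh it whenever they want the latest run.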
I have a large and complex SQL Server 2005 DB used by multiple applications. I want to create a data dictionary for maintaining not only my DB objects but also cross-referencing them against the applications that use a specific object.
For example, if a stored procedure is used by 15 different applications, I want to record that additional data too.
What are the key elements to keep in mind so that I get an efficient and scalable data dictionary?
So, I recently helped to build a data dictionary for a very large product. We were dealing with documenting more than one-thousand tables using a change request process. I can send you a scrubbed version of the spreadsheet we used if you want. Basically, we captured the following:
Column Name
Data Type
Length
Scale (for decimals)
Whether the column is custom for the application(s) or a default column
Which application(s)/component(s) the column is used in
Release the column was introduced in
Business definition
We also captured information about who requested the addition, their contact information, etc. Our primary focus was on business definition, and clearly identifying why a column was being used or created.
We didn't have stored procedures in our solution, but bear in mind that these would be pretty easy to add to the system.
We used Access for our front-end, even though SQL Server was on the back end. It made it pretty easy for us to build out a rich user interface without much work, using the schema we had already built out.
Hope this helps you get started--feel free to ask if you have additional questions.
I've always been a fan of using the 'extended properties' within SQL Server for storing this kind of meta data. In this way the description of each object lives alongside the object and is accessible by anyone with access to the database itself. I'm sure there are also tools out there that can read these extended properties and turn them into a nicely formatted document.
As far as being "scalable", I don't know of any issues related to adding large amounts of data as extended properties; or I should say I've never had any issues with this.
You can set these extended properties using the SQL Server Management Studio 'Properties' dialog for each table/proc/function/etc., and you can also use 'sp_addextendedproperty'.
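Again keeping the examples in one language, here is a sketch of adding a column description through sp_addextendedproperty from JDBC; the schema, table, and column names are placeholders, and the same EXEC statement can of course be run directly in Management Studio instead.

    import java.sql.*;

    // Sketch: attach an 'MS_Description' extended property to a column.
    // "dbo", "Customer", and "Status" are placeholder names; the T-SQL string is
    // what matters and can be run as-is in Management Studio.
    public class AddColumnDescription {
        public static void main(String[] args) throws Exception {
            String tsql =
                "EXEC sp_addextendedproperty " +
                "  @name = N'MS_Description', " +
                "  @value = N'Current lifecycle state of the customer record', " +
                "  @level0type = N'SCHEMA', @level0name = N'dbo', " +
                "  @level1type = N'TABLE',  @level1name = N'Customer', " +
                "  @level2type = N'COLUMN', @level2name = N'Status'";

            Connection con = DriverManager.getConnection(
                "jdbc:sqlserver://sqlhost;databaseName=MyDb", "user", "password");
            Statement st = con.createStatement();
            st.execute(tsql);
            st.close();
            con.close();
        }
    }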