Identify Records in a SQL Database that have been created or modified since the BW Process last executed

I need to identify records in a database that have been updated or created since the previous execution of my BW process. My process will run every 5 minutes, and my plan was to identify the rows by comparing the last_modified_timestamp field in the database to the system time minus 5 minutes. However, this would not account for periods where my process is offline or down for maintenance, etc. So I was thinking that if I just track the timestamp each time my process runs, I could compare against that instead and not have to worry about periods when the process is down.
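To make the idea concrete, here is a minimal JDBC sketch of what I have in mind; the process_watermark control table and all table/column names other than last_modified_timestamp are placeholders:

```java
import java.sql.*;

public class WatermarkPoller {
    // Hypothetical control table: process_watermark(process_name VARCHAR, last_run TIMESTAMP),
    // seeded once with an initial row for this process.
    public static void pollOnce(Connection conn) throws SQLException {
        conn.setAutoCommit(false);
        try {
            Timestamp lastRun;
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT last_run FROM process_watermark WHERE process_name = ?")) {
                ps.setString(1, "MY_BW_PROCESS");
                try (ResultSet rs = ps.executeQuery()) {
                    rs.next();
                    lastRun = rs.getTimestamp(1);
                }
            }

            // Take the new watermark from the database clock, not the app server clock,
            // so clock skew between machines cannot drop rows.
            Timestamp now;
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery("SELECT CURRENT_TIMESTAMP")) {
                rs.next();
                now = rs.getTimestamp(1);
            }

            // Fetch everything changed since the last successful run,
            // however long ago that was.
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT id, payload FROM source_table " +
                    "WHERE last_modified_timestamp > ? AND last_modified_timestamp <= ?")) {
                ps.setTimestamp(1, lastRun);
                ps.setTimestamp(2, now);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        process(rs.getLong("id"), rs.getString("payload"));
                    }
                }
            }

            // Advance the watermark only after the batch succeeds.
            try (PreparedStatement ps = conn.prepareStatement(
                    "UPDATE process_watermark SET last_run = ? WHERE process_name = ?")) {
                ps.setTimestamp(1, now);
                ps.setString(2, "MY_BW_PROCESS");
                ps.executeUpdate();
            }
            conn.commit();
        } catch (SQLException e) {
            conn.rollback();
            throw e;
        }
    }

    private static void process(long id, String payload) { /* hand off to the BW flow */ }
}
```

Because the watermark only advances after a successful run, any downtime simply widens the next query window instead of dropping rows.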
What would the proper approach be to get around this issue?
Thanks for the help! (I am new to TIBCO, and apologies if this is a simple question or if I am missing something fundamental.)

I'm not sure which version of TIBCO you are using, but you can try to leverage the TIBCO Database Adapter for your needs.
The TIBCO Database Adapter enables communication between TIBCO processes and a database system. There are two types of services that can be used with a database adapter:
Publication Service: the publication service extracts data from the changed rows of a database table and publishes them on the appropriate subject names, which are then subscribed to by an adapter subscriber process starter.
Subscription Service: the subscription service of a database adapter does the opposite of a publication service. When running as a subscriber, the database adapter listens on a subject, receives messages, and updates the relevant tables in its associated database.
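Conceptually, the publication service behaves like the polling loop sketched below. This is an illustration only, not the adapter's real API; the ADB adapter typically maintains its own trigger-fed publishing table, and the table, column, and subject names here are invented:

```java
import java.sql.*;

// Conceptual only. Triggers copy changed rows into a "publishing" table; a
// loop like this polls that table, publishes each row on a subject, and
// marks it done so it is not published twice.
public class PublicationLoop {
    void pollAndPublish(Connection conn) throws SQLException {
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(
                 "SELECT seq_no, op_type, row_data FROM p_orders WHERE status = 'NEW'")) {
            while (rs.next()) {
                long seq = rs.getLong("seq_no");
                publish("orders.changed", rs.getString("op_type"), rs.getString("row_data"));
                try (PreparedStatement upd = conn.prepareStatement(
                        "UPDATE p_orders SET status = 'SENT' WHERE seq_no = ?")) {
                    upd.setLong(1, seq);
                    upd.executeUpdate();
                }
            }
        }
    }

    void publish(String subject, String opType, String rowData) {
        // Stand-in for the messaging layer (EMS/RV) the adapter would use.
    }
}
```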
Please see the tutorial here:
https://tutorialspedia.com/tibco-database-adapter-step-by-step-tutorial/
Adapter for Database concepts document:
https://docs.tibco.com/pub/activematrix-adapter-for-database/6.0.0_april_2009/adbadapter/pdf/tib_adadb_concepts.pdf

Related

Do any ORM frameworks allow choreography of multiple database inserts

I have a requirement to perform the following steps (stages/phases) when updating my database.
Validate the business request to ensure it is valid.
Identify which database records need creating.
Create these records as Java objects, populated with the necessary instance variable values.
Persist multiple records in the required sequence (e.g. parent(s) then child(ren)) within one transaction.
Repeat the process for the next business request.
My entire database consists of over 600 tables.
Some business requests result in 30 to 40 tables requiring new data to be inserted.
I am looking for a process where I can instantiate all my Java entity objects and populate them with the required data, then:
Begin transaction,
choreograph inserts,
commit.
Am I going to have to be the choreographer?
My database is IBM DB2 v10 for z/OS.
My development environment is Java 7 and IBM WebSphere Application Server v8.5.5.
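To show the flow I'm after, here is a minimal JPA sketch; the Parent/Child entities are stand-ins for my real tables:

```java
import javax.persistence.*;

@Entity
class Parent {
    @Id @GeneratedValue Long id;
    // ... business fields ...
}

@Entity
class Child {
    @Id @GeneratedValue Long id;
    @ManyToOne Parent parent;
    // ... business fields ...
}

public class RequestImporter {
    // Resource-local sketch; under WAS the container would normally manage
    // a JTA transaction instead (see the answer below).
    public void importRequest(EntityManagerFactory emf) {
        EntityManager em = emf.createEntityManager();
        EntityTransaction tx = em.getTransaction();
        try {
            tx.begin();                       // Begin transaction

            Parent parent = new Parent();     // instantiate and populate...
            em.persist(parent);               // ...parent first

            Child child = new Child();
            child.parent = parent;            // FK wired through the relationship
            em.persist(child);                // ...then children

            // The provider queues these and, at flush/commit time, orders the
            // INSERTs so referenced rows (parents) go in before referencing
            // rows (children): the ordering is derived from the mappings.
            tx.commit();
        } catch (RuntimeException e) {
            if (tx.isActive()) tx.rollback();
            throw e;
        } finally {
            em.close();
        }
    }
}
```

With cascade = CascadeType.PERSIST on the relationships, a single persist of each parent root would cascade to its children, so for a 30-40 table graph you mostly map the relationships once and let the provider derive the insert order.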
z/OS has a built-in transaction manager, RRS. You can use it as your transaction manager. I think you get it automatically if you connect via WAS, but I'm not that familiar with it, so I can't say for sure. I'd suggest looking at the "DB2 for z/OS and WebSphere Integration for Enterprise Java Applications" Redbook, here: http://books.google.com/books?id=UfjHAgAAQBAJ&pg=PA589&dq=DB2+for+z/OS+and+WebSphere+Integration+for+Enterprise+Java+Applications+redbook&hl=en&sa=X&ei=luypU82THI6uyAT7pIGgDQ&ved=0CDcQ6AEwAA#v=onepage&q=DB2%20for%20z%2FOS%20and%20WebSphere%20Integration%20for%20Enterprise%20Java%20Applications%20redbook&f=false
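If the data source in WAS is XA-capable, the usual Java EE pattern is to let JTA coordinate the commit (backed by RRS on z/OS when WAS is configured that way). A minimal bean-managed sketch using the standard Java EE lookup:

```java
import javax.naming.InitialContext;
import javax.persistence.EntityManager;
import javax.transaction.UserTransaction;

public class JtaRequestImporter {
    // Bean-managed JTA sketch using the standard "java:comp/UserTransaction"
    // JNDI name. Inside an EJB you would usually inject @Resource
    // UserTransaction, or skip this entirely with container-managed transactions.
    public void importRequest(EntityManager em) throws Exception {
        UserTransaction ut = (UserTransaction)
                new InitialContext().lookup("java:comp/UserTransaction");
        ut.begin();
        try {
            // persist the parent/child graph here; with a JTA data source the
            // commit below is coordinated by the transaction manager rather
            // than by the EntityManager itself.
            ut.commit();
        } catch (Exception e) {
            ut.rollback();
            throw e;
        }
    }
}
```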

Database Snapshot or Temp Table for long running import?

The application I am working on needs to pull data from a legacy SQL system (client) and import it into my application's SQL DB (server). This is implemented using WCF. I am not able to make any schema changes to the client DB. After the initial import, subsequent imports are driven by the client's modified and created timestamp columns. I need to pull data from 3 different tables in the client DB.
Below are the solutions I am currently researching; any feedback on pros or cons would be great. As a proof of concept, my initial import over WCF with net.tcp and a page size of 100 records at a time takes 0.15 seconds for the WCF transaction and roughly 1.5-2.0 seconds to execute the insert SPROC on my server (I'm trying to optimize this some more; the SPROC has to do a few checks on the server DB before the insert, which I think is causing the hold-up).
Server Side Processing
Connect to the client DB's primary table, then for the data I require from other tables, make another request to the client to provide that data. This requires the most round trips from the server to the client, but requires the least amount of work on the client side, since it is simply sending data.
Temp Table
On my server, log the transaction start time, create a temp table on the client side that joins together all the tables I need, then return that result set to the server. After the operation completes, drop the temp table on the client. I'm leaning this way: even though I will be sending a larger result set across the pipe, my bottleneck is the INSERT statement, and if I process it all into one larger table before sending, I can do it all on the server side with one modified insert SPROC instead of multiple inserts.
3rd party tool
Use some 3rd-party tool that does data diffs to make a point-in-time copy of the client tables to my server DB, and then process it all locally on my server DB.
MS Sync Framework
I excluded this option; I can't make schema changes on the client tables, which it appears to require in order to track changes. I also had a hard time understanding custom sync providers/interfaces.
Merge Replication
Excluded this again due to not being able to make changes to the client.
One of my concerns is using the tables' created/modified timestamp columns for subsequent syncs. Most of what I have read on the topic suggests this is not best practice for data synchronization, the primary concern being transactions that start before the sync but do not commit until after it, so their timestamps make the rows look as if they were already imported (see the sketch after this question). Is there a better way to track changes?
My sync solution is really a one-way sync, in that records created on the server are actually sent down to a client SPROC for insert there; then, when importing, we compare the PK ID coming from the client to an FK ID on the server to ensure we are not causing loops or importing duplicate records. In the event of a conflict, the client always wins. Again, currently I'm leaning towards a temp table for each import session.
Feedback?
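One common mitigation for the timestamp concern above, given that the client schema can't change, is to overlap the query windows (re-read a safety margin before the last watermark) and make the server-side insert idempotent, so re-read rows are harmless. A sketch in JDBC for illustration; the question's stack is .NET/WCF, where SqlBulkCopy or a MERGE-based SPROC plays the same role, and all table/column names here are hypothetical:

```java
import java.sql.*;

public class OverlappingImport {
    // Re-read a safety margin before the last watermark so rows committed late
    // (transaction started before the previous sync, committed after) are caught.
    private static final int OVERLAP_MINUTES = 10; // tune to the longest client transaction

    public void importWindow(Connection client, Connection server,
                             Timestamp lastWatermark) throws SQLException {
        try (PreparedStatement ps = client.prepareStatement(
                "SELECT pk_id, payload, modified_ts FROM client_table " +
                "WHERE modified_ts > DATEADD(minute, ?, ?)")) {
            ps.setInt(1, -OVERLAP_MINUTES);
            ps.setTimestamp(2, lastWatermark);
            try (ResultSet rs = ps.executeQuery();
                 // Idempotent upsert on the server: re-imported rows update in
                 // place instead of duplicating (SQL Server MERGE shown; an
                 // IF EXISTS / UPDATE-else-INSERT SPROC works the same way).
                 PreparedStatement upsert = server.prepareStatement(
                     "MERGE server_table AS t " +
                     "USING (SELECT ? AS src_id, ? AS payload) AS s ON t.src_id = s.src_id " +
                     "WHEN MATCHED THEN UPDATE SET t.payload = s.payload " +
                     "WHEN NOT MATCHED THEN INSERT (src_id, payload) " +
                     "VALUES (s.src_id, s.payload);")) {
                int n = 0;
                while (rs.next()) {
                    upsert.setLong(1, rs.getLong("pk_id"));
                    upsert.setString(2, rs.getString("payload"));
                    upsert.addBatch();
                    if (++n % 100 == 0) upsert.executeBatch(); // page-sized batches
                }
                upsert.executeBatch();
            }
        }
    }
}
```

Batching the upserts also attacks the stated bottleneck: one round trip per page instead of one per row.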

Can my SQL Server send messages to an ActiveMQ server without any Java app in between

We are using SQL Server 2012. Whenever the database is modified, we want it to trigger a message that can be stored in a queue using ActiveMQ.
1. We are not sure how to code a trigger on the database so that it sends a message.
2. Can we make messages generated from the database get queued in ActiveMQ directly, without any Java interface in between? I would like to know whether we can achieve this or not.
3. Are there any other ways to set up communication between SQL Server and ActiveMQ, say between database services and ActiveMQ services (does ActiveMQ have that)?
PS: I am a new user of ActiveMQ. Any leads on these queries are appreciated.
Please don't ask SQL Server to do this. SQL Server is designed to store data; you are asking too much of it. Depending on how many places you want to add to this queue from, I would choose one of the following solutions:
If you want to add to this queue from a bunch of different places and don't want to change existing code, create an application to move items from SQL Server to ActiveMQ; the items in SQL Server can be populated by a trigger (a relay along these lines is sketched after this list).
If there are only a few places that add to this queue, add that logic to the application so that every write to SQL Server will also write to ActiveMQ.
If you really still don't want to modify any code, you can configure ActiveMQ to use SQL Server as its persistence database. Then you can modify its data and hope that it plays nice. This is definitely not preferable. I would rather put CLR code into SQL Server to push data to ActiveMQ.
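For the first option, a minimal relay might look like the sketch below. The dbo.outbox table (fed by your trigger), the queue name, and the connection strings are all hypothetical:

```java
import java.sql.*;
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

// Sketch of option 1: a trigger on the watched tables inserts a row into a
// hypothetical dbo.outbox table; this small relay polls it and forwards each
// row to ActiveMQ, deleting the row only after a successful send.
public class OutboxRelay {
    public static void main(String[] args) throws Exception {
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
        try (java.sql.Connection db = DriverManager.getConnection(
                "jdbc:sqlserver://localhost;databaseName=mydb", "user", "pass")) {
            javax.jms.Connection jms = cf.createConnection();
            jms.start();
            Session session = jms.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(
                    session.createQueue("db.events"));

            while (true) {
                try (Statement st = db.createStatement();
                     ResultSet rs = st.executeQuery(
                         "SELECT TOP 100 id, body FROM dbo.outbox ORDER BY id")) {
                    while (rs.next()) {
                        producer.send(session.createTextMessage(rs.getString("body")));
                        try (PreparedStatement del = db.prepareStatement(
                                "DELETE FROM dbo.outbox WHERE id = ?")) {
                            del.setLong(1, rs.getLong("id"));
                            del.executeUpdate();
                        }
                    }
                }
                Thread.sleep(5000); // poll every 5 seconds
            }
        }
    }
}
```

Note the delete happens only after a successful send, so delivery is at-least-once: if the relay dies between the send and the delete, the message repeats, and consumers should tolerate duplicates.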

Does it matter if time goes out of sync on merge replication clients?

I'm quite new to merge replication, but here's a scenario: if I have a server and two clients with pull subscriptions, does it matter if the time on those machines goes out of sync with each other or with the server?
When I modify some data on one of those clients, is the time stored against that change?
I am using MS SQL 2012.
I will be using a setup as described here:
http://msdn.microsoft.com/en-us/library/ms151329(v=sql.105).aspx
OK, I've found that merge replication doesn't store the time against a specific change.
If you have client subscriptions then the first person to sync will win in the case of a conflict.
If you have server subscriptions then the priority of those server subscriptions will determine who will win.
Neither of these options rely on time.
I currently have a 3rd-party application in production that uses merge replication. This includes a SQL 2008 R2 database server as publisher with ~100 desktop clients interacting with the database via the application, plus another 65 laptops with SQL Server Express 2008 R2 as subscribers to the publisher. The laptops synchronize via wireless when they get to their home base locations, with any deltas being pushed to the subscribers.
Now to get to your question. Time on the clients, down to a millisecond, likely won't make any difference. What will make a difference is whether or not the publisher and subscriber are out of sync for more than a specific period of time. Using SQL Server Management Studio, look at the server publishing the database. Expand the Replication section (between Server Objects and Management), then under Local Publications right-click on the publication and select Properties.
Under the General page, look at the Subscription expiration area. The default from Microsoft is 14 days and can be moved up/down as you need, or you can specify that it doesn't expire at all. Note the phrase "Replication metadata is never kept longer than this amount of time."
In other words, if one of your subscribers is out of contact for more than 14 days, what changed won't matter: you will have to unsubscribe, then re-subscribe that unit. That's because of the gap between the oldest metadata on the publisher and the newest record/metadata on the subscriber. No overlap, no sync-up.
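If you'd rather script this than use the Properties dialog, the replication stored procedures expose the same setting. A JDBC sketch; the publication name is hypothetical, it runs against the publication database, and the retention column/property names should be verified against Books Online for your version:

```java
import java.sql.*;

public class RetentionCheck {
    // Reads and (optionally) changes a merge publication's retention period.
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:sqlserver://pubserver;databaseName=pubdb", "user", "pass")) {

            // sp_helpmergepublication returns one row per publication,
            // including its retention (in days).
            try (PreparedStatement ps = conn.prepareStatement(
                    "EXEC sp_helpmergepublication @publication = ?")) {
                ps.setString(1, "MyPublication");
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println("retention = " + rs.getInt("retention"));
                    }
                }
            }

            // Bump retention to 21 days; sp_changemergepublication takes the
            // new value as text.
            try (PreparedStatement ps = conn.prepareStatement(
                    "EXEC sp_changemergepublication @publication = ?, " +
                    "@property = 'retention', @value = ?")) {
                ps.setString(1, "MyPublication");
                ps.setString(2, "21");
                ps.executeUpdate();
            }
        }
    }
}
```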

SQL Server 2005 Replication and different indexes on the subscriber

We have a SQL Server database setup. We are setting up a replication scenario where we have one publisher and one subscriber. The subscriber will be used as a reporting platform so that we can run all the BI queries we need without having to hit the server that is receiving all the data from our clients. The subscriber is set to pull data in from the distributor.
We don't have many indexes on the publisher DB, but we will need them on the reporting server (i.e. the subscriber).
My question is: will SQL Server a) allow this scenario, noting that no changes on the subscriber are pushed back to the publisher; b) if a snapshot is run, I am presuming it will overwrite our indexes; can I stop this from happening?; c) is this a wise course of action?
Thanks.
The scenario you explain is a common one and one of the benefits of using replication. No changes or indexes you create on the subscriber will go to the publisher, as it is a one-way process. If you have to re-run the snapshot agent for some reason and re-initialize the subscriber, then you will need to re-create your indexes on the subscriber. There are a lot of things you can do to minimize the need to re-initialize the subscriber, but some of them require manual steps. Generally, if you keep all of your index creation scripts for the subscriber up to date, it usually isn't a big deal to re-run them if needed.
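One way to make that re-run painless is to keep the subscriber's index DDL in a guarded script that can be re-applied after any re-initialization. A sketch; the index and table names are hypothetical:

```java
import java.sql.*;

public class SubscriberIndexes {
    // Re-applies reporting indexes on the subscriber after a snapshot
    // re-initialization. Each entry is {index name, CREATE INDEX statement}.
    private static final String[][] INDEXES = {
        {"IX_Sales_ReportDate",
         "CREATE INDEX IX_Sales_ReportDate ON dbo.Sales (ReportDate)"},
        {"IX_Sales_Region",
         "CREATE INDEX IX_Sales_Region ON dbo.Sales (Region) INCLUDE (Amount)"},
    };

    public static void apply(Connection subscriber) throws SQLException {
        for (String[] ix : INDEXES) {
            // sys.indexes exists from SQL Server 2005 on, so this guard works there.
            try (PreparedStatement ps = subscriber.prepareStatement(
                    "SELECT 1 FROM sys.indexes WHERE name = ?")) {
                ps.setString(1, ix[0]);
                try (ResultSet rs = ps.executeQuery()) {
                    if (rs.next()) continue; // already present, skip
                }
            }
            try (Statement st = subscriber.createStatement()) {
                st.executeUpdate(ix[1]);
            }
        }
    }
}
```

Because each CREATE INDEX is guarded by an existence check, the script is safe to run after every sync or re-initialization, whether or not the indexes survived.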