I am using Sybase IQ 15 and am looking for a mechanism to replicate IQ tables through Replication Server.
How can I find out whether data has changed in IQ (there are no triggers in IQ)?
I am able to replicate tables that have timestamp and id columns.
SAP IQ's transaction log cannot be replicated by any tool; even the vendor (SAP) does not offer a program to do that.
If you want to replicate changes from SAP IQ, you need to provide some kind of CDC logic of your own. For example, you can add a timestamp to every row and periodically run a query that copies the rows modified since the last run (see the sketch below).
Alternatively, you can periodically run a full export of the table data.
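A minimal sketch of the timestamp approach, assuming a last_modified column that the loading process maintains (the table and column names are hypothetical):
-- @last_run is the high-water mark saved at the end of the previous cycle
SELECT *
FROM my_source_table
WHERE last_modified > @last_run
-- afterwards, persist MAX(last_modified) from this batch as the new high-water mark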
This can be achieved through system views: whenever a data modification happens on a Sybase IQ table, a timestamp is captured in the system view SYS.SYSIQTAB.
SQL to find the time of the last data modification in a table:
SELECT A.table_id,
       T.table_name,
       DATEFORMAT(A.update_time, 'mm/dd/yyyy hh:mm:ss.nnnnnn') AS LastModifiedTime
FROM SYS.SYSIQTAB AS A
JOIN SYS.SYSTABLE AS T ON A.table_id = T.table_id
WHERE T.table_name = 'TableName'
We have Oracle 11gR2 as the primary source database and SAP HANA as the target. We are trying to test SAP (Sybase) Replication Server for replication from the primary Oracle to the target HANA.
We need to add extra columns such as RECORD_DATE and LAST_MODIFIED_DATE to the HANA tables. Is it possible to add transformations or extra columns to target tables that are not present in the primary database?
Best Regards
Are you thinking of adding these fields during replication, or do you want to merge them after replication?
If after replication, simply go to HANA Studio and create an Information View to get the merged, or simply joined, data from the different tables.
If a table is not present in the source system, then instead of replicating it, make an Excel flat file and import it into HANA using the Import option on the right-hand side of HANA Studio.
The only way to alter a table definition in HANA is the ALTER TABLE SQL statement; there are no other shortcuts. Or just import and make a join.
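For reference, adding the two audit columns might look like this (the schema and table names are hypothetical; HANA's ALTER TABLE ... ADD takes the new columns in parentheses):
ALTER TABLE "MYSCHEMA"."ORDERS" ADD (RECORD_DATE TIMESTAMP, LAST_MODIFIED_DATE TIMESTAMP)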
I'm assuming you want to capture auditing data for records inserted/updated by the Repserver maintenance user (in the HANA database).
While the column default (for inserts; as discussed with Shivam) will work, for updates you've got a few options:
an update trigger on the HANA table [I don't work with HANA so I don't know if this is doable]
defining the update column as a (materialized) computed column, with the associated function being responsible for obtaining the current date/time when other columns in the table are modified [while this is doable in Sybase ASE, I don't know if this is doable in HANA]
(in Repserver) create a custom function string for the rs_update function on this table which emulates a standard rs_update function string, with the addition of setting LAST_MODIFIED_DATE = getdate() (replace getdate() with HANA's equivalent for the current date/time). There are a couple of different ways to do this depending on the SRS version, what's doable with HANA-specific function strings, and personal preference; a rough sketch follows, though it's a bit much to go into if a custom function string is out of the question or you've already got an acceptable solution.
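A very rough sketch of that idea, assuming SRS function-string syntax along these lines (the repdef name, function-string class, table, and column names are all placeholders; check the Replication Server reference for your version for the exact form):
alter function string MyTable_repdef.rs_update
for my_hana_fstring_class
output language
'update MyTable
    set Col1 = ?Col1!new?,
        LAST_MODIFIED_DATE = CURRENT_TIMESTAMP
  where KeyCol = ?KeyCol!old?'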
We developed a set of operational reports a long time ago based on SQL Server. We have now added a third-party source that uses Oracle, and we need to modify our existing SQL queries to pull part of the data from Oracle as well. The tricky part of the requirement is that tables from different sources belong to a single query,
for example
SELECT A.ID, B.Name
FROM A            -- belongs to SQL Server
INNER JOIN B      -- belongs to Oracle
    ON A.ID = B.ID
I know SQL Server has a linked server facility, which allows us to grab table B into some temp table and then join the SQL Server table with the temp table.
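For example, a linked-server version of the query above might look like this (ORACLE_LINK and the Oracle schema name are hypothetical):
SELECT A.ID, B.NAME
FROM dbo.A AS A                                   -- local SQL Server table
INNER JOIN OPENQUERY(ORACLE_LINK,
    'SELECT ID, NAME FROM APP_SCHEMA.B') AS B     -- pass-through query to Oracle
    ON A.ID = B.ID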
I just want to know whether there is a better approach to accomplish this, as the linked-server route would be very difficult to manage with complex queries, especially subqueries and recursive queries.
Does creating a staging table with all the required fields from Oracle and then using it in the queries make sense? We are dealing with operational data (though in most cases it is one day old, so we can do this).
We are using SSIS, SSRS and SQL Server 2008 R2 as our development environment.
Thanks.
I have a vendor SQL Server 2008 db which I am trying to add on to with additional tables etc. to customize for my order-processing .NET 3.5 site. The vendor db should not be altered. However, I need to record the current order status, which is not included in the vendor's db.
Currently I'm using a VIEW with a CASE to get the Status based on data in the Orders table. To improve performance I'd like to create a new Status table with OrderID & Current_Status.
To keep the Status table up-to-date is there an alternative to frequently running a script which will look at all Orders and update the Status table accordingly?
Creating a job or other polling-type solution is probably fine if the load imposed by the query is not high and if you can tolerate some delay before the data is synched.
If you can tolerate at least some delay, but the query to check the table is extensive, or if you have several tables you want to perform this operation on, you could also use Change Tracking, since you have SQL Server 2008.
Here's a link on how to use it.
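A minimal sketch of what that could look like, with hypothetical database, table, and column names (Change Tracking requires a primary key on the tracked table):
-- enable change tracking at the database and table level
ALTER DATABASE OrdersDb
    SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON)
ALTER TABLE dbo.Orders ENABLE CHANGE_TRACKING
-- on each run, pull only the rows changed since the last synced version
DECLARE @last_sync bigint
SET @last_sync = 0   -- persist CHANGE_TRACKING_CURRENT_VERSION() after each run
SELECT ct.OrderID, ct.SYS_CHANGE_OPERATION
FROM CHANGETABLE(CHANGES dbo.Orders, @last_sync) AS ct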
Create an SSIS job to check orders and update the status table, and schedule it to run hourly/daily (as per your requirement). Creating SSIS jobs will not affect any of your vendor's processes or objects.
If performance is your worry, how about an indexed view?
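A minimal sketch, assuming the status can be derived deterministically from columns on the Orders table (the view, index, and column names are hypothetical):
CREATE VIEW dbo.vOrderStatus
WITH SCHEMABINDING
AS
SELECT OrderID,
       CASE WHEN ShippedDate IS NOT NULL THEN 'Shipped'
            WHEN PaidDate    IS NOT NULL THEN 'Paid'
            ELSE 'Open'
       END AS Current_Status
FROM dbo.Orders
GO
-- the unique clustered index materializes the view; SQL Server keeps it in sync
CREATE UNIQUE CLUSTERED INDEX IX_vOrderStatus ON dbo.vOrderStatus (OrderID)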
I have a PostgreSQL database that stores real-time data from sensors in a specific table (every 30sec).
What I want to do, is to get periodically the data from the remote PostgreSQL database (for instance every 30sec) and store them in SQL Server 2005 to manipulate them locally. I don't care about having the two databases with duplicate tables. Actually this is what I want to achieve!
So far, I have the PostgreSQL database set up as a Linked Server in SQL Server, and I can query and retrieve the sensor data. However, I prefer to store the data in my SQL Server for performance reasons.
Solution so far:
Run SELECT ... OPENQUERY statements against the linked PostgreSQL server and insert the results into my table in SQL Server. Repeat this periodically, storing fresh data only (e.g. rows with a larger timestamp), as sketched below.
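In T-SQL, one run of that pull might look roughly like this (PG_LINK and the table/column names are hypothetical):
INSERT INTO dbo.SensorData (SensorId, Reading, Ts)
SELECT src.sensor_id, src.reading, src.ts
FROM OPENQUERY(PG_LINK,
    'SELECT sensor_id, reading, ts FROM sensor_data') AS src
WHERE src.ts > (SELECT ISNULL(MAX(Ts), '19000101') FROM dbo.SensorData)
-- for large tables, build the pass-through string dynamically so the
-- timestamp filter runs on the PostgreSQL side instead of locally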
I assume that my proposed solution is not ideal. I want to know what are the best practices to achieve this synchronization between the two databases.
Thank you in advance!
If you don't want to write your own code (implementation) to do that, you can use SymmetricDS to sync the table from PostgreSQL to MSSQL.
I need to quickly implement a read-only database containing data pulled from two identically structured live databases.
The live dbs are actually company dbs from a Dynamics accounting system so I'm happy for any Dynamics specific advice but this is mostly a SQL question. It's a fairly old version of Dynamics from before Great Plains was acquired by Microsoft. This is on SQL Server 2000.
We have reports and applications which access the Dynamics data. These apps are designed to look at one company db. Now we need to add another. It's appropriate that most of these reports and apps see combined data. They don't really care which company an order or invoice exists in. They only look at a small number of the tables.
It seems to me that the simplest solution is to create a reports only db with combined data. Preferably, we need an efficient way to update this db with changes several times a day.
I'm a developer, not a db expert but here's my plan:
Create the combined reporting db with the required tables initially with the same table structure as the live dbs.
All Dynamics tables seem to have an int identity column called DEX_ROW_ID. I'm not sure what it's used for (it's not indexed), but it seems like the obvious generic way to uniquely identify rows. In the reporting db I will change it to a normal int (not an identity). I will create a unique index on DEX_ROW_ID in all dbs.
Dynamics does not have timestamps so I will add a timestamp column to tables in the live dbs and a corresponding binary(8) column in the reporting db. I'm assuming and hoping that Dynamics won't be upset by the additional index and column.
Add an int CompanyId column to the reporting db tables and add it to the end of any unique indexes. Most data will be naturally unique even without that; i.e., order and invoice numbers etc. will be different for the two live dbs. We may need to make some minor changes to the applications, but I'm not expecting to do much other than point them at the new reporting db. (A sketch of the per-table DDL follows.)
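The per-table preparation might then look something like this (MyTable is a placeholder; in SQL Server 2000 the auto-updating rowversion type is called timestamp):
USE Live1
ALTER TABLE dbo.MyTable ADD TS timestamp           -- auto-updated on every change
CREATE UNIQUE INDEX IX_MyTable_DexRowId ON dbo.MyTable (DEX_ROW_ID)
USE Reports
ALTER TABLE dbo.MyTable ADD TS binary(8) NULL      -- copied from live, not auto-updated
ALTER TABLE dbo.MyTable ADD CompanyId int NULL     -- 1 = Live1, 2 = Live2
CREATE UNIQUE INDEX IX_MyTable_DexRowId ON dbo.MyTable (DEX_ROW_ID, CompanyId)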
Assuming my reporting db is called Reports, the live dbs are Live1 and Live2, the timestamp column is called TS and all dbs are on the same server ... here's my first attempt at an update script for copying the changes in one table called MyTable in Live1 to the reporting db.
USE Reports
CREATE TABLE #Changes
(
ReportId int,
LiveId int
)
/* Collect in a temp table the ids of rows which have been deleted or changed
in the live db; L.DEX_ROW_ID will be null if the row has been deleted */
INSERT INTO #Changes
SELECT R.DEX_ROW_ID, L.DEX_ROW_ID
FROM MyTable R LEFT OUTER JOIN Live1.dbo.MyTable L ON L.DEX_ROW_ID = R.DEX_ROW_ID
WHERE R.CompanyId = 1 AND (L.DEX_ROW_ID IS NULL OR L.TS <> R.TS)
-- the parentheses matter here: AND binds tighter than OR
/* Delete rows that have been deleted or changed on the live db
I wonder if using join syntax would run better than the subquery. */
DELETE FROM MyTable
WHERE CompanyId = 1 AND DEX_ROW_ID IN (SELECT ReportId FROM #Changes)
/* Recopy rows that have changed in the live db */
INSERT INTO MyTable
SELECT 1 AS CompanyId, * FROM Live1.dbo.MyTable L
WHERE L.DEX_ROW_ID IN (SELECT ReportId FROM #Changes WHERE LiveId IS NOT NULL)
/* Copy the rows that are new in the live db
(ISNULL covers the first run, when the reporting table has no rows for this company) */
INSERT INTO MyTable
SELECT 1 AS CompanyId, * FROM Live1.dbo.MyTable
WHERE DEX_ROW_ID > (SELECT ISNULL(MAX(DEX_ROW_ID), 0) FROM MyTable WHERE CompanyId = 1)
Then do the same for the Live2 db, and repeat for every table in Reports. I know I should use a parameter @CompanyId instead of the literal, but I can't do that for the live db name, so I might generate these scripts dynamically with a C# program or something.
I'm looking for any advice, suggestions or critique on what I'm doing here. I know it won't be atomic. Things could be happening on the live db while this script runs. I think we can live with that. We'll probably do a full copy either nightly or weekly when nothing is happening on the live dbs.
We need to favor performance over elegance or perfection. Some initial testing has the first query (the one with the TS comparisons) running at about 30 seconds for the biggest table, so I'm optimistic that this is going to work, but I'd also like to know if I'm missing something obvious or not seeing the forest for the trees.
We don't really want to deal with log files on the reporting db. Can we just set it to the simple recovery model and forget about logs?
Thanks
I think there are a couple open questions here.
Do you need these reports to be near-real-time, or is this the sort of reporting that could live with daily updates? I'll assume you need up-to-the-minute data.
Have you considered querying the databases directly and merging the data per-report on the fly? You'll have to do a lot of reporting to duplicate the effort that's going to go into designing, creating, and supporting a real-time merged replicated database.
Thirty seconds is (IMHO) unacceptable for any single query against a production database. There could be any number of tuning-related reasons for taking this long, but it at least means you're going to need serious professional SQL Server optimization resources (i.e. people). And if this is a problem for the queries for reports, it doesn't bode well for the queries to maintain a separate database for reporting.
Tuck into the back of your mind the consideration that, if you need to consolidate to a single database, it's worth asking whether it should be an OLAP database rather than a mirror. The mirror will be quicker and easier, but OLAP would be far more flexible and powerful in the long term, and it might be worth going the whole way from the beginning.
The last thing I'd want to do is write a custom update script. Try these bulletproof methods first:
Let's hope your production databases are backed up. Restore those backups to the reporting server every night. You can automate restores with the RESTORE command, which works with a file on a network server (see the sketch after this list).
Use SQL Server replication to push data from the live servers to the backend.
Schedule a DTS package every night to import the entire production database.
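For the restore option, the nightly job might run something along these lines (the database name, backup path, and logical file names are hypothetical):
RESTORE DATABASE Live1_Reports
FROM DISK = '\\backupserver\sqlbackups\Live1_full.bak'
WITH MOVE 'Live1_Data' TO 'D:\SQLData\Live1_Reports.mdf',
     MOVE 'Live1_Log'  TO 'D:\SQLData\Live1_Reports_log.ldf',
     REPLACE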
This might seem like brute force, but since you're copying a 2000-era database, brute force isn't a problem on today's hardware. As an added advantage, these methods can be supported by a sysadmin instead of a developer.
Method 1 has the added advantage of serving as backup verification. :)