How to get Sybase replication definitions code?

How to get the code of all replication definitions of a Sybase server?
If possible and since I'm new to this, give me a little explanation on them as well.

Much of the information about your replication system can be found using the various help functions in the RSSD/ERSSD. The specific commands will vary somewhat based on the kind of Replication you are using (Warm Standby, Function String, Table, Procedure, mixed).
rs_helpdb [ dataserver, database ]: Shows all databases known by the RSSD. If you specify dataserver/database, it will only show information for that connection. Output lists Server/Database, rep dbid, primary RS, errorclass, repserver errorclass, function class and status.
rs_helprepdb: Shows replicate databases that have subscriptions to primary data in the current RS, or shows the specified DS/DB.
Other commands that will help:
rs_helprep: Displays information about replication definitions.
rs_helpsub: Displays information about subscriptions.
rs_helpreptable: Displays information about replication definitions created against a primary table.
rs_helppub: Displays information about publications.
rs_helppubsub: Displays information about publication subscriptions and article subscriptions.
rs_helpdbrep: Displays information about database replication definitions associated with the current Replication Server.
rs_helpdbsub: Displays information about database subscriptions associated with the replicate data server.
These commands all pull information from the tables in the RSSD, so you can also poke around the RSSD directly to find the information you are looking for. The RS system table diagram in the documentation is a helpful map.
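A minimal sketch of running these against the RSSD from isql (the server, database and replication definition names below are placeholders):
isql -Usa -SMY_ASE_SERVER -DMY_RSSD_DB
1> rs_helprep
2> go
1> rs_helprep my_repdef
2> go
1> rs_helpsub
2> go
The first call lists every replication definition known to this Replication Server; passing a name shows the details of a single definition, and rs_helpsub does the same for subscriptions.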
Also, if you are going to do any regular replication work, I highly recommend Rob Verschoor's The Complete Sybase Replication Server Quick Reference Guide (www.sypron.nl), as it has more than 90% of what you need to create and manage a Sybase replication environment.

You can use Replication Monitoring Services (RMS).
Various stored procedures and shell scripts for reverse-engineering RepServer objects can be downloaded from http://repserver.codexchange.sybase.com (link 'repserver', folder Admin Tools/Scripts)
PowerDesigner provides reverse-engineering but might require a special license

Related

Azure SQL view contents of secondary (Geo-Replicated) Database

I have set up an Azure database instance which supposedly replicates into a 'read only' secondary database using standard geo-replication. In the Azure portal I can see the status of the replication is 'online' and 'Secondary type' is 'Offline', which appears to be normal.
My question is, is there a way for me to see the actual contents of the secondary database, to ensure the replication is actually working as planned?
I cannot 'Manage' the database in the portal. I can connect to the instance in SQL Management Studio, where I can see the database but expanding tables / stored procedures shows nothing (a bit like connecting to a secure database using the non-secure connection string). I am also not able to run any queries against it as it gives me 'Connection to an offline secondary database is not allowed.'
I've searched this site and did a web search for an answer, but can't seem to find one. Am I supposed to blindly rely on the fact that Azure is performing the replication correctly (with no way to double-check), or am I missing something here?
Many thanks in advance for any light you are able to shed on this.
Standard Geo-Replicated Secondary DBs are offline copies that do not accept client connections (so there is no way to query the data directly). If you need a readable Geo-Replicated Secondary then you must use the Active Geo-Replication available for Premium DBs.
Even though you can't query Standard Geo-Replicated DBs directly, you can use the DMVs in the Master to determine if the continuous copy is working correctly.
On the Master try the following:
SELECT * FROM sys.dm_database_copies
SELECT * FROM sys.dm_continuous_copy_status
I hope this helps!
For more information about Standard Geo-Replication, Active Geo-Replication, or checking the activity of the continuous copy, use the following links:
Standard Geo-Replication: https://msdn.microsoft.com/en-us/library/azure/Dn758204.aspx
Active Geo-Replication: https://msdn.microsoft.com/en-us/library/azure/dn741339.aspx
Continuous Copy DMV Blog: http://www.sqlservercentral.com/blogs/pie-in-the-sky/2014/12/25/monitoring-geo-replication-in-sql-azure-using-dmvs/
I tried to repro your situation and I think I understand the confusion.
When the Secondary Type is "Offline", it is a Standard Geo-Replicated secondary. The Primary Databases page is confusing, but clicking through to the secondary should show that it is offline.
As far as understanding whether the continuous copy is working, run the script below against the primary (I was mistaken last time, sorry).
SELECT * FROM sys.dm_continuous_copy_status
You should see the linked server, database, and Replication State.
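As a rough sketch, a query like the following on the primary shows the partner and the copy state (the column names here are typical for this DMV but may differ by version, so treat them as an assumption):
SELECT partner_server, partner_database, replication_state_desc, last_replication
FROM sys.dm_continuous_copy_status;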
As before, if you need to read from your secondary, you will have to create a Premium Active Geo-Replicated secondary.
Hope this helps!

How to track which tables the database wrote data to?

I use "Lexware Warenwirtschaft Premium 2014" (a well-known merchandise management software in Germany). It uses Sybase as a database. I connect to the database by using a ODBC connection(SQL Anywhere driver). The database has 800+ tables. For example when Lexware creates a new Article, it writes data into different tables.
Is there a way to track into which tables Lexware wrote data?
As an ad-hoc measure you could switch on ODBC tracing, and then review the contents.
http://support.microsoft.com/kb/274551 tells you how to do this from a Windows client, and you can find similar information for Linux/Unix and other clients.
You'd then have to parse the trace file to see which tables the statements wrote to. The first step would probably be to isolate all the SQLPrepare and SQLExecDirect calls, and check them for INSERT, UPDATE and other relevant Sybase statements.
Note that this is not something you'd want as an ongoing solution, just a way to find out what an ODBC client does if you do not have access to e.g. logging information on the database itself. However, the trace slows down execution and would generate a very large trace file if you left it running for any significant period.
I don't think so. Whatever this program does behind the interface is hidden in its binaries and unreadable for humans, so you can't read the code to see which tables are altered.
You might be able to figure out which table was edited last, depending on the SQL server and its version.

SQL Server 2005: Replication, varbinary

Scenario
In our replication scheme we replicate a number of tables, including a photos table that contains binary image data. All other tables replicate as expected, but the photos table does not. I suspect this is because of the larger amount of data in the photos table or perhaps because the image data is a varbinary field. However, using smaller varbinary fields did not help.
Config Info
Here is some config information:
Each image could be anywhere from 65-120 Kb
A revision and approved copy is stored along with thumbnails, so a single row may approach ~800Kb
I once had trouble with the "max text repl size" configuration option, but I have set it to the maximum value using sp_configure and RECONFIGURE WITH OVERRIDE (see the example after this list)
Photos are filtered based on a “published” field, but so are other working tables
The databases are using the same local db server (in the development environment) and are configured for transactional replication
The replicated database uses a “push” subscription
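For reference, a minimal sketch of the "max text repl size" setting mentioned above (2147483647 is the maximum value on SQL Server 2005):
EXEC sp_configure 'max text repl size', 2147483647;
RECONFIGURE WITH OVERRIDE;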
Also, I noticed that sometimes regenerating the snapshot and reinitializing the subscription caused the images to replicate. Taking this into consideration, I configured the snapshot agent to regenerate the snapshot every minute or so for debugging purposes (obviously this is overkill for a production environment). However, this did not help things.
The Question
What is causing the photos table not to replicate while all the other tables replicate without a problem? Is there a way around this? If not, how would I go about debugging further?
Notes
I have used SQL Server Profiler to look for errors as well as the Replication Monitor. No errors exist. The operation just fails silently as far as I can tell.
I am using SQL Server 2005 with Service Pack 3 on Windows Server 2003 Service Pack 2.
[update]
I have found out the hard way that Philippe Grondier is absolutely right in his answer below. Images, videos and other binary files should not be stored in the database. IIS handles these files much more efficiently than I can.
I do not have a straight answer to your problem, as our standard policy has always been 'never store (picture) files in (database) fields'. Our solution, that applies not only to pictures but to any kind of file, or document, is now standard:
We have a "document" table in our database, where document/file names and relative folders are stored (in order to get unique document/file names, we generate them from the primary key/uniqueIdentifier value of the 'Document' table).
This 'document' table is replicated among our different subscribers, like all other tables.
We have a "document" folder and
subfolders, available on each of our
database servers.
Document folders are then replicated independently from the database, with some file-and-folder replication software (Allway Sync is one option).
The main publisher's folders are fully accessible through FTP; a user trying to read a document that is not (yet) available on his local server is offered a download from the main server through an FTP client (such as CoreFTP and its command-line options).
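A minimal sketch of the kind of 'document' table described above (table and column names are illustrative, not our actual schema):
CREATE TABLE Document (
    DocumentId   UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID() PRIMARY KEY,
    OriginalName NVARCHAR(255) NOT NULL,  -- name as supplied by the user
    RelativePath NVARCHAR(400) NOT NULL,  -- subfolder under the shared "document" root
    -- unique physical file name derived from the primary key value
    StoredName   AS (CAST(DocumentId AS NVARCHAR(36)) + '.dat')
);
The computed StoredName column guarantees a unique file name per row, which is what makes replicating the folders independently from the database safe.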
With an images table like that, have you considered moving that article to a one-way (or two-way, if you like) merge publication? That may alleviate some of your issues.

Log changes made to all fields in a table to another table (SQL Server 2005)

I would like to log changes made to all fields in a table to another table. This will be used to keep a history of all the changes made to that table (Your basic change log table).
What is the best way to do it in SQL Server 2005?
I am going to assume the logic will be placed in some Triggers.
What is a good way to loop through all the fields checking for a change without hard coding all the fields?
As you can see from my questions, example code would be very much appreciated.
I noticed SQL Server 2008 has a new feature called Change Data Capture (CDC). (Here is a nice Channel9 video on CDC). This is similar to what we are looking for, except we are using SQL Server 2005, already have a log table layout in place, and are also logging the user who made the changes. I also find it hard to justify writing out the before and after image of the whole record when only one field might change.
Our current log file structure in place has a column for the Field Name, Old Data, New Data.
Thanks in advance and have a nice day.
Updated 12/22/08: I did some more research and found these two answers on Live Search QnA
You can create a trigger to do this. See
How do I audit changes to SQL Server data.
You can use triggers to log the data changes into the log tables. You can also purchase Log Explorer from www.lumigent.com and use that to read the transaction log to see what user made the change. The database needs to be in full recovery for this option however.
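For reference, switching a database to full recovery is a one-liner (MyDb is a placeholder name, and a full backup is needed afterwards to start the log chain):
ALTER DATABASE MyDb SET RECOVERY FULL;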
Updated 12/23/08: I also wanted a clean way to compare what changed and this looked like the reverse of a PIVOT, which I found out in SQL is called UNPIVOT. I am now leaning towards a Trigger using UNPIVOT on the INSERTED and DELETED tables. I was curious if this was already done so I am going through a search on "unpivot deleted inserted".
The posting "Using update function from an after trigger" had some different ideas, but I still believe UNPIVOT is going to be the route to go.
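To illustrate the idea, here is a rough sketch of such a trigger, assuming a table MyTable (Id, Description, Phone) and a log table ChangeLog; all names are hypothetical, and the columns must be listed explicitly and cast to a common type because UNPIVOT requires it:
CREATE TRIGGER trg_MyTable_Audit ON MyTable
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- Turn the inserted/deleted pseudo-tables into (Id, FieldName, FieldValue) rows
    WITH NewVals AS (
        SELECT Id, FieldName, FieldValue
        FROM (SELECT Id,
                     CAST(Description AS NVARCHAR(1000)) AS Description,
                     CAST(Phone       AS NVARCHAR(1000)) AS Phone
              FROM inserted) i
        UNPIVOT (FieldValue FOR FieldName IN (Description, Phone)) AS u
    ),
    OldVals AS (
        SELECT Id, FieldName, FieldValue
        FROM (SELECT Id,
                     CAST(Description AS NVARCHAR(1000)) AS Description,
                     CAST(Phone       AS NVARCHAR(1000)) AS Phone
              FROM deleted) d
        UNPIVOT (FieldValue FOR FieldName IN (Description, Phone)) AS u
    )
    -- Log only the fields whose values actually changed
    INSERT INTO ChangeLog (RecordId, FieldName, OldData, NewData, ChangedBy, ChangedAt)
    SELECT n.Id, n.FieldName, o.FieldValue, n.FieldValue, SUSER_SNAME(), GETDATE()
    FROM NewVals n
    JOIN OldVals o ON o.Id = n.Id AND o.FieldName = n.FieldName
    WHERE ISNULL(n.FieldValue, '') <> ISNULL(o.FieldValue, '');
END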
Quite late but hopefully it will be useful for other readers…
Below is a modification of my answer I posted last week on a similar topic.
Short answer is that there is no “right” solution that would fit all. It depends on the requirements and the system being audited.
Triggers
Advantages: relatively easy to implement, with a lot of flexibility in what is audited and how the audit data is stored, because you have full control.
Disadvantages: It gets messy when you have a lot of tables and even more triggers. Maintenance can get heavy unless there is some third party tool to help. Also, depending on the database it can cause a performance impact.
Creating audit triggers in SQL Server
Log changes to database table with trigger
CDC
Advantages: Very easy to implement, natively supported
Disadvantages: Only available in enterprise edition, not very robust – if you change the schema your data will be lost. I wouldn’t recommend this for keeping a long term audit trail
Reading transaction log
Advantages: all you need to do is put the database in full recovery mode, and all the info will be stored in the transaction log.
Disadvantages: You need a third party log reader in order to read this effectively
Read the log file (*.LDF) in sql server 2008
SQL Server Transaction Log Explorer/Analyzer
Third party tools
I’ve worked with several auditing tools from ApexSQL but there are also good tools from Idera (compliance manager) and Krell software (omni audit)
ApexSQL Audit – Trigger-based auditing tool. Generates and manages auditing triggers.
ApexSQL Log – Allows auditing by reading transaction log
Under SQL '05 you actually don't need to use triggers. Just take a look at the OUTPUT clause. OUTPUT works with inserts, updates, and deletes.
For example:
-- #TempTable must already exist with columns matching the OUTPUT list
INSERT INTO mytable(description, phone)
OUTPUT INSERTED.description, INSERTED.phone INTO #TempTable
VALUES('blah', '1231231234')
Then you can do whatever you want with the #TempTable, such as inserting those records into a logging table.
As a side note, this is an extremely easy way of capturing the value of an identity field.
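For example, a small sketch of that identity-capturing idea (mytable and #NewIds are assumed names, and mytable is assumed to have an identity column called id):
CREATE TABLE #NewIds (id INT);

INSERT INTO mytable (description, phone)
OUTPUT INSERTED.id INTO #NewIds
VALUES ('blah', '1231231234');

SELECT id FROM #NewIds;  -- the identity value generated by the insert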
You can use Log Rescue. It is quite similar to Log Explorer, but it is free.
It can show the history of each row in any table, with the user, action and time logged.
And you can undo to any version of a row without putting the database into full recovery mode.

Configuring SQL Server 2005 with both server replication and client replication

I need to set up this scenario:
A SQL Server 2005 database will create a transactional replication subscription from another database to populate a set of lookup tables. These lookup tables will then be published as a merge replication publication to the client's SQL Server Mobile.
I remember seeing a similar scenario defined in the SQL Server Books Online somewhere, but I was unable to find the link anymore. I hope someone can help me find it, or otherwise point me to any other similar links.
Okay, I managed to get the answers I needed at the MSDN SQL Server Replication forum.
The article I was looking for is called: Republishing Data.
Apparently, it is located within "Advanced Replication Features and Internals" of the "Configuring and Maintaining Replications" section. It's a little non-obvious, so I spent most of my time looking in the "Replicating Data Between a Server and Clients" section, instead of there. Good to know, as there seems to be a number of other special scenarios worth looking at in there.
I do not see the point of using transactional replication to populate lookup tables. Such tables are not meant to be updated from the client side, so why do you want to combine transactional and merge replication when the data is not modified by the subscribers?
Maybe the original scenario is not clear, so let me clarify.
The database where the original lookup tables are located is either remote with a bad network connection, or operating under heavy load. This approach was suggested so that the lookup tables are replicated to another database, from which all merge replication with the clients will be performed.
Of course, it may not be the most appropriate approach to our problem, so if anyone has a better idea, we'd like to hear it.
Still, the main reason for this question is to find an article I previously found (but stupidly did not bookmark) which described our scenario quite well. Any possible leads to this article (title, author, similar topics, etc.) are definitely appreciated.