SQL Server 2005: Replication, varbinary

Scenario
In our replication scheme we replicate a number of tables, including a photos table that contains binary image data. All other tables replicate as expected, but the photos table does not. I suspect this is because of the larger amount of data in the photos table or perhaps because the image data is a varbinary field. However, using smaller varbinary fields did not help.
Config Info
Here is some config information:
Each image could be anywhere from 65-120 KB
A revision and an approved copy are stored along with thumbnails, so a single row may approach ~800 KB
I once had trouble with the "max text repl size" configuration option, but I have set that to the max value using sp_configure and reconfigure with override (a sketch of the call appears after this list)
Photos are filtered based on a “published” field, but so are other working tables
The databases are using the same local db server (in the development environment) and are configured for transactional replication
The replicated database uses a “push” subscription
Also, I noticed that sometimes regenerating the snapshot and reinitializing the subscription caused the images to replicate. Taking this into consideration, I configured the snapshot agent to regenerate the snapshot every minute or so for debugging purposes (obviously this is overkill for a production environment). However, this did not help things.
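For reference, the sp_configure change mentioned in the list above amounts to something like the following (2147483647 is the 2 GB maximum that SQL Server 2005 accepts for this option):
-- Raise the limit on replicated text/image/varbinary data to its maximum
EXEC sp_configure 'max text repl size', 2147483647;
RECONFIGURE WITH OVERRIDE;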
The Question
What is causing the photos table not to replicate while all the others replicate without a problem? Is there a way around this? If not, how would I go about debugging further?
Notes
I have used SQL Server Profiler to look for errors as well as the Replication Monitor. No errors exist. The operation just fails silently as far as I can tell.
I am using SQL Server 2005 with Service Pack 3 on Windows Server 2003 Service Pack 2.
[update]
I have found out the hard way that Philippe Grondier is absolutely right in his answer below. Images, videos and other binary files should not be stored in the database. IIS handles these files much more efficiently than I can.

I do not have a straight answer to your problem, as our standard policy has always been 'never store (picture) files in (database) fields'. Our solution, which applies not only to pictures but to any kind of file or document, is now standard:
We have a "document" table in our database, where document/file names and relative folders are stored (in order to get unique document/file names, we generate them from the primary key/uniqueIdentifier value of the 'Document' table).
This 'document' table is replicated among our different subscribers, like all the other tables.
We have a "document" folder and
subfolders, available on each of our
database servers.
Document folders are then replicated independently from the database, with some file-and-folder replication software (Allway Sync is an option).
The main publisher's folders are fully accessible through FTP, so a user trying to read a document that is (still) unavailable on his local server will be prompted to download it from the main server through FTP client software (such as CoreFTP and its command-line options).
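As a rough illustration of that first point (the table and column names here are hypothetical, not taken from the original answer), the pattern could look like this:
-- Minimal sketch of a "document" table whose file names are derived from the key
CREATE TABLE Document (
    DocumentId     UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID() PRIMARY KEY,
    RelativeFolder NVARCHAR(260) NOT NULL,  -- folder under the shared "document" root
    FileName AS (CAST(DocumentId AS NVARCHAR(36)) + N'.dat')  -- unique name per row
);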

With an images table like that, have you considered moving that article to a one-way (or two-way, if you like) merge publication? That may alleviate some of your issues.
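If you do try that route, the publication setup might look roughly like this (the database and publication names are assumptions, and the snapshot/agent configuration is omitted; this is a sketch, not a full script):
-- Enable the database for merge publication, then publish the Photos table
-- filtered on the "published" column mentioned in the question.
EXEC sp_replicationdboption @dbname = N'SourceDb', @optname = N'merge publish', @value = N'true';
EXEC sp_addmergepublication @publication = N'PhotosMergePub', @description = N'Merge publication for photo data';
EXEC sp_addmergearticle @publication = N'PhotosMergePub', @article = N'Photos',
    @source_owner = N'dbo', @source_object = N'Photos',
    @subset_filterclause = N'[published] = 1';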

FILESTREAM/FILETABLE Clarifications for Implementation

Recently our team was looking at FILESTREAM to expand the capabilities of our proprietary application. The main purpose of this app is managing the various PDFs, images, and documents for all of the parts we manufacture. Our ASP application uses a few third-party tools to allow viewing of these files. We currently have 980GB of data on the file server. We have around 200GB of binary data in SQL Server that we would like to extract, since it is not performing well, hence FILESTREAM seems to be a good compromise for the two major data storage/access issues.
A few things are not exactly clear to us:
Can FILESTREAM store its data on a drive that is not locally attached? We already have a file server with a RAID 10 array (1.5TB drives). This server stores all of the documents right now; would we have to move these drives to the SQL Server for FILESTREAM? That would be a tough bullet to bite, since the server is also doubling as the application server (two VMs on one physical server).
FILETABLE stores the common metadata about the files, but where is the full-text part of it stored to allow searching of files like doc/docx? Is this separate? Are you able to freely add criteria to search by? If so, any links to clarify would be appreciated.
Can FILETABLE be referenced in another table with a foreign key?
Thank you in advance
EDIT: For those having these questions, this web video covered everything and more in terms of explaining FILESTREAM from 2008 to 2012 and the caveats to consider (I would seriously rep him if I could): http://channel9.msdn.com/Events/TechDays/Techdays-2012-the-Netherlands/2270
In conclusion, we will not be using FILESTREAM, as it would be far too large an undertaking to justify the investment.
EDIT 2:
Update to #1 - After carefully assessing FileTable in addition to FILESTREAM, we got a winning combination. We did have to move the files over to the new server (which wasn't too painful since they were on the same VM). It honestly took more time to write an extraction tool to dump the binary data within SQL to the file system.
Update to #2 - This was separate, but again Bob had an excellent webinar explaining this: http://channel9.msdn.com/Events/TechEd/Europe/2012/DBI411
Update to #3 - Using TFT inheritance we recycled the Docs table we had (minus the huge binary blobs), which required very few changes in our legacy apps. This was a huge win for the developer team.
The location that the files are stored in for FileTables has to be local, or at least must appear to SQL Server as being local, so a clever SAN driver might trick it. Since FileTables are built on the FILESTREAM infrastructure, I imagine the limitations are the same.
Searching FileTables is done via the CONTAINSTABLE function, which is documented on MSDN; the search criteria use the same syntax as full-text searching, AFAIK.
For all intents and purposes a FileTable is a typical table, so it can be joined, searched, or whatever. The only thing is that you have to use some SQL Server functions to turn the FILESTREAM GUIDs into something more useful, like a file path.
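For example, a rough sketch of both points (dbo.Documents is a hypothetical FileTable name, the search term is made up, and a full-text index on the file_stream column keyed on the unique stream_id index is assumed to exist):
-- Find documents containing a word and resolve each hit back to a UNC path
SELECT d.name,
       FileTableRootPath() + d.file_stream.GetFileNamespacePath() AS full_unc_path,
       ft.RANK
FROM dbo.Documents AS d
JOIN CONTAINSTABLE(dbo.Documents, file_stream, N'invoice') AS ft
    ON d.stream_id = ft.[KEY];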

How to Sql Backup or Mirror database?

We are not hosting our databases. Right now, one person is manually creating a .bak file from the production server. The .bak is then copied to each developer's PC. Is there a better approach that would make this process easier? I am working on a build project for our team right now, and I am thinking about adding the .bak file to SVN so each person has the correct local version. I had tried to generate a SQL script, but it has no data, just the schema.
Developers can't share a single dev database?
Adding the .bak file to SVN sounds bad. That's going to keep every version of it forever - you'd be better off (in most cases) leaving it on a network share visible by all developers and letting them copy it down.
You might want to use SSIS packages to let developers make ad hoc copies of production.
You might also be interested in the Data Publishing Wizard, an open source project that lets you script databases with their data. But I'd lean towards SSIS if developers need their own copy of the database.
If the production server has online connectivity to your site you can try the method called "log shipping".
This entails creating a baseline copy of your production database, then taking chunks of the transaction log written on the production server and applying the (actions contained in) the log chunks to your copy. This ensures that after a certain delay your backup database will be in the same state as the production database.
Detailed information can be found here: http://msdn.microsoft.com/en-us/library/ms187103.aspx
As you mentioned SQL 2008 among the tags: as far as I remember, SQL 2008 has some kind of built-in automation to set this up.
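In its manual form, the cycle described above boils down to repeating something like this (database names and the share path are placeholders):
-- On the production server: back up a chunk of the transaction log
BACKUP LOG [ProdDb] TO DISK = N'\\backupshare\logship\ProdDb_0001.trn';
-- On the standby server: apply it to a copy restored earlier WITH NORECOVERY
RESTORE LOG [ProdDbCopy] FROM DISK = N'\\backupshare\logship\ProdDb_0001.trn' WITH NORECOVERY;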
You can create a scheduled backup and restore.
You don't have to back up to the developer's PC, because SQL Server has its own backup folder you can use.
Also, you can have a restore script generated for each PC from one location, if the developers want to keep the database on their local systems.
RESTORE DATABASE [xxxdb] FROM
DISK = N'\\xxxx\xxx\xxx\xxxx.bak'
WITH FILE = 1, NOUNLOAD, REPLACE, STATS = 10
GO
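For completeness, the matching backup on the server side might look something like this (the path and database name are placeholders, mirroring the restore script above):
BACKUP DATABASE [xxxdb]
TO DISK = N'\\xxxx\xxx\xxx\xxxx.bak'
WITH INIT, STATS = 10
GO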
Check out SQL Source Control from RedGate; it can be used to keep schema and data in sync with a source control repository (the docs say it supports SVN). It supports the database on a centrally deployed server, or on many developer machines as well.
Scripting out the data probably won't be a fun time for everyone depending on how much data there is, but you can also select which tables you're going to do (like lookups) and populate any larger business entity tables using SSIS (or data generator for testing).

Easiest way to copy a MySQL database?

Does anyone know of an easy way to copy a database from one computer to a file, and then import it on another computer?
Here are a few options:
mysqldump
The easiest, guaranteed-to-work way to do it is to use mysqldump. See the manual pages for the utility here:
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html
Basically, it dumps the SQL scripts required to rebuild the contents of the database, including creation of tables, triggers, and other objects and insertion of the data (it's all configurable, so if you already have the schema set up somewhere else, you can just dump the data, for example).
Copying individual MyISAM table files
If you have a large amount of data and you are using the MyISAM storage engine for the tables that you want to copy, you can just shut down mysqld and copy the .frm, .myd, and .myi files from one database folder to another (even on another system). This will not work for InnoDB tables, and may or may not work for other storage engines (with which I am less familiar).
mysqlhotcopy
If you need to dump the contents of a database while the database server is running, you can use mysqlhotcopy (note that this only works for MyISAM and Archive tables):
http://dev.mysql.com/doc/refman/5.0/en/mysqlhotcopy.html
Copying the entire data folder
If you are copying the entire database installation, so, all of the databases and the contents of every database, you can just shut down mysqld, zip up your entire MySQL data directory, and copy it to the new server's data directory.
This is the only way (that I know of) to copy InnoDB files from one instance to another. This will work fine if you're moving between servers running the same OS family and the same version of MySQL; it may work for moving between operating systems and/or versions of MySQL; off the top of my head, I don't know.
You may very well use SQLyog, a product of Webyog. It uses similar techniques to those mentioned above but gives you a good GUI, so you know what you are doing. You can get the Community edition or a trial version from their site:
http://www.webyog.com/en/downloads.php#sqlyog
It has an option for creating backups to a file and restoring the file onto a new server. There is an even better option for exporting a database from one server directly to another.
Cheers,
RDJ

How do you upload SQL Server databases to shared hosting environments?

We have a common problem of moving our development SQL 2005 database onto shared web servers at website hosting companies.
Ideally we would like a system that transfers the database structure and data as an exact replica.
This would be commonly achieved by restoring a backup. But because they are shared SQL servers, we cannot restore backups – we are not given access to the actual machine.
We could generate a script to create the database structure, but then we could not do a data transfer through the menu item Tasks/Import Data, because we might violate foreign key constraints as tables are imported in an order that conflicts with the database schema. Also, indexes might not be replicated if they are set to auto-generate.
Thus we are left with a messy operation:
Create a script in SQL 2005 that generates the database in SQL 2000 format.
Run the script to create a SQL 2000 database in SQL 2000.
Create a script in SQL 2000 that generates the database structure WITHOUT indexes and foreign keys.
Run this script on the production server. You now have a database structure to upload data to.
Use SQL 2005 to transfer the data to the production server with Tasks/Import data.
Use SQL 2000 to generate a script that creates the database with indexes and keys.
Copy the commands that generate the indexes and foreign keys only. These are located after the table creation commands. Note: In SQL 2005, the indexes and foreign keys are generated as one and cannot be easily separated.
Run this script on the production database.
Voila! The database is uploaded with all data and keys/constraints in place. What a messy and error prone system.
Is there something better?
Scott Gu has written a few posts on this topic:
SQL Server Database Publishing Toolkit for Web Hosting
Generation scripts are fine for creating the database objects, but not for transporting database information. For example, client-specific databases where the developer is required to pre-populate some data.
One of the issues I've run into with this is the new MAX types in SQL Server 2005+ (nvarchar(max), varchar(max), etc.). Of course, this is worse when you are actually using SQL Server Express, which doesn't allow for exporting other than by creating your own scripts to create the data.
I would recommend switching to a hosting company that gives you the ability to FTP backup files and does NOT require you to use your own scripts. That's the whole point of SQL Server, right? To provide more tools that are friendlier to use. If the hosting company takes that away, you may as well move to MySQL for its ease in dumping information.
WebHost4Life is a life saver in this category. They offer FTP to the database server so you can upload your backup file, or your MDF and LDF files for attachment! I was so upset when I saw GoDaddy had a similar restriction to the one you mentioned. Their tool didn't tell me it was a bad import, and I couldn't figure out why my site was coming back with 500 errors.
One other note: I'm not sure which is considered more secure. I enabled external connections in GoDaddy and connected with Management Studio, and I was able to see every database on that server! I couldn't access them, but I now have that info. A double whammy is that GoDaddy requires the user name for the DB to be the same as the DB name! Now all you need to do is spam passwords against those hundreds of DBs!
Webhost4life, on the other hand, has only your specific database shown in Management Studio. And they let you pick your own DB name and user name, independent of each other. They only append the same unique id on the end of the user & db names in order to keep them from conflicting with others.
You should not rely on restoring backups for copying / transferring databases. You need to use scripts - trust me you will get better at it.
I have used the RedGate Compare tools with shared hosting and it works well.
Database-generation scripts are messy, but they also have several advantages that ... well, make the pain more tolerable.
First, if you treat the DB scripts as real programming tasks in and of themselves, you can encapsulate the messiness. If you generate a script once (using a database tool), you can split the table-structure aspects from the constraint aspects (keys, indices, etc.). Similarly, you can export the data once, but split it into "system" data that's not frequently changed but is necessary for correct operation (stuff like tax or shipping rates, etc.), 'test' data that's easily identifiable, and 'operational' data that needs to be moved from DB version Old to DB version New (last week's Orders).
The first 3 minutes after you've accomplished that, things are wonderful: you can regenerate a new database with or without test data in a few minutes. Unfortunately, after 3 minutes, the databases are out of synch, at least in terms of data, if not quite as frequently in terms of structure.
I personally like to have each table's structure as a separate SQL file (and its constraints as a separate file in a separate directory, its test data in one file, its system data in another, etc.). On the one hand, this means that several different files have to be touched when making a change, but on the other hand, it makes it much easier to see the granularity of what's been changed: it's all right there in the version control logs. (I could probably be convinced that many-files is a mistaken strategy...)
All of this is predicated on the assumption that you have some facility for actually running a complex script involving many files and are not just constrained to some Web-based control panel, which may be what you're describing when you say "we are not given access to the actual machine." I feel that you can't do custom software development and not have some kind of shell access on the server; the hosting business is competitive enough that you can certainly find a script-friendly host easily enough.
Check whether the web hosting company provides myLittleBackup.
This is definitely the easiest solution to "install" a db from the development server onto the shared SQL server.
Answer for SQL Server 2008 users.
I had the same exact issue as the OP, but I was using SQL Server 2008 and my shared hosting company is GoDaddy. Here's the solution to copy the DB plus the data to the GoDaddy database...
In Visual Studio 2010, go to Server Explorer (in VS Express, I think it's called database explorer). Right click on database and select Publish to Provider ... this opens the Database Publishing Wizard ... go thru the wizard and it'll create a xxx.sql file on your local computer ...
Open SQL Server Management Studio and connect to the GoDaddy database (you should have already created this via the GoDaddy control panel within their website) ...
Open windows explorer and find the xxx.sql file and double click it. The script should open up in SSMS. Execute the script "within the proper database" ... voila, done.

What is the easiest way to copy a database from one Informix IDS 11 Server to another

The source database is quite large. The target database doesn't grow automatically. They are on different machines.
I'm coming from an MS SQL Server / MySQL background, and IDS 11 seems overly complex (I am sure, with good reason).
One way to move data from one server to another is to back up the database using the dbexport command.
Then, after copying the backup files to the destination server, run the dbimport command.
To create a new database you need to create the DBSpace for the new database using the onmonitor tool, at this point you could use the existing files from the other server.
You will then need to create the database on the destination server using the dbaccess tool. The dbaccess tool has a database option that allows you to create a database. When creating the database you specify what DBSpace to use.
The source database may be made up of many chunks which you will also need to copy and attach to the new database.
The easiest way is dbexport/dbimport, as others have mentioned.
The fastest way is using onpload, the High Performance Loader. If you have lots of data, but not a ridiculous number of tables, this is definitely worth pursuing. There are some bits and pieces on the IIUG site that may be of assistance in scripting the HPL to generate all the config you'll need.
You have a few choices.
dbexport/dbimport
onunload/onload
HPL (high performance loader) options.
I have personally used onunload/onload and dbexport/dbimport. I have not used HPL. I'm using IDS 10.
onunload/onload IBM docs
Back up the raw database to disk or tape in page size chunks
faster (especially if you go to disk)
Issues if the database servers are on different operating systems or hardware, or if they just have different page sizes.
dbexport/dbimport IBM docs
backs up the database to delimited ASCII files
writes an ASCII schema of the database, including all users, tables, views, indexes, etc. (everything about the structure of the database), into one huge plain-text file
separate plain text files for each table of the database as well
not so fast
issues on dbimport on any table that has bad data, any view with incorrect syntax, etc. (This can be a good thing, an opportunity to identify and clean)
DO NOT LEAVE THIS TAPE ON THE FRONT SEAT OF YOUR CAR WHEN YOU RUN INTO THE STORE FOR AN ICE CREAM (or you'll be on the news). Also read ... Not a very secure way to be moving data around. :)
Limitation: Requires exclusive access to the source database.
Here is a good place to start in the docs --> Migration of Data Between Database Servers
Have you used the export tool? There used to be a way: if you first put the DBs into quiescent mode, you could actually copy the DBSpaces across (the dbspaces tool, I think... it's been a few years now).
Because with Informix you used to be able to specify the DBSpace(s) to use for the table (maybe even in ALTER TABLE?).
Check the dbaccess tool - there is an export command.
Put the DBs into quiescent mode or shut them down, copy the dbspaces, and then attach the table, telling it to point to the new dbspaces file. (The dbspaces tool could be worthwhile looking at... I have manuals around here; they are for 9.2, but it shouldn't have changed too much.)
If both machines use the same version of IDS, then another option would be to use ontape to take a backup on one machine and restore it on the other. You can use the STDIO option and then just stream the backup onto the other machine, where the restore can read straight from STDIO.
From the "Data Replication for High Availability and Distribution" redbook:
ontape -s -L 0 -F | rsh secondary_server "ontape -p"
You could also create a passwordless SSH connection between the hosts and transfer the backup in a more secure way.