I am working on replicating my application's environments and one of the components is a Virtuoso RDF data store. What I need to do is copy the entire database from one host to another.
I have found these instructions but they assume admin rights on the source to produce a dump. I only have admin rights on the target host.
Is there a way to easily copy an entire source database to a target without doing multiple SPARQL reads, or at least with simple SPARQL that doesn't require knowing the data structure, given that I am not an admin on the source and can't produce a dump there?
If you want to take a Virtuoso Database Document from one system to another, you can as long as you have a Virtuoso Database Server binary installed at the destination.
Your destination database (the copy) will include all data stored across Virtuoso SQL Relational, RDF Property Graph, and WebDAV storage realms.
Ideally, you should put the source database into a stable state before copying the virtuoso.db and virtuoso.ini files, either via a checkpoint or a server shutdown (which includes a checkpoint).
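For illustration, a minimal sketch of that sequence from the command line, assuming a default isql listener on port 1111, the dba account, and example paths and service names that you would adjust to your installation:

# force a consistent on-disk state via a manual checkpoint
isql 1111 dba mypassword EXEC="checkpoint;"
# or simply stop the server, which also performs a checkpoint (service name varies by installation)
systemctl stop virtuoso
# copy the database document and its configuration to the target host
scp /var/lib/virtuoso/db/virtuoso.db /var/lib/virtuoso/db/virtuoso.ini target-host:/var/lib/virtuoso/db/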
Related
I have started experimenting with FileTable in SQL Server. I got it working once, and was able to add and delete files via the share name, and delete files by deleting records in the proper FileTable.
However, I had names that I didn't like (test, table1, table2, etc.), and I had it on the (SQL Server default) system disk instead of the data disk, so I deleted everything and started over, intending to set everything up properly, since (I thought) I now understood enough about it to use it. But my new setup doesn't work, and I can't figure out why.
Specifically, I am unable to access the share created to hold the FileStream, neither locally when logged into the server, nor remotely from other machines. It shows in the list of shared folders, but attempting to open it gives me only an error message that I do not have permission to open it.
Both machines run under the supervision of a domain controller, and my account has local admin privileges in both. There is one instance on the server, and three databases in the instance - production, development and attachments. I regularly turn the upgraded development version into the production version, so I created a separate attachments database, which is where I put the FileTable tables - accessible from both production and development, but not duplicated, since the number of attached files and their contents are both large.
I have been reading all sorts of things on the net, and permissions are regularly mentioned, but I seem to have permissions. I can query the FileTable in that third database (with no results, because I have been unable to open the share to put files in it, but also no error, so I do have at least read access). And it is not the individual FileTable names inside the share that are inaccessible, but the entire share itself.
Can anyone give me an idea of what I may be overlooking? I had it working once, so it's not some fundamental problem with the configuration of the server.
Trying to move a database from an Azure GOV tenant to a standard Azure tenant. From what I can tell, the export is failing due to special characters in the stored procedures (slashes, dollar signs, etc.). These are properly escaped and work as standalone T-SQL scripts. I can drop all of the SPs, move the database, and then restore them, but there has to be a better way.
Has anyone else had an issue with special characters in the body of stored procedures? I am open to other ways of trying to move between tenants, but have come up empty-handed.
Adding proper parameter handling to your code would be a more robust way to handle this.
Use cmd.Parameters.AddWithValue(String parameterName, Object value); see the SqlParameterCollection.Add method:
https://learn.microsoft.com/en-us/dotnet/api/system.data.sqlclient.sqlparametercollection.add?redirectedfrom=MSDN&view=dotnet-plat-ext-5.0#System_Data_SqlClient_SqlParameterCollection_Add_System_String_System_Object_
It is much simpler to handle special characters if you run the SQL by creating a SqlCommand object and passing the values as parameters.
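For example, a minimal C# sketch (connection string, table, and column names are placeholders, not taken from your database):

using System.Data.SqlClient;

class Demo
{
    static void Main()
    {
        // Placeholder connection string, table, and column names.
        using (var conn = new SqlConnection("Server=.;Database=MyDb;Integrated Security=true"))
        using (var cmd = new SqlCommand("INSERT INTO dbo.Notes (Body) VALUES (@body)", conn))
        {
            // The value may contain slashes, dollar signs, quotes, etc. -- no manual escaping needed.
            cmd.Parameters.AddWithValue("@body", "text with $pecial / characters");
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}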
Alternative
To migrate resources, you can use the Azure portal, Azure PowerShell, the Azure CLI, or the REST API. Move resources to a new resource group or subscription | https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/move-resource-group-and-subscription
During the move, both the source and target resource groups are locked. Write and delete operations on the resource groups are blocked until the move is finished: you can't add, update, or delete resources in a locked resource group. This does not mean the resources themselves are frozen. If you move an Azure SQL logical server and its databases to a different resource group or subscription, applications that use the databases experience no downtime; they can still read from and write to them. The lock can last up to four hours, but most moves complete in considerably less time.
You must ensure the following before moving resources across subscriptions:
Both the source and destination subscriptions must be located in the same directory (Azure Active Directory tenant).
A single user account must be able to create and delete resources in both subscriptions.
You must migrate all SQL databases on that server at the same time.
If the SQL server and the destination subscription are in separate directories, you can move the SQL server to a temporary, trial subscription, move that subscription to the target directory (from the portal while signed in to the old directory), and then complete the move from the portal while signed in to the target directory.
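For illustration, a cross-subscription move with the Azure CLI looks roughly like this (resource group, server name, and subscription ID are placeholders; the databases move together with the logical server):

# all names and IDs below are placeholders
az resource move \
  --destination-group TargetResourceGroup \
  --destination-subscription-id 00000000-0000-0000-0000-000000000000 \
  --ids $(az sql server show -g SourceResourceGroup -n my-sql-server --query id -o tsv)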
We are not hosting our databases. Right now, one person manually creates a .bak file from the production server; the .bak is then copied to each developer's PC. Is there a better approach that would make this process easier? I am working on a build project for our team right now, and I am thinking about adding the .bak file to SVN so each person has the correct local version. I also tried generating a SQL script, but it has no data, just the schema.
Developers can't share a single dev database?
Adding the .bak file to SVN sounds bad. That's going to keep every version of it forever - you'd be better off (in most cases) leaving it on a network share visible by all developers and letting them copy it down.
You might want to use SSIS packages to let developers make ad hoc copies of production.
You might also be interested in the Data Publishing Wizard, an open source project that lets you script databases with their data. But I'd lean towards SSIS if developers need their own copy of the database.
If the production server has online connectivity to your site, you can try the method called "log shipping".
This entails creating a baseline copy of your production database, then taking chunks of the transaction log written on the production server and applying the actions contained in those log chunks to your copy. This ensures that, after a certain delay, your backup database will be in the same state as the production database.
Detailed information can be found here: http://msdn.microsoft.com/en-us/library/ms187103.aspx
As you mentioned SQL 2008 among the tags: as far as I remember, SQL Server 2008 has some built-in automation to set this up.
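To make the mechanics concrete, manual log shipping boils down to something like the T-SQL below (database and path names are made up; in practice you would use the SSMS log shipping wizard or the sp_add_log_shipping_* procedures rather than hand-rolling it):

-- One-time baseline on the copy: restore a full backup but leave it ready for more logs.
RESTORE DATABASE ProductionDb FROM DISK = N'\\share\logship\ProductionDb_full.bak' WITH NORECOVERY;
-- Repeatedly, on production: back up a chunk of the transaction log...
BACKUP LOG ProductionDb TO DISK = N'\\share\logship\ProductionDb_001.trn';
-- ...and on the copy: apply the chunk, staying ready for the next one.
RESTORE LOG ProductionDb FROM DISK = N'\\share\logship\ProductionDb_001.trn' WITH NORECOVERY;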
You can create a scheduled backup and restore.
You don't have to back up to a developer PC, because SQL Server has its own backup folder you can use.
You can also have a restore script generated for each PC from one location, if a developer wants to keep the database on their local system.
RESTORE DATABASE [xxxdb] FROM
DISK = N'\\xxxx\xxx\xxx\xxxx.bak'
WITH FILE = 1, NOUNLOAD, REPLACE, STATS = 10
GO
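On the server side, the scheduled job that produces the .bak this script consumes would be something like the following (same placeholder path; WITH INIT overwrites the previous backup in that file):

BACKUP DATABASE [xxxdb]
TO DISK = N'\\xxxx\xxx\xxx\xxxx.bak'
WITH INIT, STATS = 10
GO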
Check out SQL Source Control from RedGate; it can be used to keep schema and data in sync with a source control repository (the docs say it supports SVN). It supports the database on a centrally deployed server, or on many developer machines as well.
Scripting out the data probably won't be a fun time for everyone depending on how much data there is, but you can also select which tables you're going to do (like lookups) and populate any larger business entity tables using SSIS (or data generator for testing).
Does anyone know of an easy way to copy a database from one computer to a file, and then import it on another computer?
Here are a few options:
mysqldump
The easiest, guaranteed-to-work way to do it is to use mysqldump. See the manual pages for the utility here:
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html
Basically, it dumps the SQL scripts required to rebuild the contents of the database, including creation of tables, triggers, and other objects and insertion of the data (it's all configurable, so if you already have the schema set up somewhere else, you can just dump the data, for example).
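A typical round trip looks like this (user, database, and file names are just examples):

# on the source machine: dump schema and data to a file
mysqldump -u someuser -p somedatabase > somedatabase.sql
# copy somedatabase.sql to the target machine, create an empty database there
# (CREATE DATABASE somedatabase;), then load the dump into it:
mysql -u someuser -p somedatabase < somedatabase.sql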
Copying individual MyISAM table files
If you have a large amount of data and you are using the MyISAM storage engine for the tables that you want to copy, you can just shut down mysqld and copy the .frm, .myd, and .myi files from one database folder to another (even on another system). This will not work for InnoDB tables, and may or may not work for other storage engines (with which I am less familiar).
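As a rough sketch (default Linux paths and init scripts assumed; your data directory is whatever datadir points to in my.cnf):

/etc/init.d/mysql stop
# copy the three files that make up a MyISAM table into the other database's folder
cp /var/lib/mysql/sourcedb/mytable.frm \
   /var/lib/mysql/sourcedb/mytable.MYD \
   /var/lib/mysql/sourcedb/mytable.MYI \
   /var/lib/mysql/targetdb/
/etc/init.d/mysql start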
mysqlhotcopy
If you need to dump the contents of a database while the database server is running, you can use mysqlhotcopy (note that this only works for MyISAM and Archive tables):
http://dev.mysql.com/doc/refman/5.0/en/mysqlhotcopy.html
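Usage is roughly as follows (connection options and paths are examples):

# copies somedatabase's table files to the target directory while mysqld keeps running
mysqlhotcopy -u someuser -p somepassword somedatabase /path/to/backup/dir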
Copying the entire data folder
If you are copying the entire database installation (that is, all of the databases and the contents of every database), you can just shut down mysqld, zip up your entire MySQL data directory, and copy it to the new server's data directory.
This is the only way (that I know of) to copy InnoDB files from one instance to another. This will work fine if you're moving between servers running the same OS family and the same version of MySQL; it may work for moving between operating systems and/or versions of MySQL; off the top of my head, I don't know.
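A rough outline of that procedure (default Linux paths and init scripts assumed; adjust datadir and the service commands to your systems):

# on the old server
/etc/init.d/mysql stop
tar czf mysql-data.tar.gz -C /var/lib/mysql .
# copy mysql-data.tar.gz to the new server, then on the new server:
/etc/init.d/mysql stop
tar xzf mysql-data.tar.gz -C /var/lib/mysql
chown -R mysql:mysql /var/lib/mysql
/etc/init.d/mysql start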
You may also want to use SQLyog, a product of Webyog. It uses techniques similar to those mentioned above, but gives you a GUI that makes it clear what you are doing. You can get the community edition or a trial version from their site:
http://www.webyog.com/en/downloads.php#sqlyog
It has options for creating a backup to a file and restoring the file on a new server. There is an even better option for exporting a database directly from one server to another.
Scenario
In our replication scheme we replicate a number of tables, including a photos table that contains binary image data. All other tables replicate as expected, but the photos table does not. I suspect this is because of the larger amount of data in the photos table or perhaps because the image data is a varbinary field. However, using smaller varbinary fields did not help.
Config Info
Here is some config information:
Each image could be anywhere from 65-120 Kb
A revision and approved copy is stored along with thumbnails, so a single row may approach ~800Kb
I once had trouble with the "max text repl size" configuration field, but I have set that to the max value using sp_configure and RECONFIGURE WITH OVERRIDE (sketched just after this list)
Photos are filtered based on a “published” field, but so are other working tables
The databases are using the same local db server (in the development environment) and are configured for transactional replication
The replicated database uses a “push” subscription
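For reference, the "max text repl size" change mentioned above amounts to something like this (2147483647 is the maximum on SQL Server 2005; later versions also accept -1 for unlimited):

EXEC sp_configure 'max text repl size', 2147483647;
RECONFIGURE WITH OVERRIDE;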
Also, I noticed that sometimes regenerating the snapshot and reinitializing the subscription caused the images to replicate. Taking this into consideration, I configured the snapshot agent to regenerate the snapshot every minute or so for debugging purposes (obviously this is overkill for a production environment). However, this did not help things.
The Question
What is causing the photos table not to replicate while all the other tables replicate without a problem? Is there a way around this? If not, how would I go about debugging further?
Notes
I have used SQL Server Profiler to look for errors as well as the Replication Monitor. No errors exist. The operation just fails silently as far as I can tell.
I am using SQL Server 2005 with Service Pack 3 on Windows Server 2003 Service Pack 2.
[update]
I have found out the hard way that Philippe Grondier is absolutely right in his answer below. Images, videos and other binary files should not be stored in the database. IIS handles these files much more efficiently than I can.
I do not have a straight answer to your problem, as our standard policy has always been 'never store (picture) files in (database) fields'. Our solution, that applies not only to pictures but to any kind of file, or document, is now standard:
We have a "document" table in our database, where document/file names and relative folders are stored (in order to get unique document/file names, we generate them from the primary key/uniqueIdentifier value of the 'Document' table); a rough sketch of such a table follows this list.
This 'document' table is replicated among our different subscribers, like all other tables.
We have a "document" folder and subfolders, available on each of our database servers.
Document folders are then replicated independently from the database, with file and folder replication software (Allway Sync is an option).
The main publisher's folders are fully accessible through FTP, so a user trying to read a document that is (still) unavailable on his local server will be offered the option to download it from the main server through FTP client software (such as CoreFTP and its command-line options).
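A rough sketch of what such a 'document' table can look like (names and types are illustrative only, not our actual schema):

-- Illustrative only: file metadata lives in the replicated database,
-- the file itself lives on disk under the replicated "document" folder.
CREATE TABLE dbo.Document (
    DocumentId   uniqueidentifier NOT NULL DEFAULT NEWID() PRIMARY KEY,
    OriginalName nvarchar(260)    NOT NULL,   -- name of the file as uploaded
    RelativePath nvarchar(400)    NOT NULL,   -- subfolder under the shared "document" root
    -- unique on-disk name derived from the primary key, so file names never collide
    StoredName   AS (CONVERT(nvarchar(36), DocumentId) + '.dat')
);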
With an images table like that, have you considered moving that article to a one-way (or two-way, if you like) merge publication? That may alleviate some of your issues.