I have a web application that provides some information to my customers. I have another (Windows) version that works exactly the same as the web application.
This is because the web connection may be lost for some hours, and during that time the user still needs to use the application.
I'm wondering how to sync these SQL Server databases.
Note that the web application is used from 3 different cities, and all of them have a Windows-based application too. What should I do?
Note: the Windows version is exactly the web application, installed on a local web server in each of the 3 cities; users access it via their LAN.
All data updates flowing between the web and Windows databases would originate from the Windows application. But the problem is that the Windows app must keep working when there is no internet connection.
So you will have to use a Windows service that calls a web service for local and remote database updates. The Windows service can wake up every x minutes and update the remote and local databases.
The web service will have two methods:
GetData(DateTime getRecordsFromThisDate) - the Windows service should call this at regular intervals and update the local database.
UploadData(dataRows/collection) - the Windows service should call this at regular intervals and update the remote database.
Each record in the database will have a timestamp. For the local update, get the largest local timestamp and send it as the parameter to GetData(); the web service will return the records created after that time.
For the upload, you will have to store the time of the last successful upload operation. Get the records (inserted and updated) after that time and send them to UploadData().
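To make the shape of that loop concrete, here is a minimal sketch of the Windows service's sync pass, assuming a generated proxy exposing the two methods above. The Record type, the timer interval and the table/column names (dbo.Orders, ModifiedAt) are illustrative only, not part of the answer.

```csharp
using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Timers;

public class Record { public int Id; public string Payload; public DateTime ModifiedAt; }

// Stands in for the WCF/ASMX proxy generated from the web service.
public interface ISyncService
{
    List<Record> GetData(DateTime getRecordsFromThisDate);
    void UploadData(List<Record> rows);
}

public class SyncWorker
{
    private readonly Timer _timer = new Timer(5 * 60 * 1000);   // wake up every 5 minutes
    private readonly ISyncService _service;
    private readonly string _localConn;
    private DateTime _lastSuccessfulUpload = DateTime.MinValue;  // persist this between runs in practice

    public SyncWorker(ISyncService service, string localConnectionString)
    {
        _service = service;
        _localConn = localConnectionString;
        _timer.Elapsed += (s, e) => { try { SyncOnce(); } catch { /* log; retry on next tick */ } };
    }

    public void Start() { _timer.Start(); }

    private void SyncOnce()
    {
        // 1. Pull: ask the web service for rows newer than the newest row we already have.
        foreach (var row in _service.GetData(GetNewestLocalTimestamp()))
            UpsertLocal(row);

        // 2. Push: send rows inserted/updated locally since the last successful upload.
        _service.UploadData(GetLocalChangesSince(_lastSuccessfulUpload));
        _lastSuccessfulUpload = DateTime.UtcNow;
    }

    private DateTime GetNewestLocalTimestamp()
    {
        using (var conn = new SqlConnection(_localConn))
        using (var cmd = new SqlCommand(
            "SELECT ISNULL(MAX(ModifiedAt), '19000101') FROM dbo.Orders", conn))
        {
            conn.Open();
            return (DateTime)cmd.ExecuteScalar();
        }
    }

    private void UpsertLocal(Record row) { /* INSERT new rows or UPDATE existing ones in dbo.Orders */ }
    private List<Record> GetLocalChangesSince(DateTime since) { /* SELECT ... WHERE ModifiedAt > @since */ return new List<Record>(); }
}
```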
Your options include using a database backup to synchronize (probably pretty slow and impractical). Beyond that, you would use an ETL approach; pick your favorite tool. You could use either SQL Server CDC or, what I would recommend, Change Tracking to identify your changes and load just those, then use MERGE to synchronize them. Granted, these solutions will require you to set up linked servers or use a third party to temporarily hold the DML changes.
http://technet.microsoft.com/en-us/library/bb510625.aspx
http://msdn.microsoft.com/en-us/library/cc305322.aspx
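For a sense of what the Change Tracking plus MERGE route looks like, here is a rough sketch that reads one table's changes from the source with CHANGETABLE and merges them into the target over plain ADO.NET (i.e. without a linked server). The connection strings, the dbo.Orders table and its columns are made up for illustration, and Change Tracking is assumed to be already enabled on the source database and table.

```csharp
using System.Data.SqlClient;

public static class ChangeTrackingSync
{
    public static long SyncOrders(string sourceConn, string targetConn, long lastSyncVersion)
    {
        using (var src = new SqlConnection(sourceConn))
        using (var dst = new SqlConnection(targetConn))
        {
            src.Open();
            dst.Open();

            // Remember the source's current version *before* reading, so the next run
            // also picks up anything committed while this run is in flight.
            long currentVersion;
            using (var cmd = new SqlCommand("SELECT CHANGE_TRACKING_CURRENT_VERSION()", src))
                currentVersion = (long)cmd.ExecuteScalar();

            const string changesSql = @"
                SELECT ct.Id, ct.SYS_CHANGE_OPERATION, o.CustomerId, o.Amount
                FROM CHANGETABLE(CHANGES dbo.Orders, @lastVersion) AS ct
                LEFT JOIN dbo.Orders AS o ON o.Id = ct.Id";

            const string mergeSql = @"
                MERGE dbo.Orders AS target
                USING (SELECT @Id AS Id, @CustomerId AS CustomerId, @Amount AS Amount) AS src
                    ON target.Id = src.Id
                WHEN MATCHED THEN UPDATE SET CustomerId = src.CustomerId, Amount = src.Amount
                WHEN NOT MATCHED THEN INSERT (Id, CustomerId, Amount) VALUES (src.Id, src.CustomerId, src.Amount);";

            using (var read = new SqlCommand(changesSql, src))
            {
                read.Parameters.AddWithValue("@lastVersion", lastSyncVersion);
                using (var reader = read.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        string op = reader.GetString(1);            // 'I', 'U' or 'D'
                        if (op == "D")
                        {
                            using (var del = new SqlCommand("DELETE FROM dbo.Orders WHERE Id = @Id", dst))
                            {
                                del.Parameters.AddWithValue("@Id", reader.GetInt32(0));
                                del.ExecuteNonQuery();
                            }
                            continue;
                        }
                        using (var merge = new SqlCommand(mergeSql, dst))
                        {
                            merge.Parameters.AddWithValue("@Id", reader.GetInt32(0));
                            merge.Parameters.AddWithValue("@CustomerId", reader.GetInt32(2));
                            merge.Parameters.AddWithValue("@Amount", reader.GetDecimal(3));
                            merge.ExecuteNonQuery();
                        }
                    }
                }
            }
            return currentVersion;   // persist and pass back in as lastSyncVersion next time
        }
    }
}
```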
I thought I would add one non-Microsoft solution: http://www.red-gate.com/products/sql-development/sql-data-compare/ - it's not free, but it does exactly what you need.
I have an ASP.NET web application that is connected to a database installed at several clients in production environments.
Some of those clients manage critical information (in other schemas, not accessible to the web app, like people's money), so getting access to execute scripts directly in the database to fix things in my web app, when it's needed, requires time and approval; sometimes it takes weeks.
As some of my clients have a volatile reality, my web app has to handle a lot of changes in short periods of time. That means script executions in the database to alter data or schema, and that means wasted time!
Long story short, my question is: is it good practice to implement a page, only for administrator users, that executes a raw query directly against the database?
Assume a scenario where the security issue is managed properly.
Something like SQL Pad, where you cannot see the entire database system, just the query and the result, since the target database is only one.
No. It's a terrible idea. The security issue is probably not manageable - a web page available on the public internet that grants schema modification rights to the logged-in user is a horrible security risk. Even if you can't get to another schema, you can easily bring the server to its knees by writing simple SQL that consumes all CPU, memory or disk space.
It's also terrible because you lose any track of what changes were installed in which environment.
If the IT department won't approve your scripts when run from management studio they certainly won't let you loose on your own via a web interface.
I've always solved this problem via automated deployment scripts - execute the schema changes etc. as a part of installing the new version of the web application. That way, you can do things like back up the database before running your changes, keep track of versioning and control access.
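As an illustration of the automated-deployment idea, here is a rough sketch of a runner that applies any not-yet-applied .sql scripts at install time and records them. The SchemaVersions table and scripts-folder convention are assumptions, not an existing tool; in practice something like DbUp or SSDT plays this role.

```csharp
using System.Data.SqlClient;
using System.IO;
using System.Linq;

public static class MigrationRunner
{
    public static void ApplyPendingScripts(string connectionString, string scriptsFolder)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();

            // Keep a record of every script ever applied, so reruns are no-ops.
            Execute(conn, @"IF OBJECT_ID('dbo.SchemaVersions') IS NULL
                            CREATE TABLE dbo.SchemaVersions
                              (ScriptName nvarchar(260) PRIMARY KEY, AppliedAt datetime2 NOT NULL)");

            foreach (var path in Directory.GetFiles(scriptsFolder, "*.sql").OrderBy(p => p))
            {
                string name = Path.GetFileName(path);
                using (var check = new SqlCommand(
                    "SELECT COUNT(*) FROM dbo.SchemaVersions WHERE ScriptName = @n", conn))
                {
                    check.Parameters.AddWithValue("@n", name);
                    if ((int)check.ExecuteScalar() > 0) continue;   // already applied
                }

                // Run the script and record it in one transaction.
                // Note: scripts containing GO batch separators would need to be split first.
                using (var tx = conn.BeginTransaction())
                {
                    Execute(conn, File.ReadAllText(path), tx);
                    using (var log = new SqlCommand(
                        "INSERT dbo.SchemaVersions (ScriptName, AppliedAt) VALUES (@n, SYSUTCDATETIME())", conn, tx))
                    {
                        log.Parameters.AddWithValue("@n", name);
                        log.ExecuteNonQuery();
                    }
                    tx.Commit();
                }
            }
        }
    }

    private static void Execute(SqlConnection conn, string sql, SqlTransaction tx = null)
    {
        using (var cmd = new SqlCommand(sql, conn, tx)) cmd.ExecuteNonQuery();
    }
}
```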
There are some features in our LOB application that allow users to define their own queries to retrieve data for reports and listings within the app. The problem we are encountering is that sometimes the queries they write are really heavy (and sometimes erroneous) and cause massive load on the server.
Removing these features is out of the question, but I want to know if there is a way to create some type of sandbox within SQL Server so that the queries they execute are only allotted a certain amount of resources, and therefore don't get the chance to cause any damage to anyone else using the system. Any ideas?
The Resource Governor has been mentioned in the comments above already. One other solution I can think of is using SQL Server Availability Groups.
The last place I worked had this kind of setup. There is a primary server which takes in all the transactions that write to the database, with a secondary in case the primary fails. Added to this, we also had read-only replicas in the availability group.
The main purpose of this is that in the event your main server goes down, you are automatically transferred to another replica. When you connect your application to the database, you connect it to the availability group rather than a specific server; then if a server goes down you are automatically moved to a secondary server instead. However, it can also be used to optimise application functionality that only needs read-only access, by taking load off the primary server.
For any functionality that we knew only needed read-only access, we could connect to the availability group and add ApplicationIntent=ReadOnly to the connection string, which means we use a read-only replica rather than the primary, leaving the primary for regular transactions. (IIRC, by default the primary will accept any read/write connection, so you have to configure the primary not to accept read-only-intent connections.)
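As a small illustration, a connection string of roughly this shape is what the read-only functionality would use; the listener and database names are made up, and read-only routing still has to be configured on the availability group for requests to land on a secondary.

```csharp
using System.Data.SqlClient;

public static class ReportingDb
{
    // Connect to the AG listener (name is made up), not a specific server,
    // and ask to be routed to a readable secondary.
    private const string ReadOnlyConnectionString =
        "Server=tcp:MyAgListener,1433;Database=SalesDb;Integrated Security=True;" +
        "ApplicationIntent=ReadOnly;MultiSubnetFailover=True";

    public static SqlConnection OpenReadOnly()
    {
        var conn = new SqlConnection(ReadOnlyConnectionString);
        conn.Open();
        return conn;   // use for reports/listings; keep writes on a normal connection
    }
}
```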
Anyway, the kicking off point for reading up about this is here: https://msdn.microsoft.com/en-us/library/ms190202.aspx
The latest Windows 10 1903 update has a built-in Sandbox feature, in which you can run SQL Server in its own sandbox. I don't think SQL Server itself has a built-in sandbox environment, as it would be practically impossible to manage within a normal Windows server that is not using a sandbox, if you know what I mean.
Right now, our application has only one Web Site instance, along with a SQL database, deployed in an Azure US datacenter. We are looking at deploying more Web Site instances in other datacenters such as APAC and Europe. There would still be a local SQL database for each of those Web Site instances. We would like end users to be able to fail over to another instance if their registered instance is not available; for example, if the US Web Site instance is down, we could fail users over to the Europe instance. For this, we would need to synchronize the local SQL databases across all data centers: US, Europe and APAC.
So we are looking for the best approach to implement database synchronization for Azure SQL Database. Here is what we have found so far:
Azure SQL Data Sync: it looks like the perfect choice since it is available right away in the Azure Management Portal and would be up and running with some simple configuration. However, there seem to be a couple of catches. The feature has been in preview for about 2 years now (see this link, with the following quote from a comment):
SQL Data Sync has been in preview for over 2 years and the last update was December 2012. Has this been abandoned? Is this a technology we should encourage our clients to use? There absolutely needs to be an ability to synchronize data between a local SQL DB and Azure but Microsoft seems to have dropped this and I'm leery of putting a client on this only to find that the plug has been pulled. You owe it to your users to give us some information
I also saw the post Azure data sync not syncing all databases on SO; it seems that this feature is a second-class citizen at Azure and MS doesn't really pay sufficient attention to it, so I am worried about how good it is.
Microsoft Sync Framework: this seems to be a more generic sync framework, more suited to client/server sync than to sync between server databases. Plus, it is not as simple as SQL Data Sync above, which is available just by configuration in Azure.
Any other suggestions on SQL database sync on Azure? It would be really appreciated if you could share your experience here.
Thanks very much in advance for your insight.
Update:
Azure Data Sync is built on the Microsoft Sync Framework: see this link, quote:
Microsoft SQL Data Sync is a cloud-based data synchronization service built on the Microsoft Sync Framework technologies.
Since no one is answering this question, I am going to do it myself. Based on the latest information, Azure Data Sync is buggy and cannot be used for production at this point. I guess that's the reason why it has never moved out of preview, even after around 2 years. There is no other good approach for handling Azure SQL Database sync at this point unless you want to build something yourself.
You can use Red Gate's SQL Data Compare to sync your Azure SQL DB with your local DB.
I'm developing a WinForms application on .NET 4.0 using C#, and the backend is SQL Server 2008.
The nature of the data for this app is that it must be displayed to the user in real time; whenever the data is changed or new data is added, the UI has to reflect that in real time.
I'm trying to find out the best way to get the data from SQL Server without constantly polling the server. I came up with a few options:
Create a background thread to update the data. (I don't like polling the server.)
Use the SqlDependency class to receive notifications from the server.
What do you recommend? Or, if you have a better method, it would be great if you could share it.
If you only have a few clients then SqlDependency might be an OK solution. However, here is the Microsoft-recommended approach for a full-blown client/server application.
http://msdn.microsoft.com/en-us/library/ms187528.aspx
This approach is good for many clients but less frequent changes.
The last time I had this type of requirement - more frequent changes with a bunch of clients (i.e. thousands) - we built a middleware service installed on the server, which in turn broadcast the running changes from the database via sockets.
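For the SqlDependency route mentioned above, here is a minimal sketch. It assumes Service Broker is enabled on the database and that the query follows the notification rules (two-part table name, explicit column list); the table and column names are made up.

```csharp
using System.Data;
using System.Data.SqlClient;

public class OrdersWatcher
{
    private readonly string _connString;

    public OrdersWatcher(string connString)
    {
        _connString = connString;
        SqlDependency.Start(_connString);   // once per app domain, e.g. at startup; call Stop() on shutdown
    }

    public DataTable LoadAndSubscribe()
    {
        using (var conn = new SqlConnection(_connString))
        using (var cmd = new SqlCommand("SELECT OrderId, Status, Total FROM dbo.Orders", conn))
        {
            var dependency = new SqlDependency(cmd);
            dependency.OnChange += OnDataChanged;    // fires once when the result set changes

            conn.Open();
            var table = new DataTable();
            table.Load(cmd.ExecuteReader());
            return table;                            // bind this to the grid on the UI thread
        }
    }

    private void OnDataChanged(object sender, SqlNotificationEventArgs e)
    {
        // The subscription is one-shot: re-query and re-subscribe, then marshal the
        // refreshed data back to the UI thread (e.g. via Control.BeginInvoke).
        LoadAndSubscribe();
    }
}
```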
I am working on a project which uses a relational database (SQL Server 2008). The local (on-premises) application both reads and writes to the database. I am working on a different front end for Azure (MVC2 Web Role), which will use the same data, but in a read only fashion. If I was deploying a traditional web app, I would use SQL Express to act as the local database, and deploy changes with updates to the application (the data changes very slowly) or via some sync system.
With Azure, the picture is a little cloudy (sorry, I had to). I can't seem to find any information to indicate if SQL Express will work inside of Web Roles, and if so, how to do it. Does anyone know if using SQL Express in an Azure web role is possible?
Other options I could do if forced: SQL CE or use SQL Azure. Both have a number of downsides, and are definitely less than perfect.
Thanks,
Erick
Edit
I think my scenario may not have been clear enough.
This data won't change between deployments, and is only accessed from within the Web Role; it is basically a static cache. The on-premises part is kind of a red herring, as it doesn't impact the data on the web role (aside from being its source). Basically, what I want to do is have a local data store/cache that I use existing T-SQL/DAL code with.
While I could use SQL Azure, it doesn't add anything, and if anything only adds additional overhead and failure points. I could also use a VM Role, but that is way too costly/complex.
In a perfect world, I would package the MDF into the cspkg (so it gets deployed with the app) and then use it locally from within the role. If there is no way to do this, then that is ok and I need to figure out the pros and cons of other solutions. We don't live in a perfect world. :)
You might be able to run SQL Express using a custom VHD, but you won't be able to rely on any data ever being present on that VHD. The VMs are completely reset when they reboot - there is no physical persistence across reboots.
If you wanted to, you might be able to locate your entire SQL Server installation in Azure blob storage.
However, in doing all of this, you'll only be able to have one worker/web role that can use that database. Remember: a SQL Server database can only be attached to one SQL Server at a time. If you want to scale out, you'll have to create new SQL Server instances for every web/worker role.
Outside of cost concerns, I can't think of anything that is in SQL Express that should be a show stopper for 99.9% of applications out there.
Adding to Jeremiah's answer: SQL Azure should give you nearly everything SQL Express does today, and you can use the Sync service to synchronize on-premise SQL Server with SQL Azure.
If you installed SQL Express into a VM role, you'd be consuming around $90 monthly just for that instance, plus blob storage (you'd want a Cloud Drive for durability). By definition, a VM Role (or any role) must support scale-out; if you were to scale to 2 instances for whatever reason, both instances would need their own copy of the database, so you'd need to create a blob snapshot for each instance.
Keep in mind, though, if you choose to install SQL Express in a VM: once you're at 2 instances, along with, say, 20GB per instance of blob storage, you're nearing $200 monthly and you're maintaining your VM's OS patches, SQL Express configuration and updates, failure recovery procedures, etc. In contrast, SQL Azure at 20GB, while costing the same $200, will offer better performance and works with the sync service, while completely removing any OS or database server management tasks from you.
To add to the already existing answers, and for anyone wondering if it's a good idea to run SQL Express in the cloud:
It does make sense as a temporary storage area. Consider this architectural approach:
Say you're spinning up nodes to run jobs. Storing a gazillion calculation results in a local SQL Express instance on each node might be a good idea, so you can provide the aggregated responses immediately when the job finishes on the node. Transferring the no-longer-hot results to an off-premises SQL Server for future reporting etc. can be done afterwards. SQL Azure may not be optimal from a volume/latency/cost perspective for storing a gazillion results, and ATS will not always fit the bill, especially when relational data, performance or existing code are involved.
To expand on what David mentioned, you can register for SQL Azure Data Sync CTP2, which allows sync from SQL Server to SQL Azure, here: http://www.microsoft.com/en-us/SQLAzure/datasync.aspx
Make sure to use CTP2 though since CTP1 did not support SQL Server.
If it's a read-only local cache - SQL CE 4 or SQLite.
Both have Entity Framework providers.
If you're writing to it - SQL Azure
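For the read-only local cache option, here is a minimal sketch using SQL CE 4, assuming the .sdf file is deployed with the role (e.g. copied to the output directory and reachable via DataDirectory); the file, table and column names are made up.

```csharp
using System.Collections.Generic;
using System.Data.SqlServerCe;   // reference the SQL Server Compact 4.0 assembly

public static class LocalCatalogCache
{
    // |DataDirectory| must point at the folder where the packaged catalog.sdf was deployed.
    private const string ConnString = @"Data Source=|DataDirectory|\catalog.sdf";

    public static List<string> GetProductNames()
    {
        var names = new List<string>();
        using (var conn = new SqlCeConnection(ConnString))
        using (var cmd = new SqlCeCommand("SELECT Name FROM Products ORDER BY Name", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
                while (reader.Read())
                    names.Add(reader.GetString(0));
        }
        return names;   // the role only ever reads from this cache
    }
}
```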