Is there a way to enable the Spatial Extender in the Bluemix SQL Database service, or are there any plans to add this to the service? I am aware that this functionality is available in the dashDB service; however, I would be interested in adding geospatial queries to a standard application database that is not strongly focused on analytics, and therefore the SQL Database service seems to be a better fit.
You could use the "IBM DB2 on Cloud" service, which can be fully configured to your needs. The "SQL Database" service is typically a shared instance/database service and cannot be customized for an individual user.
If you are looking for a fully managed service, then you may consider creating your tables as row-based tables in dashDB (use ORGANIZED BY ROW in your CREATE TABLE DDL) so that they won't leverage the BLU features but will still have the Spatial Extender.
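For illustration, a minimal sketch of such a table and a spatial query against it, assuming the Spatial Extender types are available under the usual DB2GSE schema; the table, columns, and the WGS84 spatial reference id (1003) are my own examples, not from the original answer:

```sql
-- Row-organized table (opts out of BLU column storage) with a spatial column.
CREATE TABLE customer_sites (
    site_id  INTEGER NOT NULL PRIMARY KEY,
    name     VARCHAR(100),
    location DB2GSE.ST_POINT
) ORGANIZED BY ROW;

-- Example query: sites within 10 km of a reference point.
-- ST_Point takes longitude, latitude, and a spatial reference id
-- (1003 is the WGS84-based SRS in DB2 Spatial Extender).
SELECT site_id, name
FROM   customer_sites
WHERE  DB2GSE.ST_Distance(
           location,
           DB2GSE.ST_Point(-73.98, 40.75, 1003),
           'KILOMETER') <= 10;
```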
Otherwise, as Henrik suggested, DB2 on Cloud is a good option if you want customization and don't mind the added responsibility of administering and maintaining your database.
Good morning,
I am using the ASP.NET framework with an Azure client database.
I am now creating another server on Azure to host databases. On this server, for each customer registering on the website (for which one entry is created in my first database), I need to create a database with 8 tables, identical for each customer.
What would be the best way to map the ASP.NET ID to a new database? Which framework would you recommend?
Thanks
Rather than running a VM where you're going to have to manage a SQL Server installation and write a bunch of code to handle a database per tenant scenario, I highly, highly, highly recommend taking a look at Azure SQL's multi-tenant sharding support. All of this code is already written for you. And it's not that you're paying for one DB per client - check out elastic pooling.
You can read the docs here.
Also note, this option will scale very well.
I have done this three different ways: a database per client, where I wrote my own code to manage sharding; a single database with a separate schema per client (a huge pain in the rear); and using Azure SQL's sharding support. It's not just the issue of correctly separating client data. You also need to think about querying for reporting across all client databases and managing schema changes. Under the first two options, if you change a schema, you get to modify N client databases. Azure SQL's sharding tools will manage this for you.
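For the cross-client reporting piece specifically, here's a rough sketch using Azure SQL's elastic query feature, which can expose a sharded table across all tenant databases as one external table; every name below (server, credential, shard map, table) is a placeholder, not something from this thread:

```sql
-- One-time setup in a separate "head" database used for reporting.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';

CREATE DATABASE SCOPED CREDENTIAL ElasticCred
WITH IDENTITY = '<sql login>', SECRET = '<password>';

-- Points at the shard map that the elastic database tools maintain.
CREATE EXTERNAL DATA SOURCE TenantShards WITH (
    TYPE = SHARD_MAP_MANAGER,
    LOCATION = '<server>.database.windows.net',
    DATABASE_NAME = 'ShardMapManagerDb',
    CREDENTIAL = ElasticCred,
    SHARD_MAP_NAME = 'TenantShardMap'
);

-- A fan-out view over the sharded table in every tenant database.
CREATE EXTERNAL TABLE dbo.Orders (
    TenantId INT NOT NULL,
    OrderId  INT NOT NULL,
    Total    MONEY
) WITH (
    DATA_SOURCE = TenantShards,
    DISTRIBUTION = SHARDED(TenantId)
);

-- A reporting query now fans out across all client databases.
SELECT TenantId, SUM(Total) AS Revenue
FROM   dbo.Orders
GROUP  BY TenantId;
```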
The use case is a distributed deployment of a web application on Azure using PaaS. I read the Azure documentation on SQL Azure database geo-replication, and it seems none of the service tiers is a good fit for this need. The other option is SQL Data Sync, which is in preview and cannot be used in production. It seems Microsoft Azure does not have any way to support a redundant, database-centric application using the PaaS model.
Please help me resolve this issue or suggest an alternative solution.
Akanksha
Both SQL Data Sync and geo-replication are for database redundancy, but we need to know your detailed scenario so that we can say which one is the better fit. Basically, geo-replication is database-level data synchronization used for disaster recovery (DR). SQL Data Sync is table-level data synchronization used for reference-data replication between Azure SQL databases and on-premises databases.
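For reference, a minimal sketch of what setting up geo-replication looks like in T-SQL; the database and partner server names are hypothetical:

```sql
-- Run in the master database of the primary server:
-- creates a readable secondary copy of MyAppDb on the partner server.
ALTER DATABASE MyAppDb
ADD SECONDARY ON SERVER MyPartnerServer
WITH (ALLOW_CONNECTIONS = ALL);

-- During a DR event, run in the master database of the secondary server
-- to promote the secondary to primary:
-- ALTER DATABASE MyAppDb FAILOVER;
```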
I read in a few places that SQL Azure data is automatically replicated and the Azure platform provides redundant copies of the data; therefore, SQL Server high-availability features such as database mirroring and failover clustering aren't needed.
Has anyone had a chance to investigate this more deeply? Are all those availability enhancements really not needed in Azure? Thanks!
To clarify, I'm talking about SQL as a service, not a VM-hosted SQL Server.
The SQL Database service (database-as-a-service) is a multi-tenant database service, and your databases are triple-replicated within the data center, providing durable storage. The service itself, being large-scale, provides high availability (since there are many VMs running the service itself, along with replicated data). Nothing is needed in terms of mirroring or failover clusters. Having said that: if, say, your particular database became unavailable for a period of time, you'll need to consider how you'll handle that situation (perhaps syncing to another SQL Database, maybe even in another data center).
If you go with SQL Database (DBaaS), you'll still need to work out your backup strategy, and possibly sync with another DC (or an on-premises database server) for DR purposes.
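One common building block for that strategy is the database copy feature; a minimal sketch, with hypothetical server and database names:

```sql
-- Run in the master database of the destination server:
-- creates a transactionally consistent copy of the source database.
CREATE DATABASE MyAppDb_Copy AS COPY OF MySourceServer.MyAppDb;

-- Check progress of the copy on the destination server.
SELECT name, state_desc
FROM   sys.databases
WHERE  name = 'MyAppDb_Copy';
```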
More info on SQL Database fault tolerance is here.
Your desired detail is probably contained in this MSDN article on Business Continuity and Azure SQL Database (see: http://msdn.microsoft.com/en-us/library/windowsazure/hh852669.aspx). At the most basic level, Azure SQL Database will keep three replicas of your database: one primary and two secondaries.
While this helps with BCP/DR scenarios, you may also wish to investigate ways to back up your database so you have point-in-time restore capabilities. More information on backup/restore can be found here: http://msdn.microsoft.com/en-us/library/windowsazure/jj650016.aspx
I have a scenario as explained below and I need to implement the best Data Sync method.
I have a centralized SQL Azure database (the master database).
There are about 20 on-premises SQL Server databases (this number will increase in the future). These databases are not necessarily always connected to the internet.
All master and on-premises DBs will have the same schema/table structures.
I would like to do bidirectional data sync between SQL Azure and all of the on-premises databases.
Data Sync frequency will be once in a day.
Each on-premises DB is of reasonable size (not too big and not too small).
I have explored the options below:
SQL Azure Data Sync
Microsoft Sync Framework
SQL Server 2008 Change Data Capture
SQL Server Change Tracking
I would like to know the best possible method to achieve this.
I have been working with SQL Azure Data Sync, Microsoft Sync Framework, and SQL Server Change Tracking. I have no experience with Change Data Capture.
SQL Azure Data Sync
This is the easiest way to implement data sync; it is simply a matter of configuration. Unfortunately, it is still in preview and Microsoft does not yet recommend it for production. We have been using it to sync 20 databases spread across different geographical locations, and so far it works well. No coding is required, but you may have to pay for this service in the future; at the moment it is free.
Microsoft Sync Framework
Microsoft Sync Framework is for developers, who can use it as an API to develop a sync application. SQL Azure Data Sync uses Sync Framework internally. To implement data sync with Azure you need an N-tier architecture with WCF, and you need to host your WCF service in an Azure website or virtual machine. Considerable development time is required; see the following link for a sample implementation from Microsoft. Once developed, you can easily configure it and use it to sync multiple databases.
Database Sync: SQL Server and SQL Express N-Tier with WCF
SQL Server Change Tracking
You need to manually program the sync for each table, and you need a linked server set up between each SQL Server instance. To set up a linked server with an Azure database you need to open a specific port (TCP 1433).
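To give an idea of what that manual programming involves, a minimal Change Tracking sketch with hypothetical database, table, and key names:

```sql
-- Enable change tracking at the database level (keep 7 days of change data).
ALTER DATABASE MyDb
SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 7 DAYS, AUTO_CLEANUP = ON);

-- Enable it per table.
ALTER TABLE dbo.Customers
ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = OFF);

-- During each sync run: fetch rows changed since the version stored last time.
DECLARE @last_sync_version BIGINT = 0;  -- in practice, persisted per remote database
SELECT ct.SYS_CHANGE_OPERATION, ct.CustomerId, c.*
FROM   CHANGETABLE(CHANGES dbo.Customers, @last_sync_version) AS ct
LEFT JOIN dbo.Customers AS c ON c.CustomerId = ct.CustomerId;

-- Remember the current version for the next run.
SELECT CHANGE_TRACKING_CURRENT_VERSION();
```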
Items #3 and #4 in your list are not really synchronization solutions, just parts of one. Both SQL CDC and SQL Change Tracking simply allow you to track the changes; you have to put in extra code to grab those changes and apply/sync them to another database.
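Since CDC was the unknown option, a minimal sketch of enabling and reading it (schema and table names are hypothetical; note that CDC needs SQL Server Agent running and, at the time, an Enterprise-class edition):

```sql
-- Enable CDC at the database level, then for one table.
EXEC sys.sp_cdc_enable_db;
EXEC sys.sp_cdc_enable_table
     @source_schema = N'dbo',
     @source_name   = N'Customers',
     @role_name     = NULL;

-- Read all changes captured so far for the dbo_Customers capture instance.
DECLARE @from binary(10) = sys.fn_cdc_get_min_lsn('dbo_Customers'),
        @to   binary(10) = sys.fn_cdc_get_max_lsn();
SELECT *
FROM   cdc.fn_cdc_get_all_changes_dbo_Customers(@from, @to, N'all');
```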
The SQL Data Sync service will be your best option if you don't want to write code. Note that as of today (despite being in preview for so long), Data Sync is still in preview mode.
If you're fine writing code, Sync Fx is a good option as well (SQL Data Sync internally uses Sync Framework).
Azure SQL Data Sync has now reached general availability (GA), as announced in the following Microsoft article.
Announcing the general availability of Azure SQL Data Sync
I'm currently developing a service for an app with WCF. I want to host the data on Windows Azure, and it should hold data from different users. I'm searching for the right design for my database. In my opinion there are only two possibilities:
Create a new database for every customer
Store a customer ID in every table (or in the main table when every table is connected via entities)
The first approach has very good speed and isolation, but it's very expensive on Windows Azure (or am I misunderstanding the Azure pricing?). Also, I don't know how to configure a WCF service so that it always uses a different database.
The second approach is slower and the isolation is poor, but it's easy to implement and cheaper.
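For concreteness, the second approach would look something like this (table and column names are just examples):

```sql
-- Every tenant-owned table carries the customer id; every query filters on it.
CREATE TABLE dbo.Orders (
    CustomerId INT NOT NULL,
    OrderId    INT NOT NULL,
    Total      MONEY,
    CONSTRAINT PK_Orders PRIMARY KEY (CustomerId, OrderId)
);

-- The WCF service adds the tenant filter to every statement it issues.
DECLARE @CustomerId INT = 42;
SELECT OrderId, Total
FROM   dbo.Orders
WHERE  CustomerId = @CustomerId;
```

Leading the primary key with CustomerId at least keeps each customer's rows clustered together, which helps the shared-table approach perform.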
Now to my question:
Is there any other way to get high isolation of data and also easy integration into a WCF service on Azure?
What design should I use and why?
You have two additional options: build multiple schema containers within a database (see my blog post about this technique), or, even better, use SQL Database Federations (you can use my open-source project called Enzo SQL Shard to access federations). The links I am providing give you access to other options as well.
In the end it's a rather complex decision that involves a tradeoff of performance, security, and manageability. I usually recommend Federations, even though it has its own set of limitations, because it is a flexible multitenant option for the cloud with the ability to filter data automatically. Check out the open-source project; you will see how to implement good separation of customer data independently of the physical storage.
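As a minimal sketch of what federations look like in T-SQL (the federation, table, and column names are my own examples, not from the project):

```sql
-- Create a federation keyed on a customer id (run once in the root database).
CREATE FEDERATION CustomerFederation (cid BIGINT RANGE);

-- Connect to the federation member that holds customer 42.
-- FILTERING = ON scopes every query to that customer's rows automatically.
USE FEDERATION CustomerFederation (cid = 42) WITH RESET, FILTERING = ON;

-- Federated tables must include the distribution column.
CREATE TABLE dbo.Orders (
    CustomerId BIGINT NOT NULL,
    OrderId    INT    NOT NULL,
    Total      MONEY,
    PRIMARY KEY (CustomerId, OrderId)
) FEDERATED ON (cid = CustomerId);
```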