IBM Bluemix SQL Database service data store management - sql

I'm working on the IBM Bluemix SQL Database service with the premium plan. Our application requires the SQL service to have many schemas.
Is there any way to separate the data storage of each schema onto different physical storage and manage them individually (file, filegroup, etc.)?
For example: the tables and data in SchemaUser1 are stored in SQLDBFile1, and SchemaUser2's tables and data are stored in SQLDBFile2.
Is it possible to create schemas like this, or is the only way to separate data storage to create a new SQL Database service instance?

With the SQLDB premium plan you can create multiple schemas within a single database. Privileges can be assigned for each schema and the objects within it, i.e., it is possible to manage security at the schema level and for the individual objects. Access to the objects (tables, views, functions, ...) is independent of their storage location (tablespaces).
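For illustration, a minimal sketch of the per-schema approach, assuming DB2 syntax (SQLDB is DB2-based); the schema, table, and user names are purely illustrative, and the tablespace placement stays managed by the service:

```sql
-- One schema per application user (names are illustrative)
CREATE SCHEMA SCHEMAUSER1 AUTHORIZATION APPUSER1;
CREATE SCHEMA SCHEMAUSER2 AUTHORIZATION APPUSER2;

-- A table in each schema
CREATE TABLE SCHEMAUSER1.ORDERS (ID INTEGER NOT NULL PRIMARY KEY, AMOUNT DECIMAL(10,2));
CREATE TABLE SCHEMAUSER2.ORDERS (ID INTEGER NOT NULL PRIMARY KEY, AMOUNT DECIMAL(10,2));

-- Privileges are granted per schema/object, independent of physical storage
GRANT SELECT, INSERT, UPDATE, DELETE ON TABLE SCHEMAUSER1.ORDERS TO USER APPUSER1;
GRANT SELECT ON TABLE SCHEMAUSER2.ORDERS TO USER APPUSER2;
```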

Related

Access Azure Table Storage in SQL Server

I'm trying to access Azure Table Storage in a Gen 2 data lake from Azure SQL Server, but I can't find any documentation. There's loads on how to get to CSVs in blob storage, but nothing on Azure tables.
Any ideas?
John
Your requirement isn't feasible.
Azure Table storage is a service that stores non-relational structured data (also known as structured NoSQL data) in the cloud, providing a key/attribute store with a schemaless design.
Since Table storage can't be queried using SQL, there is no way to access it directly from SQL Server.
I recommend first going through Table storage concepts before looking at how to query it.
Once you understand the Table storage structure, you can query the tables either through the REST API or the Cosmos DB Table API, depending on your application. Refer to Querying tables and entities.
You can also follow the complete tutorial Quickstart: Build a Table API app with .NET SDK and Azure Cosmos DB to create a basic application using Table storage for learning purposes.

Sql Azure - Cross database queries

I have N databases, for example 10 databases.
Every database has the same schema, but different data.
Now I would like to take the data in table "Table1" from each database and insert it into a common table named Table1Common in a new database "DWHDatabase".
So it's an N-to-1 insert.
How can I do that? I'm trying to solve this with elastic queries, but it seems to be a 1-to-1 thing.
Use Azure Data Factory with a linked service to each database, and use the Copy activity to load the data.
You can also parameterize the solution:
Parameterize linked services
Parameters in Azure Data Factory by Catherine Wilhemsen
Elastic query is best suited for reporting scenarios in which the majority of the processing (filtering, aggregation) can be done on the external source side. It is unsuitable for ETL procedures that involve transferring significant amounts of data from the remote database(s). Consider Azure Synapse Analytics for large reporting workloads or data warehousing applications with more sophisticated queries.
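For reference, this is roughly what an elastic query setup looks like and why it feels 1-to-1: each source database needs its own external data source and external table in DWHDatabase, and you then insert from each of them into the common table. A sketch, with all server, database, credential, and column names purely illustrative:

```sql
-- Run once in DWHDatabase
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';
CREATE DATABASE SCOPED CREDENTIAL ElasticCred
    WITH IDENTITY = 'sqluser', SECRET = '<password>';

-- Repeat this part per source database (Db1 ... DbN)
CREATE EXTERNAL DATA SOURCE Db1Source WITH (
    TYPE = RDBMS,
    LOCATION = 'myserver.database.windows.net',
    DATABASE_NAME = 'Db1',
    CREDENTIAL = ElasticCred
);

CREATE EXTERNAL TABLE dbo.Table1_Db1 (
    Id INT,
    Payload NVARCHAR(200)
) WITH (
    DATA_SOURCE = Db1Source,
    SCHEMA_NAME = 'dbo',
    OBJECT_NAME = 'Table1'
);

-- Consolidate into the common table
INSERT INTO dbo.Table1Common (Id, Payload)
SELECT Id, Payload FROM dbo.Table1_Db1;
```

For ten small databases this is workable, but for larger volumes the Data Factory approach above scales better.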
You can use the Copy activity to copy data between on-premises and cloud-based data stores. After you've copied the data, you can use other activities to transform and analyze it. The Copy activity can also be used to publish transformation and analysis results for business intelligence (BI) and application consumption.
MSFT Copy Activity Overview: Here.

Creating Feeds between local SQL servers and Azure SQL servers?

We want to use Azure servers to run our Power Apps applications; however, we have local SQL servers that contain our data warehouse. We want only certain tables to be on Azure, and we want to create data feeds between the two, with information flowing from one to the other.
Does anyone have any insight into how I can achieve this?
I have googled but there doesn't appear to be a wealth of information on this topic.
It depends on how quickly after a change in your source (the on-premises SQL Server) you need that change reflected in your sink (Azure SQL).
If a delay of a few minutes is acceptable, or you only need to update once a day, I would suggest a basic Data Factory pipeline (search for "data factory upsert"). How exactly you implement this depends on your data.
If you need it faster, or it is impossible to extract an incremental update from your source, you would need to either use triggers to write the changes from one database to the other, or use a tool that does change data capture, as sketched below.
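For the change-data-capture route, SQL Server's built-in CDC feature can be enabled on the source tables, and a Data Factory pipeline (or your own process) can then read the change tables and push the deltas to Azure SQL. A minimal sketch on the on-premises source; the schema and table names are illustrative:

```sql
-- Enable CDC at the database level (requires sysadmin)
EXEC sys.sp_cdc_enable_db;

-- Enable CDC for the table you want to feed to Azure SQL
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'DimCustomer',   -- illustrative table name
    @role_name     = NULL;             -- no gating role

-- Changes can then be read from the generated function, e.g.:
DECLARE @from_lsn BINARY(10) = sys.fn_cdc_get_min_lsn('dbo_DimCustomer');
DECLARE @to_lsn   BINARY(10) = sys.fn_cdc_get_max_lsn();
SELECT *
FROM cdc.fn_cdc_get_all_changes_dbo_DimCustomer(@from_lsn, @to_lsn, N'all');
```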
It looks like you just want to sync the data in some tables between your local SQL Server and an Azure SQL database.
You can use Azure SQL Data Sync.
Summary:
SQL Data Sync is a service built on Azure SQL Database that lets you synchronize the data you select bi-directionally across multiple SQL databases and SQL Server instances.
With Data Sync, you can keep data synchronized between your on-premises databases and Azure SQL databases to enable hybrid applications.
A Sync Group has the following properties:
The Sync Schema describes which data is being synchronized.
The Sync Direction can be bi-directional or can flow in only one direction: Hub to Member, Member to Hub, or both.
The Sync Interval describes how often synchronization occurs.
The Conflict Resolution Policy is a group-level policy, which can be Hub wins or Member wins.
Next, you need to learn how to configure Data Sync. Please reference this Azure document: Tutorial: Set up SQL Data Sync between Azure SQL Database and SQL Server on-premises.
In this tutorial, you learn how to set up Azure SQL Data Sync by creating a sync group that contains both Azure SQL Database and SQL Server instances. The sync group is custom configured and synchronizes on the schedule you set.
Hope this helps.
The most robust solution here is Transactional Replication. You can also use SSIS or Azure Data Factory for copying tables to/from Azure SQL Database. And Azure SQL Data Sync also exists.

Access Azure Data Lake Analytics Tables from SQL Server Polybase

I need to export a multi-terabyte dataset processed via Azure Data Lake Analytics (ADLA) onto a SQL Server database.
Based on my research so far, I know that I can write the ADLA output to a Data Lake Store or WASB using built-in outputters, and then read the output data from SQL Server using PolyBase.
However, creating the result of ADLA processing as an ADLA table seems pretty enticing to us. It is a clean solution (no files to manage), multiple readers, built-in partitioning, distribution keys and the potential for allowing other processes to access the tables.
If we use ADLA tables, can I access ADLA tables via SQL Polybase? If not, is there any way to access the files underlying the ADLA tables directly from Polybase?
I know that I can probably do this using ADF, but at this point I want to avoid ADF to the extent possible - to minimize costs, and to keep the process simple.
Unfortunately, PolyBase support for ADLA tables is still on the roadmap and not yet available. Please file a feature request through the SQL Data Warehouse UserVoice page.
The suggested workaround is to produce the output as CSV in ADLA, then create the partitioned and distributed table in SQL DW and use PolyBase to read the data and fill the SQL DW managed table.
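A rough sketch of that workaround on the SQL DW side, assuming the ADLA job has written header-less CSV files to Azure Data Lake Store; the credential details, paths, and column definitions are all placeholders:

```sql
-- One-time setup: master key and a credential for the Data Lake Store service principal
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';
CREATE DATABASE SCOPED CREDENTIAL AdlsCredential
WITH IDENTITY = '<client_id>@<oauth_2.0_token_endpoint>',
     SECRET   = '<client_secret>';

CREATE EXTERNAL DATA SOURCE AdlsStore WITH (
    TYPE = HADOOP,
    LOCATION = 'adl://<datalakestore>.azuredatalakestore.net',
    CREDENTIAL = AdlsCredential
);

CREATE EXTERNAL FILE FORMAT CsvFormat WITH (
    FORMAT_TYPE = DELIMITEDTEXT,
    FORMAT_OPTIONS (FIELD_TERMINATOR = ',', STRING_DELIMITER = '"')
);

-- External table over the files the ADLA job produced
CREATE EXTERNAL TABLE dbo.AdlaResult_ext (
    CustomerId INT,
    EventDate  DATE,
    Amount     DECIMAL(18,2)
) WITH (
    LOCATION = '/output/adla_result/',
    DATA_SOURCE = AdlsStore,
    FILE_FORMAT = CsvFormat
);

-- Load into a distributed SQL DW managed table via CTAS
CREATE TABLE dbo.AdlaResult
WITH (DISTRIBUTION = HASH(CustomerId), CLUSTERED COLUMNSTORE INDEX)
AS SELECT * FROM dbo.AdlaResult_ext;
```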

How to know the storage address of a record in Azure Database

In Oracle we have ROWID, which gives the physical address of a record.
Likewise, we have %%physloc%% in SQL Server. What is the keyword for fetching the physical location of a record in Azure SQL Database?
Since the Azure SQL Database service is a PaaS (Platform as a Service) offering, it abstracts away the need to care or worry about the physical disks that store your data. The service manages scaling everything in your database according to the pricing tier and DTUs you select. As a result, there are no queries that can be performed to tell where on disk your data is stored. There are also no queries to specify where in physical storage you want Azure SQL Database to put your database tables and data; Azure SQL Database just manages all of this for you automatically.
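For reference, this is the SQL Server-side feature the question mentions; %%physloc%% is an undocumented virtual column intended for troubleshooting, and Azure SQL Database exposes no supported equivalent. The table name here is illustrative:

```sql
-- SQL Server only: undocumented, for troubleshooting.
-- Decodes the physical location of each row as file:page:slot.
SELECT t.*,
       %%physloc%% AS physloc_binary,
       sys.fn_PhysLocFormatter(%%physloc%%) AS file_page_slot
FROM dbo.MyTable AS t;
```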