Creating a SQL database in the Azure Portal lets you select a collation. How can I retrieve this list externally?

When you create a SQL database in the Azure Portal, the portal lets you select the collation you want from a list.
I would like to integrate this list into my own Azure DevOps extension, inside the task.json.
I know how to set dataSourceBindings and how to query and filter a REST API.
My problem is finding the source of this collation list: where is it stored?
Can anybody help me find it?

For SQL Server you can run these queries to get all collations:
Windows collations: SELECT * FROM sys.fn_helpcollations() WHERE name NOT LIKE 'SQL%'
SQL Server collations: SELECT * FROM sys.fn_helpcollations() WHERE name LIKE 'SQL%'
Cf. https://learn.microsoft.com/en-us/sql/t-sql/statements/sql-server-collation-name-transact-sql?view=sql-server-ver15
With this you can keep a fixed list in your JSON file and don't need to query it over and over again.
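To turn that result into something you can paste into the task.json options, a small aggregation query helps. This is only a sketch, assuming SQL Server 2017+ for STRING_AGG; adjust the quoting to whatever format your picklist expects:

-- Build a comma-separated, double-quoted list of collation names
-- suitable for pasting into a JSON options array (sketch, SQL Server 2017+).
SELECT STRING_AGG(CAST(QUOTENAME(name, '"') AS nvarchar(max)), ', ')
       WITHIN GROUP (ORDER BY name) AS collation_options
FROM sys.fn_helpcollations();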


Azure Databricks - Generate SQL Select Statement with Columns

I have tables in Azure Databricks that I interact with via SQL in a notebook. I need to select all columns from a table with 200 columns, but I need to modify some of them for a SELECT ... INSERT (to adjust specific columns for a PK), so I cannot use SELECT *. (There are multiple scenarios; this is just my current objective.)
How can I generate a SELECT statement for a table that lists all of its column names? This would be the equivalent of 'Select Top N Rows' in SSMS, where it generates a SELECT for the table that I can then edit.
I have seen functions like DESCRIBE and SHOW, but they can't build a SELECT statement.
I am new to Databricks. Any help is appreciated.
I have the same problem. It is really tough to write and modify SELECT statements for tables like this. I tried many approaches and found that connecting to the table on Azure Databricks with third-party software worked fine.
Here is what I do:
Download third-party software such as DBeaver.
Download the Databricks JDBC driver from this page.
Configure the Databricks driver. Luckily there is an official doc for DBeaver.
Connect to Databricks and find the table for which to generate the SELECT statement.
Use DBeaver's built-in function to generate it.
That's it!
I found this setup took just 10-15 minutes to complete, saving a lot of time.
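If you prefer to stay inside the notebook, the column list can also be pulled from the catalog and pasted into your statement. A minimal sketch, assuming Unity Catalog's information_schema is available; my_catalog, my_schema and my_table are placeholders:

-- List the table's columns in order; copy the result into your SELECT.
SELECT column_name
FROM my_catalog.information_schema.columns
WHERE table_schema = 'my_schema'
  AND table_name = 'my_table'
ORDER BY ordinal_position;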

Partitioning Data on BULK INSERT command

In a previous post I was able to successfully use filepath metadata from an OPENROWSET command in SQL on-demand to create a partitioning scheme within a view. Link.
I am looking to replicate this in a normal Azure SQL Database view. When I try to run the same command I get an error stating:
Cannot find either column "r" or the user-defined function or aggregate "r.filepath", or the name is ambiguous.
This is what I am running in SSMS against my Azure SQL Database instance:
CREATE VIEW testview6 AS
SELECT *, r.filepath(1) AS [date]
FROM OPENROWSET (
    BULK 'Sales/2020-10-01/Iris_Shortened.csv',
    DATA_SOURCE = 'azure_blob_sas5',
    SINGLE_CLOB
) AS [r];
I am not sure what I am doing wrong. My goal is to create a partitioning scheme so that the filepath metadata can be used to parse out what is needed. Is this something that is only available with SQL On-Demand?
Currently, yes.
This is a great idea; can you create a feature request for Azure SQL Database here?
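For reference, the pattern that works in a Synapse serverless SQL pool (SQL on-demand) looks roughly like the sketch below. filepath(1) returns the value matched by the first wildcard, so one folder per date gives you a partition column; the path, data source and CSV options here are placeholders:

CREATE VIEW SalesByDate AS
SELECT r.*, r.filepath(1) AS [date]    -- value of the first * in the BULK path
FROM OPENROWSET (
    BULK 'Sales/*/*.csv',              -- e.g. Sales/2020-10-01/Iris_Shortened.csv
    DATA_SOURCE = 'azure_blob_sas5',
    FORMAT = 'CSV',
    PARSER_VERSION = '2.0',
    HEADER_ROW = TRUE
) AS [r];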

Get schemas in a specific database in MS SQL Server

I need to get the list of schemas in a specific database in MS SQL Server, not the schemas of the entire server.
For example: I get a list of DBs like A, B, C from the server; now I need to fetch the list of all schemas in A.
I need a query for that. Can I get some help here?
This can be accomplished using the sys.schemas catalog view:
USE A;
SELECT name
FROM sys.schemas;
3-part name example:
SELECT name
FROM A.sys.schemas;
You can obtain all schemas from a specific database like this:
USE Database_Name;
SELECT * FROM sys.schemas;
Read the link below for a better understanding:
How do I obtain a list of all schemas in a Sql Server database
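If you only want user-created schemas rather than the built-in and role-owned ones, a common refinement looks like the sketch below (shown against database A; the schema_id cutoff of 16384 skips the fixed database role schemas):

SELECT s.name AS schema_name, p.name AS owner_name
FROM A.sys.schemas AS s
JOIN A.sys.database_principals AS p
    ON p.principal_id = s.principal_id
WHERE s.schema_id < 16384                                  -- excludes db_* role schemas
  AND s.name NOT IN ('sys', 'INFORMATION_SCHEMA', 'guest');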

SQL Azure - copy table between databases

I am trying to run the following SQL:
INSERT INTO Suppliers ([SupplierID], [CompanyName])
SELECT [SupplierID], [CompanyName] FROM [AlexDB]..Suppliers
and got the error "Reference to database and/or server name in ... is not supported in this version of SQL Server".
Any idea how to copy data between databases inside the same server?
I can pull the data down to the client and then push it back to the server, but this is very slow.
I know this is old, but I had another manual solution for a one-off run.
Using SQL Management Studio R2 SP1 to connect to Azure, I right-click the source database and select Generate Scripts.
During the wizard, after I have selected my tables, I choose to output to a query window and then click Advanced. About halfway down the properties window there is an option for "Types of data to script"; I change it to "Data only" and then finish the wizard.
All I do then is check the script, rearrange the inserts for constraints, and change the USE statement at the top so it runs against my target DB.
Then I right-click the target database, select New Query, copy the script into it, and run it.
Done, data migrated.
Since 2015, this can be done using elastic database queries, also known as cross-database queries.
I created and used this template; it copied 1.5 million rows in 20 minutes:
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<password>';

CREATE DATABASE SCOPED CREDENTIAL SQL_Credential
WITH IDENTITY = '<username>',
     SECRET = '<password>';

CREATE EXTERNAL DATA SOURCE RemoteReferenceData
WITH
(
    TYPE = RDBMS,
    LOCATION = '<server>.database.windows.net',
    DATABASE_NAME = '<db>',
    CREDENTIAL = SQL_Credential
);

CREATE EXTERNAL TABLE [dbo].[source_table] (
    [Id] BIGINT NOT NULL,
    ...
)
WITH
(
    DATA_SOURCE = RemoteReferenceData
);

SELECT *
INTO target_table
FROM source_table;
Unfortunately there is no way to do this in a single query.
The easiest way to accomplish it is to use SQL Data Sync to copy the tables. The benefit of this is that it also works between servers and keeps your tables in sync.
http://azure.microsoft.com/en-us/documentation/articles/sql-database-get-started-sql-data-sync/
In practice, I haven't had a great experience with Data Sync running in production, but it's fine for one-off jobs.
One issue with Data Sync is that it creates a large number of sync objects in your database, and deleting the Data Sync group from the Azure portal may or may not clean them up. Follow the directions in this article to clean them up manually:
https://msgooroo.com/GoorooTHINK/Article/15141/Removing-SQL-Azure-Sync-objects-manually/5215
SQL Azure does not support the USE statement and effectively has no cross-database queries, so the above query is bound to fail.
If you want to copy or back up the database to another SQL Azure database, you can use same-server or cross-server database copying in SQL Azure. Refer to this MSDN article.
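For reference, the copy mentioned there is a single T-SQL statement run against the master database of the destination server; the names below are placeholders, and note that it copies the whole database, not just one table:

-- Same-server copy (run in master on the destination server)
CREATE DATABASE TargetDb AS COPY OF SourceDb;

-- Cross-server copy: prefix the source database with its server name
CREATE DATABASE TargetDb AS COPY OF source_server.SourceDb;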
You could use a tool like SQL Data Compare from Red Gate Software, which can move database contents from one place to another and fully supports SQL Azure. The 14-day free trial should let you see if it can do what you need.
Full disclosure: I work for Red Gate Software.
An old post, but another option is the SQL Azure Migration Wizard.
There is no straightforward way to do this, but it can be done with the following trick:
Step 1: Create another table with the same structure as the Suppliers table inside [AlexDB], say SuppliersBackup, and copy the existing Suppliers data into it.
Step 2: Create a table with the same structure as the Suppliers table inside DesiredDB.
Step 3: Enable Data Sync between AlexDB..Suppliers and DesiredDB..Suppliers.
Step 4: Truncate the data from AlexDB..Suppliers.
Step 5: Copy the data from AlexDB..SuppliersBackup back into AlexDB..Suppliers.
Step 6: Now run the sync.
The data is copied to DesiredDB.
If you have an on-premises version that has sp_addlinkedsrvlogin, you can set up linked servers for both the source and target databases, and then you can run your INSERT INTO query.
See "SQL Server Support for Linked Server and Distributed Queries against Windows Azure SQL Database" in this blog: https://azure.microsoft.com/en-us/blog/announcing-updates-to-windows-azure-sql-database/
OK, I think I found the answer: there is no way. You have to move the data to the client, or use some other trick. Here is a link to an article with explanations: Limitations of SQL Azure: only one DB per connection.
But any other ideas are welcome!
You can easily add a linked server from SQL Server Management Studio and then query the fully qualified table name. No need for flat files or export tables. This method also works from on-premises to an Azure database and vice versa.
e.g.
SELECT TOP 1 ColA, ColB FROM [AZURE01_<hidden>].<hidden>_UAT_RecoveryTestSrc.dbo.FooTable ORDER BY 1 DESC;
SELECT TOP 1 ColA, ColB FROM [AZURE01_<hidden>].<hidden>_UAT_RecoveryTestTgt.dbo.FooTable ORDER BY 1 DESC;
A few options (rather, workarounds):
Generate a script with data
Use Data Sync in Azure
Use MS Access (import and then export), with many exclusions (like no GUIDs in Access)
Use third-party tools like Red Gate
Unfortunately, there is no easy, built-in way to do this so far.
I would recommend the SSMS SQL Server Import and Export feature. It supports multiple connection configurations and cross-server copying of selected tables. I have tried the .NET SQL Server connector, which works very well with Azure SQL databases.

Retrieving users for specific SQL Server databases

I have a few SQL Server databases (all on one server), each containing its own set of users. Now I'm trying to design a small application that would query those users and then display them in a report (TBD). I've looked online for how to do this, but I didn't find anything. Is it possible in SQL Server to retrieve all the users of a database? If so, how?
On SQL Server 2005 and up:
Connect to the specific database you're interested in:
USE DatabaseName;
Execute this query:
SELECT * FROM sys.database_principals;
That gives you a slew of information on all users defined in the database.
See the MSDN documentation for a detailed explanation of all columns returned by that view.
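The principals view also includes roles and system principals, so for a user report a filtered version like the sketch below is often more useful (the type codes cover SQL users, Windows users/groups and external Azure AD users/groups; adjust to taste):

SELECT name, type_desc, create_date
FROM sys.database_principals
WHERE type IN ('S', 'U', 'G', 'E', 'X')      -- users and groups, not roles
  AND name NOT IN ('guest', 'INFORMATION_SCHEMA', 'sys');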