Azure SqlServer and SqlDatabase resource groups

I would like to use Azure to create environments under a single ResourceGroup for clients comprising:
X web servers
Y app servers
1 database
Ideally, the database would be hosted on a server that is part of a different resource group, so I can leverage elastic pools across multiple clients.
When I attempt to use New-AzureRmResourceGroupDeployment to do this, I get an error stating that the parent SqlServer cannot be found.
I have created a SqlServer via PowerShell:
New-AzureRmSqlServer -servername "MyServer" -ResourceGroupName "Common" -Location "South Central US"
I then attempt a deployment for ClientA with:
New-AzureRmResourceGroup -Name 'ClientA' -Location "South Central US"
New-AzureRmResourceGroupDeployment -ResourceGroupName 'ClientA' -TemplateFile azure.ClientDeployment.json -Client 'ClientA'
My deployment configuration is:
{
  "parameters": { ... },
  "variables": { ... },
  "resources": [
    {
      "name": "MySqlServer/ClientA",
      "type": "Microsoft.Sql/servers/databases",
      "apiVersion": "2014-04-01",
      "location": "[resourceGroup().location]",
      "tags": {},
      "properties": {
        "edition": "Basic"
      }
    }
  ],
  "outputs": { }
}
Results in the error message:
New-AzureRmResourceGroupDeployment : 5:04:33 PM - Resource Microsoft.Sql/servers/databases 'MySqlServer/ClientA' failed with message '{
"code": "NotFound",
"message": "Server 'MySqlServer' does not exist in resource group 'ClientA' in subscription '{...}'.",
"target": null,
"details": [],
"innererror": []
}
Note that both resource groups (Common and ClientA) are in the same subscription and location.
Is it possible to have a SqlServer part of one resource group, and the SqlDatabase part of a different resource group?

Separating a SQL Server in one resource group (or subscription) from its databases in another is not possible as of now.
See the similar question here in the MSDN forum.
An Azure resource group is also defined to be a strong container for its resources, which defines their identity (look at the URI of all Azure resources), propagation of role-based access rights, and controls resource lifetimes - thus granting access to a resource group grants rights to its resources, and deleting a resource group deletes the resources it contains.
Combining the above means that the resource group that contains a server must also contain the databases hosted/nested on that server.
Even if you create everything in a single subscription and later try to move the SQL database to another resource group, the Portal won't allow it.
For your case, I suggest creating all the services in a common subscription, in a location that is central or easily accessible by all the clients.
You can use AzureSpeed to get the latency to each region from your clients' locations, and create the subscription's resources in a common location that has minimal latency for all your clients.
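If you prefer to script that check, here is a rough Python sketch that times a TCP connect to one reachable host per candidate region; the hostnames below are hypothetical placeholders, not real endpoints, so substitute hosts you actually run in each region:

import socket
import time

# hypothetical endpoints: put one reachable host per candidate region here
REGION_HOSTS = {
    "South Central US": "example-scus.blob.core.windows.net",
    "East US 2": "example-eus2.blob.core.windows.net",
}

for region, host in REGION_HOSTS.items():
    start = time.monotonic()
    with socket.create_connection((host, 443), timeout=5):
        pass  # connection established; we only care about the handshake time
    print(f"{region}: {(time.monotonic() - start) * 1000:.0f} ms TCP connect")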

Related

Azure Datalake Analytics U-SQL with Azure Datalake Storage Gen 2

Question: what is the path forward for using ADLA (U-SQL) with ADLS (Gen2)?
I have been running Azure Data Lake Analytics (U-SQL) jobs via Azure Data Factory (ADF v2) against Azure Data Lake Store Gen1 for quite a while now in East US 2.
I was planning to deploy another instance to cater to Canadian clients and wanted to set up Azure Data Lake Store Gen1 there.
What I tried :
I was not able to create an Azure Data Lake Storage Gen1 account in Canada Central (or any Canadian region, for that matter).
I tried to move to Azure Data Lake Storage Gen2 but ran into an issue where the Azure Data Factory U-SQL activity could not be linked with a Gen2 storage linked service to pick up the U-SQL script.
I stumbled upon multiple links on this topic:
https://feedback.azure.com/forums/327234-data-lake/suggestions/36445702-add-support-for-adls-gen2-to-adla
https://social.msdn.microsoft.com/Forums/en-US/5ce97eef-8940-4591-a19c-934f71825e7d/connect-data-lake-analytics-to-adls-gen-2
which essentially say that U-SQL / ADLA won't support ADLS Gen2.
I am a bit confused, since there is no official documentation on ADLA's direction.
Update:
This is the structure of my U-SQL activity; it runs and processes successfully. (You can try creating a new U-SQL activity JSON like this to replace your current activity.)
{
  "name": "pipeline4",
  "properties": {
    "activities": [
      {
        "name": "U-SQL1",
        "type": "DataLakeAnalyticsU-SQL",
        "dependsOn": [],
        "policy": {
          "timeout": "7.00:00:00",
          "retry": 0,
          "retryIntervalInSeconds": 30,
          "secureOutput": false,
          "secureInput": false
        },
        "userProperties": [],
        "typeProperties": {
          "scriptPath": "test1/u-sql.txt",
          "scriptLinkedService": {
            "referenceName": "LinkTo0730",
            "type": "LinkedServiceReference"
          }
        },
        "linkedServiceName": {
          "referenceName": "AzureDataLakeAnalytics1",
          "type": "LinkedServiceReference"
        }
      }
    ],
    "annotations": []
  }
}
Original Answer:
I was not able to create an Azure Data Lake Storage Gen1 account in Canada Central (or any Canadian region, for that matter)
On my side, I also cannot create Data Lake Gen1 in the Canada Central region; that is a limit of my subscription. But have a check of the resource provider on your side, maybe you can. (The Azure Data Lake Gen1 resource provider is 'Microsoft.DataLakeStore'.)
Resource Manager is supported in all regions, but the resources you deploy might not be supported in all regions. In addition, there may be limitations on your subscription that prevent you from using some regions that support the resource. The resource explorer displays valid locations for the resource type.
Please have a check of this document:
https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/resource-providers-and-types
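To check programmatically which regions your subscription can deploy 'Microsoft.DataLakeStore' accounts to, here is a minimal Python sketch, assuming the azure-mgmt-resource and azure-identity packages; the subscription id is a placeholder:

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

# placeholder subscription id; DefaultAzureCredential picks up your CLI/env login
client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

# list the regions in which this subscription may create Data Lake Gen1 accounts
provider = client.providers.get("Microsoft.DataLakeStore")
for rt in provider.resource_types:
    if rt.resource_type == "accounts":
        print(rt.locations)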
I tried to move to Azure Data Lake Storage Gen2 but ran into an issue where the Azure Data Factory U-SQL activity could not be linked with a Gen2 storage linked service to pick up the U-SQL script
On my side, it seems it is reading the U-SQL script from Gen2; did you get some error?

Writing to an Excel spreadsheet via Microsoft Graph API - Could not obtain a WAC access token

I'm trying to update an existing spreadsheet that is located in the root of a Business OneDrive.
I can run a GET and retrieve the file details.
I'm authorised as an Application as opposed to Delegated.
Whenever I run the following POST:
https://graph.microsoft.com/v1.0/drives/{drive-id}/root:/demo.xlsx
I get this error:
{
"error": {
"code": "AccessDenied",
"message": "Could not obtain a WAC access token.",
"innerError": {
"request-id": "07422c42-930f-4329-809a-93103bff3ab4",
"date": "2020-05-14T18:32:46"
}
}
}
I also have Files.ReadWrite.All set to the Application.
I have been using the following documentation for help:
https://learn.microsoft.com/en-us/graph/api/resources/excel?view=graph-rest-1.0
Apparently sessions are not mandatory, and by default they are persisted - which is what I want (https://learn.microsoft.com/en-us/graph/api/workbook-createsession?view=graph-rest-1.0&tabs=http)
I have run the GET and POST requests via Postman.
One of the following permission scopes is required to use the Excel resource:
Files.Read (for read actions)
Files.ReadWrite (for read and write actions)
You need to add this permission as an application permission and grant admin consent.
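For reference, here is a minimal Python sketch of the app-only flow, assuming the msal and requests packages; the tenant, client, secret, and drive values are placeholders, not from the original post:

import msal
import requests

# placeholder app registration values
app = msal.ConfidentialClientApplication(
    "<client-id>",
    authority="https://login.microsoftonline.com/<tenant-id>",
    client_credential="<client-secret>",
)

# app-only token; requires the Files.ReadWrite.All *application* permission
# with admin consent granted, otherwise the token lacks the needed role
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

# PATCH a range in the workbook (drive id and file path are placeholders)
url = ("https://graph.microsoft.com/v1.0/drives/<drive-id>/root:/demo.xlsx:"
       "/workbook/worksheets/Sheet1/range(address='A1')")
resp = requests.patch(
    url,
    headers={"Authorization": f"Bearer {token['access_token']}"},
    json={"values": [["hello"]]},
)
print(resp.status_code, resp.json())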

slashDB accessing a database via POST request and using APIkey yields 403 error

A question about security for the HTTP POST method:
I made a user called "MyAPP":
{
  "userdef": [
    "view",
    "create"
  ],
  "api_key": "dzn8k7hj2sdgddlvymfmefh1k2ddjl05",
  "user_id": "MyAPP",
  "name": "MyAPP",
  "creator": "admin",
  "edit": [],
  "dbdef": [
    "view",
    "create"
  ],
  "querydef": [
    "view",
    "create"
  ],
  "databases": {
    "Gaming": {
      "dbuser": "mydbuser_here",
      "dbpass": "mypass_here"
    }
  },
  "password": "$6$rounds=665736$x/Xp0k6Nj.5qzuM5$G.3w6Py1s.xZ83RHDU55qonNMpJe4Le8nD8PqjYKoOtgbab7T22knJPqwHspoT6BQxp.5gieLFuD0SdD9dyvi/",
  "email": "",
  "view": []
}
Then I wanted to issue a POST in order to execute a SQL pass-thru query such as this:
http:///query/InsertBestScore/Score/99/ScreenName/GollyGolly.xml?apikey=dzn8k7hj2sdgddlvymfmefh1k2ddjl05
Where I built a query and named it "InsertBestScore":
insert into Gaming.Leaderboard
(ScreenName, Score)
values
(:ScreenName, :Score);
If I run this via Postman using the POST method, I get an access denied, 403:
<?xml version="1.0" encoding="utf-8"?>
<SlashDB>
  <http_code>403</http_code>
  <description>Access was denied to this resource. Please log in with your username/password or resend your request with a valid API key.</description>
  <url_template>/query/InsertBestScore/Score/{Score}/ScreenName/{ScreenName}.xml</url_template>
</SlashDB>
Also, I would be calling this POST (or PUT) request from an application, in my case a Python program running within an AWS Lambda function.
Now, I came across this in the documentation:
Two-parameter API key
SlashDB also allows two-parameter credentials in this authentication method: app id and API key. This may come in handy when integrating with API management systems like 3Scale. By default the header and query string arguments would be:
• appid - identifies certain application
• apikey - secret for the application
Request with API key in header - Access granted
... however in the example above, I don't see where the appid comes into play.
Can you tell me how one would call the SlashDB endpoint, pass an API key, and ensure the user id is recognized as MyAPP?
So, to sum up, the Documentation mentions:
• Another application utilizes an API key to authenticate, which is sent with every request. The application is recognized as SlashDB user App2, which uses database login db_admin. Effectively this application can SELECT, UPDATE, INSERT and DELETE data.
So I want to do just what is in that bullet: identify myself as the user (instead of App2, I'm user MyAPP), and then use the dbuser and dbpass that were assigned for access to the "Gaming" database.
Ideas?
Make sure you've given user MyAPP permission to execute the query.
To do so:
login as admin,
go to Configure -> Queries,
open your query definition,
update the Execute field; it accepts comma-separated user ids.
OK, there are really two questions here:
Why was access denied?
What is the appid and how is it used?
Ad. 1: There are two authorization barriers that the request has to clear.
The first one is imposed by SlashDB in that the user executing the query must be listed in the Execute field on the query definition screen. This is done under Configure -> Queries -> "edit" button on your query.
The second barrier is imposed by the database. The SlashDB user who is executing your POST request must be mapped to a physical database user with INSERT privileges to the Gaming.Leaderboard table. It goes without saying that this database user must be associated with the database schema in which the table exists.
Ad. 2. To enable the appid, the user's API key must be composed of two parts separated by a colon ":". The first part will be interpreted as the appid and the second as the apikey.
To do that, use Configuration -> Users -> 'edit' button for the user in question. Then simply add a colon at the beginning of the API key and type in your desired appid to the left of the colon. The app will have to supply both keys to execute the request. Note that the names of those keys (i.e. appid) are configurable in /etc/slashdb/slashdb.ini.
The reasoning behind this feature is to facilitate API Management platforms, which can help with key management, especially when API will be exposed to third party developers.
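To make that concrete, here is a minimal Python sketch of both variants; the host, appid, and secret below are placeholders (the original post elided the host), and the header names are the defaults, configurable in /etc/slashdb/slashdb.ini:

import requests

BASE = "https://your-slashdb-host"  # placeholder host

# single-parameter variant: send the user's API key in the 'apikey' header
resp = requests.post(
    f"{BASE}/query/InsertBestScore/Score/99/ScreenName/GollyGolly.xml",
    headers={"apikey": "dzn8k7hj2sdgddlvymfmefh1k2ddjl05"},
)
print(resp.status_code, resp.text)

# two-parameter variant: if the stored key is "myappid:mysecret",
# send the parts as separate 'appid' and 'apikey' headers
resp = requests.post(
    f"{BASE}/query/InsertBestScore/Score/99/ScreenName/GollyGolly.xml",
    headers={"appid": "myappid", "apikey": "mysecret"},
)
print(resp.status_code, resp.text)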

Checking a SQL Azure Database is available from C# code

I scale up an Azure SQL Database with code like this:
ALTER DATABASE [MYDATABASE] Modify (SERVICE_OBJECTIVE = 'S1');
How is it possible to know from C# code when Azure has completed the job and the database is available again?
Checking the SERVICE_OBJECTIVE value is not enough; the process still continues afterwards.
Instead of performing this task in T-SQL, I would perform it from C# using a call to the REST API; you can find all of the details on MSDN.
Specifically, you should look at the Get Create or Update Database Status API method which allows you to call the following URL:
GET https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.Sql/servers/{server-name}/databases/{database-name}/operationResults/{operation-id}?api-version={api-version}
The JSON body allows you to pass the following parameters:
{
  "id": "{uri-of-database}",
  "name": "{database-name}",
  "type": "{database-type}",
  "location": "{server-location}",
  "tags": {
    "{tag-key}": "{tag-value}",
    ...
  },
  "properties": {
    "databaseId": "{database-id}",
    "edition": "{database-edition}",
    "status": "{database-status}",
    "serviceLevelObjective": "{performance-level}",
    "collation": "{collation-name}",
    "maxSizeBytes": {max-database-size},
    "creationDate": "{date-create}",
    "currentServiceLevelObjectiveId": "{current-service-id}",
    "requestedServiceObjectiveId": "{requested-service-id}",
    "defaultSecondaryLocation": "{secondary-server-location}"
  }
}
In the properties section, the serviceLevelObjective property is the one used to resize the database. To finish off, you can perform a GET on the Get Database API method and compare the currentServiceLevelObjectiveId and requestedServiceObjectiveId properties to confirm your command has succeeded.
Note: Don't forget to pass all of the common parameters required to make API calls in Azure.
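The question asks about C#, where the same flow would use HttpClient; as an illustration of the polling loop, here is a minimal Python sketch. The subscription, server, and database names are placeholders, and the ARM bearer token is assumed to be acquired separately (e.g. via azure-identity):

import time
import requests

SUB = "<subscription-id>"      # placeholder
RG = "<resource-group>"        # placeholder
SERVER = "<server-name>"       # placeholder
DB = "<database-name>"         # placeholder
TOKEN = "<arm-bearer-token>"   # assumed acquired separately

url = (f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
       f"/providers/Microsoft.Sql/servers/{SERVER}/databases/{DB}"
       "?api-version=2014-04-01")

# poll the Get Database API until the current objective matches the requested one
while True:
    props = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"}).json()["properties"]
    if props["currentServiceLevelObjectiveId"] == props["requestedServiceObjectiveId"]:
        break
    time.sleep(10)

print("Resize complete; database is now at", props["serviceLevelObjective"])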

Determine a collaborator's permissions (to an organization's repos) via the API?

In a repo that belongs to an organization (as opposed to an individual), collaborators are members of "teams", and teams can have three different kinds of permissions:
Pull Only
Push & Pull
Push, Pull, & Administrative
I can see a list of collaborators on a repo like this:
GET /repos/:owner/:repo/collaborators
...but it does not tell me what type of permission a collaborator has.
In this particular case, I am one of the collaborators; I do not own the repository.
Is there a way to programmatically determine the type of collaborator permission I have?
If you want to check whether you have push access to a GitHub repo, you can use the repo endpoint:
GET /repos/:owner/:repo
If your request includes your authorization information (either including your account credentials or an OAuth access token), the permissions field will give you the information you're looking for:
{
  "admin": false,
  "push": false,
  "pull": true
}
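As a quick illustration, here is a Python sketch of that call; the owner, repo, and token values are placeholders, and the token must belong to the account whose permissions you want to check:

import requests

OWNER, REPO = "octocat", "hello-world"   # placeholders
TOKEN = "<personal-access-token>"        # placeholder

resp = requests.get(
    f"https://api.github.com/repos/{OWNER}/{REPO}",
    headers={"Authorization": f"token {TOKEN}"},
)
# the permissions field reflects the authenticated user's access to this repo
print(resp.json().get("permissions"))    # e.g. {'admin': False, 'push': False, 'pull': True}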
Update June 2017: You now have nested teams.
However, the orgs/teams API mentions:
The REST API v3 does not yet fully support nested teams.
For example, the endpoints for List team members and List team repositories do not return results inherited from a nested team's structure.
Warning: since January 2020, the Team APIs are moving from a top-level path under /teams/:team_id to a scoped path under the organization that owns the team, with a path like /organizations/:org_id/team/:team_id.
See "Moving the teams API" by Dirkjan Bussink
Original answer 2016
The one query which should include your permission is mentioned in "List User Teams"
GET /user/teams
List all of the teams across all of the organizations to which the authenticated user belongs. This method requires user or repo scope when authenticating via OAuth.
The answer looks like:
Status: 200 OK
X-RateLimit-Limit: 5000
X-RateLimit-Remaining: 4999
[
  {
    "url": "https://api.github.com/teams/1",
    "name": "Owners",
    "id": 1,
    "permission": "admin", <============
    "members_count": 3,
    "repos_count": 10,
    "organization": {
      "login": "github",
      "id": 1,
      "url": "https://api.github.com/orgs/github",
      "avatar_url": "https://github.com/images/error/octocat_happy.gif"
    }
  }
]
You need to filter that result to match the team with the right repo; that way you get back the permission level associated with that repo.
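Here is a minimal Python sketch of that filtering, under the same placeholder owner/repo/token assumptions as the sketch above:

import requests

OWNER, REPO = "octocat", "hello-world"   # placeholders
TOKEN = "<personal-access-token>"        # placeholder; needs 'user' or 'repo' scope
HEADERS = {"Authorization": f"token {TOKEN}"}

# walk all teams the authenticated user belongs to
for team in requests.get("https://api.github.com/user/teams", headers=HEADERS).json():
    # check whether this team grants access to the target repo
    repos = requests.get(team["repositories_url"], headers=HEADERS).json()
    if any(r["full_name"] == f"{OWNER}/{REPO}" for r in repos):
        print(team["name"], "->", team["permission"])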