I scale up an Azure SQL Database with code like this:
ALTER DATABASE [MYDATABASE] Modify (SERVICE_OBJECTIVE = 'S1');
How can I know from C# code when Azure has completed the job and the database is available again?
Checking the SERVICE_OBJECTIVE value is not enough; the scaling process keeps running after the value changes.
Instead of performing this task in T-SQL, I would perform it from C# using a call to the REST API; you can find all of the details on MSDN.
Specifically, look at the Get Create or Update Database Status API method, which lets you call the following URL:
GET https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.Sql/servers/{server-name}/databases/{database-name}/operationResults/{operation-id}?api-version={api-version}
The JSON body returned contains the following parameters:
{
  "id": "{uri-of-database}",
  "name": "{database-name}",
  "type": "{database-type}",
  "location": "{server-location}",
  "tags": {
    "{tag-key}": "{tag-value}",
    ...
  },
  "properties": {
    "databaseId": "{database-id}",
    "edition": "{database-edition}",
    "status": "{database-status}",
    "serviceLevelObjective": "{performance-level}",
    "collation": "{collation-name}",
    "maxSizeBytes": {max-database-size},
    "creationDate": "{date-create}",
    "currentServiceLevelObjectiveId": "{current-service-id}",
    "requestedServiceObjectiveId": "{requested-service-id}",
    "defaultSecondaryLocation": "{secondary-server-location}"
  }
}
In the properties section, the serviceLevelObjective property is the one that drives the resize. To finish off, you can then perform a GET on the Get Database API method and compare the currentServiceLevelObjectiveId and requestedServiceObjectiveId properties: once they match, your command has completed successfully.
Note: Don't forget to pass all of the common parameters required to make API calls in Azure.
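If it helps, here is a minimal polling sketch in Python (the same loop translates directly to C# with HttpClient). The URL and property names mirror the Get Database API shown above; the api-version value and the bearer token acquisition are assumptions you should check against MSDN.

import time
import requests

# Placeholders below mirror the URL template above -- substitute real values.
SUBSCRIPTION = "{subscription-id}"
RESOURCE_GROUP = "{resource-group-name}"
SERVER = "{server-name}"
DATABASE = "{database-name}"
API_VERSION = "2014-04-01"  # assumption: check the current api-version on MSDN
TOKEN = "{azure-ad-bearer-token}"  # assumption: acquired for https://management.azure.com/

DB_URL = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION}"
    f"/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.Sql"
    f"/servers/{SERVER}/databases/{DATABASE}?api-version={API_VERSION}"
)

def wait_for_scale(poll_seconds=15):
    """Poll the Get Database API until the requested service objective is active."""
    headers = {"Authorization": f"Bearer {TOKEN}"}
    while True:
        props = requests.get(DB_URL, headers=headers).json()["properties"]
        if props["currentServiceLevelObjectiveId"] == props["requestedServiceObjectiveId"]:
            return props  # the scale operation has completed
        time.sleep(poll_seconds)

The loop simply re-reads the database resource until the current and requested service level objective IDs agree, which is exactly the comparison described above.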
I'm trying to update an existing spreadsheet located in the root of a OneDrive for Business.
I can run a GET and retrieve the file details.
I'm authorised as an Application as opposed to Delegated.
Whenever I run the following POST:
https://graph.microsoft.com/v1.0/drives/{drive-id}/root:/demo.xlsx
I get this error:
{
  "error": {
    "code": "AccessDenied",
    "message": "Could not obtain a WAC access token.",
    "innerError": {
      "request-id": "07422c42-930f-4329-809a-93103bff3ab4",
      "date": "2020-05-14T18:32:46"
    }
  }
}
I also have Files.ReadWrite.All granted to the Application.
I have been using the following documentation for help:
https://learn.microsoft.com/en-us/graph/api/resources/excel?view=graph-rest-1.0
Apparently sessions are not mandatory, and by default changes are persisted, which is what I want (https://learn.microsoft.com/en-us/graph/api/workbook-createsession?view=graph-rest-1.0&tabs=http).
I have run the GET and POST requests via Postman.
One of the following permission scopes is required to use the Excel resource:
• Files.Read (for read actions)
• Files.ReadWrite (for read and write actions)
You need to add this permission and grant admin consent.
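For what it's worth, here is a hedged sketch of the app-only flow in Python, using MSAL's client-credentials grant and the path-addressed createSession endpoint. The tenant, client, and drive values are placeholders, and persistChanges: true requests the persisted session the question mentions.

import msal
import requests

# Hypothetical placeholders -- substitute your tenant, app registration and drive.
TENANT_ID = "{tenant-id}"
CLIENT_ID = "{client-id}"
CLIENT_SECRET = "{client-secret}"
DRIVE_ID = "{drive-id}"

# Acquire an app-only token via the client credentials flow.
app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

# Create a workbook session against the file addressed by path.
resp = requests.post(
    f"https://graph.microsoft.com/v1.0/drives/{DRIVE_ID}"
    "/root:/demo.xlsx:/workbook/createSession",
    headers={"Authorization": f"Bearer {token['access_token']}"},
    json={"persistChanges": True},  # persisted session, as the question wants
)
print(resp.status_code, resp.json())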
We use an Azure Data Factory copy pipeline to transfer data from REST APIs to an Azure SQL Database, and it is doing some strange things. Because we loop over a set of APIs that need to be transferred, the mapping on the copy activity is left empty.
For one API, however, the automatic mapping goes wrong, even though the destination table is created with all the needed columns and correct data types based on the received metadata. When we run the pipeline for that specific API, the following message is shown:
{ "errorCode": "2200", "message": "ErrorCode=SchemaMappingFailedInHierarchicalToTabularStage,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Failed to process hierarchical to tabular stage, error message: Ticks must be between DateTime.MinValue.Ticks and DateTime.MaxValue.Ticks.\r\nParameter name: ticks,Source=Microsoft.DataTransfer.ClientLibrary,'", "failureType": "UserError", "target": "Copy data1", "details": [] }
As a test, we created the mapping for that API manually using the "Import schema" option on the Mapping page; there we saw that all the fields were correctly mapped. We executed the pipeline again using that mapping and everything worked fine.
But of course we don't want to use a manual mapping, because the pipeline runs in a loop over different APIs.
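The error indicates that some value in the REST payload converts to a .NET DateTime outside its valid range (years 0001-9999), for example a sentinel date whose timezone offset pushes it below DateTime.MinValue when normalized. A rough diagnostic sketch for hunting the offending field, assuming the payload is JSON with ISO-8601 date strings (the URL and the 1753 cutoff are illustrative heuristics, not part of the pipeline):

import re
import requests

URL = "https://example.com/api/data"  # hypothetical endpoint standing in for the failing API

# .NET DateTime only covers years 0001-9999; a value that normalizes outside
# that range (e.g. "0001-01-01T00:00:00+02:00" shifted to UTC) triggers the
# "Ticks must be between ..." error in the hierarchical-to-tabular stage.
ISO_DATE = re.compile(r"^(\d{4})-\d{2}-\d{2}")

def find_suspect_dates(obj, path="$"):
    """Walk the JSON payload and flag sentinel/minimal dates likely to underflow."""
    if isinstance(obj, dict):
        for key, val in obj.items():
            find_suspect_dates(val, f"{path}.{key}")
    elif isinstance(obj, list):
        for idx, val in enumerate(obj):
            find_suspect_dates(val, f"{path}[{idx}]")
    elif isinstance(obj, str):
        match = ISO_DATE.match(obj)
        if match and int(match.group(1)) < 1753:  # heuristic: pre-SQL-datetime years
            print(f"suspect date at {path}: {obj}")

find_suspect_dates(requests.get(URL).json())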
What are the correct parameters to enter in the JSON body when sending a POST API request?
This is what is given in their documentation:
{
  "name": "IBM Netezza Data Source",
  "connectionDetails": "{\"server\":\"MQPDAQ01.AM.LILLY.COM\", \"database\":\"GMDM_STG_QAR\"}",
  "type": "ODBC",
  "credentialDetails": {
    "credentials": "ABEF==",
    "encryptionAlgorithm": "RSA-OAEP",
    "encryptedConnection": "Encrypted|NotEncrypted",
    "privacyLevel": "None|Public|Organizational|Private",
    "credentialType": "Basic|Windows|Anonymous|…"
  }
}
Generally I would advise you to use ODBC to connect from Power BI to Netezza. As far as I know, the 'native' connector won't let you override the SQL, so you will never be able to GROUP BY or filter data on its way out of Netezza. That guarantees poor performance for the end user and a long-running query that ties up vital resources in Netezza for no benefit.
The ODBC 'wizard' fills out those JSON connection strings nicely, in my experience. :)
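As a sketch of how the body fits together in practice (field names taken from the documentation excerpt above; the gateway id, token, and credentials value are placeholders, and the pipe-separated values in the excerpt mean "pick one"): note that connectionDetails is itself a JSON-encoded string, so it is easiest to build it with json.dumps rather than hand-escaping the nested quotes.

import json
import requests

GATEWAY_ID = "{gateway-id}"          # placeholder: your gateway's id
ACCESS_TOKEN = "{aad-access-token}"  # placeholder: an Azure AD token for Power BI

# connectionDetails is a JSON string inside JSON; json.dumps handles the
# quote escaping that the raw documentation example glosses over.
connection_details = json.dumps(
    {"server": "MQPDAQ01.AM.LILLY.COM", "database": "GMDM_STG_QAR"}
)

body = {
    "name": "IBM Netezza Data Source",
    "connectionDetails": connection_details,
    "type": "ODBC",
    "credentialDetails": {
        "credentials": "ABEF==",            # placeholder: must be encrypted with the gateway's public key
        "encryptionAlgorithm": "RSA-OAEP",
        "encryptedConnection": "Encrypted",  # picked one of Encrypted|NotEncrypted
        "privacyLevel": "Organizational",    # picked one of None|Public|Organizational|Private
        "credentialType": "Basic",           # picked one of Basic|Windows|Anonymous|...
    },
}

# Assumption: posting to the gateway datasources endpoint of the Power BI REST API.
resp = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/gateways/{GATEWAY_ID}/datasources",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=body,
)
print(resp.status_code, resp.text)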
A question about securing the HTTP POST method:
I made a user called "MyAPP":
{
  "userdef": ["view", "create"],
  "api_key": "dzn8k7hj2sdgddlvymfmefh1k2ddjl05",
  "user_id": "MyAPP",
  "name": "MyAPP",
  "creator": "admin",
  "edit": [],
  "dbdef": ["view", "create"],
  "querydef": ["view", "create"],
  "databases": {
    "Gaming": {
      "dbuser": "mydbuser_here",
      "dbpass": "mypass_here"
    }
  },
  "password": "$6$rounds=665736$x/Xp0k6Nj.5qzuM5$G.3w6Py1s.xZ83RHDU55qonNMpJe4Le8nD8PqjYKoOtgbab7T22knJPqwHspoT6BQxp.5gieLFuD0SdD9dyvi/",
  "email": "",
  "view": []
}
Then I wanted to issue a POST in order to execute a SQL pass-thru query such as this:
http:///query/InsertBestScore/Score/99/ScreenName/GollyGolly.xml?apikey=dzn8k7hj2sdgddlvymfmefh1k2ddjl05
Where I built a query and named it "InsertBestScore":
insert into Gaming.Leaderboard
(ScreenName, Score)
values
(:ScreenName, :Score);
If I run this via Postman using the POST method, I get an access denied, 403:
<?xml version="1.0" encoding="utf-8"?>
<SlashDB>
  <http_code>403</http_code>
  <description>Access was denied to this resource. Please log in with your username/password or resend your request with a valid API key.</description>
  <url_template>/query/InsertBestScore/Score/{Score}/ScreenName/{ScreenName}.xml</url_template>
</SlashDB>
Also, I would be calling this POST (or PUT) request from an application; in my case, a Python program running inside an AWS Lambda function.
Now, I came across this in the documentation:
Two-parameter API key
SlashDB also allows two-parameter credentials in this authentication method: an app id and an api key. This can come in handy when integrating with API management systems like 3scale. By default, the header and query string arguments would be:
• appid - identifies certain application
• apikey - secret for the application
Request with API key in header - Access granted
However, in the example above I don't see where the appid comes into play.
Can you tell me how one would call the SlashDB endpoint, pass an API key, and ensure the user is identified as MyAPP?
So, to sum up, the documentation mentions:
• Another application utilizes an API key to authenticate, which is sent with every request. The application is recognized as SlashDB user App2, which uses database login db_admin. Effectively this application can SELECT, UPDATE, INSERT and DELETE data.
So I want to do just what is in that bullet: identify myself as the user (instead of App2, I'm user MyAPP), and then use the dbuser and dbpass that were assigned to access the "Gaming" database.
Ideas?
Make sure you've given user MyAPP permission to execute the query.
To do so:
• log in as admin,
• go to Configure -> Queries,
• open your query definition,
• update the Execute field; it accepts comma-separated user IDs.
OK, there are really two questions here:
1. Why was access denied?
2. What is the appid and how do you use it?
Ad. 1: There are two authorization barriers that the request has to clear.
The first one is imposed by SlashDB in that the user executing the query must be listed in the Execute field on the query definition screen. This is done under Configure -> Queries -> "edit" button on your query.
The second barrier is imposed by the database. The SlashDB user who is executing your POST request must be mapped to a physical database user with INSERT privileges to the Gaming.Leaderboard table. It goes without saying that this database user must be associated with the database schema in which the table exists.
Ad. 2: To enable the appid, the user's API key must be composed of two parts separated by a colon (":"). The first part will be interpreted as the appid and the second as the apikey.
To do that, use Configuration -> Users -> 'edit' button for the user in question. Then simply add a colon at the beginning of the API key and type your desired appid to the left of the colon. The app will have to supply both keys to execute the request. Note that the names of those keys (i.e. appid and apikey) are configurable in /etc/slashdb/slashdb.ini.
The reasoning behind this feature is to facilitate API management platforms, which can help with key management, especially when the API will be exposed to third-party developers.
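Since the question mentions calling this from Python in Lambda, here is a minimal sketch of both styles; the host and the two-part key are hypothetical stand-ins for your own values:

import requests

BASE = "https://your-slashdb-host"  # hypothetical host; the question elides it
QUERY = "/query/InsertBestScore/Score/99/ScreenName/GollyGolly.xml"

# Single-parameter style: the apikey alone identifies the SlashDB user (MyAPP),
# which in turn maps to the dbuser/dbpass configured for the Gaming database.
resp = requests.post(BASE + QUERY,
                     params={"apikey": "dzn8k7hj2sdgddlvymfmefh1k2ddjl05"})

# Two-parameter style: if the stored key were "myapp:secretpart", the request
# supplies both; the parameter names are configurable in /etc/slashdb/slashdb.ini.
resp2 = requests.post(BASE + QUERY,
                      headers={"appid": "myapp", "apikey": "secretpart"})

print(resp.status_code, resp2.status_code)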
I'm using wit.ai for a bot and I think it's amazing. However, I must provide the customer with screens in my web app to train and manage the app, and here I found a big problem (or maybe I'm just lost). The documentation of the REST API is not enough to design a client that acts like the wit console (not even close). It's like a tutorial on which endpoints you can hit and an overview of the parameters, but with no clear explanation of the structure of the response.
For example, there is no endpoint to get the insights edge. Most importantly, there is no clear documentation of the response structure when hitting the message endpoint (i.e. the structure of the returned entities: are they prebuilt or not, and if they are, is the value a string, an object, or an array, and what might the object contain [e.g. datetime]). There is also the problem of the deprecated guide versus the new guide (the new guide should be done and complete by now). I'm building parts of the code based on my own testing. Sometimes when I test something new (like adding a range in the datetime entity instead of just a value), I get an error when I try to set the values for the user because I haven't parsed the response correctly, and the new information sometimes forces me to modify the DB structure at my end.
So, the bottom line: is there a complete reference against which I can implement a full client in my web app (my web app is in Java, by the way, and I couldn't find a client library that handles the latest version of the API)? Again, the tool is AWESOME, but the documentation is not enough, or maybe I'm missing something.
The documentation is not enough, of course, but I think it's pretty straightforward. From what I read, the response structure is documented under "Return the meaning of a sentence".
The response is in JSON format, so you need to decode it first.
Example Request:
$ curl -XGET 'https://api.wit.ai/message?v=20170307&q=how%20many%20people%20between%20Tuesday%20and%20Friday' \
  -H "Authorization: Bearer $TOKEN"
Example Response:
{
  "msg_id": "387b8515-0c1d-42a9-aa80-e68b66b66c27",
  "_text": "how many people between Tuesday and Friday",
  "entities": {
    "metric": [
      {
        "metadata": "{'code': 324}",
        "value": "metric_visitor",
        "confidence": 0.9231
      }
    ],
    "datetime": [
      {
        "value": {
          "from": "2014-07-01T00:00:00.000-07:00",
          "to": "2014-07-02T00:00:00.000-07:00"
        },
        "confidence": 1
      },
      {
        "value": {
          "from": "2014-07-04T00:00:00.000-07:00",
          "to": "2014-07-05T00:00:00.000-07:00"
        },
        "confidence": 1
      }
    ]
  }
}
You can read more about the response structure under "Return the meaning of a sentence".
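Based only on the example response above (my own interpretation, not an official schema): each key under entities maps to a list of candidates, and a datetime candidate's value can be either a plain ISO-8601 string or a from/to interval object. Here is a small decoding sketch in Python; the same shape applies in Java with any JSON library:

import json

def parse_datetimes(raw_response):
    """Normalize datetime entities from /message into (start, end) pairs.

    Handles both shapes seen in practice: a single ISO-8601 string value,
    or an interval object with "from"/"to" (as in the example above).
    """
    response = json.loads(raw_response)
    pairs = []
    for candidate in response.get("entities", {}).get("datetime", []):
        value = candidate["value"]
        if isinstance(value, dict):      # interval shape
            pairs.append((value.get("from"), value.get("to")))
        else:                            # single-value shape
            pairs.append((value, value))
    return pairs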