Log Analytics - Pricing tier doesn't match the subscription's billing model

I have a Log Analytics resource set up on the PerGB pricing tier and I am trying to deploy a solution that uses an Azure Automation account.
When deploying, I see the following error on my Log Analytics resource: "Pricing tier doesn't match the subscription's billing model."
It is my understanding that something recently changed in OMS that may cause this. I have already tried to install the Upgrade Readiness solution, but that didn't solve the problem.

Use the SKU PerGB2018 instead of Standard or Free.
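A minimal sketch of setting that SKU with the Azure CLI when (re)creating the workspace; the resource group, workspace name, and location are placeholders, and the --sku flag is assumed to be available in your az version (in an ARM template the same value goes into the workspace's sku name):
az monitor log-analytics workspace create --resource-group <MyResourceGroup> --workspace-name <MyWorkspace> --location <MyLocation> --sku PerGB2018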

Related

Google Cloud Dataflow permission issues

Beginner in GCP here. I'm testing GCP Dataflow as part of an IoT project to move data from Pub/Sub to BigQuery. I created a Dataflow job from the topic's page using the "Export to BigQuery" button.
Apart from the issue that I can't delete a dataflow, I am hitting the following issue:
As soon as the dataflow starts, I get the error:
Workflow failed. Causes: There was a problem refreshing your credentials. Please check: 1. Dataflow API is enabled for your project. 2. Make sure both the Dataflow service account and the controller service account have sufficient permissions. If you are not specifying a controller service account, ensure the default Compute Engine service account [PROJECT_NUMBER]-compute@developer.gserviceaccount.com exists and has sufficient permissions. If you have deleted the default Compute Engine service account, you must specify a controller service account. For more information, see: https://cloud.google.com/dataflow/docs/concepts/security-and-permissions#security_and_permissions_for_pipelines_on_google_cloud_platform. , There is no cloudservices robot account for your project. Please ensure that the Dataflow API is enabled for your project.
Here's where it's funny:
The Dataflow API is definitely enabled, since I am looking at this from the Dataflow section of the console.
Dataflow is using the default Compute Engine service account, which exists. The page the error points at says that this account is created automatically and has broad access to the project's resources. Well, does it?
Dataflow jobs elude me. How can I restart, edit, or delete a Dataflow job?
Please verify the checklist below:
The Dataflow API should be enabled; check under APIs & Services. If you just enabled it, wait some time for the change to propagate.
The [project-number]-compute@developer.gserviceaccount.com and service-[project-number]@dataflow-service-producer-prod.iam.gserviceaccount.com service accounts should both exist. If dataflow-service-producer-prod did not get created, you can contact Dataflow support, or you can create it yourself and assign it the Cloud Dataflow Service Agent role. If you are using a Shared VPC, create it in the host project and assign the Compute Network User role. A CLI sketch follows below.
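A minimal sketch of how you might verify and repair this with gcloud; the project ID and project number below are placeholders:
gcloud services enable dataflow.googleapis.com --project=MY_PROJECT_ID
# Confirm which members hold which roles in the project
gcloud projects get-iam-policy MY_PROJECT_ID --flatten="bindings[].members" --format="table(bindings.role, bindings.members)"
# Re-create the Dataflow service agent if it is missing (beta command)
gcloud beta services identity create --service=dataflow.googleapis.com --project=MY_PROJECT_ID
# Grant the Cloud Dataflow Service Agent role to that service agent
gcloud projects add-iam-policy-binding MY_PROJECT_ID --member="serviceAccount:service-MY_PROJECT_NUMBER@dataflow-service-producer-prod.iam.gserviceaccount.com" --role="roles/dataflow.serviceAgent"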

Azure SQL DB Error, This location is not available for subscription

I have a pay-as-you-go subscription and I am creating an Azure SQL server.
While adding the server, on selecting a location, I am getting this error:
This location is not available for subscriptions
Please help.
There's an actual issue with Microsoft's servers: they have too many Azure SQL database creation requests and are currently trying to handle the situation. This seems to affect all types of subscriptions, including paid ones. I have a Visual Studio Enterprise subscription and I get the same error ("This location is not available for subscriptions") for all locations.
See following Microsoft forum thread for more information:
https://social.msdn.microsoft.com/Forums/en-US/ac0376cb-2a0e-4dc2-a52c-d986989e6801/ongoing-issue-unable-to-create-sql-database-server?forum=ssdsgetstarted
As the other answer states, this is a (poorly handled) restriction on Azure as of now, and there seems to be no ETA on when it will be lifted.
In the meantime, you can still get a SQL database up and running in Azure if you don't mind doing a bit of extra work and don't want to wait: just set up a Docker instance and put MSSQL on it!
In the Azure Portal, create a container instance. Use the following Docker image: https://hub.docker.com/r/microsoft/mssql-server-windows-express/
While creating it, you might have to set the ACCEPT_EULA environment variable to "Y".
After it boots up (10-20 minutes for me), connect to it in the portal with the sqlcmd command and set up your login. In my case, I just needed a quick demo DB, so I took the sa login and ran "alter login SA with password ='{insert your password}'" and "alter login SA enable". See here for details: https://learn.microsoft.com/en-us/sql/t-sql/statements/alter-login-transact-sql?view=sql-server-ver15#examples
And voilà, you have a SQL instance on Azure. Although it's unmanaged and poorly monitored, it might be enough for a short-term solution. The IP address of the Docker instance can be found in the Properties section of the container instance blade. A sketch of the login setup follows below.
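A minimal sketch of that login step, run from the container's console (the password is a placeholder; sqlcmd with no arguments uses Windows authentication against the local instance inside the Windows container):
sqlcmd
ALTER LOGIN sa WITH PASSWORD = '<YourStrongPassword>';
ALTER LOGIN sa ENABLE;
GO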
Maybe you can reference this blog: Azure / SQL Server / This location is not available for subscription. It covers the same error you are seeing.
Run this PowerShell command to check whether the location you chose is available:
Get-AzureRmLocation | select displayname
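If you are on the newer Az PowerShell module rather than AzureRM, the equivalent check (a minimal sketch) would be:
Get-AzLocation | Select-Object DisplayName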
If the location is available, the best way to resolve this issue is to contact Azure support and have it enabled for you. You can do this for free using the support page in your Azure Portal.
They will contact you and help you solve it.
Hope this helps.
This is how I solved it myself. Let me describe the problem first, then the solution.
Problem: I created a brand new free Azure account (it comes with $250 free credit) for a client, then upgraded it to a pay-as-you-go subscription. I was unable to create an Azure SQL DB; the error was 'location is not available'.
How I solved it: I created another pay-as-you-go subscription in the same account. Guess what: I was able to create the SQL DB in my new subscription right away. Then I deleted the first subscription from my account. And yes, I lost the free credit.
If your situation is similar to mine, you can try this.
PS: I have 3 clients with their own Azure accounts, and I was able to create SQL DBs in all of them. I think the problem arises only for free accounts and/or free accounts that were upgraded to pay-as-you-go.
EDIT - 2020/04/22
This is still an ongoing problem as of today, but I was told by Microsoft support that on April 24th a new Azure cluster will become available in Europe, so it might finally become possible to deploy SQL Server instances on free accounts there.
Deploy a Docker container running SQL Server
To complement @Filip's answer, and given that the problem still remains with Azure SQL Server, a Docker container running SQL Server is a great alternative. You can set one up very easily by running the following command in the Cloud Shell:
az container create --image microsoft/mssql-server-windows-express --os-type Windows --name <ContainerName> --resource-group <ResourceGroupName> --cpu <NumberOfCPUs> --memory <Memory> --port 1433 --ip-address public --environment-variables ACCEPT_EULA=Y SA_PASSWORD=<Password> MSSQL_PID=Developer --location <SomeLocationNearYou>
<ContainerName> : A container name of your choice
<ResourceGroupName> : The name of a previously created Resource Group
<NumberOfCPUs> : Number of CPUs you want to use
<Memory> : Memory you want to use
<Password> : Your password
<SomeLocationNearYou> : A location near you, for example westeurope
Access SQL Server
Once the container instance is deployed, you will find an IP address in the Overview. Use that IP address and the password you chose in the az container command to connect to the SQL Server, either with Microsoft's SSMS or with the sqlcmd utility (see the sketch below).
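For example, a minimal sqlcmd invocation from a machine with the SQL command-line tools installed (the IP address and password are placeholders):
sqlcmd -S <PublicIPAddress>,1433 -U sa -P <Password>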
Some documentation regarding the image I have used can be found here.
More information on the command I have used here.

Azure DevOps work item migration from one account to another

We have been trying to migrate work items from one Azure DevOps account to our enterprise Azure DevOps account using the vsts-work-item-migrator mentioned below, but the "Discussion" field information is not getting migrated. Is this expected behavior, or are we missing something here?
https://mohamedradwan.com/2018/04/12/migrating-tfs-and-vsts-work-items-using-vsts-work-item-migrator/

Azure Storage account opening issue

I have RBAC access to the Azure portal. Previously I was able to access the storage account and blobs successfully, but suddenly I am unable to access any container or blob. I can see the storage account listed for me, but I cannot access it.
I get the error "Something went wrong while getting your resources. Please try again later." I tried refreshing, clearing the cache, and signing in again, and I am still facing the same issue.
In the portal I get the notification "Refresh the browser to try again.
Microsoft_Azure_Storage extension failed to load"
There is no network issue, as I can access all other resources from the portal at the same time.
Also, there is no unauthorized-access notification.
I am unable to figure out what the issue is.
Any help is highly appreciated.
I'd recommend checking the activity logs for recent RBAC changes, in case someone changed access to the containers/blobs; here's how: https://learn.microsoft.com/en-us/azure/role-based-access-control/change-history-report
I'd also recommend checking the list of roles/access you currently have by following these steps (a CLI sketch follows below): https://learn.microsoft.com/en-us/azure/role-based-access-control/change-history-report
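For example, a quick way to list your current role assignments with the Azure CLI (the assignee and scope below are placeholders):
az role assignment list --assignee <your-user@domain.com> --all --output table
az role assignment list --assignee <your-user@domain.com> --scope /subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account> --output table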
If you are still experiencing the issue and you have a co-admin, check whether they are facing the same issue. If they are, please send your subscription ID and a link to this thread to AzCommunity[at]microsoft.com and include "attn Adam" in the subject.
I'll enable a free support ticket to quickly escalate it.
I had the same issue while I was using a VPN from Bangladesh. When I disconnected from the VPN, it worked fine.

Google ML Engine - Unable to log objective metric due to exception <HttpError 403>

I am running a TensorFlow application on the Google ML Engine with hyper-parameter tuning and I've been running into some strange authentication issues.
My Data and Permissions Setup
My trainer code supports two ways of obtaining input data for my model:
Getting a table from BigQuery.
Reading from a .csv file.
For my IAM permissions, I have two members set up:
My user account:
Assigned to the following IAM roles:
Project Owner (roles/owner)
BigQuery Admin (roles/bigquery.admin)
Credentials were created automatically when I used gcloud auth application-default login
A service account:
Assigned to the following IAM roles:
BigQuery Admin (roles/bigquery.admin)
Storage Admin (roles/storage.admin)
PubSub Admin (roles/pubsub.admin)
Credentials were downloaded to a .json file when I created it in the Google Cloud Platform interface.
The Problem
When I run my trainer code on the Google ML Engine using my user account credentials and reading from a .csv file, everything works fine.
However, if I try to get my data from BigQuery, I get the following error:
Forbidden: 403 Insufficient Permission (GET https://www.googleapis.com/bigquery/v2/projects/MY-PROJECT-ID/datasets/MY-DATASET-ID/tables/MY-TABLE-NAME)
This is the reason why I created a service account, but the service account has a separate set of issues. When using the service account, I am able to read from both a .csv file and from BigQuery, but in both cases, I get the following error at the end of each trial:
Unable to log objective metric due to exception <HttpError 403 when requesting https://pubsub.googleapis.com/v1/projects/MY-PROJECT-ID/topics/ml_MY-JOB-ID:publish?alt=json returned "User not authorized to perform this action.">.
This doesn't cause the job to fail, but it prevents the objective metric from being recorded, so the hyper-parameter tuning does not provide any helpful output.
The Question
I'm not sure why I'm getting these permission errors when my IAM members are assigned to what I'm pretty sure are the correct roles.
My trainer code works in every case when I run it locally (although PubSub is obviously not being used when running locally), so I'm fairly certain it's not a bug in the code.
Any suggestions?
Notes
There was one point at which my service account was getting the same error as my user account when trying to access BigQuery. The solution I stumbled upon is a strange one. I decided to remove all roles from my service account and add them again, and this fixed the BigQuery permission issue for that member.
Thanks for the very detailed question.
To explain what happened here: in the first case, Cloud ML Engine used an internal service account (the one that is added to your project with the Cloud ML Service Agent role). Due to some internal security considerations, that service account is restricted from accessing BigQuery, hence the first 403 error that you saw.
Now, when you replaced the machine credentials with your own service account using the .json credentials file, that restriction went away. However, your service account didn't have access to all of the internal systems, such as the Pub/Sub service used internally by the hyperparameter tuning mechanism, hence the Pub/Sub error in the second case.
There are a few possible solutions to this problem:
On the Cloud ML Engine side, we're working on better out-of-the-box BigQuery support, although we don't have an ETA at this point.
Your approach with a custom service account might work as a short-term solution as long as you don't use hyperparameter tuning. However, this is obviously fragile because it depends on implementation details of Cloud ML Engine, so I wouldn't recommend relying on it long-term.
Finally, consider exporting the data from BigQuery to GCS first and using GCS to read the training data (see the sketch after this list). This scenario is well supported in Cloud ML Engine. Besides, you'll get performance gains on large datasets compared to reading from BigQuery directly: the current implementation of BigQueryReader in TensorFlow has suboptimal performance characteristics, which we're also working to improve.
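A minimal sketch of that export with the bq command-line tool (the project, dataset, table, and bucket names are placeholders):
# Export the BigQuery table to sharded CSV files on GCS, then point the trainer at gs://MY-BUCKET/training-data/
bq extract --destination_format=CSV 'MY-PROJECT-ID:MY_DATASET.MY_TABLE' gs://MY-BUCKET/training-data/part-*.csv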