Need PCS_AAD_APPID and more to run Azure IoT storage-adapter microservice locally - azure-storage

I'm trying out the Azure IoT Accelerators Remote Monitoring solution, following the instructions here:
https://learn.microsoft.com/en-us/azure/iot-accelerators/iot-accelerators-remote-monitoring-create-simulated-device
As part of that, I need to run the storage adapter microservice locally, and it seems that requires three environment variables whose values I don't know how to find:
PCS_AAD_APPID = { Azure service principal id }
PCS_AAD_APPSECRET = { Azure service principal secret }
PCS_KEYVAULT_NAME = { Name of Key Vault resource that stores settings and configuration }
I can create the environment variables, but I have no idea what values to put in them. Anyone?
FYI, right now when I'm running the storage adapter microservice locally, I get this error:
"{"Name":"StorageAdapter","Status":{"IsHealthy":false,"Message":"Storage check failed"}..."
...which is preceded by a caught exception with this message:
"AuthKey = '((Microsoft.Azure.Documents.Client.DocumentClient)this.client).AuthKey' threw an exception of type 'System.ArgumentNullException'"

Related

Calling an API that runs on another GCP project with Airflow Composer

I'm running a task with SimpleHTTPOperator on Airflow Composer. The task calls an API running on a Cloud Run service that lives in another project. This means I need a service account in order to access that project.
When I try to call the API, I get the following error:
{secret_manager_client.py:88} ERROR - Google Cloud API Call Error (PermissionDenied): No access for Secret ID airflow-connections-call_to_api.
Did you add 'secretmanager.versions.access' permission?
What's the solution to this issue?
Context: Cloud Composer and Cloud Run live in two different projects.
This specific error is unrelated to the cross-project scenario. It seems you have configured Composer/Airflow to use Secret Manager as the primary backend for connections and variables. However, according to the error message, the service account used by Composer is missing the secretmanager.versions.access permission needed to read the connection (call_to_api) you have configured for the API; that permission is included in the roles/secretmanager.secretAccessor role.
Check this part of the documentation.
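As a sketch of the fix (the service account email below is a placeholder for the one your Composer environment actually runs as), the role can be granted on the secret itself with gcloud:
gcloud secrets add-iam-policy-binding airflow-connections-call_to_api \
    --member="serviceAccount:composer-sa@YOUR_PROJECT.iam.gserviceaccount.com" \
    --role="roles/secretmanager.secretAccessor"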

How to get .Net Core 3.1 Azure WebJob to read the AzureWebJobsStorage connection string from the Connected Services setup?

I'm building a WebJob for Azure to run in an App Service using .Net Core 3.1.
The WebJob will be triggered via Timers (it's basically a cronjob).
Timer triggers require the AzureWebJobsStorage connection string because the timer schedule status is persisted in storage.
When deployed to Azure App Service, I want the WebJob to read the AzureWebJobsStorage value from the properties on the App Service.
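For context, the job is roughly this shape (a minimal sketch with an illustrative schedule, not the exact code):
// Minimal WebJobs SDK 3.x host with a timer-triggered function (illustrative sketch)
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

public class Program
{
    public static void Main()
    {
        var builder = new HostBuilder()
            .ConfigureWebJobs(b =>
            {
                b.AddAzureStorageCoreServices(); // resolves the AzureWebJobsStorage connection from configuration
                b.AddTimers();                   // enables TimerTrigger
            });
        builder.Build().Run();
    }
}

public class Functions
{
    // Runs every 5 minutes; the schedule status is stored in the AzureWebJobsStorage account
    public static void Run([TimerTrigger("0 */5 * * * *")] TimerInfo timerInfo, ILogger logger)
    {
        logger.LogInformation("Timer fired at {now}", System.DateTimeOffset.Now);
    }
}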
I have a Resource Manager template that deploys my infrastructure and sets the connection string on my App Service resource:
"connectionStrings": [
{
"name": "AzureWebJobsStorage",
"value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('_StoreAccountName'), ';AccountKey=', listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('_StoreAccountName')), '2019-04-01').keys[0].value,';EndpointSuffix=core.windows.net')]"
}
],
When testing my WebJob locally, I need to set that AzureWebJobsStorage value so that my local builds can connect to storage.
Since I re-deploy the infrastructure all the time as I make tweaks and changes to it, I do not want to manually maintain the long connection string in my appsettings.json or a local.settings.json file.
In Visual Studio, in theory, I can add a Service Dependency to the project for Azure Storage, and that will store the connection string in my local Secrets.json file. Then, when I redeploy the infrastructure, I can use the Visual Studio UI to edit the connection and re-connect it to the newly deployed storage account (i.e. it will create and update the connection string without me having to do it manually).
When I add Azure Storage as a connected service, Visual Studio adds a line like this in my Secrets.json file:
"ConnectionStrings:<LABEL>": "DefaultEndpointsProtocol=https;AccountName=<LABEL>;AccountKey=_____________;BlobEndpoint=https://<LABEL>.blob.core.windows.net/;TableEndpoint=https://<LABEL>.table.core.windows.net/;QueueEndpoint=https://<LABEL>.queue.core.windows.net/;FileEndpoint=https://<LABEL>.file.core.windows.net/",
and this in my ServiceDependencies/serviceDependencies.local.json:
"storage1": {
"resourceId": "/subscriptions/[parameters('subscriptionId')]/resourceGroups/[parameters('resourceGroupName')]/providers/Microsoft.Storage/storageAccounts/<LABEL>",
"type": "storage.azure",
"connectionId": "<LABEL>",
"secretStore": "LocalSecretsFile"
}
and this in my ServiceDependencies/serviceDependencies.json:
"storage1": {
"type": "storage",
"connectionId": "<LABEL>"
}
Where <LABEL> is the name of the Storage Account (in both JSON snippets).
When I run the WebJob locally, it loads the appsettings.json, appsettings.Development.json, secrets.json, and Environment Variables into the IConfiguration.
However, when I run the WebJob locally it dies with:
Microsoft.Azure.WebJobs.Host.Listeners.FunctionListenerException: The listener for function 'Functions.Run' was unable to start.
---> System.ArgumentNullException: Value cannot be null. (Parameter 'connectionString')
at Microsoft.Azure.Storage.CloudStorageAccount.Parse(String connectionString)
at Microsoft.Azure.WebJobs.Extensions.Timers.StorageScheduleMonitor.get_TimerStatusDirectory() in C:\azure-webjobs-sdk-extensions\src\WebJobs.Extensions\Extensions\Timers\Scheduling\StorageScheduleMonitor.cs:line 77
at Microsoft.Azure.WebJobs.Extensions.Timers.StorageScheduleMonitor.GetStatusBlobReference(String timerName) in C:\azure-webjobs-sdk-extensions\src\WebJobs.Extensions\Extensions\Timers\Scheduling\StorageScheduleMonitor.cs:line 144
at Microsoft.Azure.WebJobs.Extensions.Timers.StorageScheduleMonitor.GetStatusAsync(String timerName) in C:\azure-webjobs-sdk-extensions\src\WebJobs.Extensions\Extensions\Timers\Scheduling\StorageScheduleMonitor.cs:line 93
at Microsoft.Azure.WebJobs.Extensions.Timers.Listeners.TimerListener.StartAsync(CancellationToken cancellationToken) in C:\azure-webjobs-sdk-extensions\src\WebJobs.Extensions\Extensions\Timers\Listener\TimerListener.cs:line 99
at Microsoft.Azure.WebJobs.Host.Listeners.SingletonListener.StartAsync(CancellationToken cancellationToken) in C:\projects\azure-webjobs-sdk-rqm4t\src\Microsoft.Azure.WebJobs.Host\Singleton\SingletonListener.cs:line 72
at Microsoft.Azure.WebJobs.Host.Listeners.FunctionListener.StartAsync(CancellationToken cancellationToken, Boolean allowRetry) in C:\projects\azure-webjobs-sdk-rqm4t\src\Microsoft.Azure.WebJobs.Host\Listeners\FunctionListener.cs:line 69
I have confirmed that if I add the ConnectionStrings:AzureWebJobsStorage value to my appsettings.json then the program runs fine.
So I know it's an issue with the loading of the AzureWebJobsStorage value.
Has anyone figured out how to get an Azure WebJob, running locally, to properly read the connection string that Visual Studio configures when adding the Azure Storage as a Connected Service?
What's the point of adding the Connected Service to the WebJob if it won't read the connection string?
(Note: I realize the WebJobs docs https://learn.microsoft.com/en-us/azure/app-service/webjobs-sdk-how-to#webjobs-sdk-versions state that "Because version 3.x uses the default .NET Core configuration APIs, there is no API to change connection string names," but it's unclear to me whether that means the underlying WebJobs code also refuses to look at the Connected Services setup or whether I'm just missing something.)
I found a work-around, but I don't like it: at the end of my ConfigureAppConfiguration code I check whether a ConnectionStrings:AzureWebJobsStorage value exists and, if not, read the one Visual Studio wrote to secrets.json and set ConnectionStrings:AzureWebJobsStorage to that value.
private const string baseAppSettingsFilename = "appsettings.json";
private const string defaultStorageAccountName = "<LABEL>";
...
IHostBuilder builder = new HostBuilder();
...
builder.ConfigureAppConfiguration(c =>
{
    c.AddJsonFile(
        path: baseAppSettingsFilename.Replace(".json", $".{Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT")}.json"),
        optional: true,
        reloadOnChange: true);

    if (Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT") == "Development")
    {
        c.AddUserSecrets<Program>();
    }

    // Add Environment Variables even though they are already added because we want them
    // to take priority over anything set in JSON files
    c.AddEnvironmentVariables();

    IConfiguration config = c.Build();
    if (string.IsNullOrWhiteSpace(config["ConnectionStrings:AzureWebJobsStorage"]))
    {
        string storageConnectionString = config[$"ConnectionStrings:{defaultStorageAccountName}"];
        if (string.IsNullOrWhiteSpace(storageConnectionString))
        {
            throw new ConfigurationErrorsException($"Could not find a ConnectionString for Azure Storage account in ConnectionStrings:AzureWebJobsStorage or ConnectionStrings:{defaultStorageAccountName}");
        }

        c.AddInMemoryCollection(new Dictionary<string, string>()
        {
            { "ConnectionStrings:AzureWebJobsStorage", storageConnectionString }
        });
    }
});
This seems exceedingly dumb, but even looking at the Azure SDK source code I'm thinking it's just hard-coded to a single key name and the Connected Services configuration in Visual Studio is simply not supported: https://github.com/Azure/azure-webjobs-sdk-extensions/blob/afb81d66749eb7bc93ef71c7304abfee8dbed875/src/WebJobs.Extensions/Extensions/Timers/Scheduling/StorageScheduleMonitor.cs#L77
I just ran into a similar problem where VS2019 automatically configured Function and Function1 with Connection = "ConnectionStrings:AzureWebJobsStorage" and it couldn't find that. Simply changing it to Connection = "AzureWebJobsStorage" worked like a charm.
FYI - I also had to change BlobTrigger("Path/{name}"... to BlobTrigger("path/{name}"...
re: Microsoft.Azure.StorageException: The specified resource name contains invalid characters (blob container names must be lowercase)
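For illustration, the binding that ended up working looks roughly like this (the container path and function name are placeholders, not from the original project):
using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public class Functions
{
    // Connection names the bare configuration key; per the above, "ConnectionStrings:AzureWebJobsStorage"
    // is not resolved, while "AzureWebJobsStorage" is. The container segment of the path must be lowercase.
    public static void Run(
        [BlobTrigger("path/{name}", Connection = "AzureWebJobsStorage")] Stream blob,
        string name,
        ILogger logger)
    {
        logger.LogInformation("Blob trigger fired for {name}", name);
    }
}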

NServiceBus endpoint is not starting on Azure Service Fabric local cluster

I have a .NET Core stateless Web API service running inside a Service Fabric local cluster.
return Endpoint.Start(endpointConfiguration).GetAwaiter().GetResult();
When I try to start the NServiceBus endpoint, I get this exception:
Access to the path 'C:\SfDevCluster\Data_App_Node_0\AppType_App10\App.APIPkg.Code.1.0.0.diagnostics' is denied.
How can this be solved? Visual Studio is running as administrator.
The issue happens because your application is not supposed to write to the folder it is trying to write to.
The package folder is used to store your application binaries and can be recreated dynamically whenever an application is hosted on the node.
Also, the binaries are reused by multiple service instances running on the same node, so different instances might compete for the same files.
You should instead instruct your application to write to the work folder:
public Stateless1(StatelessServiceContext context) : base(context)
{
    string workdir = context.CodePackageActivationContext.WorkDirectory;
}
The code above will give you a path like this:
'C:\SfDevCluster\Data_App_Node_0\AppType_App10\App.APIPkg.Code.1.0.0.diagnostics\work'
This folder is dynamic and will change depending on the node or instance your application is running on; when it is created, your application should already have permission to write to it.
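If the file being written is NServiceBus's startup diagnostics output (the .diagnostics path in your error), one way to redirect it into the work folder is a sketch like the following, assuming a recent NServiceBus version that exposes SetDiagnosticsPath (the endpoint name is a placeholder):
using System.IO;
using NServiceBus;

// Inside the service, where 'context' is the StatelessServiceContext shown above
var endpointConfiguration = new EndpointConfiguration("MyEndpoint");
string workdir = context.CodePackageActivationContext.WorkDirectory;
endpointConfiguration.SetDiagnosticsPath(Path.Combine(workdir, "diagnostics"));
var endpointInstance = Endpoint.Start(endpointConfiguration).GetAwaiter().GetResult();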
For more info, see:
how-do-i-get-files-into-the-work-directory-of-a-stateless-service?forum=AzureServiceFabric
Alternatively, open the folder's Properties > Security tab, select ServiceFabricAllowedUsers, and add Write permission.

Stackdriver Node.js Logging not showing up

I have a Node.js application, running inside of a Docker container and logging events using Stackdriver.
It is a Node.js app running Express.js and using Winston for logging with a Stackdriver transport.
When I run this container locally, everything is logged correctly and shows up in the Cloud console. When I run this same container, with the same environment variables, in a GCE VM, the logs don't show up.
What do you mean exactly by locally? Are you running the container in Cloud Shell, as opposed to on an instance? Keep in mind that if a container or instance needs to do something privileged (like using the Stackdriver Logging client library), it won't work unless the instance has a service account with the necessary role/privileges set up.
You mentioned that you use the same environment variables; I take it one of the env vars points to your JSON key file. Is the key file present at that path on the instance?
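As a quick check (the instance name and zone below are placeholders), you can inspect which service account and access scopes the VM was created with:
gcloud compute instances describe my-vm --zone us-central1-a
(look at the serviceAccounts section of the output for the account email and its scopes)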
From Winston documentation it looks like you need to specify the key file location for the service account:
const winston = require('winston');
const Stackdriver = require('@google-cloud/logging-winston');

winston.add(Stackdriver, {
  projectId: 'your-project-id',
  keyFilename: '/path/to/keyfile.json'
});
Have you checked that this is configured with the key for a service account that has a logging role?

Windows Azure Console for Worker Role Cloud Service

I have a worker role cloud service that I have recently developed on my local machine. The service exposes a WCF interface that receives a file as a byte array, recompiles the file, converts it to the appropriate format, then stores it in Azure Storage. I managed to get everything working using the Azure Compute Emulator on my machine and published the service to Azure and... nothing. Running it on my machine again, it works as expected. When I was working on it on my computer, the Azure Compute Emulator's console output was essential in getting the application running.
Is there similar functionality that can be tapped into on the Cloud Service via RDP, such as starting/restarting the role at the command prompt or in PowerShell? If not, what is the best way to debug/log what the worker role is doing (without using IntelliTrace)? I have diagnostics enabled in the project, but it doesn't seem to be giving me the same level of detail as the Compute Emulator console. I've rerun the role and the corresponding .NET application on localhost and was unable to find any errors in the console.
Edit: The Next Best Thing
Falling back to manual logging, I implemented a class that would feed text files into my Azure Storage account. Here's the code:
public class EventLogger
{
    public static void Log(string message)
    {
        CloudBlobContainer cbc;
        cbc = CloudStorageAccount.Parse(RoleEnvironment.GetConfigurationSettingValue("StorageClientAccount"))
            .CreateCloudBlobClient()
            .GetContainerReference("errors");
        cbc.CreateIfNotExist();
        cbc.GetBlobReference(string.Format("event-{0}-{1}.txt", RoleEnvironment.CurrentRoleInstance.Id, DateTime.UtcNow.Ticks)).UploadText(message);
    }
}
Calling EventLogger.Log() will create a new text file and record whatever message you put in there. I found the example in the answer below.
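As a usage sketch (the loop body here is a placeholder, not the real service code), the role entry point funnels exceptions into it:
using System;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        try
        {
            // ... the actual WCF/conversion work loop goes here ...
        }
        catch (Exception ex)
        {
            // Persist the failure to blob storage so it can be inspected after the role recycles
            EventLogger.Log(ex.ToString());
            throw;
        }
    }
}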
There is no console for worker roles that I'm aware of. If diagnostics isn't giving you any help, then you need to get a little hacky. Try tracing out messages and errors to blob storage yourself. Steve Marx has a good example of this here http://blog.smarx.com/posts/printf-here-in-the-cloud
As he notes in the article, this is not for production, just to help you find your problem.