I found this great Getting Started Guide for the Azure Blob Storage SDK and how to connect to my storage account.
A quick prototype showed that it already works, but I want to verify this, and the logic behind it, with tests (either unit or integration tests).
I found this resource on an Azure Testing Library that can record HTTP requests of a pipeline and was wondering whether this is applicable to the Blob Storage SDK as well?
Are there any other options to properly test my application code's interaction with the Blob Storage SDK?
My idea would, for example, be:
1. Call a method on my client with a parameter.
2. Take the blob name from the passed parameter and make a call to the blob storage container.
3. Validate via a test case that the call was made to the correct container and blob.
• I too just tried to follow the documentation link for various tasks regarding Azure Blob Storage using the .NET v12 SDK, and the results were successful.
Also, if you want to call a method/task on your client/application using a parameter with respect to Blob Storage, you can certainly do so by using the ‘BlobServiceClient’ class. To learn more about how to use it, please refer to the documentation link below:
https://azure.github.io/azure-sdk-korean/dotnet_introduction.html
It clearly explains how to construct the service client and which methods take parameters for performing various tasks, as shown in the sample code from that document below:
namespace Azure.<group>.<service_name> {

    // main service client class
    public class <service_name>Client {

        // simple constructors; don't use default parameters
        public <service_name>Client(<simple_binding_parameters>);
        public <service_name>Client(<simple_binding_parameters>, <service_name>ClientOptions options);

        // 0 or more advanced constructors
        public <service_name>Client(<advanced_binding_parameters>, <service_name>ClientOptions options = default);

        // mocking constructor
        protected <service_name>Client();

        // service methods (synchronous and asynchronous)
        public virtual Task<Response<<model>>> <service_operation>Async(<parameters>, CancellationToken cancellationToken = default);
        public virtual Response<<model>> <service_operation>(<parameters>, CancellationToken cancellationToken = default);

        // other members
    }

    // options for configuring the client
    public class <service_name>ClientOptions : ClientOptions {
    }
}
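Mapped onto Blob Storage, that pattern looks roughly like this (container, blob, and file names below are just placeholders):

using Azure.Storage.Blobs;

// BlobServiceClient follows the client pattern above: simple constructors plus
// synchronous and asynchronous service methods that take parameters.
var serviceClient = new BlobServiceClient("<your-connection-string>");
var containerClient = serviceClient.GetBlobContainerClient("sample-container");
await containerClient.CreateIfNotExistsAsync();

// Upload a local file to a blob whose name is passed in as a parameter.
var blobClient = containerClient.GetBlobClient("sample-blob.txt");
await blobClient.UploadAsync("local-file.txt", overwrite: true);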
Also, I would suggest referring to this community thread:
Call .NET Web API method whenever a new file is added to Azure Blob Storage
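On the testing part of your question: besides recording HTTP traffic, the v12 clients are designed to be mockable (they expose protected constructors and virtual members, as in the pattern above), so you can unit test exactly the flow you describe. Below is a rough sketch, assuming Moq and xUnit as test dependencies; BlobUploader is a hypothetical stand-in for your own class under test, not part of the SDK.

using System.IO;
using System.Threading;
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Moq;
using Xunit;

// Hypothetical class under test: takes the container client via constructor injection
// and uploads content to a blob named after the passed parameter.
public class BlobUploader
{
    private readonly BlobContainerClient _container;

    public BlobUploader(BlobContainerClient container) => _container = container;

    public Task UploadReportAsync(string blobName, Stream content, CancellationToken ct = default)
        => _container.GetBlobClient(blobName).UploadAsync(content, overwrite: true, ct);
}

public class BlobUploaderTests
{
    [Fact]
    public async Task Uploads_to_the_blob_named_after_the_parameter()
    {
        // BlobContainerClient and BlobClient expose protected constructors and virtual members,
        // so Moq can stand in for them without touching a real storage account.
        var blobClient = new Mock<BlobClient>();
        var containerClient = new Mock<BlobContainerClient>();
        containerClient
            .Setup(c => c.GetBlobClient("report-42.json"))
            .Returns(blobClient.Object);

        var uploader = new BlobUploader(containerClient.Object);
        await uploader.UploadReportAsync("report-42.json", new MemoryStream());

        // Validate that the call went to the expected blob in the expected container.
        containerClient.Verify(c => c.GetBlobClient("report-42.json"), Times.Once);
        blobClient.Verify(
            b => b.UploadAsync(It.IsAny<Stream>(), true, It.IsAny<CancellationToken>()),
            Times.Once);
    }
}

This only verifies your code's interaction with the client; for end-to-end confidence you would still want an integration test against a real or emulated storage account.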
With the Elsa workflow designer it is possible to define a workflow and publish it; you can also create a workflow programmatically by implementing the IWorkflow interface.
I need to create a workflow programmatically at runtime, save it to the database, and run it occasionally.
In a controller of my ASP.NET Core project, I resolve IWorkflowBuilder as a dependency, build a workflow with the builder, and get back a WorkflowBlueprint object, but I don't know how to store it or how to run it.
I also have the Elsa dashboard in my project and I use Entity Framework persistence for it.
Is there a way to convert a WorkflowBlueprint to a WorkflowDefinition, or to generate a WorkflowDefinition from scratch programmatically?
Does anyone have any ideas?
Although it might theoretically be possible to store an IWorkflow implementation in the database, there are some caveats that make this tricky to say the least. Here is why:
A workflow definition created by the designer consists purely of a list of activities and connections between them. Because of that, everything is easily serialized into JSON and stored in the database.
However, when you write a C# class, you can do fancier things, such as configuring activities using C# lambda expressions and implementing "inline" activity code. When you try to serialize this to JSON, these C# expressions will be serialized using just their type names.
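For example, a purely illustrative workflow like the one below builds part of its behavior from an inline lambda; that lambda is compiled code, so serializing the workflow to JSON cannot capture it:

// Hypothetical programmatic workflow: the inline lambda is compiled C# code, not data,
// so it cannot round-trip through JSON the way a designer-made workflow definition can.
public class GreetingWorkflow : IWorkflow
{
    public void Build(IWorkflowBuilder builder)
    {
        builder.WriteLine(context => $"Hello! The time is {System.DateTime.Now:T}.");
    }
}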
Although there might be ways to somehow store a programmatic workflow into the database, perhaps even by storing a compiled assembly in the DB, I don't think it's worth the trouble because there are better ways.
You said that you need a programmatic workflow that you only run sometimes.
To achieve that, you do not need to store a workflow in the database.
The way Elsa works is that all workflow sources are converted into a thing called a Workflow Blueprint.
A workflow blueprint represents an executable workflow, carrying all the details the workflow invoker needs to run it.
There are different "sources" to establish these workflow blueprints by means of classes that implement IWorkflowProvider, of which there are three:
Programmatic Workflow Provider
Database Workflow Provider
Blob Storage Workflow Provider
The programmatic provider is what turns IWorkflow implementations into workflow blueprints, while the database provider turns workflow definitions into blueprints. The blob storage provider is similar, except it turns JSON files into blueprints.
The bottom line is that the origin of a workflow blueprint doesn't matter for the workflow engine.
All workflow blueprints are accessed through a service called the workflow registry, which you can use to load & execute a given workflow.
For example, if you have a programmatic workflow called MyWorkflow, you can execute it whenever you want like this:
public class MyWorkflow : IWorkflow
{
    public void Build(IWorkflowBuilder builder)
    {
        builder.WriteLine("Hello World!");
    }
}

[ApiController]
[Route("my-workflow")]
public class MyWorkflowController : Controller
{
    private readonly IWorkflowRegistry _workflowRegistry;
    private readonly IStartsWorkflow _workflowStarter;

    public MyWorkflowController(IWorkflowRegistry workflowRegistry, IStartsWorkflow workflowStarter)
    {
        _workflowRegistry = workflowRegistry;
        _workflowStarter = workflowStarter;
    }

    [HttpGet("run")]
    public async Task<IActionResult> RunMyWorkflow(CancellationToken cancellationToken)
    {
        // 1. Get my workflow blueprint.
        var myWorkflowBlueprint = (await _workflowRegistry.GetWorkflowAsync<MyWorkflow>(cancellationToken))!;

        // 2. Run the workflow.
        await _workflowStarter.StartWorkflowAsync(myWorkflowBlueprint, cancellationToken: cancellationToken);

        return Ok();
    }
}
Invoking this controller will execute MyWorkflow.
As you can see, there is no need to store the workflow in the database in order to execute it on demand. Even if you did store the workflow in the database, the code would be the same, provided the name of the workflow remains "MyWorkflow". Under the covers, GetWorkflowAsync<TWorkflow> is simply an extension method that uses the type name to find the workflow by name. If you wanted to load a workflow by name for which there is no workflow class defined, you would use FindByNameAsync instead, or FindAsync if all you have is a workflow definition ID.
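For completeness, a rough sketch of that by-name lookup inside the same controller (the exact FindByNameAsync parameters can differ between Elsa versions, so treat this as an assumption rather than the definitive signature):

// Load a published workflow blueprint by name rather than by CLR type, then start it as above.
// "OrderFulfillment" is just an example workflow name.
var blueprint = await _workflowRegistry.FindByNameAsync(
    "OrderFulfillment", VersionOptions.Published, cancellationToken: cancellationToken);

if (blueprint != null)
    await _workflowStarter.StartWorkflowAsync(blueprint, cancellationToken: cancellationToken);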
I am using the Azure SignalR Service in combination with Azure Functions and I have the following code:
public class SignalRHubFunction
{
    [FunctionName("SignalRConnected")]
    public async Task Run(
        [SignalRTrigger("myhubname", "connections", "connected", ConnectionStringSetting = "AzureSignalRConnectionString")] InvocationContext invocationContext,
        ILogger logger)
    {
        logger.LogInformation($"{invocationContext.ConnectionId} connected");
    }
}
I have a hard time getting a trigger on the 'OnConnected' event. Samples are only given for the class-based model, and the docs aren't really helpful to me.
The docs tell me the category parameter of the SignalRTrigger constructor should be connections or messages, so I use connections:
This value must be set as the category of messages for the function to be triggered. The category can be one of the following values:
connections: Including connected and disconnected events
messages: Including all other events except those in connections category
I don't really understand what the docs mean by the event parameter:
This value must be set as the event of messages for the function to be triggered. For messages category, event is the target in invocation message that clients send. For connections category, only connected and disconnected is used.
I guess they are saying you can choose between connected and disconnected.
However with the code from above the trigger is never hit. Any thoughts?
Original Answer: https://learn.microsoft.com/en-us/answers/questions/159266/debug-function-using-a-signalrtrigger.html
For the SignalRTrigger to work, you need to configure a webhook in SignalR that points to the function.
When you deploy a function with the SignalRTrigger, it does two extra things:
Creates the webhook: https://<APP_NAME>.azurewebsites.net/runtime/webhooks/signalr
Creates an API key (signalr_extension) in the function app's system keys (see the "System keys" section in the "App keys" blade of the Function App in the Azure portal)
SignalR POSTs events to this webhook (so obviously this doesn't work locally, unless you have a publicly addressable IP that you can add to SignalR).
To configure the SignalR event webhook, go to the "Settings" blade of the SignalR resource and add the upstream URL https://<APP_NAME>.azurewebsites.net/runtime/webhooks/signalr?code=<API_KEY>
Voila! This should now work.
Ref: https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-signalr-service-trigger?tabs=javascript#send-messages-to-signalr-service-trigger-binding
This code snippet works.
[FunctionName("OnConnected ")]
public async Task OnConnected ([SignalRTrigger("YourHub", "connections", "connected ")]
InvocationContext invocationContext, ILogger logger)
{
logger.LogInformation($"{invocationContext.ConnectionId} has connected");
}
Also configure the upstream URL in the Azure SignalR Service settings:
<Function_App_URL>/runtime/webhooks/signalr?code=<API_KEY>
SignalR Service integration
The Function_App_URL can be found on the Function App's Overview page, and the API_KEY is generated by Azure Functions. You can get the API_KEY from signalr_extension in the App keys blade of the Function App.
I have created a demo microservices application implemented with the help of Azure Function Apps. For separation of concerns, I have created an API Layer, Business Layer, and a Data Layer.
The API layer, being the function app, calls the business layer which implements the business logic while the data layer implements logic for storing and retrieving data.
After considerable thought, I have decided to use query-based API versioning for my demo.
The question I have is: what is the best way to organize my code to facilitate this? Is there any other way to organize my code to accommodate the different versions, apart from using different namespaces/repos?
As of now, I've created separate namespaces for each version, but this has created a lot of code duplication. Also, after having it reviewed by some friends, the concern was raised that if separate namespaces are used, I would be forcing legacy consumers to change references to the new namespace whenever they update, which is not recommended.
Any help would be appreciated.
The simplest way to implement versioning in Azure Functions is using endpoints. The HttpTrigger Attribute allows the definition of a custom route where you can set the expected version.
// Version 1 of get users
[FunctionName(nameof(V1UserList))]
public static IEnumerable<UserModel> V1UserList(
    [HttpTrigger(AuthorizationLevel.Function, "get", Route = "v1/users")] HttpRequest req, ILogger log)
{
    // Build and return the version 1 representation of the users here.
    return Enumerable.Empty<UserModel>();
}

// Version 2 of get users
[FunctionName(nameof(V2UserList))]
public static IEnumerable<UserModel> V2UserList(
    [HttpTrigger(AuthorizationLevel.Function, "get", Route = "v2/users")] HttpRequest req, ILogger log)
{
    // Build and return the version 2 representation of the users here.
    return Enumerable.Empty<UserModel>();
}
When deploying each version in isolation, a router component is required to redirect requests to the correct API endpoint.
The router component can be implemented in Azure using different services, such as:
Azure Functions Proxies: you can specify endpoints on your function app that are implemented by another resource. You can use these proxies to break a large API into multiple function apps (as in a microservice architecture), while still presenting a single API surface for clients.
API Management: Azure API Management supports importing Azure Function Apps as new APIs or appending them to existing APIs. The process automatically generates a host key in the Azure Function App, which is then assigned to a named value in Azure API Management.
Sample code for Versioning APIs in Azure Functions
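Since the question mentions query-based versioning: if you prefer to keep a single route, a function can also branch on an api-version query parameter. A rough sketch (the parameter name and payloads are illustrative; it uses the same usings as the snippet above, plus Microsoft.AspNetCore.Mvc for IActionResult):

[FunctionName("UserList")]
public static IActionResult UserList(
    [HttpTrigger(AuthorizationLevel.Function, "get", Route = "users")] HttpRequest req, ILogger log)
{
    // Read the requested version from the query string; anything other than "2" falls back to v1.
    string version = req.Query["api-version"];
    log.LogInformation("Serving user list for api-version={Version}", version);

    return version == "2"
        ? new OkObjectResult(new[] { "user shaped for v2" })
        : new OkObjectResult(new[] { "user shaped for v1" });
}

This keeps one deployable unit per resource, at the cost of version branching living inside the function body rather than in the routing layer.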
A project I'm working on is utilizing Spring Cloud Config server to handle property update/refresh.
One question that has repeatedly come up is how to reference/serve plain text from the config server.
I know that the server supports serving plain text. What I'm trying to figure out is, if I have a resource at /foo/default/master/log4j2.xml, how I would reference it in an "agnostic" way, such that if I were to put
{configserver}/foo/default/master/log4j2.xml
in the config file, the reference {configserver} would be expanded.
Additionally, when using "discovery", if I inject the reference to the "resource" as above, the default mechanism will attempt to use java.net.URLConnection to load the content. I do not think it will resolve the 'discovery' host.
Thanks in advance.
It can also be resolved without aspects, by using Customizing Bootstrap Configuration: create a custom property source and set the config server URI after locating it via discovery.
I had a similar issue; more details in this Stack Overflow post.
I found a way to do this that is minimally invasive but "pierces the veil" of where the config server actually resides.
On the primary application class, the annotation @EnableDiscoveryClient needs to be added.
I created an aspect to add a property source with a key that indicates the actual URI of the server handling the request:
@Component
@Aspect
public class ResolverAspect {

    @Autowired
    private DiscoveryClient discoveryClient;

    @Pointcut("execution(org.springframework.cloud.config.environment.Environment org.springframework.cloud.config.server.environment.EnvironmentController.*(..))")
    private void environmentControllerResolve() {
    }

    @Around("environmentControllerResolve()")
    public Object environmentControllerResolveServer(final ProceedingJoinPoint pjp) throws Throwable {
        final Environment pjpReturn = (Environment) pjp.proceed();
        final ServiceInstance localServiceInstance = discoveryClient.getLocalServiceInstance();
        final PropertySource instancePropertySource =
            new PropertySource("cloud-instance", Collections.singletonMap("configserver.instance.uri", localServiceInstance.getUri().toString()));
        pjpReturn.addFirst(instancePropertySource);
        return pjpReturn;
    }
}
By doing this, I expose a key configserver.instance.uri which can then be referenced from within a property value and interpolated/resolved on the client.
This has some ramifications with regard to exposing the actual configuration server, but for resolving resources that do not necessarily go through the discovery client, this approach can be used.
As part of the web application that I'm developing at the moment, I have a requirement to write files out to storage. It will initially be hosted on Azure Websites, but in the future I would like to have the ability to host it on non-Azure servers.
So I'm looking for a library (and I hope that one exists) that would make it easy to switch between writing files to Azure Blob Storage or a local file system. Ideally it would have a common API and allow you to switch the storage location by changing config files only.
I'm having some issues finding libraries that would have this sort of functionality and I hope someone can point me in the right direction.
Not sure if such a library exists. If an abstraction library did exist, I would have thought it would need to provide implementations for Azure, S3, the local file system, Rackspace, etc.
Anyway, it's fairly straightforward to implement. Our project had two different versions, cloud-based and on-premise, with the main real difference being the blob storage. What we did was build the core upload/download logic against an interface, create two different implementations of it (one for Azure Blob Storage and one for local file storage), and use StructureMap to get a reference to the concrete implementation based on a config value.
We obviously did not replicate each and every Blob Storage API in the interface, only the minimum required by our system.
Some example code:
Interface: (BlobBase is our custom class holding info such as container name, file name, content type, etc., and BlobStorageProviderStatus is a custom enum providing some status info. But you get the idea!)
public interface IBlobStorageProvider
{
    void CreateContainer(string containerName);
    BlobStorageProviderStatus UploadFile(BlobBase file, bool uploadAsNewVersion, Stream data, int chunk, int totalChunks, out string version);
    BlobStorageProviderStatus DownloadToStream(BlobBase file, Stream target, int chunkSize, IClientConnection clientConnection);
    void Delete(BlobBase blobBase);
    void DeleteDirectory(string directoryPath, string blobContainer);
    BlobStorageProviderStatus UploadFile(string container, string folder, string fileName, Stream data, string contentType);
    void DownloadToStream(string container, string filePath, Stream target);
}
Web.config has
<add key="AzureBlobStorage" value="true" />
and a simplified version of the StructureMap registration:
x.For<IBlobStorageProvider>()
    .Use(() => bool.Parse(ConfigurationManager.AppSettings["AzureBlobStorage"])
        ? new AzureBlobStorageProvider()
        : new FileSystemStorageProvider());
and the actual usage sample:
IBlobStorageProvider blobStorage = ObjectFactory.GetInstance<IBlobStorageProvider>();
blobStorage.CreateContainer("image");
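For illustration, here is a minimal sketch of what the Azure-backed side of such an interface could look like with today's v12 SDK. Only two members are shown, and everything apart from those member signatures (class names, enum values, connection handling) is an assumption, not the original project's code:

using System.IO;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

// Minimal stand-in for the custom status enum mentioned above.
public enum BlobStorageProviderStatus { Success, Failed }

// Partial sketch of the Azure-backed provider; the real class would implement IBlobStorageProvider in full.
public class AzureBlobStorageProvider
{
    private readonly BlobServiceClient _serviceClient;

    public AzureBlobStorageProvider(string connectionString) =>
        _serviceClient = new BlobServiceClient(connectionString);

    public void CreateContainer(string containerName)
    {
        // CreateIfNotExists keeps the call idempotent.
        _serviceClient.GetBlobContainerClient(containerName).CreateIfNotExists();
    }

    public BlobStorageProviderStatus UploadFile(string container, string folder, string fileName, Stream data, string contentType)
    {
        var blobClient = _serviceClient
            .GetBlobContainerClient(container)
            .GetBlobClient($"{folder}/{fileName}");

        blobClient.Upload(data, new BlobHttpHeaders { ContentType = contentType });
        return BlobStorageProviderStatus.Success;
    }
}

The file-system implementation would mirror the same members against System.IO, and the StructureMap registration above decides which one the application actually gets.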