I'm trying to use the NuGet package Microsoft.Azure.CognitiveServices.Knowledge.QnAMaker with QnAMakerClient and Knowledgebase.DownloadWithHttpMessagesAsync, but when I try to instantiate QnAMakerClient the constructor parameters are abstract classes, so I don't know how to use QnAMakerClient to download the contents of Body.QnaDocuments.
// This doesn't compile: ServiceClientCredentials is abstract, and I don't know which concrete type to pass.
var z = new Microsoft.Azure.CognitiveServices.Knowledge.QnAMaker.QnAMakerClient(new ServiceClientCredentials(), httpClient, disposeHttpClient);
var kb = z.Knowledgebase.DownloadWithHttpMessagesAsync("key", "test").Result;
Thanks for your help.
@elise You can find one example of how to download the knowledge base here.
Essentially:
var client = new QnAMakerClient(new ApiKeyServiceClientCredentials(key)) { Endpoint = endpoint };
// Download the KB
Console.Write("Downloading KB...");
var kbData = await client.Knowledgebase.DownloadAsync(kbId, EnvironmentType.Prod);
Console.WriteLine("KB Downloaded. It has {0} QnAs.", kbData.QnaDocuments.Count);
I'm pretty new to the ABP Framework, and this question probably has a really simple answer, but I haven't managed to find it. Images are an important part of any app, and handling them well (size, caching) is mandatory.
Scenario
set up a File System Blob Storing provider. This means the uploaded file will be stored on the file system as an image file
make a service that uses a blob container to save and retrieve the image. After saving, I use the unique file name as the blob name; that name is used to retrieve it later
the user is logged in, so authorization is required
I can easily obtain the image's bytes by calling blobContainer.GetAllBytesOrNullAsync(blobName)
I want to display the image easily, either in an <img> tag or directly in a datatable row.
So here is my question: is there an easy way to use a blob-stored image as the src of an <img> directly in a Razor page? What I've managed to achieve is setting a source string in the model, built from the image type plus the bytes converted to a Base64 string (as here), but in that case I have to do it in the model, and I don't see how browser caching would work.
I am aware that this may be a question more related to asp.net core, but I was thinking that maybe ABP has some way to access the image via a link.
If you have the ID of the blob, it is easy: just create an endpoint that returns the image for a given blob id.
Here is a sample AppService:
public class DocumentAppService : FileUploadAppService
{
    private readonly IBlobContainer<DocumentContainer> _blobContainer;
    private readonly IRepository<Document, Guid> _repository;

    public DocumentAppService(IRepository<Document, Guid> repository, IBlobContainer<DocumentContainer> blobContainer)
    {
        _repository = repository;
        _blobContainer = blobContainer;
    }

    public async Task<List<DocumentDto>> Upload([FromForm] List<IFormFile> files)
    {
        var output = new List<DocumentDto>();
        foreach (var file in files)
        {
            using var memoryStream = new MemoryStream();
            await file.CopyToAsync(memoryStream).ConfigureAwait(false);

            // Persist the metadata first, then store the bytes under the same id.
            var id = Guid.NewGuid();
            var newFile = new Document(id, file.Length, file.ContentType, CurrentTenant.Id);
            await _repository.InsertAsync(newFile);
            await _blobContainer.SaveAsync(id.ToString(), memoryStream.ToArray()).ConfigureAwait(false);
            output.Add(ObjectMapper.Map<Document, DocumentDto>(newFile));
        }
        return output;
    }

    public async Task<FileResult> Get(Guid id)
    {
        // Look up the metadata to recover the MIME type, then load the bytes.
        var currentFile = await _repository.FindAsync(id);
        if (currentFile != null)
        {
            var myfile = await _blobContainer.GetAllBytesOrNullAsync(id.ToString());
            return new FileContentResult(myfile, currentFile.MimeType);
        }
        throw new FileNotFoundException();
    }
}
The Upload method uploads the files and the Get method returns the file.
Now use the Get route as the src for the image.
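For example, assuming ABP's auto API controllers expose the app service at a route like /api/app/document/{id} (this route is an assumption; check the generated API in Swagger), the Razor page can point an <img> at it directly:

@* Hypothetical route: verify the generated URL (e.g., in Swagger) before using it. *@
<img src="/api/app/document/@Model.DocumentId" alt="stored image" />

Because the browser then fetches the image by URL like any other resource, normal HTTP caching can kick in, provided the endpoint returns suitable cache headers.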
Here is the blog post: https://blog.antosubash.com/posts/dotnet-file-upload-with-abp
Repo: https://github.com/antosubash/FileUpload
I use the following code in an Azure Function to push files to a GitHub repository when a new file is uploaded to blob storage, which triggers the function.
But it doesn't work if multiple files are uploaded to blob storage within a short time interval: only one random file is pushed to GitHub, and then the function throws an exception; in the log:
Description: The process was terminated due to an unhandled exception.
Exception Info: Octokit.ApiValidationException: Reference cannot be updated
{"message":"Reference cannot be updated","documentation_url":"https://docs.github.com/rest/reference/git..."}
This is the code:
// async Task (not async void), so exceptions propagate to the caller instead of crashing the process
public static async Task PushToGithub(string fileName, Stream myBlob)
{
    // github variables
    var owner = GITHUB_USER;
    var repo = GITHUB_REPO;
    var token = GITHUB_TOKEN;

    // Create our client
    var github = new GitHubClient(new ProductHeaderValue("GithubCommit"));
    var tokenAuth = new Credentials(token);
    github.Credentials = tokenAuth;

    var headMasterRef = "heads/master";
    // Get reference of master branch
    var masterReference = await github.Git.Reference.Get(owner, repo, headMasterRef);
    // Get the latest commit of this branch
    var latestCommit = await github.Git.Commit.Get(owner, repo, masterReference.Object.Sha);

    // For an image, get the content and convert it to Base64
    byte[] bytes;
    using (var memoryStream = new MemoryStream())
    {
        myBlob.Position = 0;
        myBlob.CopyTo(memoryStream);
        bytes = memoryStream.ToArray();
    }
    var pdfBase64 = Convert.ToBase64String(bytes);

    // Create blob
    var pdfBlob = new NewBlob { Encoding = EncodingType.Base64, Content = pdfBase64 };
    var pdfBlobRef = await github.Git.Blob.Create(owner, repo, pdfBlob);

    // Create new tree based on the latest commit's tree
    var nt = new NewTree { BaseTree = latestCommit.Tree.Sha };
    // Add items based on blobs
    nt.Tree.Add(new NewTreeItem { Path = fileName, Mode = "100644", Type = TreeType.Blob, Sha = pdfBlobRef.Sha });
    var newTree = await github.Git.Tree.Create(owner, repo, nt);

    // Create commit
    var newCommit = new NewCommit("File update " + DateTime.UtcNow, newTree.Sha, masterReference.Object.Sha);
    var commit = await github.Git.Commit.Create(owner, repo, newCommit);

    // Update HEAD with the commit (force update)
    await github.Git.Reference.Update(owner, repo, headMasterRef, new ReferenceUpdate(commit.Sha, true));
}
How can I solve this so that all the files uploaded to blob storage are pushed correctly to GitHub?
Thanks in advance,
Marco
Have a look at this official doc:
https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-storage-blob-trigger?tabs=csharp
In addition, storage logs are created on a "best effort" basis. There's no guarantee that all events are captured. Under some conditions, logs may be missed.
If you require faster or more reliable blob processing, consider creating a queue message when you create the blob. Then use a queue trigger instead of a blob trigger to process the blob. Another option is to use Event Grid; see the tutorial Automate resizing uploaded images using Event Grid.
If your focus is on reliably processing the blobs, use a queue trigger to make sure each blob gets processed; if you also care about not losing any events, use Event Grid.
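As a rough sketch of the queue-based approach (the queue name "pdf-queue" and the "uploads" container are assumptions; whatever uploads the blob must also enqueue the blob name):

using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class PushBlobToGithubFunction
{
    [FunctionName("PushBlobToGithub")]
    public static async Task Run(
        [QueueTrigger("pdf-queue")] string blobName,                      // assumed queue name
        [Blob("uploads/{queueTrigger}", FileAccess.Read)] Stream myBlob,  // assumed container
        ILogger log)
    {
        log.LogInformation($"Processing blob: {blobName}");
        await PushToGithub(blobName, myBlob); // the method from the question
    }
}

Note that queue messages are still dequeued in parallel by default; if the GitHub reference update must be serialized, setting batchSize to 1 (and newBatchThreshold to 0) under queues in host.json processes one message at a time per instance.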
I got an assignment to see if I can make a Power App that will generate a PDF file for the end user to see.
After thorough research on this topic, I found out that this is not easy to achieve :)
To make the Power App generate and download/show the generated PDF, I took these steps:
Created a Power App with just one button :) to call the Azure Function from step 2
Created an Azure Function that generates the PDF and returns it as StreamContent
Due to Power Apps limitations (or I just could not find the way), there was no way for me to get the PDF from the response inside the Power App.
After this, I changed my Azure Function to create a new blob entry instead, but now I have a problem getting the URL for that new entry inside the Azure Function, so that I can return it to the Power App and use it with the Power App Download function.
My Azure function code is below
using System;
using System.IO;
using System.Net;
using System.Net.Http.Headers;
using System.Runtime.InteropServices;
using Aspose.Words;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log, Stream outputBlob)
{
    log.Info($"C# HTTP trigger function processed a request. RequestUri={req.RequestUri}");

    var dataDir = @"D:/home";
    var docFile = $"{dataDir}/word-templates/WordAutomationTest.docx";
    var uid = Guid.NewGuid().ToString().Replace("-", "");
    var pdfFile = $"{dataDir}/pdf-export/WordAutomationTest_{uid}.pdf";

    // Convert the Word template to PDF on disk, then copy it to the blob output binding.
    var doc = new Document(docFile);
    doc.Save(pdfFile);

    var result = new HttpResponseMessage(HttpStatusCode.OK);
    var stream = new FileStream(pdfFile, FileMode.Open);
    stream.CopyTo(outputBlob);
    // result.Content = new StreamContent(stream);
    // result.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment");
    // result.Content.Headers.ContentDisposition.FileName = Path.GetFileName(pdfFile);
    // result.Content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
    // result.Content.Headers.ContentLength = stream.Length;
    return result;
}
I left the old code (the part that streams the PDF back) in comments, just as a reference for what I tried.
Is there any way to get the download URL for a newly generated blob entry inside an Azure function?
Is there any better way to make the Power App generate and download/show the generated PDF?
P.S. I tried to use the PDFViewer control inside the Power App, but this control is completely useless because you cannot set its Document value via a function.
EDIT: The response from @mathewc helped me a lot to finally wrap this up. All details are below.
New Azure function that works as expected
#r "Microsoft.WindowsAzure.Storage"
using System;
using System.Net;
using System.Net.Http.Headers;
using System.Runtime.InteropServices;
using Aspose.Words;
using Microsoft.WindowsAzure.Storage.Blob;
public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log, CloudBlockBlob outputBlob)
{
log.Info($"C# HTTP trigger function processed a request. RequestUri={req.RequestUri}");
var dataDir = #"D:/home";
var docFile = $"{dataDir}/word-templates/WordAutomationTest.docx";
var uid = Guid.NewGuid().ToString().Replace("-", "");
var pdfFile = $"{dataDir}/pdf-export/WordAutomationTest_{uid}.pdf";
var doc = new Document(docFile);
doc.Save(pdfFile);
var result = new HttpResponseMessage(HttpStatusCode.OK);
var stream = new FileStream(pdfFile, FileMode.Open);
outputBlob.UploadFromStream(stream);
return req.CreateResponse(HttpStatusCode.OK, outputBlob.Uri);
}
REMARKS:
We need to add "WindowsAzure.Storage": "7.2.1" to project.json. This package MUST be the same version as the one with the same name in %USERPROFILE%\AppData\Local\Azure.Functions.Cli
If you change your blob output binding type from Stream to CloudBlockBlob, you will have access to CloudBlockBlob.Uri, which is the blob path you require (documentation here). You can then return that Uri to your Power App. You can use CloudBlockBlob.UploadFromStreamAsync to upload your PDF stream to the blob.
Is it possible to download a specific block from an Azure Block Blob if you know the Block Id?
Yes, you absolutely can. Here's an example of how to download the first block:
var storageAccount = CloudStorageAccount.Parse("DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...;EndpointSuffix=core.windows.net");
CloudBlobClient serviceClient = storageAccount.CreateCloudBlobClient();

var container = serviceClient.GetContainerReference("containerName");
var blockBlob = container.GetBlockBlobReference("blobName");

// The committed block list is returned in order, so the first entry
// describes the first block and its length. (First() needs System.Linq.)
var blocklist = await blockBlob.DownloadBlockListAsync();
var firstBlock = blocklist.First();

var memStream = new MemoryStream();
await blockBlob.DownloadRangeToStreamAsync(memStream, 0, firstBlock.Length);
memStream.Position = 0; // rewind, or the reader below sees an empty stream

string contents;
using (var streamReader = new StreamReader(memStream))
{
    contents = await streamReader.ReadToEndAsync();
}
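If you need a block other than the first, the committed block list is returned in order, so a block's starting offset is the sum of the lengths of the blocks before it. A sketch under that assumption, where blockIndex is a hypothetical variable for the block you want:

// Needs: using System.Linq;
int blockIndex = 2; // hypothetical: the index of the block to download
var blocks = (await blockBlob.DownloadBlockListAsync()).ToList();
long offset = blocks.Take(blockIndex).Sum(b => b.Length);

var blockStream = new MemoryStream();
await blockBlob.DownloadRangeToStreamAsync(blockStream, offset, blocks[blockIndex].Length);
blockStream.Position = 0; // rewind before reading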
You will need the WindowsAzure.Storage NuGet package, which provides the Microsoft.WindowsAzure.Storage and Microsoft.WindowsAzure.Storage.Blob namespaces.
You could leverage the Microsoft Azure Storage SDK to get started with Azure Blob storage quickly; the SDK is a wrapper around the Blob service REST API. Looking through the official Blob service REST API reference, there is nothing about downloading a specific block via its Block Id. However, you can use Get Blob to download a range of bytes from your blob by specifying the offset and the length of the data to download.
I have searched high and low for this one and can't seem to find a way of accessing Request.Content in an MVC Web API. I am basically trying to create a file service to and from Azure blob and table storage (table for storing metadata about the file, blob for the actual file)....
I was converting the steps in the following link, but this is where I have come unstuck.
I have the back end working, but I can't find a way for the new unified controller to pass a file object from a JSON post through to the service. Any ideas would be greatly appreciated, as always... or am I just going about this the wrong way?
Article here....
UPDATE: To clarify, what I am trying to do in the new MVC 6 (where you no longer have an ApiController to inherit from) is access a file that has been uploaded to the API from a JSON post. That is the long and short of what I am trying to achieve.
I am trying to follow the article based on the old Web API, which uses Request.Content to access the file; however, even if I use the WebAPIShim they provide, I still come unstuck with other objects or properties that are no longer available. So I'm wondering if I need to approach it a different way; either way, all I am trying to do is get a file from a JSON post to an MVC 6 Web API and pass that file to my back-end service....
ANY IDEAS?
Here is an example without relying on model binding.
You can always find the request data in Request.Body, or use Request.Form to get the request body as a form.
[HttpPost]
public async Task<IActionResult> UploadFile()
{
    if (Request.Form.Files != null && Request.Form.Files.Count > 0)
    {
        var file = Request.Form.Files[0];
        var contentType = file.ContentType;
        using (var fileStream = file.OpenReadStream())
        using (var memoryStream = new MemoryStream())
        {
            await fileStream.CopyToAsync(memoryStream);
            // do what you want with memoryStream.ToArray()
        }
    }
    return new JsonResult(new { });
}
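For what it's worth, what fills Request.Form.Files is a multipart/form-data POST rather than a raw JSON body. A minimal client-side sketch with HttpClient, run inside an async method (the URL and file path are placeholders):

// Requires: using System.IO; using System.Net.Http; using System.Net.Http.Headers;
using (var http = new HttpClient())
using (var form = new MultipartFormDataContent())
using (var fileStream = File.OpenRead(@"C:\temp\sample.pdf")) // placeholder path
{
    var content = new StreamContent(fileStream);
    content.Headers.ContentType = new MediaTypeHeaderValue("application/pdf");
    form.Add(content, "file", "sample.pdf");
    var response = await http.PostAsync("https://localhost:5001/api/upload", form); // placeholder URL
    response.EnsureSuccessStatusCode();
}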
If the only thing in your request is a file, you can use the IFormFile class in your action:
// Requires: using Microsoft.Net.Http.Headers; for ContentDispositionHeaderValue
public FileDetails UploadSingle(IFormFile file)
{
    FileDetails fileDetails;
    using (var reader = new StreamReader(file.OpenReadStream()))
    {
        var fileContent = reader.ReadToEnd();
        var parsedContentDisposition = ContentDispositionHeaderValue.Parse(file.ContentDisposition);
        fileDetails = new FileDetails
        {
            Filename = parsedContentDisposition.FileName,
            Content = fileContent
        };
    }
    return fileDetails;
}