How to hide sensitive data from open source projects which use continuous deployment?

I have a Discord bot project on GitHub which deploys automatically to Azure Web Apps. Since my project uses an API, it needs a token in a 'config.json' file. I want to share my source code, but I couldn't figure out how to do that without revealing sensitive data like my token.
Is there a way to avoid hard-coding it? This may be possible with a virtual machine, but I couldn't make it work with the App Service plan.
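
One common approach (an editor's sketch, not from the original thread) is to keep 'config.json' out of the repository entirely and read the token from an environment variable instead; Azure App Service surfaces its Application Settings to the process as environment variables, so no VM is needed. The setting name below is hypothetical:

```csharp
using System;

// Minimal sketch: read the token from an environment variable instead of
// hard-coding it. "DISCORD_TOKEN" is a hypothetical setting name; App Service
// exposes Application Settings as environment variables, so the same code
// works locally (shell env var) and in the cloud.
var token = Environment.GetEnvironmentVariable("DISCORD_TOKEN")
            ?? throw new InvalidOperationException(
                "DISCORD_TOKEN is not set - configure it as an App Service Application Setting.");

// ...pass `token` to the Discord client instead of reading it from config.json.
```

Locally you'd set the same variable in your shell (or keep a git-ignored config.json as a fallback); in Azure you'd add it under the Web App's application settings.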

Related

Adding Teams messaging extension to an existing Graph Api project

I am developing a Teams application with AspNetCore and React with typescript.
I would like to add the messaging extension functionality to my app. I have tested the available samples for Action based Messaging extension from here - https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/csharp_dotnetcore/51.teams-messaging-extensions-action
I could run this sample and successfully connect to the bot I created in my Azure resource group.
Since I already have a Teams project created, I want to know if I can just add the code that makes the messaging extension work from this sample or do I need to have a separate project just for running the bot and messaging extension?
Note- I do not have a Bot implementation in my original project.
It's definitely possible to combine them into the same project; you just need to make sure the endpoints are all wired up correctly. By convention the bot listens at an endpoint like "/api/messages", but that's only a convention - you can use that or anything else you like, as long as you configure it in the Azure Bot registration so it knows which endpoint belongs to your bot, as opposed to the rest of the ASP.NET project.
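
For illustration, the wiring in the linked BotBuilder samples boils down to a single controller, so it can live alongside the rest of an existing ASP.NET Core project; the route string is the only thing tied to the Azure Bot registration (a sketch of the sample's pattern, not the asker's project):

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Builder.Integration.AspNet.Core;

// The route is whatever you register as the "Messaging endpoint" on the
// Azure Bot resource - "api/messages" is only the sample convention.
[Route("api/messages")]
[ApiController]
public class BotController : ControllerBase
{
    private readonly IBotFrameworkHttpAdapter _adapter;
    private readonly IBot _bot;

    public BotController(IBotFrameworkHttpAdapter adapter, IBot bot)
    {
        _adapter = adapter;
        _bot = bot;
    }

    // Hand the incoming activity to the Bot Framework adapter.
    [HttpPost]
    public Task PostAsync() => _adapter.ProcessAsync(Request, Response, _bot);
}
```

The rest of the project's controllers and Razor pages are unaffected; the adapter and IBot implementation are registered in DI as in the sample.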

Sharing user login between Blazor WebServer and ASP.NET Core API

I am building a service-oriented system for personal use (plus a few friends may have limited access as well) - the aim is to have a dashboard for controlling my apps running on various machines such as Raspberry Pis (potentially expanding to a VPS or two in the future).
The architecture itself is pretty simple. For authentication I want to use AWS Cognito. Services would communicate with the WebAPI (and potentially with each other) using gRPC within a VPN, and the dashboard would be served by Blazor server-side (I may move to the Blazor WASM hosted model if I find a need for it). Each of the processes may or may not be on the same machine as any other (depending on the purpose). The Blazor server may or may not run within the VPN (I might want to move it to separate web hosting later).
I created a simple diagram to visualize it.
The problem comes with authentication. I want to have the Blazor server side and the API as separate processes (for now they're going to run on the same machine, but I may want to move them apart eventually). Ideally authentication should be handled by the API, so that it's client-agnostic and the API can use it to verify whether the logged-in user can perform an action - which by itself is simple.
However, I want the Blazor server to use and validate that token as well, in order to determine what to display to the user. I want to do this with as few calls as possible - ideally avoiding querying the API for every 'should I display it or not?' decision.
I could easily do it by sacrificing the possibility of moving the API elsewhere, and just merging the Blazor Server and the API gateway into one project. For my current purpose that would be enough, but it's not an ideal solution, so first I want to look into how I could achieve my original vision.
How could I achieve this (with a minimal number of Blazor-server-to-API queries)?
I've googled for a solution a lot, but so far I've only found examples either of using Blazor server and the API as one project, or of making client-side calls to the API directly.
Thank you good folks in advance.
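
One way to get close to this with few cross-process calls - a minimal sketch, not an answer from the thread - relies on Cognito issuing standard JWTs whose signing keys are published at a well-known JWKS URL. Both the API and the Blazor Server app can then validate the same token independently, with no per-check query to the other process; the region and user-pool id below are placeholders:

```csharp
using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.IdentityModel.Tokens;

var builder = WebApplication.CreateBuilder(args);

// Identical configuration in BOTH the API and the Blazor Server project,
// so each process validates the Cognito JWT locally against the pool's
// published signing keys (fetched from the Authority's JWKS endpoint).
builder.Services
    .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        // Placeholders: substitute your pool's region and id.
        options.Authority = "https://cognito-idp.<region>.amazonaws.com/<userPoolId>";
        options.TokenValidationParameters = new TokenValidationParameters
        {
            // Cognito access tokens carry "client_id" rather than "aud".
            ValidateAudience = false
        };
    });

var app = builder.Build();
app.UseAuthentication();
app.UseAuthorization();
app.Run();
```

With this in place, the Blazor app can make its 'should I display it?' decisions from the validated claims alone, and only call the API when it actually needs data.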

PowerApps to call Azure API App

I am new to PowerApps development. I am trying to connect PowerApps to my custom APIs (an Azure API App) and I'm getting 'resource not found' results. I can call the API from browsers and Postman with no problem. The .json file I use for PowerApps is the same one I use with editor.swagger.io (for testing). I checked the application's log file on Azure: all of the requests from browsers are logged, but not the ones from PowerApps. My question is: how does PowerApps call APIs, and what is the right format for the .json file used by a PowerApps app?
Thank you.
I would recommend trying again; we had a small issue on our backend that was causing some 404s at times. A fix for it has been deployed, so you might see it work now.
PowerApps uses Swagger to determine the shape of the REST API so it can project those APIs into "formulas" that can be used easily in the client.
Also, for development/troubleshooting purposes I highly recommend using Fiddler to see exactly which REST call PowerApps is making, and to verify that the URL and parameters are correct. If they aren't, look into your Swagger definition and make sure there are no issues with the paths provided there.
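
For reference, PowerApps expects a Swagger 2.0 document describing the API. A minimal, hypothetical definition looks like the following - the host, path, and operationId are placeholders, and the paths must match what the service actually serves:

```json
{
  "swagger": "2.0",
  "info": { "title": "My Azure API App", "version": "1.0.0" },
  "host": "myapiapp.azurewebsites.net",
  "basePath": "/",
  "schemes": [ "https" ],
  "paths": {
    "/api/items": {
      "get": {
        "operationId": "GetItems",
        "summary": "Returns the list of items",
        "produces": [ "application/json" ],
        "responses": {
          "200": { "description": "OK" }
        }
      }
    }
  }
}
```

A mismatch between "host"/"basePath"/"paths" here and the deployed app is a common cause of 'resource not found', since PowerApps builds its requests entirely from this file.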
You might also check that your Azure App API has either:
The PowerApps IP Addresses Whitelisted OR
If available, the "Allow Access to Azure Services" option toggled
When building Azure SQL backends for PowerApps, one of these paths must be followed.

Creating a content hub and client application using Piranha CMS

First off, I need to mention that I'm not sure if what I'm trying to achieve is even supported by Piranha CMS (that's partly what I'm trying to determine here). They mention the ability to create a standalone content hub on their website, but my assumptions about what is possible with that model might be incorrect. What I've already done is create an ASP.NET MVC application that hosts Piranha CMS, and I've published it to Azure Websites for testing purposes--that part works as expected. The content management interface is the only user-facing piece here--it is meant only to serve as the content hub for the client application (just the one for now, as this is just proof-of-concept work).
I am now trying to build a client ASP.NET MVC application that pulls content from the hub. This is where I'm thinking that my assumptions may have been wrong. I was thinking that I'd be able to install the Piranha CMS nuget package(s) on the client as well, and I'd be able to configure the framework to get content from the hub in the same way that it would if the content were hosted on the client site. I realize that I could get the content from the hub using Piranha's REST api, but what I want to do is to be able to use the more friendly entity model based api for this.
So my question is whether it is possible (within reason) to setup Piranha CMS in the way that I've described. If it is, how exactly do I configure the client such that it is aware of the location of the content hub?
There's currently no .NET client API consuming the REST services, as the simplest scenario has been to deploy .NET applications together with the server. In the setups I've done, native apps and HTML5 knockout/angular applications have used the REST APIs for getting JSON data. You should, however, be able to write such a module, performing the HTTP calls and deserializing the JSON, without any problems.
Regards
Håkan
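
A sketch of the kind of module Håkan describes - the endpoint path and the Post shape below are assumptions for illustration, not Piranha's actual REST contract:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

// Hypothetical client-side wrapper around the content hub's REST endpoints.
public class ContentHubClient
{
    private readonly HttpClient _http;

    public ContentHubClient(Uri hubBaseAddress) =>
        _http = new HttpClient { BaseAddress = hubBaseAddress };

    // "rest/post/{slug}" is an assumed route - substitute the hub's real one.
    public Task<PostModel?> GetPostAsync(string slug) =>
        _http.GetFromJsonAsync<PostModel>($"rest/post/{slug}");
}

// Local model mirroring whatever JSON the hub returns (shape assumed).
public record PostModel(Guid Id, string Title, string Body);
```

The client application would then work against these local models rather than Piranha's entity API, with the hub's URL supplied in configuration.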

What is a robust browser-based method for uploading (very) large files?

I'm looking at replacing our current file-upload solution, a bespoke Java application which transmits files and metadata using SFTP, with a browser-based solution. My intention is to have finer-grained control over who can and cannot upload by tying the upload to an authenticated session in a web app. This will also enable me to collect reliable data about who uploaded what and when, etc., in a straightforward manner.
My concern is that we need to be able to support uploading huge files - think 100GB or more. As such, I don't think a standard HTTP upload is appropriate - I don't trust it to be reliable, and I want to be able to provide user feedback such as progress bars.
The best idea I've come up with so far is an embedded applet which uses sftp to push the file, but I'd like to do this using only js or similar if at all possible.
There is a project that aims to enable resumable uploads: https://tus.io/.
Its client library provides progress reporting and resume-on-interruption in the browser.
You can integrate the server part into your app to manage authentication yourself, while benefiting from the resumability!
Here is a blog post https://tus.io/blog/2018/09/25/adoption.html in which they mention it being used by Cloudflare.
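
If the web app is .NET-based, one way to integrate that server part is tusdotnet, a community .NET implementation of the tus protocol. A minimal sketch assuming its disk-based store - the URL path and storage folder are placeholders, and the endpoint should sit behind your existing authentication:

```csharp
using tusdotnet;
using tusdotnet.Models;
using tusdotnet.Stores;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Hook the tus protocol handler into the ASP.NET Core pipeline.
// "/files" and the storage folder are assumptions - pick your own,
// and gate the endpoint behind your authenticated session.
app.UseTus(httpContext => new DefaultTusConfiguration
{
    UrlPath = "/files",
    Store = new TusDiskStore(@"C:\uploads")
});

app.Run();
```

The browser side then points the tus client library at "/files" and gets chunked, resumable uploads with progress events, which covers both the 100GB+ reliability concern and the progress-bar requirement.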