Is there any way to quickly switch private endpoint on/off? - azure-virtual-network

I have an Azure web app whose inbound traffic is limited to a private endpoint, so it has only a private IP. The setup is working well, and I have a VM inside the subnet from which I can browse my Azure web app.
This is a dev environment and I need to experiment with and verify something.
Is there any way to toggle the private endpoint off, have the web app available to the public internet for a period of time, then toggle the private endpoint back on?
I know I can remove the web app from its VNet, but I am hoping to find a quick way to toggle the private endpoint off and on.

I reproduced the scenario and found that toggling on and off can't be done.
One simple thing you can do is remove the private endpoint by going to the web app >> Networking >> Private endpoints, selecting the endpoint, and removing it from your web app; when you need it again, you can recreate it.
Once a private endpoint is added it is effectively turned on, but there is no option to turn it off.
The only way to turn it off is to remove the private endpoint.
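If you want to script that remove-and-recreate cycle instead of clicking through the portal, a minimal sketch with the Azure.ResourceManager.Network SDK could look like the following (the subscription, resource group, and endpoint names are placeholders, and the recreate step is only outlined in a comment):

using Azure;
using Azure.Identity;
using Azure.ResourceManager;
using Azure.ResourceManager.Network;
using System.Threading.Tasks;

class TogglePrivateEndpoint
{
    static async Task Main()
    {
        var arm = new ArmClient(new DefaultAzureCredential());

        // Build the resource ID of the existing private endpoint.
        var peId = PrivateEndpointResource.CreateResourceIdentifier(
            "<subscription-id>", "<resource-group>", "<private-endpoint-name>");

        // "Toggle off": delete the private endpoint so the web app
        // becomes reachable over the public internet again.
        await arm.GetPrivateEndpointResource(peId).DeleteAsync(WaitUntil.Completed);

        // "Toggle on": recreate it later with the original subnet and
        // connection details (GroupIds = { "sites" } for a web app), e.g. via
        // resourceGroup.GetPrivateEndpoints().CreateOrUpdateAsync(...).
    }
}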

Related

Is it possible to use Azure Blob Storage on a website that has no authentication?

I need to create a way for anyone who visits my website to upload an image to an Azure Blob Container. The website will have input validations on the file.
I've considered using an Azure Function to write the validated file to the Blob Container, but I can't seem to find a way to do this without exposing the Function URL to the world (similar to this question).
I would use a system-assigned managed identity (SAMI) to authenticate the Function to the Storage account, but because the URL is exposed, anyone could take the Function URL, bypass the validations, and upload.
How is this done in the real world?
If I understand correctly, the user uploads a file via an HTTP POST call to your server, which validates it. You would then like to use an Azure Function to upload the validated file to Blob Storage.
In this case, you can restrict access to the Azure Function so that it can only be called from your server's IP address. This way users cannot reach the Function directly. This can be done via the networking settings, and it is available on all Azure Function plans.
You could also consider implementing the validation logic within the Azure Function.
Finally (perhaps I should have started with this), if you are only considering writing an Azure Function to upload data to a Storage Account, you should perhaps first consider using the Blob Service REST API, specifically the PUT Blob endpoint. There are also official Storage Account SDKs for different languages/ecosystems that you could use to do this.
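To make those last two suggestions concrete, here is a minimal sketch of an HTTP-triggered Function that re-validates the upload server-side and writes it to Blob Storage with the Azure.Storage.Blobs SDK, authenticating with the system-assigned managed identity via DefaultAzureCredential (the container name, account URL, form field, and validation rules are assumptions, not the asker's code):

using System;
using System.IO;
using System.Threading.Tasks;
using Azure.Identity;
using Azure.Storage.Blobs;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class UploadImage
{
    [FunctionName("UploadImage")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req)
    {
        IFormFile file = (await req.ReadFormAsync()).Files["image"];

        // Repeat the validations server-side; client-side checks can be bypassed.
        if (file == null || file.Length == 0 || file.Length > 4 * 1024 * 1024)
            return new BadRequestObjectResult("Missing or oversized file.");
        if (file.ContentType != "image/png" && file.ContentType != "image/jpeg")
            return new BadRequestObjectResult("Only PNG or JPEG images are accepted.");

        // DefaultAzureCredential resolves to the Function's managed identity in Azure.
        var container = new BlobContainerClient(
            new Uri("https://<account>.blob.core.windows.net/uploads"),
            new DefaultAzureCredential());

        using Stream content = file.OpenReadStream();
        await container.UploadBlobAsync(
            Guid.NewGuid() + Path.GetExtension(file.FileName), content);

        return new OkResult();
    }
}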
• Since you are using the Azure Function's default generic URL on your website for uploading blobs with no authentication, I would suggest creating an 'A' host record for your Function App. As you have a website, you presumably have a custom domain for it, and that custom domain's DNS records must be hosted on a public DNS server. On that same public DNS server, create an 'A' host record for the Function App and assign it the public IP address shown in Azure; this ensures the public DNS server can resolve the Function App globally. Then create a 'CNAME' record with the default generic Azure Function App URL as the alias and the 'A' host record as its value.
In this way, whenever an anonymous visitor to your website tries to upload an image, they will see the Function App URL as 'abc.xyz.com' rather than the generic Azure Function App URL, which achieves your objective.
• Once the above has been done, publish the newly created 'CNAME' record on the public DNS server as your Function App URL. This masks the generic Azure Function App URL rather than exposing it, and the connection remains secure, since you will upload an SSL/TLS certificate for HTTPS in the Function App itself.
For more information, kindly refer to the documentation link below:
https://learn.microsoft.com/en-us/azure/dns/dns-custom-domain

Logic App: 403 when trying to connect to storage behind a firewall

I deployed a Logic App (Standard) to use VNet integration. In our scenario we want to get an attachment from an email and store it in a Data Lake storage account. We are using the following connectors:
Office 365 and
Azure Blob Storage
The problem is that our storage account is behind a firewall and a private endpoint. If the storage account allows all networks, the flow works, but when it is behind the firewall we get a 403 (although the Logic App has VNet integration, the traffic passes over the internet, as I can see in Log Analytics).
I also followed the Microsoft docs, as well as this link, without success:
https://techcommunity.microsoft.com/t5/integrations-on-azure/deploying-standard-logic-app-to-storage-account-behind-firewall/ba-p/2626286
I also tried this, and it works:
https://learn.microsoft.com/en-us/azure/connectors/connectors-create-api-azureblobstorage?WT.mc_id=Portal-Microsoft_Azure_Support&tabs=single-tenant#access-blob-storage-with-managed-identities
but the file I get is corrupted, e.g. the body from the other connector if it is a CSV, or the attachment if it is an Excel file. The flow goes via HTTPS.
Is there a way to use VNet integration with a storage private endpoint, or is there a way to take the attachment and save it as-is via the HTTP connector, regardless of the file extension?
A Standard Logic App with a private endpoint cannot access a storage account that also has a private endpoint. The storage account can be used as the Logic App's own storage, but accessing the storage account and its files from the Logic App is not possible if both resources are in the same region.
To achieve this, we need to create the Logic App and the storage account in two different regions and whitelist the IPs. Refer to the link below:
https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/access-azure-blob-using-logic-app/ba-p/1451573

.NET Core Web Application - app_offline equivalent / Safely stop web application for maintenance

If hosting a web application with IIS, I know there's an app_offline.htm file that can be used, but I'm hosting a .NET Core web app in a Linux environment with Apache. Does anyone know the safest approach to taking an app offline in this situation, so that I can make changes to my app without breaking anything?
EDIT: The intent is to keep the website online but prevent login or any interactions within the web application until the maintenance tasks are complete and the app is restarted.
After a little research, I made something that should serve the purpose of an 'app_offline' by using an ActionFilter.
Using dependency injection, this filter takes a boolean value set in my appsettings.json; if it is true, the filter redirects users to a 'down for maintenance' page. My understanding is that using IOptionsSnapshot instead of IOptions picks up the value if it changes at runtime. This is also nice when you have a public-facing website but only want the backend application to prevent any actions until changes have been made and the app restarted. Below is an example of my ActionFilter. The only thing I don't like is that this filter has to be set on pretty much every controller in the app. I'd prefer something that could be checked earlier in the pipeline, so if anyone has a better approach I'd love to see it.
using Microsoft.AspNetCore.Mvc.Filters;
using Microsoft.Extensions.Options;

// Apply via [ServiceFilter(typeof(AppStateAttribute))] so the
// IOptionsSnapshot dependency can be constructor-injected.
public class AppStateAttribute : ActionFilterAttribute
{
    private readonly ServiceSettings _settings;

    public AppStateAttribute(IOptionsSnapshot<ServiceSettings> options)
    {
        _settings = options.Value;
    }

    public override void OnActionExecuting(ActionExecutingContext context)
    {
        // Redirect to the maintenance page while the flag is set.
        if (_settings.AppOffline)
        {
            var redirectOnFail = "/maintenance";
            context.HttpContext.Response.Redirect(redirectOnFail, true);
        }

        base.OnActionExecuting(context);
    }
}
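As for checking the flag earlier in the pipeline: one option (a minimal sketch, assuming the same ServiceSettings class; this is not from the original answer) is a small middleware registered before MVC, so every request is checked without decorating each controller:

using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Options;
using System.Threading.Tasks;

// Short-circuits every request with a redirect while AppOffline is true.
public class MaintenanceMiddleware
{
    private readonly RequestDelegate _next;

    public MaintenanceMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    // IOptionsSnapshot is scoped, so it is injected here per request
    // rather than in the constructor (middleware instances are singletons).
    public async Task InvokeAsync(HttpContext context,
        IOptionsSnapshot<ServiceSettings> options)
    {
        if (options.Value.AppOffline &&
            !context.Request.Path.StartsWithSegments("/maintenance"))
        {
            // Let the maintenance page itself through to avoid a redirect loop.
            context.Response.Redirect("/maintenance");
            return;
        }

        await _next(context);
    }
}

// In Startup.Configure, register it early, before UseRouting/UseEndpoints:
// app.UseMiddleware<MaintenanceMiddleware>();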

Access to api from gitlab webhooks

I need to develop a bunch of my own web hooks (or maybe services) for auto-deploy, reporting into project management systems, etc.
But the data posted to a web hook doesn't carry enough information for my needs.
For example, I've received a simple push event: how can I know whether it is a force push or not? Okay, I have two treeishes, so let's look at the repository and check this push... oops, I need a user token to do that. Sad.
What is the right way to access the GitLab API from web hooks? Have I missed something important? I'm really confused.
Upd1:
Let's try to find a solution. Possibilities:
Imagine we have a user who can read all projects in GitLab. But that user would have to be added to each project to have access. Ok ;-(
What about reading the repo as the pusher? We can't, because we would need his private token to do that.
Maybe some internal functionality to read all repos, or something? Surely not.
So, maybe the database? Nope. We would need to clone the repo first, and we can't save data in the DB anyway without refreshing caches.
I think we need a security token and maybe a set of checkboxes with access permissions for each attached web hook or app (service).
Please feel free to share your ideas.
I've remembered a partial solution. The scenario would be like this:
Create a web service with your web hook.
Create an SSH key on the same host for some special user (usually the owner of the web hook service) to have access to the repos.
Add the SSH key created in the previous step as a deploy key.
Finally: register your web hook and add your deploy key for that hook to the project; repeat for each project that needs this hook.
You then have an event listener (your web hook service) and access to the repository (ssh/git).
But that solution still doesn't have access to the API itself.
Probably, there is also another solution:
Create a custom admin user with a big random password and some synthetic name like HookBot or something, and remember that user's private_token;
Register your web hook;
Use API access to add your deploy key as HookBot (untested);
Use the sudo API to get the sources or anything else. Just mimic the pusher's account (sudo -u {author_id}) and go on: read the repo, work with it, etc.
Maybe there are some other, more legitimate solutions?
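To illustrate the HookBot idea, here is a rough C# sketch of a webhook receiver calling the GitLab API with the admin user's private_token and the sudo parameter (the host, token, project ID, and user ID are all placeholders, and the endpoint path assumes the current v4 API):

using System;
using System.Net.Http;
using System.Threading.Tasks;

class HookBotApiClient
{
    static async Task Main()
    {
        var http = new HttpClient { BaseAddress = new Uri("https://gitlab.example.com/api/v4/") };

        // Authenticate as the HookBot admin user.
        http.DefaultRequestHeaders.Add("PRIVATE-TOKEN", "<HookBot-private-token>");

        // Impersonate the pusher (requires an admin token) and read the project.
        HttpResponseMessage resp = await http.GetAsync("projects/42?sudo=123");
        Console.WriteLine(await resp.Content.ReadAsStringAsync());
    }
}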

Web Service missing methods when called from Silverlight

I created a WCF web service, deployed it, and debugged it. I wrote a console app, referenced the web service, and everything works.
Now I'm attempting to consume the web service in a Silverlight 3 application. I added the following code to a click event.
TagServiceClient client = new TagServiceClient();
Tag[] tags = client.GetTags();
client.Close();
VS is telling me it can't find the GetTags() and Close() methods. But VS has no problem with these methods in the console app.
I added a using statement for the service reference to the top of my file.
I placed a clientaccesspolicy.xml file in the root domain and in the folder containing the web service. It doesn't seem to change anything regardless of where it is.
What's going on? Any suggestions? This is my first time consuming a web service in Silverlight so I may just be missing something.
You will need to generate a new client proxy to use in the Silverlight app; in other words, from the Silverlight app, add a new service reference and point it at the service.
You will then see that things are a little different: there are async methods in the proxy, not the synchronous ones you saw in the proxy generated for the console app. So in the Silverlight app, your code will end up looking something like this:
client.GetTagsCompleted += [my event handler];
client.GetTagsAsync();
and in your event handler:
if (e.Error == null && !e.Cancelled)
{
    List<Tag> tags = new List<Tag>(e.Result);
}
When you add the service reference to the Silverlight app, make sure you have a poke around the advanced settings, because you can change what sort of collection the items are returned in, etc. (the default return collection is an ObservableCollection<T>).
If you want to avoid this sort of thing (different proxies for different apps or modules), then consider using svcutil to generate your proxy instead of allowing VS to do it (VS doesn't use svcutil).