We are working on integrating OneDrive into our service, and there is one issue we are facing with the OAuth redirects.
We have multiple deployment stages in our development process, each with a different base URL: local development, multiple test deployments, and one production deployment. Is there any way to use different base URLs for the allowed redirect URLs? Will this ever be supported? Why is it not? Dropbox and GDrive both support this.
My only idea would be to use different apps for the different stages, which would introduce some complexity I would like to avoid.
What is the best process of handling different urls in development and production?
I found "Consuming onedrive rest api in development environment using localhost", which is not something I want to do, since it will create confusion at some point if you forget to change your hosts file back.
Regards,
arne
Daron from OneDrive here. Unfortunately that's not supported right now, so your best bet is to register different apps for your different stages. We've noted your feedback though.
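If you do go the multiple-registrations route, the per-stage differences can at least be isolated to a single configuration lookup. A minimal sketch (all client IDs, URLs, and the DEPLOY_STAGE variable name below are made-up placeholders):

```python
import os

# Hypothetical per-stage OneDrive app registrations: one app per stage,
# each registered with the redirect URL that stage actually uses.
ONEDRIVE_APPS = {
    "local":      {"client_id": "local-client-id",
                   "redirect":  "http://localhost:5000/auth/callback"},
    "test":       {"client_id": "test-client-id",
                   "redirect":  "https://test.example.com/auth/callback"},
    "production": {"client_id": "prod-client-id",
                   "redirect":  "https://app.example.com/auth/callback"},
}

# Pick the registration for the current stage, defaulting to local dev.
stage = os.environ.get("DEPLOY_STAGE", "local")
app = ONEDRIVE_APPS.get(stage, ONEDRIVE_APPS["local"])
```

The rest of the OAuth code then only ever reads `app["client_id"]` and `app["redirect"]`, so the multiple registrations never leak beyond this one table.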
I have a Discord bot project on GitHub which deploys automatically to Azure Web Apps. Since my project uses an API, it needs a token in a 'config.json' file. I want to share my source code, but I couldn't figure out how to make this happen without revealing sensitive data like my token.
Is there a way to not hard-code it? This may be possible with a virtual machine, but I couldn't make it work with the App Service plan.
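One common approach is to keep the token out of the repository entirely: store it as an App Service application setting, which Azure injects into the process as an environment variable, and read it at startup instead of from config.json. A sketch (the DISCORD_TOKEN name is an arbitrary choice, not anything the bot requires):

```python
import os

def get_token() -> str:
    """Fetch the bot token from the environment.

    Locally you can set DISCORD_TOKEN in your shell; on Azure App Service
    an "Application setting" of the same name is exposed the same way.
    """
    token = os.environ.get("DISCORD_TOKEN")
    if not token:
        raise RuntimeError(
            "DISCORD_TOKEN is not set; add it as an App Service "
            "application setting or export it locally")
    return token
```

The source on GitHub then contains no secret at all, and each deployment supplies its own token through its own settings.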
With a self-hosted instance like Matomo, and a smart edge router like Traefik, I was hoping to find an automated solution for analytics via Traefik configuration, instead of injecting JavaScript into each hosted service on my Docker-based server.
Tracking usage in the backend seems to me the best way, instead of relying on 'the goodness of the browser', especially with ad blockers targeting Matomo.
Has anyone tackled this in any fashion?
Yes, it's possible with Log Analytics: https://matomo.org/log-analytics/
See also: https://github.com/matomo-org/matomo-log-analytics
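The missing piece on the Traefik side is getting it to write server-side access logs in a format the log importer can parse. A minimal sketch, assuming Traefik v2 static configuration (the file path is an example):

```yaml
# traefik.yml (static configuration) -- hypothetical log path
accessLog:
  filePath: /var/log/traefik/access.log
  # Common Log Format, which Matomo's log importer understands
  format: common
```

You would then feed that file to the import script from the matomo-log-analytics repository on a schedule (e.g. via cron), so tracking happens entirely server-side and ad blockers never see it.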
I am building a service-oriented system for personal use (plus a few friends may have limited access as well). The aim is to have a dashboard for controlling my apps running on various machines such as Raspberry Pis (potentially expanded to a VPS or two in the future).
The architecture itself is pretty simple. For authentication I want to use AWS Cognito. Services would communicate with the Web API (and potentially with each other) using gRPC within a VPN, and the dashboard would be served by Blazor Server (I may move to the Blazor WASM hosted model if I find a need for it). Each of the processes may or may not be on the same machine as any other (depending on its purpose). The Blazor server may or may not run within the VPN (I might want to move it to separate web hosting later).
I created a simple diagram to visualize it:
The problem comes with authentication. I want to have the Blazor server and the API as separate processes (for now they're going to run on the same machine, but I may want to move them elsewhere eventually). Ideally authentication should be handled by the API, so that it is client-agnostic and the API can use it to verify whether the logged-in user can perform an action; that part by itself is simple.
However, I want the Blazor server to use and validate that token as well, in order to determine what to display to the user. I want to do this with as few calls as possible, ideally avoiding querying the API for every 'should I display it or not?' choice.
I could easily do it by sacrificing the possibility of moving the API elsewhere, and just merging the Blazor server and the API gateway into one project. For my current purpose that would be enough, but it's not an ideal solution, so first I want to look into how I could achieve my original vision.
How could I achieve this with a minimal number of Blazor-server-to-API queries?
I've googled a lot for a solution, but so far I've only found examples that either combine the Blazor server and the API into one project, or make client-side calls to the API directly.
Thank you good folks in advance.
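Since Cognito issues signed JWTs, the Blazor server can validate a token locally (against the public signing keys Cognito publishes for the user pool) and read the claims itself, with no API round trip per UI decision. As a self-contained illustration of the principle only, here is a stdlib-only sketch using a shared HMAC secret in place of Cognito's RS256/JWKS setup (all names are made up):

```python
import base64
import hashlib
import hmac
import json

# Hypothetical key shared by the token issuer (API) and validator (Blazor server).
SECRET = b"shared-between-api-and-blazor"

def sign(payload: dict) -> str:
    """Issue a signed token -- what the API would do after login."""
    body = base64.urlsafe_b64encode(json.dumps(payload).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest().encode()
    return (body + b"." + sig).decode()

def verify(token: str):
    """Validate locally -- what the Blazor server would do. No API call needed.

    Returns the claims dict, or None if the signature does not match.
    """
    body, _, sig = token.encode().rpartition(b".")
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(sig, expected):
        return None
    return json.loads(base64.urlsafe_b64decode(body))

token = sign({"sub": "arne", "role": "admin"})
claims = verify(token)  # the UI can branch on claims["role"] locally
```

In the real setup the Blazor server would verify Cognito's RS256 signature instead; the point is that validation needs only key material, not a query to your API for every display decision.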
I have several systems working under different subdomains. Each system serves some RESTful APIs, which are used by mobile clients.
Now I am looking at Apigility for managing those APIs.
But the architecture is not completely clear to me. All of my systems serving APIs are built with Zend Framework 2.
Will I have to add the Apigility component to every system? Or can I set up another subdomain where I use Apigility, and configure the APIs for all systems from one place? Or will I have to configure Apigility for every system separately?
It would be nice if I could access Apigility under api.example.com and then see the different APIs of the systems I have configured. The APIs would be served under the systems' subdomains, for example:
system1.example.com/api/documents
system2.example.com/api/pictures
etc.
Is this possible with Apigility?
You can have multiple APIs in one Apigility install.
By default they will be accessed at yourdomain/apiname/resource, or something similar.
One thing you may not know is that Apigility follows certain standards, such as HAL, and its developers prefer to avoid nested resources (resource1/resource2/3). If you don't work within these conventions, you may end up fighting the framework all day.
Fellow Grails developers, help me find the most Groovy way to accomplish the following.
So, I have a Tomcat instance with CAS enabled, which is used by all hosted webapps.
One of these webapps is a Grails application.
As for the files in the web-app folder of this Grails application, they can just see request.getRemoteUser() as expected.
As for the grails-app part (to get the user name in controllers, for example), as far as I understand, I need to take some additional steps.
My question is: what is the most Groovy way to accomplish this?
Off the top of my head, just trying to resolve this issue by any means: I could forward requests to controllers from the web-app part, but that's quite perverse. Or I could install the Grails CAS plugin, but that means doubling up configuration, since I have already configured web.xml.
Or can I somehow use springSecurityService for this purpose?
But what is the most elegant way to get the user name in a Grails application when CAS is already working for the plain webapps?
There are a few different ways to handle this:
Use spring-security-core with spring-security-cas; but like you say, this adds a second CAS layer
Access user information through request.remoteUser, as in any other webapp
Configure a PreAuthenticatedAuthenticationProvider and a RequestHeaderAuthenticationFilter to get Spring Security to trust the request headers set by Tomcat; there's some information on setting this up at http://dickersonshypotheticalblog.blogspot.com/2011/05/grails-spring-security-using.html