I am working in Ektron 8.6.
Does anyone know how API level caching is managed in Ektron?
Are there any config settings to manage API level caching (web.config or any other config files)? Is API level caching enabled by default? Is it different in the previous version (Ektron 8.5)?
Starting in version 8.5, Ektron introduced a caching layer that sits beneath its Framework APIs. It is configurable (enable, disable, set TTL, etc.) and extensible (provider-based, so you can implement providers for various cache servers like Redis, etc.).
It is not enabled by default. By default, each API call ultimately hits the database (or the search index). Since this is new in version 8.5+, older versions of Ektron do not have any sort of built-in API level caching, though you can obviously take advantage of any standard .NET caching you'd want to create on your own.
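For the pre-8.5 / roll-your-own case, here is a minimal sketch of that kind of standard .NET caching, using System.Runtime.Caching and a delegate so it isn't tied to any particular API call. The key format, the 5-minute TTL, and the LoadContent lookup are illustrative assumptions, not Ektron API specifics:

using System;
using System.Runtime.Caching;

public static class ApiResultCache
{
    // In-proc, application-scope cache (the same general idea as the OOB provider).
    private static readonly MemoryCache Cache = MemoryCache.Default;

    // Returns the cached value for 'key' if present; otherwise runs 'fetch'
    // (e.g. a Framework API call that would hit the database) and caches the result.
    public static T GetOrAdd<T>(string key, Func<T> fetch, TimeSpan ttl) where T : class
    {
        var cached = Cache.Get(key) as T;
        if (cached != null)
        {
            return cached;
        }

        var value = fetch();
        if (value != null)
        {
            Cache.Set(key, value, DateTimeOffset.UtcNow.Add(ttl));
        }
        return value;
    }
}

// Usage - wrap whatever API lookup you need (LoadContent here is a placeholder):
// var item = ApiResultCache.GetOrAdd("content:" + id, () => LoadContent(id), TimeSpan.FromMinutes(5));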
Here's a technical webinar that goes into detail on the API level caching in v8.5+. The piece relevant to your question starts at 26:25, but I'd watch the whole thing if you haven't seen it already.
http://www.ektron.com/Webinars/Details/Optimize-Site-Performance-through-Caching/
The default Ektron cache provider uses in-memory / in-proc application-scope storage. Once you outgrow that, you may want to take a look at this open source project, which implements a third-party cache provider for Redis. You can use this as is, use it as the stub for your own cache provider for another system, or just stick with the OOB in-proc cache provider.
https://github.com/ektron/EktronContrib/blob/master/README.md
Bill
I am in a situation where we have a Feature Service that is not publicly available. Of course, this means that it is not possible to access it directly from JavaScript (and it is not desirable either, as it holds some sensitive information).
Therefore, what I was thinking is that instead of our Web App connecting directly to the ArcGIS services, it would connect to our 'proxy / middleware / man in the middle' system, and this would in turn authenticate and query ArcGIS. This would also allow us to restrict any access to sensitive data while enriching the data with our own data that lives somewhere else.
I have a gut feeling that this is completely wrong... and that the way to do it is to create some publicly available space and do the enriching on ArcGIS itself. From a business-case perspective, this is the least desirable route.
Thanks!
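For what it's worth, the 'proxy / middleware' idea above can be sketched as a small ASP.NET Core endpoint that authenticates to ArcGIS server-side, forwards the query, and strips sensitive fields before anything reaches the browser. Everything here is an illustrative assumption - the route, the token handling, the URL, and the field names are placeholders, not ArcGIS specifics:

using System;
using System.Net.Http;
using System.Text.Json.Nodes;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.DependencyInjection;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddHttpClient();
var app = builder.Build();

app.MapGet("/api/features", async (IHttpClientFactory httpFactory, string whereClause) =>
{
    var http = httpFactory.CreateClient();

    // 1. Authenticate to ArcGIS on the server side (token acquisition omitted - placeholder).
    var token = "<server-side ArcGIS token>";

    // 2. Forward the query to the private Feature Service (placeholder URL).
    var url = $"https://arcgis.internal/FeatureServer/0/query?where={Uri.EscapeDataString(whereClause)}&f=json&token={token}";
    var json = JsonNode.Parse(await http.GetStringAsync(url));

    // 3. Strip sensitive attributes before returning the result to the client.
    foreach (var feature in json?["features"]?.AsArray() ?? new JsonArray())
    {
        feature?["attributes"]?.AsObject().Remove("SensitiveField");
    }

    // 4. Optionally enrich the result with data from elsewhere here.
    return Results.Text(json!.ToJsonString(), "application/json");
});

app.Run();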
I am building a service-oriented system for personal use (plus a few friends may have limited access as well) - the aim is to have a dashboard for controlling my apps running on various machines such as Raspberry Pis (and potentially expanded to one or more VPSes in the future).
The architecture itself is pretty simple. For authentication I want to use AWS Cognito. Services would communicate with the Web API (and potentially with each other) using gRPC within a VPN, and the dashboard would be served by Blazor server-side (I may move to the Blazor WASM Hosted model if I find a need for it). Each of the processes may or may not be on the same machine as any other (depending on the purpose). The Blazor server may or may not run within the VPN (I might want to move it to separate web hosting later).
I created a simple diagram to visualize it:
The problem comes with authentication. I want to have Blazor server-side and the API as separate processes (for now they're going to run on the same machine, but I may want to move them elsewhere eventually). Ideally authentication should be handled by the API, so authentication is client-agnostic, and the API can use it to verify whether the logged-in user can perform an action - which by itself is simple.
However, I want the Blazor server to use and validate that token as well in order to determine what to display to the user. I want to do this with as few calls as possible - ideally avoiding querying the API for every 'should I display it or not?' choice.
I could easily do it by sacrificing the possibility of moving the API elsewhere and just merging Blazor Server and the API gateway into one project. For my current purpose that would be enough, but it's not an ideal solution, so first I want to look into how I could achieve my original vision.
How could I achieve this (with a minimal number of Blazor-server-to-API queries)?
I have googled for a solution a lot, but so far have only found examples of either using Blazor server and the API as one project, or making client-side calls to the API directly.
Thank you good folks in advance.
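One way to get this without extra Blazor-to-API round-trips is to have both processes validate the same Cognito-issued JWT locally against the user pool's signing keys, so the Blazor server can make its 'what to display' decisions from the token's claims on its own and only call the API for data. A rough sketch, assuming the Microsoft.AspNetCore.Authentication.JwtBearer package and placeholder region / user pool values (not a drop-in configuration):

// Shared between the Web API project and the Blazor Server project:
using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.IdentityModel.Tokens;

public static class CognitoAuthExtensions
{
    public static IServiceCollection AddCognitoJwt(this IServiceCollection services)
    {
        // Placeholder values - substitute your pool's region and user pool id.
        const string region = "eu-west-1";
        const string userPoolId = "eu-west-1_XXXXXXXXX";

        services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
            .AddJwtBearer(options =>
            {
                // Both processes fetch Cognito's signing keys from the same issuer,
                // so each can validate tokens locally without calling the other.
                options.Authority = $"https://cognito-idp.{region}.amazonaws.com/{userPoolId}";
                options.TokenValidationParameters = new TokenValidationParameters
                {
                    ValidateIssuer = true,
                    ValidateIssuerSigningKey = true,
                    // Cognito access tokens carry client_id rather than aud,
                    // so audience validation is often handled separately.
                    ValidateAudience = false
                };
            });

        return services;
    }
}

On the Blazor side the validated claims can then drive what gets rendered (for example via AuthorizeView / AuthenticationStateProvider), while the API re-validates the same token on every call it actually receives.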
I am currently looking at implementing feature toggles using ff4j for our application. We want to have a remote central config app which will hold all the features, and the applications will talk to this central config app via REST to get them. We will not be able to leverage Spring Cloud Config or Archaius for this purpose.
I went through the documentation and it seems there is support for HttpClient (https://github.com/ff4j/ff4j/wiki/Store-Technologies#httpclient), but I couldn't find any sample for it. Can someone please let me know if I can leverage this method to build my feature store from a REST endpoint? I would also appreciate it if someone could point me to a sample.
This is a common pattern.
A component holds the Administration UI (console) and the REST API. You can call it the "Admin Component". For security reasons it may be the only component with access to the persistence unit (any of the 15 DB implementations available).
For the "admin component", HERE is a sample using a standalone Spring Boot application with a JDBC database, and HERE you can find a simple web application.
The REST API can be secured using user/password credentials and/or an API key. More information HERE.
All microservices access the REST API as clients and query the feature store. You will need the dependency ff4j-webapi-jersey2x or ff4j-webapi-jersey1x, which holds the HTTP client. Then you can define the store using:
FeatureStoreHttp storeHTT = new FeatureStoreHttp("http://localhost:9998/ff4j");
Warning: please consider using a cache to limit the overhead introduced by accessing the REST API on each feature usage. More info on caching HERE.
I am going to run a web app on JBoss App Server 7. Does JBoss have some sort of inbuilt user management module/API which I can use rather than coding my own? Or do I have to make this module myself? I know about the default JAAS pieces providing authentication AND authorisation; however, I am also looking to manage (add, edit, delete) users in the datasource.
I'm not being lazy or anything, just want to know if JBoss has an easy inbuilt way before I start :)
Google implies no, so I want to make sure by asking here.
As far as I know they don't provide an easily managed identity provider; they "only" provide ways to connect to an identity provider using standard protocols like LDAP, SAML, WS-Trust and OpenID to provide container-managed authentication.
They have an IDM project, but it seems to provide standard-protocol SSO backed by some identity store and doesn't provide a way to manage the users.
PicketBox and PicketLink are the two JBoss projects you should look at for more information.
These elements can be used whether you want a global identity system, an existing one, a new product deployment, or a custom build.
(disclaimer: I have spent some time on the Picket* projects' documentation and I still don't think I have a good grasp of how it all works...)
There is a web interface and a command line interface for management operations. See the Management Clients section of the documentation.
The security realms could be what you're after. I'm not really a security expert though.
Maybe a security domain could be helpful too.
I have a C# .NET application which serves both the company's internal users and external customers. I need to do fine-grained authorization, like who accesses what resource. So I need something like resource-based or attribute-based rather than role-based authorization.
What comes to my mind is to either:
Implement my own authorization mechanism and SQL tables for my .NET application
Use/implement a standard mechanism, like software that has implemented XACML (for instance Axiomatics)
The problem with the first method is that it is neither centralized nor standard, so other systems cannot use it for authorization.
The problem with the second approach is that it is potentially slower (due to the extra calls needed for each resource). Also, I am not sure how widely a standard like XACML is supported by applications on the market, which would make future integrations easier.
So, in general what are the good practices for fine-grained authorization for web applications that are supposed to serve both internal users and external customers?
I would definitely go for externalized authorization. It doesn't mean it will be slower. It means you have cleanly separated access control from the business logic.
Overview
XACML is a good way to go. The TC is very active and companies such as Boeing, EMC, the Veterans Administration, Oracle, and Axiomatics are all active members.
The XACML architecture guarantees you can get the performance you want. Since the enforcement (PEP) and the decision engine (PDP) are loosely coupled you can choose how they communicate, what protocol they use, whether to use multiple decisions, etc... This means you have the choice to go for the integration which fits your performance needs.
There is also a standard PDP interface defined in the SAML profile for XACML. That guarantees you 'future-proof' access control where you are not locked into any particular vendor solution.
Access control for webapps
You can simply drop in a PEP for .NET web apps by using HTTP filters in ISAPI and ASP.NET. Axiomatics has one off the shelf for that.
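As a rough sketch of what such a drop-in enforcement point looks like in classic ASP.NET (the module and the decision hook below are placeholders made up for illustration, not Axiomatics' actual filter):

using System;
using System.Web;

// A PEP as an IHttpModule: every request is intercepted during the authorization
// stage and the decision is delegated to an external decision point.
public class PepHttpModule : IHttpModule
{
    // Placeholder decision hook - in a real setup this would call out to the PDP
    // (one possible shape of that call is sketched a little further down).
    public static Func<string, string, string, bool> Decide = (user, action, resource) => true;

    public void Init(HttpApplication app)
    {
        app.AuthorizeRequest += (sender, e) =>
        {
            var context = ((HttpApplication)sender).Context;
            var user = context.User?.Identity?.Name ?? "anonymous";

            if (!Decide(user, context.Request.HttpMethod, context.Request.Path))
            {
                context.Response.StatusCode = 403;
                context.ApplicationInstance.CompleteRequest();
            }
        };
    }

    public void Dispose() { }
}

The module is then registered in web.config like any other IHttpModule, so the application code itself stays free of authorization logic.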
Current implementations
If you check Axiomatics' customers page, you'll see they have PayPal, Bell Helicopter, and more. So XACML is indeed a reality and it can tackle very large deployments (hundreds of millions of users).
Also, Datev eG, a leading financial services provider, is using Axiomatics' .NET PDP implementation for its services/apps. Since the .NET PDP is embedded in that case, performance is optimal.
Otherwise, you can always choose from off-the-shelf PEPs for .NET that integrate with any PDP - for instance via a SOAP-based XACML authorization service.
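To make that PEP-to-PDP hop concrete, here is a hedged C# sketch of an enforcement point posting a request shaped like the XACML JSON profile to a PDP's REST endpoint; the endpoint URL and the short attribute IDs are placeholders for illustration, not any particular vendor's API:

using System;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

public static class PdpClient
{
    private static readonly HttpClient Http = new HttpClient();

    // Asks the external PDP whether 'subject' may perform 'action' on 'resource'.
    public static async Task<bool> IsPermittedAsync(string subject, string action, string resource)
    {
        // Request body roughly following the XACML JSON profile (attribute ids are illustrative).
        var request = new
        {
            Request = new
            {
                AccessSubject = new { Attribute = new[] { new { AttributeId = "subject-id", Value = subject } } },
                Action = new { Attribute = new[] { new { AttributeId = "action-id", Value = action } } },
                Resource = new { Attribute = new[] { new { AttributeId = "resource-id", Value = resource } } }
            }
        };

        var body = new StringContent(JsonSerializer.Serialize(request), Encoding.UTF8, "application/json");
        var response = await Http.PostAsync("https://pdp.example.com/authorize", body); // placeholder PDP endpoint
        response.EnsureSuccessStatusCode();

        // An XACML response carries a Decision of Permit / Deny / NotApplicable / Indeterminate.
        using var doc = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
        var decision = doc.RootElement.GetProperty("Response")[0].GetProperty("Decision").GetString();
        return decision == "Permit";
    }
}

The decision call is a small request that caches well per user or per session, which is how the latency concern in the question is usually addressed.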
High levels of performance with XACML
Last July at the Gartner "Catalyst" conference, Axiomatics announced the release of their latest product, the Axiomatics Reverse Query which helps you tackle the 'billion record challenge'. It targets access control for data sources as well as RIA. It uses a pure XACML solution so that it remains interoperable with other solutions.
As a matter of fact, Kuppinger Cole will host a webinar on the topic very soon: http://www.kuppingercole.com/events/n10058
Check out the Axiomatics ARQ press release too here: http://www.axiomatics.com/latest-news/216-axiomatics-releases-new-reverse-query-authorization-product-a-breakthrough-innovation-for-authorization-services.html
Definitely look for a drop-in authorization module for your ASP.NET application. I'm not just saying that because I implement drop-in auth systems at BiTKOO, but because I have had to work with home-grown auth implementations in the past. Building your own authorization system for a single application really is not a good use of your time or resources unless you intend to make a career out of implementing security systems.
Externalizing the authorization decision from your app is a good idea from an architectural standpoint. Externalizing the authz decision gives you an enormous amount of flexibility to change your access criteria on the fly without having to shut down your web service or reconfigure the web server itself. Decoupling the web front-end from the authz engine allows you to scale each independently according to the load and traffic patterns of your application, and allows you to share the authz engine across multiple apps.
Yes, adding a network call to your web app will add some overhead to your web response compared to having no authorization at all or using a local database on the web server. That shouldn't be a reason not to consider external authorization. Any serious authorization product you consider will provide some sort of caching capability to minimize the number of network calls required per web request or even per user session across multiple web requests.
In BiTKOO's Keystone system, for example, the user attributes can be cached on the web server per user-session, so there's really only one back-end network request involved on the first page request as part of establishing a user login. Subsequent page requests (within the lifetime of the cached credentials, usually 5 minutes or so) can be handled by the web server without needing to hit the authz service again. This scales well in cloud web farms, and is built on XACML standards.
I need to do fine-grained authorization like who accesses what resource. So I need something like resource-based or attribute-based rather than a role-based authorization.
Check out this: https://zanzibar.academy/. Zanzibar is a project made at Google to solve fine-grained authorization at scale.
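As a toy illustration of the core idea in the Zanzibar paper - authorization data stored as relation tuples of the form object#relation@user, answered by a check(object, relation, user) call - here is a hedged C# sketch (direct tuples only; the real system also handles relation rewrites, usersets, and consistency at scale):

using System;
using System.Collections.Generic;

// Toy model of Zanzibar-style relation tuples: ("doc:readme", "viewer", "user:alice")
// means user:alice has the 'viewer' relation on the object doc:readme.
public class RelationTupleStore
{
    private readonly HashSet<(string Object, string Relation, string User)> tuples = new();

    public void Write(string obj, string relation, string user) =>
        tuples.Add((obj, relation, user));

    // The 'check' API: is this user related to this object via this relation?
    public bool Check(string obj, string relation, string user) =>
        tuples.Contains((obj, relation, user));
}

public static class Demo
{
    public static void Main()
    {
        var store = new RelationTupleStore();
        store.Write("doc:readme", "viewer", "user:alice");

        Console.WriteLine(store.Check("doc:readme", "viewer", "user:alice")); // True
        Console.WriteLine(store.Check("doc:readme", "editor", "user:alice")); // False
    }
}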
Use/implement a standard mechanism, like a software that has implemented XACML (for instance Axiomatics). The problem with the second approach is that it is potentially slower (due to extra calls needed for each resource).
Auth0 is working on a solution called FGA (https://fga.dev) that will be optimized for low latency. It's built upon the Zanzibar paper.
Disclaimer: I am employed at Auth0.