Session Expiration Handling in Apache Click (Clickide-2.3.0.0)

I am new to the Apache Click framework and have been evaluating Clickide-2.3.0.0 over the past few days. I am stuck on one part involving session timeouts.
If I want to handle session expiry by setting the session timeout interval in my server configuration (I am using Apache Tomcat 7), Click provides no support for this, unlike other frameworks such as ZK (where a timeout-uri can be specified in the .zul file).
All the workarounds I have found so far involve hard-coding the validation and constraint checks explicitly in Java, using session objects or context manipulation. No support is available from the framework side.

Every Page or Control has a Context object, which gives you access to the underlying session. The Apache Click docs have some examples of using it.
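For instance, here is a minimal sketch of a base page that checks for an expired session in onSecurityCheck() and redirects to a login page. The class name and the /login.htm target are placeholder assumptions, not Click built-ins:

import org.apache.click.Page;

public class BorderPage extends Page {

    @Override
    public boolean onSecurityCheck() {
        // Context.hasSession() is false when no live session exists,
        // e.g. after the container (Tomcat 7) has expired it
        if (!getContext().hasSession()) {
            setRedirect("/login.htm");
            return false; // abort further page processing
        }
        return true;
    }
}

Pages that require a live session can then extend this base page, which keeps the timeout handling in one place rather than scattered across controls.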

Related

ColdFusion 2018 - Requests Executed Multiple Times

With a new project we encountered some strange behaviour in our ColdFusion application.
Whenever a single request is initiated from the browser, the code of the CFML templates is executed multiple times. Upon viewing the corresponding log files we found that, indeed, for some reason the same request fires the evaluation in our application multiple times: one request generates several log entries. This is especially the case for long-running requests, such as database imports.
The ColdFusion application implements a REST service, but even when manually requesting a resource, such as a certain CFML page, on the same application, the code gets executed an unknown number of times (variable initializations, database write operations, etc. take place), and if the request runs too long (the cap appears to be around 4-6 seconds) there is no response to the browser.
About the infrastructure:
The application is ColdFusion 2018 Standard Edition with Tomcat.
The webserver is an Apache (2.4.6).
Everything runs on a Linux machine with CentOS 7.7.
The corresponding Java version is 11.0.4
Our best guess is that there might be some miscommunication between the ColdFusion connector and the Apache webserver. We searched for configuration parameters that could cause the problem, without success. On an installation on a Windows machine we did not encounter this error.
Anyone got any idea?
We just found our answer in the following post:
Link to Solution

"The search engine appears to be down or failing to respond to the search query"

I've installed FusionAuth (awesome product) into a Docker Swarm cluster using the official docker-compose.yml file and everything seems to work brilliantly.
EXCEPT
Periodically, when a user goes to login they will be presented with the above error stating that the search engine is not available. If they try again immediately then everything works correctly! I would, obviously, prefer that they never saw the error.
Elasticsearch is definitely running and is responding to API calls correctly, and I can see the fusionauth_user index is present and populated with docs.
I guess my question is two fold:
1) What role does the ElasticSearch engine play in the FusionAuth ecosystem and can it be disabled?
2) Is there a configurable timeout somewhere that is causing the error message and, if so, where can I change it?
I've searched the docs for answers to the above but I can't seem to find anything :-(
Thanks for the kind feedback.
1) What role does the ElasticSearch engine play in the FusionAuth ecosystem and can it be disabled?
Elasticsearch provides full-text search of user data. Each time a user is created or updated, the user is re-indexed. In this case, during login, we are updating the search index with the last-login instant.
This service is required and cannot be disabled. We have had clients request to make this service optional for embedded applications or small scale scenarios where Elasticsearch may not be required. While this is not currently in plan, it is possible we may revisit this option in the future.
2) Is there a configurable timeout somewhere that is causing the error message and, if so, where can I change it?
Not currently.
Full disclosure: I am not a Docker or Docker Swarm expert at all - perhaps there are some nuances to Swarm and response time due to spin-up and spin-down of resources?
Do you see any exceptions in the log when a user sees this error on the login?

How to track down long running calls to IIS?

Our users are restless. They keep complaining about woolly, unmeasurable stuff, particularly slowness, without giving specifics, which of course makes it very difficult to track down.
Nonetheless, it is quite possible that they are right, that there are server calls that are taking way too long to come back. So I want to put some kind of sniffer on the web site (we're using ASP.NET MVC 4 on IIS7) that will log any call that takes more than n seconds to turn around, or that returns more than x megabytes of data, along with all request parameters, the response size, and maybe a certain amount of response data.
I haven't a clue how to do this, though. Any suggestions?
Here is my take on this:
FRT
While you can use Failed Request Tracing to log slow requests, in my experience it's more useful for finding out why a request fails before it hits your application, rather than why it's running slowly. 9 times out of 10 it's simply going to show you that the slowdown is in your code somewhere.
Log Parser
Yes, you can download and analyze IIS logs. I use Log Parser Lizard to do the analysis - it's a great GUI over Log Parser. Here's a sample of how you might query slow requests over 1000 ms:
SELECT
To_String(To_timestamp(date, time), 'dd/MM/yyyy hh:mm:ss') As Time,
cs-uri-stem, cs-uri-query, cs-method, time-taken, cs-bytes, sc-status
FROM
'C:\inetpub\logs\LogFiles\W3SVC1\u_ex140721.log'
WHERE
time-taken > 1000
ORDER BY time-taken DESC
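Since the question also asks about oversized responses, you could extend the WHERE clause to catch those too, e.g. time-taken > 1000 OR sc-bytes > 10485760 for anything slower than a second or larger than 10 MB. sc-bytes is the server-to-client byte count; this assumes the sc-bytes field is enabled in your IIS log configuration, as it isn't always logged by default.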
New Relic
My recommendation - go easy on yourself and sign up for a free trial. No, I don't work for them, but I've used their APM product a lot. Install the agent on the server and set it up. In 10 minutes you will be amazed at the data you see about the site. Trust me.
It's designed to work in production environments and gives you amazing depth of info on what's running slow, down to the database query and stack traces. It's pure awesome. Once it's set up, wait for the next user complaint, then log in and look at traces for the time frame.
When your pro trial ends, you can still get valuable data on the free tier, but it will only keep the last 24 hours. We purchased licenses - expensive, yes, but worth every cent. Why? The time taken to identify root causes was reduced by an order of magnitude, we can get proactive by looking at what is number 2, 3 and 4 on the slow-requests list and working on those before they become big problems, and finally the alerting makes us much more responsive when things go wrong.
Code it
You could roll your own. This blog uses MVC ActionFilters to do the logging. You could also use an HttpModule similar to this post; a rough sketch of that approach follows below. The nice thing about the module approach is you can compile and implement it separately from your application, and then just drop in the DLL and update web.config to wire up the module. I would be wary of these approaches for a very busy site. Also, getting the right level of detail to fully identify the root cause is challenging.
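For illustration, a minimal sketch of such a module, assuming ASP.NET 4.x on the IIS integrated pipeline. The 3-second threshold, the log path and the class name are placeholders, and the naive File.AppendAllText is exactly why you'd be wary of this on a busy site:

using System;
using System.Diagnostics;
using System.IO;
using System.Web;

public class SlowRequestLoggerModule : IHttpModule
{
    private const int ThresholdMs = 3000; // log anything slower than this

    public void Init(HttpApplication app)
    {
        app.BeginRequest += (s, e) =>
        {
            // stash a stopwatch on the request so EndRequest can read it back
            app.Context.Items["SlowLog.Stopwatch"] = Stopwatch.StartNew();
        };

        app.EndRequest += (s, e) =>
        {
            var sw = app.Context.Items["SlowLog.Stopwatch"] as Stopwatch;
            if (sw == null) return;
            sw.Stop();
            if (sw.ElapsedMilliseconds < ThresholdMs) return;

            var req = app.Context.Request;
            File.AppendAllText(@"C:\logs\slow-requests.log",
                string.Format("{0:u} {1}ms {2} {3}{4}",
                    DateTime.UtcNow, sw.ElapsedMilliseconds,
                    req.HttpMethod, req.RawUrl, Environment.NewLine));
        };
    }

    public void Dispose() { }
}

Wire it up with a <modules> entry under <system.webServer> in web.config and it applies to every request without touching application code.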
View Requests
As touched on by Appleman1234, IIS has a little-known feature to look at requests currently executing. It's handy for the 'hey, it's running slow right now' situation. You can use appcmd.exe or the IIS GUI to do it. You will need to install the 'Request Monitor' IIS feature for this to work. This approach is OK for rudimentary narrowing of the problem, but does not show you what's running slowly in your controller.
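For example, with the Request Monitor feature installed, this lists in-flight requests that have been running longer than 3 seconds (the threshold is in milliseconds):

%windir%\system32\inetsrv\appcmd list requests /elapsed:3000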
There are various ways you can do this:
Failed Request Tracing (FRT) – formerly known as Failed Request Event Buffering (FREB) – with a custom failure condition of taking over a certain time to load/run
Logging request information with IIS logging functionality and then using a tool like LogParserStudio
Using tools like Fiddler or IISMonitor on the IIS server to capture request information
For FRT the official documentation is available here, and information on how to capture dumps for long-running processes is available here
For logging request information in IIS, information about log file analysis is located here
For information on configuring Fiddler to capture IIS requests find information here
A summary of the steps in the linked resources is provided below.
For FRT
From IIS Manager for a given site, in the Actions pane, under Configure, click Failed Request Tracing and enter the desired values in the dialog box to enable Failed Request Tracing.
From IIS Manager for a given site, under IIS, click Failed Request Tracing Rules in order to define the rules of failure for a given request. In the Actions pane, click Add and follow the wizard.
The logs will go in the directory you specify and are viewable in a web browser.
For IIS logging
Logging is enabled by default on IIS
From IIS Manager for a given site, under IIS, click Logging, and in the Actions pane, click Enable to enable logging if it isn't already.
From IIS Manager for a given site, under IIS, click Logging, then configure as desired and click Apply.
Install Log Parser, .NET 4.x and LogParserStudio (if you need additional steps, see here).
Open LogParserStudio and add logs to it; you can then use SQL queries to get information from the log files.
For Fiddler
You need to change the user that IIS runs as to a user that can launch applications like Fiddler (instead of Network Service), and then launch Fiddler as that user.
Also see Monitor Activity on a Web Server (IIS 7) for further information.

How to call Apache NMS from a sandbox?

I'm trying to call Apache ActiveMQ NMS Version 1.6.0 from my code ('IntPub') that must run in a sandbox in a .NET 4.0 environment for security reasons. The program that creates the sandbox makes my code 'partially trusted' and therefore 'security-transparent' which seems to mean that it can't create a ConnectionFactory (see error log below) because NMS seems to be 'security-critical'. Here's the code that's causing this error:
connecturi = new Uri("tcp://my.server.com:61616");
var connectionFactory = new ConnectionFactory(connecturi);
I also tried this instead with similar results:
connecturi = new Uri("activemq:tcp://my.server.com:61616");
var connectionFactory = NMSConnectionFactory.CreateConnectionFactory(connecturi);
Since I can't change the security level of my assembly (the sandbox prevents it), is there a way to make NMS run as 'safe-critical' so it can be called by 'security-transparent' code? Would I have to recompile it to do so, or does NMS do some operation that would never be considered 'safe-critical'?
I appreciate any help or suggestions...
Assembly 'IntPub, Version=1.0.0.0, Culture=neutral, PublicKeyToken=6fa620743b8dc60a' is partially trusted, which causes the CLR to make it entirely security transparent regardless of any transparency annotations in the assembly itself. In order to access security critical code, this assembly must be fully trusted. Detail:
<OrganizationServiceFault xmlns:i="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.microsoft.com/xrm/2011/Contracts">
<ErrorCode>-2147220956</ErrorCode>
<ErrorDetails xmlns:d2p1="http://schemas.datacontract.org/2004/07/System.Collections.Generic" />
<Message>Unexpected exception from plug-in (Execute): Test.Client: System.MethodAccessException: Attempt by security transparent method 'Test.Client.Execute(System.IServiceProvider)' to access security critical method 'Apache.NMS.ActiveMQ.ConnectionFactory..ctor(System.Uri)' failed.
From the error message attributes, it looks like you're running a Dynamics CRM 2011 plugin in sandbox mode, which has some very specific rules about what you can and can't do. In particular, you're only allowed to make network connections via HTTP and HTTPS, so attempting raw TCP sockets will definitely fail.
Take a look at this MSDN page on Plug-in Isolation, Trusts, and Statistics. It looks like there may be a way to relax the network restrictions by modifying a system registry entry to include tcp, etc., in the regex value. Below is an excerpt from the page. Note: I have not done this myself, so I can't say for sure it'll work.
Sandboxed plug-ins and custom workflow activities can access the network through the HTTP and HTTPS protocols. This capability provides support for accessing popular web resources like social sites, news feeds, web services, and more. The following web access restrictions apply to this sandbox capability.
Only the HTTP and HTTPS protocols are allowed.
Access to localhost (loopback) is not permitted.
IP addresses cannot be used. You must use a named web address that requires DNS name resolution.
Anonymous authentication is supported and recommended. There is no provision for prompting the logged on user for credentials or saving those credentials.
These default web access restrictions are defined in a registry key on the server that is running the Microsoft.Crm.Sandbox.HostService.exe process. The value of the registry key can be changed by the System Administrator according to business and security needs. The registry key path on the server is:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSCRM\SandboxWorkerOutboundUriPattern
The key value is a regular expression string that defines the web access restrictions.
The default key value is:
"^http[s]?://(?!((localhost[:/])|([.])|([0-9]+[:/])|(0x[0-9a-f]+[:/])|(((([0-9]+)|(0x[0-9A-F]+)).){3}(([0-9]+)|(0x[0-9A-F]+))[:/]))).+";*
By changing this registry key value, you can change the web access for sandboxed plug-ins.
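Purely as an illustration of the kind of change the excerpt is talking about (untested, and the answer above already cautions that this is unverified), a variant of the default value that also admits tcp URIs might look like:

"^(http[s]?|tcp)://(?!((localhost[:/])|(\[.*\])|([0-9]+[:/])|(0x[0-9a-f]+[:/])|(((([0-9]+)|(0x[0-9A-F]+))\.){3}(([0-9]+)|(0x[0-9A-F]+))[:/]))).+";

Whether the sandbox worker would actually open a raw socket once the pattern matches is a separate question; the documented restrictions above only mention HTTP and HTTPS.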

ASP.NET gurus - small issue when setting app domain name for sharing SQL session in scale-out scenario

We have scaled-out some portions of our ASP.NET app to run on one server, and other portions to run on another server (& under a subdomain).
The two servers share (SQL Server) session state. We used this MS article to create a tiny HTTP module to sync the app domain name between the two servers (sans the cookie-domain code, which can be configured in the web.config; I later found this CodeProject article, which is essentially the same).
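For context, a rough sketch of the kind of module those articles describe: it overwrites the private _appDomainAppId field that the SQL session-state provider keys on, so both servers report the same application name. 'MySharedApp' is a placeholder, and this mirrors the published sample rather than production-hardened code:

using System.Reflection;
using System.Web;

public class SharedAppNameModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        // grab the singleton HttpRuntime instance
        FieldInfo runtimeField = typeof(HttpRuntime).GetField(
            "_theRuntime", BindingFlags.NonPublic | BindingFlags.Static);
        HttpRuntime runtime = (HttpRuntime)runtimeField.GetValue(null);

        // overwrite the app id used to key session rows in the ASPState database
        FieldInfo appIdField = typeof(HttpRuntime).GetField(
            "_appDomainAppId", BindingFlags.NonPublic | BindingFlags.Instance);
        appIdField.SetValue(runtime, "MySharedApp");
    }

    public void Dispose() { }
}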
Everything's working well, except for a small issue: deployment changes or web.config tweaks require a manual app pool recycle (the auto-recycle no longer works - instead we get the "web server is currently unavailable / hit refresh" error).
I tried moving the app-domain naming code from the HTTP module into the Application_Start section of Global.asax (maybe this is a better place for it?) - but ran into the same problem.
I know that one solution is to hard-code the app name in one of the SQL Server Session stored procedures; but am a bit hesitant to do this.
Edit: The app is ASP.NET 3.5 under IIS 6.0 (thanks @Chris & @bzlm)
You should check whether the proper recycling events are turned on in IIS; maybe this can help: http://support.microsoft.com/kb/332088
Update: We opened a tech support case with Microsoft about this. After a week or so of back and forth, they said they had reproduced the issue in their environment and understood the cause (a timing issue deep inside the ASP.NET internals), but that there is no resolution they're aware of. I complained that the HTTP module is Microsoft code, but they said this code is published under "FAST PUBLISH" terms - intended to help and advise customers, yet not warranted.
Ah well. We now just manually recycle the app pool after making a web.config change.