How to log unique requests and check their status in Lucee - http-headers

I am trying to log specific requests by users to determine whether their Lucee request has completed, is still running, etc. The purpose of this is to fire off automated processes on demand and assure the end users that the process has already started so they do not fire off a second one. I have found HTTP_X_REQUEST_ID in other searches, but when dumping the CGI variables it is not listed. I have set CGI variables to Writable rather than Read Only, but it is still not showing up. Is it something I must add in IIS, or a setting in Lucee Admin that I am overlooking? Is there a different way to go about doing this rather than HTTP_X_REQUEST_ID? Any help is appreciated.

Have you considered using <cfflush>? When the Lucee request starts, you can send partial information to the client informing them that the process has started on the server.
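If the X-Request-ID header is not being passed through, another option that does not depend on any header at all is to mint your own unique ID when the automated process is kicked off and record its status somewhere both the process and the status check can read. The sketch below shows that pattern in Python purely to illustrate the idea (the database file and table name are made up); in Lucee the equivalent would be createUUID() plus a query against a status table.

    # Illustrative sketch (not Lucee-specific): mint a unique ID when a long-running
    # process starts, record its status, and check that status before allowing a
    # second run. The database file and table name are placeholders.
    import sqlite3
    import uuid

    DB = "job_status.db"  # placeholder shared store

    def start_job(job_name):
        conn = sqlite3.connect(DB)
        conn.execute("CREATE TABLE IF NOT EXISTS jobs (id TEXT, name TEXT, status TEXT)")
        already_running = conn.execute(
            "SELECT id FROM jobs WHERE name = ? AND status = 'running'", (job_name,)
        ).fetchone()
        if already_running:
            conn.close()
            return None  # tell the user it is already started instead of launching again
        job_id = str(uuid.uuid4())
        conn.execute("INSERT INTO jobs VALUES (?, ?, 'running')", (job_id, job_name))
        conn.commit()
        conn.close()
        return job_id

    def finish_job(job_id):
        conn = sqlite3.connect(DB)
        conn.execute("UPDATE jobs SET status = 'done' WHERE id = ?", (job_id,))
        conn.commit()
        conn.close()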

Related

How to Tell WebLogic Not to Log Certain Requests in Its Access Log?

I have all requests going to access.log on a WebLogic server, and I need to stop logging a few request patterns. Is that possible?
I have already customized the access logger with CustomELFLogger, but there seems to be no option to stop certain requests from going to the access.log file.
Any other thoughts?
There is no built-in feature to filter out particular requests, but a workaround is to implement CustomELFLogger and replace the cs-uri field with a custom class.
In the custom class, just put a condition for the specific requests that need to be filtered and leave the rest alone.
It is working as expected.

Apache server seems to be caching requests

I am running a Flask app on an Apache 2.4 server. The app sends requests to an API built by a colleague using the Requests library. The requests are in a specific format and constructed from data stored in a MySQL database. The site is designed to show the feedback from the API on the index, and the user can edit the data stored in the MySQL database (and by extension, the data sent in the request) via another page, the editing page.
So let's say, for example, a custom date field is set to "2006". I would access the index page, a request would be sent, and the API does its magic and sends back data relevant to 2006. If I then went and changed the date to "2007", the new value would be saved in MySQL and, upon navigating back to the index, the new request would be constructed, sent off, and data for 2007 should be returned.
Unfortunately that's not happening.
When I change details on my editing page they are definitely stored in the database, but when I navigate back to the index the request sends the previous set of data. I think Apache is causing the problem for two reasons:
When I reset the server (service apache2 restart) the data sent back is the 'proper' data, even though I haven't touched the database. That is, the index is initially requesting 2006 data, I change it to request 2007 data, it still requests 2006 data, I restart the server, refresh the index and only then does it request 2007 data like it should have been doing since I edited it.
When I run this on my local Flask development server, navigating to the index page after editing an entry immediately returns the right result - it feeds off the same database and is essentially identical to the deployed server except that it's not running on Apache.
Is there a way that Apache could be caching requests or something? I can't figure out why the server would keep sending old requests until I restart it.
EDIT:
The requests themselves are large and ungainly and the responses would return data that I'm not comfortable with making available for examples for privacy reasons.
I am almost certain that Apache is the issue because as previously stated, the Flask development server has no issues with returning the correct dataset. I have also written some requests to run through Postman, and these also return the data as requested, so the request structure must be fine. The only difference I can see between the local Flask app and the deployed one is Apache, and given that restarting the Apache server 'updates' the requests until the data is changed again, I think that it's quite clearly doing something untoward.
Dirn was completely right; it turned out not to be an Apache issue at all. It was SQLAlchemy all along.
I imagine that SQLAlchemy knows not to do any 'caching' when it requests data on the development server but decides that it's a good idea in production, which makes perfect sense really. It was not using the committed data on every search, which is why restarting the Apache server fixed it: that also reset the connection.
I guess that's what dirn meant by "How are you loading data in your application?" I had assumed that since I turned off Flask's debugging on the development server it would behave just like it would in deployment, but it looks like something slipped through.
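For anyone who hits the same symptom, the usual fix is to make sure the SQLAlchemy session is cleaned up at the end of every request, so the next request starts a fresh transaction and re-reads committed rows instead of reusing what it already has in memory. A minimal sketch, assuming a Flask + SQLAlchemy setup roughly like the one described (the model, connection string, and names below are illustrative, not taken from the question):

    # Minimal sketch: a scoped session that is removed at the end of each request,
    # which prevents a long-lived session in the Apache/mod_wsgi process from
    # serving stale data. Assumes SQLAlchemy 1.4+; all names are placeholders.
    from flask import Flask
    from sqlalchemy import Column, Integer, String, create_engine
    from sqlalchemy.orm import declarative_base, scoped_session, sessionmaker

    app = Flask(__name__)
    engine = create_engine("mysql+pymysql://user:pass@localhost/mydb")  # placeholder DSN
    Session = scoped_session(sessionmaker(bind=engine))
    Base = declarative_base()

    class Entry(Base):
        __tablename__ = "entries"
        id = Column(Integer, primary_key=True)
        date = Column(String(16))

    @app.route("/")
    def index():
        # Each request uses the current scoped session, so it sees the latest commit.
        entry = Session.query(Entry).first()
        return entry.date if entry else "no data"

    @app.teardown_appcontext
    def cleanup(exc=None):
        # Removing the session here is what keeps one request's transaction
        # from leaking stale data into the next request.
        Session.remove()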

Apache2: upload restarts over and over

We are using various upload scripts with the Perl CGI module for our CMS and have not encountered a problem like this for years.
Our customer's employees are not able to get a successful upload.
No matter what kind or size of file, no matter which browser they use, no matter whether they do it at work or log in from home.
If they try to use one of our system's upload pages the following happens:
The upload seems to work until approximately 94% has been transferred. Suddenly, the upload restarts and the same procedure happens over and over again.
A look in the error log shows this:
Apache2::RequestIO::read: (70007) The timeout specified has expired at (eval 207) line 5
The weird thing is that if I log in to our customer's system using our VPN tunnel I can never reproduce the error (I can from home, though).
I have googled without much success.
I checked the Apache timeout setting, which was at 300 seconds - more than generous.
I even checked the Content-Length field for a value of 0, because I found a forum entry referring to a CGI bug related to a Content-Length of 0.
Now I am really stuck and running out of ideas.
Can you give me some new ones, please?
The Apache server is version 2.2.16; the Perl CGI module is version 3.43.
We are using mod_perl.
As far as we knew, our customer didn't use any kind of load balancing.
Without letting anyone else know, our customer's infrastructure department had activated a load balancer. That way requests went to different servers and timed out.

Apache: simultaneous connections to single script

How does Apache (the most popular version nowadays, I guess) handle a connection to a script when that script is already being executed for another connection?
My guess has always been that upon receipt of a request for a script, the script's contents are copied to memory, compiled, and executed, and if during this process there is another request for the same script, the same thing happens again (assuming Apache does not lock the script file and simply gives another share of memory/CPU for another compilation/storage/execution).
Or is there a queuing/waiting mechanism involved?
Assume that this additional connection is afforded enough memory and CPU and does not exceed the maximum connections setting.
The quick (and easy) answer is that every request is processed by a new process.
Apache listens on a port and for each request creates a new process that handles that request. That means no shared memory.
Also, take a look at the processes with the "ps" command; you will see one "http" process for each request.
Take a look here for the more complex workings: http://httpd.apache.org/docs/2.0/mod/worker.html
and have a look at Google too :) http://docstore.mik.ua/orelly/weblinux2/apache/ch01_02.htm
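If you want to see the effect rather than take it on faith, a small sketch is to fire two requests at the same script at once (this assumes the Python requests package and a test URL of your own that deliberately sleeps for a few seconds; the URL below is a placeholder). If Apache queued the second request behind the first, the total time would be roughly double a single request; with a process or thread per request it stays close to the single-request time.

    # Fire two simultaneous requests at the same (hypothetical) slow script and
    # compare timings. Replace URL with an endpoint of yours that sleeps ~5 seconds.
    import time
    from concurrent.futures import ThreadPoolExecutor

    import requests

    URL = "http://localhost/slow-script.php"  # placeholder endpoint

    def fetch(_):
        start = time.monotonic()
        requests.get(URL, timeout=30)
        return time.monotonic() - start

    if __name__ == "__main__":
        overall = time.monotonic()
        with ThreadPoolExecutor(max_workers=2) as pool:
            durations = list(pool.map(fetch, range(2)))
        print("per-request seconds:", [round(d, 2) for d in durations])
        print("total seconds:", round(time.monotonic() - overall, 2))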

Having a script that does something continuously at the back end without the need for a browser

I am kind of confused, so please go easy on me. Take any standard web application implemented with MVC, like CodeIgniter or Rails. The scripts get executed only when a browser sends a request, right? So when a user logs in and sends a request, the server receives it and sends back a response.
Now consider the scenario where, apart from the regular application, I also need something like a backend process. For example, a script which checks whether a bidding period has closed, sends mail to the bidders that the bidding is closed, and chooses the winning bid. All these actions have to happen automatically as soon as the bidding time ends.
Now, if this script were part of the regular app, it would have to be triggered by the client (browser), but I don't want that to happen. This should be like a bot script which runs on the server, checking the DB for events and patterns like this.
How do I go about doing something like this? Also, is it possible to implement this on regular shared or dedicated hosting where we don't have shell access but only FTP access?
You'd have to write your script as a standalone program and either have it run continuously in the background or have cron (or some other scheduling service; also only works if you're only interested in time-based events) execute it for you.
There are probably hosts that have shell-less ways to do this (fancy GUI interfaces for managing background processes or something), but your run-of-the-mill web host with only FTP access definitely doesn't.
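To make that concrete, here is a rough sketch of such a standalone script, written in Python purely as an illustration (the database path, table, columns, and mail server are all placeholders, not part of the original question). Cron would run it every few minutes, and it would close any expired biddings and mail the bidders:

    #!/usr/bin/env python3
    # Standalone checker intended to be run by cron, e.g. every five minutes:
    #   */5 * * * * /usr/bin/python3 /path/to/close_bids.py
    # All names below (database path, table, columns, mail host) are placeholders.
    import smtplib
    import sqlite3
    from datetime import datetime
    from email.message import EmailMessage

    DB_PATH = "/path/to/app.db"  # placeholder; point at your real database

    def close_expired_bids():
        conn = sqlite3.connect(DB_PATH)
        now = datetime.utcnow().isoformat()
        rows = conn.execute(
            "SELECT id, bidder_email FROM biddings WHERE closes_at <= ? AND closed = 0",
            (now,),
        ).fetchall()
        for bid_id, email in rows:
            conn.execute("UPDATE biddings SET closed = 1 WHERE id = ?", (bid_id,))
            msg = EmailMessage()
            msg["Subject"] = "Bidding closed"
            msg["To"] = email
            msg["From"] = "noreply@example.com"  # placeholder sender
            msg.set_content("The bidding you took part in has now closed.")
            with smtplib.SMTP("localhost") as smtp:  # placeholder mail server
                smtp.send_message(msg)
        conn.commit()
        conn.close()

    if __name__ == "__main__":
        close_expired_bids()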
You need a cron job; it's easy to set up on Linux. The cron job will either call the command-line version of PHP with your script or create a local HTTP request with curl or wget.
If you don't have shell access, then you need an external site that automatically generates periodic HTTP requests. A cheap one is setcronjob.