Having a script that does something continuously at the back end without the need for a browser - shared-hosting

I am kind of confused, so please go easy on me. Take any standard web application built with an MVC framework, like CodeIgniter or Rails. The scripts get executed only when a browser sends a request, right? So when a user logs in and sends a request, the server receives it and sends back a response.
Now consider a scenario where, apart from the regular application, I also need something like a back-end process. For example, a script which checks whether a bidding period has closed, emails the bidders that the bidding is closed, and chooses the winning bid. All of these actions have to happen automatically as soon as the bidding time ends.
If this script were part of the regular app, it would have to be triggered by the client (browser), but I don't want that to happen. It should be like a bot script which runs on the server, checking the DB for events and patterns like this.
How do I go about doing something like this? Also, is it possible to implement this on regular shared or dedicated hosting where we don't have shell access but only FTP access?

You'd have to write your script as a standalone program and either have it run continuously in the background or have cron (or some other scheduling service, which only works if you're interested purely in time-based events) execute it for you.
There are probably hosts that offer shell-less ways to do this (fancy GUI interfaces for managing background processes, say), but your run-of-the-mill web host with only FTP access definitely doesn't.
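For illustration, here is a minimal sketch of such a standalone background worker, written in Python (the question mentions PHP frameworks, but the shape is the same in any language; the function name and SQL are hypothetical):

import time

POLL_SECONDS = 60

def closed_unprocessed_auctions():
    # Hypothetical DB query, e.g.:
    #   SELECT id FROM auctions WHERE ends_at <= NOW() AND processed = 0
    return []

def main():
    while True:
        for auction in closed_unprocessed_auctions():
            pass  # pick the winning bid, e-mail the bidders, mark as processed
        time.sleep(POLL_SECONDS)

if __name__ == "__main__":
    main()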

You need a cron job; it's easy to set up on Linux. That cron job will either call the command-line version of PHP with your script or make a local HTTP request with curl or wget.
If you don't have shell access, you need an external service that generates periodic HTTP requests for you. A cheap one is SetCronJob.
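For example, a crontab entry running every five minutes could take either form (the paths and URL are placeholders):

*/5 * * * * /usr/bin/php /path/to/close_bids.php
*/5 * * * * wget -q -O /dev/null http://example.com/cron/close_bids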

Related

How can I silently proceed with TeamCity First Start?

I have installed the TeamCity server on my infrastructure. I was able, for example, to silently configure the external database and some other settings through the TeamCity configuration files. Now, when I start TeamCity, I am facing the "TeamCity First Start" web page, where I can choose whether I want to restore my system from a backup or proceed with a First Start.
I would like to automate that installation as much as possible. Is there a reliable way, e.g. through the TeamCity API, to trigger the action underlying the "Proceed" button of that web page, other than writing a bot that browses the web page and clicks the button? I found no such thing in the API documentation, but I might have missed something.
I think something like this could be called on the server host:
curl -X POST http://localhost:8111/mnt/do/goNewInstallation
but I don't know what data to POST or how to obtain the necessary authentication to be allowed to run that command. Indeed, running the above command yields the following error:
The session is not authenticated. Access denied.
At that point, the super user authentication token has not yet been written to the server log file.

How to log unique requests and check their status in Lucee

I am trying to log specific user requests to determine whether a Lucee request has completed, is still running, etc. The purpose of this is to fire off automated processes on demand and assure end users that a process has already started, so they do not fire off a second one. I have found HTTP_X_REQUEST_ID in other searches, but when dumping the CGI variables it is not listed. I have set CGI variables to Writable rather than Read Only, but it is still not showing up. Is it something I must add in IIS, or a setting in Lucee Admin that I am overlooking? Is there a different way to go about this rather than HTTP_X_REQUEST_ID? Any help is appreciated.
Have you considered using <cfflush>? When the Lucee request starts, you can send partial output to the client informing it that the process has started on the server.
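A minimal sketch of that pattern in CFML (the long-running work is a placeholder):

<cfoutput>Process started...</cfoutput>
<cfflush>
<!--- the long-running work continues here; the browser has already
      received the confirmation above --->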

How to monitor HTTP request/response data?

I have a Java EE application where I want to monitor some of my HTTP requests and responses and write them to a file. It should also be under user control, i.e. whenever the user wants to monitor, he can switch it on/off in the application. How do I do that? There are several external tools for monitoring, but I don't want to do it manually.
You can set up a proxy server, like the well-known Burp Proxy does.
You can then easily read the HTTP requests and responses as they pass through and write them to a file, as you want.
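As an in-application alternative to a proxy, a servlet Filter can log requests with a runtime on/off switch. A minimal sketch, with a hypothetical flag and log path (capturing response bodies would additionally need an HttpServletResponseWrapper):

import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.concurrent.atomic.AtomicBoolean;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;

public class RequestLogFilter implements Filter {
    // Flip this from an admin action to switch monitoring on/off at runtime.
    public static final AtomicBoolean ENABLED = new AtomicBoolean(false);

    @Override public void init(FilterConfig config) {}
    @Override public void destroy() {}

    @Override
    public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
            throws IOException, ServletException {
        if (ENABLED.get() && req instanceof HttpServletRequest) {
            HttpServletRequest http = (HttpServletRequest) req;
            // Append one line per request to a hypothetical log file.
            try (PrintWriter out = new PrintWriter(new FileWriter("/tmp/http-monitor.log", true))) {
                out.println(http.getMethod() + " " + http.getRequestURI());
            }
        }
        chain.doFilter(req, resp);
    }
}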

Tcl shell through Apache

I have a tool which supports interactive queries through a Tcl shell. I want to create a web application through which users can send different queries to the tool. I have done some basic programming with the Apache web server and CGI scripts, but I am unable to think of a way to keep the shell alive and send queries to it.
Some more information:
Let me describe it in more detail. The tool builds a graph data structure; after building it, users can query for information through the Tcl shell, e.g. get all child nodes of a particular node. I cannot rebuild the data structure on every query because building it takes a lot of time. I want to build the data structure once and somehow keep the shell alive. The Apache server should send all queries to that shell and return the responses to the user.
You might want to create a daemon process, perhaps using Expect, that spawns your interactive program. The daemon could listen for queries over TCP using Tcl's socket command. Your CGI program would then create a client socket to talk to the daemon.
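A minimal sketch of such a daemon in Tcl (the port number and handle_query are hypothetical; in practice handle_query would consult the in-memory graph, or drive the spawned tool via Expect):

proc handle_query {line} {
    # Placeholder: answer from the in-memory graph or forward to the tool.
    return "result for: $line"
}

proc serve {chan} {
    if {[gets $chan line] >= 0} {
        puts $chan [handle_query $line]
    } elseif {[eof $chan]} {
        close $chan
    }
}

proc accept {chan addr port} {
    fconfigure $chan -blocking 0 -buffering line
    fileevent $chan readable [list serve $chan]
}

socket -server accept 9900
vwait forever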
I'd embed the graph-managing program into an interpreter that's also running a small webserver (e.g., tclhttpd, though that's not the only option) and have the rest of the world interact with the graph through RESTful web accesses. This could then be integrated behind Apache in any way you like: a CGI thunk would work, or you could do request forwarding, or you could write some server-side code to do it (there are many options there). You could even just let clients connect directly. Many options would work.
The question appears to be incomplete, as you did not specify what exactly "interactive" means with regard to your tool.
How does it support interactive queries? Does it call gets in a kind of endless loop and process each line as it's read? If so, the solution to your problem is simple: the Tcl shell is not really concerned with whether its standard input is connected to an interactive terminal or not. So just spawn your tool in your CGI request-handling code, write the user's query to that process's stdin stream, flush it, and then read all the text the process writes to its stdout and stderr streams. Then send that back to the browser. How exactly to spawn the process and communicate with it via its standard streams depends heavily on your CGI code.
If you don't get the idea, try writing your query to a file and then doing something like
$ tclsh /path/to/your/tool/script.tcl </path/to/the/query.file
and you should see the tool respond in the usual way.
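In Tcl-based CGI code, for instance, the same interaction over pipes might look like this (requires Tcl 8.6 for the half-close; the script path and userQuery are placeholders):

set chan [open |[list tclsh /path/to/your/tool/script.tcl] r+]
puts $chan $userQuery
chan close $chan write    ;# send EOF so the tool finishes and exits
set answer [read $chan]
close $chan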
If the interaction is carried out in some other way in your tool, then you probably have to split it into a "core" and a "front-end" part, so that the core just reads queries and outputs results while the front-end carries out the interaction. Then hook that core up to your CGI processing code in the way outlined above.

Pyramid/Pylons: How to check if an uploaded file is complete in a POST request?

I'm building a web tool which allows users to upload PDFs to a server using their web browsers. The server is based on Python (Paste + Pyramid).
The problem I have right now is the following: If a user uploads a rather large file (let's say 100 MB) and they cancel the upload before it is completed, my handler code on the server is still called (instead of the request being aborted).
The problem is that the request.POST['myfile'].file is incomplete when that happens. This effectively means that the PDF file is corrupted if I simply write it to some place on the server.
When I watch the server's log, it shows a "broken pipe" exception within the Paste server; however, I have no idea how to catch that exception and have it prevent my view/handler code from executing and storing the incomplete file.
It seems like the paster HTTP server does not correctly validate the uploaded form data and simply passes the request down the WSGI pipeline even though the connection (HTTP POST) was closed by the client.
I worked around this issue by simply setting up NGINX to act as a reverse proxy. This also adds some security benefits as it might be better tested than paster.
Update:
My main problem was that I was using runserver (the built-in web server of manage.py). After some trial and error, we ended up using WSGI.
More specifically, uWSGI and Nginx as web server. Static content is served directly by Nginx while dynamic pages are piped through uWSGI and are handled by the Python web app.
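The reverse-proxy setup mentioned above might look roughly like this in the NGINX configuration (a sketch; the port assumes Pyramid's conventional default of 6543, and the size limit is arbitrary):

server {
    listen 80;
    location / {
        proxy_pass http://127.0.0.1:6543;  # the Paste/Pyramid app
        client_max_body_size 200m;         # allow large PDF uploads
    }
}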
Unless you are doing something fancy (like tracking the upload progress, etc.), your Pylons controller should not be invoked until the entire file has been uploaded.
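If the handler can be invoked with a truncated body anyway, one belt-and-braces workaround (not from the answers above) is to have the client submit the expected file size as an extra form field and verify it server-side. A hedged Pyramid-style sketch; the field names 'myfile' and 'filesize' are hypothetical, and the view wiring is omitted:

import shutil
import tempfile

def upload_view(request):
    upload = request.POST['myfile']           # a cgi.FieldStorage from WebOb
    expected = int(request.POST['filesize'])  # set by the client, e.g. via JavaScript
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        shutil.copyfileobj(upload.file, tmp)
        received = tmp.tell()
    if received != expected:
        # Truncated (or inconsistent) upload: discard it instead of storing it.
        return {'ok': False, 'error': 'incomplete upload'}
    return {'ok': True, 'path': tmp.name}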