How does GNU screen actually work?

I have been trying to find information on how GNU screen actually works at a high level, without having to read through the source code, but I have been unable to do so.
What does screen do that lets it stick around even when a terminal session is closed? Does it run as a daemon that everyone who invokes screen connects to, which then finds the right pseudo-tty session to attach to? Or does it do something completely different?

There are a lot of potential questions in this question, so I'm going to concentrate on just one:
What does screen do that lets it stick around even when a terminal session is closed?
Screen catches HUP signals, so it doesn't automatically exit when its controlling terminal goes away. Instead, when it gets a HUP, it goes into background mode (since it no longer has an actual terminal attached) and waits. When you start screen with the various -d/-D/-r/-R/-RR options, it looks for an already-running screen process (possibly one already detached after receiving a HUP, or one it detaches directly by sending it a HUP) and takes over that process's child terminal sessions. This is a cooperative handoff: the old screen process sends all of its master PTYs to the new process to manage, then exits.

I haven't studied Screen itself in depth, but I wrote a program that was inspired by it on the user end and I can describe how mine works:
The project is my terminal emulator, which has an emulation core, a GUI front end, a terminal front end, and the two screen-like components: attach.d and detachable.d.
https://github.com/adamdruppe/terminal-emulator
attach.d is the front end. It connects to a particular terminal through a unix domain socket and forwards the output from the active screen out to the actual terminal. It also sends messages to the backend process telling it to redraw from scratch, among a few other things (and more to come; my thing isn't perfect yet).
https://github.com/adamdruppe/terminal-emulator/blob/master/attach.d
My terminal.d library provides an event loop that translates terminal input and signals. One of those signals is HUP, which is sent when the controlling terminal is closed. When the front-end attach process sees it, it closes, leaving the backend process in place:
https://github.com/adamdruppe/terminal-emulator/blob/master/attach.d#L709
When attach starts up and cannot connect to an existing process, it forks and creates a detachable backend:
https://github.com/adamdruppe/terminal-emulator/blob/master/attach.d#L454
By separating the front end and backend into separate processes, closing one leaves the other intact. Screen does this too: run ps aux | grep -i screen. The all-caps SCREEN processes are backends; the lowercase screen processes are front ends.
me 3479 0.0 0.0 26564 1416 pts/14 S+ 19:01 0:00 screen
root 3480 0.0 0.0 26716 1528 ? Ss 19:01 0:00 SCREEN
There, I just started screen and you can see the two separate processes. screen forked and made SCREEN, which actually holds the state. Detaching kills process 3479, but leaves process 3480 in place.
The backend of mine is a full-blown terminal emulator that maintains all the internal state:
https://github.com/adamdruppe/terminal-emulator/blob/master/detachable.d
Here: https://github.com/adamdruppe/terminal-emulator/blob/master/detachable.d#L140 it reads from the socket that attach talks to, parses the message, and forwards it to the application as terminal input.
The redraw method, here: https://github.com/adamdruppe/terminal-emulator/blob/master/detachable.d#L318 loops through its internal screen buffer - stored as an array of attributes and characters - and writes them back out to the attached terminal.
You can see that both source files aren't terribly long - most of the work is done in the terminal emulator core, https://github.com/adamdruppe/terminal-emulator/blob/master/terminalemulator.d and, to a lesser extent, my terminal client library (think: a custom ncurses lib) https://github.com/adamdruppe/arsd/blob/master/terminal.d
You can also see that attach.d manages multiple connections, using select to loop through each open socket and only drawing the active screen: https://github.com/adamdruppe/terminal-emulator/blob/master/attach.d#L300
My program runs a separate backend process for each individual terminal screen, while GNU screen uses one process for a whole session, which may contain multiple screens. My guess is that screen does more work in its backend than I do in mine, but it is basically the same principle: watch for input on each pty and update an internal screen buffer. When you run screen -r to attach, it connects through a named pipe - a FIFO in the file system (in my GNU screen config, they are stored in /tmp/screens) - instead of the unix socket I used, but it's the same principle: the state gets dumped to the terminal, and then screen carries on forwarding data back and forth.
Anyway, I've been told my source is easier to read than screen's and xterm's, and while they aren't identical, they are similar, so maybe you can get more ideas by browsing through them too.

Related

Can you use screen(1) with multiple TTYs to do low bandwidth console sharing?

Working in low-bandwidth environments, I sometimes need to share a text console (TTY) screen with a co-worker. I often use the screen(1) tool to keep multiple windows open in the same SSH connection and to be able to reconnect and continue where I left off when my connections drop due to bad internet. I wonder whether we could attach multiple SSH sessions/TTYs to the same screen(1) session, so that my co-worker(s) and I can use the same console. It should be possible, but is it?
The answer is as follows:
The first screen session is started with
screen ...
Then, from another console, someone else logged in as the same user can attach to the same screen session with
screen -x
Both can then work in the session; anything typed by one is visible to both, etc.
Note: if the screen session has multiple windows, then each attached terminal can switch to a different window independently.

NSAlert prompting for text input spawned by a process running as root

I have a process that runs as root in the background. When a certain event occurs, it pops up an NSAlert with an NSTextField on it so the user can provide some info. However, the user is unable to click on the text field or type anything into it. I can drag the alert box around and click its buttons just fine.
I'm guessing this is because my process is running as root and not the end user account that is logged into the machine. Is there a way to easily get around this without spawning a separate process as the user and piping back the info via Distributed Objects or the like?
The best thing to do is to have your UI stuff running as the current user, not only because of problems like this, but also because running a GUI app as root poses a large number of security risks, and in fact there have been a number of security vulnerabilities in the past which were caused by some AppKit-enabled process running as root.
With that said, XPC is the mechanism you want to use to communicate between the root tool and the GUI app, not Distributed Objects, which is quite antiquated and has a number of issues of its own.

What happens when I save a Pharo image while serving HTTP requests?

The Seaside book says: "saving [an image] while processing http requests is a risk you want to avoid".
Why is this? Does it just temporarily slow down serving HTTP requests, or will requests get lost or errors occur?
Before an image is saved, registered shutdown actions are executed. This means source files are closed and web servers are shut down. After the image is saved, it executes the startup actions, which typically bring the web server back up. Depending on the server implementation, open connections might be closed.
This means that you cannot accept new connections while you save an image, and open connections might be temporarily suspended or closed. For both issues there are (at least) two easy workarounds:
Fork the image using OSProcess before you save it (DabbleDB, CmsBox).
Use multiple images and a load balancer so that you can remove images one at a time from the active servers before saving them.
It seems that it's mostly a question of slowing things down. There is a quite thorough thread on the Seaside list, the most relevant post of which is this case study of an eCommerce site:
Consequently, currently this is what happens:
1. The image is saved from time to time (usually daily) and copied to a separate "backup" machine.
2. If anything bad happens, the last image is grabbed, and the orders and/or gift certificates that were issued since the last image save are simply re-entered.
And #2 has been done very rarely - maybe two or three times a year, and then it usually turns out to be because I did something stupid.
Also, one of the great things about Smalltalk is that it's so easy to run quick experiments. You can download Seaside and put a halt in a callback of one of the examples. For example:
WACounter>>renderContentOn: html
    ...
    html anchor
        callback: [
            self halt.
            self increase ];
        with: '++'.
    ...
Open a browser on the Seaside server (port 8080 by default)
Click "Counter" to go to the example app
Click the "++" link
Switch back to Seaside. You'll see the pre-debug window for the halt
Save the image
Click "Proceed"
You'll see that the counter is correctly incremented, with no apparent ill effect from the save.

Design pattern for a background-working app

I have created a web-service app, and I want to populate my view controllers according to the response I fetch (via GET) on the main thread. But I want to create a scheduled timer that will go and check my server; if there is any difference (let's say the count of an array has changed), I will create a local notification. As far as I have read here and in some Google results, I can't run my app in the background for more than ten minutes, except in some special situations (audio, VoIP, GPS). But I need to check the server at least once per minute. Can anyone offer some ideas or links, please?
EDIT
I will not sell the app in the store; it's just for a local area network. Let's say the server sends some text messages to the users; when a new message comes in, the count of the messages array increments, and in that situation I will create a notification. I need to keep this 'checking' routine alive forever, whether in the foreground or the background. Does GCD offer such a solution? Does anyone have any ideas?
Just play a muted audio file in a loop in the background, OR ping the user's location in the background. Yes, that will drain the battery a bit, but it's a simple hack for in-house applications. Just remember to enable the background modes in your Info.plist!
Note: "[...] I fetch (via GET) in main thread." This is not a good approach. You should never fetch any network resources on the main thread. Why? Because your GUI, which is maintained by the main thread, will become unresponsive whenever a fetch isn't instantaneous. Any lag spike on the network results in a less than desirable user experience.
Answer: aside from the listed special situations, you can't run apps in the background. The way I see it, you have two options:
Don't put the app in the background (a crappy solution).
Put another "entity" between the app and the server. I don't know why you "need to control the server at least once per minute", but perhaps you can delegate this "control" to a process outside the device:
iOS app -> some form of proxy server -> server which requires "babysitting" every minute.

How to find a process running in the task manager and kill it

I have a Windows Mobile application.
I have noticed that it doesn't properly terminate on exiting; it simply minimizes and keeps taking up memory. I need to check whether any instance of the same application is running in the task manager, and if one exists, I need to kill the process.
I need to write a small app that loops through all open application processes and terminates the required one.
Does such an application exist? If not, what could I use to write my own app for it?
Typically this is not how you solve this problem.
You should hold a 'mutex' in your application; when it launches a second time, it first checks this mutex, and if the mutex is taken, you know your app is already running.
Find an appropriate global mutex to hold, and then check for it (I'm not sure what sort you can use on whatever version of Windows Mobile you are targeting, but a quick search should help you find the answer).
If your app shows an [X] in the corner, that's a minimize button. Change the MinimizeBox property of the form and it will become an [ok] button that actually closes it.
The CF under Windows Mobile already enforces application singleton behavior. If the app is already running, the CF will look for it, find it, and bring it to the fore for you. No extra work needed.
If you really want to find the process and terminate it, that's done with the toolhelp API set. You can P/Invoke it or use the classes in the Smart Device Framework.