Control setup for a command oriented interface? - oop

I have a customizable control setting stored for each user in the database.
Now I am loading the control settings, and they look like this:
some possible user input (eg. ctrl + s) => some command (eg. save file)
Now I have a hash/dictionary that is generated at runtime based on the info from the database.
The hash's keys are the user's inputs, and its values are whatever is needed to activate the commands.
Since this is a command-oriented interface, the value needs to allow me to instantiate a new command object; for instance, the value could be a factory.
The issue is that I am put off by the idea of having one factory for each command, as it sounds like overkill given that there are many dozens of commands.
I am wondering if there is a cleaner way to set up a command oriented interface.
I am using Haxe, but it's a general OOP / design patterns question.

You could register your commands with your command processor and have the processor ask each command in turn to respond to the input. If a command responds, the processor's work is done; if not, the processor asks the next command in the chain.
Alternatively, for simple matching of a string input to a command, the processor could keep a map (or hashtable) from string inputs to command instances.
This all presupposes that your commands are already instantiated and part of a collection kept by the processor, so they should also be reusable between invocations.
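Here is a minimal sketch of the map approach in C (the question uses Haxe, but the idea is language-agnostic): one reusable instance per command lives in a registry keyed by the input string, so no per-command factory is needed. The bindings and actions below are hypothetical stand-ins.

    #include <stdio.h>
    #include <string.h>

    /* One "command object": a binding string plus the action to run.
     * In a richer OO language this would be an interface with execute(). */
    typedef struct {
        const char *binding;    /* user input, e.g. "ctrl+s" */
        void (*execute)(void);  /* the command's action */
    } Command;

    static void save_file(void) { puts("save file"); }
    static void open_file(void) { puts("open file"); }

    /* The processor's registry: one reusable instance per command.
     * In real life this would be populated from the database. */
    static Command registry[] = {
        { "ctrl+s", save_file },
        { "ctrl+o", open_file },
    };

    static void dispatch(const char *input) {
        for (size_t i = 0; i < sizeof registry / sizeof registry[0]; i++) {
            if (strcmp(registry[i].binding, input) == 0) {
                registry[i].execute();   /* command responds: done */
                return;
            }
        }
        printf("unbound input: %s\n", input);
    }

    int main(void) {
        dispatch("ctrl+s");   /* prints "save file" */
        dispatch("ctrl+q");   /* prints "unbound input: ctrl+q" */
        return 0;
    }

In an OO language the struct becomes an interface with an execute() method, and the registry is loaded once at startup from the user's stored settings.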


How to use Variables in Automator

Please bear with me; I haven't been using Automator for long.
I have good experience in PHP (totally different) and some small scripting knowledge (AppleScript, shell, etc.).
I am trying to replicate this workflow logic in Automator:
Ask User to insert value (set $variable_a)
Ask User to insert one more value (set $variable_b)
Submit
This triggers a script that uses both values submitted above. A dummy example:
echo $variable_a
echo $variable_b
Seems simple, and it's amazing how fast you can set up this logic with Automator.
The problem is, at stage 2 above, my $variable_a is suddenly a mixed value of $variable_a and $variable_b.
Why does this happen?
They do not seem to behave the way I understand variables to work in any other language or programming environment.
In other systems, variables usually keep the value they were assigned (unless you use variable variables or modify them deliberately in code).
I have attached an Automator workflow file that replicates exactly the workflow logic described above.
It's a ZIP file; unzip it and open it in Automator for a test.
You will see (in the results section of the last step) how the values come out wrong (IMHO).
Does anyone have a hint?
The reason this is happening is that the output of one action in the workflow is fed as input into the next action of the workflow. As inputs are received by actions, they can also aggregate in some cases, such as when setting and getting variables.
The reason it does this is so that you can send multiple variables directly into, say, a Run Shell Script action and reference them using $1, $2, etc. If Automator only ever took the most recent input, you'd never be able to feed more than one variable into a shell script without first combining them into a list yourself.
The solution is simple. Every action has an Options button that you can press, which in turn reveals a checkbox called Ignore this action's input. This needs to be checked for those actions that you want to operate independently of previous results.
Here's a screenshot of your workflow with the appropriate checkboxes ticked against the actions that require it.

How to display a status depending on the data flow position

Consider, for example, this modified Simple TCP sample program.
How can I display the current state of the program like
Wait for Connection
Connected
Connection terminated
on the front panel, depending on where the "data flow" currently is?
The easiest way to do this is to place a string indicator on your front panel and write messages to a local variable of this indicator at each point where you want to see a status update.
You need to keep in mind how LabVIEW dataflow works: code will execute as soon as the data it depends on becomes available. Sometimes you can use existing structures to enforce this - for example, if you put a string constant inside your loop and wire it to a local variable terminal outside the loop, the write will only happen after the loop exits. Sometimes you may need to enforce that dataflow artificially, for example by placing your operation inside a sequence frame and connecting a wire to the border of the sequence: then what's inside the sequence will only happen after data arrives on that wire. (This is about the only thing you should use a sequence for!)
This method is not guaranteed to be deterministic, but it's usually good enough for giving a simple status indication to the user.
A better version of the above would be to send the status messages on a queue or notifier which you read, and update the status indicator, in a separate loop. The queue and notifier write functions have error terminals which can help you to enforce sequence. A notifier is like the local variable in that you will only see the most recent update; a queue keeps all the data you write to it in the right order so would be more suitable if you want to log all the updates to a scrolling list or log file. With this solution you could add more features: for example the read loop could add a timestamp in front of each message so you could see how recent it was.
A really good solution to this general problem is to use a design pattern based on a state machine. Now your program flow is clearly organised into different states and it's very easy to add in functionality like sending a different message from each state. There are good examples and project templates for these design patterns included with recent versions of LabVIEW.
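LabVIEW code is graphical, so the pattern can't be reproduced as text here, but the shape of a status-reporting state machine is easy to sketch in C for illustration. The states and transitions below are hypothetical, mirroring the messages in the question:

    #include <stdio.h>

    /* Hypothetical states mirroring the status messages in the question. */
    typedef enum { WAITING, CONNECTED, TERMINATED, DONE } State;

    static const char *status_text(State s) {
        switch (s) {
            case WAITING:    return "Wait for Connection";
            case CONNECTED:  return "Connected";
            case TERMINATED: return "Connection terminated";
            default:         return "";
        }
    }

    int main(void) {
        State state = WAITING;
        while (state != DONE) {
            /* in LabVIEW this would be a write to the status indicator */
            printf("status: %s\n", status_text(state));
            switch (state) { /* each state does its work, then picks the next */
                case WAITING:    state = CONNECTED;  break;
                case CONNECTED:  state = TERMINATED; break;
                case TERMINATED: state = DONE;       break;
                default:         state = DONE;       break;
            }
        }
        return 0;
    }

In the LabVIEW version this typically becomes a while loop around a case structure, with the current state held in a shift register.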
You should be able to find more information on any of these terms in the LabVIEW help or on the NI website.

Is it possible to execute a branch as a different user in *nix?

Is it possible to execute a method as a different user in Linux (or SELinux specifically)? The programs I have run in individual sandboxes, each with a different user and process ID. I have a situation where I have to execute a branch of code as a different user, with a different process ID, to prevent it from accessing the memory and disk space of the code that spawns it.
If it's not possible, can you shed some light on how much of the kernel code would have to change to achieve it? (I understand this is subjective. Alternatively, if you can suggest what to do and how to go about it, that would be very helpful.)
Protecting some resources from other code executing on the same machine is precisely what led to the invention of processes and UIDs.
If you are searching for a mechanism that looks like a simple function call, I would say it's impossible because it requires the memory to be shared between the caller and the callee. However, using fork/exec (or wrappers like system()) will give you some isolation as long as you deal with parameters/results using system objects like program parameters or pipes.
However, since *nix users are meant to protect processes from one another, an explicit relationship has to be established between two users before one can act on behalf of the other.
Actually, you may want to:
define a sudoers policy which gives your first user the right to run a command (or one particular command) as the second user;
use popen() (or system()) in your first program to call the less privileged code;
pass the parameters, if any, on the command line and parse the result from stdout (see the sketch below).
As an extra, you may use the same binary for both executions; this way, all the code can live in the same location.
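As a rough sketch of the popen() step, assuming a sudoers rule such as "alice ALL=(worker) NOPASSWD: /usr/local/bin/helper" is already in place (the user names and path are hypothetical):

    #include <stdio.h>
    #include <stdlib.h>

    /* Run a helper as another user via sudo and read its stdout. */
    int main(void) {
        FILE *out = popen("sudo -u worker /usr/local/bin/helper arg1", "r");
        if (out == NULL) {
            perror("popen");
            return EXIT_FAILURE;
        }

        char line[256];
        while (fgets(line, sizeof line, out) != NULL) {
            /* parse the result from the child's stdout here */
            fputs(line, stdout);
        }
        return pclose(out) == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
    }

Because the helper runs in its own process under a different UID, the caller and callee share nothing except the command line going in and the stdout text coming back.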

Setting permissions based on the program trying to access a kernel module

I have written a kernel module that creates a /proc file and reads values written into it from a user program, say user.c.
Now I want to restrict permissions for this /proc file. I have restricted permissions based on user ID, using the 'current' kernel variable and checking current->euid.
My question: is there a way to restrict this based on the program too? I.e., only user.c should be able to write to this proc file and no other program. I could not find any fields in task_struct that would help me do this. Can you please suggest a way to do this?
In your proc writer implementation (that is, inside the kernel module) the best you can do is check the value of current (a struct task_struct *), which holds (among other things) valuable fields such as comm (the 16-character argv[0]), pid, uid, etc. (basically, everything you see in /proc/<pid>/status). You can also check the original exe name (like you see in /proc/<pid>/exe) to see if it's a well-known path. You can then return an error.
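As a sketch of that check, here is a minimal module that refuses writes from any process whose comm is not "user" (the /proc entry name and expected program name are hypothetical; the proc_ops interface shown is for kernels 5.6 and later, older ones use struct file_operations instead):

    #include <linux/module.h>
    #include <linux/proc_fs.h>
    #include <linux/sched.h>
    #include <linux/string.h>
    #include <linux/errno.h>

    static ssize_t guarded_write(struct file *file, const char __user *buf,
                                 size_t count, loff_t *ppos)
    {
        /* current->comm is the (spoofable) 16-char program name */
        if (strcmp(current->comm, "user") != 0)
            return -EPERM;            /* unexpected program: refuse */

        /* ... copy_from_user() and handle the payload here ... */
        return count;
    }

    static const struct proc_ops guarded_ops = {
        .proc_write = guarded_write,
    };

    static int __init guard_init(void)
    {
        /* 0222: classic u/g/o bits, which can't express per-program rules */
        return proc_create("guarded", 0222, NULL, &guarded_ops) ? 0 : -ENOMEM;
    }

    static void __exit guard_exit(void)
    {
        remove_proc_entry("guarded", NULL);
    }

    module_init(guard_init);
    module_exit(guard_exit);
    MODULE_LICENSE("GPL");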
Caveat: Anyone could rename their opening process to be one of your "allowed" programs, if you go by "comm", and there are ways to defeat the "exe" protection. This will only make it slightly harder, but not impossible for someone to get around. A more comprehensive and stronger solution would require you to peek at the user mode memory of the program, which is possible, but too complicated for a brief answer.
Note: Permission parameters won't work, don't even bother. They go by classic UNIX ACL, which is u/g/o - so you can't filter by PID.

Intersystems Cache routine to write process information to a file on local system?

I am interested in creating a routine that would query the currently running cache processes and then write this information to a file. How could this be done in Cache 2008.2?
PERFMON might be what you're looking for. It's an app with its own UI, but you can call its functions directly too, as an API.
Check the Cache docs for "Cache Monitoring Guide". That will give you links to PERFMON docs, as well as docs for other system monitoring tools.
You might find something useful in the Class Reference, under packages %SYSTEM, %SYS, and %Monitor.
For some process info you might need to shell out to the OS. In that case, look into the $ZF function, which lets you invoke OS-level commands from within Cache.
Oh, and you might want to consider saving the process data within the Cache DB, rather than dumping it out to a file. That is, create a Persistent Class with Properties corresponding to each process attribute that you want to capture, then write code to create, populate, and save instances of that class, taking the data from PERFMON or whatever other source you choose.
If you do that you can use Cache SQL to generate whatever kind of report you need. (Cache will automatically generate a SQL Table corresponding to your Persistent Class.) Cache supports ODBC, so you can use an external tool like Crystal Reports or Access for that part.
Obviously that will be more work than just echoing data to a file, but some kind of structure will be needed if you're going to do anything interesting with the information.