How to use Variables in Automator

Please bear with me; I haven't been using Automator for long.
I have solid experience in PHP (totally different, I know) and a little scripting knowledge (AppleScript, shell, etc.).
I am trying to replicate this logic in an Automator workflow:
Ask the user to enter a value (set $variable_a)
Ask the user to enter one more value (set $variable_b)
Submit
This triggers a script that uses both values submitted above. A dummy example:
echo $variable_a
echo $variable_b
Seems simple, and it's amazing how quickly you can set up this logic with Automator.
The problem is that at stage 2 above, $variable_a suddenly holds a mix of $variable_a and $variable_b.
Why does this happen?
The variables do not behave the way I understand variables to work in any other language or programming context.
In other systems, a variable usually keeps the value it was assigned (unless you use variable variables or modify it deliberately in code).
I attached an Automator workflow file that replicates exactly the logic described above.
It's a ZIP file; unzip it and open it in Automator to test.
You will see (in the Results section of the last step) how the values come out wrong (IMHO).
Does anyone have a hint?

This happens because the output of one action in the workflow is fed as input into the next action. As actions receive input, they can also aggregate it in some cases, such as when setting and getting variables.
Automator does this so that you can send multiple variables directly into, say, a Run Shell Script action and reference them as $1, $2, and so on. If Automator only ever passed along the most recent input, you would never be able to feed more than one variable into a shell script without first combining them into a list yourself.
The solution is simple. Every action has an Options button which, when pressed, reveals a checkbox called Ignore this action's input. This needs to be checked for every action that you want to operate independently of previous results.
In your workflow, that means ticking the checkbox on each of the actions that should not receive the earlier results.
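To make the $1/$2 behaviour concrete, here is a minimal sketch of a Run Shell Script action receiving both variables. It assumes the action is configured with "Pass input: as arguments" and the interpreter popup set to /usr/bin/python (where available; with bash you would use "$1" and "$2" instead). The variable names are just the ones from your example:

import sys

# With "Pass input" set to "as arguments", each variable fed into this
# action arrives as its own argument rather than being merged into one value.
variable_a = sys.argv[1]   # value from the first Ask for Text step
variable_b = sys.argv[2]   # value from the second Ask for Text step

print(variable_a)
print(variable_b)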

Related

Multiple users executing the same workflow

Are there guidelines regarding how to share a Snakemake workflow among multiple users on the same data under Linux, or is the whole thing considered bad practice?
Let me explain in case it's not clear:
Suppose user A executes a workflow in directory dir/. Assume the workflow terminates successfully, and that A then properly sets file/directory permissions recursively on all output and intermediate files and on the .snakemake/ subdirectory so that other users can read and write them.
User B subsequently navigates to dir/, adds input files to the workflow, then executes it. Can anything go wrong?
TL;DR: I'm asking about non-concurrent execution of the same workflow by distinct users on the same system, and on the same data on disk. Is Snakemake designed for such use cases?
You can run snakemake --nolock, which prevents locking of the directory, so multiple runs can be made from inside the same directory. However, without the lock there is now an opening for errors if concurrent runs try to modify the same files. It's probably OK if you are certain this will be avoided, e.g. if you are in constant communication with the other user about which files will be modified.
An alternative is to create a third directory/path and put all the data there. That way you can each work from separate directories/paths and still avoid costly recomputes.
I would say that from the point of view of Snakemake, and of workflow management in general, it's fine for user B to add or update input files and re-run the pipeline. After all, one of the advantages of a workflow management system is updating results according to new input. The problem is that user A could find her results updated without being aware of it.
Off the top of my head, and without more detail, this is what I would suggest: make Snakemake read the list of input files from a table (pandas comes in handy for this) or from some configuration file. Keep this sample sheet under version control (with git/GitHub) together with the Snakefile and the other source code.
When users add new files to the working directory, they will also need to update the sample sheet for Snakemake to "see" the new input, and other users will learn about the change via version control. I prefer this setup over dumping files in a directory and letting Snakemake process whatever is in there; a minimal sketch of the idea follows.
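The sketch below assumes a tab-separated sample sheet called samples.tsv with a "sample" column (unique IDs) and a "fastq" column (path to that sample's input file); the file names, column names and the dummy rule are all made up for illustration:

import pandas as pd

# The sample sheet is the single source of truth for what the workflow
# should process; users add rows here instead of just dropping files in.
samples = pd.read_csv("samples.tsv", sep="\t").set_index("sample", drop=False)

rule all:
    input:
        expand("results/{sample}.txt", sample=samples.index)

rule process:
    # Look up the input file for this sample in the sample sheet.
    input:
        lambda wc: samples.loc[wc.sample, "fastq"]
    output:
        "results/{sample}.txt"
    shell:
        "wc -l {input} > {output}"

Because the sample sheet and the Snakefile live in version control, user A can see exactly which inputs user B added before deciding whether anything needs to be re-run.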

Setting permissions based on the program trying to access a kernel module

I have written a kernel module that creates a /proc file and reads values written into it from a user program, say user.c.
Now I want to restrict permissions for this /proc file. I have already restricted access based on user ID by checking current->euid via the 'current' kernel variable.
My question: is there a way to restrict this based on the program too? I.e. only user.c should be able to write to this proc file and no other program. I could not find any fields in task_struct that would help me do this. Can you please suggest a way to do this?
In your proc writer implementation (that is, inside the kernel module) the best you can do is check the value of current (a struct task_struct *), which holds (among other things) useful fields such as comm (the 16-character command name), pid, uid, etc. (basically, everything you see in /proc/<pid>/status). You can also check the original exe name (what you see in /proc/<pid>/exe) to see whether it is a well-known path. If it isn't, you can return an error.
Caveat: anyone could rename their process to match one of your "allowed" programs if you go by comm (a quick user-space demonstration of this follows below), and there are ways to defeat the exe check as well. This only makes it slightly harder, not impossible, for someone to get around. A more comprehensive and stronger solution would require you to peek at the user-mode memory of the program, which is possible, but too complicated for a brief answer.
Note: permission parameters won't work, so don't even bother. They follow the classic UNIX ACL model (user/group/other), so you can't filter by PID.
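To see why the comm check is only a speed bump, here is a short user-space sketch (Python, Linux only; the name "user" stands in for whatever your module would whitelist) showing how trivially a process can change its own comm with prctl:

import ctypes

libc = ctypes.CDLL(None)
PR_SET_NAME = 15  # from <linux/prctl.h>

# Rename this process's comm to "user" -- the same 16-byte name the
# kernel module would see in current->comm for the real user program.
libc.prctl(PR_SET_NAME, b"user\x00", 0, 0, 0)

with open("/proc/self/comm") as f:
    print(f.read().strip())   # prints "user"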

Check for multiple files

Okay, I'll try to explain as well as I can... It's quite a particular case.
Tools: SSIS 2008
We have a control flow that now needs to be triggered by an event: the presence of one or more files (1, 2, or 3).
The variables used:
BO_FileLocation_1
BO_FileLocation_2
BO_FileLocation_3
BO_FileName_1
BO_FileName_2
BO_FileName_3
There can be one, two, or three files, defined in the variables above. When they are filled in, the files should be processed. When they are empty (meaning there is only one file), the process should ignore them and jump to the next (file watcher?) task.
For example:
BO_FileLocation_1 = "C:\"
BO_FileLocation_2 = NULL
BO_FileLocation_3 = NULL
BO_FileName_1 = "test.csv"
BO_FileName_2 = NULL
BO_FileName_3 = NULL
The report only needs one file.
I need a generic way to check for the presence of these files, probably more generic than my current SSIS knowledge can handle. That would be handy if, for example, a fourth file is added in the future. I was also thinking of using a single script to handle all the logic.
Thanks in advance
If all you want is to trigger the Copy Source File task when one or more of the files is present, just use the OR constraint in your flow. Here's how:
First, connect all of the tasks to the destination.
Then click one of the green arrows; this opens its properties window. Select Logical OR instead of Logical AND.
If everything went well, you should now see the connections drawn as dashed lines.
There are several possible solutions:
Create a sequence container and put all the file imports inside it. Add int variables RowCountFile1, RowCountFile2, and RowCountFile3 and leave their values at 0 (the default when you create an int variable). Add a Row Count transformation to each of the data flows. Create a precedence constraint from the sequence container to the "Do something" task, set it to Success and Expression, and set the expression to @RowCountFile1 > 0 || @RowCountFile2 > 0 || @RowCountFile3 > 0. The advantage of this approach is that you can take an action as soon as the files are detected, you import all available files, and you only act after all of them have been imported. You could then schedule the SSIS package as a SQL Server Agent job step and run it as frequently as you want.
A variant on solution 1 is to use Foreach Loop containers (with the file enumerator) inside the sequence container. This is useful if you don't know the exact name of the file and you expect to import more than one under some circumstances. For instance, if you get a file every few minutes with a timestamp in its name and your process doesn't run for some reason, you may have to process multiple files to catch up and then take an action once that is done.
You could use the File Watcher Task as you outlined in your question. The only problem I have with it is that the package has to be in a constantly running state, which makes it hard to troubleshoot problems and performance. It can also introduce other issues; I remember having problems with the File Watcher Task years ago when it first came out. It may well be a totally stable task now, but I prefer other methods after having been burned previously. If you really want the package to run continuously instead of having it called by a job, you could always use a script task that checks for the file, sleeps if it isn't found, checks again, and so on (a rough sketch of that loop follows at the end of this answer). I'm sure that's what the File Watcher Task does internally, but I would trust my own C# over the task. Power to anyone who has had better experiences than me with File Watcher...
Use PowerShell. If you just want to take an action when a file appears and you aren't importing the data, then a PowerShell script could do this just as well as an SSIS package. The drawbacks are that you have to learn some basic PowerShell, it may be harder to maintain since PowerShell is probably not your bread-and-butter language, and you may have to rewrite the code as an SSIS package later if you do want to import the data. You would probably call the PowerShell script from a SQL Server Agent job step, so scheduling is handled pretty easily.
There are more options than what I listed, so let me know if you still want more suggestions.
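For reference, the check-sleep-check loop mentioned in the file watcher discussion above boils down to something like the sketch below. It is written in Python purely to show the logic; in SSIS 2008 you would put the equivalent C#/VB.NET in a Script Task, and the file names here are made up:

import os
import time

# Hypothetical list of expected files; in the real package this would be
# assembled from the BO_FileLocation_x / BO_FileName_x variables.
expected_files = [r"C:\test.csv", r"C:\test2.csv", r"C:\test3.csv"]

def wait_for_any(paths, poll_seconds=30):
    # Poll until at least one expected file exists, then return
    # whichever of them are present so the caller can process those.
    while True:
        present = [p for p in paths if os.path.exists(p)]
        if present:
            return present
        time.sleep(poll_seconds)

print(wait_for_any(expected_files))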

How to log Command Line activity? [duplicate]

Possible Duplicate:
DOS command to Display result on console and redirect the output to a file
I've tried various Google searches but nothing seemed to solve my problem.
Basically I'm working for a company that needs me to work with their existing database and extract the various data needed for reports. They are using SQLite (please, I've heard enough comments about how it might not be the best choice for a DB, so leave them out), and I either want all my activity at the Windows command prompt to be logged, or at least everything I do from the SQLite command line to appear in a .txt file, just in case I need to refer back to it later.
Can anybody here explain how to do this? I'm a bit of a beginner and need this broken down step by step; I've not done anything like this before.
Cheers!
I'm reasonably certain you can't do this directly -- i.e., the Windows command prompt doesn't provide a way to log the input you provide to it. You can capture output (e.g., from commands you run), but for your purposes that's probably not adequate.
You probably need to create a "shell" of your own that takes input from the user, logs each line, sends it on to the command prompt, captures the output from the command prompt, and logs that as well.
In an answer to a previous question, I posted some code that handles most of what you need. The main difference is that you'll want to look at its handle_output (for example) and, instead of just displaying the captured output on the console, write it to a file as well. As it stands, that example redirects the child's standard input to come from a file, but changing it to read from the console instead should be fairly straightforward -- you'd basically add a function much like the handle_output and handle_error it already includes, except that instead of displaying output it reads input from the user, and for each line it 1) writes the line to the log and 2) sends it to the child via an anonymous pipe (much as handle_output and handle_error read from anonymous pipes).
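If you only need the SQLite part logged, the wrapper idea can be much simpler than the piped-console approach above. Here is a minimal sketch in Python (not the C++ code referenced in that earlier answer): it logs every line you type plus everything the command prints, and it runs each line as a one-shot sqlite3 command instead of keeping an interactive session open, which avoids the pipe handling entirely. The log and database file names are made up, and it assumes the sqlite3 command-line tool is on your PATH:

import subprocess

LOGFILE = "session.log"    # hypothetical log file name
DATABASE = "mydb.sqlite"   # hypothetical database file

with open(LOGFILE, "a") as log:
    while True:
        try:
            line = input("sqlite> ")
        except EOFError:
            break                      # Ctrl+Z / Ctrl+D ends the session
        log.write("> " + line + "\n")  # log what the user typed
        # Run the typed statement as a one-shot sqlite3 command and
        # capture everything it prints so it can be shown and logged.
        result = subprocess.run(["sqlite3", DATABASE, line],
                                capture_output=True, text=True)
        output = result.stdout + result.stderr
        print(output, end="")
        log.write(output)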

How to implement a system wide text replacement in windows programmatically?

I have a small VB .NET application that, among other things, attempts to replace text typed by the user system-wide (the hotstrings concept). To achieve that, I have deployed ahk2exe and AutoHotkeySC.bin with my application and do the following:
When a user assigns a new hotstring:
Kill the hotstring script exe if it is running
Append the new hotstring to the script file (if none exists, create a new one)
Convert the edited/new script file to an exe (using ahk2exe)
Run the newly converted script exe
(Somewhere in there I also check whether the hotstring has already been assigned.)
However, I am not totally satisfied with this method, for two main reasons:
The extra resources deployed with the application.
Lag: killing the process and restarting it takes at least 5 seconds on my fast computer, and longer on other machines. That is much more than the time it takes the user to assign the hotstring, minimize/close the window, and try out the new hotstring. When the user has no success at first, they will think the process failed. So this method is not great for the user experience.
So I am looking for a different method or implementation. Maybe keyboard hooks? Or maybe a .dll library that achieves the same thing? Are there any resources you know of that might help (free or commercial)? What is the best way to achieve my goal?
Many thanks for your help.
Implementing what AutoHotkey does yourself would be a pretty non-trivial task.
But I'm fairly sure AHK supports an "auto-reload" option for scripts.
Googling "autohotkey auto reload" turns up several pages discussing that very concept. If that works, all you'd have to do is update the script file and that's it; AHK should automatically reload the script.