I am looking for a way to track the last run of a program and the parameters used.
E.g. for a program with a selection screen I want to check the last runtime and which parameters were entered on the selection screen.
I've checked transaction STAD, but it only shows last runtime and bytes transferred.
Does anyone know a way to also see the parameters or variables used for that program run?
Thank you!
It depends on whether the program runs in the background or not.
If yes:
It's mandatory that a program running in a background job has a program variant assigned (unless the program has no parameters at all), and the variant each job step used is stored in the table TBTCP.
The values of the variant can be extracted by calling the function module RS_VARIANT_CONTENTS_255, for instance. The execution date of the job is stored in the table TBTCO.
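For instance, a minimal ABAP sketch of the lookup (the report name ZMY_REPORT is hypothetical, and the line type of the value table may differ by release):

* Find the job step(s) that ran the program, including the assigned variant
DATA: lt_steps  TYPE STANDARD TABLE OF tbtcp,
      ls_step   TYPE tbtcp,
      lt_values TYPE STANDARD TABLE OF rsparamsl_255.

SELECT * FROM tbtcp INTO TABLE lt_steps
  WHERE progname = 'ZMY_REPORT'.

LOOP AT lt_steps INTO ls_step.
* Read the parameter/select-option values saved in the variant
  CALL FUNCTION 'RS_VARIANT_CONTENTS_255'
    EXPORTING
      report  = ls_step-progname
      variant = ls_step-variant
    TABLES
      valutab = lt_values.
* The job's execution date/time can be read from TBTCO via JOBNAME/JOBCOUNT
ENDLOOP.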
If no:
If it's a custom program, change it to store the last run information in a custom table.
If it's a SAP standard program, change the standard to do the same.
Please bear with me, I haven't been using Automator for long.
I have good experience in PHP (totally different) and some small scripting knowledge (AppleScript, shell, etc.).
I'm trying to replicate this workflow logic with Automator:
Ask User to insert value (set $variable_a)
Ask User to insert one more value (set $variable_b)
Submit
This triggers a script that uses both values submitted above. A dummy example:
echo $variable_a
echo $variable_b
Seems simple, and it's amazing how fast you can set up this logic with Automator.
The problem is, at stage 2 above, my $variable_a is suddenly a mixed value of $variable_a and $variable_b.
Why does this happen?
They do not seem to behave the way variables generally work in any language or programming environment I know.
In other systems, a variable usually keeps the value it was assigned (unless you use variable variables or modify it deliberately in the code).
I attached an Automator workflow file that replicates exactly the workflow logic described above.
It's a ZIP file; unzip it and open it in Automator to test.
You will see (in the results section of the last step) how the values become, in my opinion, wrong.
Does anyone have a hint?
The reason this is happening is that the output of one action in the workflow is fed as input into the next action of the workflow. As inputs are received by actions, they can also aggregate in some cases, such as when setting and getting variables.
The reason it does this is so that you can send multiple variables directly into, say, a Run Shell Script action, and reference them using $1, $2, etc. If Automator only ever took the most recent input, you'd never be able to feed more than one variable into a shell script without first combining them into a list yourself.
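For example, a minimal sketch of a Run Shell Script action (with "Pass input" set to "as arguments") that reads two variables fed into it:

# Each incoming variable becomes a positional argument
echo "first variable:  $1"
echo "second variable: $2"
# or handle however many were passed in:
for arg in "$@"; do echo "got: $arg"; done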
The solution is simple. Every action has an Options button that you can press, which in turn reveals a checkbox called Ignore this action's input. This needs to be checked for those actions that you want to operate independently of previous results.
Here's a screenshot of your workflow with the appropriate checkboxes ticked against the actions that require it:
I have no idea how to make this in Visual Studio 2010. I'm trying to make a program that has a "grid" with columns. It will have an option to name a new profile/process, which is then added to the grid/list.
I can then edit the profile/process by clicking a button and editing the parameters that the process runs with.
Most of the time these processes are the same program/executable, just run in multiple instances.
I can then start the process with the given parameters after I have set up its profile.
I want to be able to monitor the RAM/CPU usage in one of the columns of the record/profile/process, sort of like Task Manager, and also "maintain" the process and keep it running/restart it automatically so it doesn't stop or crash unless directed otherwise.
I want these profile/process parameters to be stored in an embedded SQLite database (DLL).
I would appreciate your help. Thanks.
The very first step is for you to get all processes, then place them in a container.
From there you can rename the process's file via My.Computer.FileSystem.RenameFile(file, newName).
I prefer using a DataGridView, as with it you can easily add/remove columns and refer to the data. Assuming you have a DataGridView with one column:
' Add the name of every running process to the grid
For Each ito As Process In Process.GetProcesses()
    DataGridView1.Rows.Add(ito.ProcessName)
Next
This will get you through the first part.
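To sketch the next part (the RAM column the question mentions), assuming the grid has a second column for memory, something like this could fill it (Process.WorkingSet64 reports bytes):

' Hypothetical two-column grid: process name plus current memory usage
For Each ito As Process In Process.GetProcesses()
    DataGridView1.Rows.Add(ito.ProcessName, (ito.WorkingSet64 \ (1024 * 1024)).ToString() & " MB")
Next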
Okay, I'll try to explain as well as I can... Quite a particular case.
Tools: SSIS 2008
We have a control flow that now needs to be triggered by an event: the presence of one or multiple files (1, 2, or 3).
The variables used:
BO_FileLocation_1
BO_FileLocation_2
BO_FileLocation_3
BO_FileName_1
BO_FileName_2
BO_FileName_3
There can be one, two, or three files, defined in the variables above. When they are filled in,
the files should be processed. When they are empty (meaning there are fewer files, e.g. just one), the process should ignore them and jump to the next (file watcher?) task.
For example:
BO_FileLocation_1 = "C:\"
BO_FileLocation_2 = NULL
BO_FileLocation_3 = NULL
BO_FileName_1 = "test.csv"
BO_FileName_2 = NULL
BO_FileName_3 = NULL
The report only needs one file.
I need a generic concept that checks for the presence of these files; it may be more generic than my SSIS knowledge can handle right now. It would be handy if it could cope with, for example, a 4th file in the future. I was also thinking of using a single script to handle all the logic.
Thanks in advance
A possibly irrelevant image:
If all you want is to trigger the Copy Source File task when one or more of the files is present, just use the OR constraint in your flow. The following image shows you how:
First connect all to the destination:
Then click one of the green arrows. This will make its properties window pop up. Select the Logical OR instead of the Logical AND:
If everything went well, you should now see the connections as dashed lines:
There are several possible solutions:
1. Create a sequence container and include all the file imports in it. Add int variables RowCountFile1, RowCountFile2, and RowCountFile3 and set their values to 0 (the default when you create an int variable). Add a Row Count transformation to each of the data flows. Create a precedence constraint from the sequence container to the "Do something" task. Set the precedence constraint to success and expression, with the expression @RowCountFile1 > 0 || @RowCountFile2 > 0 || @RowCountFile3 > 0. The advantage of this approach is that you can take an action as soon as the files are detected, you import all available files, and you only act after all the files have been imported. You could then schedule this SSIS package as a SQL Server Agent job step and run it as frequently as you want.
2. A variant on solution 1 is to use Foreach File enumerator containers inside the sequence container. This would be useful if you don't know the exact name of the file and you expect to import more than one under some circumstances. For instance, if you get a file every few minutes with a timestamp in its file name and your process doesn't run for some reason, then you may have to process multiple files to get caught up and then take an action once that has been done.
3. You could use the file watcher task as you outlined in your question. The only problem I have with the file watcher task is that the package has to be in a constantly running state, which makes it hard to troubleshoot problems and performance. It can also introduce other problems: I remember having some issues with the file watcher task years ago when it first came out. It may well be a totally stable task now, but I prefer other methods after having been burned previously. If you really want the package to run continuously instead of having it be called by a job, then you could always use a script task to check for the file, sleep the thread if it's not found, check again, etc. I'm sure that's what the file watcher task does, but I would trust my own C# over the task. Power to anyone who has had better experiences than me with the file watcher...
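A rough sketch of that polling loop as it might appear in a script task's Main method (the path and interval are made up):

// Hypothetical path; loop until the expected file shows up
string path = @"C:\inbox\test.csv";
while (!System.IO.File.Exists(path))
{
    System.Threading.Thread.Sleep(5000); // wait 5 seconds between checks
}
// File found: let the rest of the package continue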
4. Use PowerShell. If you just want to take an action when a file appears and you aren't importing the data, then a PowerShell script could do this just as well as an SSIS package. The drawback is that you have to learn some basic PowerShell, it may be hard to maintain in the future since PowerShell is probably not your bread-and-butter language, and you may have to rewrite the code as an SSIS package if you later want to import the data. You would probably call the PowerShell script from a SQL Server Agent job step, so scheduling can be handled pretty easily.
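A minimal sketch of such a PowerShell script (hypothetical path; the final action is a placeholder):

# Poll for the file, then take whatever action is needed
$path = 'C:\inbox\test.csv'
while (-not (Test-Path $path)) {
    Start-Sleep -Seconds 5
}
Write-Output "File found: $path"  # replace with the real action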
There are more options than what I listed, so let me know if you still want more suggestions.
I am writing a web app. My program creates its relations itself when needed, basically when the program is deployed and run for the first time. But I see that it is very common to create an SQL script and run it to initialize the database for the first time. Is it compulsory to do this?
No, it is not compulsory for the database initialization script to be part of the "first run" of your application; preparing the database can be a deployment step. In fact, depending how long it takes to initialize the database, you might specifically want to avoid initializing the database on the first run, and instead make sure it is deployed and initialized before the first time the application is accessed.
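For illustration, such a script is often nothing more than the DDL your application would otherwise issue on first run; a minimal sketch with a made-up table:

-- Hypothetical initialization script, run once at deployment time
CREATE TABLE users (
    id   INTEGER      NOT NULL PRIMARY KEY,
    name VARCHAR(100) NOT NULL
);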
I have a routine that examines thousands of records looking for discrepancies. This can take upwards of 5 minutes to complete, and although I provide a progress bar and an elapsed time count, I'm not sure I want to encourage folk to press Ctrl+Break to quit the report should it take longer than expected.
A button in the progress bar won't work as the form is non-modal, so is there any neat way of allowing users to quit in this situation?
You need DoEvents and a variable whose scope is greater than the scope of what you're running. That is, if it's just a procedure, you need a module-level variable. If it's more than one module, you need a global variable. See here:
Stopwatch at DDoE
Normally, the VB engine will tie up the processor until it's done. With DoEvents, however, VB allows the processor to work on whatever is next in the queue, then return to VB.
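A minimal sketch of that pattern, assuming a modeless progress form with a Cancel button named cmdCancel (both hypothetical):

' In a standard module: the flag the button will set
Public StopRequested As Boolean

Sub ExamineRecords()
    Dim i As Long
    StopRequested = False
    For i = 1 To 100000
        ' ... examine record i ...
        DoEvents                     ' let the Cancel button's Click event fire
        If StopRequested Then Exit For
    Next i
End Sub

' In the progress form's module: the button just raises the flag
Private Sub cmdCancel_Click()
    StopRequested = True
End Sub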
I don't think there is a way to do it like you would want it to work. VBA is a scripting language, so when you start your procedure, it's going to run until it's done. Even if you had another button somewhere that would let you click it while the original procedure was running, I'm not sure how you would reference that procedure and stop it.
You could do something like asking the user if they want to continue, but that would make it run even longer.
You could also have your procedure check for a condition outside of Excel and keep running as long as it's true. Something easy might be to check whether a certain text file is in a folder. If you want the procedure to stop, open the folder and move the file. On the loop's next iteration, it won't see the file and will stop running. Kludgy, inefficient, and not elegant, but it would work. You could also have it check a cell, checkbox, radio button, basically any control in another Excel sheet running in another instance of Excel. Again kludgy.
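For instance, a loop guarded by a sentinel file might look like this (the path is made up; Dir returns an empty string once the file is gone):

' Stop when the sentinel file is moved or deleted
Do While Len(Dir("C:\temp\keep_running.txt")) > 0
    ' ... process the next batch of records ...
    DoEvents
Loop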
Ctrl+Break works. Accept it and move on. One neat trick, though: if you password-protect your code and they hit Ctrl+Break, the debug option is unavailable and they will only get Continue or End.
If this is code that is run frequently, have you considered scripting something that runs it during times when a human is not using the computer? I used to run telnet screen scraping macros that would take hours to go through our widgets, but I always had them run either on a separate computer or when I wasn't there (nights/weekends).