Is there a way to make gprconfig use $USER in pathnames so that the output file is independent of the logged-in user? - development-environment

I created a gpr file for a mixed-language project by running gprconfig. Unfortunately, the resulting gpr file has a fixed username in every path, e.g. /home/tomsmith/.... I need gprconfig to generate something like /home/$USER/... instead, so the gpr file will work for anyone who tries to use it.
So far, all of my attempts have failed. I tried editing the output gpr file directly, replacing tomsmith with $USER, but the tool complains that the path /home/$USER/... is not found -- which, of course, it is not. It is not expanding the $USER environment variable for some reason. Has anyone used gprconfig to create a gpr file that is user independent?

Related

SCIP - run (nearly) same LP on different instances

I have an LP, formulated in the modelling language Zimpl, that I want to run on many instances, which are in different files.
Additionally, I want to change one parameter in this LP.
For a single call, my file test.zpl looks like this:
param FILE := "file1.dat"
param BOUND := 42
[test_body: Rest of LP]
Now I want to change those two parameters. SCIP has the -c option to execute some command, but I cannot find a command that achieves this. All the parameter changes I found affect the algorithm, not the data.
SCIP's change command for modifying the problem does not seem to allow new parameters/variables.
In the end, I expect the solution to look something like
scip -c "[set my parameters]; read test_body.zpl; optimize; quit"
How do I set these problem parameters?
I am not aware of any commands that support the modification of model parameters as you wish. However, if you don't hardcode the value of param BOUND in the .zpl file (instead, move the value to the .dat file and use a proper read command in the model), then you could proceed as follows (a rough sketch follows the list):
Make a copy of your data file such that each copy contains a distinct value of param BOUND
Call scip.exe separately with each data file (you could also use a simple batch script)
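For illustration, a minimal sketch of this approach, under the assumption that the bound is moved into a one-line data file (all file names here are hypothetical). In the model, read the value instead of hardcoding it:
param BOUND := read "bound.dat" as "1n" use 1;
Then a simple Windows batch script could copy each candidate data file into place and invoke SCIP once per run:
@echo off
rem run SCIP once per candidate bound file (file names are illustrative)
for %%f in (bound_*.dat) do (
  copy /Y "%%f" bound.dat >nul
  scip.exe -c "read test_body.zpl; optimize; quit"
)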

Check if Windows batch variable starts with a specific string

How can I find out (with a Windows batch command) whether, for example, a variable starts with ABC?
I know that I can test a variable when I know its whole content (if "%variable%"=="abc"), but here I want to check only the beginning.
I also need to find out where the batch file is located, so if there is another command that reveals the file's location, please let me know.
Use the variable substring syntax:
IF "%variable:~0,3%"=="ABC" [...]
If you need the path to the batch file without the batch file name, you can use the variable:
%~dp0
Syntax for this is explained in the help for the for command, although this variable syntax extends beyond just the for command syntax.
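A short self-contained sketch putting both together (the variable name and value are just for illustration):
@echo off
setlocal
set "variable=ABCdef"
rem compare the first three characters against ABC
if "%variable:~0,3%"=="ABC" echo variable starts with ABC
rem %~dp0 expands to the drive and directory of this batch file
echo this batch file is located in: %~dp0
endlocal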
To find the batch file location you can also use %0 (the path to the current batch file as it was invoked) or the %CD% variable, which gives the current working directory.

load script from other file extension?

Is it possible to load a module from a file with an extension other than .lua?
require("grid.txt") results in:
module 'grid.txt' not found:
no field package.preload['grid.txt']
no file './grid/txt.lua'
no file '/usr/local/share/lua/5.1/grid/txt.lua'
no file '/usr/local/share/lua/5.1/grid/txt/init.lua'
no file '/usr/local/lib/lua/5.1/grid/txt.lua'
no file '/usr/local/lib/lua/5.1/grid/txt/init.lua'
no file './grid/txt.so'
no file '/usr/local/lib/lua/5.1/grid/txt.so'
no file '/usr/local/lib/lua/5.1/loadall.so'
no file './grid.so'
no file '/usr/local/lib/lua/5.1/grid.so'
no file '/usr/local/lib/lua/5.1/loadall.so'
I suspect that it's somehow possible to load the script into package.preload['grid.txt'] (whatever that is) before calling require?
It depends on what you mean by load.
If you want to execute the code in a file named grid.txt in the current directory, then just do dofile"grid.txt". If grid.txt is in a different directory, give a path to it.
If you want to use the path search that require performs, then add a template for .txt in package.path, with the correct path and then do require"grid". Note the absence of suffix: require loads modules identified by names, not by paths.
If you want require("grid.txt") to work, should someone try that, then yes: you'll need to manually loadfile and run the script and put whatever it returns (or whatever require is documented to return when the module doesn't return anything) into package.loaded["grid.txt"].
Alternatively, you could write your own loader just for entries like this, set it into package.preload["grid.txt"], and have it find and load/run the file. More generically, you could write a loader function, insert it into package.loaders, and let it do its job whenever it sees a "*.txt" module come its way.
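For illustration, two minimal Lua 5.1 sketches of these options (using the file and module names from the question):
-- Option 1: add a .txt template to package.path, then require by module name
package.path = package.path .. ";./?.txt"
local grid = require("grid")       -- note: no suffix
-- Option 2: preload the module under the exact name "grid.txt"
package.preload["grid.txt"] = function()
  return dofile("grid.txt")        -- run the file and hand its return value to require
end
local grid2 = require("grid.txt")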

Execute scripts by relative path in Oracle SQL Developer

First, this question relates to Oracle SQL Developer 3.2, not SQL*Plus or iSQL, etc. I've done a bunch of searching but haven't found a straight answer.
I have several collections of scripts that I'm trying to automate (and btw, my SQL experience is pretty basic and mostly MS-based). The trouble I'm having is executing them by a relative path. For example, assume this setup:
scripts/
  A/
    runAll.sql
    A1.sql
    A2.sql
  B/
    runAll.sql
    B1.sql
    B2.sql
I would like to have a file scripts/runEverything.sql something like this:
@@/A/runAll.sql
@@/B/runAll.sql
scripts/A/runAll.sql:
@@/A1.sql
@@/A2.sql
where "@@", I gather, means relative path in SQL*Plus.
I've fooled around with making variables but without much luck. I have been able to do something similar using '&1' and passing in the root directory. I.e.:
scripts/runEverything.sql:
@'&1/A/runAll.sql' '&1/A'
@'&1/B/runAll.sql' '&1/B'
and call it by executing this:
@'c:/.../scripts/runEverything.sql' 'c:/.../scripts'
But the problem here has been that B/runAll.sql gets called with the path: c:/.../scripts/A/B.
So, is it possible with SQL Developer to make nested calls, and how?
This approach has two components:
- Set up the active SQL Developer worksheet's folder as the default directory.
- Open a driver script, e.g. runAll.sql (which then changes the default directory to the active working directory), and use relative paths within the runAll.sql script to call sibling scripts.
Set up your scripts' default folder. On the SQL Developer toolbar, use this navigation:
Tools > Preferences
In the preference dialog box, navigate to Database > Worksheet > Select default path to look for scripts.
Enter the default path to look for scripts as the active working directory:
"${file.dir}"
Create a driver script file and place all associated scripts alongside it:
runAll.sql
A1.sql
A2.sql
The content of runAll.sql would include:
@A1.sql;
@A2.sql;
To test this approach, in SQL Developer, click on File, then navigate to and open the script\runAll.sql file.
Next, select all (on the worksheet), and execute.
Through the act of navigating to and opening the runAll.sql worksheet, the default file folder becomes "script".
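Applied to the layout in the question, a sketch of the nested driver scripts under this preference; it uses @@, the SQL*Plus syntax that resolves a path relative to the calling script, which SQL Developer's worksheet also understands:
-- scripts/runEverything.sql
@@A/runAll.sql
@@B/runAll.sql
-- scripts/A/runAll.sql
@@A1.sql
@@A2.sql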
I don't have access to SQL Developer right now so I can't experiment with the relative paths, but with the substitution variables I believe the problem you're seeing is that the positional variables (i.e. &1) are redefined by each start or @. So after your first @runAll, the parent script sees the same &1 that the last child saw, which now includes the /A.
You can avoid that by defining your own variable in the master script:
define path=&1
@'&path/A/runAll.sql' '&path/A'
@'&path/B/runAll.sql' '&path/B'
As long as runAll.sql, and anything it runs, does not also (re)define path, this should work; you just need to choose a unique name if there is a risk of a clash.
Again I can't verify this but I'm sure I've done exactly this in the past...
You need to provide the path of the file as a string; give the path in double quotes and it will work.
For example:
@"C:\Users\Arpan Saini\Zions R2\Reports Statements and Notices\Patch\08312017_Patch_16.2.3.17\DB Scripts\snsp.sql";
Execution of SQL:
@yourPath\yourFileName.sql
How to pass parameters to a file:
@A1.sql; (parameter)
@A2.sql; (parameter)
This is not an absolute or relative path issue. It's a SQL interpreter issue: by default it looks for files with the .sql extension.
Please make sure to rename the file to file_name.sql.
Ex: if the workspace has a file named "A", then rename it from A to "A.sql".

Matlab can't find member functions when directory changes. What can I do?

I have a Matlab program that does something like this
cd media;
for i = 1:files
    d(i).r = % some matlab file read command
    d(i).process();
end
cd ..;
When I change to my "media" directory I can still access member properties (such as 'r'), but Matlab can't seem to find functions like process(). How is this problem solved? Is there some kind of global function pointer I can call? My current solution is to do 2 loops, but this is somewhat deeply chagrining.
There are two solutions:
don't change directories - instead pass the file path to your file read command (a short sketch follows below this answer), e.g.
d(i).r = load(['media' filesep 'yourfilename.mat']);
or
add the directory containing your process() to the MATLAB path:
addpath('C:\YourObjectsFolder');
As mentioned by tdc, you can use
addpath(genpath('C:\YourObjectsFolder'));
if you also want to add all subdirectories to your path.
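For illustration, a minimal sketch of the first option, keeping the working directory fixed and building paths with fullfile (the .mat extension is just an assumption; d is assumed to already be an array of your class objects, as in the question):
files = dir(fullfile('media', '*.mat'));              % list the data files without cd'ing into media
for i = 1:numel(files)
    d(i).r = load(fullfile('media', files(i).name));  % read via a relative path
    d(i).process();                                   % method lookup is unaffected, since the path is unchanged
end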
Jonas already mentioned addpath, but I usually use it in combination with genpath:
addpath(genpath('path_to_folder'));
which adds all of the subdirectories of 'path_to_folder' as well.