Label ClearCase branch via batch script

I need to create a ClearCase label script to run on a UNIX server.
Labels will not always be on the latest build, and the script needs to be run via a manual process.
It will label every file on a branch of code at a specific version (currently selected by a timestamp; the timestamp comes from a Hudson build engine, which will create these scripts and FTP them to the UNIX server).
The build server (Windows) is a different machine from the one the script will be run on (UNIX).
The build server currently populates and then builds from a snapshot view.
Users do have ClearCase access and permissions.
The code is never built from the UNIX machine; it is a central location where multiple people can go to label the code.
Is it necessary to recreate the view on the UNIX server to label (i.e. do I need to start the view, label, and then stop the view)? Or could I do something more lightweight?

For this kind of task, I definitely recommend using one dynamic view, combined with a time-based selection rule.
You can:
first create a config spec file with the right selection rule, based on the timestamp used by the build process;
then set that config spec on your view (cleartool setcs /path/to/config/spec/file, see setcs).
The whole process doesn't require stopping/restarting the view.
And since it uses a dynamic view, there is no 'update' time to wait (no file to load).
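For illustration, the whole sequence could look like the sketch below; the view tag, VOB path, timestamp, and label name are all placeholders for your own values:

# label.cs - config spec selecting the versions as of the build time
element * CHECKEDOUT
element * .../my_branch/LATEST -time 15-Mar-2023.10:30
element * /main/LATEST -time 15-Mar-2023.10:30

# apply it to the dedicated dynamic view, then create and apply the label
cleartool setcs -tag label_view /tmp/label.cs
cleartool mklbtype -nc BUILD_123@/vobs/myvob
cleartool mklabel -recurse BUILD_123 /view/label_view/vobs/myvob/src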
The OP adds in the comments:
What is the benefit of labeling the current dynamic view (set by a time in the config spec) vs labeling the contents of the dynamic view via selecting a version based on timestamp?
(I take this all to mean it is impossible to label without being in a view)
First, yes, you need to be in a view to label.
And ClearCase will label what it sees in the view (i.e. the versions selected by the current config spec).
Now, it is better to have a dedicated dynamic view for that kind of operation, because that avoids interfering with any other view you might be using for any other operation.
This dynamic view can be the only one needed for labeling operations, and by setting the right time-based config spec selection rule, you ensure you label what was actually used at the time of your build.

How do I create a Qlik Sense app with multiple parameterised instances

I'm brand new to Qlik. I've been handed a very complicated application, with a lot of constantly changing business logic, that can run against three different databases - i.e. dev/test/prod. Basically, to decide which one it runs against, the developers have been opening the app, changing a variable at the top to point at the right environment, and then running it.
To me, there's nothing OK about having to change the code each time I want to run. I know I could duplicate the app for each environment - but that's even worse, because then there are three places to maintain the logic when it changes.
What I want is to have three instances that somehow share code - for instance, create three apps - "run_dev", "run_test", "run_prod" - that just set a variable and then call a fourth app which holds the actual code...
But I have no idea how to do it. What's the best-practice way of having a single app with different "modes" of operation - surely people don't change the code every time they run?
It's probably better to keep the variable in an external script. Then, when you want to change the environment, just edit the external script and reload the app.
Loading external scripts is done through Include/Must_include. The external script is just a text file with Qlik load script (so you can edit the file with any text editor)
(The difference between Include and Must_include is that Must_include will throw an error if the external script is not found)
Example:
// External script - environmentSetup.qvs
set vDataConnectionName = DEV;
// Actual app that loads the data (pseudo script) (Qlik Sense)
$(Must_Include=lib://Folder-Connection-To-Script-Location/environmentSetup.qvs);
LIB CONNECT TO '$(vDataConnectionName)';
Load *;
SQL SELECT * FROM `Some_Table`;
Another possible option is to use a Binary load. This type of load loads the data from another qvf/qvw file. It basically opens the target file and loads all the data from it. Once loaded, the whole data model is available.
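As a rough illustration (the folder connection and file name here are made up), the Binary statement must be the very first statement of the script in the app that loads it:

// Hypothetical wrapper app: pull the entire data model from the core app
Binary [lib://Apps/CoreApp.qvf];
// environment-specific logic can follow here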

Is there an easy way to temporarily turn off parts of the scenario?

My aim is to temporarily turn off some of the Text Sinks for a specific batch run. My motive is that I want to save processing time and disk space. My wider aim is to easily switch not only between different text sinks but also parameter files, data loaders, etc.
A few things I've tried:
manually put the xml-files linked to the text sinks in a different folder --> this creates an error message (that possibly can be ignored?) and does not serve my wider aim of having different charts/data loaders/displays/etc.
create a completely new scenario-tree by copying the .rs folder and creating a new Run Configuration for that .rs folder --> if I want to change the parameters in all the scenarios at once, then I need to do it manually
try to create a new scenario.xml file (i.e., scenario2.xml) in the hope this would turn up as an alternative in the scenario tree --> nothing turned up in the GUI
Thus: Is there another easy way to temporarily turn off parts of the scenario?
What we've done in the past is create different scenarios for each type of run (your second option). Regarding the parameters in the scenario folders, you could potentially run a script to copy the version you want to all the scenario folders so you don't have to manually adjust each one.
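For example, a small shell script along these lines could do that copying; the folder layout and file names here are only assumptions, so adapt them to your project:

#!/bin/sh
# Assumed layout: all scenario folders (*.rs) sit next to a master copy
# of the parameter file; overwrite each scenario's copy with the master.
for dir in *.rs; do
  cp master/parameters.xml "$dir/parameters.xml"
done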

Update changes from Development instance to Production instance in Odoo

I have 2 instances of Odoo v9 running on the same server (Ubuntu 14.04). I want to make changes (install modules, change source code or anything) in the development instance and, after confirming they are OK, move the changes to the production instance. Is there any way of doing that without repeating the whole process of development?
Thank you.
As I understand it, you do not want to stop the production instance.
If they are only XML files, you might be able to get away with only updating the module from the frontend (Apps -> Your Module -> Update). Although, if you have modified the __openerp__.py file inside your module, you have to enter debug mode and click Update Apps List first of all.
For changes in files that are inside the static folder of your module, you do not need to stop the server. However, your users must press Ctrl + Shift + R in order to flush their caches and bring the new content into their browsers.
For Python source code I am afraid that you have to stop both instances of the server so that the code can be correctly recompiled.
(See note 1 on this)
In the end, you should stop and update everything, because unexpected things might pop up at random times due to resources not being properly updated.
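As a rough sketch (the service name, database name, and module name below are placeholders for your setup), a full stop-update-restart cycle on an Odoo 9 source install could look like this:

# stop the production instance (assumed init script name)
sudo service odoo-prod stop
# update the module against the production database, then exit
./odoo.py -d prod_db -u my_module --stop-after-init
# start the instance again
sudo service odoo-prod start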
Note 1: The Python documentation about the compilation of Python modules mentions, among other things:
As an important speed-up of the start-up time for short programs that use a lot of standard modules, if a file called spam.pyc exists in the directory where spam.py is found, this is assumed to contain an already-“byte-compiled” version of the module spam. The modification time of the version of spam.py used to create spam.pyc is recorded in spam.pyc, and the .pyc file is ignored if these don’t match.
So, theoretically, if you modify fileA.py in a module and a new fileA.pyc is generated, the server will be able to interpret and use it. In any case, I had an issue with two instances running where the .py file was creating the field and the XML file was using it, and the server reported that a field had not been created for the XML view; that means that the server did pick up and parse the XML file but did not recompile the .py.

How can I update Vim's dynamic SQL completion when the database changes?

I am using Vim's dynamic SQL completion and the dbext plugin, which provides completion of tables, columns, etc. by using a live connection to a database.
e.g. if I type <C-c>t (while in insert mode), a popup list of tables will appear.
However if the database schema changes - which of course it does when I'm developing it - the plugin doesn't update its local cache of the database schema.
The docs say this:
The SQL completion plugin caches various lists that are displayed in the popup window. This makes the re-displaying of these lists very fast. If new tables or columns are added to the database it may become necessary to clear the plugins cache. The default map for this is:
imap <buffer> <C-C>R <C-\><C-O>:call sqlcomplete#Map('ResetCache')<CR><C-X><C-O>
However, when I run the above command, <C-c>R or <C-\><C-O> or any combination, all Vim displays is a message that items from the cache have been removed.
But when I use the completion, it's still based on the old schema.
I have also tried pasting
:call sqlcomplete#Map('ResetCache')<CR><C-X><C-O>
directly into the command line, but that doesn't work either.
Is there any way I can get this cache to update so that the completion plugin is based on the current version of the database?
Or even just turn the caching off?

DB Evolution in Play Framework 2.0

In Play 1.0, when we changed a variable type or, for example, switched from @OneToMany to @ManyToMany in a model, Play handled the change automatically, but with Play 2.0 the evolution script drops the database. Is there any way to make Play 2.0 apply the change without dropping the DB?
Yes, there is a way. You need to disable the automatic re-creation of the 1.sql file and start writing your own evolutions containing ALTERs instead of CREATEs - numbering them 2.sql, 3.sql, etc.
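For instance, a manual evolution in conf/evolutions/default/2.sql could look like the sketch below (the table and column names are made up); each evolution pairs an Ups section with a Downs section that undoes it. Note also that the generated 1.sql carries a header comment saying it was created by Ebean DDL; removing that comment is what stops the automatic re-creation.

# --- !Ups
ALTER TABLE person ADD COLUMN nickname varchar(255);

# --- !Downs
ALTER TABLE person DROP COLUMN nickname;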
In practice, that means that if you're working with a single database you can also just... manage the database's tables and columns using your favorite DB GUI. The evolutions are useful only when you can't use a GUI (the host doesn't allow external connections and offers no GUI) or when you are planning to run many instances of the app on separate databases. Otherwise, manually writing the statements will probably be more complicated than using a GUI.
Tip: sometimes, if I'm not sure whether I added all the required relations and constraints to my manual evolutions, I delete them (inside a git-controlled folder!) and run the app with the Ebean plugin enabled, saving the proposed 1.sql but NOT applying the changes. Later, using git, I revert my evolutions, compare them with the saved auto-generated file, and convert the CREATE statements to ALTERs. There's no better option for managing changes without using third-party software.
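Roughly, that workflow could look like this (paths assume the default Play layout; the file names are illustrative):

# delete the manual evolutions (inside a git-controlled folder!)
rm conf/evolutions/default/*.sql
# start the app with the Ebean plugin enabled, let it propose a new 1.sql,
# but do NOT apply the changes; then save the proposal aside
cp conf/evolutions/default/1.sql /tmp/proposed.sql
# restore the manual evolutions with git and compare by hand
git checkout -- conf/evolutions/default
diff /tmp/proposed.sql conf/evolutions/default/1.sql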