How do I create a Qlik Sense app with multiple parameterised instances? (QlikView)

I'm brand new to Qlik. I've been handed a very complicated application with a lot of constantly changing business logic, and it can run against three different databases - i.e. dev/test/prod. To decide which one it runs against, the developers have been opening the app, changing a variable at the top to point at the right environment, and then running it.
To me, nothing about having to change the code each time I want to run is OK. I know I could duplicate the app for each environment - but that's even worse, because then there are three places to maintain the logic when it changes.
What I want is three instances that somehow share code - for instance, three apps - "run_dev", "run_test", "run_prod" - that just set a variable and then call a fourth app containing the actual code...
But I have no idea how to do it. What's the best-practice way of having a single app with different "modes" of operation - surely people don't change the code every time they run?

It's probably better to keep the variable in an external script. Then, when you want to change the environment, you just edit the external script and reload the app.
External scripts are loaded through Include/Must_Include. The external script is just a text file containing Qlik load script, so you can edit it with any text editor.
(The difference between Include and Must_Include is that Must_Include will throw an error if the external script is not found.)
Example:
// External script - environmentSetup.qvs
SET vDataConnectionName = DEV;

// Actual app that loads the data (pseudo script) (Qlik Sense)
$(Must_Include=lib://Folder-Connection-To-Script-Location/environmentSetup.qvs);

LIB CONNECT TO '$(vDataConnectionName)';

Load *;
SQL
SELECT * FROM `Some_Table`;
Another possible option is to use Binary load. This type of load pulls the data from another qvf/qvw file: it basically opens the target file and loads all the data from it. Once loaded, the whole data model is available.
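For example, a minimal sketch (the library path and file name are hypothetical) - note that Binary must be the very first statement in the load script:

// Must be the first statement in the app's script
Binary [lib://Apps/SharedLogic.qvf];

// Environment-specific settings can follow once the shared model is loaded
SET vEnvironment = DEV;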

Is there an easy way to temporarily turn off parts of the scenario?

My aim is to temporarily turn off some of the Text Sinks for a specific batch run. My motive is that I want to save processing time and disk space. My wider aim is to easily switch not only between different text sinks but also parameter files, data loaders, etc.
A few things I've tried:
- Manually putting the XML files linked to the text sinks in a different folder --> this creates an error message (that can possibly be ignored?) and does not serve my wider aim of having different charts/data loaders/displays/etc.
- Creating a completely new scenario tree by copying the .rs folder and creating a new Run Configuration for that .rs folder --> but if I want to change the parameters in all the scenarios at once, I need to do it manually.
- Trying to create a new scenario.xml file (i.e., scenario2.xml) in the hope this would turn up as an alternative in the scenario tree --> nothing turned up in the GUI.
Thus: Is there another easy way to temporarily turn off parts of the scenario?
What we've done in the past is create different scenarios for each type of run (your second option). Regarding the parameters in the scenario folders, you could potentially run a script that copies the version you want into all the scenario folders, so you don't have to adjust each one manually - see the sketch below.
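A minimal sketch of that idea in Python (the folder layout and file names are assumptions, not part of any tool):

import shutil
from pathlib import Path

# Assumed layout: one master parameter file, and one .rs folder per scenario.
master = Path("master_params/parameters.xml")
for scenario_dir in Path("scenarios").glob("*.rs"):
    # Overwrite each scenario's copy with the master version
    shutil.copy(master, scenario_dir / "parameters.xml")
    print(f"Updated {scenario_dir}")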

Execute one feature at a time during application execution

I'm using Karate in this way: during application execution, I get the test files from another source and create feature files based on what I get.
Then I iterate over the list of tests and execute them.
My problem is that by using
CucumberRunner.parallel(getClass(), 5, resultDirectory);
I execute all the tests at every iteration, which causes tests to be executed multiple times.
Is there a way to execute one test at a time during application execution? (I'm fully aware of the empty test class with an annotation to specify one class, but that doesn't seem to serve me here.)
I thought about creating every feature file in a new folder so that I could specify the path of the folder containing only one feature at a time, but CucumberRunner.parallel() accepts a Class and not a path.
Do you have any suggestions please?
You can explicitly set a single file (or even a directory path) to run via the annotation:
@CucumberOptions(features = "classpath:animals/cats/cats-post.feature")
I think you are already aware of the Java API, which can take one file at a time, but then you won't get reports.
Well, you can try this: set a system property cucumber.options with the value classpath:animals/cats/cats-post.feature and see if that works. If you add tags (search the docs), each iteration can use a different tag, and that would give you the behavior you need.
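A minimal sketch of that idea (the feature path and report directory are placeholders):

// Point Cucumber at a single generated feature before kicking off
// the runner for this iteration.
System.setProperty("cucumber.options",
        "classpath:animals/cats/cats-post.feature");
CucumberRunner.parallel(getClass(), 1, "target/iteration-reports");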
Just got an interesting idea: why don't you generate a single feature, and in that feature make calls to all the generated feature files?
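Something like this hypothetical wrapper feature (the generated file names are assumptions):

Feature: wrapper that runs all generated features

Scenario: call each generated feature in turn
    * call read('generated/test-1.feature')
    * call read('generated/test-2.feature')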
Also, how about programmatically deleting (or moving) the files after you are done with each iteration?
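For instance, a sketch using java.nio (the paths are hypothetical):

import java.nio.file.*;

// After an iteration, move the consumed feature out of the scanned folder
// so the next run doesn't pick it up again.
Path consumed = Paths.get("src/test/java/generated/test-1.feature");
Files.move(consumed, Paths.get("target/done/test-1.feature"),
        StandardCopyOption.REPLACE_EXISTING);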
If all the above fails, I would try to replicate some of this code: https://github.com/intuit/karate/blob/master/karate-junit4/src/main/java/com/intuit/karate/junit4/Karate.java

Update changes from Development instance to Production instance in Odoo

I have 2 instances of Odoo v9 running on the same server (Ubuntu 14.04). I want to make changes (install modules, change source code, or anything) in the development instance and, after confirming they are OK, move the changes to the production instance. Is there any way of doing that without repeating the whole development process?
Thank you.
As I understand it, you do not want to stop the production instance.
If the changes are only in XML files, you might be able to get away with just updating the module from the frontend (Apps -> Your Module -> Update). Although, if you have modified the __openerp__.py file inside your module, you have to enter debug mode and click Update Apps List first of all.
For changes in files inside the static folder of your module, you do not need to stop the server. However, your users must press Ctrl + Shift + R in order to flush their caches and bring the new content to their browsers.
For Python source code I am afraid that you have to stop both instances of the server so that the code can be correctly recompiled.
(See note 1 on this)
In the end you should stop and update everything, because unexpected things might pop up at random times due to resources not being properly updated.
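For that final stop-and-update pass, a hedged command-line sketch (the database and module names are placeholders; in v9 the server entry point is typically odoo.py):

# Stop the instance, then update the module on the given database
./odoo.py -d production_db -u your_module --stop-after-init
# ...then start the instance again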
Note 1: The Python documentation about the compilation of Python modules mentions, among other things:
As an important speed-up of the start-up time for short programs that use a lot of standard modules, if a file called spam.pyc exists in the directory where spam.py is found, this is assumed to contain an already-“byte-compiled” version of the module spam. The modification time of the version of spam.py used to create spam.pyc is recorded in spam.pyc, and the .pyc file is ignored if these don’t match.
So theoretically, if you modify fileA.py in a module and a new fileA.pyc is generated, the server will be able to interpret and use it. In any case, I had an issue with two instances running where the py file was creating a field and the XML file was using it, and the server reported that the field had not been created for the XML view - meaning the server did pick up and parse the XML file but did not recompile the py.

Custom Building Block Template won't load reliably

My small collection of document-specific macros and quickpart building blocks is growing! I'm starting to share these with employees, and I'm looking to set up each remote computer once only. From then on, I'd update the collections on a network path. Because each computer looks to the shared location, everyone should always be working with up-to-date macros, quickparts, etc.
So. What I already know:
- Required macros are saved in a separate module, ready to be shared/exported.
- Macros themselves occasionally reference local paths on my computer.
- I will need to reference paths with generic code or use Environ variables.
- Building blocks and quickparts are saved in a separate template file (currently located in Appdata, along with default building block file).
What I don't know:
a) How to point Word to a network path to retrieve macros from custom macro files. (Would I just have to import a fresh macro file at every important update, on each PC?)
b) What's the best way to load a building block item from a CUSTOM path?
My custom BuildingBlock template file is not loaded properly on occasion:
Dim objTemplate As Template
Dim objBB As BuildingBlock

'Set the template that stores the building block
Set objTemplate = Templates("C:\Users\[USER]\AppData\Roaming\Microsoft" & _
    "\Document Building Blocks\1033\CustomBBlocks.dotx")
Set objBB = objTemplate.BuildingBlockEntries.Item("[EntryName]")
I know this because the code spits out a 'CollectionDoesntExist' error unless I click the Quickparts gallery prior to running the code for the first time. So it's like Word can't be bothered to open the template file and look inside unless I do it from the UI first.
Of course, if I first open the Quickparts gallery from the UI prior to running my code, Word seems to figure it out and inserts the correct Building Block entry without any issue.
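(A hedged sketch of one possible workaround, assuming the Word object model's Templates.LoadBuildingBlocks method behaves as documented:

'Ask Word to load all building-block templates up front, so the
'Templates collection below is populated without opening the gallery.
Application.Templates.LoadBuildingBlocks
Set objTemplate = Templates("C:\Users\[USER]\AppData\Roaming\Microsoft" & _
    "\Document Building Blocks\1033\CustomBBlocks.dotx")
)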
In the past I've worked on a product that allows building blocks for Word too. Some sites have hundreds of templates and maybe 1,000 elements (see Composition). The approach we took was different and successful.
You are trying to deploy software elements (macros) across a large number of workstations. You can try to get it working using the possibilities of Microsoft Word and Windows, but it will be sensitive to problems when things change - for instance, switching to Office 2013, splitting a domain into two, working from home without a VPN, etc.
Option 1 - DIY deployment: better to put the macros and other stuff behind a webpage, webservice or the like. Deploy on each workstation a generic program that pulls everything in and deploys it locally. You might want to hand some parameters to the webpage being called to restrict the amount of data, and you might want to cache things locally.
Option 2 - Use ClickOnce: write a ClickOnce deployment script, include the necessary references, and put it on a shared network drive or an http address. ClickOnce automagically upgrades your software when it finds a new version. It even works across the internet. And it does nothing when there is no new version.
Option 3 - Database: put the elements centrally in a database, allowing end users to change building blocks through forms. Have Microsoft Word, in combination with a ClickOnce program, pull them in.
For Composition we've used option 2 and 3.

A process monitor based on periodic SQL selects - does this exist or do I need to build it?

I need a simple tool to visualize the status of a series of processes (ETL processes, but that shouldn't matter). This process monitor needs to be customizable, with color coding for different status codes. The plan is to place the monitor on a big screen in the office, making any faults instantly visible to everyone.
Today I can check the status of these processes by running an SQL statement against the underlying tables in our Oracle database. The output of these queries is the above-mentioned status code for each process. I'm imagining using these SQL statements, run periodically (say, every minute or so), as the input to this monitor.
I've considered writing a simple web interface for doing this, but I'm thinking something like this should exist out there already. Anyone have any suggestions?
If you're just displaying on one workstation, another option is SQL Developer Custom Reports. You would still have to fire up SQL Developer and start the report, but custom reports have a setting so they can be refreshed at a specified interval (5-120 seconds). Depending on the 'richness' of the output you want, you can either:
- Create a simple Table report (Style = Table) and paste in one of the queries you already use as a starting point, or
- Create a PL/SQL block that outputs HTML via DBMS_OUTPUT.PUT_LINE statements (Style = plsql-dbms_output), and get as creative as you like with formatting, colors, etc. using HTML tags in the output (see the sketch below). I have used this to create bar graphs showing the progress of v$Long_Operations. A full description and screenshots are available here: Creating a User Defined HTML Report in SQL Developer.
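A minimal sketch of the PL/SQL approach (the table and status codes are made up for illustration):

-- Emit a color-coded HTML table of process statuses via DBMS_OUTPUT
-- (in SQL*Plus, run with SET SERVEROUTPUT ON)
BEGIN
  DBMS_OUTPUT.PUT_LINE('<table border="1">');
  FOR r IN (SELECT process_name, status_code FROM etl_process_status) LOOP
    DBMS_OUTPUT.PUT_LINE('<tr><td>' || r.process_name || '</td><td bgcolor="'
      || CASE r.status_code WHEN 'OK'   THEN 'green'
                            WHEN 'WARN' THEN 'yellow'
                            ELSE 'red' END
      || '">' || r.status_code || '</td></tr>');
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('</table>');
END;
/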
If you just want to get some output moving, you can forgo SQL Developer: schedule a process that uses your PL/SQL block to write HTML output to a file, and use a browser to display the generated output on your big screen. Alternatively, make the file available via a web server so others in your office can bring it up. Periodically regenerate the file, and make sure to add a refresh meta tag to the page (e.g. <meta http-equiv="refresh" content="60">) so browsers will periodically reload.
Oracle Application Express is probably the best tool for this.
I would say roll your own dashboard. It depends on your skill set, but I'd do a basic web app in Java (Spring or some MVC framework; I'm not a web developer, but I know enough to create a basic functional dashboard). Since you already know the SQL needed, it shouldn't be difficult to put together, and you can modify it as needed in future. Just keep it simple, I would say - you don't need middleware, single sign-on, or fancy views/charts.