I have been reading and trying everything I can, but I can't find a way to mock the input data of my JavaScript project. It is supposed to be a window in a web page that inherits its data from cached variables it reads on startup.
I can modify the code by hand and hard-code the data in it, but then there is the risk of forgetting to remove the data before committing.
Any ideas on how to pass the data to the project from a global variable while it starts up?
PS: I can mock the service calls through package.json and produce the desired result in the window, but I would rather pass in the input data and verify that the services work correctly without needing to mock them.
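For context, what I'm imagining is something along these lines (the global name and the cache helper are made up for illustration, not my real code):

// Hypothetical startup code: prefer data injected through a global, fall back to the real cache.
function loadFromCache() {
  // placeholder for the real cached-variable read
  return {};
}

function getStartupData() {
  // A test page or mock script could set window.__MOCK_INPUT_DATA__ before the app starts.
  if (typeof window !== 'undefined' && window.__MOCK_INPUT_DATA__) {
    return window.__MOCK_INPUT_DATA__;
  }
  return loadFromCache();
}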
I'm brand new to Qlik. I've been handed a very complicated application with a lot of constantly changing business logic that can run against three different databases, i.e. dev/test/prod. To decide which one it runs against, the developers have been opening the app, changing a variable at the top to point at the environment it should run against, and then running it.
To me, having to change the code each time I want to run the app is not OK. I know I could duplicate the app for each environment, but that's even worse, because then there are three places to maintain the logic when it changes.
What I want is three instances that somehow share code: for instance, create three apps ("run_dev", "run_test", "run_prod") that just set a variable and then call a fourth app containing the actual code...
But I have no idea how to do it. What's the best-practice way of having a single app with different "modes" of operation? Surely people don't change the code every time they run it?
It's probably better to keep the variable in an external script. When you want to change the environment, just edit the external script and reload the app.
Loading external scripts is done through Include/Must_include. The external script is just a text file containing Qlik load script, so you can edit it with any text editor.
(The difference between Include and Must_include is that Must_include will throw an error if the external script is not found)
Example:
// External script - environmentSetup.qvs
set vDataConnectionName = DEV;
// Actual app that loads the data (pseudo script) (Qlik Sense)
$(Must_Include=lib://Folder-Connection-To-Script-Location/environmentSetup.qvs);
LIB CONNECT TO '$(vDataConnectionName)';
LOAD *;
SQL SELECT * FROM `Some_Table`;
Another possible option is to use a Binary load. This type of load reads data from another qvf/qvw file: it basically opens the target file and loads all the data from it. Once loaded, the whole data model is available.
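A minimal sketch (the library path and app name are placeholders; the Binary statement must be the very first statement in the load script):

Binary [lib://Apps/BaseApp.qvf];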
I have this scenario because I've been looking at the new fully static release (https://nuxtjs.org/blog/going-full-static/). I'm having some issues when upgrading to fully static because of my current workflow, which is as follows:
Currently, I call an API before the build to populate my data -> npx build -> npx export -> delete the stored data. That way, from my understanding, asyncData caches that data on the server side and it works perfectly fine on the client side. This in turn "builds" my new pages if there is new data received from my API during the npx export command.
However, the new nuxt generate only builds when a change is detected in my files. The problem is that my data is populated and then deleted, so nuxt generate will always skip the build phase since no changes are detected -> no new pages are generated from my new data.
I am thinking of the following, but it doesn't sound ideal:
Run a separate JS file to populate my API data -> then call npx generate -> then run another separate JS file to delete the API data, so that whenever npx generate runs it detects the data from the API. But this would cause npx generate to always run the build phase, which isn't the intended purpose of it (?)
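For illustration, this is roughly what I mean, wired up through package.json (the script and file names are just placeholders I made up):

{
  "scripts": {
    "populate": "node scripts/populate-data.js",
    "cleanup": "node scripts/delete-data.js",
    "generate:full": "npm run populate && nuxt generate && npm run cleanup"
  }
}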
MILLION DOLLAR QUESTION
I am aware that npx generate is supposed to skip the build for quicker "exports" and page generation. Is there a better/correct way of avoiding the build (and saving time, as intended) while still being able to generate my pages as new data comes in from my API?
I'm using Karate in the following way: during application execution, I get the test files from another source and create feature files based on what I get.
Then I iterate over the list of tests and execute them.
My problem is that by using
CucumberRunner.parallel(getClass(), 5, resultDirectory);
I execute all the tests at every iteration, which causes tests to be executed multiple times.
Is there a way to execute one test at a time during application execution? (I'm fully aware of the empty test class with an annotation to specify one class, but that doesn't seem to serve me here.)
I thought about creating each feature file in a new folder so that I can specify the path of a folder containing only one feature at a time, but CucumberRunner.parallel() accepts a Class, not a path.
Do you have any suggestions please?
You can explicitly set a single file (or even directory path) to run via the annotation:
@CucumberOptions(features = "classpath:animals/cats/cats-post.feature")
I think you are already aware of the Java API, which can take one file at a time, but then you won't get reports.
Well, you can try this: set the cucumber.options system property to the value classpath:animals/cats/cats-post.feature and see if that works. If you add tags (search the docs), each iteration can use a different tag, and that would give you the behavior you need.
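A rough sketch of that, reusing the runner call from the question (the feature path is the example one from above, and whether the runner honors the property is exactly the "see if that works" part):

// Before each iteration: point Cucumber at the single generated feature,
// then run it on one thread using the same call as in the question.
System.setProperty("cucumber.options", "classpath:animals/cats/cats-post.feature");
CucumberRunner.parallel(getClass(), 1, resultDirectory);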
I just got an interesting idea: why don't you generate a single feature, and in that feature make calls to all the generated feature files?
Also, how about programmatically deleting (or moving) the files after you are done with each iteration?
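For the move idea, plain java.nio would do it; something like this (the paths are hypothetical):

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

// After an iteration: move the executed feature out of the folder the runner scans.
Path source = Paths.get("src/test/resources/generated/cats-post.feature");
Path done = Paths.get("target/executed-features/cats-post.feature");
Files.createDirectories(done.getParent());
Files.move(source, done, StandardCopyOption.REPLACE_EXISTING);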
If all the above fails, I would try to replicate some of this code: https://github.com/intuit/karate/blob/master/karate-junit4/src/main/java/com/intuit/karate/junit4/Karate.java
Using NiFi v0.6.1, is there a way to import backups/archives?
And by backups I mean the files that are generated when you call
POST /controller/archive using the REST API, or use "Controller Settings" (toolbar button) and then "Back-up flow" (link).
I tried unzipping the backup and importing it as a template, but that didn't work. After comparing it to an exported template file, though, the formats are quite different. Perhaps there is a way to transform it into a template?
At the moment my current workaround is to not select any components on the top-level flow and then select "create template", which adds a template with all my components. Then I just export that. My issue with this is that it's a bit more tricky to automate via the REST API. I used Fiddler to determine what the UI is doing: it first generates a snippet that includes all the components (labels, processors, connections, etc.), then it calls create template (POST /nifi-api/controller/templates) using the snippet ID. So the template call is easy enough, but generating the definition for the snippet is going to take some work.
Note: Once the following feature request is implemented I'm assuming I would just use that instead:
https://cwiki.apache.org/confluence/display/NIFI/Configuration+Management+of+Flows
The entire flow for a NiFi instance is stored in a file called flow.xml.gz in the conf directory (flow.xml.tar in a cluster). The back-up functionality is essentially taking a snapshot of that file at the given point in time and saving it to the conf/archive directory. At a later point in time you could stop NiFi and replace conf/flow.xml.gz with one of those back-ups to restore the flow to that state.
Templates are a different format from the flow.xml.gz. Templates are more public-facing and shareable, and can be used to represent portions of a flow, or the entire flow if no components are selected. Some people have used templates as a model to deploy their flows, essentially organizing their flow into process groups and making a template for each group. This project provides some automation to work with templates: https://github.com/aperepel/nifi-api-deploy
You just need to stop NiFi, replace the nifi flow configuration file (for example this could be flow.xml.gz in the conf directory) and start NiFi back up.
If you have trouble finding it, check your nifi.properties file for the string nifi.flow.configuration.file= to find out what you've set it to.
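For reference, the relevant entries with their default values look like this (yours may differ):

# conf/nifi.properties
nifi.flow.configuration.file=./conf/flow.xml.gz
nifi.flow.configuration.archive.dir=./conf/archive/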
If you are using clustered mode you need only do this on the NCM.
I'm running a VB.NET app that takes command line arguments, stores them in variables, and, for example, puts them in a textbox. I want an external app to pass data to my app every minute by calling my app with the data as an argument.
I know I can get the command line arguments using GetCommandLineArgs. But can I get 'new' args while the app is running, without restarting it?
Example:
- I start the app using "myapp.exe argument1". This shows "argument1" in the textbox.
- Next, I run "myapp.exe argument2" (while myapp.exe is still running); myapp should just keep on running, but now display "argument2".
Is this possible using command line args, or do I need to use another approach?
Thanks!
But can I get 'new' args while running, without restarting the app?
No, command line arguments are only set once during the lifetime of a running application. You will need to use another approach to pass the data to your application (WCF, sockets, database, files, remoting, named pipes, ...).
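For example, a rough named-pipes sketch (the pipe name and module layout are placeholders, not a drop-in solution):

Imports System.IO
Imports System.IO.Pipes

' Receiver: the long-running app listens for new "arguments" instead of being restarted.
Module Receiver
    Sub Listen()
        While True
            Using server As New NamedPipeServerStream("MyAppDataPipe")
                server.WaitForConnection()
                Using reader As New StreamReader(server)
                    Dim message As String = reader.ReadLine()
                    ' Update the textbox here (invoke on the UI thread in a real app).
                    Console.WriteLine("Received: " & message)
                End Using
            End Using
        End While
    End Sub
End Module

' Sender: the external app pushes the data once a minute instead of launching myapp.exe again.
Module Sender
    Sub Send(data As String)
        Using client As New NamedPipeClientStream(".", "MyAppDataPipe", PipeDirection.Out)
            client.Connect(1000)
            Using writer As New StreamWriter(client)
                writer.WriteLine(data)
            End Using
        End Using
    End Sub
End Module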