I've run into this scenario while looking at the recent fully static release (https://nuxtjs.org/blog/going-full-static/). I have some issues upgrading to full static because of my current workflow, which is as follows:
Currently, I call an API before the build to populate my data -> npx build -> npx export -> delete the stored data. That way, from my understanding, asyncData caches that data on the server side and it works perfectly fine on the client side. This in turn "builds" my new pages if new data comes in from my API during the npx export command.
However, with the new nuxt generate, it only builds when a change is detected in my files. The thing is that my data is populated and then deleted, so nuxt generate will always skip the build phase since no changes are detected -> no new pages are generated from my new data.
I am thinking of the following, but it doesn't sound ideal:
Run a separate JS file to populate my API data -> then call npx generate -> then run another separate JS file to delete the API data, so that whenever npx generate runs, it detects the data from the API (roughly the command sequence sketched below). But this would cause npx generate to always run the build phase, which defeats the intended purpose (?)
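For illustration, that workaround would look roughly like this command sequence (the script file names and the data path are placeholders, not part of my actual project):

node scripts/fetch-data.js    # hypothetical script: writes content/data.json, which asyncData reads
npx nuxt generate             # detects the changed files, so it runs the build and generates the pages
node scripts/clean-data.js    # hypothetical script: deletes content/data.json again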
MILLION DOLLAR QUESTION
I am aware that npx generate is supposed to skip the build for quicker "exports" and page generation. I am wondering if there is a better/correct way of avoiding the build (and saving time, as intended) while still being able to generate my pages as new data comes in from my API.
I have been reading and trying everything I can, but I can't find a way to mock the input data of my JavaScript project. It is supposed to run as a window in a web page that inherits its data from cached variables, which it reads when starting.
I can modify the code by hand and put the data inside it, but then there is the risk of forgetting to delete that data before committing.
Any ideas on how to pass the data to the project through a global variable while it starts?
PS: I can mock the service calls through package.json and generate the desired result in the window, but I would prefer to pass the input data and verify that the services work correctly without the need to mock them.
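For clarity, what I have in mind is something like the following sketch: a small file included only in the dev/test page, before the application bundle, so nothing has to be deleted from the application code before committing (the global name window.__APP_DATA__ and the fields are made up for illustration):

// mock-data.js - hypothetical file, referenced by a <script> tag before the app bundle
// Seeds the global the app reads on startup, but only if it isn't already set.
window.__APP_DATA__ = window.__APP_DATA__ || {
  user: { id: 1, name: "Test User" },
  settings: { locale: "en" }
};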
I'm brand new to Qlik. I've been handed a very complicated application with a lot of constantly changing business logic that can run against three different databases - i.e. dev/test/prod. To decide which one it runs against, the developers have been opening the app, changing a variable at the top to point at the environment it should run against, and then running it.
To me, there's nothing OK about having to change the code each time I want to run it. I know I could duplicate the app for each environment - but that's even worse, because then there are three places to maintain the logic when it changes.
What I want is to have three instances that somehow share code - for instance, create three apps - "run_dev", "run_test", "run_prod" - that just set a variable and then call a fourth app which contains the actual code...
But I have no idea how to do it. What's the best-practice way of having a single app with different "modes" of operation - surely people don't change the code every time they run it?
It's probably better to keep the variable in an external script. When you want to change the environment, just edit the external script and reload the app.
Loading external scripts is done through Include/Must_include. The external script is just a text file containing Qlik load script (so you can edit the file with any text editor).
(The difference between Include and Must_include is that Must_include will throw an error if the external script is not found)
Example:
// External script - environmentSetup.qvs
set vDataConnectionName = DEV;
// Actual app that loads the data (pseudo script) (Qlik Sense)
$(Must_Include=lib://Folder-Connection-To-Script-Location/environmentSetup.qvs);
LIB CONNECT TO '$(vDataConnectionName)';
Load *;
SQL SELECT * FROM `Some_Table`;
Another possible option is to use a Binary load. This type of load loads data from another qvf/qvw file: it basically opens the target file and loads all the data from it. Once loaded, the whole data model is available.
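For example, a minimal sketch (the folder connection and app name are placeholders) - keep in mind that a Binary statement has to be the very first statement in the load script:

// Must be the first statement in the script (Qlik Sense)
Binary [lib://Folder-Connection-To-Apps/SharedDataModel.qvf];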
We're using Go.Cd and transitioning to Bamboo.
One of the features we use in Go.Cd is value stream maps. This enables triggering another pipeline and passing information (and build artifacts) to the downstream pipeline.
This is valuable when an upstream build has a particular version number, and you want to pass that version number to the downstream build.
I want to replicate this setup in Bamboo (without a plugin).
My question is: Is there a way to trigger a child plan in Bamboo and pass it information like a version number?
This has three steps.
1. Use a parent plan/child plan to set up the relationship.
2. Using the Artifacts tab, set up shared artifacts to transfer files from one plan to another.
3a. At the end of the parent build, dump the environment variables to a file
env > env.txt
3b. Set up (using the Artifacts tab) an artifact selector that picks this file up.
3c. Set up a fetch for this artifact from the shared artifacts in the child plan.
3d. Using the Inject Variables task, read the env.txt file you have transferred over. The build number from the original pipeline is now available in this downstream plan. (Just like Go.Cd.)
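For illustration, the dumped file is just key=value lines, and after the inject task (with a namespace you choose - "inject" below is only an example, as are the variable names) the values can be referenced by later tasks in the child plan:

# env.txt produced by `env > env.txt` in the parent build
APP_VERSION=1.4.2
bamboo_buildNumber=42

# In the child plan, after injecting with namespace "inject", later tasks can reference e.g.:
#   ${bamboo.inject.APP_VERSION}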
I'm having a hard time migrating one of our enterprise MVC projects to Core 2.1.
I want to move the project from Controllers + Views/Partials to the new Razor Pages + View Components structure. We have a ton of models, components, and actions there.
When I convert projects I usually move things around, copy items to new paths, run automated refactorings, and create new or change existing classes to fit the new requirements and design - and that BREAKS the project. A build is the last thing I do, when everything is already set up, just to see if I missed something.
Now, after a few refactorings and breaking changes, I can't add new items (Razor pages, view components and so on) just because the project is not buildable.
"There was an error running the selected code generator: Failed to build project..."
Basically it forces me to do everything manually and check every copied/migrated piece of code just to add a new item!
I'm in a nightmare, please someone wake me up. How do I disable this thing? Or can someone suggest a migration strategy for large projects?
First, and most importantly, adding an item via scaffolding will always kick off a project build. The scaffolding needs the project to be in a consistent state in order to function correctly. There is no way around this.
Aside from that, Visual Studio will only rebuild on changes if you're actually running the site. So if you've got it running in IIS Express, kill it to avoid that.
For what it's worth, it's better to correct errors as you go anyway. It's much easier to process a few errors at a time than hundreds all at once, and you'll also be able to take advantage of Visual Studio's refactoring features, which only work when the project can build, so the total amount of work you have to do is usually far less.
In MVC:
Add -> View adds the scaffold without a build.
In Core:
Add -> View adds the scaffold + a build.
Add -> New Item -> View adds without a build.
So if you don't want to run a build on every single add in Core, use
Add -> New Item -> ...
I have 2 instances of Odoo v9 running on the same server (Ubuntu 14.04). I want to make changes (install modules, change source code or anything) in the development instance and, after confirming they are OK, move the changes to the production instance. Is there any way of doing that without repeating the whole development process?
Thank you.
As I understand it, you do not want to stop the production instance.
If the changes are only in XML files, you might be able to get away with just updating the module from the frontend (Apps -> Your Module -> Update). However, if you have modified the __openerp__.py file inside your module, you have to enter debug mode and click Update Apps List first.
For changes in files inside the static folder of your module, you do not need to stop the server, although your users must press Ctrl + Shift + R in order to flush their caches and bring the new content to their browsers.
For Python source code I am afraid you have to stop both instances of the server so that the code can be correctly recompiled.
(See note 1 on this)
In the end you should stop and update everything, roughly as sketched below, because unexpected things might pop up at random times due to resources not being properly updated.
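For example, a rough sketch of a full stop-update-start cycle on the production instance (the service name, config path, database and module names are placeholders, and the Odoo 9 executable may be odoo.py or openerp-server depending on how it was installed):

# Stop the production instance
sudo service odoo-prod stop
# Update the changed module against the production database
./odoo.py -c /etc/odoo/odoo-prod.conf -d prod_db -u my_module --stop-after-init
# Start the production instance again
sudo service odoo-prod start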
Note 1: The Python documentation about the compilation of Python modules mentions, among other things:
As an important speed-up of the start-up time for short programs that
use a lot of standard modules, if a file called spam.pyc exists in the
directory where spam.py is found, this is assumed to contain an
already-“byte-compiled” version of the module spam. The modification
time of the version of spam.py used to create spam.pyc is recorded in
spam.pyc, and the .pyc file is ignored if these don’t match.
So, theoretically, if you modify fileA.py in a module and a new fileA.pyc is generated, the server will be able to interpret and use it. In any case, I had an issue with two running instances where the .py file was creating a field and the XML file was using it, and the server reported that the field had not been created for the XML view. That means the server did pick up and parse the XML file but did not recompile the .py file.