Update production database: JHipster - Liquibase

I am having a hard time understanding how Liquibase works with JHipster. I am using Boxfuse to deploy a jar file to AWS. Since my application is evolving continuously, I need to add/drop columns/tables every week.
The application works fine when I deploy it for the first time, and it also works fine when there is no change in the database schema, only changes in the code. Now I need to add a column to the database. I used the mvn liquibase:update command with the changesets containing the respective changes in master.xml. The changes were applied to the database; I confirmed them.
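For illustration, a minimal sketch of that update step, assuming the standard liquibase-maven-plugin configuration that JHipster generates in pom.xml:

# Check which changesets referenced from master.xml are still pending:
mvn liquibase:status

# Apply the pending changesets to the configured database:
mvn liquibase:update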
Now, when I deploy the updated jar with the changes, it gives an error and fails to deploy. The Boxfuse console does not show any specific error; it just says that the upload failed.
Can someone help me with this? Thanks.

Related

I'm having trouble with extended entities

This question is related to "I need help upgrading OroCommerce to 4.1.1".
I'm getting several errors related to extended entities... I believe there must be something wrong with cache building but I can't find the root cause (nor a solution :( ).
I checked the db structure on my production server against the VM where everything is working just fine, and I can't see any significant difference (meaning the new fields, such as digitalAsset_id in the oro_attachment_file table or wysiwyg in oro_fallback_localization_val, are there).
I just ran an extra php bin/console oro:migration:load --force -e prod, but it didn't make a difference...
Edit:
Just checked the differences in the var/cache directory of both installations and in fact I see that the VM version has the methods that are missing from the prod one.
I uploaded the working code to the production server and re-ran the platform upgrade, but I'm still running into issues.
If the oro:migration:load command (or oro:platform:update, which actually triggers the migration load) failed the first time, you have to:
fix the errors,
restore from the database dump,
and run the command again.
Otherwise there could be migrations that ended with errors;
on the second run they are not executed again, which can leave the database schema, entity metadata, or entity config in a mess.
Also, the oro:migration:load command is not self-sufficient: there can be a need to warm up some entity configuration after a schema change. Please try running oro:platform:update; even if all the migrations are already executed, it will try to warm up all the caches and could fix the error.
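As a rough sketch of that recovery sequence (the database name and dump file below are placeholders, and the dump must predate the failed upgrade):

# Restore the database from the dump taken before the failed upgrade:
mysql -u root -p oro_db < backup_before_upgrade.sql

# Re-run the migrations:
php bin/console oro:migration:load --force -e prod

# Warm up entity configuration and caches, even if all migrations
# are already applied:
php bin/console oro:platform:update --force -e prod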

Odoo 12 - XML files not triggering server reload with --dev=all

New Odoo user here.
It's been a few days since I started messing around with Odoo (v12). I managed to build a module, including creating models, views, and menus. It's working like a charm, though there is one issue that is really bothering me.
I've read that running odoo-bin with the --dev=all arg -- which requires watchdog, by the way -- is supposed to trigger a server restart whenever .py or .xml files are changed inside one of the addons folders.
The server does restart when I modify Python files, but so far, even after trying for hours, I can't make the same thing work for XML files.
For reference, I'm building the openacademy module from the official documentation, and I'd like for the server to read the views from the XML files instead of the database and reload on change, so I can customize the forms and views and see the result without having to upgrade my module every single time.
--dev=all, unfortunately, doesn't seem to work.
Can anyone help?
Edit: here's the full command I'm using to start up Odoo:
py odoo-bin -c odoo.conf --dev=all
P.S.: I'm running Odoo 12 from source on Windows 10 64-bit.
--dev=all actually works in conjunction with watchdog: if you have the watchdog package installed in the Python environment running Odoo, any change to a .py file in your addons will cause watchdog to notify the running server, and the server will reload automatically. You can see those messages if you have the log level set to info. For view updates, a change in an XML file doesn't reload the server; instead, refreshing the browser automatically updates the view. But if there is any error in the view definitions, Odoo will not apply the changes (I've faced this situation myself). So make sure there are no errors in the XML views, and the update should happen automatically after a page refresh.
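A minimal sketch of that setup, assuming Odoo runs from source as in the question:

# Install watchdog into the same Python environment that runs Odoo:
pip install watchdog

# Start Odoo with dev mode enabled and info-level logging, so the
# reload notifications from watchdog are visible:
py odoo-bin -c odoo.conf --dev=all --log-level=info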
Late to the party, but since I spent a lot of time trying to figure this out on both Mac and Ubuntu, it's worth noting that --dev=xml does not pick up every change in a .xml file. I was modifying menus and seeing nothing and thought it was broken, but after some investigation I discovered that only modifications to ir.ui.view models are picked up and (I think, from looking at the source code) ir.rule models.
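For completeness, the XML feature can also be enabled on its own instead of --dev=all (a sketch using the same command as in the question):

# Read ir.ui.view definitions from the source XML files instead of
# the database, without the other dev features:
py odoo-bin -c odoo.conf --dev=xml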

Entity Framework MVC code first, migration to production server

I've created a project using EF version 6.0, with AutomaticMigrationsEnabled = true;
This worked fine: I was able to deploy to the production server the first time, and it created the desired database tables.
Now, for the second update, I want to get a script out of the migration so that I can run it on the production server, but whatever Package Manager commands I try, they only create empty .sql files.
My second migration is named "201601181549424_Version-1.2.0". I used the following sequence of steps and commands to generate the .sql file.
Added the desired data classes (which will create tables in the database) and MVC views and controllers.
Ran the Package Manager command Add-Migration Version-1.2.0, which created the 201601181549424_Version-1.2.0 file in the Migrations folder.
Then Update-Database, which updated the local database; I checked that everything works fine.
Then Update-Database -Script, which created an empty .sql file. I am looking for a .sql file that contains the creation of the database tables.
Can you please help me understand how I can deploy this to the production database?
Thanks,
I figured it out this way:
Make the desired changes to the model (insert or update the model).
Add-Migration
Update-Database -Script
(This will create the script, which can be stored for QA or production deployment purposes.)
Update-Database
(The above command will apply the model changes to the local database.)
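A sketch of that sequence in the Package Manager Console, with a note on why the earlier attempt produced an empty file (the migration name is the one from this question; $InitialDatabase is built into EF6):

# After changing the model, scaffold the migration:
Add-Migration Version-1.2.0

# Generate the deployment script BEFORE applying the migration locally;
# once Update-Database has run, -Script with no range has nothing
# pending and therefore produces an empty .sql file:
Update-Database -Script

# Now apply the migration to the local database:
Update-Database

# To regenerate a script after the local database is already up to
# date, specify an explicit range ($InitialDatabase scripts everything
# from an empty database):
Update-Database -Script -SourceMigration $InitialDatabase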

Is it safe to delete Selenium.log from my CentOS server?

It seems that I've run out of room on my master node, and I need to clear some space in order to restart my daily tests. Selenium.log is taking up a lot of space, and I'm convinced it's not currently being used. Would it be safe to delete?
Edit: I deleted the file and upon starting a new build Selenium created a new log file. I didn't experience any issues during this new build either.
You don't say what creates the file or where it is, but assuming you can already see the important details of each build in the Jenkins UI (e.g. in the console log or the test results), you shouldn't need to keep any files sitting in the workspace or elsewhere.
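If the file might still be held open by a running process, truncating it is a safer way to reclaim the space than deleting it (a sketch; adjust the path to wherever Selenium.log lives):

# See how much space the log is using:
du -h Selenium.log

# Truncate in place; unlike rm, this frees the space immediately even
# if a process still holds the file handle open:
: > Selenium.log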

BigQuery error in query operation: Project id not found

I am getting a project not found error when trying to run queries with the bq command line tool or the BigQuery browser window.
I've registered the BigQuery API with the project. I've also setup billing.
For bq, I've setup the .bigqueryrc with the numeric project id.
When I try to query, the system response uses the friendly project id, so it seems that BigQuery is aware enough to map numeric ids to friendly ids.
I've used the bq shell to verify that the prompt reflects the right project id.
I can run 'bq ls publicdata:samples' just fine, so I'm assuming authorization really kicks in when querying the data.
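For reference, the relevant .bigqueryrc entry looks roughly like this (the numeric id below is a placeholder):

# ~/.bigqueryrc
project_id = 1234567890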
What's missing or wrong here?
It looks like there is an issue recognizing projects created through App Engine. This is a bug and we're actively working on a fix.
As a workaround, you can use a project created through https://code.google.com/apis/console instead.
In my project I didn't have App Engine enabled. For me it was solved by authenticating again through gcloud:
$ gcloud auth login
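If re-authenticating alone doesn't help, it can also be worth confirming which project is active (the project id below is a placeholder):

$ gcloud config set project my-friendly-project-id
$ bq ls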