I'm having trouble with extended entities - orocommerce

This question is related to "I need help upgrading OroCommerce to 4.1.1".
I'm getting several errors related to extended entities... I believe there must be something wrong with cache building, but I can't find the root cause (nor a solution :( ).
I checked the DB structure on my production server against the VM where everything is working just fine, and I can't see any significant difference (meaning the new fields, such as digitalAsset_id in the oro_attachment_file table or wysiwyg in oro_fallback_localization_val, are there).
I just ran an extra php bin/console oro:migration:load --force -e prod, but it didn't make a difference...
Edit:
Just checked the differences between the var/cache directories of both installations, and in fact I see that the VM version has the methods that are missing from the prod one.
I uploaded the working code to the production server and re-ran the platform upgrade, but I'm still running into issues.

In case the oro:migration:load command (or oro:platform:update, which actually triggers the migration load) failed the first time, you have to:
fix the errors,
restore the database from a dump,
and run the command again.
Otherwise, there could be migrations that ended with errors,
but on the second run they are not executed again, which could lead to a mess in the database schema, entity metadata, or entity config.
Also, the oro:migration:load command is not self-sufficient: there may be a need to warm up some entity configuration after a schema change. Please try running oro:platform:update; even if all the migrations are already executed, it will try to warm up all the caches and could fix the error.
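A minimal sketch of that recovery flow for a PostgreSQL-backed installation (the database name and dump file below are placeholders for your own):

# restore the pre-upgrade database dump (names are placeholders)
psql -U postgres -d oro_db -f backup_before_upgrade.sql
# re-run the full platform update: it loads migrations and warms up caches
php bin/console oro:platform:update --force --env=prod
# if entity metadata still looks stale, clear and rebuild the cache
php bin/console cache:clear --env=prod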

Odoo 12 - XML files not triggering server reload with --dev=all

New Odoo user here.
It's been a few days since I started messing around with Odoo (v12). I managed to build a module, including creating models, views, and menus. It's working like a charm, though there is one issue that is really bothering me.
I've read that running odoo-bin with the --dev=all arg -- which requires watchdog, by the way -- is supposed to trigger a server restart whenever .py or .xml files are changed inside one of the addons folders.
The server does restart when I modify PYTHON files, but so far, even after trying it out for hours, I can't seem to make the same thing work for XML files.
For reference, I'm building the openacademy module from the official documentation, and I'd like for the server to read the views from the XML files instead of the database and reload on change, so I can customize the forms and views and see the result without having to upgrade my module every single time.
--dev=all, unfortunately, doesn't seem to work.
Can anyone help?
Edit: here's the full command I'm using to start up Odoo:
py odoo-bin -c odoo.conf --dev=all
P.S.: I'm running Odoo 12 from source on Windows 10 64-bit.
--dev=all actually works in conjunction with watchdog, so if you have the watchdog package installed in your running Odoo Python environment, any change to a .py file in your addons will cause watchdog to notify the running server, and the server will reload automatically. You can actually see those messages if you have the log level set to info. For view updates, changes in an XML file don't reload the server; instead, a refresh in the browser automatically updates the view. But if there is any error in the view definitions, I have faced situations where Odoo does not pick up the changes. So make sure there are no errors in the XML views, and the update should happen automatically after a page refresh.
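For reference, a minimal sketch of that setup on Windows, matching the command in the question (--log-level=info just makes the reload messages visible):

pip install watchdog
py odoo-bin -c odoo.conf --dev=all --log-level=info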
Late to the party, but as I spent a lot of time trying to figure this out on both Mac and Ubuntu, it's worth noting that --dev xml does not pick up every change in a .xml file. I was modifying menus and seeing nothing, and I thought it was broken, but from some investigation I discovered that the changes only pick up modifications to ir.ui.view records and (I think, from looking at the source code) ir.rule records.

USQL Unit testing with ADL tools for VS 2017 - Error after upgrading to 2.3.4000.x

One of the team members, after upgrading the ADL Tools for VS to version 2.3.4000.x, is getting the error below:
Error : (-1,-1) 'E_CSC_SYSTEM_INTERNAL: Internal error!
The ObjectManager found an invalid number of fixups.
This usually indicates a problem in the Formatter.'
Compile failed!
Tried downgrading back to version 2.3.3000.2; it didn't help much.
If you have encountered a similar issue and found the reason and a resolution, please share it.
After trying a few unsuccessful options, I decided to clean up the files in USQLDataRoot, including the localrunmetadata file and the catalog folder. Still, when I submitted a job to create a database, there was no error, but it didn't create the database.
We had some PowerShell scripts to set up the database and other objects. Running the PowerShell script created the database and procedures, and then we were able to run the tests successfully. One more thing to double-check: make sure the build platform is set to "x64".
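For reference, a sketch of that cleanup step in PowerShell; the DataRoot path is an assumption (verify your actual local run data root in the ADL Tools options in Visual Studio before deleting anything):

# DataRoot path is an assumption - check the ADL Tools options first
Remove-Item "$env:LOCALAPPDATA\USQLDataRoot\localrunmetadata" -Force -ErrorAction SilentlyContinue
Remove-Item "$env:LOCALAPPDATA\USQLDataRoot\catalog" -Recurse -Force -ErrorAction SilentlyContinue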

Delete or reset Gitlab CI builds

Is it possible to delete old builds in Gitlab CI?
I tested a few things and now have about 20 builds that are useless (most of them failed anyway).
It also shows stages that I don't have anymore which kinda clutters the Pipelines page and some of the uploaded artifacts are a bit big.
I wasn't able to find any documentation on this, only that disabling CI in the settings doesn't remove the builds.
Using GitLab 8.10 Community (hosted on GitLab.com)
There is currently no option in the GUI to completely get rid of a build, other than erasing the related data from the build (the Erase option on the build page).
If you have a local installation, you could modify the database directly, but I would advise caution. (I'll put the guide here for completeness' sake.)
Log in to the GitLab database. If you use the default PostgreSQL:
sudo -u gitlab-psql /opt/gitlab/embedded/bin/psql -h /var/opt/gitlab/postgresql -d gitlabhq_production
Check if there is a table ci_builds. In psql: \dt
Delete the builds with normal SQL. For example: DELETE FROM ci_builds WHERE id = 2
(Optional) If you want to clean up the list of commits that triggered a build, you need to modify the ci_commits table as well.
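A sketch of those SQL steps against the GitLab 8.x schema (the ids here are examples only; back up the database before deleting anything):

-- delete one build; id 2 is just an example
DELETE FROM ci_builds WHERE id = 2;
-- optionally remove the commit record that triggered it (hypothetical id)
DELETE FROM ci_commits WHERE id = 42;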

Is it safe to delete Selenium.log from my CentOS server?

It seems that I've run out of room on my master node, and I need to clear some space in order to restart my daily tests. Selenium.log is taking up a lot of space, and I'm convinced it's not currently being used. Would it be safe to delete?
Edit: I deleted the file and upon starting a new build Selenium created a new log file. I didn't experience any issues during this new build either.
You don't say what creates the file or where it is, but assuming you can already see the important details from each build in the Jenkins UI (e.g. in the console log, or in test results etc.), then you shouldn't need to keep any files that are sitting in the workspace or elsewhere.
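If you'd rather not delete the file outright, one common alternative is truncating it in place, which frees the space even if a running process still holds the file open (the path below is a placeholder; locate the log first):

# path is a placeholder - find the real location first
du -h /path/to/Selenium.log
# empty the file without removing it
truncate -s 0 /path/to/Selenium.log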

dnu restore fails on mac

I downloaded Visual Studio Code for Mac today. I tried to create a simple ASP.NET 5 web application following these instructions: https://code.visualstudio.com/Docs/ASPnet5
When I open my web application folder in Visual Studio Code, it says I need to run a restore command.
I ran the dnu restore command just like the instructions say, but it seems to always fail.
I receive different errors every time I run it, but most of them look like this one:
CACHE https://www.nuget.org/api/v2/package/System.Threading/4.0.10-beta-22816
SharpCompress.Common.ArchiveException: Could not find Zip file Directory at the end of the file. File may be corrupted.
Restore failed
There is a stack trace as well, but for brevity's sake I'll omit it for now.
Has anyone experienced this?
Try dnu restore --no-cache.
You may also need to remove previously downloaded files - check ~/.dnx/packages. I removed all files from that folder some time before trying the above. Also, as noted in the comments below, if ~/.dnx/runtimes contains unexpected versions, removing them may also work. Note that the current runtime version can be controlled using dnvm.
I never saw the NullReferenceException, but I was getting the SharpCompress.Common.ArchiveException. I suspect there was a mismatch between what dnu thought the cache state was and the actual cache state (maybe something timed out the first time).
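A sketch of the full cleanup sequence described above, assuming the default ~/.dnx layout:

# clear previously downloaded packages, then restore bypassing the HTTP cache
rm -rf ~/.dnx/packages
dnu restore --no-cache
# if ~/.dnx/runtimes contains unexpected versions, inspect them with dnvm
dnvm list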