Installing a module from a first environment onto a second environment - Odoo 15

I have a question about installing a module from a first environment onto a second environment.
I installed a module in the first environment (production, with Odoo.sh and GitHub access) and updated it to fit my needs. Now I want to put this module on a second environment (production, without Odoo.sh or GitHub access).
I don't know the process for installing this module in the second environment.
I tried taking a backup of the second environment and restoring it in the first environment, but the system displays bugs.

Related

Do I have to rebuild my frontend for production every time I edit it? I'm using Vue

Basically what I have is my frontend (Vue) and my backend (Node.js etc.). Following a guide, I built the frontend for production using npm run build. I got a bunch of files in a build folder I set up in a previous step. These files were then moved to a folder in the backend. It works, but it's more a demo than anything else, and the frontend and backend will have to be modified further as I continue.
I'm just wondering: if and when I edit the frontend more (say, when I add a new page), am I supposed to go through this process again? That is, modify the frontend folder, build it, move the files, and so on.
Thanks.
Yes, definitely.
In a development environment, we use npm run dev or yarn run, which keeps the development server running and updates the browser whenever the code changes. We don't use a final build in development because we change the code so frequently that it would be tedious to make a build after every modification just to check the results.
Production, however, is distinct from the development environment. We deploy only code that is bug-free, fully working, and ready for users. Deploying to production means all changes have been made and the final code is ready to ship. So we make a final production build and deploy it to our server.
So don't rush to deploy to production every time you make a small change. First complete all your changes and test them in the development environment; only when everything works correctly should you create a final production build and deploy it to the server.
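As a rough sketch of the two workflows (assuming a typical Vue CLI setup; your script names and output paths may differ):

# development: hot-reloading dev server, no build step needed
npm run dev

# production: create an optimized build, then copy the output to the backend
npm run build
cp -r build/ ../backend/public/   # adjust paths to your project layout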
I hope this helps.

Prevent expo doing full config load during expo install

We have extra config for our app inside app.config.ts, plus some environment variable validation in order to populate it. As I understand it, expo install reads some of the core Expo config in order to make decisions about library versions; it does not need the full configuration with the extra params. We have no way of detecting expo install vs. a normal build at runtime (it does not set any specific environment variables or anything similar).
For our application, rather than using dotenv at runtime we simply require certain environment variables to exist.
Our local development scripts to start the server use dotenv-cli to populate some environment variables. Our CI builds rely on the environment variables set in CI. For this reason we always validate the required environment variables and don’t pre-populate anything.
We would like either a pre-script hook so we can make the same dotenv call before expo install happens, or a way to detect inside app.config.ts that expo install is running (e.g. via some environment variable) so we don't need to expose the full config.
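For illustration, the kind of wrapper we have in mind would look roughly like this (the package name is just an example; it assumes dotenv-cli is already a dev dependency, as in our local scripts):

# load .env before expo install runs, so app.config.ts validation passes
npx dotenv -e .env -- expo install expo-camera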
Does anyone know how this could be achieved?
FYI this exact question was raised via the expo forums over a week ago, but there does not seem to be enough attention/activity there: https://forums.expo.dev/t/expo-install-does-full-config-load-including-extra-any-way-to-pre-hook-and-set-env-vars-or-detect-expo-install-at-runtime/62123

How to backport Ansible extras module?

For a project I'm working on, I'd love to be able to make use of the maven_artifact module in the Ansible Extras repository.
However, the project uses Ansible stable (currently 1.9.3) and the module is documented as only being available from version 2.0 onwards (which looks to still be in alpha).
What's the best way to "backport" this module to our current Ansible install, across many machines?
Will dropping the "maven_artifact.py" file into the "ansible/modules/extras/packaging/language/" directory on each machine work? Or will the line in the source code:
version_added: "2.0"
prevent it from running due to some sort of compatibility check?
Additionally, how can I tell whether the module relies on features only present in Ansible 2.0 (and therefore is incompatible and won't run on 1.9.3), or whether version 2.0 is simply when it's set to be introduced?
Ansible 2.0 made very minimal changes to the module subsystem, so most 2.0 modules will work fine on 1.9.x (there is no version check). The easiest way to use the module is to copy its source from the GitHub extras repo into a directory called library next to your playbooks. If you keep your Ansible content in a source-control repo of some kind, put the library directory in there too; then every machine where you've checked out your playbook content can run the module, without you needing to copy it around manually.
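A minimal sketch of that approach, assuming the devel branch of the extras repo (the in-repo path comes from the question; adjust as needed):

# from the root of your playbook repo
mkdir -p library
curl -o library/maven_artifact.py \
  https://raw.githubusercontent.com/ansible/ansible-modules-extras/devel/packaging/language/maven_artifact.py
# commit library/ so every checkout of your playbooks can use the module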

How to obtain all versions of KRE?

Problem
I want to use both stable versions of KRE and the bleeding-edge nightly KRE builds. One ASP.NET 5 application may target beta2, while another may need beta4. So I installed both in PowerShell, as found here.
What happened is that the stable kvm installed into C:/Users/derp/.kre and the nightly-build kvm installed into C:/Users/derp/.k.
Worse yet, I can only see this now.
Attempts
I tried kvm install KRE-CLR-x86.1.0.0-beta2 and it failed.
Shall I try moving the packages from the /.kre folder to the /.k folder? This seems hacky and like a really bad idea.
RTFM: I tried to use the install feature, including the -a, but it failed.
I'm doing something the hard way and can't see the obvious.
I searched on here.
I feel that if there is an answer to what I am trying to do above, it is worth having on here for others to find as well. Thank you all for your patience.
ASP.NET 5 is under development and there is no guarantee that changes between different pre-release versions are backward compatible (sorry!).
The /.kre -> /.k rename is not backward compatible, and you cannot have both the old and the new kvm on the PATH simultaneously. However, you can have two versions of kvm on your machine; you will just have to use the full path for at least one of them.
I think the key is your system's PATH environment variable. You have to use two sets of kvm, one for nightly builds and one for the public beta, to download the runtimes and set the correct PATH.
For instance, I have one kvm from the Entity Framework 7 repository, which can download and use beta4 builds. I also have another kvm from the Home repository, which can download and use public beta builds.
You can use either kvm with the upgrade or use command to set the correct PATH, then run your application on the runtime you need. Even Visual Studio 2015 CTP seems to run your projects based on the runtime specified in your PATH: for the time being only beta3 runtimes show up in the project property dialog of VS 2015 CTP, but when I hit Ctrl+F5 my website loads the beta4 runtime and assemblies (I can see the loading in the output window). I think this is because the .k folder comes before the .kre folder in my PATH.
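For concreteness, the two-kvm workflow looks something like this (a sketch only; the clone locations are made up, and you may need to allow script execution in PowerShell):

# stable kvm, cloned from the Home repo, invoked by full path
& "C:\tools\kvm-stable\kvm.ps1" upgrade

# nightly kvm, cloned from the EF7 repo, invoked by full path
& "C:\tools\kvm-nightly\kvm.ps1" install 1.0.0-beta4
& "C:\tools\kvm-nightly\kvm.ps1" use 1.0.0-beta4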
Can you try the following?
kpm install KRE-CLR-x86
It worked for me.

How to automate development environment setup? [closed]

Every time a new developer joins the team, or the computer a developer is using changes, the developer has to do a lot of work to set up the local development environment so the current project will work. As a Scrum team we are trying to automate everything, including deployment and tests, so what I am asking is: is there a tool or a practice to automate local development environment setup?
For example, to set up my environment I first had to install Eclipse, then SVN, Apache, Tomcat, MySQL, and PHP. After that I populated the DB and had to make minor changes in various configuration files, etc. Is there a way to reduce this labor to one click?
There are several options, and sometimes a combination of these is useful:
automated installation
disk imaging
virtualization
source code control
Details on the various options:
Automated Installation: tools for automating installation and configuration of a workstation's various services, tools, and config files:
Puppet has a learning curve but is powerful. You define classes of machines (development box, web server, etc.) and it then does what is necessary to install, configure, and keep the box in the proper state. You asked for one-click, but Puppet by default is zero-click, as it checks your machine periodically to make sure it is still configured as desired. It will detect when a file or mode has been changed, and fix the problem. I currently use this to maintain a handful of RedHat Linux boxes, though it's capable of handling thousands. (Does not support Windows as of 2009-05-08).
Cfengine is another one. I've seen this used successfully at a shop with 70 engineers using RedHat Linux. Its limitations were part of the reason for Puppet.
SmartFrog is another tool for configuring hosts. It does support Windows.
Shell scripts. RightScale has examples of how to configure an Amazon EC2 image using shell scripts.
Install packages. On a Unix box it's possible to do this entirely with packages, and on Windows msi may be an option. For example, RubyWorks provides you with a full Ruby on Rails stack, all by installing one package that in turn installs other packages via dependencies.
Disk Images: Then of course there are also disk imaging tools for storing an image of a configured host such that it can be restored to another host. As with virtualization, this is especially nice for test boxes, since it's easy to restore things to a clean slate. Keeping things continuously up-to-date is still an issue: is it worth making new images just to propagate a configuration file change?
Virtualization is another option, for example making copies of a Xen, VirtualPC, or VMWare image to create new hosts. This is especially useful with test boxes, as no matter what mess a test creates, you can easily restore to a clean, known state. As with disk imaging tools, keeping hosts up-to-date requires more manual steps and vigilance than if an automated install/config tool is used.
Source Code Control: Once you've got the necessary tools installed and configured, doing builds should be a matter of checking out what's needed from a source code repository and building it.
Currently I use a combination of the above to automate the process as follows:
Start with a barebones OS install on a VMWare guest
Run a shell script to install Puppet and retrieve its configs from source code control
Puppet to install tools/components/configs
Check out files from source code control to build and deploy our web application
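A stripped-down sketch of that bootstrap (RedHat-flavoured; the repo URLs, paths, and build command are placeholders):

#!/bin/sh
# 1. install Puppet on the barebones guest
yum install -y puppet
# 2. pull Puppet manifests from source control
svn checkout http://svn.example.com/infra/puppet /etc/puppet
# 3. let Puppet install and configure tools/components/configs
puppet apply /etc/puppet/manifests/site.pp
# 4. check out and build the web application itself
svn checkout http://svn.example.com/projects/webapp ~/webapp
cd ~/webapp && make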
I stumbled across this question and was very surprised that no one has mentioned Vagrant yet.
As Pete TerMaat and others have mentioned, virtualization is a great way to manage and automate development environments. Vagrant basically takes the pain away from setting up these virtual boxes.
Within minutes you can have a completely fresh copy of your favourite Linux distro up and running, and provisioned exactly the same way your production server is.
No more fighting with OSX or Windows to get PHP, MySQL, etc. installed. All software lives and runs inside the virtual machine. You can even SSH in with vagrant ssh. If you make a mistake or break something, just vagrant destroy it, and vagrant up to start over fresh.
Vagrant automatically creates a synced folder to your local file system, meaning you don't need to develop within the virtual machine (i.e. using Vim). Use whatever editor you prefer.
I now create a new "Vagrant box" for almost every project I do. All my settings are saved into the project repository, so it's easy to bring on another team member. They simply have to pull the repo, and run vagrant up, and they are literally ready to go.
This also makes it much easier to handle projects that have different software requirements. Maybe you have some projects that rely on PHP 5.3, but some newer ones that run PHP 5.4. Just install the version you want for that project.
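Day to day it looks roughly like this (assuming the Vagrantfile is committed to the project repo; the repo URL is an example):

git clone http://example.com/project.git && cd project
vagrant up        # build and provision the VM from the Vagrantfile
vagrant ssh       # shell into it
vagrant destroy   # broke something? throw the VM away...
vagrant up        # ...and start over fresh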
Check it out!
One important point is to set up your projects in source control such that you can immediately build, deploy and run after checkout.
That means you should also check in helper infrastructure, such as Makefiles, Ant buildfiles, etc., and settings for the tools, such as IDE project files.
That should take care of the setup hassle for individual projects.
For the basic machine setup, you could use a standard image. Another option is to use your platform's tools to automate installation. Under Linux, you could create a meta-package that depends on all the packages you need. Under Windows, a similar thing should be possible using MSI or the like.
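Under Debian/Ubuntu, for instance, the equivs tool can build such a meta-package; a sketch (the package name and dependency list are invented):

# describe the meta-package in a small control file
cat > acme-dev-env.ctl <<'EOF'
Package: acme-dev-env
Version: 1.0
Depends: subversion, eclipse, apache2, mysql-server, php5
Description: everything an Acme development workstation needs
EOF
equivs-build acme-dev-env.ctl          # produces acme-dev-env_1.0_all.deb
sudo dpkg -i acme-dev-env_1.0_all.deb
sudo apt-get -f install                # pull in the declared dependencies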
Edit:
Ideally, instead of checking in helper infrastructure, you check in the information that allows the build to generate the helper infrastructure. This is the approach taken by e.g. the GNU build system (autotools etc.), or by Maven. This is even more elegant, because you can (theoretically) generate infrastructure for any (supported) build environment, thus you are not bound to e.g. one specific IDE, and settings in the helper infrastructure (paths etc.) don't need to duplicate the main project settings.
However, this is also a more complex approach, so if you can't get it to work, I believe checking in stuff like IDE files directly is acceptable.
I like to use Virtual PC or VMware to virtualize the development environment. This provides a standard "dev environment" that can be shared among developers. You don't have to worry about software a user adds to their system conflicting with your development environment. It also gives me a way to work on two projects whose development environments can't both be on one system (using two different versions of a core technology).
Use puppet to configure both your development and production environment. Using a top-notch automation system is the only way to scale your ops.
There's always the option of using virtual machines (see e.g. VMWare Player). Create one environment and copy it over for each new employee with minimal configuration needed.
At a prior place we had everything (and I mean EVERYTHING) in SCM (ClearCase, then SVN). When a new developer came in, they installed ClearCase|SVN and sucked down the repository. This also handles the case where you need to update a particular lib/tool, as you can just have the dev teams update their environment.
We used two repo's for this so code and tools/config lived in separate places.
I highly recommend Blueprint from DevStructure. It's open-source and your use case is actually the exact reason we originally wrote the software. Our goals have somewhat changed, but it still is the perfect tool for what you are describing. In short, you can create reusable server configs - dead simple configuration management. I hope this helps!
https://github.com/devstructure/blueprint (Blueprint # Github)
I've been thinking about this myself. There are some other technologies that you could throw into the mix. Here's what I'm currently setting up:
PXE based pre-seeded installation images (Debian Squeeze). You can start up a bare-metal machine (or new virtual appliance) and select the image from the PXE boot menu. This has the major advantage of being able to install your environment on physical machines (in addition to virtual appliances).
Someone already mentioned Puppet. I use CFEngine but it's a similar deal. Essentially your configuration is documented and centralized in policy files which are continually enforced by an agent on the client.
If you don't want a rigid environment (i.e. developers may choose a combination of tool-sets), you can roll your own .deb packages so new devs can type sudo apt-get install acmecorp-eclipse-env or sudo apt-get install acmecorp-intellij-env, for example.
Slightly off-topic, but if you run a Debian based environment (i.e. Ubuntu), consider installing apt-cacher (package proxy). In addition to saving bandwidth, it will make your installations much faster (since packages are cached on your local network).
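Pointing client machines at the cache is then a one-liner (this assumes apt-cacher runs on a host called aptcache, on its default port 3142):

# on each developer machine, route apt through the local cache
echo 'Acquire::http::Proxy "http://aptcache:3142";' | sudo tee /etc/apt/apt.conf.d/01proxy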
If you're using OSX and working with Rails, I'd suggest either:
https://github.com/platform45/let-there-be-light
https://github.com/thoughtbot/laptop
If you use machines in a standard configuration, you can image the disk with a fresh perfectly configured install -- that's a very popular approach in many corporations (and not just for developers, either). If you need separately configured OS's, you can tar-bz2 all the added and changed files once a configured OS is turned into your desired setup, and just untar it as root to make your desired environment from scratch.
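A sketch of that tar approach (which paths to capture depends entirely on your setup; these are examples):

# on the configured reference machine: capture the added/changed files
tar cjf dev-env.tar.bz2 /etc/apache2 /etc/mysql /opt/tools
# on a fresh install: replay them from the filesystem root, as root
cd / && sudo tar xjf /path/to/dev-env.tar.bz2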
If you're using a Linux flavor, you've probably got a package management system: think .rpm for Fedora/RedHat, or .deb for Ubuntu/Debian. Many of the things you describe already have packages available: SVN, Eclipse, etc. You could roll your own packages for company-specific software, create a repository (perhaps only available on the local network), and then your setup could be reduced to a single bash script which adds the company repo to /etc/apt/sources.list (Debian/Ubuntu) and then calls a command like:
/home/newhire$ apt-get update && apt-get install <some complete package list>
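The repo-adding part of that script (the repo URL is a placeholder) would itself be just:

# register the company package repository with apt
echo 'deb http://packages.example.local/debian stable main' | sudo tee -a /etc/apt/sources.list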
You could use Buildbot to automate regular builds for the company packages that change often.
Try out DevScript at http://nsnihalsahu.github.io/devscript.
It's one command, like devscript lamp or devscript laravel or devscript django. Each takes around a few minutes, depending on the speed of your internet connection.