I downloaded the latest version of Odoo (OpenERP v8) from GitHub. Installed on my localhost (my PC) it works fine. But whenever I try to install it on my remote server, the database is created but not loaded, and I get a blank page at my-remote-ip:8072/web (I launched it with openerp-gevent and my config file, but the same happens with the openerp-server command too).
When I check the log file, the process is stuck at:
.... INFO my-db-username openerp.addons.base.ir.ir_http: Generating routing map
The last request processed by the system is:
'werkzeug.request': http://my-remote-ip:8072/web/js/web.assets_backend/77f77e2' [GET]>
This request is never served and leads to a blank page. I'm on Ubuntu 14.04.
I wonder if there is protection in the code that prevents Odoo from being used on a remote server.
Can someone please help me get past this issue?
Use a different DB for your remote server, not the same one as on localhost.
Try running the remote server using this command:
./openerp-server --xmlrpc-port=8072
You can change the port number.
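Equivalently, the port (and the database) can be set in the config file passed with -c. A minimal sketch, assuming the standard openerp-server.conf option names; the DB name is a placeholder:

[options]
; use a different DB than the one on localhost
db_name = my_remote_db
xmlrpc_port = 8072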
Hope this will help... :)
I've been testing in virtual machines recently. IMHO, check the PostgreSQL database, and if all is working fine there, check the Python dependencies; perhaps you missed some?
I've used this quick guide with success for my Ubuntu 14.04 VM with Odoo 8.
Thanks, but your answer doesn't address the problem. As I said, it works well on a local installation, and I know the installation process quite well.
Now try to install Odoo on a remote server using your IP address (which is my case, as I haven't given a domain name to my VPS yet). You will notice that the installation process does not run to completion and ends at a blank screen. I need feedback from people who have tried the same operation. Again, thanks for all your valuable contributions to this issue.
I have set up a Vagrant box with Ubuntu 12.04 and Apache2 (all very vanilla, as per Vagrant's tutorial). I've been testing it for web development and I stumbled across a weird issue (not sure if it's a bug or a feature):
I have set up a synced folder between my machine and the VM. Apache has been serving the files mostly well, except (up to now) for a JSON file I'm using.
If I edit it locally, it seemingly syncs to the VM folder; both copies are the same.
However, if I XHR it from the browser after modifying it, I still get the previously served version of the file.
At first, I thought the browser had it cached, but after trying two different browsers (Chrome/Chromium and Firefox) and clearing their respective caches, the issue remained.
I finally managed to get around it by reloading (vagrant reload) the VM.
What I was wondering is whether this is a bug or a feature, and how I can work around it. Can Apache be configured not to cache server-side for a specific folder/file/file type?
Vagrant uses the previous settings until you provision again, so after every change in Vagrant, run a provision to see the change reflected. There is no Apache2 cache problem.
For that, use the command:
vagrant reload vmname --provision
If your VM name is default, then use:
vagrant reload default --provision
It will reboot the Vagrant VM and apply the changes to it. After provisioning you will be able to see the changes.
Finally figured it out. This relates to an issue that occurs with both Apache and Nginx: the sendfile option in the server configuration.
Basically, a new file wasn't being sent/updated client-side even when it had been changed server-side by Vagrant's sync mechanism.
Check this answer for a solution: here.
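For reference, the directives in question look like this. A minimal sketch; they go in the main server or vhost configuration, followed by a server restart:

# Apache (httpd.conf or the vhost config)
EnableSendfile Off

# Nginx equivalent (nginx.conf, http or server block)
sendfile off;

With sendfile enabled, the server delegates file transmission to the kernel, which can serve stale content when files change underneath it in a VirtualBox synced folder; disabling it forces the server to read the file itself.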
I am running WordPress on a CentOS server with Apache and MySQL. It was running smoothly a couple of days ago, but when I try to install any plugin or update WordPress, it fails with the exact message "Unable to locate WordPress root directory".
Any suggestions are welcome because I need to install a few plugins.
This error can be caused by the wrong FTP username being given (not an incorrect password – that would be trapped nicely by WordPress – but the username).
I guess this isn't something that will happen often – moving a server and so changing the FTP username – but I'm glad I thought to read around before spending hours diagnosing the server.
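If a server move changed the credentials, one way to rule this out is to pin them in wp-config.php using WordPress's standard FTP constants. A minimal sketch; all values are placeholders:

// wp-config.php
define('FTP_HOST', 'ftp.example.com'); // placeholder: the new server's FTP host
define('FTP_USER', 'your-username');   // placeholder: the username that changed
define('FTP_PASS', 'your-password');   // placeholder

If PHP can write to the WordPress directory directly, define('FS_METHOD', 'direct'); skips the FTP prompt entirely.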
This could be because of the web host you are using. If you are hosting it on a free web host, it could be somewhat unstable. If you really want to test WordPress out to see how it behaves when it works correctly, try using WAMP:
http://www.wampserver.com/en/
I'm in a very awkward situation, and if I don't resolve it my project will be rejected.
I'm doing a project for a company with strict network security, so I can't publish the project online or on a local server.
My project uses PHP and MySQL via the USBWebserver program (a portable offline Apache server).
Now the problem:
The project has to live on a network share, for example: \\server1\project
To open the project on my computer I access that path and run usbwebserver.exe. This works perfectly for one user, but if I try to access the same project from another computer while the first user has the program running, Apache does not start. I think this is because the Apache port is already in use (maybe I'm wrong).
My question is: is there any solution so that when other users on the network click usbwebserver.exe, Apache runs for them too? I have to use the same DB.
If there is another program that would let everyone work on the same Apache and the same DB offline, I would also appreciate that!
I'm thinking along these lines, if it's possible:
Install Apache on each machine and call MySQL from the server folder, which doesn't run Apache.
Is that possible, and if yes, how can it be done?
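One way to realize that last idea: each user runs Apache/PHP locally, and only MySQL is shared. On the PHP side this just means connecting to the machine hosting the database. A minimal sketch; the host name, credentials, and database name are placeholders:

<?php
// Local PHP connecting to a MySQL server elsewhere on the LAN.
// 'server1', the credentials, and the DB name are placeholders.
$mysqli = new mysqli('server1', 'db_user', 'db_pass', 'project_db');
if ($mysqli->connect_error) {
    die('Connection failed: ' . $mysqli->connect_error);
}
echo 'Connected to the shared database.';

For this to work, the machine hosting MySQL must accept network connections (check bind-address in my.cnf) and the MySQL user must be granted access from remote hosts.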
Does anyone have a solution to this? Running RHEL 5.6 with Apache httpd 2.2.3-65.el5_8, I get this error when trying to start the web server:
httpd: Syntax error on line 445 of /etc/httpd/conf/httpd.conf: Syntax error on line 2 of /etc/httpd/conf/mod_jk.conf: Cannot load /data/cf10/config/wsconfig/1/mod_jk.so into server: /data/cf10/config/wsconfig/1/mod_jk.so: undefined symbol: ap_get_server_description
I've looked all over Google, and there are some recommendations to compile my own connector, but I need the one from Adobe for CF10. Also, the Adobe site lists CF10 compatibility with Apache HTTPD 2.2.21; with Red Hat Enterprise Linux the version number doesn't move up, as fixes are backported in the distribution repo. Any help would be awesome.
We are 50 days from going live with CF10 (or planning to), and could really use some help getting this issue resolved.
In response to one of the posters here: I have indeed verified that I'm using the x64 connector on my x64 OS based system.
Response from Adobe with the SOLUTION!
Here's the response and resolution: You may download the connector from the following “RHEL_mod_jk.zip” web-link at:
http://helpx.adobe.com/coldfusion/kb/rhel-connector-configuration.html
Please note that you may proceed with the installation, choosing not to configure the web server initially. Once CF is installed, you may create the connector using the wsconfig tool at:
\ColdFusion10\cfusion\runtime\bin
Find the instructions at http://help.adobe.com/en_US/ColdFusion/10.0/Admin/WSc3ff6d0ea77859461172e0811cbf364104-7fd9.html
Once the connector is in place, navigate to the \ColdFusion10\config\wsconfig\ folder, replace the mod_jk.so file with the downloaded copy, and restart Apache.
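In practice that last step is just a file swap and a restart. A sketch, assuming the connector lives under /data/cf10/config/wsconfig/1/ as in the error message above; the download location is a placeholder:

# unpack the connector downloaded from the Adobe KB article
unzip RHEL_mod_jk.zip
# swap in the RHEL-compatible mod_jk.so (path taken from the error message)
cp mod_jk.so /data/cf10/config/wsconfig/1/mod_jk.so
# restart Apache on RHEL 5
service httpd restart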
I wanted to add to the discussion my experience with CF install issues. I've bumped up against two common problems when installing CF, so consistently that I wrote a short how-to blog post so I wouldn't forget the next time I needed to deploy another server. While my blog post addresses CF 10 and CentOS 6.4, this method has always worked as far back as I can remember (e.g. CF 8):
http://www.greenvalleyconsulting.org/2013/05/16/installation-problem-coldfusion-10-and-centos-6-x/
So I have managed part of it. I can clone Mercurial repositories remotely over HTTP from my Windows Server 2003 machine, using that machine's IP address. I did deactivate IIS 6 and am using Apache 2.2.x now. But not everything works yet... darn! Here's the thing:
Cloning goes smoothly! But when I want to push my changes to the original repository, I get the message "cannot lock static-http repository". Online I've read several explanations that Mercurial wasn't designed to push over static HTTP. Still, the Mercurial website says something about configuring an hgrc file.
There's also the possibility of configuring Apache to serve over HTTPS (SSL). For this you have to load the OpenSSL module and generate keys.
Configuring the hgrc file
Just add "push_ssl = false" under the [web] line. But where does this file go when pushing your changes back? I placed it in the root of the server, in the ".hg" directory, and nothing works.
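For reference, a minimal sketch of that hgrc; assuming the repository is served through hgweb (not as plain static files, where pushing can't work), it goes in the served repository's .hg/hgrc:

[web]
push_ssl = false
allow_push = *

allow_push = * lets anyone push; the answer below covers the hgweb setup itself.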
Using SSL/HTTPS with Apache
When I try to access 'https://myipaddress' it fails, displaying a Dutch message that means roughly "server is taking too long to respond". Trying to push also gives a Dutch error that means about the same. It cannot connect to my server via HTTPS, although I followed the steps at this blog exactly.
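For reference, the Apache side of such an HTTPS setup boils down to something like this. A sketch for Apache 2.2 on Windows; the certificate and key paths are placeholders generated with OpenSSL:

# httpd.conf
LoadModule ssl_module modules/mod_ssl.so
Listen 443

<VirtualHost _default_:443>
    SSLEngine on
    SSLCertificateFile "conf/server.crt"
    SSLCertificateKeyFile "conf/server.key"
</VirtualHost>

A "server taking too long to respond" message usually means the connection never reaches Apache at all, so it is worth checking that port 443 is open in the Windows firewall before digging into the SSL configuration.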
I don't care which of the above solutions ends up working for me; so far none of them do. So please, can anyone help me with one of the solutions above? Pick the easiest! Help will be greatly appreciated, and not only by me.
Summary
- Windows Server 2003
- Apache 2.2 with OpenSSL
- Mercurial 1.8.2
- I can clone, but not push!
Thank you!
Maarten Baar(s)
It seems like you might have Apache configured incorrectly for what you want it to do. Based on your question, it sounds like you have a path (maybe the root of the server) pointing directly at the repository you want to serve.
Mercurial comes with a script for this exact purpose; in the latest version it is hgweb.cgi. There are reasonably good instructions for setting it up on the Mercurial site. It should allow both cloning and pushing. You will need push_ssl = false if you will not be configuring HTTPS, and also an allow_push line, which lets certain users, or all (*), push to the repository. But all that should be part of the setup docs.
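A minimal sketch of the Apache 2.2 side, assuming hgweb.cgi has been copied to C:/hgweb (a placeholder path) and edited to point at a config file listing your repositories:

# httpd.conf: serve the repository through hgweb.cgi instead of static files
ScriptAlias /hg "C:/hgweb/hgweb.cgi"
<Directory "C:/hgweb">
    Options +ExecCGI
    AddHandler cgi-script .cgi
    Order allow,deny
    Allow from all
</Directory>

Cloning and pushing then go through http://myipaddress/hg/... rather than the static file path, which is what allows Mercurial to take the lock it was complaining about.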