So I have managed it: I can clone Mercurial repositories remotely over HTTP from my Windows Server 2003 machine, using that machine's IP address. I did deactivate IIS 6 and am using Apache 2.2.x now. But not everything works yet...darn! Here's the thing:
Cloning goes smoothly! But when I want to push my changes back to the original repository I get the message "cannot lock static-http repository". Several explanations on the internet say that Mercurial was not designed to push over HTTP connections. Still, the Mercurial website says something about configuring an hgrc file.
There's also the possibility of configuring Apache to serve over HTTPS (SSL). For this you have to load the module that enables OpenSSL and generate keys.
Configuring the hgrc file
Just add "push_ssl = false" under the [web] line. But where to put this file when pushing your changes back?! Because I placed it in the root of the server, in the ".hg" directory, nothing works.
Using SSL/HTTPS with Apache
When I try to access 'https://myipaddress' it fails, showing a Dutch message that translates to something like "the server is taking too long to respond". Trying to push also gives a Dutch error message that means about the same. It cannot connect to my server over HTTPS, although I followed the steps in this blog exactly.
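For reference, what I am trying to get working is essentially the stock Apache 2.2 SSL setup; a minimal sketch (certificate file names and paths are placeholders, not my actual configuration):
LoadModule ssl_module modules/mod_ssl.so
Listen 443
<VirtualHost _default_:443>
    DocumentRoot "C:/repos"
    ServerName myipaddress
    SSLEngine on
    SSLCertificateFile "conf/server.crt"
    SSLCertificateKeyFile "conf/server.key"
</VirtualHost>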
I don't care which of the above solutions ends up working for me; so far none of them does. So please, can anyone help me with one of the solutions above? Pick the easiest! Help will be greatly appreciated, and not only by me.
Summary
-Windows Server 2003
-Apache 2.2 with OpenSSL
-Mercurial 1.8.2
-I can clone, but not push!
Thank you!
Maarten Baar(s)
It seems like you might have Apache configured incorrectly for what you want it to do. Based on your question, it sounds like you have a path (maybe the root of the server) pointing directly at the repository you want to serve.
Mercurial comes with a script for this exact purpose; in the latest version it is hgweb.cgi. There are reasonably good instructions for setting it up on the Mercurial site. It should allow both cloning and pushing. You will need push_ssl = false if you will not be configuring HTTPS, and also an allow_push line, which lets certain users, or all users (*), push to the repository. All of that should be part of the setup docs.
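A minimal sketch of what that can look like on Windows (all paths here are illustrative, not taken from your setup):
In httpd.conf:
ScriptAlias /hg "C:/hg/hgweb.cgi"
In C:/hg/hgweb.config (the file the config variable inside hgweb.cgi points to):
[paths]
/myrepo = C:/repos/myrepo
In the repository's own C:/repos/myrepo/.hg/hgrc:
[web]
push_ssl = false
allow_push = *
With that in place you clone and push against http://myipaddress/hg/myrepo instead of a static path, and the "cannot lock static-http repository" error should go away, because hgweb speaks the real push protocol.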
Related
This is my first post under the Apache tag, so I'm not sure if I have posted it in the correct spot. Apologies if I haven't.
We recently had an audit done on our Apache server. It's running on a Windows Server 2012 R2, and I installed Apache 2.4.27 through WAMP.
The results from the Audit are fairly specific, but I don't know where to go in the Config file to fix these. My IMIT department has gone through a number of changes and we no longer have someone who can help me, so I'm stuck.
The three areas I need to correct are:
1) Missing security headers. Recommendation: Implement HTTP security headers in the web applications to prevent exploitation of vulnerabilities.
2) Browsable directories. Recommendation: Make sure that browsable directories do not leak confidential information or give access to sensitive resources. Additionally, use access restrictions or disable directory indexing for any that do.
3) The remote web server supports the TRACE and/or TRACK methods. TRACE and TRACK are HTTP methods that are used to debug web server connections. Recommendation: Disable these methods.
I have looked in the config and in various documentation online but the Windows install for Apache seems to be unique, and I don't want to risk screwing up something that breaks the install.
Any ideas would be greatly appreciated.
Chris
Find the httpd.conf file. It should be in the conf folder in the location where Apache is installed, for example:
C:/Apache/Apache/conf/httpd.conf
If you're not sure where that is, open Task Manager, find httpd.exe and check its properties.
Then add the required configuration there.
Check out this helpful GitHub config for reference:
https://github.com/h5bp/server-configs-apache/blob/master/dist/.htaccess
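For the three audit items specifically, directives along these lines in httpd.conf are a reasonable starting point (the header values and the directory path are examples to adapt to your applications and WAMP DocumentRoot; mod_headers must be loaded for the Header directive):
LoadModule headers_module modules/mod_headers.so
Header always set X-Frame-Options "SAMEORIGIN"
Header always set X-Content-Type-Options "nosniff"
Header always set X-XSS-Protection "1; mode=block"
<Directory "C:/wamp/www">
    Options -Indexes
</Directory>
TraceEnable off
Options -Indexes turns off directory listings, and TraceEnable off disables TRACE (TRACK is an IIS-specific method that Apache does not implement, so the TRACE setting is the relevant one).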
You can check your configuration files for syntax errors without starting the server by using apachectl configtest or the -t command-line option (httpd.exe -t on Windows).
We tried to upgrade from Archiva 1.3.6 to 2.1.1, but suddenly the remote repositories (including proxy connectors) stopped working. The remote repository view shows error marks in the "Remote check" column, but no error message is shown.
Is there a way to find out what is going on?
We are using a proxy; we tried with the proxy activated and deactivated. I even installed Archiva locally on my machine with a fresh database, but still no success.
(How does this remote check even work when the proxy is activated/deactivated in the proxy connectors?)
Eclipse (with the newest m2e) says Missing artifact junit:junit:jar:3.8.9. It fails so fast that I don't think Archiva is even trying to reach the central repository.
The logs on the Archiva side are empty.
Does anybody have some hints, or the same problem? I think I will try it at home tonight to see if it is a network issue.
Thanks in advance for any tips!
Update
It really seems that the proxy connector does not work, since the internal repository is empty: http://localhost:8080/archiva/repository/internal/ only shows .indexer
Update 2
The proxy configuration seems bugged in Archiva 2.1.1. I can see the same behaviour as here: Mailing List
A JIRA task for this would be nice.
Does anybody know a workaround to set the proxy for a proxy connector? Or is there a possibility to set a global proxy via a settings file?
Update 3
Really seems like a bug in Archiva. I sent a mail to the mailing list. Hopefully this gets fixed soon, because it is a blocker for every user behind a proxy.
I won't delete this question, for documentation purposes, in case someone has the same problem. The issue can be found in JIRA here
I also had this problem and the simple solution was to change the proxy protocol from "http" to "https".
I also had the same problem. At first glance the solution given by Christian Quast seemed to work, but it didn't solve the problem. I eventually used a workaround via JVM proxy settings:
-Dhttp.proxyHost=[your_proxy_address]
-Dhttp.proxyPort=[your_proxy_port]
-Dhttp.proxyUser=[your_proxy_user_name]
-Dhttp.proxyPassword=[your_proxy_user_password]
-Dhttp.nonProxyHosts=localhost|127.0.0.1|::0|[any_other_hosts_not_to_use_proxy]
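If you run the standalone Archiva distribution, which starts under the Java Service Wrapper, one place these flags can go is conf/wrapper.conf as additional JVM arguments. The index numbers below are placeholders (continue the sequence of the wrapper.java.additional.N entries already in the file), and the host/port values are of course your own:
wrapper.java.additional.10=-Dhttp.proxyHost=proxy.example.com
wrapper.java.additional.11=-Dhttp.proxyPort=3128
wrapper.java.additional.12=-Dhttp.nonProxyHosts=localhost|127.0.0.1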
Update
I know it may sound weird, but with the settings above the error/warning icon on "Remote Check" may still appear. If you add the "network proxy" (mine uses the https protocol) to your remote repository, the error/warning icon is still there; but editing the remote repository again and removing its "network proxy" shows the OK/sun icon.
In my case <networkProxy> under conf\settings.xml gets updated correctly, including the port information (probably because my port is not the default 8080), but the remote repository connection is still failing.
Also, changing the proxy protocol to https did not help.
I know the proxy settings are right because I use the same ones in Maven's .m2\settings.xml.
Fortunately I am only evaluating open-source repository management tools. I started with Archiva as it is by Apache and we use Maven in our project. I would have moved ahead with it if this critical issue had a fix or workaround. I guess I will have to take a shot at Nexus.
Exactly the same problem here. I can't vote on your bug report because I have no JIRA account.
As far as I can tell, there seems to be a problem with the configuration file ~/.m2/archiva.xml: the proxy is set without port information.
Hopefully this bug will be fixed as soon as possible.
Extending João Ferreira's reply, to access repositories with https URLs (such as Maven Central), you will also need:
-Dhttps.proxyHost=[your_proxy_host]
-Dhttps.proxyPort=[your_proxy_port]
I downloaded the latest version of Odoo (OpenERP v8) from GitHub. Installed on my localhost (my PC) it works fine. But whenever I try to install it on my remote server, the database is created but not loaded, and I get a blank page at my-remote-ip:8072/web (I launched it with openerp-gevent along with my config file, but the same happens with the openerp-server command too).
When I check the log file, the process is stuck at:
.... INFO my-db-username openerp.addons.base.ir.ir_http: Generating routing map
The last request processed by the system is:
'werkzeug.request': http://my-remote-ip:8072/web/js/web.assets_backend/77f77e2' [GET]>
This request is not served and leads to a blank page. I'm on Ubuntu 14.04.
I wonder if there is protection in the code that prevents Odoo from being used from a remote server.
Please, can someone help me get past this issue?
Use a different DB for your remote server, not the same one as on localhost.
Try to run the remote server using this command:
./openerp-server --xmlrpc-port=8072
You can change the port number.
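If you prefer a config file to command-line flags, a minimal sketch using standard Odoo 8 option names (database name and credentials are placeholders) would be:
[options]
db_host = localhost
db_user = odoo
db_password = odoo
# use a database name different from the one on your local machine
db_name = odoo_remote
xmlrpc_port = 8072
and then start the server with ./openerp-server -c /path/to/openerp-server.conf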
Hope this will help... :)
I've been testing in virtual machines recently. IMHO, check the PostgreSQL database, and if all is working fine there, then check the Python dependencies; perhaps you missed some?
I've used this quick guide with success for my Ubuntu 14.04 VM with Odoo 8.
Thanks, but your answer doesn't address the problem. As I said, it works well on a local installation, and I know the installation process quite well.
Now, try to install Odoo on a remote server using your IP address (which is my case, as I haven't assigned a domain name to my VPS yet). You will notice that the installation process does not run to completion and ends at a blank screen. I need feedback from people who have tried the same operation. Again, thanks for all your valuable contributions to this issue.
Recently I have been plagued by an error when committing to a single SVN repo using TortoiseSVN (1.8.7.25475) or AnkhSVN (2.5.12471.17):
Error running context: The server sent an improper HTTP response
Here is a screenshot of the error in TortoiseSVN:
The pixels differ of course, but the error is the same in AnkhSVN.
This only seems to affect attempts to commit modifications, not additions or deletions; and I can commit mods to several other SVN repos on the same server just fine.
Since my teammates continue to commit mods to the repo in question and the issue has only struck my commits to that repo, I tried committing simple mods after a fresh checkout of the repo: a few one-mod-at-a-time commits worked, but then...same error.
I also searched for, reviewed, and tried some possible solutions (e.g. in a thread on the TortoiseSVN forums to which Stefan Küng replied) - a registry tweak (deleting HKEY_CURRENT_USER\Software\Tigris.org - after exporting it for backup of course), checking my global properties, and ensuring that I am not using a proxy. Same error.
Finally, I tried both repairing and downgrading TortoiseSVN. Same error.
Has anyone else encountered this error under similar circumstances and found a solution to it?
Note that some related search results mention tweaking httpd.conf or other aspects of the SVN server, but server tweaks seem inappropriate to me. Again, my teammates continue to commit mods to the same repo using the same version of TortoiseSVN, the same OS (Win 7 Pro 64-bit) etcetera. Maybe I have missed something on the server that could just happen to affect me, though.
Upgrade your Subversion client to the latest version.
Outdated answer:
ON THE CLIENT MACHINE! Open %APPDATA%\Subversion\servers in a text editor and add the line http-bulk-updates = yes, save the file and see if it helps.
If it helps, you'd better configure the Apache HTTP Server's httpd.conf with the SVNAllowBulkUpdates Prefer directive so that all Subversion 1.8 clients can connect without errors.
If more people than just you get this error in your organization and adjusting the server's configuration is still unacceptable, you can set http-bulk-updates = yes via the Windows Registry, so rolling it out to all affected machines can be done via an AD Group Policy.
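To make both variants concrete (the /svn location and repository path on the server side are only examples):
On the client, in %APPDATA%\Subversion\servers:
[global]
http-bulk-updates = yes
On the server, inside the repository's location block in httpd.conf:
<Location /svn>
    DAV svn
    SVNParentPath /var/svn/repos
    SVNAllowBulkUpdates Prefer
</Location>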
Read more info in Apache Subversion 1.8 Release Notes.
P.S.: faulty network hardware / a firewall / antivirus is still the root cause here. The above is just a workaround to revert to the behavior of Subversion 1.7 and older clients with the neon network library. BTW, I guess the installed antivirus is NOD32 or BitDefender.
In my case it was a problem with nginx's gzip (I run an SVN Edge server behind nginx).
I disabled gzip and everything started working.
I installed Subversion on a root server running CentOS 6. It took me a while, but now I can access the repository using Chrome. I can add files (svn import at the command line), but only when specifying a file:/// path for the destination; https:// gives me "svn: The project archive was moved permanently to [...]; please relocate". I didn't find a single answer that helps with that particular error message, so I don't even know what it means or what triggers it.
On my client I want to use UEStudio (UltraEdit Studio), which has built-in support for Subversion. When trying to do a checkout in UEStudio using the account I created when installing Subversion on the server, it tells me "unable to connect to a repository at URL [...]" and also asks for a password. I saved the username and password in UEStudio and can log in with the exact same credentials in Chrome. The URL at which UEStudio can't find a repository is the same one I use to browse my repository in Chrome. I'm puzzled!
So I need help setting up Subversion and UEStudio so they finally work together. I cannot offer more details because I'm not sure which ones are necessary. I have already spent a couple of hours trying to solve this, so I'm not sure what is relevant any more.
Please feel free to ask for additional details if needed, I'm happy to help!
This Stack Overflow discussion, and pointing UEStudio to the x86 version of the Subversion client binaries (I had used x64 so far), helped; it works now! I also tried UEStudio again and it works as well! So the problem was that at checkout I didn't specify a project but the parent directory (the repository itself?), as well as pointing UEStudio to the x64 binaries.
Thank you for pointing me in the right direction, Robert!! :)