I installed Subversion on a root server running CentOS 6. It took me a while, but now I can access the repository using Chrome. I can add files (svn import on the command line), but only when specifying a file:/// path as the destination; an https:// URL gives me "svn: The project archive was moved permanently to [...]; please relocate". I didn't find a single answer that helps with that particular error message, so I don't even know what it means or what triggers it.
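For reference, this is roughly the kind of import I'm running; the paths and repository name below are placeholders, not my real layout.
svn import ./myproject file:///var/svn/repos/myproject -m "initial import"      # works
svn import ./myproject https://myserver/svn/repos/myproject -m "initial import" # fails with the relocate error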
On my client I want to use UEStudio (UltraEdit Studio), which has built-in support for Subversion. When I try to do a checkout in UEStudio using the account I created when installing Subversion on the server, it tells me "unable to connect to a repository at URL [...]" and also asks for a password. I saved the username and password in UEStudio and can log in with the exact same credentials in Chrome. The URL at which UEStudio cannot find a repository is the same one I use to browse my repository in Chrome. I'm puzzled!
So I need help setting up Subversion and UEStudio so they finally work together. I cannot offer more details because I'm not sure which ones are relevant; I've already spent a couple of hours trying to solve this, so I'm no longer sure what counts.
Please feel free to ask for additional details if needed, I'm happy to help!
This Stack Overflow discussion and pointing UEStudio to the x86 version of the Subversion client binaries (I had used x64 so far) helped; it works now! I also tried UEStudio again and it works as well! So the problem was that I didn't specify a project at checkout but the parent directory (the repository itself?), and that I had given UEStudio the x64 binaries.
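To illustrate with placeholder names (my real host and project differ, and a typical trunk layout is assumed), the checkout URL now points at a project path rather than the repository root:
svn checkout https://myserver/svn/repos/myproject/trunk myproject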
Thank you for pointing me in the right direction, Robert!! :)
I've been struggling for a couple of days with this problem, but can't seem to fix it. I think I'm almost there... but not quite :(
This is where I am at.
I'm on a headless Debian server, running Virtualmin/Webmin for creating my domains, users, etc. I don't know if this will mess things up, but I'm happy to modify the config files manually (via Webmin or via SSH/vim).
I am attempting to run Fossil as a CGI service under Apache.
It's an internal site, named homeserver.net. I can reach the default pages just fine, and add and create links etc. as I want to.
Please note that the solution to my problem is at the end of the question.
The files are located on disk at the following path, which tallies with my Apache document root:
/home/homeserver/www
I would like to run Fossil with both the internal site and, later on, any dev work that I practice on, in separate Fossil files. So I created a new directory for these repositories.
/home/homeserver/repos/web/site.fossil
/home/homeserver/repos/dev/ [no repository yet!]
Following the instructions on the Fossil page, I created a short CGI file called 'fos_repo.cgi' that reads:
#!/usr/bin/fossil
directory: /home/homeserver/repos
notfound: http://www.homeserver.net/site404.htm
when I open the link to
www.homeserver.net/cgi-bin/fos_repo.cgi
I get redirected to the 404 page that I have written. So the script is clearly being read and working.
From reading the fossil pages I understand that I should be able to use the following link to open/access the repo.
www.homeserver.net/cgi-bin/repos/web/site
I'm not sure why this isn't working...
So far I have tried the following.
I opened the repository from the cli, and had the server run in the background
fossil server site.fossil &
I thought maybe the file should have been inside the main repo directory, not inside a subdirectory, so I moved it... it now lives in
/home/homeserver/repos/site.fossil
I tried creating an alias to the file in Apache:
Alias /home/homeserver/repos/web/site.fossil /home/homeserver/www/repos
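(For reference, Apache's Alias directive takes the URL path first and the filesystem path second, so the intended form would presumably have been something like the line below; the URL prefix is only an illustration.)
Alias /repos /home/homeserver/repos/web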
When I browse to
www.homeserver.net/repos/site
I get nothing, but going to
www.homeserver.net/repos/site.fossil
will attempt to download the file (which is a binary).
So I think I'm getting somewhere, but I'm not sure what I'm missing.
I've used fossil before, but I ran it as a local server, and started it up as and when I needed it.
I'm running it like this so that I can eventually push the site out to a live VPS (and maybe even end up hosting the Fossil site on the VPS as well).
PS: I really liked Fossil when I used it before, and loved the whole integrated wiki and bug tracker, and the fact that I could simply copy the file to my external drive to do a backup. I don't really want to change to something else, but if I have to...
thanks in advance.
David
Edit: trying other options.
So I thought I would try the single-repository method shown on the Fossil page, and adjusted my CGI script accordingly.
Now when I navigate to www.homeserver.net/cgi-bin/fos_repo.cgi I get the following message returned:
SQLITE_CANTOPEN: cannot open file at line 30276 of [f5b5a13f73]
SQLITE_CANTOPEN: os_unix.c:30276: (21) open(/home/homeserver/repos)
However, if I SSH to the server and start it manually with
fossil server site.fossil
I can get to the server with www.homeserver.net:8081
So I either have a problem with my SQLite usage in Apache or something else is wrong. Please help.
Solution
So, for reasons of simplicity, I've decided to go with a single CGI file for each repo.
My initial directory structure was as follows:
/home/homeserver/www
/home/homeserver/www/repos
/home/homeserver/www/repos/web # for web site development
/home/homeserver/www/repos/dev # for other development
I think part of my problem was that I was hoping that, by having the directory: option point to my repos/ location, Fossil would find the site.fossil file (located in repos/web) and the dev.fossil file (located in repos/dev).
Obviously this didn't work.
The reason I wanted it to look like this was for separation of the information on my system.
For some reason I had decided that pointing Fossil at repos/ would give me a nice Fossil-style front page with links to my repositories automatically. However, after having used the directory: version and getting the following error message
Unable to find or open the project repository
I realised that I was still going to need to write my own front page linking to the repositories, and that my expectation was a little too much.
So I've decided to run with a single CGI file pointing to each repo that I need to make.
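For anyone finding this later, such a per-repo CGI file is a minimal sketch along these lines, using Fossil's repository: directive instead of directory: (the path below uses the original site.fossil location as an example; adjust it to your layout):
#!/usr/bin/fossil
repository: /home/homeserver/repos/web/site.fossil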
Instead of
www.homeserver.net/cgi-bin/repos/web/site
try
www.homeserver.net/cgi-bin/repos.cgi/index
Reading your (very long) question again, I suggest trying
www.homeserver.net/cgi-bin/fos_repos.cgi/index
We tried to upgrade from Archiva 1.3.6 to 2.1.1, but suddenly the remote repositories (including proxy connectors) stopped working. The remote repository view shows error marks in the "Remote check" column, but no error message is shown.
Is there a possibility to find out what is going on?
We are using a proxy; we tried with the proxy activated and deactivated. I even installed Archiva locally on my machine with a fresh database, but still no success.
(How does this remote check even work when the proxy is activated/deactivated in the proxy connectors?)
Eclipse (with the newest m2e) says "Missing artifact junit:junit:jar:3.8.9". It happens so fast that I don't think Archiva is even trying to reach the central repository.
The logs on the Archiva side are empty.
Does anybody have some hints, or the same problem? I think I will try it at home tonight to see if it is a network issue.
Thanks in advance for any tips!
Update
It really seems that the proxy connector does not work, since the internal repository is empty: http://localhost:8080/archiva/repository/internal/ only shows .indexer.
Update 2
The proxy configuration seems bugged in Archiva 2.1.1. I can see the same behaviour as here: Mailing List
A JIRA task for this would be nice.
Does anybody know a workaround to set the proxy for a proxy connector? Or is there a possibility to set a global proxy via a settings file?
Update 3
Really seems like a bug in Archiva. I sent a mail to the mailing lists. Hopefully this gets fixed soon, because it is a blocker for every user behind a proxy.
I won't delete this question, for documentation purposes, in case someone has the same problem. The issue can be found in JIRA here.
I also had this problem and the simple solution was to change the proxy protocol from "http" to "https".
I also had the same problem. At first glance the solution given by Christian Quast seemed to work, but it didn't solve the problem. I eventually worked around it by using JVM proxy settings:
-Dhttp.proxyHost=[your_proxy_address]
-Dhttp.proxyPort=[your_proxy_port]
-Dhttp.proxyUser=[your_proxy_user_name]
-Dhttp.proxyPassword=[your_proxy_user_password]
-Dhttp.nonProxyHosts=localhost|127.0.0.1|::0|[any_other_hosts_not_to_use_proxy]
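Where exactly these -D options go depends on how Archiva is launched. As one example (assuming Archiva is deployed inside Tomcat; the proxy host and port below are placeholders):
export CATALINA_OPTS="$CATALINA_OPTS -Dhttp.proxyHost=proxy.example.com -Dhttp.proxyPort=3128 -Dhttp.nonProxyHosts=localhost|127.0.0.1"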
Update
I know it may sound weird, but with the settings above the error/warning icon on "Remote Check" may still appear. If you add the "network proxy" (mine uses the https protocol) to your remote repository, the error/warning icon is still there, but editing the remote repository again and removing its "network proxy" will show the OK/sun icon.
In my case <networkProxy> under conf\settings.xml gets updated correctly, including the port information (probably because my port is not the default 8080), but the remote repository connection still fails.
Also, changing the proxy protocol to https did not help.
I know the proxy settings are right because I use the same ones for Maven in .m2\settings.xml.
Fortunately I am only evaluating open-source repository management tools. I started with Archiva as it is by Apache and we use Maven in our project. I would have moved ahead if this critical issue had a fix or workaround. I guess I will have to take a shot at Nexus.
Exactly the same problem here. I can't vote on your bug report because I have no JIRA account.
As far as I can figure out, there seems to be a problem with the configuration file ~/.m2/archiva.xml: the proxy is set without port information.
Hopefully this bug will be fixed as soon as possible.
Extending João Ferreira's reply, to access repositories with https URLs (such as Maven Central), you will also need:
-Dhttps.proxyHost=[your_proxy_host]
-Dhttps.proxyPort=[your_proxy_port]
Recently I have been plagued by an error on committing to a single SVN repo using TortoiseSVN (1.8.7.25475) or AnkhSVN (2.5.12471.17):
Error running context: The server sent an improper HTTP response
Here is a screenshot of the error in TortoiseSVN:
The pixels differ of course, but the error is the same in AnkhSVN.
This only seems to affect attempts to commit modifications, not additions or deletions; and I can commit mods to several other SVN repos on the same server just fine.
Since my teammates continue to commit mods to the repo in question and the issue has only struck my commits to that repo, I tried committing simple mods after a fresh checkout of the repo: a few one-mod-at-a-time commits worked, but then...same error.
I also searched for, reviewed, and tried some possible solutions (e.g. in a thread on the TortoiseSVN forums to which Stefan Küng replied) - a registry tweak (deleting HKEY_CURRENT_USER\Software\Tigris.org - after exporting it for backup of course), checking my global properties, and ensuring that I am not using a proxy. Same error.
Finally, I tried both repairing and downgrading TortoiseSVN. Same error.
Has anyone else encountered this error under similar circumstances and found a solution to it?
Note that some related search results mention tweaking httpd.conf or other aspects of the SVN server, but server tweaks seem inappropriate to me. Again, my teammates continue to commit mods to the same repo using the same version of TortoiseSVN, the same OS (Win 7 Pro 64-bit) etcetera. Maybe I have missed something on the server that could just happen to affect me, though.
Upgrade your Subversion client to the latest version.
Outdated answer:
ON THE CLIENT MACHINE! Open %APPDATA%\Subversion\servers in a text editor and add the line http-bulk-updates = yes, save the file and see if it helps.
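For clarity, a minimal sketch of what the servers file could look like after the change (the option goes into the [global] section unless you use per-group settings):
[global]
http-bulk-updates = yes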
If it helps, you'd better configure the Apache HTTP Server's httpd.conf with the SVNAllowBulkUpdates Prefer directive so that all Subversion 1.8 clients can connect without errors.
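On the server side that would look roughly like the following; the /svn location and repository parent path are placeholders for your own setup:
<Location /svn>
  DAV svn
  SVNParentPath /var/svn
  SVNAllowBulkUpdates Prefer
</Location>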
If there are more than just you who get this error in your organization and adjusting server's configuration is still unacceptable, you can change the setting http-bulk-updates = yes via Windows Registry so adjusting this on all affected machines can be done via AD Group Policy.
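If I remember the registry layout correctly (treat this as an assumption to verify), the client reads the servers settings from HKCU\Software\Tigris.org\Subversion\Servers, with the section as a subkey and the option as a string value, for example:
reg add "HKCU\Software\Tigris.org\Subversion\Servers\global" /v http-bulk-updates /t REG_SZ /d yes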
Read more info in Apache Subversion 1.8 Release Notes.
P.S.: faulty network hardware / firewall / antivirus is still the root cause here. The above is just a workaround to revert to the behavior of Subversion 1.7 and older client with neon network library. BTW, I guess that the installed antivirus is NOD32 or BitDefender.
In my case it was a problem with nginx's gzip compression (I run an SVNEdge SVN server behind nginx).
I disabled gzip and everything started working.
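In nginx terms that simply means turning gzip off in the server (or location) block that proxies to the SVN server, roughly like this (the backend address is a placeholder):
server {
    listen 80;
    gzip off;                               # disable compression for the SVN virtual host
    location / {
        proxy_pass http://localhost:18080;  # SVNEdge backend, placeholder port
    }
}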
Our team is planning to use Gerrit. So, to get introduced, I set up a server, used OpenID for authentication, and created some test users and test projects in it.
Now we are ready to use it. But we actually prefer LDAP for real use.
So, can I change my authentication system from OpenID to LDAP? What will happen to the current users then?
I want to clear the test projects and changes. How can I do that?
Can I completely delete the existing Gerrit setup and start a fresh setup on the same machine? (I tried extracting the jar in a different folder, but I faced some problems with it.)
I am using Ubuntu 12.04 as my server.
Please help.
Delete the database (you're not using the embedded H2 database anymore but some MySQL or PostgreSQL server, are you?) plus the directory where Gerrit is running (the -d parameter, see the docs). Additionally, remove the Git repositories if you configured them to be located on a different path.
Then all your data is gone and you can start from scratch.
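A rough sketch of what that could look like; the site path and database name are placeholders, so adapt them to the values you chose at init time:
# stop Gerrit, then remove the site directory that was created with "java -jar gerrit.war init -d ..."
rm -rf /home/gerrit/review_site
# drop the review database ("reviewdb" is the common default name)
mysql -u root -p -e "DROP DATABASE reviewdb;"
# re-run init to start from scratch, this time choosing LDAP as the auth type
java -jar gerrit.war init -d /home/gerrit/review_site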
So I have managed it: I can clone Mercurial repositories remotely over HTTP from my Windows Server 2003 machine, using that machine's IP address. I did deactivate IIS 6, though, and am using Apache 2.2.x now. But not everything works yet... darn! Here's the thing:
Cloning goes smoothly! But when I want to push my changes back to the original repository I get the message "cannot lock static http-repository". On the internet I found several explanations saying that Mercurial wasn't designed to push over plain HTTP connections. Still, on the Mercurial website there's something about configuring an hgrc file.
There's also the possibility of configuring Apache to host via HTTPS (SSL). For this you have to load the module enabling OpenSSL and generate keys.
Configuring the hgrc file
Just add "push_ssl = false" under the [web] line. But where to put this file when pushing your changes back?! Because I placed it in the root of the server, in the ".hg" directory, nothing works.
Using SSL/HTTPS with Apache
When I try to access 'https://myipaddress' it fails, displaying a Dutch message meaning something like "server is taking too long to respond". Trying to push also gives me a Dutch error message meaning about the same. It cannot connect to my server via HTTPS, although I followed the steps at this blog exactly.
I don't care which of the above solutions works for me; so far none of them do. So please, can anyone help me with one of the solutions above? Pick the easiest! Help will be greatly appreciated, and not only by me.
Summary
-Windows Server 2003
-Apache 2.2 with OpenSSL
-Mercurial 1.8.2
-I can clone, but not push!
Thank you!
Maarten Baar(s)
It seems like you might have Apache configured incorrectly for what you want to do. Based on your question it sounds like you have a path (maybe the root of the server) pointing directly at the repository you want to serve.
Mercurial comes with a script for this exact purpose; in the latest version it is hgweb.cgi. There are reasonably good instructions for setting it up on the Mercurial site. It should allow both cloning and pushing. You will need push_ssl = false if you will not be configuring HTTPS, and also an allow_push line, which lets certain users, or all (*), push to the repository. But all that should be part of the setup docs.
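A minimal sketch of such a setup with placeholder Windows paths (adjust them to your install; Apache 2.2 syntax assumed):
# httpd.conf: serve hgweb.cgi at /hg
ScriptAlias /hg "C:/hg/hgweb.cgi"
<Directory "C:/hg">
    Order allow,deny
    Allow from all
</Directory>

# hgweb.config (referenced from hgweb.cgi): which repositories to publish
[paths]
/ = C:/repos/*

# each repository's .hg/hgrc: allow pushing without HTTPS
[web]
allow_push = *
push_ssl = false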