Cloning Fossil with self-hosted CGI script yields "unknown repository"

I've been trying for two weeks to get Fossil working, and if I did not see the advantage of having a repository I would have given up by now. I am really at my wits' end with this.
I have set up fossil on both my server and my computer. My server is Linux, with Virtualmin taking care of most of the hosting. My computer is Windows 7.
The fossil binaries are on the PATH on both machines.
I chose to have Fossil host itself using a self-hosted CGI script. To be specific, it is the exact same script as the one on this page under the heading "Another solution to automatically serve multiple repositories":
http://fossil-scm.org/fossil/wiki?name=Cookbook
It seems to work: if I browse to my server's IP at /cgi-bin/p, I can see a list of my repositories, which I've created using:
fossil init ider.fsl
I can see the wiki and the general web GUI of the Fossil repository, however...
From my Windows machine, when I try to clone the source using the following:
fossil clone 192.168.1.200:81/cgi-bin/p ider
I only hear the constant echo of:
unknown repository: 192.168.1.200:81/cgi-bin/p
Could it be the permissions set on the ider.fsl file? Obviously I am new at SCM, but something is very wrong, since I can't find anything in the documentation or any references from Google describing this problem, unless I am trying to find out if I want to clone a dinosaur... :/

Just guessing here, but from eyeballing the script, it looks like it requires the repo name to be in the CGI variable PATH_INFO, i.e. the bit of the URL after the script name.
If this is the case, you need to clone the repo using:
fossil clone http://192.168.1.200:81/cgi-bin/p/ider ider
# ^^^^^
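Incidentally, Fossil's own CGI control-file format has a built-in multi-repository mode that behaves much like that Cookbook script; a minimal sketch, assuming the fossil binary is at /usr/bin/fossil and the repositories live in /home/fossil/repos (note that this directory mode only serves files with a .fossil extension, so ider.fsl would need renaming):
#!/usr/bin/fossil
directory: /home/fossil/repos
repolist
With this, /cgi-bin/p lists the repositories and /cgi-bin/p/ider serves /home/fossil/repos/ider.fossil.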
Fossil also has a pretty active mailing list that you should be able to get help from.

Related

Create repository in non-empty remote folder

It's been 14 years since I last worked with SVN, and apparently I have forgotten everything...
I have an existing web project, consisting of a bunch of php, html, js and other files in a directory tree on a V-Server. Now I want to take these folders under version control and create a copy on my local machine using SVN. So I installed Subversion, using the already-present Apache2, according to these instructions: https://www.linuxcloudvps.com/blog/how-to-install-svn-server-on-debian-9/
But now I've kind of hit a roadblock. If I try svnadmin create on the existing folder, it tells me that it is not empty and does nothing. All the questions and answers I find here and elsewhere are either
a) focusing on an already existing folder on the local machine, or
b) assuming more prior knowledge than I have right now, i.e. I don't understand them.
Is there a step-by-step guide for dummies anywhere on how to do this? Or can anyone tell me in layman's terms how to do this?
I can't believe this case never comes up or that it is really very complicated.
At the risk of failing to understand your exact needs, I think you can proceed as follows. I'll use these terms:
Code: the unversioned directory on the V-Server where you currently have the bunch of php, html, js and other files.
Repository: the first "special" directory you need to create in order to store your Subversion history and potentially share it with others. There must be one, and there can be only one.
Working copy: the second "special" directory you need to create in order to work with your php, html, js... files once they are versioned; it is linked to a given path and revision of your repository. At a given time there can be zero, one or many of them.
Your code can become a working copy or not, that's up to you, but it can never become a repository:
$ svnadmin create /path/to/code
svnadmin: E200011: Repository creation failed
svnadmin: E200011: Could not create top-level directory
svnadmin: E200011: '/path/to/code' exists and is non-empty
Your repository requires an empty folder but it can be located anywhere you like, as long as you have access to it from the machine you're going to use in your daily work. Access means it's located in your PC (thus you use the file: protocol) or it's reachable through a server you've installed and configured (svn:, http: or https:).
$ svnadmin create /path/to/repo
$
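Once created, that same repository can be reached through whichever access method you have set up (hypothetical host and paths):
$ svn checkout file:///path/to/repo wc
$ svn checkout svn://server.example/repo wc
$ svn checkout http://server.example/svn/repo wc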
Your working copies can be created wherever you need to work with your IDE. It can be an empty directory (the usual scenario) or a non-empty one. The checkout command retrieves your files from the repo and puts them in the working copy so, at a later stage, you're able to run a commit command to submit your new and changed files to the repository. As you can figure out it isn't a good idea to create a working copy in random directories because incoming files will mix with existing files. There's however a special situation when it can make sense: when the repository location is new and is still empty. In that case you can choose between two approaches:
If you want code to become a working copy, you can check out right into it and then make an initial commit to upload all files:
$ svn checkout file:///path/to/repo /path/to/code
Checked out revision 0.
$ svn add /path/to/code --force
A code/index.php
$ svn commit /path/to/code -m "Import existing codebase"
Adding /path/to/code/index.php
Transmitting file data .done
Committing transaction...
Committed revision 1.
If you don't care about code once it's stored in the repository or you want your working copy elsewhere, you can import your files from code and create a working copy in a fresh directory:
$ svn import /path/to/code file:///path/to/repo -m "Import existing codebase"
Adding code/index.php
Committing transaction...
Committed revision 1.
$ svn checkout file:///path/to/repo fresh
A fresh/index.php
Checked out revision 1.

"Bare" git repository: how can I let Apache always "see" the latest commit?

We've got a "bare" git repository on a server, for a web portal project. Several programmers, designers, etc. perform dozens of pushes and pulls to/from it.
Now we want to test the project on the server itself, and always test the latest commit through an Apache web server which is installed on the same machine the "bare" git repository is stored on.
How can we 'unbare' the repository, so that the working directory always contains only the latest commit from the most recent push?
Or is there anything else that would achieve the same result?
You can use a post-receive hook to do a git pull inside your webserver document root/repository.
In your bare repository, do:
mv hooks/post-receive.sample hooks/post-receive
chmod +x hooks/post-receive
(if your git version doesn't ship a post-receive.sample, just create hooks/post-receive; note that in a bare repository the hooks directory sits at the top level, not under .git)
The post-receive hook should be something like:
#!/bin/sh
# the hook runs with GIT_DIR pointing at the bare repository; unset it
# so that git pull operates on the clone in the web root instead
unset GIT_DIR
WEB_ROOT='/var/www/project'
cd "$WEB_ROOT" || exit 1
git pull
For a more elegant solution that doesn't require the web server area to be a git repository, review the git documentation about hooks.
Note: if you use the simple solution, please make sure that your webserver doesn't serve the .git directory, as this would give hackers/crackers access to the website's source code!
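A common variant that avoids the problem altogether keeps the web root as a plain directory with no .git in it at all: the bare repository's post-receive hook checks the files out directly. A minimal sketch, assuming the same /var/www/project web root (which must already exist):
#!/bin/sh
# post-receive in the bare repository: write the files of the latest
# commit straight into the web root, which is not a git repository itself
GIT_WORK_TREE=/var/www/project git checkout -f master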

Transfer a trac database from one desktop to another

I'm using Trac 0.12.2, which came as part of the Bitnami Trac Stack.
I am very new to Trac: I started working with a local repository on a desktop a few weeks ago and created some issues. Now I want to transfer all those issues onto my new Trac installation on another desktop. So I simply tried replacing the (I believed) empty database folder of the new installation with my old Trac DB folder.
Specifically this folder:
C:\BitNamiTracStack\repository\db\
When I tried doing so, the admin tab in the Trac interface disappeared.
I also got a message:
Warning: Can't synchronize with repository "(default)" (The repository directory has changed, you should resynchronize the repository with: trac-admin $ENV repository resync '(default)'). Look in the Trac log for more information.
How do I successfully transfer my issues from one desktop to another?
Check your installation and find the correct directory, called the 'Trac environment', as per Remy's advice.
While his answer is without doubt the safe road and good general advice, you may still succeed with a less complete transfer, depending on what you have already put into the Trac environment in question. Assuming you use BitNami's default Trac db backend (SQLite), you'll need at least:
the latest db named trac.db from the db folder
the configuration file conf/trac.ini
If you've worked with attachments to tickets or wiki pages, the whole directory structure below attachments is needed as well.
Other things might not have been touched by a self-declared "very new" Trac user within the first weeks. Of course a diff -Nur <path_to_old_dir> <path_to_new_dir> | <your_favorite_editor> will remind you of anything you may have already forgotten.
You shouldn't copy the database alone, but the complete Trac environment. That's the directory containing the attachments, conf, db, htdocs, log, plugins and templates directories. In your case, this seems to be the directory:
C:\BitNamiTracStack\repository
(I'm not familiar with the BitNami stack, but the name "repository" sounds suspect. I hope they don't put the Trac environment below the Subversion repository.)
See the official Trac documentation on backing up a Trac environment and restoring it. You should be able to use this to migrate your config to another server.
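If you go that route, the transfer boils down to something like this (target paths are assumptions), using trac-admin's hotcopy command to take a consistent snapshot of the whole environment:
trac-admin C:\BitNamiTracStack\repository hotcopy C:\trac-backup
Copy C:\trac-backup to the new desktop, point the new installation at it, and then run the resync the warning message asks for:
trac-admin C:\path\to\new\environment repository resync "(default)"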

How to update my server files from a git repo automatically every day

I am a noob at this server-related work. I am writing some PHP code on my local system and have been updating my repo on GitHub regularly. Each time I want to test my application, I copy all the files from my local system onto my server through FTP and then test it there. Now I want to know whether there is a way to automatically make the commits I make reflect in the files on the server. Is there a way to make the server get the files from the repo periodically (say, once every day)?
Can this be done the other way around, i.e. when I make a push from my local machine, the repo gets updated and in turn the files on the server also get updated?
My Server Details: Apache 2.2.15, Architecture i686 with Linux Kernel 2.6.18-194.32.1.el5
In addition to cronjobs, you can use a post-receive hook: http://help.github.com/post-receive-hooks/
If you have cronjobs you can use them. First set up the repository on your server. Then create the cronjob, choose a time at which it should be executed, and have it run the following command:
cd /your/repository/folder && git pull origin master
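For example, a crontab entry that runs the pull once a day at 3 AM could look like this (the path is a placeholder):
0 3 * * * cd /your/repository/folder && git pull origin master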
Perhaps explore the git archive command, which you can use to get a zip file or similar of your code. You could then use a script to copy that to your (other) server.
git archive --format=zip --output=./src.zip HEAD
will create a zip file called src.zip from the HEAD of your repo
More info:
http://www.kernel.org/pub/software/scm/git/docs/git-archive.html
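If your server is reachable over SSH, you could also pipe the archive straight into the target directory instead of creating a local zip first (hostname and path are hypothetical):
git archive --format=tar HEAD | ssh user@yourserver "tar -x -C /var/www/project"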
Do a "git export" (like "svn export")?

User-specific Maven settings in repository

As per the documentation (http://maven.apache.org/settings.html), user-specific settings can be copied either to the .m2 folder or under the Maven installation. If a developer changes machines or gets a new user id, such properties have to be copied manually to the newer machine.
Would it be possible to store user-specific settings in the repository itself (say SVN) and somehow have the mvn scripts load them on startup?
If the content of the settings.xml is not that user specific (e.g. for mirrors), you could store the whole Maven install in SVN with a customized conf/settings.xml and have the developers grab it from SVN to "install" it on a new machine as described in this previous answer.
If the content of the settings.xml is really user specific (e.g. it contains secret things like passwords), then it must be located in ~/.m2 and you will have to somehow make it available at the new location. If a developer logs on another machine, you could use "Roaming user profile". If a developer gets another id, then you'll really have to duplicate it. The technical solution may depend on the level of confidentiality required.
And if you have several developers sharing a userid but still need different settings.xml, then you'll have to pass it to Maven using the -s option. One could imagine storing these custom settings.xml in the project in that case (assuming it doesn't contain sensitive information). For example:
mvn -s settings-user1.xml <goal>
Nope, the whole point of having user settings is to store them outside the Maven projects. There's nothing stopping you from creating your own SVN repository and storing your configuration files there, though. You could write some shell scripts to bootstrap a new workstation from that repository, but whether that's worthwhile depends on how often you do this.
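As a sketch of that bootstrap idea (the repository URL and target path are made up), such a script could simply export the shared settings into place:
svn export http://svn.example.local/devtools/settings.xml "$HOME/.m2/settings.xml"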
I would suggest that you set up your own repository manager, such as Archiva, Nexus or Artifactory, which will fetch your dependencies/plugins. You can then use a mirror setting to explicitly specify just one repository to be used (the one you set up on your network). Whenever a developer changes machines, or the same dependencies are needed by multiple developers, the internal mirror serves as the repo, and your dependencies/plugins will download to your local repositories in no time.
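For reference, a minimal sketch of the mirror entry that goes into settings.xml (the id and URL are placeholders for your internal repository manager):
<settings>
  <mirrors>
    <mirror>
      <!-- route all repository requests through the internal manager -->
      <id>internal</id>
      <mirrorOf>*</mirrorOf>
      <url>http://repo.internal.example/maven-public/</url>
    </mirror>
  </mirrors>
</settings>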