Group permissions reset on mercurial update - permissions

I've configured my hg repository according to the docs described here:
MultipleCommitters.
However, when I execute "hg update -C" to recreate the working copy locally, the file permissions change in a way that eventually causes errors on push when other developers attempt to commit changes. Supposedly, when configured properly, hg update will preserve file permissions, yet it doesn't appear to be doing so:
-rwxrwxr-x 1 root mercurial 2948 2010-06-24 15:27 .hg/store/data/src/public/index.php.i
vs. (actual source file, after deleting the working copy and recreating with "hg update -C")
-rw-r--r-- 1 root mercurial 820 2010-06-28 12:07 src/public/index.php
How can mercurial be configured such that when users create new files or modify existing files, the group and its permissions are preserved?
UPDATE
2010.06.28
Here is a sample of the errors I'm seeing:
remote: resolving manifests
remote: getting src/configs/application.ini
remote: abort: Permission denied: /hg/repo/path/src/configs/application.ini
remote: warning: changegroup hook exited with status 255
remote: calling hook changegroup.notify: hgext.notify.hook

I had this same problem and solved it by setting the setgid bit on the remote repo's directories, so that new files inherit the repository group:
chmod g+s `find . -type d`
That will solve the problem that the OP ran into.
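For reference, a slightly fuller version of that setup might look like the sketch below; the repository path and the mercurial group are taken from the question, and the exact layout is an assumption:
# give the repository tree to the shared group and make it group-writable
chgrp -R mercurial /hg/repo/path
chmod -R g+rwX /hg/repo/path
# setgid on directories so files created by any user inherit the mercurial group
find /hg/repo/path -type d -exec chmod g+s {} +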

Which method exactly did you use? Please describe your setup in more detail.
Mercurial only records the executable bit of each file on commit. When you do hg update -C, it recreates files with that bit applied; the other permission bits and the group come from your umask and the parent directory, not from the repository.
Your error message suggests that the repository files on the repository server have the wrong permissions/owner, so your hg push cannot modify them. This can happen when someone commits and pushes as a different user on the repository server.
I would recommend the shared SSH method ( https://www.mercurial-scm.org/wiki/SharedSSH ): you set up a separate user account for repository management, add the developers' SSH public keys (restricting them so they can only be used to run Mercurial against specific repositories), and then use ssh://hguser@server/path/to/repository as the URL.
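As a rough sketch of that key restriction (the hguser account name and repository paths are assumptions), each developer's key in ~hguser/.ssh/authorized_keys can be locked down to the hg-ssh helper shipped in Mercurial's contrib directory:
# ~hguser/.ssh/authorized_keys -- one line per developer key
command="hg-ssh /path/to/repo1 /path/to/repo2",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAA... dev@example.com
Developers then clone and push via ssh://hguser@server/path/to/repository.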
BTW: by default Mercurial doesn't run any hooks if the user doing the push/pull is not in the trusted list. See the trusted section in man hgrc.
BTW2: don't run regular software as root; use a normal account for that.

Nexus Repo - could not lock user prefs

I'm running Sonatype Nexus 3 inside a docker container, with the following startup command:
docker run -d -p 80:8081 --ulimit nofile=65536:65536 --name nexus -v nexus-data:/nexus-data -e INSTALL4J_ADD_VM_PARAMS="-Xms4g -Xmx4g -XX:MaxDirectMemorySize=6717m -Djava.util.prefs.userRoot=${NEXUS_DATA}/javaprefs" sonatype/nexus3
After updating the docker image version from 3.30.0 to 3.40.1, I keep getting the following warnings regarding user prefs.
2022-07-18 13:14:45,860+0000 WARN [Timer-0] *SYSTEM java.util.prefs - Couldn't flush user prefs: java.util.prefs.BackingStoreException: Couldn't get file lock.
2022-07-18 13:15:15,860+0000 WARN [Timer-0] *SYSTEM java.util.prefs - Could not lock User prefs. Unix error code 2.
As you can see from the startup command, the user prefs directory is inside the Docker volume, at /nexus-data/javaprefs. I have tried looking for existing locks inside the directory, but found none. I've also tried completely deleting the directory; the warning still came up, and the folder itself wasn't being recreated by Nexus.
I honestly don't even know if this is an important issue or not, since there is little to no documentation about the user preferences folder.
Even a way to turn off the warning log which fires every 30s would be useful.
----UPDATE----
I've tried doing a clean installation of Nexus through Docker, following the simple instructions in the GitHub sonatype/nexus3 Docker repository, and I still see these warnings.
I even tried on a different OS (Windows instead of Linux, through Docker Desktop), and with and without a volume for /nexus-data.
At this point I believe it to be a bug in a newer Nexus version.
TL;DR: Explicitly adding -Djava.util.prefs.userRoot=/nexus-data/javaprefs to the JVM parameters should solve the problem, assuming the Nexus data directory is at /nexus-data/.
I just had the same issue after upgrading from 3.38.1 to 3.42.0. After some investigation I found that the java.util.prefs.userRoot property indeed got lost somewhere between those versions. The default value in the vanilla Nexus 3.38.1 is /nexus-data/javaprefs.
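As a sketch, the original startup command with the prefs directory hard-coded (assuming the data volume is mounted at /nexus-data, as in the question) would be:
docker run -d -p 80:8081 --ulimit nofile=65536:65536 --name nexus -v nexus-data:/nexus-data -e INSTALL4J_ADD_VM_PARAMS="-Xms4g -Xmx4g -XX:MaxDirectMemorySize=6717m -Djava.util.prefs.userRoot=/nexus-data/javaprefs" sonatype/nexus3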

Using Gitlab deploy keys with write access

I am currently running CE version 8.17.4 and am attempting to set up a deploy key with write access (added in 8.16) so that my runner instance may commit build artifacts back to the repository. I took the following steps to set this up:
On the runner instance, I generated the ssh keypair with the command: 
sudo ssh-keygen -t rsa -C "label" -b 4096
The generated keypair was saved to /home/gitlab-runner/.ssh/id_rsa and password protected.
Within GitLab, I created a public deploy key from the admin console, pasted the contents of id_rsa.pub into the appropriate field, and verified that the key fingerprints matched. I checked the "Write access allowed" box.
In the private project to which I wanted the runner to have repository access, I enabled the newly created public deploy key.
This is a LaTeX document repository, so in the .gitlab-ci.yml file, I issue the following script after building the pdf:
after_script:
  - "git commit -am 'autobuild PDF'"
  - "git push origin master"
When the changes were committed, the build ran successfully on the runner up until the git push origin master command, and this error was thrown:
fatal: Authentication failed for 'http://gitlab-ci-token:xxxxxxxxxxxxxxxx@host/project.git/'
Ok. A couple questions:
If the deploy key is just an SSH key, shouldn't it be connecting over SSH rather than HTTP, or does this not matter? I haven't found much documentation on using this new write-permission deploy key feature, so am I missing something in the steps I took above?
Do I need to include [ci skip] in the commit message to avoid looping CI builds? I saw this concern come up in the original issue tickets for this feature, but did not see whether this step was required or not. 
Thanks for any help!
Jawad's comment worked for me: you need to force SSH. For example:
git remote add ssh_remote git@host:user/project.git
git push ssh_remote HEAD:dev
Thanks, Jawad.
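Tying this back to the [ci skip] question, a rough sketch of the after_script commands might be (the remote name, host/project path and target branch are placeholders, and [ci skip] in the commit message keeps the push from triggering another pipeline):
git commit -am "autobuild PDF [ci skip]"
git remote add ssh_remote git@host:user/project.git
git push ssh_remote HEAD:master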

Changing permissions of added file to a Docker volume

In the Docker best practices guide it states:
You are strongly encouraged to use VOLUME for any mutable and/or user-serviceable parts of your image.
And by looking at the source code of, e.g., the cpuguy83/nagios image, this is clearly what is done, as everything from the nagios to the apache config directories is made available as a volume.
However, looking at the same image, the apache service (and the CGI scripts for nagios) runs as the nagios user by default. So now I'm in a pickle, as I can't seem to figure out how to add my own config files in order to, e.g., define more hosts for nagios monitoring. I've tried:
FROM cpuguy83/nagios
ADD my_custom_config.cfg /opt/nagios/etc/conf.d/
RUN chown nagios: /opt/nagios/etc/conf.d/my_custom_config.cfg
CMD ["/opt/local/bin/start_nagios"]
I build as normal, and try to run it with docker run -d -p 8000:80 <image_hash>, however I get the following error:
Error: Cannot open config file '/opt/nagios/etc/conf.d/my_custom_config.cfg' for reading: Permission denied
And sure enough, the permissions in the folder look like this (while the apache process runs as nagios):
# ls -l /opt/nagios/etc/conf.d/
-rw-rw---- 1 root root 861 Jan 5 13:43 my_custom_config.cfg
Now, this has been answered before (why doesn't chown work in Dockerfile), but no proper solution other than "change the original Dockerfile" has been proposed.
To be honest, I think there's some core concept here I haven't grasped (as I can't see the point of declaring config directories as VOLUME, nor of running services as anything other than root). So, given a Dockerfile like the one above (which follows Docker best practices by declaring multiple volumes), is the solution/problem:
To change NAGIOS_USER/APACHE_RUN_USER to 'root' and run everything as root?
To remove the VOLUME declarations in the Dockerfile for nagios?
Other approaches?
How would you extend the nagios Dockerfile above with your own config file?
Since you are adding your own my_custom_config.cfg file directly into the image at build time, just change the permissions of my_custom_config.cfg on your host machine and then build the image with docker build. The host file's permissions are carried into the image by ADD.
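For example, assuming the Dockerfile above sits next to my_custom_config.cfg on the host (the image tag my-nagios is just a placeholder), something along these lines should work:
# on the host, before building: make the config readable by the nagios user
chmod 644 my_custom_config.cfg
docker build -t my-nagios .
docker run -d -p 8000:80 my-nagios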

How to tag a git repo in a bamboo build

I'm trying to tag the Git repo of a Ruby gem in a Bamboo build. I thought doing something like this in Ruby would do the job:
`git tag v#{current_version}`
`git push --tags`
But the problem is that the repo does not have an origin remote; somehow Bamboo is getting rid of the origin.
Any clue?
Yes, if you navigate to the job workspace, you will find that Bamboo does not do a straightforward git clone "under the hood", and the remote is set to an internal file path.
Fortunately, Bamboo does store the original repository URL as ${bamboo.planRepository.repositoryUrl}, so all you need to do is add a remote pointing back at the original and push to it. This is what I've been using with both basic Git repositories and Stash, creating a tag based on the build number.
git tag -f -a ${bamboo.buildNumber} -m "${bamboo.planName} build number ${bamboo.buildNumber} passed automated acceptance testing." ${bamboo.planRepository.revision}
git remote add central ${bamboo.planRepository.repositoryUrl}
git push central ${bamboo.buildNumber}
git ls-remote --exit-code --tags central ${bamboo.buildNumber}
The final line is simply to cause the task to fail if the newly created tag cannot be read back.
EDIT: Do not be tempted to use the variable ${bamboo.repository.git.repositoryUrl}, as this will not necessarily point to the repo checked out in your job.
Also bear in mind that if you're checking out from multiple sources, ${bamboo.planRepository.repositoryUrl} points to the first repo in your "Source Code Checkout" task. The more specific URLs are referenced via:
${bamboo.planRepository.1.repositoryUrl}
${bamboo.planRepository.2.repositoryUrl}
...
and so on.
I know this is an old thread; however, I thought I'd add this info.
From Bamboo version 6.7 onwards, there is a built-in Git repository tagging task, Repository Tag.
You can add a repository tagging task to the job and use a Bamboo variable as the tag name.
You must have Bamboo and Bitbucket integrated via an application link.
It seems that after a checkout by the Bamboo agent, the remote repository URL for origin is set to file://nothing:
[remote "origin"]
url = file://nothing
fetch = +refs/heads/*:refs/remotes/origin/*
That's why we can either update the URL using git remote set-url or, as I did in my case, create a new alias so the existing behaviour isn't broken (a sketch of the set-url variant follows the config below). There must be a good reason why it is set this way.
[remote "build-origin"]
url = <remote url>
fetch = +refs/heads/*:refs/remotes/build-origin/*
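For completeness, the git remote set-url variant mentioned above would look roughly like this (it overwrites origin in the build workspace, which is exactly the existing behaviour the alias approach avoids changing):
git remote set-url origin ${bamboo.planRepository.repositoryUrl}
git push origin ${bamboo.buildNumber}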
I also noticed that using ${bamboo.planRepository.<position>.repositoryUrl} did not work for me, since it was defined in my plan as HTTPS. Switching to SSH worked.

How can I recover after a checksum mismatch with 'git svn clone'?

I'm cloning an SVN repository to Git as part of our migration plan. I've hit various snags along the way, forcing me to continue the clone with a git svn fetch command. The most recent failure is one I can't figure out how to solve:
$ git svn fetch
Checksum mismatch: dc/trunk-4632-jh/dc-smtpd/lib/Qpsmtpd/Address.pm.t 8ce3aea3f47dc115e8fe53bd62d0f074cfe93ec6
expected: 59de969022e46135fa6dc7599fc2f3b4
got: 4334926a01c905cdb7fce71265e370c1
I found this related answer; however, that solution doesn't work because git svn log is not yet functional, as the repo is not fully in place:
$ git svn log dc/trunk-4632-jh/dc-smtpd/lib/Qpsmtpd/Address.pm.t
fatal: ambiguous argument 'HEAD': unknown revision or path not in the working tree.
Use '--' to separate paths from revisions
log --no-color --first-parent --pretty=medium HEAD: command returned error: 128
How can I proceed?
Another answer to an old question, but straightforward solutions are tough to find for this problem, so hopefully this helps others.
I think this issue occurs due to a file being corrupted during transfer. I'm not sure how or why it happens, but in my case I get the same error at different revisions every time I do a new clone, and sometimes not at all.
Using the questioner's error message:
$ git svn fetch
Checksum mismatch: dc/trunk-4632-jh/dc-smtpd/lib/Qpsmtpd/Address.pm.t
8ce3aea3f47dc115e8fe53bd62d0f074cfe93ec6
expected: 59de969022e46135fa6dc7599fc2f3b4
got: 4334926a01c905cdb7fce71265e370c1
The following steps allowed me to resume and make progress:
1. View all branches (these will all be remote branches): git branch -a
2. Check out the affected branch: git checkout remotes/origin/trunk-4632-jh (this will take some time to complete).
3. Find the last revision in which the problematic file was changed: git svn log dc-smtpd/lib/Qpsmtpd/Address.pm.t and note the highest revision number.
4. Reset back to that revision: git svn reset -r <rev> -p
5. Carry on: git svn fetch
Good luck.
I know this is old, but maybe it will be helpful for future reference, as the search results on this problem are not very helpful.
I hit a similar issue on our huge repository, which takes days to clone, and unfortunately at one point I had to restart my machine. I am currently working out how to resolve the problem, so please keep in mind this is more a suggestion than a tested solution.
I think you need to try creating a branch and checking out the commits you currently have from the previous fetch:
git checkout -b master git-svn
After that is done you should have a working tree up to that commit. Further fetches will probably fail due to object mismatches, but at that point it should at least be possible to use git svn reset to revert the faulty SVN fetches (see the related answer linked by the OP). If so, find the offending commit, reset to just before it, and then continue fetching.
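A rough sketch of that recovery step, once the branch above is checked out (the revision number is a placeholder for the first faulty fetch you identify):
git svn reset -r <offending-rev> -p   # reset to the revision just before the faulty one
git svn fetch                         # re-fetch from there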
You might want to rebase and revert to the state before that broken commit on your master branch, or convert back to a bare repository, if that's what you're after (in my case it is).
Hope this works. I'll post an update when my checkout is done (it will take at least a few hours... sigh).
Edit: That seemed to work. I successfully discarded some git-svn commits and am able to re-fetch them again. :)
Edit 2: Make sure to keep resetting until you don't get any object mismatch warnings on git svn fetch (otherwise you will run into the same issue soon).
Cheers,
Henryk
See also: Git svn rebase : checksum mismatch
In our case, additional processing of the files (server-side includes in Apache) caused the checksum problem.
Disabling SSI in Apache's /etc/httpd.conf for the duration of the migration, by commenting out the
AddType text/html .shtml
AddOutputFilter INCLUDES .shtml
directives, solved the problem: the front-end Apache server was interpreting the .shtml files and serving generated content (and thus a different hash) instead of the original files.
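Concretely, the change was just commenting those two directives out in /etc/httpd.conf for the duration of the migration (restore them afterwards):
#AddType text/html .shtml
#AddOutputFilter INCLUDES .shtml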
That means some files in the repository got corrupted. This can have various causes, such as software bugs, bit rot in drives, etc. I was recently migrating a very old ~10 GB SVN repository to Git, so some corruption was expected.
To fix the corruption, you basically need to dump the entire repository and import it while filtering the errors out. Note that the goal is to complete the import process, no matter why or how the repository got corrupted. You cannot simply repair the corruption itself without a backup and diffing through the revision files.
First basic one-off command you could use is:
svnadmin create repo2
svnadmin dump repo | sed '/^Text-content-md5/d' | svnadmin load repo2
This strips the stored MD5 checksums from the dump, so svnadmin load does not reject the content and the new repo ends up with freshly calculated checksums.
If you encounter more errors during the dump and load (which is expected), try an incremental approach so you can continue from the point where you left off. The command below dumps revisions 101 through 150 (inclusive).
svnadmin dump --incremental -r101:150 repo | sed '/^Text-content-md5/d' | svnadmin load repo2
Some common errors and solutions:
'Premature end of content data in dumpstream': this means the Content-length of some file does not match what is in the repository, so some data is lost for that file. We must skip it. Add an | svndumpfilter exclude path/to/file.jar stage, like this:
svnadmin dump --incremental -r101:150 repo | svndumpfilter exclude path/to/file.jar | sed '/^Text-content-md5/d' | svnadmin load repo2
Property errors: add --bypass-prop-validation to the svnadmin load command.
After populating your second repo, you would simply run svnserve -d -r repo2 and try git svn fetch again.
Good luck!