My code contains very sensitive information such as passwords, Amazon S3 keys, etc. that I don't want to be sent to git at all.
I'd like those specific fields to be replaced with "SECRET" or something like that.
Also, would a private git repo solve this?
Since git tracks content and not just files, replacing these lines with some other text would be interpreted by git as a change to the code; the next commit would just record the replacement, and the original sensitive info would remain in the history.
What I usually do in these cases is modularize my code so this info gets isolated in a single file, and then I add a line with the file name to the .gitignore file.
The .gitignore file is a collection of patterns, one per line, of file names to be ignored by git while tracking changes in your repo.
For example, if I'm writing a web system in PHP, I create a file that only stores the credentials for connecting to the database (frameworks tend to do this too, so you can guess it's a good practice...). I write this file once with the test server credentials (which my collaborators are supposed to know) or with some dummy credentials, commit it and push it to my remote, and then I add the file name to my .gitignore.
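As a minimal sketch of that workflow (all file names and credentials here are hypothetical), it could look like this. Note that .gitignore alone does not stop git from tracking a file that has already been committed, so you also have to tell git to ignore local changes to it:

cat > config.php <<'EOF'
<?php
$db_user = 'testuser';  // dummy/test credentials only
$db_pass = 'testpass';
EOF
git add config.php
git commit -m "add config template with dummy credentials"

echo "config.php" >> .gitignore
# .gitignore does not affect already-tracked files, so additionally:
git update-index --assume-unchanged config.php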
On the other hand, you have the command git add -p, which lets you interactively skip lines, but that would result in a file without the mentioned lines in your remote repo, and you would have to manually skip the lines every time you add the file...
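For illustration (the file name is hypothetical), the interaction looks roughly like this:

git add -p config.php
# git shows each hunk in turn and prompts:
#   Stage this hunk [y,n,q,a,d,s,e,?]?
# Answer "n" for the hunk containing the sensitive lines to leave it unstaged.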
A good reference for git is Pro Git, highly recommended if you are starting with git... Also, GitHub's help center is a very good place to look.
I hope this helps! Good luck!
Is it possible to create branches in Perforce in a similar style to Git, i.e. without creating a new folder?
I would prefer for my client to manage the branches transparently whilst I work against a single copy of the directory tree on disk.
It seems awfully wasteful for the client to create an exact copy of the entire tree if you're only modifying say a couple of files. I much prefer Git's workflow in this regard.
If it's not possible using straight Perforce I'm happy to move to GitSwarm.
For info I'm running Perforce version 2015.1/1233444.
Possible yes, but with the centralized version of the system it involves a bit of 'magic'. Basically, the branch part doesn't need to involve the client at all anymore. Take a peek at p4 populate. That'll create another folder on the server, but won't do anything locally. Then you can edit your client workspace to map the branched files instead of the trunk files, and it'll just re-sync over the top of the files on your disk.
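A hedged sketch of that flow (the depot paths and client name are hypothetical):

# Branch on the server only; nothing is copied to the client:
p4 populate //depot/main/... //depot/dev/...
# Then edit the client spec (p4 client) so the same workspace root maps
# the branch instead of the trunk, e.g. change
#   //depot/main/...  //my-client/project/...
# to
#   //depot/dev/...   //my-client/project/...
# and re-sync; only files that actually differ are transferred:
p4 sync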
Now, having said that, if you wanted to take a look at our DVCS way of working, then you can just do "p4 switch -c <branch>" and it'll create a new branch locally, switch your workspace over to it (shelving any open current work in the process) and away you go.
My original answer was deleted because I thought a link was a better idea than repeating content. My mistake.
At any rate, I believe the DVCS features in Perforce Helix supply exactly the sort of thing you're after. In a blog post I wrote on the subject (link here for reference), I explained how to create a new in-place branch with a single command:
p4 switch -c newBranchName
That will create a new branch with the name "newBranchName" and save any existing work in progress by default. To discover on which branch you're working you can use the switch command with the list argument as follows:
p4 switch -l
That would show you output like this, the asterisk showing that you're now working on the newBranchName branch.
newBranchName *
main
You can switch back and forth as you like, changing contexts as needed as often as you like. Your work in progress will continue to be saved on each branch in progress. When you're ready to merge your work back to main and push it back to the server, you can use the following sequence of commands:
p4 switch main
p4 merge --from newBranchName
p4 resolve -as
The first command switches back to the main branch, the second merges your work from the newly created branch into main, and the third resolves any potential conflicts automatically. If there are any conflicts that can't automatically be merged, then you can use the usual commands to walk through the resolution process.
Alternatively, if you prefer to stick with Git, you can use it directly with our Helix Versioning Engine through our Git Fusion technology, or use Git directly with our new GitSwarm technology. That is a pretty amazing option (in my opinion), as it makes it possible to mirror content automatically and bidirectionally between GitSwarm and the back-end server. That way you get all the features of Git with GitSwarm (which itself is based on GitLab) and all the goodies from the rest of Helix.
Hope that helps!
If you use streams (Perforce's "managed" version of a branch, as opposed to doing completely ad hoc inter-file branching with arbitrary paths), it's pretty simple. As P4Gabe said, "switch -c" is a one-shot option on a local server.
On a shared server it's only a little more complicated because you have to do the "populate" explicitly (this is to keep naive users from accidentally branching lots of files lots of times on a shared server), but it's still only a few steps and it's something that you as an advanced user could script easily:
p4 stream -P <current stream> -t development <new stream name>
p4 populate -r -S <new stream name>
p4 switch <new stream name>
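For example, with hypothetical stream names, the three steps could be scripted as:

#!/bin/sh
# Sketch: branch a development stream off main and switch the
# workspace to it, without making a full local copy first.
PARENT=//streams/main
CHILD=//streams/feature-x
p4 stream -P "$PARENT" -t development "$CHILD"   # define the new stream
p4 populate -r -S "$CHILD"                       # branch the files on the server
p4 switch feature-x                              # point the workspace at it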
The equivalent is possible using ad hoc ("classic") branches as well if you have a good understanding of how client views work -- use populate to create the new branch, modify your client view to map the new branch into the namespace currently occupied by the old branch, and sync.
This blog post on what exactly "p4 switch" does might help if you're trying to engineer your own solution that's similar-to-but-not-quite the "switch" command: https://www.perforce.com/blog/150428/p4-switch-switching-it
I have a project, hosted on launchpad, which contains a fairly user-specific configuration file.
Once the project is initially checked out, obviously this .cfg file should also be downloaded. However, further updates (via "bzr update") would ideally not alter this .cfg file, as the user will have made their own edits to it. Those edits would be overridden or merged (with potential conflicts) should I push an update that includes my own .cfg file, and I don't want that to happen!
What's the best practice to avoid this? I can't really "bzr ignore", as then any future users checking out via bzr would then not have the .cfg file.
I could, of course, replace my .cfg file with the "stock" one each time I do a commit, but this seems a bit clunky.
Or equivalently clunky, supply the .cfg file separately.
What I'm looking for is a "single-shot" download: fetched on the initial checkout but never updated subsequently.
Any advice?
This is a tricky problem, because version control systems in general are not engineered to have the fine-grained commit strategies needed for this approach. If you were operating in a controlled environment, you could use plugins or hooks to exclude certain files from commits etc., but that doesn't seem to be an option here. I'll add that bzr ignore won't help you, either, because it only prevents files from being added; it doesn't prevent commits or checkout of those files.
What you can do is generate the config file during build/setup/installation if it doesn't already exist. Something like:
#!/bin/sh
# Create the user-editable config from the checked-in template,
# but only if the user doesn't already have their own copy.
if [ ! -e configuration.cfg ]; then
    cp etc/configuration.cfg.in configuration.cfg
fi
Here, you'd check in etc/configuration.cfg.in normally and run the above script at build/setup/installation (this could also be automated by a post_change_branch_tip hook in a controlled environment). You'd put the original in a different directory so that there's less of a risk of it getting edited by accident.
We need to refactor a code base. The thing is that this will be done by one person, and it would be desirable to avoid having the rest of the development team sit idle while this job takes place.
We therefore tried the following scenario to see if it is possible to work in parallel.
1. Created file test.txt in directory first in developer A's workspace.
2. Promoted this file.
3. Updated developer B's workspace, thereby getting file test.txt.
4. In A's workspace, moved file test.txt to directory second.
5. Promoted this move.
6. In B's workspace, edited file test.txt while it still resides in directory first (no update is made, thereby emulating work being done while the refactoring takes place).
7. Tried to promote and got a message saying that file test.txt had been modified (correct, the file has been moved).
8. Tried to merge, but got an error message saying that AccuRev can't merge since the file is missing in directory second (where it has been moved).
9. Tried to update B's workspace, but that is not allowed since there is a modified file that needs to be merged first.
We are now stuck in a catch-22 situation.
We did try to place a fake file in directory second, but it is not recognized, since that file does not belong to the workspace.
Has anyone out there tried something like this and gotten it to work?
It is of course possible to copy files, but if there is a better way, we would be grateful to hear about it, or to learn whether this is a known bug or limitation in the tool.
We will also contact AccuRev support, but I thought I might be able to get some useful tips from the community.
Currently we are using AccuRev client 5.5.0.
Thanks for any suggestions on how to make the tool support this operation.
Referring to your steps 6 & 7: in AccuRev 5.5, after a file is edited and has (modified) status, you first have to keep it before you can promote.
At step 8 you could try doing the merge from the Browse Versions view of the file. That way you can select any node to merge with, including the one that has been moved.
Step 9: an AccuRev update will not run successfully if one of the files to be updated is (modified). This is by design. You can keep the file so it has (kept)(member) status and then run the update.
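In CLI terms, a hedged sketch of that sequence for user B (paths and commit comments are hypothetical) could be:

accurev keep -c "local edit before merge" first/test.txt   # file becomes (kept)(member)
accurev update                                             # now allowed; the file shows (overlap)
# The merge then succeeds in the file's new location (directory second):
accurev merge second/test.txt
accurev promote -c "merged edit into moved file" second/test.txt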
David Howland
After contact with AccuRev support the answer is that the only option available is to copy the file to some temp directory, revert the changes, update the workspace and copy the file into the new location in the workspace.
AccuRev will at least tell you which files you have to copy since they will be marked as modified.
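As a shell sketch of that workaround (paths are hypothetical):

cp first/test.txt /tmp/test.txt.bak       # save the modified file
# revert the local change (e.g. "Revert to Backed"), then:
accurev update                            # picks up the move to directory second
cp /tmp/test.txt.bak second/test.txt      # copy the edit into the new location
accurev keep -c "re-applied edit after move" second/test.txt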
I was able to experimentally verify David's remark on step 9 using AccuRev 5.5.
Let's assume that in the workspace of user A the file was moved and the move was promoted, while in the workspace of user B the file was modified and user B is about to promote his/her change.
Before the file is kept, it is not possible for user B either to merge or to update. But after keeping the modified file, the update is possible: the file is first marked as (overlap), and the merge then succeeds in the new location. Basically, this avoids creating a copy of the file, reverting it and restoring it in the new location after an update, which can be quite cumbersome, as AccuRev does not easily reveal where the move goes.
If user B promotes the modification before user A promotes the move, all goes smoothly: on update the file appears as (overlap) but merges easily into the new location.
Similar results are obtained when the two users have workspaces connected to different streams and the overlap occurs on a common parent stream. An error can occur only if the file is unkept (i.e. only if the move is promoted before the change); then a simple keep allows you to proceed as usual (update, merge, then promote).
I have a Drupal site, and I am storing the codebase in a git repository. This seems to be working out well, but I'm also making changes to the database. I'm considering doing periodic dumps of the database and committing them to git. I had a few questions about this.
1. If I overwrite the file, will git think it is a brand new file, or will it recognize that it is an altered version of the same file?
2. Will this potentially make my repo huge? (The database is 16 MB.)
3. Can I zip this file, or will that mess git up? (The zipped version is only 3 MB.)
4. Any other suggestions?
If you have enough space, a non-compressed dump in source control is pretty handy because you can compare using a diff program what rows were added/modified/deleted.
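For example (a hedged sketch assuming MySQL; the database name is hypothetical), dumping one INSERT per row keeps line-based diffs meaningful:

mysqldump --skip-extended-insert drupal_db > db-dump.sql
git add db-dump.sql
git commit -m "periodic database snapshot"
git diff HEAD~1 -- db-dump.sql   # shows exactly which rows changed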
Another solution is to use the Features module, which is supposed to capture Drupal config in code. It stores this captured data as a feature module, which you can put into version control.
For my database applications, I store scripts of DDL statements (like CREATE TABLE) in some sort of version control system. These scripts sometimes include static "seed" data as well. All the version control systems I use are good at recognizing differences in these files, and they are much smaller than the full database with data.
For the dynamically-generated data, I store backups (e.g. from mysqldump) in an appropriate location (depending on the importance of the data, that may include offsite backups).
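As a hedged sketch of that split (file and database names are hypothetical):

# Versioned: small, diff-friendly DDL and seed scripts
git add sql/schema.sql sql/seed_data.sql
git commit -m "schema change: add index on node.created"
# Not versioned: full dumps of dynamic data go to (offsite) backups instead
mysqldump drupal_db | gzip > /backups/drupal-$(date +%F).sql.gz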
1) It's all text, so git will see it as it would any other file and recognize it as an altered version of the same file.
2) No. The first dump adds 16 MB to the repo (or less, due to git's own compression); after that, git effectively stores just the changes, so the repo grows by roughly the size of each dump's differences.
3) Don't zip it, or git won't be able to see the differences; git does its own compression anyway.
A colleague has imported a CVS repository into a pre-existing SVN repository using a cvs2svn dumpfile (like "svnadmin load --parent-dir /path < dumpfile") , which I originally created from the CVS repo.
Now that I'm trying to check out and build from SVN, I've noticed that some files seem to be missing in the SVN checkout that were present when I checked out the same branch from CVS, although the majority are present. They are mostly but not exclusively binary files (jars and gifs etc.), and I think (though I haven't checked exhaustively) that they are also files that have not been modified on the branch that I'm trying to check out. I should also point out that they don't show up using cvsweb (I would provide a link to the cvsweb documentation but I have no way of knowing its version etc.), although they do appear when doing a standard checkout of the branch.
If anyone has any idea what's wrong here, or where to start looking to address this, I'd be very grateful! New to SVN so not sure if this is normal! Also, I know I could fairly easily "fix" it by copying over the files but I'd ideally like to keep their revision history so a more complete solution would be preferable. Thanks!
That sounds as if the configuration used during the conversion was wrong. Maybe a property exists in svn that represents the CVS revision information; if not, you're more or less lost. A good suggestion is to test such migrations and check the contents of the resulting SVN repository, and of course make backups. By the way, were these branches removed in CVS beforehand?
This is not normal; such files should be handled just fine by cvs2svn. Your best bet is to create a reproducible test case (instructions for doing so are in the cvs2svn FAQ) and report the problem to the cvs2svn users' mailing list.