I'm working on a ClearCase repository. Some of the files in it have, for some reason, execution permissions while they shouldn't (they're text files basically). I'll also mention I'm a user without root privilege.
If I check them out, change their permissions (the usual way, not with ct permission -chmod, which doesn't work), then try to check them back in - I'm told that:
ct: Error: By default, won't create version with data identical to predecessor.
How do I override this default? Or am I going about this the wrong way?
Regarding cleartool checkin, the correct option would be:
-ide/ntical
(meaning cleartool checkin -ide or cleartool checkin -identical: the short and long forms of the identical option)
Checks in the element even if the predecessor version is identical to the checked-out version.
By default, the checkin operation is canceled in such cases.
But in your case, this should not be needed: you do not need to check in a protection change.
First try a cleartool protect command (not cleartool permission):
cleartool protect -chmod 644 aFile
(provided the restrictions allow the command to work, with or without ACLs)
Also check the umask (for instance 002) used in your view (if you are on Unix).
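To put the two options side by side, here is a minimal shell sketch (aFile is just an example name; the protect route is preferred, since the protection change itself does not require a new version):

# Preferred: fix the protection in place; no checkout or new version needed
cleartool protect -chmod 644 aFile

# The sequence from the question, forced through with -identical
cleartool checkout -nc aFile
chmod 644 aFile
cleartool checkin -nc -identical aFile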
We have a legacy COBOL application based on OpenVMS for which we do not have a clear idea of the configuration. In this context, by "configuration" I am talking about:
Which executable files comprise the application;
Which pristine source files correspond to which executable files.
It may seem odd that item 1 above is not known, but over time executables have "come and gone" (and many remain in use). Knowledge of which executables are no longer required has been lost, so nobody knows exactly which executable files constitute the application as it exists today. In practical terms, the team faithfully compiles all source code files and deploys the resulting executables, despite the fact that there are obviously programs that are no longer used.
It goes without saying that there is no formal configuration management process and the source code is not kept in a version control system. Since the application runs on OpenVMS, the corresponding Files-11-based file system keeps older versions of files (including source files) and this has long been the excuse for not putting the application source into a version control system (despite the reasons for using a VCS extending far beyond merely having a record of previous versions).
There are a number of ways in which the configuration can be determined, of course, but I'd like to start with a first "small step", that is: determine the set of executables that comprise the application. At this point I should mention that the executable components of the application are not limited to OpenVMS images, but also DCL command files. I would like to:
Log all invocations of images that reside in a certain directory or set of directories;
Log all invocations of command files that reside in a certain directory or set of directories.
If we run this logging on our production system over an extended period of time, say two months, we can get a pretty good idea of what the application comprises. Together with user consultation, we'll be able to confirm the need for the executable files that aren't being called.
I think I have an idea of how to do 1 above, although I'm not sure of the specifics: use SET/AUDIT. The second part, at this stage, I have no idea how to do.
So, the main criterion for this effort is that as little of the existing system as possible be affected while gathering the above information. Given the question marks around the configuration (and the complete lack of automated tests), changing anything is a nerve-wracking undertaking.
Using operating-system-level services like SET/AUDIT would allow one to get to know what's being run without the need to change source and/or recompile anything. So, my question is a multi-parter:
Is this the optimal way to do this on OpenVMS?
What would I need to do to restrict SET/AUDIT to only monitor images in a particular directory?
How would I log command file invocation without changing the .COM source files?
What should I expect in terms of performance degradation as a result of logging such information?
Regarding 2. and 3.:
I would try security auditing with ACLs. From a privileged account, something like ...
Make sure ACL auditing is enabled:
$ show audit
should show
System security audits currently enabled for:
...
ACL
...
If it doesn't, enable it with
$ set audit/audit/enable=acl
and then you may want to disable it when you are done with
$ set audit/audit/disable=acl
Set audit ACLs on all the files you want to monitor:
$ set sec/acl=(audit=security,access=success+execute) [.app]*.com
$ set sec/acl=(audit=security,access=success+execute) [.app]*.exe
and you may want to delete the ACLs when you are done with
$ set security/acl=(audit=security,access=success+execute)/delete [.app]*.com
$ set security/acl=(audit=security,access=success+execute)/delete [.app]*.exe
You can check what ACLs are set with:
$ show security [.app]*.*
Run your application ...
Get the results from the audit file
$ analyze/audit [vms$common.sysmgr]security.audit$journal/sel=access=execute/full/since=17:00/out=app.log
Check your report for your files:
$ pipe type app.log | search sys$pipe "File name","Access requested"
File name: _EMUVAX$DUA0:[USER.APP]NOW.COM;1
Access requested: READ,EXECUTE
Auditable event: Object access
File name: _EMUVAX$DUA0:[USER.APP]ECHO.EXE;1
Access requested: READ,EXECUTE
$
Sorry, I have no answer for 1. and 4.
It would help to know the OpenVMS version (e.g. 6.2, 7.3-2, 8.4 ...) and the architecture (VAX, Alpha, Itanium).
Recent OpenVMS versions have great SDA extensions:
http://h71000.www7.hp.com/doc/84final/6549/6549pro_ext1.html
or
http://de.openvms.org/Spring2009/05-SDA_EXTENSIONS.pdf
such as LNM to check the logical names used by a process, PCS for PC sampling of a process, FLT to check the faulting behavior of applications, RMS for RMS data structures, PERF (Itanium only) for performance tracing, and PROCIO for the reads and writes of all files opened by a process.
Post a
dir sys$share:*sda.exe
so that we know which SDA extensions are available to you.
You can always check what a process with a PID of 204020B4 does with
$ ana/sys
set proc/id=204020b4
sh process /channel
exam #pc
and repeat while the process moves on.
I have a project, hosted on launchpad, which contains a fairly user-specific configuration file.
Once the project is initially checked out, this .cfg file should obviously also be downloaded. However, further updates (via "bzr update") should ideally not alter this .cfg file, since the user will have made their own edits to it. Those edits would be overridden or merged (with potential conflicts) if I pushed an update containing my own .cfg file - I don't want this to happen!
What's the best practice to avoid this? I can't really "bzr ignore", as then any future users checking out via bzr would then not have the .cfg file.
I could, of course, replace my .cfg file with the "stock" one each time I do a commit, but this seems a bit clunky.
Or equivalently clunky, supply the .cfg file separately.
What I'm looking for is a "single-shot" download: the file comes down with the initial checkout, but is never updated afterwards.
Any advice?
This is a tricky problem, because version control systems in general are not engineered to have the fine-grained commit strategies needed for this approach. If you were operating in a controlled environment, you could use plugins or hooks to exclude certain files from commits etc., but that doesn't seem to be an option here. I'll add that bzr ignore won't help you, either, because it only prevents files from being added; it doesn't prevent commits or checkout of those files.
What you can do is generate the config file during build/setup/installation if it doesn't already exist. Something like:
#!/bin/sh
if [ ! -e configuration.cfg ]; then
cp etc/configuration.cfg.in configuration.cfg
fi
Here, you'd check in etc/configuration.cfg.in normally and run the above script at build/setup/installation (this could also be automated by a post_change_branch_tip hook in a controlled environment). You'd put the original in a different directory so that there's less of a risk of it getting edited by accident.
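A hedged sketch of how the pieces could fit together (assuming etc/configuration.cfg.in is the versioned template and configuration.cfg is only ever generated locally):

# Version the template, not the live config file
bzr add etc/configuration.cfg.in
# Keep the generated configuration.cfg out of "bzr add"/"bzr status" noise
bzr ignore configuration.cfg
bzr commit -m "Add config template; configuration.cfg is generated at build time"

Since configuration.cfg never becomes versioned in this scheme, bzr update will not touch the user's local copy.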
I am trying to fully purge a change from Gerrit and running into some problems.
Previously I tried to follow this guide to achieve my goal:
https://www.onyxpoint.com/deleting-abandoned-commits-from-gerrit-code-review/
I messed this up however, and somehow managed to do the following:
Purge the offending change-id from all the tables in the Gerrit gsql database
The change still appears in the web-interface, but if I click on it, it fires an error: "The page you requested was not found, or you do not have permission to view this page."
If I run 'gerrit query' for the change, it still shows up, replete with all information.
Where is the change information coming from if it is not in the DB? I also tried flushing all caches. Is it somewhere in the Lucene search index or something?
This is not super important, but it is really driving me nuts!
In my case, I didn't have to perform a re-index, but I did need an additional step (5):
Open the GSQL interface
$ gerrit-cli gsql
The following command will mark the change set as a draft change set in the Gerrit database.
gerrit> update changes set status='d' where change_id='64581';
Next, update the associated patch sets.
gerrit> update patch_sets set draft='Y' where change_id='64581';
gerrit> \q
Prior to making further changes, you need to make sure that the Gerrit caches have been flushed. If you don’t do this, you may end up with strange results when using the Web UI in relation to this change set.
$ gerrit-cli flush-caches --cache changes
Ensure that the administrator has "View Drafts" and "Delete Drafts" permission on the repo for refs/*
Finally, delete the patch set(s) that you had previously abandoned.
In this case, we’re going to assume that you have two patch sets to delete.
$ gerrit-cli review 64581,1 --delete
$ gerrit-cli review 64581,2 --delete
Deleting each patch set one by one can be a PITA; they can also be deleted in one go from the web UI.
Queries use Gerrit's secondary index (by default Lucene-based) so if you modify the database outside of Gerrit you have to reindex the data with the reindex command:
$ java -jar path/to/gerrit.war reindex -d path/to/gerrit-site-dir
This command should only be executed when Gerrit isn't running.
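For example, a hedged sketch of the full sequence (paths are placeholders; in a standard installation the gerrit.sh wrapper lives under the site directory's bin/):

$ path/to/gerrit-site-dir/bin/gerrit.sh stop
$ java -jar path/to/gerrit.war reindex -d path/to/gerrit-site-dir
$ path/to/gerrit-site-dir/bin/gerrit.sh start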
We have a need to refactor a code base. The thing is that this will be done by one person and it would be desirable to avoid having the rest of the development team sitting idle while this job takes place.
We therefore tried the following scenario to see if it is possible to work in parallel.
1. Created file test.txt in directory first in developer A's workspace.
2. Promoted this file.
3. Updated developer B's workspace, thereby getting file test.txt.
4. In A's workspace moved file test.txt to directory second.
5. Promoted this move.
6. In B's workspace edited file test.txt while it still resides in directory first (no update is made, thereby emulating that work is done while refactoring is taking place).
7. Tried to promote and got a message saying that file test.txt had been modified (correct, file has been moved).
8. Tried to merge but got an error message saying that AccuRev can't merge since the file is missing in directory second (where it has been moved).
9. Tried to update B's workspace but that is not allowed since there is a modified file that needs to be merged first.
We are now stuck in a catch 22 situation.
We did try to place a fake file in directory second, but it is not recognized since it does not belong to the workspace.
Has anyone out there tried something like this and gotten it to work?
It is of course possible to copy files, but if there is a better way - or if this is a known bug or limitation in the tool - we would be grateful to hear about it.
We will also contact AccuRev support, but I thought that I might be able to get some useful tips from the community.
Currently we are using AccuRev client 5.5.0.
Thanks for any suggestions on how to make the tool support this operation.
Referring to your steps 6 & 7: in AccuRev 5.5, after a file is edited and has a (modified) status, you first have to keep it before you can promote.
At step 8 you could try doing the merge from the Browse Versions view of the file. That way you can select any node to merge with, including the one that has been moved.
Step 9: An AccuRev update will not run successfully if one of the files to be updated is (modified). This is by design. You can keep the file so it has (kept)(member) status, then run the update.
David Howland
After contacting AccuRev support, the answer is that the only option available is to copy the file to some temp directory, revert the changes, update the workspace and copy the file into the new location in the workspace.
AccuRev will at least tell you which files you have to copy since they will be marked as modified.
I was able to experimentally verify David's remark about step 9 using AccuRev 5.5.
Let's assume that in the workspace of user A the file was moved and the move was promoted, while in the workspace of user B the file was modified and user B is about to promote his/her change.
Before the file is kept, user B can neither merge nor update. But after keeping the modified file, the update is possible: the file is first marked as overlap, then the merge succeeds in the new location. Basically, this avoids creating a copy of the file, reverting it and restoring it in the new location after an update, which can be quite cumbersome, as AccuRev does not easily reveal where the move goes.
If user B promotes the modification before user A promotes the move, all goes smoothly, i.e. on update the moved file appears as overlap, but easily merges into the moved file in the new location.
Similar results are obtained when the two users have workspaces connected to different streams and the overlap occurs on a common parent stream. An error can occur only if the file is not kept (i.e. only if the move is promoted before the change). Then a simple keep allows you to proceed as usual (update, merge, then promote).
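A minimal command-line sketch of that keep-then-update-then-merge sequence, assuming user B's workspace and the test.txt example from the question (paths and comments are illustrative and depend on your depot layout):

# In user B's workspace
accurev keep -c "local edit made while the file was being moved" first/test.txt
accurev update                     # now succeeds; the element shows (overlap)
accurev merge second/test.txt      # the merge resolves in the file's new location
accurev promote -c "merge local edit into moved file" second/test.txt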
The title may seem trivial, but this isn't as easy as it sounds. You can't just check the permissions on the file, because the file may not exist, and you may have the necessary permissions to create it and then write to it. But only if you have write permissions on the directory, and maybe execute permissions, and maybe permissions for all the parent directories. Or maybe not. I'm not sure.
So, given a filename, what are all the cases that I need to account for in order to correctly test whether I could open and write to a file with that filename? This isn't specific to any one programming language. I just want the logic. But examples in real programming languages are welcome.
Such a test wouldn't necessarily be very useful -- you're just setting yourself up for a race condition if the file becomes unwriteable for some reason between your check and the write attempt. (Some other process could change the permissions, move or delete the parent directory, use up the last free space on the device, etc...)

I'd just go ahead and attempt the write, and be diligent about checking for errors at each step (opening, each write attempt, closing) where an operation could conceivably fail.
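A minimal sh sketch of that approach (the target file name comes from the first argument; it assumes appending a test line to the file is acceptable):

#!/bin/sh
# Just attempt the write and check the result instead of predicting it.
file="$1"
if printf 'test data\n' >> "$file"; then
    echo "write to $file succeeded"
else
    # The shell has already printed the real reason on stderr
    # (permission denied, no such directory, read-only file system, ...).
    echo "write to $file failed" >&2
    exit 1
fi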
It depends on the user the process runs as and whether that user has permission to write to the directory. For example, Apache running as the www user may not be able to write to a directory owned by root with no permissions for group or other.
You can do it the trial-and-error way: try creating the file and see whether it succeeds; if it fails, catch the specific error code (no permission, directory full, and so on) and take corrective action.
You can also programmatically check whether the user has permission to write to the directory, whether the directory has free space, whether the file already exists, and so on, using the APIs the operating system and the language expose. This is a better approach in that it handles the cases up front rather than reacting to failures.
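As a rough illustration of the upfront-check approach, a shell sketch (assumptions: the candidate file name is the first argument; it covers the common permission cases but not quotas, ACLs, full disks, or the race conditions mentioned in the other answer):

#!/bin/sh
f="$1"
dir=$(dirname -- "$f")
if [ -e "$f" ]; then
    [ -w "$f" ] || { echo "$f exists but is not writable" >&2; exit 1; }
elif [ -d "$dir" ]; then
    # Creating a new file needs write + search (execute) permission on the directory
    [ -w "$dir" ] && [ -x "$dir" ] || { echo "cannot create files in $dir" >&2; exit 1; }
else
    echo "directory $dir does not exist" >&2
    exit 1
fi
echo "$f looks writable"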