Re-structuring CVS repositories and retaining history - repository

Currently, we have the following structure in our CVS repository:
Module1
|
|
+-----A
|
+-----B
We want to restructure this module so that the subdirectories A and B appear as top-level modules. What I could do is check Module1 out, pull A and B out, and then do a fresh cvs add for A and B individually, thus making them new CVS modules. But I am sure that if I do this, I am going to lose the history, and I would also have to remove all the internal CVS folders under A and B.
Q1: So is there a way to restructure this and retain the history?
What I am essentially trying to do is restrict access between A and B.
So -
Q2: Is there a way to set up security so that certain users can check out Module1/A only and not Module1/B ? and vice-versa?

Q1: So is there a way to restructure this and retain the history?
As you wrote in your comment, if you have sys privileges you can mv modules around the repository and keep the history of all the files below A and B, but in doing so you lose the record that /A used to be Module1/A and /B used to be Module1/B (not to mention that build scripts will probably break). Subversion resolves this for you by offering a move (or rename) command which remembers the move/rename history of a module.
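For reference, a minimal sketch of that server-side move (assuming direct shell access to the machine hosting $CVSROOT; the paths are illustrative):

# on the CVS server, as a user with write access to the repository
cd $CVSROOT
mv Module1/A A    # history lives in the per-file ,v archives, so it moves with the directory
mv Module1/B B
# existing working copies of Module1 need to be checked out again afterwards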
Q2: Is there a way to set up security so that certain users can check out Module1/A only and not Module1/B ? and vice-versa?
There sure is: use group permissions. From this page:
http://www.linux.ie/articles/tutorials/managingaccesswithcvs.php
Here's the snip I'm referring to, in case that page ever goes away:
To every module its group

We have seen earlier how creating a cvsusers group helped with the coordination of the work of several developers. We can extend this approach to permit directory-level check-in restrictions.
In our example, let's say that the module "cujo" is to be r/w for jack and john, and the module "carrie" is r/w for john and jill. We will create two groups, g_cujo and g_carrie, and add the appropriate users to each - in /etc/group we add

g_cujo:x:3200:john,jack
g_carrie:x:3201:john,jill
Now in the repository (as root), run

find $CVSROOT/cujo -exec chgrp g_cujo {} \;
find $CVSROOT/carrie -exec chgrp g_carrie {} \;

ensuring, as before, that all directories have the gid bit set. Now if we have a look in the repository...

john@bolsh:~/cvs$ ls -l
total 16
drwxrwsr-x 3 john cvsadmin 4096 Dec 28 19:42 CVSROOT
drwxrwsr-x 2 john g_carrie 4096 Dec 28 19:35 carrie
drwxrwsr-x 2 john g_cujo 4096 Dec 28 19:40 cujo
and if Jack tries to commit a change to carrie...

jack@bolsh:~/carrie$ cvs update
cvs server: Updating .
M test
jack@bolsh:~/carrie$ cvs commit -m "test"
cvs commit: Examining .
Checking in test;
/home/john/cvs/carrie/test,v <-- test
new revision: 1.2; previous revision: 1.1
cvs [server aborted]: could not open lock file
`/home/john/cvs/carrie/,test,': Permission denied
jack@bolsh:~/carrie$
But in cujo, there is no problem.

jack@bolsh:~/cujo$ cvs update
cvs server: Updating .
M test
jack@bolsh:~/cujo$ cvs commit -m "Updating test"
cvs commit: Examining .
Checking in test;
/home/john/cvs/cujo/test,v <-- test
new revision: 1.2; previous revision: 1.1
done
jack@bolsh:~/cujo$
The procedure for adding a user is now a little more complicated than it might be. To create a new CVS user, we have to create a system user, add them to the groups corresponding to the modules they may write to, and (if you're using the pserver method) generate a password for them and add an entry to CVSROOT/passwd.

To add a project, we need to create a group, import the sources, change the group on all the files in the repository, make sure the set-gid-on-execution bit is set on all directories inside the module, and add the relevant users to the group.
There is undoubtedly more administration needed to do all this than when we jab people with a pointy stick. In that method, we never have to add a system user or a group or change the groups on directories - all that is taken care of once we set up the repository. This means that an unprivileged user can be the CVS admin without ever having root privileges on the machine.
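If you still need to set that gid bit on the repository directories, a minimal sketch (assuming a standard find and chmod):

find $CVSROOT/cujo -type d -exec chmod g+s {} \;
find $CVSROOT/carrie -type d -exec chmod g+s {} \;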

Related

How to fetch a launch plan using flyte api's without specifying a sha?

I would like to use the Flyte APIs to fetch the latest launch plan for a deployment environment without specifying the SHA.
Users are encouraged to specify the SHA when referencing launch plans or any other Flyte entity. However, there is one exception: Flyte has the notion of an active launch plan. For a given project/domain/name combination, a launch plan can have any number of versions; all four fields combined identify one specific launch plan, and those four fields are the primary key. At most one of those launch plans can also be what we call 'active'.
To see which ones are active, you can use the list-active-launch-plans command in flyte-cli:
(flyte) captain@captain-mbp151:~ [k8s: flytemain] $ flyte-cli -p skunkworks -d production list-active-launch-plans -l 200 | grep TestFluidDynamics
NONE 248935c0f189c9286f0fe13d120645ddf003f339 lp:skunkworks:production:TestFluidDynamics:248935c0f189c9286f0fe13d120645ddf003f339
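If you need that URN in a script, a hedged sketch of pulling it out of the listing above (this assumes the three-column output shown; TestFluidDynamics is just the example name):

URN=$(flyte-cli -p skunkworks -d production list-active-launch-plans -l 200 \
      | grep TestFluidDynamics | awk '{print $3}')   # third column is the launch plan URN
echo "$URN"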
However, please be aware that if a launch plan is active, and has a schedule, that schedule will run. There is no way to make a launch plan "active" but disable its schedule (if it has one).
If you would like to set a launch plan as active, you can do so with the update-launch-plan command.
First find the version you want (results truncated):
(flyte) captain@captain-mbp151:~ [k8s: flytemain] $ flyte-cli -p skunkworks -d staging list-launch-plan-versions -n TestFluidDynamics
Using default config file at /Users/captain/.flyte/config
Welcome to Flyte CLI! Version: 0.7.0b2
Launch Plan Versions Found for skunkworks:staging:TestFluidDynamics
Version Urn Schedule Schedule State
d4cf71c20ce987a4899545ae01286f42297a8f3b lp:skunkworks:staging:TestFluidDynamics:d4cf71c20ce987a4899545ae01286f42297a8f3b
9d3e8d156f7ba0c9ac338b5d09949e88eed1f6c2 lp:skunkworks:staging:TestFluidDynamics:9d3e8d156f7ba0c9ac338b5d09949e88eed1f6c2
248935c0f189c928b6ffe13d120645ddf003f339 lp:skunkworks:staging:TestFluidDynamics:248935c0f189c928b6ffe13d120645ddf003f339
...
Then
flyte-cli update-launch-plan --state active -u lp:skunkworks:staging:TestFluidDynamics:d4cf71c20ce987a4899545ae01286f42297a8f3b

How to get information on latest successful pod deployment in OpenShift 3.6

I am currently working on a CI/CD script to deploy a complex environment into another environment. We have multiple technologies involved, and I currently want to optimize this script because it is taking too much time to fetch information on each environment.
In the OpenShift 3.6 section, I need to get the last successful deployment for each application in a specific project. I am trying to find a quick way to do so, but so far I have only found this solution:
oc rollout history dc -n <Project_name>
This will give me the following output
deploymentconfigs "<Application_name>"
REVISION STATUS CAUSE
1 Complete config change
2 Complete config change
3 Failed manual change
4 Running config change
deploymentconfigs "<Application_name2>"
REVISION STATUS CAUSE
18 Complete config change
19 Complete config change
20 Complete manual change
21 Failed config change
....
I then take this output and parse each line to find the latest revision that has the status "Complete" (a sketch of this parsing step follows the list below).
In the above example, I would get this list:
<Application_name> : 2
<Application_name2> : 20
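Here is that parsing step sketched with awk (this assumes the exact output layout shown above; the awk program is my own, not something oc provides):

oc rollout history dc -n <Project_name> | awk '
  /^deploymentconfigs/ { name = $2 }            # remember which application this block belongs to
  $2 == "Complete"     { latest[name] = $1 }    # later Complete revisions overwrite earlier ones
  END { for (n in latest) print n, ":", latest[n] }'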
Then for each application and each revision I do:
oc rollout history dc/<Application_name> -n <Project_name> --revision=<Latest_Revision>
In the above example, the Latest_Revision for Application_name is 2, which is the latest complete revision (neither building nor failed).
This gives me the output with the information I need, namely the version of the ear and the version of the configuration that were used to build the image for this successful deployment.
But since I have multiple applications, this process can take up to 2 minutes per environment.
Would anybody have a better way of fetching the information I require?
Unless I am mistaken, it looks like there is no one-liner that can get this information for the currently running and accessible applications.
Thanks
Assuming that the currently active deployment is the latest successful one, you may try the following:
oc get dc -a --no-headers | awk '{print "oc rollout history dc "$1" --revision="$2}' | . /dev/stdin
It gets the list of deployments, feeds it to awk to extract the name ($1) and revision ($2), then builds your command to extract the details, and finally sends it to standard input to execute. It may be frowned upon for not using xargs or the like, but I found it easier for debugging (just drop the last part and see the commands printed out).
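If you prefer xargs, a roughly equivalent sketch (same assumption that the currently active deployment is the latest successful one):

oc get dc -a --no-headers | awk '{print $1, $2}' \
  | xargs -n2 sh -c 'oc rollout history dc "$0" --revision="$1"'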
UPDATE:
On second thoughts, you might actually like this one better:
oc get dc -a -o jsonpath='{range .items[*]}{.metadata.name}{"\n\t"}{.spec.template.spec.containers[0].env}{"\n\t"}{.spec.template.spec.containers[0].image}{"\n-------\n"}{end}'
The example output:
daily-checks
[map[name:SQL_QUERIES_DIR value:daily-checks/]]
docker-registry.default.svc:5000/ptrk-testing/daily-checks#sha256:b299434622b5f9e9958ae753b7211f1928318e57848e992bbf33a6e9ee0f6d94
-------
jboss-webserver31-tomcat
registry.access.redhat.com/jboss-webserver-3/webserver31-tomcat7-openshift#sha256:b5fac47d43939b82ce1e7ef864a7c2ee79db7920df5764b631f2783c4b73f044
-------
jtask
172.30.31.183:5000/ptrk-testing/app-txeq:build
-------
lifebicycle
docker-registry.default.svc:5000/ptrk-testing/lifebicycle#sha256:a93cfaf9efd9b806b0d4d3f0c087b369a9963ea05404c2c7445cc01f07344a35
You get the idea, with expressions like .spec.template.spec.containers[0].env you can reach for specific variables, labels, etc. Unfortunately the jsonpath output is not available with oc rollout history.
UPDATE 2:
You could also use post-deployment hooks to collect the data, if you can set up a listener for the hooks. Hopefully the information you need is inherited by the pods. More info here: https://docs.openshift.com/container-platform/3.10/dev_guide/deployments/deployment_strategies.html#lifecycle-hooks

How to prevent ._ (dot underscore) files?

I am using TextMate 2 to edit a Rails project on a remote Linux server via sshfs.
When I save a file (e.g. README.rdoc), another file is created alongside it (i.e. ._README.rdoc):
-rw-rw-r-- 1 4096 Feb 17 17:19 ._README.rdoc
-rw-rw-r-- 1 486 Feb 17 17:19 README.rdoc
The TextMate doc mentions how to disable extended attributes:
defaults write com.macromates.textmate OakDocumentDisableFSMetaData 1
but ._ files are still created after the above defaults write.
Is there a way to disable creation of ._ files when using sshfs + Textmate 2?
To disable extended attributes in TextMate 2, use:
defaults write com.macromates.TextMate.preview volumeSettings '{ "/Users/ohho/Mount/" = { extendedAttributes = 0; }; }'
where /Users/ohho/Mount/ is the parent folder of all my sshfs-mounted folders.
I tried the following command and it is NOT working:
defaults write com.macromates.textmate OakDocumentDisableFSMetaData 1
Instead, I took my reference from
https://github.com/textmate/textmate/wiki/Hidden-Settings
and it is working fine now.
TextMate uses extended attributes to store caret position, etc. On file systems which don’t support extended attributes (most network file systems), OS X will create an auxiliary file with a dot-underscore prefix (e.g. ._filename).

If you don’t want these files, you can disable the use of extended attributes. This is presently controlled with the volumeSettings key. Its value is an associative array keyed by path prefix, where each value is another associative array with settings for that path. (Presently, only extendedAttributes is supported.)
So, if we wanted to disable extended attributes for files under /net/:
defaults write com.macromates.TextMate.preview volumeSettings '{ "/net/" = { extendedAttributes = 0; }; }'

Find UID from EUID

We have an AIX 5.3 host that we log in to, and when needed we use the pbrun tool to become root. Now the question is: how do I find, from the command line, which user I logged in as before becoming this privileged/root user? In other words, how do I find the UID behind my current EUID? I tried whoami and who am i; both give root as output.
"who am i" is coming from utmp. If utmp shows you as root then your pbrun tool must be changing it from what it was when you first logged in.
You could do:
ps l $$
which prints out a line with the PID and PPID. Take the PPID and do that again:
ps l <PPID>
The UID column is your numeric user id. If the PPID shows as 1, then pbrun did an exec rather than a fork/exec (which implies that it is a function or alias within your shell). In that case, you could revert to "last", which will show who logged in to which tty at what time.
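The same two ps calls squeezed into one line, as a sketch (assumes a POSIX-style ps; not verified on AIX 5.3):

ps -o ruser= -p $(ps -o ppid= -p $$)    # real user of the parent process, i.e. whoever invoked pbrun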
======
Another idea. You can get the terminal the program is executing on via ps. This is called the controlling terminal. You can also get it via the "tty" command:
tty
/dev/pts/18
Now, feed that to "last" but remove the leading /dev/ part and take the first hit:
last pts/18 | head -1
myname pts/18 myhost.mydomain.com Nov 14 10:22 still logged in.
That is the last person to log into that particular terminal. Will that work?
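Put together as a single line, as a sketch (assuming tty prints something like /dev/pts/18 as above):

last "$(tty | sed 's,^/dev/,,')" | head -1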

Import postgres database without roles

I have a database that was exported with pg_dump, but now when I'm trying to import it again with:
psql -d databasename < mydump.sql
It fails trying to grant roles to people that don't exist (the error says 'Role "xxx" does not exist').
Is there a way to import and set all the roles automatically to my user?
The default behavior of the import is that it replaces all roles it does not know with the role you are doing the import with. So depending on what you need the database for, you might just be fine with importing it and ignoring the error messages.
Quoting from http://www.postgresql.org/docs/9.2/static/backup-dump.html#BACKUP-DUMP-RESTORE
Before restoring an SQL dump, all the users who own objects or were granted permissions on objects in the dumped database must already exist. If they do not, the restore will fail to recreate the objects with the original ownership and/or permissions. (Sometimes this is what you want, but usually it is not.)
The answer that you might be looking for is adding the --no-owner flag to the pg_restore command. Unlike the currently accepted answer, this should create every object with the current user even if the roles in the dump don't exist in the database.
So no element will get skipped by pg_restore, but if some imported elements were owned by different users, all of them will now be owned by this single user, as far as I can tell.
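A minimal sketch of that approach (pg_restore only reads archive-format dumps, e.g. custom format; the file name here is illustrative):

# objects are created and owned by the user you connect as
pg_restore --no-owner --no-privileges -d databasename mydump.backup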
With pg_restore you can use the --role=rolename option to force a role name to be used to perform the restore. But the dump must be in a non-plain-text format. For example, you can dump with:
pg_dump -F c -Z 9 -f my_file.backup my_database_name
and then you can restore it with:
pg_restore -d my_database_name --role=my_role_name my_file.backup
for more info:
http://www.postgresql.org/docs/9.2/static/app-pgrestore.html
Yes, you can dump all the "Global" objects from your source DB with pg_dumpall's -g option:
pg_dumpall -g > globals.sql
Then run globals.sql against your target DB before importing.
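A short sketch of that order of operations (running globals.sql through the default postgres database is my assumption; roles are cluster-wide, so any database on the target server works):

pg_dumpall -g > globals.sql           # on the source: roles, tablespaces, etc.
psql -d postgres -f globals.sql       # on the target: create the global objects first
psql -d databasename < mydump.sql     # then import the actual dump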
I used the following:
pg_dump --no-privileges --no-owner $OLD_DB_URL | psql $NEW_DB_URL
From
https://www.postgresql.org/docs/12/app-pgdump.html
-O
--no-owner

Do not output commands to set ownership of objects to match the original database. By default, pg_dump issues ALTER OWNER or SET SESSION AUTHORIZATION statements to set ownership of created database objects. These statements will fail when the script is run unless it is started by a superuser (or the same user that owns all of the objects in the script). To make a script that can be restored by any user, but will give that user ownership of all the objects, specify -O.

This option is only meaningful for the plain-text format. For the archive formats, you can specify the option when you call pg_restore.

-x
--no-privileges
--no-acl

Prevent dumping of access privileges (grant/revoke commands).
In newer versions of pg_restore it will complain about text file imports.
However, you can just remove the offending lines with awk. You can pipe the result to a new file to make sure it didn't break anything, or pipe it directly into psql like this:
cat my-import-file.sql | awk '!/old-role/' | psql -U target_owner -d target_db_name -1 -v ON_ERROR_STOP=1
replace my-import-file.sql with your import file
replace target_owner with your username
replace target_db_name with the name of the database to create
replace old-role with the role it is complaining about
If you have multiple roles:
cat my-import-file.sql | awk '!/role1|role2|role3/' | psql -U target_owner -d target_db_name -1 -v ON_ERROR_STOP=1
This will remove all lines that mention that role. Everything will be created owned by the user logged in to psql.
If you are trying to import a backup using pgAdmin, enable the corresponding options (the ones that skip owner and privileges) when you are restoring your DB.
I hope it helps you.
Cheers!
Well, you can just create a new role with the same name as the one that is missing, and then import the dump with no errors.
The error says 'Role "xxx" does not exist' - so create it :)