I have a state which creates a user on a minion.
This works fine.
The user gets created on the first run; on subsequent runs (days or years later) it won't be created again.
I want to do some action on a different host after the user was created. For performance reasons I only want to execute this action once, on the first run when the user gets created.
So far I have been looking for some sort of trigger that fires when a state changes. Other solutions are welcome.
Use case
After creating the user on the minion, I need to insert the minion's SSH host key into a .ssh/known_hosts file to make passwordless logins work.
To tackle the use case and not the question I suggest the following:
use Salt Mine to collect the public keys of your minions
put the ssh host-keys of the minions into /etc/ssh/ssh_known_hosts
You can use the openssh formula as a starting point. It contains the scripts for Salt Mine and also shows how to create an ssh_known_hosts file. It adds a lot of magic with dig to discover host names and IP addresses, which might be oversized for your environment.
Once it is all set up it should work as follows:
add a user: the host's ssh_known_hosts file will be used; nothing else needs to be done
add a minion: update the mine, then run the provisioning on all minions to update the host's ssh_known_hosts file.
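A minimal sketch of the Mine side, assuming the minion IDs resolve as hostnames. The function alias and the template path are made up for illustration, not taken from the formula:

```yaml
# pillar (or minion config): let every minion publish its host keys to the Mine
mine_functions:
  public_ssh_host_keys:              # alias; any name works
    mine_function: cmd.run
    cmd: cat /etc/ssh/ssh_host_*_key.pub

# state: build the global known_hosts file from the Mine data
ssh_known_hosts:
  file.managed:
    - name: /etc/ssh/ssh_known_hosts
    - source: salt://ssh/ssh_known_hosts.jinja   # hypothetical template
    - template: jinja

# ssh_known_hosts.jinja would iterate over the Mine data, e.g.:
# {% for host, keys in salt['mine.get']('*', 'public_ssh_host_keys') | dictsort %}
# {{ host }} {{ keys }}
# {% endfor %}
```

After changing the pillar, run `salt '*' mine.update` once so the keys are collected before the state is applied.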
Related
I am using the sshd_config option PermitUserEnvironment
#/etc/ssh/sshd_config
PermitUserEnvironment yes
to set something like "REALUSER" on every key in the /root/.ssh/authorized_keys file.
#/root/.ssh/authorized_keys
environment="REALUSER=custom_value" ssh-rsa AAAAB3....
But I have trouble accessing the value in the script triggered by pam_exec in /etc/pam.d/sshd.
My best guess is that the PAM script is executed before the environment variables are set. So what are my options?
I tried pam_env
#/etc/security/pam_env.conf
PAM_REALUSER DEFAULT="unknown" OVERRIDE=${REALUSER}
this is the custom part of my pam.d/ssh file
#/etc/pam.d/sshd
session required pam_env.so readenv=1
session optional pam_exec.so seteuid /usr/local/bin/scripts/my_script
Even variables like SSH_CONNECTION seem not to be available, which feels odd to me. The information must surely be available at the time the script executes, but the variable is not set, or I am doing it wrong.
I used to (successfully) trigger the script from /etc/profile, so I am confident that the issue is not in my custom script.
But I have trouble accessing the value in the script triggered by pam_exec in /etc/pam.d/sshd.
My best guess is that the PAM script is executed before the environment variables are set. So what are my options?
Yes, you are right. The environment variables from authorized_keys are set up in the do_setup_env() function, which is called after the PAM session modules run.
If you want to access these variables, I recommend setting up a ForceCommand or a special shell for the user: a wrapper around the normal shell that runs after your variables have been evaluated.
But note that doing this for root, who is unrestricted, will allow your users to do whatever they want (even changing the keys and your environment variables), regardless of your setup.
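A sketch of such a wrapper, assuming it is wired in via ForceCommand in sshd_config. The script path and the logger call are illustrative; by the time the forced command runs, do_setup_env() has executed, so the environment= values from authorized_keys are visible:

```shell
#!/bin/sh
# /usr/local/bin/realuser-wrapper (hypothetical path), enabled with:
#   ForceCommand /usr/local/bin/realuser-wrapper   (in /etc/ssh/sshd_config)

# environment="REALUSER=..." from authorized_keys is available here
logger "ssh session: user=$USER REALUSER=${REALUSER:-unset}"

# hand control back to the requested command or an interactive shell
if [ -n "$SSH_ORIGINAL_COMMAND" ]; then
    exec /bin/sh -c "$SSH_ORIGINAL_COMMAND"
else
    exec /bin/sh -l
fi
```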
What I'm trying to do is separate my existing MS Access application into a front-end (which will run locally on a user's machine) and a back-end (which will be hosted on a networked file server) and allow users to choose between "read-only" and "write" modes. The idea is that only one user can use the "write" mode at a time, thus preventing the same piece of inventory being allocated to multiple customers. My problem is that the application currently handles concurrency by requiring users to open a .bat file which only lets them into the application if a .ldb file does not already exist (there is no read-only mode currently), so I need to prevent users accessing the production data in "read-only" mode from creating a .ldb file and unnecessarily blocking out other users.
The biggest challenge to implementing this is that users must have write access to the temporary tables in the MS Access (.mdb) file installed locally. I have tried to implement this using a linked table, but I'm not sure how I can control when records become locked when using linked tables (which creates a .ldb file).
You could change the sharing setting back to Exclusive Mode. Then only one person can access the file at a time. Check out this link and the other sharing options you have.
http://office.microsoft.com/en-us/access-help/set-options-for-a-shared-access-database-mdb-HP005188297.aspx
Side note: Yikes. Using Access in a shared network environment is not fun. I hope nothing important/time-sensitive/secure is in this file. The .ldb file not being deleted and blocking other users is something that I used to see happen regularly in this situation. I believe splitting the Access file into a front-end and back-end like you've done is the first step. Then using linked tables to a SQL Server database can help resolve these issues. But if you're going to this level of effort, you may want to consider dumping Access and getting a COTS product or creating a new application.
Depending on which version of Access you are using, there's a lot of flexibility in the UI development. In other words, this sounds more like an "interface" issue as opposed to a "database" issue. Given that everybody is able to write to a table, you should be able to check in near real time (performance can be an issue with larger datasets) whether a particular item has been added to inventory or not.
The way I handled this problem was to have two tables, an incoming and an outgoing log, and set up a query that did the math against the inventory list on the amount of products. And, like a general ledger, select a set amount of time to "close the log" (monthly, quarterly) so that the query is not taking into account stuff that happened two years ago.
If you need more help with Access-related stuff, Access Monster is a good forum site that deals with nothing but Access.
My problem is that the application currently handles concurrency by requiring users to open a .bat file which only lets them into the application if a .ldb file does not already exist (there is no read-only mode currently), so I need to prevent users accessing the production data in "read-only" mode from creating a .ldb file and unnecessarily blocking out other users.
--> If every user has his own copy of the front-end on his own machine, you'd have to check the .ldb file of the back-end.
I guess it would be easier to give everyone write access to the backend and manage the actual writing programmatically with a "locked by User X" field in the backend:
You said:
preventing the same piece of inventory being allocated to mutliple customers
If this is the only reason for putting all users but one in read-only mode, you could put a "locked by User X" field on the inventory table. If someone starts to modify (or even opens) a piece of inventory, update the record with his user name, and delete the user name again when he's done.
If another user tries to open the same piece of inventory as well, the name of the first user will already be in the "locked by User X" field, so you can put the second user in read-only mode.
If the inventory pieces are not the only problem, and all the other users really are not allowed to change anything as soon as someone else is editing, you can create a new table with only one column and one row and use that as the "locked by User X" field. As soon as there is a user name inside, you can put everyone else in read-only mode.
No matter how you do it, you will have to provide some kind of admin menu, so if someone's front-end crashes while editing, someone else needs to be able to unlock this user's locked data (=delete his username from the "locked by User X" field).
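The "locked by User X" check-out needs to be atomic, or two users can still grab the same record between the read and the write. In Access you would do this in VBA against the linked table; here is the same idea sketched in Python with SQLite (the table and column names are made up):

```python
import sqlite3

def try_lock(conn, item_id, username):
    """Claim an inventory record for `username`; return True if we got it.

    The WHERE clause makes the update a compare-and-set: it only succeeds
    if nobody currently holds the lock, so two concurrent users cannot
    both win the same record.
    """
    cur = conn.execute(
        "UPDATE inventory SET locked_by = ? "
        "WHERE id = ? AND locked_by IS NULL",
        (username, item_id),
    )
    conn.commit()
    return cur.rowcount == 1

def unlock(conn, item_id, username):
    # release only our own lock; the admin menu would clear any name
    conn.execute(
        "UPDATE inventory SET locked_by = NULL "
        "WHERE id = ? AND locked_by = ?",
        (item_id, username),
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inventory (id INTEGER PRIMARY KEY, locked_by TEXT)")
conn.execute("INSERT INTO inventory (id) VALUES (1)")
print(try_lock(conn, 1, "alice"))  # True: alice gets the record
print(try_lock(conn, 1, "bob"))    # False: bob is put in read-only mode
unlock(conn, 1, "alice")
print(try_lock(conn, 1, "bob"))    # True: the record is free again
```

The one-row "whole application" lock table from the previous paragraph works the same way; only the WHERE clause changes.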
I've set up a repository that is served through apache2. Users first need to authenticate to apache in order to read / write to the repository.
It has come to my attention that if a user sets some arbitrary name as 'username', that name will be used for the commit, and not the Apache authentication name.
Now, is there a way so that either
the username is replaced by the apache login name?
or I add the apache login name to the username as defined in the commit?
I know that Subversion & Apache will always use the Apache login name, so that should be possible with Mercurial too, right?
EDIT:
I think what I need is to write a hook which extracts the HTTP username and checks whether it matches the commit username. If it doesn't, the push should be rejected.
Does anyone know how to do this?
This is the wrong approach to this, and is guaranteed to cause more headache and problems than whatever problem it is that you're trying to solve right now.
Let's assume that you succeeded in implementing the proposed method, what would happen?
Well, in my local repository, that I'm trying to push, I have changesets 1, 2, and 3, with hashes ABC, DEF and KLM. For some reason, I did not use the Apache username when committing, so they're wrong according to your proposed changes.
I push to the server.
In-flight, your code changes my commits to have the Apache username instead. This causes the hashes of those changesets to be recalculated, and to differ. In other words, my changesets 1, 2, and 3 will now have hashes XYZ, UVW and JKL.
So now my changes are on the server. I did not get a conflict during push since I was the last person cloning.
However, if I now pull, I suddenly discover there are 3 changesets I don't have, so I pull them, and find that I now have those 3 changesets in parallel with the 3 I had, with the same contents, a different committer name, and different hashes.
This is how every push and pull will behave from now on.
You push, and immediately you can pull the "same" changesets back, with new hashes, in a parallel branch to yours.
And now the fun begins. How does your local client discover what to push? It asks the server "what do you have?" and then compares. Well, the server still doesn't have your 3 original changesets, so the outgoing command is going to figure that those 3 changesets should be pushed.
But if you try to push them, you then recreate the same 3 new changesets, which can't be pushed, so you're going to have trouble with that.
What you have to do is impose the following workflow on your users:
Push the new changesets
Pull the new changesets back, in their new form
Strip out the original changesets that were pushed
A better approach would be for the server to prevent the push in the first place, with a message about using the wrong commit name.
Then you place the burden on the user to fix those changesets before trying to push, for instance by importing them into MQ and reapplying them one at a time.
Or... not.
What if I do a pull from you? You fix a bug, and you're not yet ready to push everything to the server, so you allow me to pull from you, and now I have outgoing changesets with your name on it, and a server that will enforce my name on them all.
About now you should realise that this approach is going to cause a lot of problems: you're basically trying to make a distributed version control system behave like a centralised one.
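If you do take the "reject the push" route suggested above, a server-side pretxnchangegroup hook is the usual place for the check. A sketch, assuming Apache puts the login in REMOTE_USER; the matching rule in user_matches is an assumption, adapt it to however your commit names relate to the Apache logins:

```python
# .hg/hgrc on the server:
#   [hooks]
#   pretxnchangegroup.checkuser = python:/path/to/checkuser.py:checkuser
import os

def user_matches(commit_user, http_user):
    # Commit users often look like "Full Name <login@example.com>";
    # accept the push if the Apache login appears in the commit user.
    return bool(http_user) and http_user in commit_user

def checkuser(ui, repo, node, **kwargs):
    http_user = os.environ.get("REMOTE_USER")  # set by Apache auth
    # `node` is the first incoming changeset; check every new one
    for rev in range(repo[node].rev(), len(repo)):
        ctx = repo[rev]
        if not user_matches(ctx.user(), http_user):
            ui.warn("commit %s by %r does not match HTTP user %r\n"
                    % (ctx, ctx.user(), http_user))
            return True  # a true return value aborts the push
    return False
```

Because the hook rejects the transaction instead of rewriting it, the hashes of the user's changesets are never touched and none of the parallel-branch problems above arise.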
I am writing a program in vb.net that requires a user to log in before he can use the application. The main user is created when the program is installed, similar to how windows works when it is installed.
The main user can add additional users to the program. I already know that I should store the passwords encrypted. My question is: where should I store the usernames and passwords? The registry, isolated storage, or a .config file? I don't want any user to be able to modify/delete that file, as the other users would then not be able to log in. Also, this file should be accessible to any user that logs into the computer.
The computer is not guaranteed to be connected to the internet, so it must be stored locally.
Thanks
To tell you the truth, if someone has the willpower to look for the file, they will find it, so the storage location can improve security a little, but I would focus on the contents of the file itself.
You could store the application's data as an encrypted file, which could stop amateur attempts, but as you are using the .NET Framework, your program could be decompiled and any symmetric encryption algorithm rendered useless.
I think your best bet would be to generate a seed based on the computer the program is on, and if decryption fails, call home or go into lockdown.
Another option would be to store the encrypted (encrypted with your symmetric key) file and a hash file (in different locations probably). If the hash of the loaded file then does not match the hash file your program could then call home (If you have a home to call).
This is just an idea; I haven't actually tried anything like this.
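The hash-file idea is stronger if you use a keyed hash (HMAC) rather than a plain hash, since an attacker who can rewrite the data file can just as easily rewrite a plain hash sitting next to it. A sketch in Python (the .NET equivalent would be HMACSHA256; the machine-derived key is the part you would have to supply):

```python
import hmac
import hashlib

def sign(data: bytes, key: bytes) -> bytes:
    # keyed hash of the credentials file; store this in a separate location
    return hmac.new(key, data, hashlib.sha256).digest()

def verify(data: bytes, key: bytes, expected: bytes) -> bool:
    # compare_digest avoids timing side channels on the comparison
    return hmac.compare_digest(sign(data, key), expected)

key = b"machine-derived-seed"        # stand-in for your per-machine seed
blob = b"user=admin;pw=<encrypted>"  # stand-in for the stored credentials
tag = sign(blob, key)
print(verify(blob, key, tag))                   # True: file untouched
print(verify(b"user=admin;pw=evil", key, tag))  # False: tampering detected
```

This only detects tampering; as noted above, anyone who decompiles the program can recover the key, so it raises the bar rather than making the file secure.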
If you are not able to use Windows users/credentials in any way on the machine, then there really is no absolute way to prevent the file from being removed or changed, since anyone on the computer has the same access as the main user, who needs rights to modify the file in order to add users through the program.
The only way to do it for sure is to have the main user log on with a different user name, and set the file permissions on that file/folder so that only the main user has modify permission (and the other user accounts do not have the right to change permissions). I know you said it wouldn't work in your environment (which is?), but you might be able to create users and run things under different credentials from your code without having the users log on any differently.
The only crazy way I can think of is to create a service on the computer that, once running, opens and holds a handle to that file with sharing set such that no other process can open the file for writing. You'd of course have to work out some way for the main user to be able to add users.
I'm writing a backup program for personal (for the moment at least) use.
For some directories (network directories / protected directories) credentials are needed to access them.
I can setup different jobs in the program to run at specific times.
These jobs are stored in an XML file.
I want to also store the usernames and passwords which the jobs will need.
What and where would be the best way to store these?
Changing permissions on the directories is not an option.
Thanks in advance!
You should never store the logon password for a user in Windows in order to be able to access a local directory. Instead, your backup program should run as a user that has the SeBackupPrivilege enabled (i.e. run the backup from a service that runs as the local system). This means that you won't need to change the permissions.
You may also need to make sure that you first create a Volume Shadow Copy and copy from that; don't copy directly from the disk, since that may cause your backup to be inconsistent.
Also, you need to take special care for encrypted files and will need to use ReadEncryptedFileRaw for this.
You could execute the backup program as a scheduled task, running as a specific user.
As for storing passwords, you can store them using IsolatedStorage with two-way encryption, to make it harder for someone to decipher the file if they manage to find it.
Check out this SO question for implementing two-way encryption.