Hey guys, so I have an Xcode bot problem. Basically I have a bot that requires a pre-integration script to be run. This script runs git submodule init and git submodule update, and it gets an SSH authentication error.
On the OS X Server machine itself, the appropriate SSH keys have been set up for the admin user (tested). In Xcode on my machine, the server is connected as the user admin. However, it seems that when the script is being run, it is not being run as admin (tested by creating a text file in ~, which wasn't there afterwards). I was wondering if it is possible to su in the script; I've looked online and it seems like it wouldn't be possible because I don't know which user the Xcode bot is running the script as (my guess is that it's running as Guest).
Any advice on this? Or a way to run the command as a different user (it must be done in the script)?
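For reference, the pre-integration script is essentially just this (the repository folder name is a placeholder; XCS_SOURCE_DIR is, as far as I can tell, the directory Xcode Server checks sources out into):

#!/bin/sh
# Before-integration trigger script; the submodule fetch is what hits the SSH auth error.
cd "$XCS_SOURCE_DIR/MyRepo"    # placeholder repo folder name
git submodule init
git submodule update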
_xcsbuildd is the account that the bot runs under. Make sure that account has the necessary permissions.
Xcode Server runs an integration as a separate user called _xcsbuildd. If you can log in remotely to the machine that Xcode Server is running on, you can log in as that user through the Terminal and should be able to add or check any SSH keys that are loaded for that user.
Here is a useful blog post on how to do this http://papaanton.com/setting-up-xcode-6-and-apple-server-4-0-for-continues-integration-with-cocoapods/
Scroll down to the part called "Adding additional SSH Key to the Xcode Server"; that should walk you through how to do it. I know it's not an automated script, but it's how I was able to get past my SSH issues, and maybe it'll help you as well.
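In case it helps, here is a rough sketch of the manual steps on the server (the git host and key comment are placeholders; the blog post above covers this in more detail):

# Switch to the build user on the OS X Server machine.
# If the account has no login shell, you may need: sudo -u _xcsbuildd /bin/bash
sudo su - _xcsbuildd

# See which keys (if any) that account already has:
ls -la ~/.ssh

# Generate a key for the build user if there isn't one, then add the public
# half (~/.ssh/id_rsa.pub) to your git host:
ssh-keygen -t rsa -f ~/.ssh/id_rsa -C "xcodeserver@example.com"

# Test the connection as that user, which also caches the host key so the
# bot isn't stuck on the host-key prompt (placeholder host):
ssh -T git@yourgithost.example.com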
Related
I have installed the TeamCity server on my infrastructure. I was e.g. able to silently configure the external database and some other stuff through the TeamCity configuration files. Now, when I start TeamCity, I am facing the "TeamCity First Start" web page, where I can choose if I want to restore my system from a backup or if I want to proceed with a First Start.
I would like to automate that installation as much as possible. Is there a reliable way, e.g. through the TeamCity API, to trigger the action underlying the "Proceed" button of that web page, other than writing a bot that browses the web page and clicks the button? I found no such thing in the API documentation, but I might have missed something.
I think something like this could be called on the server host:
curl -X POST http://localhost:8111/mnt/do/goNewInstallation
but I don't know what data to post and how to get the necessary authentication to be allowed to run that command. Indeed, running the above command yields the following error:
The session is not authenticated. Access denied.
At that point, the super user authentication token has not yet been written to the server log file.
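For what it's worth, once the super user token does appear in teamcity-server.log, TeamCity accepts it as HTTP basic auth with an empty username, so something along these lines might work (untested, and I still don't know what request body, if any, that endpoint expects):

# Placeholder: the value from the "Super user authentication token" line in teamcity-server.log
TOKEN=1234567890123456789
curl -X POST -u ":${TOKEN}" http://localhost:8111/mnt/do/goNewInstallation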
I just started with the Google Cloud Platform and created my first VM instance (Debian).
It all worked in a pretty straightforward way: I hit the SSH button next to my instance and it opened up a command-line interface in a new browser window. My username was the handle (pre-@) part of my Gmail address.
However, I wanted to use Terminal on my MacBook as a CLI for accessing it. Looking at the guides, this seemed to be a long, convoluted process. I followed this process (detailed below), but now I can only access some new account on the VM; the username is my full Gmail address this time, but with underscores replacing non-alphabet characters, so like the original but with "_gmail_com" tacked on to the end. I can no longer access the original account, which seems to be the proper account with admin privileges. Note that I can sudo into the root account and open up the directories and files owned by the original account, but this seems very dumb.
I've tried posting in the forum for this stuff, Google's group for Google Compute ("gce-discussion"), but my posts are held for approval for some reason. It's as though Google are just hoping I cave and pay for technical support.
My aim is to have a Python session running a Discord bot that keeps going after I log off. It'd also be good to be able to serve up files (images) via HTTP.
Thank you for any help you can be!
The steps I followed in the guide's convoluted process are as follows (a rough sketch of the commands appears after the list):
I created an SSH keypair (private and public)
I downloaded and installed the Google SDK to get the gcloud CLI application
I issued the gcloud command to set the public key up on my instance
it had me log in at a google page (OAuth-like thing)
I started an SSH session on Terminal, invoking the file containing my private key, trying with different permutations of options
finally got it to connect and log in using my-handle_gmail_com (i.e. the second username on my instance)
when I tried to access SSH from within the Google Cloud Platform page, the browser-based CLI automatically logged me into this same second account, "my-handle_gmail_com". So now I have no access to the original.
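For reference, here is a rough sketch of the kind of commands the guide had me run (the key path, instance name and zone are placeholders, and the exact flags may have differed):

# 1. Create the keypair locally:
ssh-keygen -t rsa -f ~/.ssh/gcp-key -C "my-handle"

# 2. Add the public key to the instance metadata via gcloud. The username
#    before the colon is what decides which Linux account the key logs into:
gcloud compute instances add-metadata my-instance --zone us-central1-a \
    --metadata ssh-keys="my-handle:$(cat ~/.ssh/gcp-key.pub)"

# 3. Connect from Terminal using that username and the instance's external IP:
ssh -i ~/.ssh/gcp-key my-handle@EXTERNAL_IP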
Thanks!
I have a very similar question to this one. #cherba already gave a very rich and helpful dissection of the gcloud init command which has been very helpful.
So what I really want to do, automating gcloud init, is:
Front load my interactive input: I want the users to supply all input at the beginning and not be prompted again.
Request a token, before gcloud is even installed, probably from a static perma-link; the resulting token should be usable only once, probably with a limited lifetime, maybe an hour. This is very similar to how gcloud init --console-only already works, except with an unchanging initial URL.
I specifically want this to be for a user account, not a service account.
This would allow me to prompt the user upfront for all configuration input and build the fully configured system automatically, over lunch or a long coffee break, without additional babysitting.
The goal here is distinct development environments, not deploying to an array of boxes.
How can I accomplish this?
This is not supported officially and is not recommended. Service accounts are meant for this kind of thing. You should use service accounts as explained in the earlier answer.
What the SDK is essentially doing is submitting a token request to https://accounts.google.com/o/oauth2/auth with the following scopes:
'https://www.googleapis.com/auth/userinfo.email'
'https://www.googleapis.com/auth/cloud-platform'
'https://www.googleapis.com/auth/appengine.admin'
'https://www.googleapis.com/auth/compute'
'https://www.googleapis.com/auth/accounts.reauth'
For this to succeed you need to provide the regular OAuth parameters such as client_id and client_secret. To generate these you will need to register your app as an OAuth app in the developer console (a rough sketch of such a request is included below).
This may not work if third party authorizations are not supported. I have not tried it.
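If you do want to experiment anyway, a minimal sketch of the kind of exchange involved might look like the following (the client ID/secret are placeholders from registering your own OAuth app, only two of the scopes are shown, and the out-of-band redirect flow used here has since been restricted by Google):

#!/bin/bash
# Placeholders obtained by registering an OAuth app in the developer console.
CLIENT_ID="your-client-id.apps.googleusercontent.com"
CLIENT_SECRET="your-client-secret"
REDIRECT_URI="urn:ietf:wg:oauth:2.0:oob"
SCOPE="https://www.googleapis.com/auth/userinfo.email https://www.googleapis.com/auth/cloud-platform"

# 1. Send the user to the consent page and have them paste the code back:
echo "https://accounts.google.com/o/oauth2/auth?client_id=${CLIENT_ID}&redirect_uri=${REDIRECT_URI}&response_type=code&scope=${SCOPE// /%20}"
read -r -p "Paste the authorization code: " CODE

# 2. Exchange the code for access and refresh tokens:
curl -s https://oauth2.googleapis.com/token \
  -d client_id="${CLIENT_ID}" \
  -d client_secret="${CLIENT_SECRET}" \
  -d redirect_uri="${REDIRECT_URI}" \
  -d grant_type=authorization_code \
  -d code="${CODE}"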
You said "Front load my interactive input:" and also "Request a token, before gcloud is even installed". The problem with your request above, is that you will need to install gcloud at some point in time and gcloud will use its own authentication methods to connect, meaning that authentication should happen after gcloud is installed because you will always use the command “gcloud ….” to somehow connect. The previous post that you linked explains this.
Because of this, I suspect you need a workflow where gcloud commands run for multiple users/projects at the same time, by running gcloud many times in parallel. A terminal session runs one command at a time, so "front loading" the authentication (as you call it) means either using the screen command inside one SSH session or running multiple SSH sessions at the same time. If that's not what you need, then a simple shell script should do; it will run commands one after the other rather than in parallel.
For example, let's say that you want to install a package that will take a long time and still be able to run another command at the same time; you could do the following:
$ screen
$ sudo apt-get install [package-name]
(press Ctrl-A and then d to temporarily detach from this session)
$ … (do another process here)
$ screen -r (re-attaches the session so you can continue the apt-get install started above)
The example above is roughly the equivalent of having multiple SSH sessions open at the same time. You could open multiple screens and launch multiple authentications at the same time, which also lets you control when you want to stop a session. Keep in mind that if you run things in parallel, you will definitely need to load the authentication file as mentioned in the post you linked. Otherwise, you can use simple shell scripting and pass arguments (a minimal sketch follows). Since I'm not sure of the process that comes before or after your authentication, it's hard for me to provide a more precise example; there's a lot to consider and many unknowns about your workflow. I've included references below that show all the possibilities.
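As a rough illustration of the shell-script route (the configuration name, project ID and script name are placeholders, and whether you authenticate with gcloud auth login or a service-account key depends on your case):

#!/bin/bash
# Usage: ./bootstrap.sh CONFIG_NAME PROJECT_ID
# Runs the gcloud setup steps sequentially for a given configuration/project.
set -e
CONFIG_NAME="$1"
PROJECT_ID="$2"

gcloud config configurations create "$CONFIG_NAME"
gcloud auth login --no-launch-browser     # still prompts once for the OAuth code
gcloud config set project "$PROJECT_ID"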
References:
- https://www.linode.com/docs/networking/ssh/using-gnu-screen-to-manage-persistent-terminal-sessions/
- https://www.geeksforgeeks.org/screen-command-in-linux-with-examples/
- https://www.lifewire.com/pass-arguments-to-bash-script-2200571
- https://cloud.google.com/sdk/gcloud/reference/auth/activate-service-account
- https://cloud.google.com/sdk/gcloud/reference/auth/login
- https://cloud.google.com/sdk/docs/scripting-gcloud
I have created 4 instances in two separate instance groups based on two vm templates.
Initially I was using the "SSH" button within the Google Cloud console, and I noticed it would actually work only about 40% of the time. I would often have to stop/restart the machines in order for SSH to work. After a day or so, the SSH button stops working altogether. I figured this was just a silly bug, and that having actual SSH keys and logging in via normal SSH would work fine.
Well, today I configured normal SSH keys, and I was getting the following on 3 of the 4 instances:
Permission denied (publickey).
I logged into the cloud console and clicked the SSH button on all 4 instances and, lo and behold, only 1 of the 4 works.
So my question is... why do I have to keep rebooting instances just to keep SSH working? I have never had this problem on any other cloud server before.
Note: I created a base ubuntu from their available images, and built a generic server, then used that as the base template and forked it to create the other 2 instance group templates.
I am thinking that the ssh daemon might be crashing, but how the heck can I tell, and how can I fix it?
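I guess one way to check, without being able to SSH in at all, would be the instance's serial console output, e.g. (instance name and zone are placeholders):

gcloud compute instances get-serial-port-output my-instance --zone us-central1-a | grep -i ssh

# and, on the one machine I can still get into (Ubuntu's sshd unit is called "ssh"):
sudo systemctl status ssh
sudo journalctl -u ssh --no-pager | tail -n 50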
I took the silence from the community as an indicator that the problem was only affecting myself. It turns out the stock image I had chosen to start as a base template had a buggy SSH daemon. It was a fairly quick process to rebuild my templates off of a different stock image, and since then I have had no problems connecting to my machines via ssh.
I have a build box, which I use to make continuous builds as well as run nightly unit tests. I'm using Jenkins to run my builds/unit test scripts; it is running on a Windows box because our compiler is Windows-based.
One of our enterprise solutions uses Python code with rabbitmq for exchanging messages for syncing specific database tables over a faulty network. I have unit tests to help verify that updates are happening correctly.
In order to unit test the Python updates, I need to be able to stop some services running on my Linux box, then restart them after I update the Python code. I set up a key exchange between my Windows box and Linux box so that I don't have to put a password in the batch script.
When I'm remoted into the Windows box, I can successfully run the batch file, which uses plink commands that rely on the key exchange and PuTTY's Pageant (which is running in the background); e.g. I use plink to execute commands on the Linux box from the command line in my batch file. However, when I try to run the batch file from Jenkins, it doesn't work properly because it gets prompted for the SSH password when trying to run the plink commands.
I believe my current issue can be summarized by two issues, which I'm hoping can be verified and rectified:
I think Jenkins may be running as a different user or using different system credentials, so it's not able to connect the way the logged-in user can. If this is the case, what would I need to do to get Jenkins to run the plink commands properly without being prompted for the password?
Pageant looks like it needs a password typed in every time the computer restarts. My research unearthed ways to put Pageant in startup, so you get prompted when you first log in, but I need this to be automatic, like I can have on Linux boxes. If Windows reboots because of a Windows update, the unit tests will fail because they won't be able to connect to the Linux server. Sure, this only happens once a week, but over the course of a year it'll be very annoying.
What can I do to solve the above two issues? If there is a good alternative to putty for the automatic key exchange between Windows and Linux, I'd be interested in hearing about it (I would prefer to stay away from Cygwin with OpenSSH, but might go down this route if the above can't be rectified).
I use plink on my Windows Jenkins box to communicate with Linux on a daily basis; there is no problem with it.
As you theorized, Jenkins runs under its own user (the Windows default, I think, is the SYSTEM user), which is different from your logged-in session, even if you log in as Administrator. Your authentication key is stored in your (Administrator or otherwise) profile directory.
What you need to do is export your key as a .ppk file (PuTTYgen, which ships alongside Pageant, can save it in that format), then supply the path to this .ppk file to plink:
plink -i "C:\path\to\id.ppk"
Looks like there is a simpler way to do what I'm trying to do: Jenkins' Publish Over SSH plugin, https://wiki.jenkins-ci.org/display/JENKINS/Publish+Over+SSH+Plugin