How to configure minishift for user access - minishift

I would like to configure Minishift for multiple users instead of using the default developer account to access the console and deploy applications. I am thinking of an LDAP or Linux-style user management setup where users are created with default credentials and, once they access the Minishift environment and log in, are forced to change their passwords. The users created and maintained by that system could then log in and deploy applications, rather than anyone who knows the default developer or admin credentials.
Thanks for the help.

From the product page
Minishift is a tool that helps you run OKD locally by launching a single-node OKD cluster inside a virtual machine. With Minishift you can try out OKD or develop with it, day-to-day, on your local machine.
What you are describing here is a shared build system for a team of developers, which is not what Minishift is meant for. You will have to install an OKD cluster yourself on dedicated machines (you will probably want several of them).
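If you do end up on a dedicated OKD 3.x cluster, a minimal sketch of per-user access is an HTPasswd (or LDAP) identity provider plus explicit role grants; the file path, user names, and project name below are placeholders:
# On the master host: create an htpasswd file with one entry per developer (prompts for a password)
htpasswd -c -B /etc/origin/master/htpasswd alice
htpasswd -B /etc/origin/master/htpasswd bob
# Reference this file from an HTPasswdPasswordIdentityProvider entry under
# oauthConfig.identityProviders in /etc/origin/master/master-config.yaml and restart the master services.
# Then grant each user the rights they need in a project so they can deploy applications:
oc adm policy add-role-to-user edit alice -n my-project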

Related

Gcloud compute SSH connects to a different Instance than SSH + certificate

So I'm trying to connect to a gcloud instance where I've installed several packages and started to develop my code with no problems. During the week I use a certificate and PuTTY to log in, since I work on a Windows machine.
However, now that I'm home, I tried to connect to the instance from my Mac, where I installed the Google Cloud SDK, and after configuring all the parameters using
gcloud init
I get logged into an instance that seems empty and doesn't have the packages and scripts I mentioned above.
What am I doing wrong? I can confirm that I'm connecting to an instance with the same name, in the same region and all, but it is completely different.
Cheers!
As you are connecting from a different machine, you are logging in as a different user.
Go to /home and check whether there is a folder for the "other" username. Note that you might not be able to access it; you would need to become superuser.
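If you want to land in the same home directory from both machines, one hedged option is to check which users exist and then force gcloud to log in with the username you use from PuTTY; the instance name, zone, and username below are placeholders:
# See which home directories (and therefore users) exist on the instance
sudo ls -l /home
# Connect with an explicit username instead of your local macOS username
gcloud compute ssh windows-username@my-instance --zone=us-central1-a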

Running tests on VM does not work unless window is open

We are attempting to do our testing remotely, so we set up some virtual machines to run our GUI tests and free up our local machines. What we were hoping for was to have the tests run just like they would on a physical machine; however, they seem to require an active Remote Desktop Connection in order to run properly. These tests are written using LeanFT and target a Windows app, so this is not mobile GUI testing.
Is there a certain way to configure this VM to set it up properly for automated GUI testing that does not require an active Remote Desktop Connection? It seems as if it is sharing the controls on our physical machine.
Or am I completely wrong here? Is a remote machine different from a virtual machine? Thanks!
It's possible to run a GUI test without an active Remote Desktop Connection.
I achieved this with LeanFT through the following two steps:
1. Configure how you execute your tests.
Whether you're running via a Jenkins slave or through another kind of "listener" (maybe SSH, Bamboo, etc.), you need to configure this listener to start after a specific user has logged on.
In my case I was running through the Jenkins slave, so I configured the slave to launch as soon as the user logs in, as in the sketch below.
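For example, one hypothetical way to start an inbound (JNLP) Jenkins agent only after the test user has logged on is a scheduled task; the paths, server URL, node name, and secret below are placeholders:
rem Run the Jenkins agent at logon of the user that owns the desktop session
schtasks /create /tn "JenkinsAgent" /sc onlogon /tr "java -jar C:\jenkins\agent.jar -jnlpUrl http://jenkins.example.com/computer/test-vm/slave-agent.jnlp -secret PLACEHOLDER"
rem A shortcut in the user's Startup folder achieves the same effect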
2. Tell Windows to log the user in when the computer starts. You can achieve that via the following registry settings:
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon]
"DefaultDomainName"="DOMAIN"
"DefaultUserName"="USERNAME"
"AutoAdminLogon"="1"
"DefaultPassword"="PASSWORD"
The core requirement is that you need to have an active session (regardless of whether you are using Jenkins, TeamCity, Grid or other tools to launch the tests).
For your virtual machine, you will need access to the console. For VMware vSphere, there is a native client and a web UI; VMware Workstation and VirtualBox display the console automatically.
Using the console access, log into the system and set it up to never log out, sleep, or hibernate. These are a handful of OS settings that you can look up; a sketch of the Windows ones is below.
Essentially, these boxes need to always be logged in. With this setup, make sure access to these systems is controlled so that you don't have random people logging in or out.
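As a rough sketch of those settings on Windows (worth double-checking for your OS version; these are assumptions about a typical setup):
rem Never sleep or hibernate while on AC power
powercfg /change standby-timeout-ac 0
powercfg /change hibernate-timeout-ac 0
rem Never blank the display
powercfg /change monitor-timeout-ac 0
rem Disable the screen saver for the logged-in test user so the session is not locked
reg add "HKCU\Control Panel\Desktop" /v ScreenSaveActive /t REG_SZ /d 0 /f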

JProfiler is not able to detect any locally running JVM when profiling a standalone Java application

JProfiler is not able to detect any locally running JVM. Thinking it was a user-access issue, I started the Java services as the same logged-in user on the server, but still no luck. Some help here would be really appreciated.

Vagrant in production

I've been reading about Vagrant, and I find it quite useful for my development. I am currently managing a series of services (mail, web, LDAP, file sharing, etc.), and often one of them goes down and needs to be restored quickly. Is it possible (and recommended) to use Vagrant for these purposes?
So far I have virtual machines installed and managed like real machines.
I would also like to know about an alternative to Vagrant which would let me set up a simple configuration file and spin up a virtual machine with, for example, Zimbra, and quickly have an alternate mail server, enable RabbitMQ, etc.
Vagrant should be used more like a staging environment to test your infrastructure changes. It should be your test bed for automated infrastructure changes.
The way we use it at my company is like so:
1. Create VMs for our managed servers in Vagrant.
2. Create puppet definitions for each server.
3. Create cucumber tests for each server.
4. Make infrastructure changes via puppet and add cucumber tests.
5. Launch our servers to test for failures.
6. Fix bugs, release, and/or go back to step 4.
Basically when we're happy with our changes, we'll pull our puppet changes into production to make it happen.
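As a rough illustration of that cycle from the command line (assuming puppet provisioning is configured in the Vagrantfile and the cucumber suite runs under bundler; both are assumptions about the setup):
# Bring the staging VMs up and apply the current puppet manifests
vagrant up --provision
# Re-apply puppet after changing the definitions
vagrant provision
# Run the cucumber acceptance tests against the running VMs
bundle exec cucumber
# Tear everything down to start again from a clean slate
vagrant destroy -f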
I would not recommend using Vagrant to manage VMs for real production. I'd use something else like Razor, virsh, OpenStack, or one of the many other VM management systems out there.
This page suggests that the Vagrant push command is meant for deploying to production:
https://www.hashicorp.com/blog/vagrant-push-one-command-to-deploy-any-application/
"Additionally, multiple config.push.define declarations can be in a Vagrantfile to define multiple pushes, perhaps one to staging and one to production, for example."
From my experience, Vagrant is mainly used in a development environment.
Vagrant's configuration and provisioning options are limited compared to, for example, Terraform.
If you are working in a cloud-based environment, you can use Terraform for infrastructure provisioning.
If your environment is local or your VMs will be hosted in a datacenter, you can use Ansible, Chef, or Puppet for your configuration management and automation.
HashiCorp just published Otto, which is meant to be Vagrant's successor. It is designed to support deployment environments.
From their Github page:
The key features of Otto are:
Automatic development environments: Otto detects your application type and builds a development environment tailored specifically for that application, with zero or minimal configuration. If your application depends on other services (such as a database), it'll automatically configure and start those services in your development environment for you.
Built for Microservices: Otto understands dependencies and versioning and can automatically deploy and configure an application and all of its dependencies for any environment. An application only needs to tell Otto its immediate dependencies; dependencies of dependencies are automatically detected and configured.
Deployment: Otto knows how to deploy applications as well as develop them. Whether your application is a modern microservice, a legacy monolith, or something in between, Otto can deploy your application to any environment.
Docker: Otto can use Docker to download and start dependencies for development to simplify microservices. Applications can be containerized automatically to make deployments easier without changing the developer workflow.
Production-hardened tooling: Otto uses production-hardened tooling to build development environments (Vagrant), launch servers (Terraform), configure services (Consul), and more. Otto builds on tools that power the world's largest websites. Otto automatically installs and manages all of this tooling, so you don't have to.
I had the same question and have been investigating the use of Vagrant push. As per their documentation, as of version 1.7 Vagrant is capable of deploying, or "pushing", the application code in the same directory as your Vagrantfile to a remote such as an FTP server.
I'm considering having Vagrant spin up a VM for developers, while also giving them the option to deploy their code to a live server for production environments.
As mentioned by @andrerpena, Otto is the successor of Vagrant.
From www.ottoproject.io:
Otto can deploy your application. Users of Vagrant for years have wanted a way to deploy their Vagrant environments to production. Unfortunately, the Vagrantfile doesn't contain enough information to build a proper production environment with industry best practices. An Appfile is made to encode this knowledge, and deployment is a single command away.

Debian wheezy Linux guest environment not available

Since yesterday I can't connect through SSH to any of my Debian Wheezy instances on Google Cloud. I can connect only through the web console. When the web console tries to negotiate the session, there is a message telling me to update the Linux guest environment, but for Wheezy there is no Linux guest environment package.
Do you have any idea how to resolve this issue?
Debian 7 images were deprecated a while ago and, as there are no updated packages for the guest environment, the best approach would be to migrate to Debian 8 or 9.
To access your VMs you might try one of the following options:
1) According to this public issue, the old guest environment still works with deprecated keys. If you have an SSH client configured with an old private key, you might still have access to your VMs through it.
2) Accessing the VM via the serial console (see the sketch after this list).
3) Mounting the original disk, or a copy of it, as a secondary disk in a VM you do have access to. The steps are very similar to the section "Inspect an instance without shutting it down" in this document. That would allow you to recover your data.
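As a rough sketch of options 2 and 3 with the gcloud CLI (the instance, disk, and zone names are placeholders):
# Option 2: enable the serial console on the affected instance and connect to it
gcloud compute instances add-metadata my-wheezy-vm --zone=us-central1-a --metadata serial-port-enable=TRUE
gcloud compute connect-to-serial-port my-wheezy-vm --zone=us-central1-a
# Option 3: attach the old boot disk (or a copy of it) to a VM you can reach, then mount it there
gcloud compute instances attach-disk rescue-vm --disk=old-wheezy-boot-disk --zone=us-central1-a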