What is the default WSL distribution? - windows-subsystem-for-linux

I am specifically trying to understand which distribution is used when the user runs "wsl --install" without specifying one.
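For context, wsl --list --online shows the distributions that are available to install, and a specific one can be requested instead of the default (the distribution name below is only an example):
wsl --list --online
wsl --install -d Ubuntu-22.04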

Related

Compute Engine: Restricting SSH usernames

I want to use OS Login with GCP because we use IAM for scoping access to all other resources within GCP (storage buckets, SQL, Redis, etc.). I understand how to restrict users from accessing machines using service accounts and roles.
But, I don't understand how to restrict the possible usernames that someone can use to SSH into our Compute Engine machines. Assume we have a VM configured with OS Login. The problem is that everyone connects using a CLI string like
gcloud compute ssh $MACHINE_NAME, which (possibly creates and then) logs in to a per-user home directory like /home/$USER_DOMAIN_SUFFIX. So, the team's shell history, relevant home directory contents (downloaded files, created scripts, etc.), and running processes are all in a different scope (UID) for each person. We could soft-enforce that everyone does something like gcloud compute ssh $SPECIAL_USERNAME@$MACHINE_NAME, where everyone uses the same $SPECIAL_USERNAME value. But that doesn't prevent new home directories from being provisioned. It's a convention, not a software policy.
Is there a way to accomplish what I want, where I can freely choose the value of $SPECIAL_USERNAME? I don't want to be locked in to the generated usernames based on the user/service account email.
Using root for everything is unacceptable for a number of reasons (we want to use a non-root container runtime and we want to limit potential damage done by this $SPECIAL_USERNAME).
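For illustration, a minimal sketch of the soft-enforcement convention described above (the wrapper script, username, and variable names are hypothetical, and nothing stops someone from bypassing it):
#!/usr/bin/env bash
# ssh-vm.sh - hypothetical wrapper so the whole team connects with one username
SPECIAL_USERNAME="deploy"    # illustrative shared username
MACHINE_NAME="$1"            # e.g. ./ssh-vm.sh my-instance
exec gcloud compute ssh "${SPECIAL_USERNAME}@${MACHINE_NAME}"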

Way to pass parameters or share a directory/file to a qemu-kvm launched VM on CentOS 7.0

I need to be able to pass some parameters to my virtual machine during its boot-up so it sets itself up properly. To do that I either have to bake the info into the image or somehow pass it as parameters to my qemu-kvm command. There are just a few of these parameters, and if it were VMware, we would just pass them as OVA params and, when the VM launches, call the OVA environment to get them. But launching it from qemu-kvm I have no such option. I did some homework and found that I could use the virtio-9p driver for sharing files between host and guest. Unfortunately, RHEL/CentOS has decided not to support 9p.
With no option of rebuilding my RHEL kernel with the 9p options enabled, how do I solve the above problem? Either solution would work: pass/share some kind of JSON file to the VM (pre-populated on the host), which it will read and use for its setup, OR set some kind of "environment variables" which I can query from within the VM to get these params and continue with setup. Any pointers would help.
If your version of QEMU supports it, you could use its -fw_cfg option to pass information to the guest. If that guest is running a Linux kernel with CONFIG_FW_CFG_SYSFS enabled, you will be able to read out the information from sysfs. An example:
If you launch your VM like so:
qemu-system-x86_64 <OPTIONS> -fw_cfg name=opt/com.example.test,string=qwerty
From inside the guest, you can then get the value back from sysfs:
cat /sys/firmware/qemu_fw_cfg/by_name/opt/com.example.test/raw
There appears to be some driver for Windows as well, but I've never used it.
When you boot your guest with -kernel and -initrd you should be able to pass environment variables with -append.
The downside is that you have to keep track of your current kernel and initrd outside of your disk image.
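For example, a rough sketch of that approach (file names and the variable are placeholders); whatever you put in -append ends up on the kernel command line, and unrecognized NAME=value pairs are also passed to init's environment:
qemu-system-x86_64 <OPTIONS> -kernel vmlinuz -initrd initrd.img -append "console=ttyS0 MYSETTING=qwerty"
cat /proc/cmdline    # inside the guest: shows the appended parameters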
Other possibilities could be a small prepared disk image (as you said) or via network/dhcp or a serial link into your guest or ... this really depends on your environment.
I was just searching to see if this situation had improved and came across this question. Apparently it has not improved.
What I do is output my variable data to a temp file (e.g. /tmp/xxFoo). Usually I write text or a tar straight to that file, then pad it to a 512-byte multiple with some minimum size (like 64K), otherwise the disk controller won't configure it. Then the VM starts with that file attached as a raw drive. After the VM is started, the temp file is deleted. From within the guest you can read/cat the raw block device to get the variable data back (in BSD, use the c partition as the raw drive).
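A rough sketch of that flow (paths, sizes, and the guest device name are illustrative and depend on your setup):
tar -cf /tmp/xxFoo ./config/            # host: pack the variable data into the temp file
truncate -s '%512' /tmp/xxFoo           # pad to a 512-byte multiple (pad further to a minimum like 64K if needed)
qemu-system-x86_64 <OPTIONS> -drive file=/tmp/xxFoo,format=raw,if=virtio
tar -xf /dev/vdb -C /somewhere          # guest: read the raw block device back (device name varies)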
In Windows guests it's tricky to get to the data. In theory you can read \\.\PhysicalDriveN, but I have never been able to get that to work. Cygwin can do it, and there it works like on Linux. The other option is to make your temp file a partitioned and formatted image, but that's a pain to create and update.
As far as sharing a folder goes, I use Samba, which works with just about anything. I usually run several instances of smbd with different configurations.
One option is to create an ISO file and pass it as a parameter. This works with both Windows and Ubuntu as host and guest. You can read the mounted CD-ROM inside the guest OS (see the sketch after the mount steps below for one way to build the ISO).
>>qemu-system-x86_64 -drive file=c:/qemuiso/winlive1.qcow2,format=qcow2 -m 8G -drive file=c:\qemuiso\sample.iso,index=1,media=cdrom
On a Linux guest, mount the CD-ROM (Ubuntu):
>>blkid                              # check that the media is there
>>sudo mkdir /mnt/cdrom
>>sudo mount /dev/sr0 /mnt/cdrom     # this step can also be put in crontab
>>cd /mnt/cdrom
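For completeness, one way to build such an ISO on the host (genisoimage is an assumption; mkisofs or similar works too, and the paths are placeholders):
>>genisoimage -r -J -V CONFIG -o sample.iso ./config-dir/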

TensorFlow Serving from an S3 path is not working - Could not find base path s3://bucketname

Loading a model from S3 is not working for me. As per #615, I compiled the server using:
bazel build -s -c opt --define with_s3_support=true //tensorflow_serving/model_servers:tensorflow_model_server
and now when I run it using
bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --model_base_path=s3://bucketname/
I'm getting
FileSystemStoragePathSource encountered a file-system access error: Could not find base path s3://bucketname/ for servable default
Any tips on how to fix that?
s3://bucketname/ isn't resolvable unless you have the AWS SDK installed on that machine.
A much better approach would be to use the URL of the model on S3. If you're going to do that, you have to either make the bucket public, in which case everyone will be able to access it, or create a bucket policy which allows access from the server's IP.
If you're hosting your server on AWS, you can also launch it with an IAM role and give it S3FullAccess. This is best for any sort of production environment because you don't have to store API Keys in your source code.
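If missing credentials turn out to be the cause, one common pattern (an assumption, not something confirmed in the answers above) is to supply them through the environment variables that TensorFlow's S3 filesystem reads, and to make sure the base path contains numbered version subdirectories (e.g. s3://bucketname/1/):
AWS_ACCESS_KEY_ID=... AWS_SECRET_ACCESS_KEY=... AWS_REGION=us-east-1 bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --model_base_path=s3://bucketname/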

(EC2) Launch Windows instance programmatically via command line

I'd like to launch a Windows 2008 (64-bit, base install) instance programmatically, kinda like clicking on the Launch Instance link & following the "Create a New Instance" wizard.
I read about the ec2-run-instances command and tried running it in PuTTY using this syntax:
/opt/aws/bin/ec2-run-instances ami_id ami-e5784391 -n 1 --availability-zone eu-west-1a --region eu-west-1 --instance-type m1.small --private-key /full/path/MyPrivateKey.pem --group MyRDP
but it always complains:
Required option '-C, --cert CERT' missing (-h for usage)
According to the documentation, this option isn't required!!
Can someone tell me what's wrong anyway? I'm just trying to programmatically launch a fresh Windows install, run some tests in the cloud & shut it down after that.
The error message is correct (just try adding --cert ;) - to what documentation are you referring here?
The requirement is clearly outlined in the Microsoft Windows Guide for Amazon EC2, specifically in Task 4: Set the EC2_PRIVATE_KEY and EC2_CERT Environment Variables:
The command line tools need access to an X.509 certificate and a
corresponding private key that are associated with your account. [...]
You can either specify your credentials with the --private-key and
--cert parameters every time you issue a command or you can create environment variables that point to the credential files on your local
system. If the environment variables are properly configured, you can
omit the parameters when you issue a command.
[emphasis mine]
Maybe the option of using environment variables has been misleading somehow somewhere?
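For example, per the quoted documentation, a sketch of setting the environment variables once (paths are placeholders), after which --private-key and --cert can be omitted from each command:
export EC2_PRIVATE_KEY=/full/path/pk-XXXXXXXX.pem
export EC2_CERT=/full/path/cert-XXXXXXXX.pem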
Alternative
Please note that you can ease and speed up working with EC2 considerably by using alternate scripting environments covering the same ground, in particular the excellent boto, which is a Python package that provides interfaces to Amazon Web Services.
Boto uses the nowadays more common authentication scheme based on access keys only rather than X.509 certificates (e.g. an AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY pair), which furthermore can (and should) be managed via AWS Identity and Access Management (IAM) to avoid the risk of exposing your main AWS account credentials in the first place. See my answer to How to download an EC2 X.509 certificate with an IAM User account? for more details on this.
Good luck!

Move Amazon EC2 AMIs between regions via web-interface?

Any easy way to move a custom AMI image between regions? (Tokyo -> Singapore)
I know you can mess with the API and S3 to get it done, but is there any easier way to do it?
As of December 2012, Amazon supports migrating an AMI to another region through the UI tool (AWS Management Console). See their documentation here.
So, how I've done it is:
1. From the AMI, find out the Snapshot ID and how it is attached (e.g. /dev/sda1).
2. Select the Snapshot, click "Copy", set the destination region and make the copy (takes a while!).
3. Select the new Snapshot, click "Create Image" and fill in:
Architecture: choose 32- or 64-bit
Name/Description: give it one
Kernel ID: when migrating a Linux AMI, choosing "default" may fail. What worked for me was to go to the Amazon kernels listing here to find the kernels Amazon supports, then specify one of those when creating the image.
Root Device Name: /dev/sda1
Then click "Yes, Create".
4. Launch an instance from the new AMI and test that you can connect.
You can do it using Eric's post:
http://alestic.com/2010/10/ec2-ami-copy
The following assumes your AWS Console utilities are installed in /opt/aws/bin/, that JAVA_HOME=/usr, and that you are running the i386 architecture; otherwise replace it with x86_64.
1) Run a live snapshot, assuming your image can fit in 1.5 GB and you have that much to spare in /mnt (check by running df).
/opt/aws/bin/ec2-bundle-vol -d /mnt -k /home/ec2-user/.ec2/pk-XXX.pem -c /home/ec2-user/.ec2/cert-XXX.pem -u 123456789 -r i386 -s 1500
2) Upload to current region's S3 bucket
/opt/aws/bin/ec2-upload-bundle -b S3_BUCKET -m /mnt/image.manifest.xml -a abcxyz -s SUPERSECRET
3) Transfer the image to EU S3 bucket
/opt/aws/bin/ec2-migrate-image -K /home/ec2-user/.ec2/pk-XXX.pem -C /home/ec2-user/.ec2/cert-XXX.pem -o abcxyz -w SUPERSECRET --bucket S3_BUCKET_US --destination-bucket S3_BUCKET_EU --manifest image.manifest.xml --location EU
4) Register your AMI so you can fire up the instance in Ireland
/opt/aws/bin/ec2-register -K /home/ec2-user/.ec2/pk-XXX.pem -C /home/ec2-user/.ec2/cert-XXX.pem http://s3.amazonaws.com:80/S3_BUCKET/image.manifest.xml --region eu-west-1 -name DEVICENAME -a i386 --kernel aki-xxx
There are API tools for this. http://docs.amazonwebservices.com/AWSEC2/latest/CommandLineReference/ApiReference-cmd-MigrateImage.html
I think that is now outdated by ec2-bundle-vol and ec2-migrate-image. BTW, you can also take a look at this Perl script by Lincoln D. Stein:
http://search.cpan.org/~lds/VM-EC2/bin/migrate-ebs-image.pl
Usage:
$ migrate-ebs-image.pl --from us-east-1 --to ap-southeast-1 ami-123456
Amazon have just announced support for this functionality in this blog post. Note that the answer from dmohr relates to copying EBS snapshots, not AMIs.
In case the blog post is unavailable, quoting the relevant parts:
To use AMI Copy, simply select the AMI to be copied from within the
AWS Management Console, choose the destination region, and start the
copy. AMI Copy can also be accessed via the EC2 Command Line
Interface or EC2 API as described in the EC2 User’s Guide. Once the
copy is complete, the new AMI can be used to launch new EC2 instances
in the destination region.
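For reference, a sketch of the same copy from the command line with the modern AWS CLI (AMI ID, name, and regions are placeholders; Tokyo to Singapore would be ap-northeast-1 to ap-southeast-1):
aws ec2 copy-image --source-region ap-northeast-1 --source-image-id ami-12345678 --name my-ami-copy --region ap-southeast-1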
AWS now supports copying an EBS snapshot to another region via the UI/CLI/API. You can copy the snapshot and then make an AMI from it. Direct AMI copy is coming - from AWS:
"We also plan to launch Amazon Machine Image (AMI) Copy as a follow-up to this feature, which will enable you to copy both public and custom-created AMIs across regions."
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-copy-snapshot.html
Ylastic allows you to move EBS-backed Linux images between regions.
It's $25 or $50 per month, but it looks like you can evaluate it for a week.
I just did this using a script on CloudyScripts, worked fantastically: https://cloudyscripts.com/tool/show/5 (and it's free).
As of 2017, it's pretty simple: select the AMI in the AWS Management Console, choose the destination region, and start the copy.
I'll add Scalr to the list of tools you can use (disclaimer: I work there). Within Scalr, you can create your own AMIs (we call them roles). Once your role is ready, you just have to choose where you want to deploy it (in any region).
Scalr is open source, released under the Apache 2 license: you can download it and install it yourself. Otherwise, it is also available as a hosted version including support. Alternatives to Scalr include RightScale and enStratus.