VM import on AWS fails with 'ClientError: Boot disk is not using MBR partitioning.' - amazon-s3

After a 10-hour upload to AWS S3, I tried to import the VM using this command:
aws ec2 import-image --description "My server VM" --disk-containers "file://C:\import\containers.json"
but the import task failed with the following output:
{
    "ImportImageTasks": [
        {
            "Description": "myownVM",
            "ImportTaskId": "import-ami-guid",
            "Platform": "Windows",
            "SnapshotDetails": [
                {
                    "DiskImageSize": 28333778432.0,
                    "Format": "VMDK",
                    "Status": "completed",
                    "UserBucket": {
                        "S3Bucket": "my",
                        "S3Key": "Windows 10 x64.ova"
                    }
                }
            ],
            "Status": "deleted",
            "StatusMessage": "ClientError: Boot disk is not using MBR partitioning.",
            "Tags": []
        }
    ]
}
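For reference, my containers.json follows the documented disk-containers format and looks roughly like this (bucket and key match the output above):
[
    {
        "Description": "myownVM",
        "Format": "ova",
        "UserBucket": {
            "S3Bucket": "my",
            "S3Key": "Windows 10 x64.ova"
        }
    }
]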
The VM was created with VMware Workstation 16 Pro and then exported to OVA... what have I done wrong?
I've tried googling it but found nothing matching this error.
Thanks in advance

The Windows 10 boot disk is probably formatted with GPT and not MBR, which is not supported for VMDK disk images.
From the VMIE docs: https://docs.aws.amazon.com/vm-import/latest/userguide/vmie_prereqs.html#limitations-image
UEFI/EFI boot partitions are supported only for Windows boot volumes with VHDX as the image format. Otherwise, a VM's boot volume must use Master Boot Record (MBR) partitions.
You can check from inside the VM with:
diskpart
list disk
If the disk shows an asterisk in the Gpt column, it's using GPT. If not, it's using MBR.
https://www.top-password.com/blog/tag/how-to-check-gpt-or-mbr-windows-10 has screenshots if you'd like to check via a GUI.
I'm not aware of any way to convert from GPT to MBR without wiping the drive and reinstalling Windows.
If you do reinstall, make sure you disable UEFI and Secure Boot if those options exist in the VM firmware settings in VMware Workstation. That should allow you to choose the "Custom" install during Windows setup, then delete the default partitions and recreate them, as described here:
https://answers.microsoft.com/en-us/windows/forum/windows_10-windows_install-winpc/why-the-latest-w10-cant-install-mbr-disk/04351813-f7f5-46b8-b045-7d3b43094d36
Another option would be to use VHD(X) disk images via VirtualBox instead of VMware Workstation. Theoretically, this should allow you to keep using GPT. A walkthrough for this route is here:
https://gist.github.com/peterforgacs/abebc777fcd6f4b67c07b2283cd31777
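If you'd rather convert the existing disk than rebuild the VM in VirtualBox, one possibility (a sketch; it assumes the qemu-img tool is available, and the VMDK file name below is a placeholder for whatever is inside your OVA) is to extract the VMDK from the OVA and convert it to VHDX:
# an OVA is just a tar archive containing the .ovf, .mf and .vmdk files
tar -xf "Windows 10 x64.ova"
# convert the extracted disk image to VHDX
qemu-img convert -f vmdk -O vhdx "Windows 10 x64-disk1.vmdk" "Windows 10 x64.vhdx"
You could then point the disk container at the .vhdx with "Format": "VHDX", which per the docs quoted above is the format that supports UEFI Windows boot volumes.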

Related

ECS Fargate - No Space left on Device

I deployed my ASP.NET Core application on AWS Fargate and everything was working fine. I am using the awslogs driver and logs were correctly sent to CloudWatch. But after a few days of working correctly, the only log entries I now see are "No space left on device" errors.
So no application logs are showing up because the disk is full. If I update the ECS service, logging starts working again, suggesting that the disk has been cleaned up.
This link suggests that the awslogs driver does not take up disk space and sends logs to CloudWatch instead.
https://docs.aws.amazon.com/AmazonECS/latest/userguide/task_cannot_pull_image.html
Has anyone else faced this issue, and do you know how to resolve it?
You need to set the "LibraryLogFileName" parameter in your AWS Logging configuration to null.
So in the appsettings.json file of a .Net Core application, it would look like this:
"AWS.Logging": {
"Region": "eu-west-1",
"LogGroup": "my-log-group",
"LogLevel": {
"Default": "Information",
"Microsoft": "Warning",
"Microsoft.Hosting.Lifetime": "Information"
},
"LibraryLogFileName": null
}
It depends on how you have logging configured in your application. The awslogs driver just captures everything sent to the console and ships it to CloudWatch; .NET doesn't necessarily know about this and will keep writing logs as it otherwise would.
Most likely .NET is still writing log files to whatever location it normally would.
Advice for how to troubleshoot and resolve:
First, run the application locally and check whether log files are being saved anywhere.
Second, optionally run a container test to see whether log files are being written there too (a sketch of these commands follows this list):
Make sure you have Docker installed on your machine.
Download the container image from ECR that Fargate is running: docker pull {Image URI from ECR}
Run it locally.
Do some task you know will generate some logs.
Use docker exec -it to connect to your container.
Check whether log files are being written to the location you identified in the first test.
Finally, once you have confirmed that logs are being written to files somewhere, pick one of these options:
Add a flag which can optionally be specified to disable logging to a file, and use it when running your application inside the container.
Implement some logic to clean up log files periodically or once they reach a certain size. (Keep in mind ECS containers have up to 20 GB of local storage.)
Disable all file logging (not a good option in my opinion).
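A rough sketch of the container test described above (the image URI and container name are placeholders you'd substitute from your own ECR/ECS setup):
# pull the exact image Fargate is running
docker pull 123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-app:latest
# run it locally under a throwaway name
docker run -d --name app-log-test 123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-app:latest
# exercise the app so it generates some logs, then open a shell in the container
docker exec -it app-log-test /bin/sh
# inside the container: look for large or growing files
du -ah / 2>/dev/null | sort -rh | head -20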
Best of luck!

How to deploy Strapi to an Apache cPanel

I'm setting up a Strapi install in my Apache cPanel (WHM on CentOS 7), and can't find a proper way to deploy it. I've managed to get it running, but when I try to access the dashboard (/admin), it just shows the index page (the one in public/index).
Is this the proper way to deploy Strapi to an Apache server?
Is the "--quickstart" setting only for testing purposes, or can this be used in Production? If so, what are the pre-deployment steps I need to take?
This is for a simple project that requires easy to edit content that will be grabbed via API manually from another cPanel installation.
Reading through the Strapi docs, I could only find deployment information about Heroku, Netlify and other third-party services such as these, nothing on hosting it yourself on Apache/cPanel.
I've tried setting up a "--quickstart" project locally, getting it working and then deploying via Bitbucket Pipelines. After that, I just go into the cPanel terminal and start it - but the aforementioned problem occurs: I can't access the admin dashboard.
Here's my server.json configuration:
Production
{
    "host": "api.example.com",
    "port": 1337,
    "production": true,
    "proxy": {
        "enabled": false
    },
    "cron": {
        "enabled": false
    },
    "admin": {
        "autoOpen": false
    }
}
Development
{
    "host": "localhost",
    "port": 1337,
    "proxy": {
        "enabled": false
    },
    "cron": {
        "enabled": false
    },
    "admin": {
        "autoOpen": false
    }
}
There are no console errors, nor 404s when trying to access it.
Edit
Regarding deployment with the --quickstart setting:
there are many features (mainly related to searching) that don't work properly with SQLite (lack of proper index support), not to mention possible slowness due to disk speed and the raw IOPS of the disk.
A suggestion on how to implement:
Respectfully, to deploy Strapi you likely need to do one of the following:
1. build a Docker container for it
2. write a script to deploy it
3. use SSH and do it manually
4. use a CI/CD platform and script the deployment
In summary:
Strapi is not your typical "copy the files and start Apache" deployment; it's not a flat-file system. Strapi itself is designed to run as a service, similar to Apache/Nginx/MySQL etc. They are all services (Strapi does need Apache/Nginx/Traefik in front of it to handle SSL via proxying, though).
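For the Apache side, a minimal reverse-proxy vhost sketch (assuming mod_proxy and mod_proxy_http are enabled and Strapi is reachable on 127.0.0.1:1337; adjust the host and port to match your server.json):
<VirtualHost *:80>
    ServerName api.example.com

    # forward everything to the Strapi service
    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:1337/
    ProxyPassReverse / http://127.0.0.1:1337/
</VirtualHost>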
If you get the index page when you visit /admin, it's because the admin panel has not been built.
Please run yarn build before starting your application.
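In practice, on the server that usually means something like this (a sketch; the project path is a placeholder, and npm run equivalents work if you don't use yarn):
cd /path/to/your/strapi-project
NODE_ENV=production yarn build    # builds the admin panel
NODE_ENV=production yarn start    # starts the Strapi service
You would then keep the process alive with something like pm2 or a systemd unit rather than a cPanel terminal session.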

JupyterHub server is unable to start in Terraformed EMR cluster running in private subnet

I'm creating an EMR cluster (emr-5.24.0) with Terraform, deployed into a private subnet, that includes Spark, Hive and JupyterHub.
I've added an additional configuration JSON to the deployment, which should persist the Jupyter notebooks to S3 (instead of locally on disk).
The overall architecture includes a VPC endpoint to S3 and I'm able to access the bucket I'm trying to write the notebooks to.
When the cluster is provisioned, the JupyterHub server is unable to start.
Logging into the master node and trying to start/restart the Docker container for JupyterHub does not help.
The configuration for this persistency looks like this:
[
    {
        "Classification": "jupyter-s3-conf",
        "Properties": {
            "s3.persistence.enabled": "true",
            "s3.persistence.bucket": "${project}-${suffix}"
        }
    },
    {
        "Classification": "spark-env",
        "Configurations": [
            {
                "Classification": "export",
                "Properties": {
                    "PYSPARK_PYTHON": "/usr/bin/python3"
                }
            }
        ]
    }
]
In the Terraform EMR resource definition, this is then referenced:
configurations = "${data.template_file.configuration.rendered}"
This is read from:
data "template_file" "configuration" {
template = "${file("${path.module}/templates/cluster_configuration.json.tpl")}"
vars = {
project = "${var.project_name}"
suffix = "bucket"
}
}
When I don't use persistency on the notebooks, everything works fine and I am able to log into JupyterHub.
I'm fairly certain it's not an IAM policy issue, since the EMR cluster role's policy allows the "s3:*" action.
Are there any additional steps that need to be taken in order for this to function ?
/K
It seems that Jupyter on EMR uses S3ContentsManager to connect to S3.
https://github.com/danielfrg/s3contents
I dug into S3ContentsManager a bit and found that it talks to the public S3 endpoint (as expected). Since that endpoint is public, Jupyter needs internet access, but you are running EMR in a private subnet, so I'm guessing it cannot reach the endpoint.
You might need to use a NAT gateway in a public subnet or create an S3 endpoint for your VPC.
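For the endpoint route, a minimal Terraform sketch in the same style as your config (the resource and variable names here are illustrative, not taken from your code):
resource "aws_vpc_endpoint" "s3" {
  vpc_id          = "${aws_vpc.main.id}"
  service_name    = "com.amazonaws.${var.aws_region}.s3"

  # associating the private subnet's route table is what lets it reach S3
  route_table_ids = ["${aws_route_table.private.id}"]
}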
Yup, we ran into this too. Add an S3 VPC endpoint; then, per AWS support, add a JupyterHub notebook config:
{
    "Classification": "jupyter-notebook-conf",
    "Properties": {
        "config.S3ContentsManager.endpoint_url": "\"https://s3.${aws_region}.amazonaws.com\"",
        "config.S3ContentsManager.region_name": "\"${aws_region}\""
    }
},
hth

How can I mount an EFS share to AWS Fargate?

I have an AWS EFS share where I store container logs.
I would like to mount this NFS share (AWS EFS) in AWS Fargate. Is it possible?
Any supporting documentation link would be appreciated.
You can do this since April 2020! It's a little tricky but works.
The biggest gotcha I ran into was that you need to set the "Platform version" to 1.4.0 - it will default to "Latest" which is 1.3.0.
In your task definition you need to define a volume, and in the container definition a mount point for where you want the EFS share mounted inside the container:
Volume:
"volumes": [
{
"efsVolumeConfiguration": {
"transitEncryptionPort": null,
"fileSystemId": "fs-xxxxxxx",
"authorizationConfig": {
"iam": "DISABLED",
"accessPointId": "fsap-xxxxxxxx"
},
"transitEncryption": "ENABLED",
"rootDirectory": "/"
},
"name": "efs volume name",
"host": null,
"dockerVolumeConfiguration": null
}
]
Mount the volume in the container:
"mountPoints": [
    {
        "readOnly": null,
        "containerPath": "/opt/your-app",
        "sourceVolume": "efs volume name"
    }
]
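To pin the platform version mentioned above via the CLI (the cluster and service names here are placeholders):
aws ecs update-service \
  --cluster my-cluster \
  --service my-service \
  --platform-version 1.4.0 \
  --force-new-deployment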
These posts helped me although they're missing a few details:
Tutorial: Using Amazon EFS file systems with Amazon ECS
EFSVolumeConfiguration
EFS support for Fargate is now available!
https://aws.amazon.com/about-aws/whats-new/2020/04/amazon-ecs-aws-fargate-support-amazon-efs-filesystems-generally-available/
EDIT: Since April 2020 this answer is not accurate. This was the situation until Fargate 1.4.0. If you are using earlier versions of Fargate this is still relevant, otherwise see newer answers.
Unfortunately it's not currently possible to use persistent storage with AWS Fargate however progress on this feature can be tracked using the newly launched public roadmap [1] for AWS container services [2]
Your use case seems to suggest logs. Have you considered using the AWSLogs driver [3] and shipping your application logs to CloudWatch Logs?
[1] https://github.com/aws/containers-roadmap/projects/1
[2] https://github.com/aws/containers-roadmap/issues/53
[3] https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html
Wow, you do need platform version 1.4.0, as @TheFiddlerWins suggested.

INSTALL "[AMD/ATI] Tonga XT GL [FirePro S7150]" Graphic Card on a VMu (Centos 6.9) running on XenServer 7.4

I've just started using XenServer and am doing some experiments for my company. I installed XenServer 7.4 on a box and created a CentOS 6.9 VM using XenCenter.
I got to the point where I can run the virtual operating system, but when I try to use the "Advanced Micro Devices, Inc. [AMD/ATI] Tonga XT GL [FirePro S7150]" graphics card with the command:
xe vgpu-create vm-uuid=xxx-xxx-xxx-xxx gpu-group-uuid=xxx-xxx-xxx-xxx
I receive the following error message:
The use of this feature is restricted.
I have also tried, using the graphical interface (XenCenter) with a licensed XenServer, to enable the AMD card via Tools -> Install Update: I downloaded and selected mxgpu-1.0.5.amd.iso, but I cannot complete the process because I receive the error message:
The attempt to create a VDI failed
I am running out of options. CentOS is running, but I cannot use the machine's AMD graphics card. Can you help?
Could you try running the VM with its virtual disk stored on the Local Storage repository of the host that contains the card, with that host removed from any pools? This is the default configuration, but I thought I'd mention these tips in case this box is somehow mixed into a heterogeneous pool. If the machine is part of a pool, make sure you are not passing another host's video adapter through to the VM.
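Also, before moving storage around, it may be worth confirming what the host actually exposes; these are standard xe CLI commands, and the UUIDs they print are what vgpu-create expects:
xe gpu-group-list                  # UUIDs to pass as gpu-group-uuid
xe vgpu-type-list                  # vGPU types the host/license actually exposes
xe vm-list params=uuid,name-label  # UUID to pass as vm-uuid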