I am building a Windows Server AMI using Packer. It works fine with a hardcoded password, but I am trying to create the AMI so that the password is auto-generated. I tried what was suggested below, and the Packer logs look good: it gets a password.
How to create windows image in packer using the keypair
However, when I create an EC2 instance from the AMI in Terraform, the Windows password cannot be retrieved. What is missing here?
Packer JSON:
{
  "builders": [
    {
      "profile": "blah",
      "type": "amazon-ebs",
      "region": "eu-west-1",
      "instance_type": "t2.micro",
      "source_ami_filter": {
        "filters": {
          "virtualization-type": "hvm",
          "name": "*Windows_Server-2012-R2*English-64Bit-Base*",
          "root-device-type": "ebs"
        },
        "most_recent": true,
        "owners": ["amazon"]
      },
      "ssh_keypair_name": "shared.key",
      "ssh_private_key_file": "./common/sharedkey.pem",
      "ssh_agent_auth": true,
      "ami_name": "test-{{timestamp}}",
      "user_data_file": "./common/bootstrap_win.txt",
      "communicator": "winrm",
      "winrm_username": "Administrator"
    }
  ]
}
Adding a provisioner that runs Ec2Config.exe -sysprep at the end worked:
{
  "type": "windows-shell",
  "inline": ["C:\\progra~1\\Amazon\\Ec2ConfigService\\Ec2Config.exe -sysprep"]
}
Beware, though: it seems my IIS configuration does not work after sysprep.
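On the Terraform side, the auto-generated password is then retrieved via password data. A minimal sketch, assuming the same key pair Packer used (resource names and the AMI ID are placeholders):

resource "aws_instance" "win" {
  ami               = "ami-xxxxxxxx"   # the Packer-built AMI
  instance_type     = "t2.micro"
  key_name          = "shared.key"     # same key pair Packer used
  get_password_data = true             # ask AWS for the encrypted password blob
}

output "administrator_password" {
  sensitive = true
  value     = rsadecrypt(aws_instance.win.password_data, file("./common/sharedkey.pem"))
}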
I'm having an issue executing a Postman collection from Newman which involves loading a PFX certificate to establish a TLS mutual-authentication (TLSMA) connection.
From the Postman application, the certificate is loaded correctly (from the settings) and used for the domain https://domain1.com to connect with a TLSMA counterpart server.
When I export the JSON collection and environment, there is no mention of the associated domain and certificate.
Checking the JSON schema here, Newman accepts a certificate definition in the request, but applying it does not work. Here is my example:
"request": {
  "method": "GET",
  "header": [],
  "certificate": {
    "name": "Dev or Test Server",
    "matches": ["https://domain1.com/*"],
    "cert": { "src": "./certificate.pfx" }
  },
  "url": {
    "raw": "https://domain1.com/as/authorization.oauth2",
    "host": ["https://domain1.com"],
    "path": ["as", "authorization.oauth2"],
    "query": [
      {
I also tried applying the certificate configuration in an external file, cert-list.json, with the following content:
[{
  "name": "Dev or Test Server",
  "matches": ["https://domain1.com/*"],
  "cert": { "src": "./certificate-t.pfx" }
}]
but it does not work either.
Here is the Newman command:
newman run domain.postman_collection.json -n 1 --ssl-client-cert-list cert-list.json -e env.postman_environment.json -r cli --verbose
Do you know what I am doing wrong?
Change cert to pfx
For a .pfx bundle, Newman expects the pfx key in the certificate entry rather than cert (which is for PEM-style certificates). Try:
[{
  "name": "Dev or Test Server",
  "matches": ["https://domain1.com/*"],
  "pfx": { "src": "./certificate-t.pfx" }
}]
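If the .pfx is password-protected, the certificate list entry also accepts a passphrase field; a hedged sketch (the passphrase value is a placeholder):

[{
  "name": "Dev or Test Server",
  "matches": ["https://domain1.com/*"],
  "pfx": { "src": "./certificate-t.pfx" },
  "passphrase": "<your-pfx-password>"
}]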
I am trying to build images with Packer in a Jenkins pipeline. However, the Packer SSH provisioner does not work: SSH never becomes available, and the build errors out with a timeout.
Further investigation shows that the image is missing the network interface file ifcfg-eth0 in the /etc/sysconfig/network-scripts directory, so it never gets an IP address and does not accept SSH connections.
The problem is that there are many such images to be generated, and I can't open each one manually in the VirtualBox GUI to correct the issue and repack. Is there any other possible solution?
{
  "variables": {
    "build_base": ".",
    "isref_machine": "create-ova-caf",
    "build_name": "virtual-box-jenkins",
    "output_name": "packer-virtual-box",
    "disk_size": "40000",
    "ram": "1024",
    "disk_adapter": "ide"
  },
  "builders": [
    {
      "name": "{{user `build_name`}}",
      "type": "virtualbox-iso",
      "guest_os_type": "Other_64",
      "iso_url": "rhelis74_1710051533.iso",
      "iso_checksum": "",
      "iso_checksum_type": "none",
      "hard_drive_interface": "{{user `disk_adapter`}}",
      "ssh_username": "root",
      "ssh_password": "Secret1.0",
      "shutdown_command": "shutdown -P now",
      "guest_additions_mode": "disable",
      "boot_wait": "3s",
      "boot_command": ["auto<enter>"],
      "ssh_timeout": "40m",
      "headless": true,
      "vm_name": "{{user `output_name`}}",
      "disk_size": "{{user `disk_size`}}",
      "output_directory": "{{user `build_base`}}/output-{{build_name}}",
      "format": "ovf",
      "vrdp_bind_address": "0.0.0.0",
      "vboxmanage": [
        ["modifyvm", "{{.Name}}", "--nictype1", "virtio"],
        ["modifyvm", "{{.Name}}", "--memory", "{{user `ram`}}"]
      ],
      "skip_export": true,
      "keep_registered": true
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": ["ls"]
    }
  ]
}
When you don't need the SSH connection during the provisioning process, you can switch it off. See the Packer documentation on the communicator option: setting it to none switches off communication between host and guest.
{
  "builders": [
    {
      "type": "virtualbox-iso",
      "communicator": "none"
    }
  ]
}
Packer builders documentation: virtualbox-iso
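Note that with communicator set to none, provisioners that need a connection (such as the shell provisioner in the question's template) cannot run. If you do need SSH, the root cause (the missing ifcfg-eth0 file) can instead be fixed during the unattended install itself. Assuming the ISO is driven by a kickstart file (the auto boot command suggests one), a %post snippet along these lines writes the interface config so the VM gets an IP:

# %post section of the kickstart file (assumption: the ISO uses one)
cat > /etc/sysconfig/network-scripts/ifcfg-eth0 <<'EOF'
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
EOF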
The reason I'm asking this question is that the vsce CLI (installed via npm) in the Command Prompt doesn't let me register my theme in the Visual Studio Code Marketplace. I followed the VS Code website's tutorial on how to publish an extension. However, when I typed my Personal Access Token into cmd as shown, this is what I got:
> vsce login 'my publisher name'
Error: Access Denied: 'Username' needs the following permission(s) on the resource /publisher name to perform this action: View user permissions on a resource
I tried several times, and even gave it full access to all accessible organizations in my Azure DevOps. For your information, my computer runs Windows 8.1.
This is the package.json file I tried to register:
{
  "name": "blacklady-code-workspace",
  "displayName": "Black Lady Theme",
  "description": "Modeled after the Black Lady from Sailor Moon R.",
  "version": "0.0.1",
  "publisher": "ayaimarion",
  "repository": {
    "url": "https://github.com/ZanJang/blacklady-theme-ver-0.0.1"
  },
  "engines": {
    "vscode": "^1.30.0"
  },
  "categories": [
    "Themes"
  ],
  "contributes": {
    "themes": [
      {
        "label": "Black Lady",
        "uiTheme": "vs-dark",
        "path": "./themes/Black Lady-color-theme.json"
      }
    ]
  }
}
If there's something I did wrong, let me know.
My Azure DevOps organization: https://dev.azure.com/ayamaki
I had the same issue. I had not yet created the publisher in the Marketplace.
Once I created the new publisher, the vsce login command succeeded.
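In other words, the publisher ID must already exist in the Marketplace before vsce can log in to it. A minimal sketch of the flow, using the publisher ID from the package.json above:

# 1. Create the publisher at https://marketplace.visualstudio.com/manage
# 2. Log in with the same ID as package.json's "publisher" field
vsce login ayaimarion
# 3. Publish the extension from the project directory
vsce publish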
I have a web app on OpenShift v3 (all-in-one), using the WildFly builder image. In addition, I created a service named xyz to point to an external host and IP, something like this:
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": { "name": "xyz" },
  "spec": {
    "ports": [
      {
        "port": 61616,
        "protocol": "TCP",
        "targetPort": 61616
      }
    ],
    "selector": {}
  }
}
I also have an endpoint, pointing externally, but that is not relevant to this question.
When deployed, my program can access an environment variable named XYZ_PORT=tcp://172.30.192.186:61616
However, I cannot figure out how to see the values of all such variables, either via the web console or using the CLI. In the web console, I cannot see them being injected into the YAML.
I tried some of the oc env options, but none seem to list what I want.
Let's say you are deploying kitchensink; the command below should list all the environment variables:
oc env bc/kitchensink --list
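Note that service-discovery variables such as XYZ_PORT are injected at runtime rather than defined on the build or deployment config, so they will not appear in the spec. A sketch of both views (resource and pod names are placeholders):

# List variables defined on a deployment config
oc env dc/kitchensink --list

# Show the actual runtime environment of a running pod,
# including service-injected variables such as XYZ_PORT
oc rsh <pod-name> env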
I'm trying to make a storage plugin for Hadoop (HDFS) in Apache Drill.
Actually, I'm confused: I don't know what to set as the port for the hdfs:// connection, and what to set for the location.
This is my plugin:
{
  "type": "file",
  "enabled": true,
  "connection": "hdfs://localhost:54310",
  "workspaces": {
    "root": {
      "location": "/",
      "writable": false,
      "defaultInputFormat": null
    },
    "tmp": {
      "location": "/tmp",
      "writable": true,
      "defaultInputFormat": null
    }
  },
  "formats": {
    "psv": {
      "type": "text",
      "extensions": ["tbl"],
      "delimiter": "|"
    },
    "csv": {
      "type": "text",
      "extensions": ["csv"],
      "delimiter": ","
    },
    "tsv": {
      "type": "text",
      "extensions": ["tsv"],
      "delimiter": "\t"
    },
    "parquet": {
      "type": "parquet"
    },
    "json": {
      "type": "json"
    },
    "avro": {
      "type": "avro"
    }
  }
}
So, is it correct to set localhost:54310? I got that with the command:
hdfs getconf -nnRpcAddresses
Or should it be :8020?
Second question: what do I need to set for location? My Hadoop folder is at
/usr/local/hadoop
and there you can find /etc, /bin, /lib, /log, and so on. Do I need to point the location at my datanode, or somewhere else?
Third question: when connecting to Drill, I go through sqlline and then connect to my ZooKeeper, like this:
!connect jdbc:drill:zk=localhost:2181
My question here is: after I create the storage plugin and connect to Drill via zk, can I query HDFS files?
I'm very sorry if this is a noob question, but I haven't found anything useful on the internet, or at least it hasn't helped me. If you are able to explain some of this to me, I'll be very grateful.
As per the Drill docs:
{
  "type": "file",
  "enabled": true,
  "connection": "hdfs://10.10.30.156:8020/",
  "workspaces": {
    "root": {
      "location": "/user/root/drill",
      "writable": true,
      "defaultInputFormat": null
    }
  },
  "formats": {
    "json": {
      "type": "json"
    }
  }
}
In "connection", put the namenode server address. If you are not sure about this address, check the fs.default.name or fs.defaultFS property in core-site.xml.
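For example, assuming the standard layout under the /usr/local/hadoop install mentioned in the question, a quick way to look it up:

# Prints the property name and the line after it (the configured value)
grep -A1 "fs.defaultFS" /usr/local/hadoop/etc/hadoop/core-site.xml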
Coming to "workspaces", you can define workspaces here. In the example above, there is a workspace named root with the location /user/root/drill.
This is an HDFS location.
If you have files under the /user/root/drill HDFS directory, you can query them using this workspace name.
Example: abc.csv is under this directory.
select * from dfs.root.`abc.csv`
After successfully creating the plugin, you can start Drill and begin querying.
You can query any directory, irrespective of workspaces.
Say you want to query employee.json in the /tmp/data HDFS directory; the query is:
select * from dfs.`/tmp/data/employee.json`
I had a similar problem: Drill could not read the DFS server. In the end, the problem was caused by the namenode port.
The default address of the namenode web UI is http://localhost:50070/.
The default address of the namenode server is hdfs://localhost:8020/.
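To confirm which address your own namenode actually advertises, a quick check (assuming the Hadoop binaries are on your PATH):

# Prints the effective fs.defaultFS value, e.g. hdfs://localhost:8020
hdfs getconf -confKey fs.defaultFS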