Packer SSH timeout

I am trying to build images with Packer in a Jenkins pipeline. However, the Packer SSH provisioner does not work: SSH never becomes available and the build errors out with a timeout.
Further investigation shows that the image is missing the network interface file ifcfg-eth0 in the /etc/sysconfig/network-scripts directory, so it never gets an IP address and does not accept SSH connections.
The problem is that there are many such images to generate, and I can't open each one manually in the VirtualBox GUI, correct the issue, and repack. Is there any other possible solution? Here is my template:
{
  "variables": {
    "build_base": ".",
    "isref_machine": "create-ova-caf",
    "build_name": "virtual-box-jenkins",
    "output_name": "packer-virtual-box",
    "disk_size": "40000",
    "ram": "1024",
    "disk_adapter": "ide"
  },
  "builders": [
    {
      "name": "{{user `build_name`}}",
      "type": "virtualbox-iso",
      "guest_os_type": "Other_64",
      "iso_url": "rhelis74_1710051533.iso",
      "iso_checksum": "",
      "iso_checksum_type": "none",
      "hard_drive_interface": "{{user `disk_adapter`}}",
      "ssh_username": "root",
      "ssh_password": "Secret1.0",
      "shutdown_command": "shutdown -P now",
      "guest_additions_mode": "disable",
      "boot_wait": "3s",
      "boot_command": ["auto<enter>"],
      "ssh_timeout": "40m",
      "headless": "true",
      "vm_name": "{{user `output_name`}}",
      "disk_size": "{{user `disk_size`}}",
      "output_directory": "{{user `build_base`}}/output-{{build_name}}",
      "format": "ovf",
      "vrdp_bind_address": "0.0.0.0",
      "vboxmanage": [
        ["modifyvm", "{{.Name}}", "--nictype1", "virtio"],
        ["modifyvm", "{{.Name}}", "--memory", "{{user `ram`}}"]
      ],
      "skip_export": true,
      "keep_registered": true
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": ["ls"]
    }
  ]
}

When you don't need the SSH connection during the provisioning process, you can switch it off. See the Packer documentation on the communicator; it describes the option none, which switches off communication between host and guest.
{
  "builders": [
    {
      "type": "virtualbox-iso",
      "communicator": "none"
    }
  ]
}
See the Packer virtualbox-iso builder documentation.
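Note that with "communicator": "none" Packer cannot talk to the guest at all, so the inline shell provisioner in the template above has to be removed as well. If SSH access is actually needed later, another option is to fix the root cause during the unattended install itself: assuming the "auto<enter>" boot command hands control to a kickstart-style answer file (an assumption, since the question does not show one), a %post step along these lines could create the missing interface file so the guest picks up an address via DHCP:
%post
# sketch: create the interface config the question says is missing
cat > /etc/sysconfig/network-scripts/ifcfg-eth0 <<'EOF'
DEVICE=eth0
TYPE=Ethernet
BOOTPROTO=dhcp
ONBOOT=yes
EOF
%end
With networking fixed at install time, the original SSH-based provisioning should work without opening each VM in the VirtualBox GUI.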

Related

Newman loads the pfx certificate but it is not used to connect to the endpoint

I'm having an issue executing a Postman collection from Newman that involves loading a PFX certificate to establish a mutual TLS connection.
In the Postman application, the certificate is loaded correctly (from the settings) and used for the domain https://domain1.com to connect to the mutual-TLS counterpart server.
When I export the JSON collection and environment, there is no mention of the domain or the associated certificate.
Looking at the JSON schema, Newman accepts a certificate definition in the request, but applying it does not work. Here is my example:
"request": {
  "method": "GET",
  "header": [],
  "certificate": {
    "name": "Dev or Test Server",
    "matches": ["https://domain1.com/*"],
    "cert": { "src": "./certificate.pfx" }
  },
  "url": {
    "raw": "https://domain1.com/as/authorization.oauth2",
    "host": ["https://domain1.com"],
    "path": ["as", "authorization.oauth2"],
    "query": [
      {
I also tried to apply the certificate configuration in an external file cert-list.json with the following content:
[{
  "name": "Dev or Test Server",
  "matches": ["https://domain1.com/*"],
  "cert": { "src": "./certificate-t.pfx" }
}]
but it does not work either.
Here is the newman command:
newman run domain.postman_collection.json -n 1 --ssl-client-cert-list cert-list.json -e env.postman_environment.json -r cli --verbose
Do you know what I am doing wrong?
Change cert to pfx. Try:
[{
  "name": "Dev or Test Server",
  "matches": ["https://domain1.com/*"],
  "pfx": { "src": "./certificate-t.pfx" }
}]

Why is Ansible still able to connect to the node without SSH

I created two Ubuntu Docker containers, one as the control node and another as a managed node. I ran
ansible all -m service -a "name=ssh state=stopped"
and it shows:
172.18.0.3 | CHANGED => {
    "changed": true,
    "name": "ssh",
    "status": {
        "enabled": {
            "changed": false,
            "rc": null,
            "stderr": null,
            "stdout": null
        },
        "stopped": {
            "changed": true,
            "rc": 0,
            "stderr": "",
            "stdout": " * Stopping OpenBSD Secure Shell server sshd\n ...done.\n"
        }
    }
}
Then I tried to SSH in manually and it failed because the OpenSSH server had stopped, which is expected. Then I ran another Ansible command to start it again.
# ansible all -m service -a "name=ssh state=started"
172.18.0.3 | CHANGED => {
    "changed": true,
    "name": "ssh",
    "status": {
        "enabled": {
            "changed": false,
            "rc": null,
            "stderr": null,
            "stdout": null
        },
        "started": {
            "changed": true,
            "rc": 0,
            "stderr": "",
            "stdout": " * Starting OpenBSD Secure Shell server sshd\n ...done.\n"
        }
    }
}
I am quite amazed that Ansible was able to connect to the node when I had already stopped its SSH service. Is there some method other than SSH that Ansible uses to connect to the node?
Ansible can connect to targets through a variety of protocols.
Take a look at the connection plugins list.
In your case, for Docker containers, it uses the Docker API.
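A quick way to confirm which connection plugin is actually in use is to run an ad-hoc command with extra verbosity, or to pin the connection explicitly in the inventory. A minimal sketch (the container name is a placeholder):
# the verbose output names the connection plugin used for each host
ansible all -m ping -vvvv
# inventory entry that explicitly targets a container over the Docker API instead of SSH
mycontainer ansible_connection=docker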

AWS AMI cannot retrieve password after packer creation using private key

I am building a Windows Server AMI using Packer. It works fine with a hardcoded password, but I am trying to create the AMI so that the password is auto-generated. I tried what was suggested in the post below, and the Packer logs look good: it gets a password.
How to create windows image in packer using the keypair
However, when I create an EC2 instance from the AMI in Terraform, the Windows password can no longer be retrieved. What is missing here?
Packer JSON:
{
  "builders": [
    {
      "profile": "blah",
      "type": "amazon-ebs",
      "region": "eu-west-1",
      "instance_type": "t2.micro",
      "source_ami_filter": {
        "filters": {
          "virtualization-type": "hvm",
          "name": "*Windows_Server-2012-R2*English-64Bit-Base*",
          "root-device-type": "ebs"
        },
        "most_recent": true,
        "owners": "amazon"
      },
      "ssh_keypair_name": "shared.key",
      "ssh_private_key_file": "./common/sharedkey.pem",
      "ssh_agent_auth": "true",
      "ami_name": "test-{{timestamp}}",
      "user_data_file": "./common/bootstrap_win.txt",
      "communicator": "winrm",
      "winrm_username": "Administrator"
    }
  ]
}
Adding Ec2Config.exe -sysprep at the end worked:
{
  "type": "windows-shell",
  "inline": ["C:\\progra~1\\Amazon\\Ec2ConfigService\\Ec2Config.exe -sysprep"]
}
Be aware, though, that my IIS configuration does not seem to work after sysprep.
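On the Terraform side, also note that the auto-generated Administrator password can only be fetched if password data retrieval is enabled on the instance and the result is decrypted with the same key pair. A minimal sketch, assuming current Terraform syntax; the AMI ID and resource names are placeholders:
resource "aws_instance" "windows" {
  ami               = "ami-xxxxxxxx"   # AMI produced by the Packer build
  instance_type     = "t2.micro"
  key_name          = "shared.key"     # key pair referenced in the Packer template
  get_password_data = true             # required before password_data is populated
}

output "administrator_password" {
  sensitive = true
  value     = rsadecrypt(aws_instance.windows.password_data, file("./common/sharedkey.pem"))
}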

Packer Waiting for SSH

I am new to Packer and have researched this issue in great detail. I am trying to create a 32-bit Ubuntu VM, running Packer on a Windows 10 host. Once the installation is complete, the VM reboots and I am shown the GUI login on the VM, while Packer on the host still says it is waiting for SSH to become available. How can I enable SSH so that Packer can connect to my VM? Here is my JSON template:
{
  "builders": [
    {
      "type": "virtualbox-iso",
      "vm_name": "{{ user `alias` }}",
      "vboxmanage": [
        [ "modifyvm", "{{.Name}}", "--cpus", "1" ],
        [ "modifyvm", "{{.Name}}", "--memory", "{{user `ram`}}" ],
        [ "modifyvm", "{{.Name}}", "--clipboard", "bidirectional" ],
        [ "modifyvm", "{{.Name}}", "--draganddrop", "bidirectional" ],
        [ "modifyvm", "{{.Name}}", "--audio", "none" ],
        [ "modifyvm", "{{.Name}}", "--nic1", "intnet" ],
        [ "modifyvm", "{{.Name}}", "--nic2", "null" ],
        [ "modifyvm", "{{.Name}}", "--vram", "16" ],
        [ "modifyvm", "{{.Name}}", "--mouse", "usbtablet" ]
      ],
      "guest_os_type": "Ubuntu",
      "iso_url": "{{ user `iso_url` }}",
      "iso_checksum": "{{ user `iso_checksum` }}",
      "iso_checksum_type": "md5",
      "disk_size": "{{ user `disk_size` }}",
      "ssh_username": "{{ user `ssh_username` }}",
      "ssh_password": "{{ user `ssh_password` }}",
      "ssh_timeout": "{{ user `ssh_timeout` }}",
      "guest_additions_mode": "attach",
      "headless": "{{ user `headless` }}",
      "boot_wait": "3s",
      "boot_command": [
        "<enter><wait><esc><enter><wait>",
        "/install/vmlinuz<wait>",
        " {{user `preseed_path`}}",
        " debian-installer/locale=en_US console-setup/ask_detect=false<wait>",
        " console-setup/layoutcode=us<wait>",
        " keyboard-configuration/layoutcode=us<wait>",
        " passwd/user-password={{ user `ssh_password` }}<wait>",
        " passwd/user-password-again={{ user `ssh_password` }}<wait>",
        " finish-install/reboot_in_progress=note<wait>",
        " netcfg/use_autoconfig=false<wait>",
        " cdrom-detect/eject boolean=false<wait>",
        " initrd=/install/initrd.gz<wait>",
        "<enter><wait>"
      ],
      "shutdown_command": "sudo shutdown -h now"
    }
  ],
  "post-processors": [
    {
      "type": "vagrant",
      "output": "C://{{ user `box_name` }}.box"
    }
  ],
  "variables": {
    "headless": "false",
    "iso_checksum": "7",
    "iso_url": "{{file path}}",
    "disk_size": "256000",
    "alias": "packervm",
    "box_name": "ubuntu_custom",
    "ssh_timeout": "20m",
    "ssh_username": "{{username}}",
    "ssh_password": "{{password}}",
    "preseed_path": "file=/cdrom/preseed/preseed.cfg",
    "ram": "2048"
  }
}
P.S. Yes, I have looked at example templates before coming here to ask this question.
Actually, I got it. It was a combination of setting ssh_port to 22, setting ssh_address to my VM's address, setting ssh_skip_nat_mapping to true, and then changing my NIC from the internal network to host-only and configuring it.
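In template form that workaround would look roughly like the following sketch (the host-only IP is a placeholder, and in current Packer versions the address option is spelled ssh_host):
"ssh_port": 22,
"ssh_host": "192.168.56.101",
"ssh_skip_nat_mapping": true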
You overwrite Packer's network setup, so the host won't be able to reach the guest. To fix it, remove the following two lines:
[ "modifyvm", "{{.Name}}", "--nic1", "intnet" ],
[ "modifyvm", "{{.Name}}", "--nic2", "null" ],

Making a storage plugin for HDFS on Apache Drill

I'm trying to make a storage plugin for Hadoop (HDFS) and Apache Drill.
Actually, I'm confused: I don't know what to set as the port for the hdfs:// connection, or what to set as the location.
This is my plugin:
{
  "type": "file",
  "enabled": true,
  "connection": "hdfs://localhost:54310",
  "workspaces": {
    "root": {
      "location": "/",
      "writable": false,
      "defaultInputFormat": null
    },
    "tmp": {
      "location": "/tmp",
      "writable": true,
      "defaultInputFormat": null
    }
  },
  "formats": {
    "psv": {
      "type": "text",
      "extensions": ["tbl"],
      "delimiter": "|"
    },
    "csv": {
      "type": "text",
      "extensions": ["csv"],
      "delimiter": ","
    },
    "tsv": {
      "type": "text",
      "extensions": ["tsv"],
      "delimiter": "\t"
    },
    "parquet": {
      "type": "parquet"
    },
    "json": {
      "type": "json"
    },
    "avro": {
      "type": "avro"
    }
  }
}
So, is it correct to set localhost:54310? I got that with the command:
hdfs getconf -nnRpcAddresses
Or should it be :8020?
Second question: what do I need to set as the location? My Hadoop folder is at /usr/local/hadoop, and in there you can find /etc, /bin, /lib, /log and so on. So, do I need to set the location to a path on my datanode, or something else?
Third question: when I connect to Drill, I go through sqlline and then connect to my ZooKeeper like this:
!connect jdbc:drill:zk=localhost:2181
My question here is: after I make the storage plugin and connect to Drill with zk, can I query HDFS files?
I'm very sorry if this is a noob question, but I haven't found anything useful on the internet, or at least it hasn't helped me.
If you can explain some of this to me, I'll be very grateful.
As per the Drill docs:
{
  "type": "file",
  "enabled": true,
  "connection": "hdfs://10.10.30.156:8020/",
  "workspaces": {
    "root": {
      "location": "/user/root/drill",
      "writable": true,
      "defaultInputFormat": null
    }
  },
  "formats": {
    "json": {
      "type": "json"
    }
  }
}
In "connection", put the namenode server address.
If you are not sure about this address, check the fs.default.name or fs.defaultFS property in core-site.xml.
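For example, the effective value can also be read with the HDFS CLI (the port shown is only the common default):
hdfs getconf -confKey fs.defaultFS
# typically prints something like hdfs://localhost:8020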
Coming to "workspaces", you can define workspaces here. In the example above there is a workspace named root with the location /user/root/drill.
This is your HDFS location.
If you have files under the /user/root/drill HDFS directory, you can query them using this workspace name.
For example, if abc.csv is under this directory:
select * from dfs.root.`abc.csv`
After successfully creating the plugin, you can start Drill and start querying.
You can query any directory, irrespective of workspaces.
Say you want to query employee.json in the /tmp/data HDFS directory. The query is:
select * from dfs.`/tmp/data/employee.json`
I had a similar problem where Drill could not read the DFS server. In the end, the problem was caused by the namenode port.
The default address of the namenode web UI is http://localhost:50070/.
The default address of the namenode server (RPC) is hdfs://localhost:8020/.
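So for the localhost setup in the question, the plugin's "connection" value would presumably use the namenode RPC port (whatever hdfs getconf reports, 8020 by default) rather than the 50070 web UI port, along the lines of:
"connection": "hdfs://localhost:8020"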