I am trying to get CloudWatch running properly on my Lightsail instance, which I appear to have achieved with only partial success.
I ran the wizard using sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard, which produced a config file covering numerous metrics, including CPU, memory and disk usage, as outlined here. The service loads and starts with the config file and doesn't complain about invalid JSON (that did happen a few times, but I fixed it).
I can stop the service with sudo amazon-cloudwatch-agent-ctl -a stop
I then reload the config with sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -s -m ec2 -c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json
I then verify the service is running with sudo amazon-cloudwatch-agent-ctl -a status, which outputs this:
{
"status": "running",
"starttime": "2022-01-10T21:53:12+00:00",
"configstatus": "configured",
"cwoc_status": "stopped",
"cwoc_starttime": "",
"cwoc_configstatus": "not configured",
"version": "1.247349.0b251399"
}
Logging into my CloudWatch console, I can see data being received, and the single line appearing on the graph there corresponds to the times I started and stopped the service, so it's definitely doing something. And yet the only metric that appears on that graph is mem_used_percent. Why only this one metric? Where is the rest of my data for CPU, disk, and so on? What am I doing wrong?
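For reference, one way to check from the command line which metrics have actually reached CloudWatch (a sketch; it assumes the AWS CLI is configured for the same account and region, and that the agent publishes to its default CWAgent namespace):
# List every metric the agent has published so far
aws cloudwatch list-metrics --namespace CWAgent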
Here is my config.json which, as I said, is being loaded by the service without issue.
{
"agent": {
"metrics_collection_interval": 60,
"run_as_user": "root"
},
"metrics": {
"append_dimensions": {
"ImageID": "${aws:ImageId}",
"InstanceId":"${aws:InstanceId}",
"InstanceType":"${aws:InstanceType}"
},
"metrics_collected": {
"cpu": {
"resources": [
"*"
],
"measurement": [
"cpu_usage_active"
],
"metrics_collection_interval": 60,
"totalcpu": false
},
"disk": {
"measurement": [
"free",
"total",
"used",
"used_percent"
],
"metrics_collection_interval": 60,
"resources": [
"*"
]
},
"mem": {
"measurement": [
"mem_active",
"mem_available",
"mem_available_percent",
"mem_free",
"mem_total",
"mem_used",
"mem_used_percent"
],
"metrics_collection_interval": 60
},
"netstat": {
"measurement": [
"tcp_established",
"udp_socket"
]
}
}
}
}
Any help greatly appreciated here. TIA.
You likely haven't fetched the configuration yet.
Check the logfile, i.e. /opt/aws/amazon-cloudwatch-agent/logs/amazon-cloudwatch-agent.log, to see which inputs are loaded:
2022-05-18T10:18:57Z I! Loaded inputs: mem disk
To fetch the configuration, do as follows (you'll need to adapt this to your environment - this is for systemd, on-premise, without SSM):
sudo amazon-cloudwatch-agent-ctl -a fetch-config -m onPremise -c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json
sudo systemctl restart amazon-cloudwatch-agent.service
sudo systemctl status amazon-cloudwatch-agent.service
After:
2022-05-18T11:45:05Z I! Loaded inputs: mem net netstat swap cpu disk diskio
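To confirm the change took effect without reading the whole log, you can pull just the most recent "Loaded inputs" line (a small sketch using the log path above):
# Show the latest list of inputs the agent loaded after the restart
grep "Loaded inputs" /opt/aws/amazon-cloudwatch-agent/logs/amazon-cloudwatch-agent.log | tail -n 1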
Maybe you are facing the same issue I did. In my case, two configuration JSON files
/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.d/file_config.json
/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json
were merged.
The files are then translated to
/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.toml.
When I checked that file, only the mem definition from /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.d/file_config.json had been taken. So I deleted that file and restarted the service.
sudo systemctl restart amazon-cloudwatch-agent
After the restart, the toml file contained what I expected and the metrics were in place.
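If you want to check whether the same thing is happening on your side, a quick inspection along these lines may help (a sketch; it assumes the translated toml lists its collectors in inputs.* sections, so grepping for "inputs" shows which ones survived the merge):
# Any fragments here get merged into the effective configuration
ls -l /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.d/
# Check which inputs actually made it into the translated config
grep inputs /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.toml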
I am observing an Insufficient Security error after upgrading the RabbitMQ server to 3.7.15 with Erlang 22.0.1 / 22.0.2 on CentOS 7.6.
Initial state of the system, where SSL was working:
CentOS Linux release - 7.5
RMQ - 3.7.7-1.el7
Erlang - 20.3.8.2-1.el7.x86_64
SSL continued to work after CentOS was upgraded to 7.6 and RMQ to 3.7.15 (checked after an RMQ restart).
However, once Erlang was upgraded to erlang-22.0.2-1.el7.x86_64.rpm, SSL stopped working (again checked after an RMQ restart).
RabbitMQ config:
[
{rabbitmq_management,
[{listener, [{port, 15671},
{ssl, true},
{ssl_opts, [{cacertfile, "<path>/cacert.pem"},
{certfile, "<path>/cert.pem"},
{keyfile, "<path>/key.pem"}]}
]}
]},
{rabbit, [
{log_levels, [{connection,info}]},
{tcp_listeners, []},
{ssl_listeners, [5671]},
{ssl_options, [{cacertfile,"<path>/all_cacerts.pem"},
{certfile,"<path>/cert.pem"},
{keyfile,"<path>/key.pem"},
{depth, 5},
{verify,verify_peer},
{fail_if_no_peer_cert,false}]},
{auth_mechanisms, ['PLAIN','AMQPLAIN','EXTERNAL']},
{loopback_users, []},
{ssl_cert_login_from, common_name}
]}
].
RabbitMQ enabled plugins:
[rabbitmq_auth_mechanism_ssl,rabbitmq_management,rabbitmq_shovel,rabbitmq_shovel_management].
Please help.
Edit 1:
I updated rabbitmq.config as shown below. Cert-based auth is working now.
[
 {rabbitmq_management,
  [{listener, [{port, 15671},
               {ssl, true},
               {ssl_opts, [{cacertfile, "<path>/cacert.pem"},
                           {certfile, "<path>/cert.pem"},
                           {keyfile, "<path>/key.pem"}]},
               {ssl, [{versions, ['tlsv1.3', 'tlsv1.2', 'tlsv1.1', 'tlsv1', 'sslv3']},
                      {ciphers, [{ecdhe_ecdsa,aes_256_gcm,aead,sha384}, {...}]}]}
              ]}
  ]},
 {ssl, [{versions, ['tlsv1.3', 'tlsv1.2', 'tlsv1.1', 'tlsv1', 'sslv3']}]},
 {rabbit, [
   {log_levels, [{connection,info}]},
   {tcp_listeners, [5672]},
   {ssl_listeners, [5671]},
   {ssl_options, [{cacertfile,"<path>/all_cacerts.pem"},
                  {certfile,"<path>/cert.pem"},
                  {keyfile,"<path>/key.pem"},
                  {versions, ['tlsv1.3', 'tlsv1.2', 'tlsv1.1', 'tlsv1', 'sslv3']},
                  {ciphers, [{ecdhe_ecdsa,aes_256_gcm,aead,sha384}, {...}]},
                  {depth, 5},
                  {verify,verify_peer},
                  {fail_if_no_peer_cert,false}]},
   {auth_mechanisms, ['PLAIN','AMQPLAIN','EXTERNAL']},
   {loopback_users, []},
   {ssl_cert_login_from, common_name}
 ]}
].
However, shovels using amqps on port 5671 still error out:
[error] <0.7391.6> Shovel 'ShovelTest' failed to connect (URI: amqps://<ip>:5671/<blah>): {tls_alert,{insufficient_security,"received SERVER ALERT: Fatal - Insufficient Security"}}
Shovels work fine with amqp on port 5672, though.
Please help.
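For reference, one way to compare what the 5671 listener actually negotiates before and after the Erlang upgrade (a sketch; <broker-host> is a placeholder for the broker's address):
# Probe the TLS listener and note the negotiated protocol and cipher in the output
openssl s_client -connect <broker-host>:5671 -tls1_2 < /dev/null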
I am trying to build images with Packer in a Jenkins pipeline. However, the Packer SSH provisioner does not work: SSH never becomes available and the build errors out with a timeout.
Further investigation shows that the image is missing the network interface file ifcfg-eth0 in the /etc/sysconfig/network-scripts directory, so it never gets an IP address and does not accept SSH connections.
The problem is that there are many such images to generate, and I can't open each one manually in the VirtualBox GUI, correct the issue, and repack. Is there any other possible solution?
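For context, this is roughly what the missing interface file would contain (a hedged sketch; it assumes DHCP and that the guest interface really is eth0); my Packer template follows below.
# Minimal RHEL-style interface config that, if baked into the image,
# would let the guest pick up an address via DHCP at boot
cat > /etc/sysconfig/network-scripts/ifcfg-eth0 <<'EOF'
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
EOF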
{
"variables": {
"build_base": ".",
"isref_machine":"create-ova-caf",
"build_name":"virtual-box-jenkins",
"output_name":"packer-virtual-box",
"disk_size":"40000",
"ram":"1024",
"disk_adapter":"ide"
},
"builders":[
{
"name": "{{user `build_name`}}",
"type": "virtualbox-iso",
"guest_os_type": "Other_64",
"iso_url": "rhelis74_1710051533.iso",
"iso_checksum": "",
"iso_checksum_type": "none",
"hard_drive_interface":"{{user `disk_adapter`}}",
"ssh_username": "root",
"ssh_password": "Secret1.0",
"shutdown_command": "shutdown -P now",
"guest_additions_mode":"disable",
"boot_wait": "3s",
"boot_command": [ "auto<enter>"],
"ssh_timeout": "40m",
"headless":
"true",
"vm_name": "{{user `output_name`}}",
"disk_size": "{{user `disk_size`}}",
"output_directory":"{{user `build_base`}}/output-{{build_name}}",
"format": "ovf",
"vrdp_bind_address": "0.0.0.0",
"vboxmanage": [
["modifyvm", "{{.Name}}","--nictype1","virtio"],
["modifyvm", "{{.Name}}","--memory","{{ user `ram`}}"]
],
"skip_export":true,
"keep_registered": true
}
],
"provisioners": [
{
"type":"shell",
"inline": ["ls"]
}
]
}
When you don't need the SSH connection during the provisioning process, you can switch it off. See the Packer documentation on communicators; there you will find the option none, which switches off communication between host and guest.
{
"builders": [
{
"type": "virtualbox-iso",
"communicator": "none"
}
]
}
See the Packer builders documentation for virtualbox-iso.
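A note on the sketch below: with "communicator": "none", provisioners that need a connection (like the shell provisioner in the template above) can't run, so they would have to be removed; template.json is a placeholder name for the template shown in the question.
# After setting "communicator": "none" and dropping the shell provisioner,
# validate and build as usual
packer validate template.json
packer build template.json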
I am trying to create a 3-node RabbitMQ cluster. I have the first node up and running. When I issue the join_cluster command from node 2, it throws an error saying the node is down.
rabbitmqctl join_cluster rabbit@hostname02
I am getting the following error:
Status of node rabbit@hostname02 ...
Error: unable to connect to node rabbit@hostname02: nodedown
DIAGNOSTICS
===========
attempted to contact: [rabbit@hostname02]
rabbit@hostname02:
  * connected to epmd (port 4369) on hostname02
  * epmd reports: node 'rabbit' not running at all
                  no other nodes on hostname02
  * suggestion: start the node
current node details:
- node name: 'rabbitmq-cli-30@hostname02'
- home dir: /var/lib/rabbitmq
- cookie hash: bygafwoj/ISgb3yKej1pEg==
This is my config file.
[
{rabbit, [
{cluster_nodes, {[rabbit@hostname01, rabbitmq@hostname02, rabbit@hostname03], disc}},
{cluster_partition_handling, ignore},
{tcp_listen_options,
[binary,
{packet, raw},
{reuseaddr, true},
{backlog, 128},
{nodelay, true},
{exit_on_close, false}]
},
{default_user, <<"guest">>},
{default_pass, <<"guest">>},
{log_levels, [{autocluster, debug}, {connection, info}]}
]},
{kernel, [
]},
{rabbitmq_management, [
{listener, [
{port, 15672}
]}
]}
].
% EOF
I have updated the /etc/hosts file with the details of all 3 nodes on all 3 servers. I am not sure where I am going wrong.
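For reference, the usual manual sequence for joining a node to an existing cluster looks like this (a sketch; it assumes hostname01 is the node that is already up, that the Erlang cookie in /var/lib/rabbitmq/.erlang.cookie matches on all nodes, and that the hostnames resolve as per /etc/hosts):
# Run on node 2 (and later node 3), pointing at the node that is already running
rabbitmqctl stop_app
rabbitmqctl join_cluster rabbit@hostname01
rabbitmqctl start_app
rabbitmqctl cluster_status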
Morning,
I clustered two servers, and everything worked with a rabbitmq.config that used just the LDAP backend. I then tried to change it so it would use LDAP just for authentication and the internal backend for authorization. Now I can log into the management console on the first server (rabbitmq01p); however, if I try to access the second server's (rabbitmq02p) management console, it throws:
Got response code 500 with body
This happens even with a test internal user radmin that I created.
I am not sure what needs to change.
The rabbitmq.config:
[
 {rabbit, [
   {loopback_users, []},
   {auth_backends, [{rabbit_auth_backend_ldap, rabbit_auth_backend_internal},
                    rabbit_auth_backend_internal]},
   {log_levels, [{channel, info}, {connection, info}, {federation, info},
                 {mirroring, info}]},
   {tcp_listen_options,
    [binary,
     {packet, raw},
     {reuseaddr, true},
     {backlog, 128},
     {nodelay, true},
     {exit_on_close, false}]
   },
   {default_user, <<"radmin">>},
   {default_pass, <<"radmin">>}
 ]},
 {kernel, [
 ]},
 {rabbitmq_management, [
   {listener, [
     {port, 15672}
   ]}
   %% {listener, [{port, 12345},
   %%             {ip, "127.0.0.1"},
   %%             {ssl, true},
   %%             {ssl_opts, [{cacertfile, "/path/to/cacert.pem"},
   %%                         {certfile, "/path/to/cert.pem"},
   %%                         {keyfile, "/path/to/key.pem"}]}]},
 ]},
 {rabbitmq_auth_backend_ldap, [
   {other_bind, {"CN=LDAP Demo,OU=Generic and Shared Accounts,OU=Admin,dc=usa,dc=company,dc=com", "password"}},
   {servers, ["ldap-server.company.com"]},
   {user_dn_lookup_attribute, "sAMAccountName"},
   {dn_lookup_base, "ou=User Accounts,ou=USA,DC=company,DC=com"},
   {user_dn_pattern, "${username}#usa.company.com"},
   {use_ssl, false},
   {port, 3268},
   {log, true},
   {group_lookup_base, "ou=Groups,dc=usa,dc=company,dc=com"},
   {tag_queries, [{administrator, {in_group, "CN=Server Team,OU=Groups,DC=usa,DC=company,DC=com"}},
                  {management, {constant, true}}]}
 ]}
].
The error in the log:
=ERROR REPORT==== 13-Nov-2017::09:03:26 ===
Ranch listener rabbit_web_dispatch_sup_15672 had connection process started with cowboy_protocol:start_link/4 at <0.1234.0> exit with reason: {[{reason,{badmatch,undefined}},{mfa,{rabbit_mgmt_wm_whoami,is_authorized,2}},{stacktrace,[{rabbit_auth_backend_ldap,env,1,[{file,"src/rabbit_auth_backend_ldap.erl"},{line,580}]},{rabbit_auth_backend_ldap,log,2,[{file,"src/rabbit_auth_backend_ldap.erl"},{line,721}]},{rabbit_auth_backend_ldap,user_login_authentication,2,[{file,"src/rabbit_auth_backend_ldap.erl"},{line,74}]},{rabbit_access_control,try_authenticate,3,[{file,"src/rabbit_access_control.erl"},{line,88}]},{rabbit_access_control,'-check_user_login/2-fun-0-',4,[{file,"src/rabbit_access_control.erl"},{line,65}]},{lists,foldl,3,[{file,"lists.erl"},{line,1248}]},{rabbit_mgmt_util,is_authorized,6,[{file,"src/rabbit_mgmt_util.erl"},{line,160}]},{cowboy_rest,call,3,[{file,"src/cowboy_rest.erl"},{line,976}]}]},{req,[{socket,#Port<0.25192>},{transport,ranch_tcp},{connection,keepalive},{pid,<0.1234.0>},{method,<<"GET">>},{version,'HTTP/1.1'},{peer,{{10,2,2,144},52823}},{host,<<"esrabbitmq02p.usa.company.com">>},{host_info,undefined},{port,15672},{path,<<"/api/whoami">>},{path_info,undefined},{qs,<<>>},{qs_vals,[]},{bindings,[]},{headers,[{<<"host">>,<<"esrabbitmq02p.usa.company.com:15672">>},{<<"connection">>,<<"keep-alive">>},{<<"user-agent">>,<<"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36">>},{<<"authorization">>,<<"Basic cmFkbWpzOnJhZG1pbg==">>},{<<"content-type">>,<<"application/json">>},{<<"accept">>,<<"/">>},{<<"referer">>,<<"http://esrabbitmq02p.usa.company.com:15672/">>},{<<"accept-encoding">>,<<"gzip, deflate">>},{<<"accept-language">>,<<"en-US,en;q=0.8">>},{<<"cookie">>,<<"_SI_VID_1.681cceba2200012815576dcc=3bafef640f2a946d6f48e512; _vwo_uuid_v2=5707BE963C1A8F85D47ABE721862DCD2|e694e7c388bfb2dd860621dc71c082fc; _ceg.s=ow9umn; _ceg.u=ow9umn; RDTC=1; __utmz=234506268.1505479911.21.2.utmcsr=favorites.usa.company.com|utmccn=(referral)|utmcmd=referral|utmcct=/; _SI_VID_3.681cceba2200012815576dcc=3bafef640f2a946d6f48e512; LPVID=lmMWNkODgyOWJiMDYzN2Jk; rxVisitor=15053163485147G3CS2HA9NUHJFVP185Q0ASL8J4DDIV2; amlbcookie=03; iPlanetDirectoryPro=AQIC5wM2LY4Sfcyx28ueXpdDc1glrOUlOpBpriQ5JrEN_3Y.AAJTSQACMDIAAlNLABMtNjg0NTczODcxNTgzMTczMjU1AAJTMQACMDM.; __utma=234506268.404039322.1499344780.1509745463.1510321495.40; __utmc=234506268; _ga=GA1.2.404039322.1499344780; m=2258:cmFkbWluOnJhZG1pbg%253D%253D">>}]},{p_headers,[{<<"connection">>,[<<"keep-alive">>]}]},{cookies,undefined},{meta,[]},{body_state,waiting},{buffer,<<>>},{multipart,undefined},{resp_compress,true},{resp_state,waiting},{resp_headers,[{<<"vary">>,<<"origin">>}]},{resp_body,<<>>},{onresponse,#Fun}]},{state,{context,undefined,none,undefined}}],[{cowboy_rest,is_authorized,2,[{file,"src/cowboy_rest.erl"},{line,150}]},{cowboy_protocol,execute,4,[{file,"src/cowboy_protocol.erl"},{line,442}]}]}
I am not sure when/how I missed it, but I had to run (rerun?)
rabbitmq-plugins enable rabbitmq_auth_backend_ldap
After this, the authentication worked.
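Since the fix turned out to be a plugin that wasn't enabled, a quick way to check the same thing on every node in the cluster (a sketch; run on both rabbitmq01p and rabbitmq02p):
# List plugins and confirm rabbitmq_auth_backend_ldap shows as enabled on this node
rabbitmq-plugins list | grep ldap
# Enable it if it is missing
rabbitmq-plugins enable rabbitmq_auth_backend_ldap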