I created my own AMI, and when I start my instance sshd does not start. What might be the problem?
Please find the relevant system log snippet below:
init: rcS main process (199) terminated with status 1
Entering non-interactive startup
NET: Registered protocol family 10
lo: Disabled Privacy Extensions
Bringing up loopback interface: OK
Bringing up interface eth0:
Determining IP information for eth0...type=1400 audit(1337940238.646:4): avc: denied { getattr } for pid=637 comm="dhclient-script" path="/etc/sysconfig/network" dev=xvde1 ino=136359 scontext=system_u:system_r:dhcpc_t:s0 tcontext=system_u:object_r:file_t:s0 tclass=file
martian source 255.255.255.255 from 169.254.1.0, on dev eth0
ll header: ff:ff:ff:ff:ff:ff:fe:ff:ff:ff:ff:ff:08:00
type=1400 audit(1337940239.023:5): avc: denied { getattr } for pid=647 comm="dhclient-script" path="/etc/sysconfig/network" dev=xvde1 ino=136359 scontext=system_u:system_r:dhcpc_t:s0 tcontext=system_u:object_r:file_t:s0 tclass=file
type=1400 audit(1337940239.515:6): avc: denied { getattr } for pid=674 comm="dhclient-script" path="/etc/sysconfig/network" dev=xvde1 ino=136359 scontext=system_u:system_r:dhcpc_t:s0 tcontext=system_u:object_r:file_t:s0 tclass=file
type=1400 audit(1337940239.560:7): avc: denied { getattr } for pid=690 comm="dhclient-script" path="/etc/sysconfig/network" dev=xvde1 ino=136359 scontext=system_u:system_r:dhcpc_t:s0 tcontext=system_u:object_r:file_t:s0 tclass=file
done.
OK
Starting auditd: OK
Starting system logger: OK
Starting system message bus: OK
Retrigger failed udev events OK
Starting sshd: FAILED
The problem was due to SELinux. Once I disabled SELinux at boot by passing selinux=0 as an argument on the kernel line in GRUB, the machine booted with the sshd service started and I was able to connect to it.
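For reference, that change just appends selinux=0 to the kernel line in the instance's GRUB configuration. A minimal sketch, assuming a pv-grub style /boot/grub/menu.lst (the kernel version and root device here are examples, not taken from the post):
title My custom AMI
    root (hd0)
    kernel /boot/vmlinuz-2.6.32.x ro root=/dev/xvde1 console=hvc0 selinux=0
    initrd /boot/initramfs-2.6.32.x.img
An equivalent persistent approach is setting SELINUX=disabled (or permissive) in /etc/selinux/config on the image before bundling the AMI.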
I got the following SELinux permission issues:
[ 35.353551] type=1400 audit(38.680:14): avc: denied { ioctl } for pid=266 comm="multilink" path="socket:[12798]" dev="sockfs" ino=12798 ioctlcmd=0x8946 scontext=u:r:multilink:s0 tcontext=u:r:multilink:s0 tclass=socket permissive=1
[ 35.353789] type=1400 audit(38.680:16): avc: denied { ioctl } for pid=266 comm="multilink" path="socket:[12799]" dev="sockfs" ino=12799 ioctlcmd=0x8933 scontext=u:r:multilink:s0 tcontext=u:r:multilink:s0 tclass=packet_socket permissive=1
I tried to add the following rules to fix the issue:
allowxperm multilink self:socket ioctl SIOCETHTOOL;
allowxperm multilink self:packet_socket ioctl SIOCGIFINDEX;
But it didn't work; the same issues occurred again.
Am I missing something?
Adding the following rules will fix the issue:
allow multilink self:socket { create ioctl };
allow multilink self:packet_socket { create ioctl };
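The allowxperm rules alone are not enough because an allowxperm statement only restricts which ioctl commands are permitted once the ioctl permission itself has been granted by a regular allow rule; without the base allow, the access is still denied. Combined, the policy would look roughly like this (a sketch built from the rules above):
allow multilink self:socket { create ioctl };
allowxperm multilink self:socket ioctl SIOCETHTOOL;
allow multilink self:packet_socket { create ioctl };
allowxperm multilink self:packet_socket ioctl SIOCGIFINDEX;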
I would like to use RabbitMQ to send messages from a webapp backend to a second module. On my laptop it works, but when I deploy the application on a VPS, even in dev mode, it doesn't work anymore. Could you please help me sort this out?
Current status:
If I check the queues on the VPS where both modules are installed, it looks OK (messages are added to the queue):
$ rabbitmqctl list_queues
Timeout: 60.0 seconds ...
Listing queues for vhost / ...
MyMessages 2
When I launch the second module, I get the following log:
Waiting for a request on queue : MyMessages, hosted at localhost
This comes from the following Java code:
public static void main(String[] args) throws IOException, TimeoutException {
RabbitMQConsumer rabbitMQConsumer = new RabbitMQConsumer();
rabbitMQConsumer.waitForRequests();
System.out.println("Waiting for a request on queue : " + AppConfig.QUEUE_NAME + ", hosted at " + AppConfig.QUEUE_HOST);
}
public RabbitMQConsumer() throws IOException, TimeoutException {
mapper = new ObjectMapper();
ConnectionFactory connectionFactory = new ConnectionFactory();
connectionFactory.setHost(AppConfig.QUEUE_HOST);
Connection connection = connectionFactory.newConnection();
channel = connection.createChannel();
}
public void waitForRequests() throws IOException {
DefaultConsumer consumer = new DefaultConsumer(channel) {
@Override
public void handleDelivery(String consumerTag, Envelope envelope, AMQP.BasicProperties properties, byte[] body) throws IOException {
try {
System.out.println("Message received ! ");
channel.basicAck(envelope.getDeliveryTag(), false);
} catch (Exception e) {
e.printStackTrace();
}
}
};
channel.queueDeclare(AppConfig.QUEUE_NAME, true, false, false, null);
channel.basicConsume(AppConfig.QUEUE_NAME, consumer);
}
I think both modules are looking at the same queue, and there are messages in the queue, so to me it looks like the messages are not being consumed. I've looked at the status of RabbitMQ, but I do not know how to interpret it:
$ invoke-rc.d rabbitmq-server status
● rabbitmq-server.service - RabbitMQ broker
Loaded: loaded (/lib/systemd/system/rabbitmq-server.service; enabled; vendor preset: enabled)
Active: active (running) since Sat 2018-04-07 18:24:59 CEST; 1h 38min ago
Process: 17103 ExecStop=/usr/lib/rabbitmq/bin/rabbitmqctl shutdown (code=exited, status=0/SUCCESS)
Main PID: 17232 (beam.smp)
Status: "Initialized"
Tasks: 84 (limit: 4915)
CGroup: /system.slice/rabbitmq-server.service
├─17232 /usr/lib/erlang/erts-9.3/bin/beam.smp -W w -A 64 -P 1048576 -t 5000000 -stbt db -zdbbl 1280000 -K true -- -root /usr/lib/erlang -progname erl -- -home /var/lib/rabbitmq -- -pa /usr/lib/rabbitmq/lib/rabbitmq_server-3.7.4/ebin -noshell -noinput -s rabbit boot -sname rabbit@vps5322 -boot start_sasl -kernel inet_default_connect_options [{nodelay,true}] -sasl errlog_type error -sasl sasl_error_logger false -rabbit lager_log_root "/var/log/rabbitmq" -rabbit lager_default_file "/var/log/rabbitmq/rabbit@vps5322.log" -rabbit lager_upgrade_file "/var/log/rabbitmq/rabbit@vps5322_upgrade.log" -rabbit enabled_plugins_file "/etc/rabbitmq/enabled_plugins" -rabbit plugins_dir "/usr/lib/rabbitmq/plugins:/usr/lib/rabbitmq/lib/rabbitmq_server-3.7.4/plugins" -rabbit plugins_expand_dir "/var/lib/rabbitmq/mnesia/rabbit@vps5322-plugins-expand" -os_mon start_cpu_sup false -os_mon start_disksup false -os_mon start_memsup false -mnesia dir "/var/lib/rabbitmq/mnesia/rabbit@vps5322" -kernel inet_dist_listen_min 25672 -kernel inet_dist_listen_max 25672
├─17319 /usr/lib/erlang/erts-9.3/bin/epmd -daemon
├─17453 erl_child_setup 1024
├─17475 inet_gethost 4
└─17476 inet_gethost 4
Apr 07 18:24:57 vps5322 rabbitmq-server[17232]: ## ##
Apr 07 18:24:57 vps5322 rabbitmq-server[17232]: ## ## RabbitMQ 3.7.4. Copyright (C) 2007-2018 Pivotal Software, Inc.
Apr 07 18:24:57 vps5322 rabbitmq-server[17232]: ########## Licensed under the MPL. See http://www.rabbitmq.com/
Apr 07 18:24:57 vps5322 rabbitmq-server[17232]: ###### ##
Apr 07 18:24:57 vps5322 rabbitmq-server[17232]: ########## Logs: /var/log/rabbitmq/rabbit@vps5322.log
Apr 07 18:24:57 vps5322 rabbitmq-server[17232]: /var/log/rabbitmq/rabbit@vps5322_upgrade.log
Apr 07 18:24:57 vps5322 rabbitmq-server[17232]: Starting broker...
Apr 07 18:24:59 vps5322 rabbitmq-server[17232]: systemd unit for activation check: "rabbitmq-server.service"
Apr 07 18:24:59 vps5322 systemd[1]: Started RabbitMQ broker.
Apr 07 18:24:59 vps5322 rabbitmq-server[17232]: completed with 0 plugins.
Finally, note that the webapp is a Play Framework app with these dependencies:
libraryDependencies ++= Seq(
guice,
"com.rabbitmq" % "amqp-client" % "5.2.0"
)
The second module, on the other hand, is pure Java code built with Maven, with the following pom dependency:
<dependency>
<groupId>com.rabbitmq</groupId>
<artifactId>amqp-client</artifactId>
<version>5.2.0</version>
</dependency>
Any idea what the problem is?
Thank you very much!
Finally, I've found the problem. This configuration actually works, but I could not see that because of a crash in my own app that was not logged, due to an error in my log4j configuration.
Just in case it helps someone: the error I had was that a local library, included in my pom with a relative path (${project.basedir}), was found by my IDE but no longer resolved once deployed on the VPS (a sketch of that kind of declaration follows the commands below). To solve this, I moved this (fortunately very small) library directly into my project. After fixing that, I had to reset RabbitMQ, and then everything was fine:
rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl start_app
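For reference, the kind of local, relative-path dependency being described looks roughly like this; the groupId, artifactId, and path here are hypothetical, not the actual library from the project:
<dependency>
    <groupId>com.example</groupId>
    <artifactId>my-local-lib</artifactId>
    <version>1.0</version>
    <scope>system</scope>
    <systemPath>${project.basedir}/lib/my-local-lib.jar</systemPath>
</dependency>
A system-scoped path like this resolves relative to the project directory and is not bundled into the packaged artifact, which would explain why it worked in the IDE but not on the VPS.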
Thank you very much,
Regards,
Provisioning a DSE cluster with Lifecycle Manager fails consistently. The master node (also the one OpsCenter is running on) installed correctly. Every other node fails the install (and also the configure) task. I have double-checked the SSH credentials and ports. Any ideas on how to investigate further and fix the issue would be great.
Please excuse the length; I'm trying to provide all of the relevant info.
Ubuntu 14.04.4,
JRE: 1.8.0.91,
DSE 5.0.0
job events:
...
"results": [
{
"event-subtype": "start",
"event-type": "milestone",
"message": "job started...",
...
},
{
"event-subtype": "invocation",
"event-type": "shell-command",
"message": "Invoked command: if [ -x $(which yum) ] && [ -f /etc/redhat-release -o -f /etc/SuSE-release ]; then echo -n yum; elif [ -x $(which apt-get) ]; then echo -n apt; fi"
...
},
{
"event-subtype": "uploaded-facts",
"event-type": "milestone",
"message": "Uploaded facts to OpsCenter server",
...
},
{
"event-subtype": "meld-error",
"event-type": "error",
"message": "Unexpected error executing meld",
...
},
{
"event-subtype": "MeldError",
"event-type": "error",
"message": "Meld failed on: name=\"NODE-2\" ssh-management-address=\"<IP>\" node-id=\"<node-id>\" job-id=\"<job-id>\" stdout=\"\r\n\" stderr=\"\"",
...
}
]
opscenterd.log
/var/log/opscenter/opscenterd.log-2016-07-02 16:34:16,848 [opscenterd] INFO: Install job started for node name="NODE-2" ssh-management-address="<IP>" node-id="<node-id>" (async-thread-macro-53)
/var/log/opscenter/opscenterd.log-2016-07-02 16:34:16,850 [opscenterd] INFO: using ssh-private-key (async-thread-macro-53)
/var/log/opscenter/opscenterd.log-2016-07-02 16:34:18,478 [opscenterd] INFO: Received milestone from node name="NODE-2" ssh-management-address="<IP>" node-id="<node-id>" message="Uploaded facts to OpsCenter server" job-id="a630c081-6ac1-4b00-ac08-18fef320e0d5" (MainThread)
/var/log/opscenter/opscenterd.log:2016-07-02 16:34:18,675 [opscenterd] ERROR: Received error from node event-subtype="meld-error" job-id="a630c081-6ac1-4b00-ac08-18fef320e0d5" name="NODE-2" traceback="Traceback (most recent call last):
/var/log/opscenter/opscenterd.log: File \"meld.py\", line 3313, in run
/var/log/opscenter/opscenterd.log- rc = engine.go()
/var/log/opscenter/opscenterd.log: File \"meld.py\", line 2991, in go
/var/log/opscenter/opscenterd.log- self.file_manager.get_config_files()
/var/log/opscenter/opscenterd.log: File \"meld.py\", line 1280, in get_config_files
/var/log/opscenter/opscenterd.log- {\"accept\": \"application/json\"})
/var/log/opscenter/opscenterd.log: File \"meld.py\", line 598, in get
/var/log/opscenter/opscenterd.log- return json.loads(response.read())
/var/log/opscenter/opscenterd.log- File \"/usr/lib/python2.7/socket.py\", line 351, in read
/var/log/opscenter/opscenterd.log- data = self._sock.recv(rbufsize)
/var/log/opscenter/opscenterd.log- File \"/usr/lib/python2.7/httplib.py\", line 549, in read
/var/log/opscenter/opscenterd.log- return self._read_chunked(amt)
/var/log/opscenter/opscenterd.log- File \"/usr/lib/python2.7/httplib.py\", line 609, in _read_chunked
/var/log/opscenter/opscenterd.log- value.append(self._safe_read(amt))
/var/log/opscenter/opscenterd.log- File \"/usr/lib/python2.7/httplib.py\", line 666, in _safe_read
/var/log/opscenter/opscenterd.log- raise IncompleteRead(''.join(s), amt)
/var/log/opscenter/opscenterd.log:IncompleteRead: IncompleteRead(4153 bytes read, 4039 more expected)" ssh-management-address="<IP>" node-id="<node-id>" event-type="error" message="Unexpected error executing meld" (MainThread)
/var/log/opscenter/opscenterd.log-2016-07-02 16:34:18,892 [opscenterd] ERROR: Install job a630c081-6ac1-4b00-ac08-18fef320e0d5 failed! (async-thread-macro-54)
/var/log/opscenter/opscenterd.log:2016-07-02 16:34:19,105 [opscenterd] ERROR: Meld failed on: name="NODE-2" ssh-management-address="<IP>" node-id="<node-id>" job-id="a630c081-6ac1-4b00-ac08-18fef320e0d5" stdout="
/var/log/opscenter/opscenterd.log-" stderr="" (async-thread-macro-53)
Thank you
EDIT: I captured the HTTP traffic between NODE-2 and the master. The error occurs while transferring config files: one of them is not transferred completely for some reason. The JSON looks reasonable until some gibberish appears:
{"filename": "dse.yaml", "contents": {"internode_messaging_options": {"client_worker_threads": 16, "port": 8609, "server_worker_threads": 16, "server_acceptor_thread
Yvatv+~UK{.kMI4^QOrqQTDX_3"DPm,v!"H&M$!1M7
LRYCs{l>-df;cj
W6C9dq
The config files are valid and do work on the master node; only the transfer to the other nodes fails.
OpsCenter LCM developer here. Your issue is caused by OPSC-8851 in the LCM known issues list: http://docs.datastax.com/en/opscenter/6.0/opsc/release_notes/opscReleaseNotes600.html
This is only triggered under certain network conditions and was discovered too close to release to get fixed in 6.0.0. It's a high priority, though, and will be fixed in an upcoming release. Unfortunately, I don't think there's anything you can do to work around it in the field. If you're a DataStax customer, you could contact support and potentially get a patch now to work around the issue; otherwise the only thing I can suggest is to watch the upcoming release notes.
Edit: I should also note that in our tests the issue is intermittent. LCM is designed so you can rerun failed jobs safely (it's idempotent), so in all but the most extreme cases you can also work around this just by rerunning your job.
You can specify the private IP for the listen address and 0.0.0.0 for the broadcast address, and LCM should be able to provision appropriately.
I am attempting to automate our testing using Selenium and Selenium Grid 2. To do this I have created a VirtualBox VM and packaged it with Vagrant into a box. Using simple batch scripts (I eventually want to run this on a Jenkins CI server), I can start the Vagrant box, but I get:
c:\seleniumServer>vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Importing base box 'IE_Vagrant.box'...
==> default: Matching MAC address for NAT networking...
==> default: Setting the name of the VM:seleniumServer_default_1436811491763_573
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
default: Adapter 1: nat
default: Adapter 2: bridged
==> default: Forwarding ports...
default: 22 => 2222 (adapter 1)
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 127.0.0.1:2222
default: SSH username: vagrant
default: SSH auth method: password
default: Warning: Connection timeout. Retrying...
default: Warning: Connection timeout. Retrying...
default: Warning: Connection timeout. Retrying...
default: Warning: Connection timeout. Retrying...
default: Warning: Connection timeout. Retrying...
default: Warning: Connection timeout. Retrying...
default: Warning: Connection timeout. Retrying...
default: Warning: Connection timeout. Retrying...
default: Warning: Connection timeout. Retrying...
default: Warning: Connection timeout. Retrying...
default: Warning: Connection timeout. Retrying...
default: Warning: Connection timeout. Retrying...
default: Warning: Connection timeout. Retrying...
default: Warning: Connection timeout. Retrying...
default: Warning: Connection timeout. Retrying...
default: Warning: Connection timeout. Retrying...
default: Warning: Connection timeout. Retrying...
default: Warning: Connection timeout. Retrying...
default: Warning: Connection timeout. Retrying...
Timed out while waiting for the machine to boot. This means that
Vagrant was unable to communicate with the guest machine within
the configured ("config.vm.boot_timeout" value) time period.
If you look above, you should be able to see the error(s) that
Vagrant had when attempting to connect to the machine. These errors
are usually good hints as to what may be wrong.
If you're using a custom box, make sure that networking is properly
working and you're able to connect to the machine. It is a common
problem that networking isn't setup properly in these boxes.
Verify that authentication configurations are also setup properly,
as well.
If the box appears to be booting properly, you may want to increase
the timeout ("config.vm.boot_timeout") value.
I can start the Selenium hub and node, and they register. I can even SSH into the Vagrant box after it has finished telling me it cannot connect. I have set up Cygwin and OpenSSH on the box.
When I try to run the TestNG test from Eclipse I get:
Error forwarding the new session Error forwarding the request Connect to 10.0.2.15:5566 [/10.0.2.15] failed: Connection timed out: connect.
Here are the relevant bits.
I start the node with:
java -jar lib/selenium-server-standalone-2.46.0.jar -role webdriver -hub http://localhost:4444/grid/register -browser browserName="chrome",version=ANY,platform=WINDOWS,maxInstances=5 -Dwebdriver.chrome.driver="c\seleniumDrivers\chromedriver.exe"
and the hub with:
java -jar selenium-server-standalone-2.46.0.jar -role hub
Vagrantfile:
Vagrant.configure(2) do |config|
config.vm.boot_timeout = "300"
config.ssh.username = "vagrant"
config.ssh.password = "vagrant"
config.vm.network "public_network"
config.vm.box = "IE_Vagrant.box"
config.vm.provider "virtualbox" do |vb|
# Display the VirtualBox GUI when booting the machine
vb.gui = true
# # Customize the amount of memory on the VM:
# vb.memory = "1024"
end
end
And here is my test:
package com.hiiq.qa.testing.gen2;
import static org.junit.Assert.assertEquals;
import java.net.MalformedURLException;
import java.net.URL;
import org.openqa.selenium.By;
import org.openqa.selenium.Platform;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;
public class GridTest {
private static RemoteWebDriver driver;
@BeforeClass
public void setUp() throws MalformedURLException {
DesiredCapabilities capability = new DesiredCapabilities();
//capability.setBrowserName("chrome");
capability.setBrowserName(DesiredCapabilities.chrome().getBrowserName());
capability.setPlatform(Platform.WINDOWS);
//capability.setVersion("");
capability.setJavascriptEnabled(true);
driver = new RemoteWebDriver(new URL("http://10.70.1.28:4444/wd/hub"), capability);
driver.get("http://10.1.6.112:8383");
}
@Test
public void loginTest(){
Check this tutorial to verify that the box is set up properly, especially the VirtualBox guest additions: https://dennypc.wordpress.com/2014/06/09/creating-a-windows-box-with-vagrant-1-6/
vagrant up and vagrant ssh should work properly.
Then set up your Vagrantfile for port forwarding:
Vagrant.configure(2) do |config|
config.vm.boot_timeout = "300"
config.ssh.username = "vagrant"
config.ssh.password = "vagrant"
config.vm.network "public_network"
config.vm.box = "IE_Vagrant.box"
config.vm.network "forwarded_port", guest: 4444, host: 4444
config.vm.network "forwarded_port", guest: 8383, host: 8383
config.vm.provider "virtualbox" do |vb|
# Display the VirtualBox GUI when booting the machine
vb.gui = true
# # Customize the amount of memory on the VM:
# vb.memory = "1024"
end
end
Reach your services in your tests via localhost:4444 and localhost:8383.
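For example, assuming the test runs on the host machine and the two ports above are forwarded as shown, the setUp() method from the question could point at localhost instead of the guest IPs; a sketch reusing the question's own capabilities:
// Hub reached through the forwarded port 4444 on the host
DesiredCapabilities capability = new DesiredCapabilities();
capability.setBrowserName(DesiredCapabilities.chrome().getBrowserName());
capability.setPlatform(Platform.WINDOWS);
driver = new RemoteWebDriver(new URL("http://localhost:4444/wd/hub"), capability);
// Application under test reached through the forwarded port 8383
driver.get("http://localhost:8383");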
See https://github.com/arunoda/meteor-up/issues/171
I am trying to deploy my Meteor app from my Nitrous box to a remote server on Linode.
I followed the instructions for Meteor Up and got:
Invalid mup.json file: Server username does not exist
mup.json
// Server authentication info
"servers": [
{
"host": "123.456.78.90",
// "username": "root",
// or pem file (ssh based authentication)
"pem": "~/.ssh/id_rsa",
"sshOptions": { "Port": 1024 }
}
]
So I uncommented the "username": "root" line in mup.json, then ran mup logs -n 300 and got the following error:
[123.456.78.90] ssh: connect to host 123.456.78.90 port 1024: Connection refused
I suspect I may have done something wrong when setting up the SSH key. I can access my remote server without a password after adding my SSH key to ~/.ssh/authorized_keys.
The content of authorized_keys looks like this:
ssh-rsa XXXXXXXXXX..XXXX== root@apne1.nitrousbox.com
Do you have any idea what went wrong?
Problem solved by uncommenting the username and changing the port to 22:
// Server authentication info
"servers": [
{
"host": "123.456.78.90",
"username": "root",
// or pem file (ssh based authentication)
"pem": "~/.ssh/id_rsa",
"sshOptions": { "Port": 22 }
}
]
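In context, that servers block sits inside the top-level mup.json object. A rough sketch of a full file under the classic meteor-up layout (the app name, path, and ROOT_URL here are hypothetical):
{
  "servers": [
    {
      "host": "123.456.78.90",
      "username": "root",
      "pem": "~/.ssh/id_rsa",
      "sshOptions": { "Port": 22 }
    }
  ],
  "setupMongo": true,
  "setupNode": true,
  "appName": "myapp",
  "app": "/path/to/app",
  "env": {
    "ROOT_URL": "http://123.456.78.90"
  },
  "deployCheckWaitTime": 15
}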