I am trying to integrate ActiveMQ with Datadog. I have modified /Users//.datadog-agent/conf.d/activemq_58.yaml.
The changes are:
instances:
- host: localhost
port: 8161
user: admin
password: admin
ActiveMQ is running on localhost at the default port with JMX enabled.
I restarted the Datadog agent.
I can see an error after running the info command. The error is:
activemq_58
- initialize check class [ERROR]: 'mapping values are not allowed in this context\n in "<byte string>", line 4, column 10'
Can anybody suggest why I am getting this error?
Is your activemq_58.yaml all on one line like that? You probably want it to be more like this:
instances:
  - host: localhost
    port: 8161
    user: admin
    password: admin
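If in doubt, you can sanity-check the YAML before restarting the agent by parsing it the same way the check does (a quick sketch; assumes PyYAML is available to your python, and substitute your own config path):

python -c 'import yaml, sys; yaml.safe_load(open(sys.argv[1])); print("OK")' ~/.datadog-agent/conf.d/activemq_58.yaml

A broken file will raise the same "mapping values are not allowed in this context" error, pointing at the offending line and column.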
I'm trying to use DDEV to locally test an upgrade of my running club's Drupal 7 website.
I've got one container with a copy of the website; the output below is from the ddev describe command:
URLs
----
https://drupalTest.ddev.site:8003
https://127.0.0.1:32773
http://drupalTest.ddev.site:8002
http://127.0.0.1:32774
MySQL/MariaDB Credentials
-------------------------
Username: "db", Password: "db", Default database: "db"
or use root credentials when needed: Username: "root", Password: "root"
Database hostname and port INSIDE container: db:3306
To connect to db server inside container or in project settings files:
mysql --host=db --user=db --password=db --database=db
Database hostname and port from HOST: 127.0.0.1:32771
To connect to mysql from your host machine,
mysql --host=127.0.0.1 --port=32771 --user=db --password=db --database=db
Other Services
--------------
MailHog (https): https://drupalTest.ddev.site:8026
MailHog: http://drupalTest.ddev.site:8025
phpMyAdmin (https): https://drupalTest.ddev.site:8037
phpMyAdmin: http://drupalTest.ddev.site:8036
I also have a container with Drupal 8 (fresh install).
URLs
----
https://drupal8migration.ddev.site:8017
https://127.0.0.1:32769
http://drupal8migration.ddev.site:8016
http://127.0.0.1:32770
MySQL/MariaDB Credentials
-------------------------
Username: "db", Password: "db", Default database: "db"
or use root credentials when needed: Username: "root", Password: "root"
Database hostname and port INSIDE container: db:3306
To connect to db server inside container or in project settings files:
mysql --host=db --user=db --password=db --database=db
Database hostname and port from HOST: 127.0.0.1:32797
To connect to mysql from your host machine,
mysql --host=127.0.0.1 --port=32797 --user=db --password=db --database=db
Other Services
--------------
MailHog (https): https://drupal8migration.ddev.site:8026
MailHog: http://drupal8migration.ddev.site:8025
phpMyAdmin (https): https://drupal8migration.ddev.site:8037
phpMyAdmin: http://drupal8migration.ddev.site:8036
I'm having problems getting the drush migrate-upgrade command to work. This is the command I'm running:
ddev exec drush migrate-upgrade --legacy-db-url=mysql://db:db@127.0.0.1:32771/db --legacy-root=https://drupalTest.ddev.site:8003 --configure-only
Just getting this error:
SQLSTATE[HY000] [2002] Connection refused [error]
Any help appreciated.
Welcome to ddev, Mark!
Your problem is that you're using the wrong --legacy-db-url there. The credentials of the database are going to be:
host: the container name of the legacy install, like ddev-<projectname>-db (NOT 127.0.0.1)
port: does not need to be specified, because it's the default 3306 (inside the Docker container network)
So it looks like you want something like this:
ddev exec drush migrate-upgrade --legacy-db-url=mysql://db:db@ddev-drupaltest-db/db --legacy-root=https://drupalTest.ddev.site:8003 --configure-only
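To double-check the exact container name on your machine (a quick check, assuming ddev's standard ddev-<projectname>-db naming):

docker ps --format '{{.Names}}' | grep -- '-db'

Whatever name that prints for the Drupal 7 project is what belongs in the --legacy-db-url host.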
See the FAQ under "Can different projects communicate with each other".
Also, you'll absolutely want to read Migrating from Drupal 6 to Drupal 8 Like a Boss, which helps to understand all these things in the context of migration.
I note that you seem to be using different http ports for different projects - you don't need to do that at all. The normal way to use ddev is for everything to be on ports 80 and 443 (or some other port set if you have conflicts). You do not need to set router_http_port or router_https_port just to run multiple projects on the same host.
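For example, if your .ddev/config.yaml contains overrides like these (hypothetical values matching the ports in your output above), you can simply delete them and let both projects share the router on 80/443:

# .ddev/config.yaml -- overrides you do not need just to run several projects
router_http_port: "8002"
router_https_port: "8003"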
I'm using Molecule and Vagrant to deploy a CentOS 7 instance. For various reasons, I need to use the plain ssh command to access the Molecule instance instead of molecule login. The SSH connection details will then be pasted into one of my VS Code extensions.
molecule.yml
---
dependency:
  name: gilt
driver:
  name: vagrant
  provider:
    name: virtualbox
lint:
  name: yamllint
platforms:
  - name: openresty-instance
    box: centos/7
    instance_raw_config_args:
      - "ssh.insert_key = false"
      - "vm.network 'forwarded_port', guest: 22, host: 22"
      - "vm.network 'forwarded_port', guest: 80, host: 8080"
    interfaces:
      - auto_config: true
        network_name: private_network
        ip: '192.168.33.111'
provisioner:
  name: ansible
  log: true
  lint:
    name: ansible-lint
verifier:
  name: testinfra
  lint:
    name: flake8
The IP above lets me access port 80 from outside Vagrant.
But ssh-ing to the Molecule instance's IP is not working.
Error
###########################################################
# WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!        #
###########################################################
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
SHA256:wVk4Da5pWWNHLiypvEKAJuwzG/2FLOMgwPkrO4oFBZQ.
Please contact your system administrator.
Add correct host key in /Users/abel/.ssh/known_hosts to get rid of this message.
Offending ECDSA key in /Users/abel/.ssh/known_hosts:32
ECDSA host key for 192.168.33.111 has changed and you have requested strict checking.
Host key verification failed.
This message can mean exactly what it says, i.e. "there is something nasty going on", if you see it in an environment with static servers.
But if you have, say, a testing environment where you create and destroy virtual machines as a daily procedure, this is a "normal" security warning.
It just means "hey, I know this guy, but his fingerprint doesn't match the one in my document archive". If this is intended (like I said, in a test environment), then just go into the "document archive", delete "this guy's fingerprint", and "take a new fingerprint of him".
So in your case ("/Users/abel/.ssh/known_hosts:32"), just open your known_hosts file and delete line 32.
Or use the command:
ssh-keygen -R 192.168.33.111 -f /Users/abel/.ssh/known_hosts
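For throwaway test VMs you can also skip the known_hosts bookkeeping per connection (a sketch; this disables MITM protection for that connection, so use it only against test machines):

ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null vagrant@192.168.33.111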
It is unclear to me how to reset the password for access in Sequel Pro. These are my settings:
Name: localhost
Host: 127.0.0.1
Username: root
Port: 8889
How do I reset the password / solve this Sequel Pro setup issue?
Log in with your information above as usual.
Click on "Query".
Run the following query:
SET PASSWORD = PASSWORD('yournewpassword');
Exit and log in with your info above and "yournewpassword".
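Note that the PASSWORD() function was removed in MySQL 5.7.6+/8.0, so if that query errors out, the equivalent statement (assuming the root account from the settings above) is:

ALTER USER 'root'@'localhost' IDENTIFIED BY 'yournewpassword';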
We are implementing a Hyperledger Fabric solution. To do so, we set up a Fabric CA using the minimal configuration (we are still trying to figure out how things work) in a dedicated Docker container.
As we need to log in our users using an email/password pair, we set up an LDAP component. We chose OpenLDAP, using the osixia/openldap image in a separate container.
We set the parameters in fabric-ca-server-config.yaml to connect Fabric CA to the LDAP server. At startup of both containers, the logs seem fine:
Successfully initialized LDAP client
When we carry on with the Fabric CA tutorial, we fail at this command:
fabric-ca-client enroll -u http://cn=admin,dc=example:admin@localhost:7054
The result is :
[INFO] 127.0.0.1:46244 POST /enroll 401 23 "Failed to get user: Failed to connect to LDAP server over TCP at localhost:389: LDAP Result Code 200 "": dial tcp 127.0.0.1:389: connect: connection refused"
The LDAP server is set up and functioning correctly when queried from the CLI and via phpLDAPadmin (an LDAP browser) using the same credentials.
This is a bit of the fabric-ca-server-config.yaml:
ldap:
  enabled: true
  url: ldap://cn=admin,dc=example:admin@localhost:389/dc=example
  userfilter: (uid=%s)
  tls:
    enabled: false
    certfiles:
    client:
      certfile: noclientcert
      keyfile:
  attribute:
    names: ['uid','member']
    converters:
      - name: hf.Revoker
        value: attr("uid") =~ "revoker*"
    maps:
      groups:
        - name: example
          value: peer
Could anyone help? Thanks for reading.
I see two issues here:
The first is more related to Docker than to Fabric CA. You have to set network_mode to host to remove the network isolation between the container and the Docker host. Then your Docker container will see the OpenLDAP server located on the Docker host.
Please look at this sample docker-compose.yaml file:
version: '2'
services:
  fabric-ca-server:
    image: hyperledger/fabric-ca:1.1.0
    container_name: fabric-ca-server
    ports:
      - "7054:7054"
    environment:
      - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
    volumes:
      - ./fabric-ca-server:/etc/hyperledger/fabric-ca-server
    command: sh -c 'fabric-ca-server start'
    network_mode: host
You can find more about Docker networking here: https://docs.docker.com/network/
Once the network issue is resolved, you also have to modify the userfilter to match the admin's cn prefix, so it should look like this: userfilter: (cn=%s). If the userfilter is not fixed, you will get an error saying that admin cannot be found in LDAP.
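For context, Fabric CA substitutes the enrollment ID from the enroll URL for %s in the userfilter, so with (cn=%s) you would enroll with the bare name rather than a full DN (a sketch, assuming the admin/admin credentials from the url line of the config above):

fabric-ca-client enroll -u http://admin:admin@localhost:7054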
I am not using a local LDAP server; instead I am using the online one for a quick test:
http://www.forumsys.com/tutorials/integration-how-to/ldap/online-ldap-test-server/
However, I am still getting an error as well.
My fabric-ca-server-config.yaml is:
ldap:
  enabled: true
  url: ldap://cn=read-only-admin,dc=example,dc=com:password@ldap.forumsys.com:389/dc=example,dc=com
  tls:
    certfiles:
    client:
      certfile:
      keyfile:
  # Attribute related configuration for mapping from LDAP entries to Fabric CA attributes
  attribute:
    names: ['uid','member']
    converters:
      - name: hf.Revoker
        value: attr("uid") =~ "revoker*"
    maps:
      groups:
        - name:
          value:
And I run it by:
fabric-ca-server start -c fabric-ca-server-config.yaml
I saw logs:
Successfully initialized LDAP client
Here is the screenshot for phpLDAPAdmin:
I am using the same kind of commands for testing:
$ fabric-ca-client enroll -u http://cn=read-only-admin,dc=example,dc=com:password@localhost:7054
$ fabric-ca-client enroll -u http://uid=tesla,dc=example,dc=com:password@localhost:7054
But it's still not working; I'm getting something like:
POST /enroll 401 23 "Failed to get user: User 'uid=tesla,dc=example,dc=com' does not exist in LDAP directory"
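One way to verify what the server can actually find is to run the same search by hand with the standard OpenLDAP client tools (a sketch against the public forumsys test server, using the read-only credentials from the config above):

ldapsearch -x -H ldap://ldap.forumsys.com:389 -D "cn=read-only-admin,dc=example,dc=com" -w password -b "dc=example,dc=com" "(uid=tesla)"

If that returns the tesla entry, the directory side is fine, and the remaining mismatch is between the enrollment ID you pass and the userfilter, as the answer above describes.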
I'm trying to set up a CouchDB 2.0 instance on my CentOS 7 server.
I've got it installed and running as a systemd service, and it responds with its friendly welcome message when I access it from the server using 127.0.0.1 or 0.0.0.0:
$ curl 127.0.0.1:5984
{"couchdb":"Welcome","version":"2.0.0","vendor":{"name":"The Apache Software Foundation"}}
$ curl 0.0.0.0:5984
{"couchdb":"Welcome","version":"2.0.0","vendor":{"name":"The Apache Software Foundation"}}
In my local.ini file I've configured the bind_address to 0.0.0.0:
[httpd]
bind_address = 0.0.0.0
My understanding was that with this bind address I could connect to port 5984 from any IP address allowed through my firewall.
I'm using firewalld for my firewall, and I've configured it to open port 5984.
This config is confirmed by listing the configuration of the public zone:
$ sudo firewall-cmd --zone=public --list-all
public (active)
target: default
icmp-block-inversion: no
interfaces: eth0
sources:
services: couchdb2 dhcpv6-client http https ssh
ports: 443/tcp 5984/tcp
protocols:
masquerade: no
forward-ports:
sourceports:
icmp-blocks:
rich rules:
I've also created a service called couchdb2 at /etc/firewalld/services/couchdb2.xml with this XML:
<service>
<short>couchdb2</short>
<description>CouchDB 2.0 Instance</description>
<port protocol="tcp" port="5984"/>
</service>
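Worth noting: firewalld only picks up new service definitions and permanent rules after a reload, so after dropping in the XML file it's worth running (standard firewalld commands, not specific to this setup):

sudo firewall-cmd --reload
sudo firewall-cmd --zone=public --list-services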
From what I know about firewalld, I should be able to receive connections on 5984 now,
but when I curl from my laptop the connection is refused:
$ curl my-server:5984 --verbose
* Rebuilt URL to: my-server:5984/
* Trying <my-ip>...
* connect to <my-ip> port 5984 failed: Connection refused
* Failed to connect to my-server port 5984: Connection refused
* Closing connection 0
When I connect to the CouchDB instance locally via either 127.0.0.1 or 0.0.0.0, I can see the 200 response in my CouchDB log:
$ sudo journalctl -u couchdb2
...
[notice] 2017-06-06T00:35:01.159244Z couchdb@localhost <0.3328.0> 222d655c69 0.0.0.0:5984 127.0.0.1 undefined GET / 200 ok 28
[notice] 2017-06-06T00:37:21.819298Z couchdb@localhost <0.5598.0> 2f8986d14b 127.0.0.1:5984 127.0.0.1 undefined GET / 200 ok 1
But when I curl from my laptop, nothing shows up in the CouchDB log for the Connection Refused error.
This suggests to me that the problem may be the firewall and not CouchDB, but I'm not sure about that.
Is Connection Refused always the firewall? Would I be getting some other error if this were the CouchDB instance having a problem?
To the best of my knowledge both CouchDB and firewalld are configured correctly, but it's not working as I expected.
Any help would be appreciated, whether you know the problem or whether you can just help me discern if the problem is related to CouchDB or firewalld.
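A quick way to narrow this down from the server itself (a diagnostic sketch; one thing to check is that in CouchDB 2.x the clustered port 5984 is configured in the [chttpd] section, so a bind_address set only under [httpd] can leave 5984 listening on 127.0.0.1 only):

# Which address is 5984 actually bound to? "127.0.0.1:5984" here would mean
# CouchDB itself, not the firewall, is refusing remote connections.
sudo ss -tlnp | grep 5984

If that shows 0.0.0.0:5984 and remote connections are still refused, the firewall (or an intermediate network hop) is the more likely culprit.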