Problems with a node joining the cluster when using SST xtrabackup (Galera) - load-balancing

It looks like the node joins the cluster and then fails. I have tried both rsync and xtrabackup, and it fails during the state transfer. It seems to me that I am missing something really simple and I can't put my finger on it. Any help would be appreciated.
More information regarding the nodes
Master - 10.XXX.XXX.161
node1 - 10.XXX.XXX.160
Packages installed:
MariaDB-compat MariaDB-common MariaDB-devel MariaDB-shared MariaDB-client MariaDB-test MariaDB-Galera-server (v5.5.29-1)
galera (v23.2.4-1.rhel6)
percona-xtrabackup (v2.1.6-702.rhel6)
config for node 1
[mysqld]
wsrep_cluster_address = gcomm://10.XXX.XXX.161
wsrep_provider = /usr/lib64/galera/libgalera_smm.so
wsrep_provider_options = gcache.size=4G; gcache.page_size=1G
wsrep_cluster_name = galera_cluster
default_storage_engine = InnoDB
innodb_autoinc_lock_mode = 2
innodb_locks_unsafe_for_binlog = 1
wsrep_sst_method = xtrabackup
wsrep_sst_auth = root:rootpassword
wsrep_node_name=1
config for master
[mysqld]
wsrep_cluster_address = gcomm://
wsrep_provider = /usr/lib64/galera/libgalera_smm.so
wsrep_provider_options = gcache.size=4G; gcache.page_size=1G
wsrep_cluster_name = galera_cluster
default_storage_engine = InnoDB
innodb_autoinc_lock_mode = 2
innodb_locks_unsafe_for_binlog = 1
wsrep_sst_method = rsync
wsrep_slave_threads = 4
wsrep_sst_auth = root:rootpassword
wsrep_node_name = 2
node1 log file
131203 16:09:03 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
131203 16:09:03 mysqld_safe WSREP: Running position recovery with --log_error=/tmp/tmp.f2EedjRjbQ
131203 16:09:08 mysqld_safe WSREP: Recovered position 359350ee-5c63-11e3-0800-6673d15135cd:2188
131203 16:09:08 [Note] WSREP: wsrep_start_position var submitted: '359350ee-5c63-11e3-0800-6673d15135cd:2188'
131203 16:09:08 [Note] WSREP: Read nil XID from storage engines, skipping position init
131203 16:09:08 [Note] WSREP: wsrep_load(): loading provider library '/usr/lib64/galera/libgalera_smm.so'
131203 16:09:08 [Note] WSREP: wsrep_load(): Galera 23.2.4(r147) by Codership Oy <info@codership.com> loaded succesfully.
131203 16:09:08 [Note] WSREP: Found saved state: 00000000-0000-0000-0000-000000000000:-1
131203 16:09:08 [Note] WSREP: Reusing existing '/var/lib/mysql//galera.cache'.
131203 16:09:08 [Note] WSREP: Passing config to GCS: base_host = 10.XXX.XXX.160; base_port = 4567; cert.log_conflicts = no; gcache.dir = /var/lib/mysql/; gcache.keep_pages_size = 0; gcache.mem_size = 0; gcache.name = /var/lib/mysql//galera.cache; gcache.page_size = 1G; gcache.size = 4G; gcs.fc_debug = 0; gcs.fc_factor = 1; gcs.fc_limit = 16; gcs.fc_master_slave = NO; gcs.max_packet_size = 64500; gcs.max_throttle = 0.25; gcs.recv_q_hard_limit = 9223372036854775807; gcs.recv_q_soft_limit = 0.25; gcs.sync_donor = NO; replicator.causal_read_timeout = PT30S; replicator.commit_order = 3
131203 16:09:08 [Note] WSREP: Assign initial position for certification: -1, protocol version: -1
131203 16:09:08 [Note] WSREP: wsrep_sst_grab()
131203 16:09:08 [Note] WSREP: Start replication
131203 16:09:08 [Note] WSREP: Setting initial position to 00000000-0000-0000-0000-000000000000:-1
131203 16:09:08 [Note] WSREP: protonet asio version 0
131203 16:09:08 [Note] WSREP: backend: asio
131203 16:09:08 [Note] WSREP: GMCast version 0
131203 16:09:08 [Note] WSREP: (8814b4ba-5c67-11e3-0800-91035d554a96, 'tcp://0.0.0.0:4567') listening at tcp://0.0.0.0:4567
131203 16:09:08 [Note] WSREP: (8814b4ba-5c67-11e3-0800-91035d554a96, 'tcp://0.0.0.0:4567') multicast: , ttl: 1
131203 16:09:08 [Note] WSREP: EVS version 0
131203 16:09:08 [Note] WSREP: PC version 0
131203 16:09:08 [Note] WSREP: gcomm: connecting to group 'galera_cluster', peer '10.XXX.XXX.161:'
131203 16:09:09 [Note] WSREP: declaring 7a9a87e8-5c67-11e3-0800-8cb6cba8f65a stable
131203 16:09:09 [Note] WSREP: Node 7a9a87e8-5c67-11e3-0800-8cb6cba8f65a state prim
131203 16:09:09 [Note] WSREP: view(view_id(PRIM,7a9a87e8-5c67-11e3-0800-8cb6cba8f65a,2) memb {
7a9a87e8-5c67-11e3-0800-8cb6cba8f65a,
8814b4ba-5c67-11e3-0800-91035d554a96,
} joined {
} left {
} partitioned {
})
131203 16:09:09 [Note] WSREP: gcomm: connected
131203 16:09:09 [Note] WSREP: Changing maximum packet size to 64500, resulting msg size: 32636
131203 16:09:09 [Note] WSREP: Shifting CLOSED -> OPEN (TO: 0)
131203 16:09:09 [Note] WSREP: Opened channel 'galera_cluster'
131203 16:09:09 [Note] WSREP: New COMPONENT: primary = yes, bootstrap = no, my_idx = 1, memb_num = 2
131203 16:09:09 [Note] WSREP: STATE EXCHANGE: Waiting for state UUID.
131203 16:09:09 [Note] WSREP: Waiting for SST to complete.
131203 16:09:09 [Note] WSREP: STATE EXCHANGE: sent state msg: 8861cdd5-5c67-11e3-0800-cc70fcc5f515
131203 16:09:09 [Note] WSREP: STATE EXCHANGE: got state msg: 8861cdd5-5c67-11e3-0800-cc70fcc5f515 from 0 (2)
131203 16:09:09 [Note] WSREP: STATE EXCHANGE: got state msg: 8861cdd5-5c67-11e3-0800-cc70fcc5f515 from 1 (1)
131203 16:09:09 [Note] WSREP: Quorum results:
version = 2,
component = PRIMARY,
conf_id = 1,
members = 1/2 (joined/total),
act_id = 2521,
last_appl. = -1,
protocols = 0/4/2 (gcs/repl/appl),
group UUID = 359350ee-5c63-11e3-0800-6673d15135cd
131203 16:09:09 [Note] WSREP: Flow-control interval: [23, 23]
131203 16:09:09 [Note] WSREP: Shifting OPEN -> PRIMARY (TO: 2521)
131203 16:09:09 [Note] WSREP: State transfer required:
Group state: 359350ee-5c63-11e3-0800-6673d15135cd:2521
Local state: 00000000-0000-0000-0000-000000000000:-1
131203 16:09:09 [Note] WSREP: New cluster view: global state: 359350ee-5c63-11e3-0800-6673d15135cd:2521, view# 2: Primary, number of nodes: 2, my index: 1, protocol version 2
131203 16:09:09 [Warning] WSREP: Gap in state sequence. Need state transfer.
131203 16:09:11 [Note] WSREP: Running: 'wsrep_sst_xtrabackup --role 'joiner' --address '10.XXX.XXX.160' --auth 'root:rootpassword' --datadir '/var/lib/mysql/' --defaults-file '/etc/my.cnf' --parent '13175''
131203 16:09:11 [Note] WSREP: Prepared SST request: xtrabackup|10.162.143.160:4444/xtrabackup_sst
131203 16:09:11 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
131203 16:09:11 [Note] WSREP: Assign initial position for certification: 2521, protocol version: 2
131203 16:09:11 [Warning] WSREP: Failed to prepare for incremental state transfer: Local state UUID (00000000-0000-0000-0000-000000000000) does not match group state UUID (359350ee-5c63-11e3-0800-6673d15135cd): 1 (Operation not permitted)
at galera/src/replicator_str.cpp:prepare_for_IST():442. IST will be unavailable.
131203 16:09:11 [Note] WSREP: Node 1 (1) requested state transfer from '*any*'. Selected 0 (2)(SYNCED) as donor.
131203 16:09:11 [Note] WSREP: Shifting PRIMARY -> JOINER (TO: 2525)
131203 16:09:11 [Note] WSREP: Requesting state transfer: success, donor: 0
tar: dbexport/db.opt: Cannot open: Permission denied
tar: Exiting with failure status due to previous errors
131203 16:10:22 [Note] WSREP: 0 (2): State transfer to 1 (1) complete.
131203 16:10:22 [Note] WSREP: Member 0 (2) synced with group.
WSREP_SST: [ERROR] Error while getting st data from donor node: 0, 2 (20131203 16:10:22.379)
131203 16:10:22 [ERROR] WSREP: Process completed with error: wsrep_sst_xtrabackup --role 'joiner' --address '10.XXX.XXX.160' --auth 'root:rootpassword' --datadir '/var/lib/mysql/' --defaults-file '/etc/my.cnf' --parent '13175': 32 (Broken pipe)
131203 16:10:22 [ERROR] WSREP: Failed to read uuid:seqno from joiner script.
131203 16:10:22 [ERROR] WSREP: SST failed: 32 (Broken pipe)
131203 16:10:22 [ERROR] Aborting
131203 16:10:24 [Note] WSREP: Closing send monitor...
131203 16:10:24 [Note] WSREP: Closed send monitor.
131203 16:10:24 [Note] WSREP: gcomm: terminating thread
131203 16:10:24 [Note] WSREP: gcomm: joining thread
131203 16:10:24 [Note] WSREP: gcomm: closing backend
131203 16:10:25 [Note] WSREP: view(view_id(NON_PRIM,7a9a87e8-5c67-11e3-0800-8cb6cba8f65a,2) memb {
8814b4ba-5c67-11e3-0800-91035d554a96,
} joined {
} left {
} partitioned {
7a9a87e8-5c67-11e3-0800-8cb6cba8f65a,
})
131203 16:10:25 [Note] WSREP: view((empty))
131203 16:10:25 [Note] WSREP: gcomm: closed
131203 16:10:25 [Note] WSREP: New COMPONENT: primary = no, bootstrap = no, my_idx = 0, memb_num = 1
131203 16:10:25 [Note] WSREP: Flow-control interval: [16, 16]
131203 16:10:25 [Note] WSREP: Received NON-PRIMARY.
131203 16:10:25 [Note] WSREP: Shifting JOINER -> OPEN (TO: 2607)
131203 16:10:25 [Note] WSREP: Received self-leave message.
131203 16:10:25 [Note] WSREP: Flow-control interval: [0, 0]
131203 16:10:25 [Note] WSREP: Received SELF-LEAVE. Closing connection.
131203 16:10:25 [Note] WSREP: Shifting OPEN -> CLOSED (TO: 2607)
131203 16:10:25 [Note] WSREP: RECV thread exiting 0: Success
131203 16:10:25 [Note] WSREP: recv_thread() joined.
131203 16:10:25 [Note] WSREP: Closing slave action queue.
131203 16:10:25 [Note] WSREP: Service disconnected.
131203 16:10:25 [Note] WSREP: rollbacker thread exiting
131203 16:10:26 [Note] WSREP: Some threads may fail to exit.
131203 16:10:26 [Note] /usr/sbin/mysqld: Shutdown complete
Error in my_thread_global_end(): 2 threads didn't exit
131203 16:10:31 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
master log file
131203 16:08:47 [Warning] Recovery from master pos 103358630 and file mysql-bin.000131.
131203 16:08:47 [Note] Event Scheduler: Loaded 0 events
131203 16:08:47 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
131203 16:08:47 [Note] WSREP: Assign initial position for certification: 2497, protocol version: 2
131203 16:08:47 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.29-MariaDB-log' socket: '/var/lib/mysql/mysql.sock' port: 3306 MariaDB Server, wsrep_23.7.3.rXXXX
131203 16:08:47 [Note] WSREP: Synchronized with group, ready for connections
131203 16:08:47 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
131203 16:09:09 [Note] WSREP: declaring 8814b4ba-5c67-11e3-0800-91035d554a96 stable
131203 16:09:09 [Note] WSREP: Node 7a9a87e8-5c67-11e3-0800-8cb6cba8f65a state prim
131203 16:09:09 [Note] WSREP: view(view_id(PRIM,7a9a87e8-5c67-11e3-0800-8cb6cba8f65a,2) memb {
7a9a87e8-5c67-11e3-0800-8cb6cba8f65a,
8814b4ba-5c67-11e3-0800-91035d554a96,
} joined {
} left {
} partitioned {
})
131203 16:09:09 [Note] WSREP: New COMPONENT: primary = yes, bootstrap = no, my_idx = 0, memb_num = 2
131203 16:09:09 [Note] WSREP: STATE_EXCHANGE: sent state UUID: 8861cdd5-5c67-11e3-0800-cc70fcc5f515
131203 16:09:09 [Note] WSREP: STATE EXCHANGE: sent state msg: 8861cdd5-5c67-11e3-0800-cc70fcc5f515
131203 16:09:09 [Note] WSREP: STATE EXCHANGE: got state msg: 8861cdd5-5c67-11e3-0800-cc70fcc5f515 from 0 (2)
131203 16:09:09 [Note] WSREP: STATE EXCHANGE: got state msg: 8861cdd5-5c67-11e3-0800-cc70fcc5f515 from 1 (1)
131203 16:09:09 [Note] WSREP: Quorum results:
version = 2,
component = PRIMARY,
conf_id = 1,
members = 1/2 (joined/total),
act_id = 2521,
last_appl. = 2517,
protocols = 0/4/2 (gcs/repl/appl),
group UUID = 359350ee-5c63-11e3-0800-6673d15135cd
131203 16:09:09 [Note] WSREP: Flow-control interval: [23, 23]
131203 16:09:09 [Note] WSREP: New cluster view: global state: 359350ee-5c63-11e3-0800-6673d15135cd:2521, view# 2: Primary, number of nodes: 2, my index: 0, protocol version 2
131203 16:09:09 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
131203 16:09:09 [Note] WSREP: Assign initial position for certification: 2521, protocol version: 2
131203 16:09:11 [Note] WSREP: Node 1 (1) requested state transfer from '*any*'. Selected 0 (2)(SYNCED) as donor.
131203 16:09:11 [Note] WSREP: Shifting SYNCED -> DONOR/DESYNCED (TO: 2525)
131203 16:09:11 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
131203 16:09:11 [Note] WSREP: Running: 'wsrep_sst_xtrabackup --role 'donor' --address '10.XXX.XXX.160:4444/xtrabackup_sst' --auth 'root:rootpassword' --socket '/var/lib/mysql/mysql.sock' --datadir '/var/lib/mysql/' --defaults-file '/etc/my.cnf' --gtid '359350ee-5c63-11e3-0800-6673d15135cd:2525''
131203 16:09:11 [Note] WSREP: sst_donor_thread signaled with 0
131203 16:10:20 [Note] WSREP: Provider paused at 359350ee-5c63-11e3-0800-6673d15135cd:2604
131203 16:10:22 [Note] WSREP: Provider resumed.
131203 16:10:22 [Note] WSREP: 0 (2): State transfer to 1 (1) complete.
131203 16:10:22 [Note] WSREP: Shifting DONOR/DESYNCED -> JOINED (TO: 2606)
131203 16:10:22 [Note] WSREP: Member 0 (2) synced with group.
131203 16:10:22 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 2606)
131203 16:10:22 [Note] WSREP: Synchronized with group, ready for connections
131203 16:10:22 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
131203 16:10:25 [Note] WSREP: Node 7a9a87e8-5c67-11e3-0800-8cb6cba8f65a state prim
131203 16:10:25 [Note] WSREP: view(view_id(PRIM,7a9a87e8-5c67-11e3-0800-8cb6cba8f65a,3) memb {
7a9a87e8-5c67-11e3-0800-8cb6cba8f65a,
} joined {
} left {
} partitioned {
8814b4ba-5c67-11e3-0800-91035d554a96,
})
131203 16:10:25 [Note] WSREP: forgetting 8814b4ba-5c67-11e3-0800-91035d554a96 (tcp://10.XXX.XXX.160:4567)
131203 16:10:25 [Note] WSREP: New COMPONENT: primary = yes, bootstrap = no, my_idx = 0, memb_num = 1
131203 16:10:25 [Note] WSREP: STATE_EXCHANGE: sent state UUID: b5dda52e-5c67-11e3-0800-4b2360dd84f9
131203 16:10:25 [Note] WSREP: STATE EXCHANGE: sent state msg: b5dda52e-5c67-11e3-0800-4b2360dd84f9
131203 16:10:25 [Note] WSREP: STATE EXCHANGE: got state msg: b5dda52e-5c67-11e3-0800-4b2360dd84f9 from 0 (2)
131203 16:10:25 [Note] WSREP: Quorum results:
version = 2,
component = PRIMARY,
conf_id = 2,
members = 1/1 (joined/total),
act_id = 2607,
last_appl. = 2597,
protocols = 0/4/2 (gcs/repl/appl),
group UUID = 359350ee-5c63-11e3-0800-6673d15135cd
131203 16:10:25 [Note] WSREP: Flow-control interval: [16, 16]
131203 16:10:25 [Note] WSREP: New cluster view: global state: 359350ee-5c63-11e3-0800-6673d15135cd:2607, view# 3: Primary, number of nodes: 1, my index: 0, protocol version 2
131203 16:10:25 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
131203 16:10:25 [Note] WSREP: Assign initial position for certification: 2607, protocol version: 2
131203 16:10:30 [Note] WSREP: cleaning up 8814b4ba-5c67-11e3-0800-91035d554a96 (tcp://10.XXX.XXX.160:4567)

The problem was that there was a directory of database backups (dbexport) in MariaDB's data directory (probably /var/lib/mysql/). When doing the SST, the donor scans the data directory to find the files to send. It saw the directory and assumed it belonged to a database, since that is what directories in the data directory are for. Removing the backup directory fixed the problem. As a best practice, don't put extra files under /var/lib/; programs keep their data files there, and messing with them can cause problems like this.
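As a quick check before retrying the SST, you can list what sits at the top level of the data directory and move anything that is not a database out of the way (a minimal sketch; the /var/lib/mysql path and the /root/dbexport-backup destination are assumptions):
ls -ld /var/lib/mysql/*/                            # every directory here is treated as a database schema
mv /var/lib/mysql/dbexport /root/dbexport-backup    # move the backup directory out of the datadir before retrying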
After the main problem was resolved, a new message was noticed in the logs:
[Warning] WSREP: Failed to prepare for incremental state transfer: Local state UUID (00000000-0000-0000-0000-000000000000) does not match group state UUID (359350ee-5c63-11e3-0800-6673d15135cd): 1 (Operation not permitted) at galera/src/replicator_str.cpp:prepare_for_IST():442. IST will be unavailable.
This message is normal. When a node joins a Galera cluster it will try to perform an IST (Incremental State Transfer) instead of a full SST (State Snapshot Transfer). If the node was previously part of the cluster and the gap between the state it had when it left and the cluster's current state is small enough, IST is available; it transfers only the differences between the node's state and the cluster's state, which is much faster than transferring all of the data. If the node was previously part of the cluster but left a long time ago, it will still need to do an SST. In this case, the joining node's state UUID was 00000000-0000-0000-0000-000000000000, which basically means it is new to the cluster. I run a MariaDB/Galera cluster and this message annoys me whenever IST is not available; it would be nice if it weren't a warning and were reworded. I'm not sure why "Operation not permitted" is in there, but it's nothing to worry about.
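If you want to see what state a joiner thinks it has, its Galera state file and the matching status variable show the same UUID:seqno pair (a sketch, assuming the default datadir used in the question):
cat /var/lib/mysql/grastate.dat                          # all-zero uuid with seqno -1 means no usable local state, so only SST is possible
mysql -e "SHOW STATUS LIKE 'wsrep_local_state_uuid'"     # the running node's view of the same UUID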
Additionally, it is recommended that you run an odd number of nodes to prevent split-brain conditions. If possible, add another MariaDB server to the cluster, or run garbd if you cannot. garbd acts as a node in the cluster without being a database server, so it lets you keep an odd number of nodes without needing to run another database server.
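For reference, a minimal garbd invocation for a cluster like the one above might look like this (addresses and group name taken from the configs in the question; running it on a separate third machine is an assumption):
garbd --address gcomm://10.XXX.XXX.161:4567,10.XXX.XXX.160:4567 --group galera_cluster --daemon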

In my situation, swapping the primary and secondary addresses in the cluster configuration (and correcting the node address) fixed the problem.
Before, on db1:
[mysqld]
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
query_cache_size=0
query_cache_type=0
bind-address=0.0.0.0
# Galera Provider Configuration
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_name="name"
wsrep_cluster_address="gcomm://37.x.x.104,37.x.x.117"
wsrep_sst_method=rsync
wsrep_node_address="37.x.x.104"
wsrep_node_name="db1"
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
log_error = /var/log/mysql/error.log
expire_logs_days = 10
max_binlog_size = 100M
On db2:
[mysqld]
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
query_cache_size=0
query_cache_type=0
bind-address=0.0.0.0
# Galera Provider Configuration
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_name="name"
wsrep_cluster_address="gcomm://37.x.x.104,37.x.x.117"
wsrep_sst_method=rsync
wsrep_node_address="37.x.x.104"
wsrep_node_name="db1"
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
log_error = /var/log/mysql/error.log
expire_logs_days = 10
max_binlog_size = 100M
I changed
wsrep_cluster_address="gcomm://37.x.x.104,37.x.x.117"
to
wsrep_cluster_address="gcomm://37.x.x.117,37.x.x.104"
and
wsrep_node_address="37.x.x.104"
to
wsrep_node_address="37.x.x.117"
and the cluster started!
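After restarting both nodes with the corrected addresses, a quick sanity check (hedged; a plain mysql client with suitable credentials is assumed) is to confirm the cluster size and local state on either node:
mysql -e "SHOW STATUS LIKE 'wsrep_cluster_size'"          # should report 2
mysql -e "SHOW STATUS LIKE 'wsrep_local_state_comment'"   # should report Synced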

Related

How can nextcloudcmd update changes on Nextcloud itself?

Nextcloud version 3.5.4-20220806.084713.fea986309-1.0~focal1
Using Qt 5.12.8, built against Qt 5.12.8
Using 'OpenSSL 1.1.1f 31 Mar 2020'
Running on Ubuntu 20.04.5 LTS, x86_64
The issue I am facing:
I am trying to access files and make changes to them using nextcloudcmd, but I don't see the changes on Nextcloud; they are only made locally.
The way I use nextcloudcmd:
rm -rf ~/Nextcloud && mkdir ~/Nextcloud
nextcloudcmd -u ***** -p '*****' -h ~/Nextcloud https://#ppp.woelkli.com
At this point it is able to sync the files. But when I make changes to the files, the changes are not pushed to Nextcloud itself. How can I push changes to the cloud?
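One thing worth trying (a sketch, not a confirmed fix): nextcloudcmd performs a single sync pass per invocation, so local edits made after a run are only uploaded the next time it is executed with the same arguments, for example:
echo "local edit" >> ~/Nextcloud/todo/todo-list.org
nextcloudcmd -u ***** -p '*****' -h ~/Nextcloud https://ppp.woelkli.com    # same invocation as above, re-run after editing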
=> Log of nextcloudcmd -u *** -p '***' -h ~/Nextcloud https://#ppp.woelkli.com:
01-14 10:03:27:344 [ info nextcloud.sync.accessmanager ]: 2 "" "https://#ppp.woelkli.com/ocs/v1.php/cloud/capabilities?format=json" has X-Request-ID "8220a8ea-876b-4532-b866-66c57b01177c"
01-14 10:03:27:344 [ info nextcloud.sync.networkjob ]: OCC::JsonApiJob created for "https://ppp.woelkli.com" + "ocs/v1.php/cloud/capabilities" ""
01-14 10:03:28:939 [ info nextcloud.sync.networkjob.jsonapi ]: JsonApiJob of QUrl("https://#ppp.woelkli.com/ocs/v1.php/cloud/capabilities?format=json") FINISHED WITH STATUS "OK"
01-14 10:03:28:939 [ debug default ] [ main(int, char**)::<lambda ]: Server capabilities QJsonObject({"activity":{"apiv2":["filters","filters-api","previews","rich-strings"]},"bruteforce":{"delay":0},"core":{"pollinterval":60,"reference-api":true,"reference-regex":"(\\s|\\n|^)(https?:\\/\\/)((?:[-A-Z0-9+_]+\\.)+[-A-Z]+(?:\\/[-A-Z0-9+&##%?=~_|!:,.;()]*)*)(\\s|\\n|$)","webdav-root":"remote.php/webdav"},"dav":{"bulkupload":"1.0","chunking":"1.0"},"external":{"v1":["sites","device","groups","redirect"]},"files":{"bigfilechunking":true,"blacklisted_files":[".htaccess"],"comments":true,"directEditing":{"etag":"c748e8fc588b54fc5af38c4481a19d20","url":"https://ppp.woelkli.com/ocs/v2.php/apps/files/api/v1/directEditing"},"undelete":true,"versioning":true},"files_sharing":{"api_enabled":true,"default_permissions":1,"federation":{"expire_date":{"enabled":true},"expire_date_supported":{"enabled":true},"incoming":true,"outgoing":true},"group":{"enabled":false,"expire_date":{"enabled":true}},"group_sharing":false,"public":{"enabled":true,"expire_date":{"enabled":false},"expire_date_internal":{"enabled":false},"expire_date_remote":{"enabled":false},"multiple_links":true,"password":{"askForOptionalPassword":false,"enforced":true},"send_mail":false,"upload":false,"upload_files_drop":false},"resharing":true,"sharebymail":{"enabled":true,"expire_date":{"enabled":true,"enforced":false},"password":{"enabled":true,"enforced":true},"send_password_by_mail":false,"upload_files_drop":{"enabled":true}},"sharee":{"always_show_unique":true,"query_lookup_default":false},"user":{"expire_date":{"enabled":true},"send_mail":false}},"metadataAvailable":{"gps":["/image\\/.*/"],"size":["/image\\/.*/"]},"notes":{"api_version":["0.2","1.3"],"version":"4.6.0"},"notifications":{"admin-notifications":["ocs","cli"],"ocs-endpoints":["list","get","delete","delete-all","icons","rich-strings","action-web","user-status"],"push":["devices","object-data","delete"]},"ocm":{"apiVersion":"1.0-proposal1","enabled":true,"endPoint":"https://ppp.woelkli.com/ocm","resourceTypes":[{"name":"file","protocols":{"webdav":"/public.php/webdav/"},"shareTypes":["user","group"]}]},"password_policy":{"api":{"generate":"https://ppp.woelkli.com/ocs/v2.php/apps/password_policy/api/v1/generate","validate":"https://ppp.woelkli.com/ocs/v2.php/apps/password_policy/api/v1/validate"},"enforceNonCommonPassword":true,"enforceNumericCharacters":true,"enforceSpecialCharacters":false,"enforceUpperLowerCase":true,"minLength":12},"provisioning_api":{"AccountPropertyScopesFederatedEnabled":true,"AccountPropertyScopesPublishedEnabled":true,"AccountPropertyScopesVersion":2,"version":"1.15.0"},"spreed":{"config":{"attachments":{"allowed":true,"folder":"/Talk"},"call":{"enabled":true},"chat":{"max-length":32000,"read-privacy":0},"conversations":{"can-create":true},"previews":{"max-gif-size":3145728},"signaling":{"hello-v2-token-key":"-----BEGIN PUBLIC KEY-----\nMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEZjYYA01asJ+h/1+YflsnNfwXBSGa\nz+4vunVgFMhBDhSRZJv51H2KyJWTszJW+n1vgdp8gjfy4KNPhyjmzPO/tw==\n-----END PUBLIC 
KEY-----\n","session-ping-limit":200}},"features":["audio","video","chat-v2","conversation-v4","guest-signaling","empty-group-room","guest-display-names","multi-room-users","favorites","last-room-activity","no-ping","system-messages","delete-messages","mention-flag","in-call-flags","conversation-call-flags","notification-levels","invite-groups-and-mails","locked-one-to-one-rooms","read-only-rooms","listable-rooms","chat-read-marker","chat-unread","webinary-lobby","start-call-flag","chat-replies","circles-support","force-mute","sip-support","sip-support-nopin","chat-read-status","phonebook-search","raise-hand","room-description","rich-object-sharing","temp-user-avatar-api","geo-location-sharing","voice-message-sharing","signaling-v3","publishing-permissions","clear-history","direct-mention-flag","notification-calls","conversation-permissions","rich-object-list-media","rich-object-delete","unified-search","chat-permission","silent-send","silent-call","send-call-notification","talk-polls","message-expiration","reactions","chat-reference-id"],"version":"15.0.2"},"theming":{"background":"https://ppp.woelkli.com/apps/theming/image/background?v=50","background-default":false,"background-plain":false,"color":"#0082c9","color-element":"#0082c9","color-element-bright":"#0082c9","color-element-dark":"#0082c9","color-text":"#ffffff","favicon":"https://ppp.woelkli.com/apps/theming/image/logo?useSvg=1&v=50","logo":"https://ppp.woelkli.com/apps/theming/image/logo?useSvg=1&v=50","logoheader":"https://ppp.woelkli.com/apps/theming/image/logo?useSvg=1&v=50","name":"wölkli","slogan":"Secure Cloud Storage in Switzerland","url":"https://woelkli.com"},"user_status":{"enabled":true,"supports_emoji":true},"weather_status":{"enabled":true}})
01-14 10:03:28:939 [ info nextcloud.sync.accessmanager ]: 2 "" "https://#ppp.woelkli.com/ocs/v2.php/apps/user_status/api/v1/user_status?format=json" has X-Request-ID "3c414428-18d2-4112-b01a-3e3653621175"
01-14 10:03:28:940 [ info nextcloud.sync.networkjob ]: OCC::JsonApiJob created for "https://ppp.woelkli.com" + "/ocs/v2.php/apps/user_status/api/v1/user_status" "OCC::UserStatusConnector"
01-14 10:03:28:940 [ info nextcloud.sync.accessmanager ]: 2 "" "https://#ppp.woelkli.com/ocs/v1.php/cloud/user?format=json" has X-Request-ID "bbc3bf4c-74fc-430d-81d7-138234341795"
01-14 10:03:28:940 [ info nextcloud.sync.networkjob ]: OCC::JsonApiJob created for "https://ppp.woelkli.com" + "ocs/v1.php/cloud/user" ""
01-14 10:03:29:078 [ info nextcloud.sync.networkjob.jsonapi ]: JsonApiJob of QUrl("https://#ppp.woelkli.com/ocs/v2.php/apps/user_status/api/v1/user_status?format=json") FINISHED WITH STATUS "OK"
01-14 10:03:29:298 [ info nextcloud.sync.networkjob.jsonapi ]: JsonApiJob of QUrl("https://#ppp.woelkli.com/ocs/v1.php/cloud/user?format=json") FINISHED WITH STATUS "OK"
01-14 10:03:29:302 [ info nextcloud.sync.engine ]: There are 74930769920 bytes available at "/home/alper/Nextcloud/"
01-14 10:03:29:302 [ info nextcloud.sync.engine ]: New sync (no sync journal exists)
01-14 10:03:29:302 [ info nextcloud.sync.engine ]: "Using Qt 5.12.8 SSL library OpenSSL 1.1.1f 31 Mar 2020 on Ubuntu 20.04.5 LTS"
01-14 10:03:29:302 [ info nextcloud.sync.database ]: sqlite3 version "3.31.1"
01-14 10:03:29:302 [ info nextcloud.sync.database ]: sqlite3 locking_mode= "exclusive"
01-14 10:03:29:302 [ info nextcloud.sync.database ]: sqlite3 journal_mode= "wal"
01-14 10:03:29:305 [ info nextcloud.sync.database ]: sqlite3 synchronous= "NORMAL"
01-14 10:03:29:312 [ info nextcloud.sync.database ]: Forcing remote re-discovery by deleting folder Etags
01-14 10:03:29:312 [ info nextcloud.sync.engine ]: NOT Using Selective Sync
01-14 10:03:29:312 [ info nextcloud.sync.engine ]: #### Discovery start ####################################################
01-14 10:03:29:312 [ info nextcloud.sync.engine ]: Server ""
01-14 10:03:29:312 [ info sync.discovery ]: STARTING "" OCC::ProcessDirectoryJob::NormalQuery "" OCC::ProcessDirectoryJob::NormalQuery
01-14 10:03:29:313 [ info nextcloud.sync.accessmanager ]: 6 "PROPFIND" "https://#ppp.woelkli.com/remote.php/dav/files/alper.alimoglu#gmail.com/" has X-Request-ID "0bd895d8-8ec8-429d-9a37-4dd85a2c89a9"
01-14 10:03:29:313 [ info nextcloud.sync.networkjob ]: OCC::LsColJob created for "https://ppp.woelkli.com" + "/" "OCC::DiscoverySingleDirectoryJob"
01-14 10:03:29:462 [ info nextcloud.sync.networkjob.lscol ]: LSCOL of QUrl("https://#ppp.woelkli.com/remote.php/dav/files/alper.alimoglu#gmail.com/") FINISHED WITH STATUS "OK"
01-14 10:03:29:463 [ warning default ]: System exclude list file could not be read: "/home/alper/Nextcloud/Nextcloud/.sync-exclude.lst"
01-14 10:03:29:463 [ info sync.discovery ]: Processing "Nextcloud" | valid: false/false/true | mtime: 0/0/1673085410 | size: 0/0/0 | etag: ""//"63b941e2324d9" | checksum: ""//"" | perm: ""//"DNVCKR" | fileid: ""//"99398452ock6rp1oyjad" | inode: 0/0/ | type: CSyncEnums::ItemTypeSkip/CSyncEnums::ItemTypeSkip/CSyncEnums::ItemTypeDirectory | e2ee: false/false | e2eeMangledName: ""/"" | file lock: not locked//not locked
01-14 10:03:29:463 [ info sync.discovery ]: OCC::SyncFileItem::LockStatus::UnlockedItem "" "" OCC::SyncFileItem::LockOwnerType::UserLock "" 0 0
01-14 10:03:29:463 [ info sync.discovery ]: Discovered "Nextcloud" CSyncEnums::CSYNC_INSTRUCTION_NEW OCC::SyncFileItem::Down CSyncEnums::ItemTypeDirectory
01-14 10:03:29:463 [ warning default ]: System exclude list file could not be read: "/home/alper/Nextcloud/Notes/.sync-exclude.lst"
01-14 10:03:29:463 [ info sync.discovery ]: Processing "Notes" | valid: false/false/true | mtime: 0/0/1673021853 | size: 0/0/0 | etag: ""//"63b8499d22325" | checksum: ""//"" | perm: ""//"DNVCKR" | fileid: ""//"99376158ock6rp1oyjad" | inode: 0/0/ | type: CSyncEnums::ItemTypeSkip/CSyncEnums::ItemTypeSkip/CSyncEnums::ItemTypeDirectory | e2ee: false/false | e2eeMangledName: ""/"" | file lock: not locked//not locked
01-14 10:03:29:463 [ info sync.discovery ]: OCC::SyncFileItem::LockStatus::UnlockedItem "" "" OCC::SyncFileItem::LockOwnerType::UserLock "" 0 0
01-14 10:03:29:464 [ info sync.discovery ]: Discovered "Notes" CSyncEnums::CSYNC_INSTRUCTION_NEW OCC::SyncFileItem::Down CSyncEnums::ItemTypeDirectory
01-14 10:03:29:464 [ warning default ]: System exclude list file could not be read: "/home/alper/Nextcloud/Talk/.sync-exclude.lst"
01-14 10:03:29:464 [ info sync.discovery ]: Processing "Talk" | valid: false/false/true | mtime: 0/0/1673021842 | size: 0/0/0 | etag: ""//"63b84992517bb" | checksum: ""//"" | perm: ""//"DNVCKR" | fileid: ""//"99376154ock6rp1oyjad" | inode: 0/0/ | type: CSyncEnums::ItemTypeSkip/CSyncEnums::ItemTypeSkip/CSyncEnums::ItemTypeDirectory | e2ee: false/false | e2eeMangledName: ""/"" | file lock: not locked//not locked
01-14 10:03:29:464 [ info sync.discovery ]: OCC::SyncFileItem::LockStatus::UnlockedItem "" "" OCC::SyncFileItem::LockOwnerType::UserLock "" 0 0
01-14 10:03:29:464 [ info sync.discovery ]: Discovered "Talk" CSyncEnums::CSYNC_INSTRUCTION_NEW OCC::SyncFileItem::Down CSyncEnums::ItemTypeDirectory
01-14 10:03:29:464 [ warning default ]: System exclude list file could not be read: "/home/alper/Nextcloud/todo/.sync-exclude.lst"
01-14 10:03:29:464 [ info sync.discovery ]: Processing "todo" | valid: false/false/true | mtime: 0/0/1673667233 | size: 0/0/0 | etag: ""//"63c222a1476e3" | checksum: ""//"" | perm: ""//"DNVCKR" | fileid: ""//"99376056ock6rp1oyjad" | inode: 0/0/ | type: CSyncEnums::ItemTypeSkip/CSyncEnums::ItemTypeSkip/CSyncEnums::ItemTypeDirectory | e2ee: false/false | e2eeMangledName: ""/"" | file lock: not locked//not locked
01-14 10:03:29:464 [ info sync.discovery ]: OCC::SyncFileItem::LockStatus::UnlockedItem "" "" OCC::SyncFileItem::LockOwnerType::UserLock "" 0 0
01-14 10:03:29:464 [ info sync.discovery ]: Discovered "todo" CSyncEnums::CSYNC_INSTRUCTION_NEW OCC::SyncFileItem::Down CSyncEnums::ItemTypeDirectory
01-14 10:03:29:464 [ info sync.discovery ]: STARTING "Nextcloud" OCC::ProcessDirectoryJob::NormalQuery "Nextcloud" OCC::ProcessDirectoryJob::ParentDontExist
01-14 10:03:29:464 [ info nextcloud.sync.accessmanager ]: 6 "PROPFIND" "https://#ppp.woelkli.com/remote.php/dav/files/alper.alimoglu#gmail.com/Nextcloud" has X-Request-ID "63605bb7-f72e-4a11-9796-af9d40240e0a"
01-14 10:03:29:464 [ info nextcloud.sync.networkjob ]: OCC::LsColJob created for "https://ppp.woelkli.com" + "/Nextcloud" "OCC::DiscoverySingleDirectoryJob"
01-14 10:03:29:464 [ info sync.discovery ]: STARTING "Notes" OCC::ProcessDirectoryJob::NormalQuery "Notes" OCC::ProcessDirectoryJob::ParentDontExist
01-14 10:03:29:465 [ info nextcloud.sync.accessmanager ]: 6 "PROPFIND" "https://#ppp.woelkli.com/remote.php/dav/files/alper.alimoglu#gmail.com/Notes" has X-Request-ID "a35d715f-29d1-4b0e-aae5-57c953d8c743"
01-14 10:03:29:465 [ info nextcloud.sync.networkjob ]: OCC::LsColJob created for "https://ppp.woelkli.com" + "/Notes" "OCC::DiscoverySingleDirectoryJob"
01-14 10:03:29:465 [ info sync.discovery ]: STARTING "Talk" OCC::ProcessDirectoryJob::NormalQuery "Talk" OCC::ProcessDirectoryJob::ParentDontExist
01-14 10:03:29:465 [ info nextcloud.sync.accessmanager ]: 6 "PROPFIND" "https://#ppp.woelkli.com/remote.php/dav/files/alper.alimoglu#gmail.com/Talk" has X-Request-ID "186d0a04-d2de-418b-8948-f4a5794788a4"
01-14 10:03:29:465 [ info nextcloud.sync.networkjob ]: OCC::LsColJob created for "https://ppp.woelkli.com" + "/Talk" "OCC::DiscoverySingleDirectoryJob"
01-14 10:03:29:465 [ info sync.discovery ]: STARTING "todo" OCC::ProcessDirectoryJob::NormalQuery "todo" OCC::ProcessDirectoryJob::ParentDontExist
01-14 10:03:29:465 [ info nextcloud.sync.accessmanager ]: 6 "PROPFIND" "https://#ppp.woelkli.com/remote.php/dav/files/alper.alimoglu#gmail.com/todo" has X-Request-ID "64b7aede-d893-423f-bf43-2431d708059a"
01-14 10:03:29:466 [ info nextcloud.sync.networkjob ]: OCC::LsColJob created for "https://ppp.woelkli.com" + "/todo" "OCC::DiscoverySingleDirectoryJob"
01-14 10:03:29:613 [ info nextcloud.sync.networkjob.lscol ]: LSCOL of QUrl("https://#ppp.woelkli.com/remote.php/dav/files/alper.alimoglu#gmail.com/Notes") FINISHED WITH STATUS "OK"
01-14 10:03:29:652 [ info nextcloud.sync.networkjob.lscol ]: LSCOL of QUrl("https://#ppp.woelkli.com/remote.php/dav/files/alper.alimoglu#gmail.com/Nextcloud") FINISHED WITH STATUS "OK"
01-14 10:03:29:789 [ info nextcloud.sync.networkjob.lscol ]: LSCOL of QUrl("https://#ppp.woelkli.com/remote.php/dav/files/alper.alimoglu#gmail.com/todo") FINISHED WITH STATUS "OK"
01-14 10:03:29:790 [ info sync.discovery ]: Processing "todo/.todo-list.org_archive" | valid: false/false/true | mtime: 0/0/1673083763 | size: 0/0/347 | etag: ""//"12af94e75dbfc087dc6b8038a1a7e131" | checksum: ""//"" | perm: ""//"WDNVR" | fileid: ""//"99398133ock6rp1oyjad" | inode: 0/0/ | type: CSyncEnums::ItemTypeSkip/CSyncEnums::ItemTypeSkip/CSyncEnums::ItemTypeFile | e2ee: false/false | e2eeMangledName: ""/"" | file lock: not locked//not locked
01-14 10:03:29:790 [ info sync.discovery ]: OCC::SyncFileItem::LockStatus::UnlockedItem "" "" OCC::SyncFileItem::LockOwnerType::UserLock "" 0 0
01-14 10:03:29:790 [ info sync.discovery ]: Discovered "todo/.todo-list.org_archive" CSyncEnums::CSYNC_INSTRUCTION_NEW OCC::SyncFileItem::Down CSyncEnums::ItemTypeFile
01-14 10:03:29:790 [ info sync.discovery ]: Processing "todo/todo-list.org" | valid: false/false/true | mtime: 0/0/1673666892 | size: 0/0/148 | etag: ""//"d145e38a7f9b55a53706cc101a635779" | checksum: ""//"" | perm: ""//"WDNVR" | fileid: ""//"99382200ock6rp1oyjad" | inode: 0/0/ | type: CSyncEnums::ItemTypeSkip/CSyncEnums::ItemTypeSkip/CSyncEnums::ItemTypeFile | e2ee: false/false | e2eeMangledName: ""/"" | file lock: not locked//not locked
01-14 10:03:29:790 [ info sync.discovery ]: OCC::SyncFileItem::LockStatus::UnlockedItem "" "" OCC::SyncFileItem::LockOwnerType::UserLock "" 0 0
01-14 10:03:29:790 [ info sync.discovery ]: Discovered "todo/todo-list.org" CSyncEnums::CSYNC_INSTRUCTION_NEW OCC::SyncFileItem::Down CSyncEnums::ItemTypeFile
01-14 10:03:29:790 [ info sync.discovery ]: Processing "todo/zoo" | valid: false/false/true | mtime: 0/0/1673667228 | size: 0/0/0 | etag: ""//"d8a5a3511a0e045530ebd37209a2af32" | checksum: ""//"SHA1:da39a3ee5e6b4b0d3255bfef95601890afd80709" | perm: ""//"WDNVR" | fileid: ""//"99781299ock6rp1oyjad" | inode: 0/0/ | type: CSyncEnums::ItemTypeSkip/CSyncEnums::ItemTypeSkip/CSyncEnums::ItemTypeFile | e2ee: false/false | e2eeMangledName: ""/"" | file lock: not locked//not locked
01-14 10:03:29:790 [ info sync.discovery ]: OCC::SyncFileItem::LockStatus::UnlockedItem "" "" OCC::SyncFileItem::LockOwnerType::UserLock "" 0 0
01-14 10:03:29:790 [ info sync.discovery ]: Discovered "todo/zoo" CSyncEnums::CSYNC_INSTRUCTION_NEW OCC::SyncFileItem::Down CSyncEnums::ItemTypeFile
01-14 10:03:29:849 [ info nextcloud.sync.networkjob.lscol ]: LSCOL of QUrl("https://#ppp.woelkli.com/remote.php/dav/files/alper.alimoglu#gmail.com/Talk") FINISHED WITH STATUS "OK"
01-14 10:03:29:849 [ info nextcloud.sync.engine ]: #### Discovery end #################################################### 536 ms
01-14 10:03:29:849 [ info nextcloud.sync.engine ]: #### Reconcile (aboutToPropagate) #################################################### 536 ms
01-14 10:03:29:849 [ info nextcloud.sync.engine ]: #### Reconcile (aboutToPropagate OK) #################################################### 536 ms
01-14 10:03:29:850 [ info nextcloud.sync.engine ]: #### Post-Reconcile end #################################################### 537 ms
01-14 10:03:29:854 [ info nextcloud.sync.propagator.root.directory ]: scheduleSelfOrChild OCC::PropagatorJob::NotYetStarted pending uploads 0 subjobs state OCC::PropagatorJob::NotYetStarted
01-14 10:03:29:854 [ info nextcloud.sync.propagator ]: Starting CSyncEnums::CSYNC_INSTRUCTION_NEW propagation of "Nextcloud" by OCC::PropagateLocalMkdir(0x5571c346afe0)
01-14 10:03:29:854 [ info nextcloud.sync.database ]: Updating file record for path: "Nextcloud" inode: 5645997 modtime: 1673085410 type: CSyncEnums::ItemTypeDirectory etag: "_invalid_" fileId: "99398452ock6rp1oyjad" remotePerm: "DNVCKR" fileSize: 0 checksum: "" e2eMangledName: "" isE2eEncrypted: false lock: false lock owner type: 0 lock owner: "" lock owner id: "" lock editor: ""
01-14 10:03:29:854 [ info nextcloud.sync.propagator ]: Completed propagation of "Nextcloud" by OCC::PropagateLocalMkdir(0x5571c346afe0) with status OCC::SyncFileItem::Success
01-14 10:03:29:858 [ info nextcloud.sync.propagator.root.directory ]: scheduleSelfOrChild OCC::PropagatorJob::Running pending uploads 0 subjobs state OCC::PropagatorJob::Running
01-14 10:03:29:858 [ info nextcloud.sync.propagator ]: Starting CSyncEnums::CSYNC_INSTRUCTION_NEW propagation of "Notes" by OCC::PropagateLocalMkdir(0x5571c3470360)
01-14 10:03:29:858 [ info nextcloud.sync.database ]: Updating file record for path: "Notes" inode: 5646004 modtime: 1673021853 type: CSyncEnums::ItemTypeDirectory etag: "_invalid_" fileId: "99376158ock6rp1oyjad" remotePerm: "DNVCKR" fileSize: 0 checksum: "" e2eMangledName: "" isE2eEncrypted: false lock: false lock owner type: 0 lock owner: "" lock owner id: "" lock editor: ""
01-14 10:03:29:858 [ info nextcloud.sync.propagator ]: Completed propagation of "Notes" by OCC::PropagateLocalMkdir(0x5571c3470360) with status OCC::SyncFileItem::Success
01-14 10:03:29:858 [ info nextcloud.sync.database ]: Updating file record for path: "Nextcloud" inode: 5645997 modtime: 1673085410 type: CSyncEnums::ItemTypeDirectory etag: "63b941e2324d9" fileId: "99398452ock6rp1oyjad" remotePerm: "DNVCKR" fileSize: 0 checksum: "" e2eMangledName: "" isE2eEncrypted: false lock: false lock owner type: 0 lock owner: "" lock owner id: "" lock editor: ""
01-14 10:03:29:858 [ info nextcloud.sync.propagator ]: PropagateDirectory::slotSubJobsFinished emit finished OCC::SyncFileItem::Success
01-14 10:03:29:861 [ info nextcloud.sync.propagator.root.directory ]: scheduleSelfOrChild OCC::PropagatorJob::Running pending uploads 0 subjobs state OCC::PropagatorJob::Running
01-14 10:03:29:862 [ info nextcloud.sync.propagator ]: Starting CSyncEnums::CSYNC_INSTRUCTION_NEW propagation of "Talk" by OCC::PropagateLocalMkdir(0x5571c34cf2e0)
01-14 10:03:29:862 [ info nextcloud.sync.database ]: Updating file record for path: "Talk" inode: 5646005 modtime: 1673021842 type: CSyncEnums::ItemTypeDirectory etag: "_invalid_" fileId: "99376154ock6rp1oyjad" remotePerm: "DNVCKR" fileSize: 0 checksum: "" e2eMangledName: "" isE2eEncrypted: false lock: false lock owner type: 0 lock owner: "" lock owner id: "" lock editor: ""
01-14 10:03:29:862 [ info nextcloud.sync.propagator ]: Completed propagation of "Talk" by OCC::PropagateLocalMkdir(0x5571c34cf2e0) with status OCC::SyncFileItem::Success
01-14 10:03:29:862 [ info nextcloud.sync.database ]: Updating file record for path: "Notes" inode: 5646004 modtime: 1673021853 type: CSyncEnums::ItemTypeDirectory etag: "63b8499d22325" fileId: "99376158ock6rp1oyjad" remotePerm: "DNVCKR" fileSize: 0 checksum: "" e2eMangledName: "" isE2eEncrypted: false lock: false lock owner type: 0 lock owner: "" lock owner id: "" lock editor: ""
01-14 10:03:29:862 [ info nextcloud.sync.propagator ]: PropagateDirectory::slotSubJobsFinished emit finished OCC::SyncFileItem::Success
01-14 10:03:29:866 [ info nextcloud.sync.propagator.root.directory ]: scheduleSelfOrChild OCC::PropagatorJob::Running pending uploads 0 subjobs state OCC::PropagatorJob::Running
01-14 10:03:29:866 [ info nextcloud.sync.propagator ]: Starting CSyncEnums::CSYNC_INSTRUCTION_NEW propagation of "todo" by OCC::PropagateLocalMkdir(0x5571c34c0730)
01-14 10:03:29:866 [ info nextcloud.sync.database ]: Updating file record for path: "todo" inode: 5646006 modtime: 1673667233 type: CSyncEnums::ItemTypeDirectory etag: "_invalid_" fileId: "99376056ock6rp1oyjad" remotePerm: "DNVCKR" fileSize: 0 checksum: "" e2eMangledName: "" isE2eEncrypted: false lock: false lock owner type: 0 lock owner: "" lock owner id: "" lock editor: ""
01-14 10:03:29:866 [ info nextcloud.sync.propagator ]: Completed propagation of "todo" by OCC::PropagateLocalMkdir(0x5571c34c0730) with status OCC::SyncFileItem::Success
01-14 10:03:29:866 [ info nextcloud.sync.database ]: Updating file record for path: "Talk" inode: 5646005 modtime: 1673021842 type: CSyncEnums::ItemTypeDirectory etag: "63b84992517bb" fileId: "99376154ock6rp1oyjad" remotePerm: "DNVCKR" fileSize: 0 checksum: "" e2eMangledName: "" isE2eEncrypted: false lock: false lock owner type: 0 lock owner: "" lock owner id: "" lock editor: ""
01-14 10:03:29:866 [ info nextcloud.sync.propagator ]: PropagateDirectory::slotSubJobsFinished emit finished OCC::SyncFileItem::Success
01-14 10:03:29:870 [ info nextcloud.sync.propagator.root.directory ]: scheduleSelfOrChild OCC::PropagatorJob::Running pending uploads 0 subjobs state OCC::PropagatorJob::Running
01-14 10:03:29:870 [ info nextcloud.sync.propagator ]: Starting CSyncEnums::CSYNC_INSTRUCTION_NEW propagation of "todo/.todo-list.org_archive" by OCC::PropagateDownloadFile(0x5571c34daf00)
01-14 10:03:29:870 [ info nextcloud.sync.accessmanager ]: 2 "" "https://#ppp.woelkli.com/remote.php/dav/files/alper.alimoglu#gmail.com/todo/.todo-list.org_archive" has X-Request-ID "4f03a59f-cc0f-4f03-9c04-6054f726bc94"
01-14 10:03:29:871 [ info nextcloud.sync.networkjob ]: OCC::GETFileJob created for "https://ppp.woelkli.com" + "/todo/.todo-list.org_archive" "OCC::PropagateDownloadFile"
01-14 10:03:29:874 [ info nextcloud.sync.propagator.root.directory ]: scheduleSelfOrChild OCC::PropagatorJob::Running pending uploads 0 subjobs state OCC::PropagatorJob::Running
01-14 10:03:29:874 [ info nextcloud.sync.propagator ]: Starting CSyncEnums::CSYNC_INSTRUCTION_NEW propagation of "todo/todo-list.org" by OCC::PropagateDownloadFile(0x5571c34757d0)
01-14 10:03:29:875 [ info nextcloud.sync.accessmanager ]: 2 "" "https://#ppp.woelkli.com/remote.php/dav/files/alper.alimoglu#gmail.com/todo/todo-list.org" has X-Request-ID "41f8110d-654b-428c-a7d6-35fed3d2667e"
01-14 10:03:29:875 [ info nextcloud.sync.networkjob ]: OCC::GETFileJob created for "https://ppp.woelkli.com" + "/todo/todo-list.org" "OCC::PropagateDownloadFile"
01-14 10:03:29:878 [ info nextcloud.sync.propagator.root.directory ]: scheduleSelfOrChild OCC::PropagatorJob::Running pending uploads 0 subjobs state OCC::PropagatorJob::Running
01-14 10:03:29:879 [ info nextcloud.sync.propagator ]: Starting CSyncEnums::CSYNC_INSTRUCTION_NEW propagation of "todo/zoo" by OCC::PropagateDownloadFile(0x7f29340a17b0)
01-14 10:03:29:879 [ info nextcloud.sync.accessmanager ]: 2 "" "https://#ppp.woelkli.com/remote.php/dav/files/alper.alimoglu#gmail.com/todo/zoo" has X-Request-ID "27d4ff9f-5e9a-46d8-82a0-372ed440001c"
01-14 10:03:29:879 [ info nextcloud.sync.networkjob ]: OCC::GETFileJob created for "https://ppp.woelkli.com" + "/todo/zoo" "OCC::PropagateDownloadFile"
01-14 10:03:29:882 [ info nextcloud.sync.propagator.root.directory ]: scheduleSelfOrChild OCC::PropagatorJob::Running pending uploads 0 subjobs state OCC::PropagatorJob::Running
01-14 10:03:30:096 [ info nextcloud.sync.checksums ]: Computing "SHA1" checksum of "/home/alper/Nextcloud/todo/..todo-list.org_archive.~29f9cf4b" in a thread
01-14 10:03:30:096 [ info nextcloud.sync.propagator.download ]: "alper.alimoglu#gmail.com" "alper.alimoglu#gmail.com" "alper.alimoglu#gmail.com#ppp.woelkli.com"
01-14 10:03:30:096 [ info nextcloud.sync.database ]: Updating file record for path: "todo/.todo-list.org_archive" inode: 5646007 modtime: 1673083763 type: CSyncEnums::ItemTypeFile etag: "12af94e75dbfc087dc6b8038a1a7e131" fileId: "99398133ock6rp1oyjad" remotePerm: "WDNVR" fileSize: 347 checksum: "SHA1:55cd297d7da6b9c4c7854805bbdb4791099e32a8" e2eMangledName: "" isE2eEncrypted: false lock: false lock owner type: 0 lock owner: "" lock owner id: "" lock editor: ""
01-14 10:03:30:097 [ info nextcloud.sync.propagator ]: Completed propagation of "todo/.todo-list.org_archive" by OCC::PropagateDownloadFile(0x5571c34daf00) with status OCC::SyncFileItem::Success
01-14 10:03:30:097 [ info nextcloud.sync.checksums ]: Computing "SHA1" checksum of "/home/alper/Nextcloud/todo/.zoo.~5c52cf8e" in a thread
01-14 10:03:30:097 [ info nextcloud.sync.checksums ]: Computing "SHA1" checksum of "/home/alper/Nextcloud/todo/.todo-list.org.~689a35f5" in a thread
01-14 10:03:30:098 [ info nextcloud.sync.propagator.download ]: "alper.alimoglu#gmail.com" "alper.alimoglu#gmail.com" "alper.alimoglu#gmail.com#ppp.woelkli.com"
01-14 10:03:30:098 [ info nextcloud.sync.database ]: Updating file record for path: "todo/zoo" inode: 5646009 modtime: 1673667228 type: CSyncEnums::ItemTypeFile etag: "d8a5a3511a0e045530ebd37209a2af32" fileId: "99781299ock6rp1oyjad" remotePerm: "WDNVR" fileSize: 0 checksum: "SHA1:da39a3ee5e6b4b0d3255bfef95601890afd80709" e2eMangledName: "" isE2eEncrypted: false lock: false lock owner type: 0 lock owner: "" lock owner id: "" lock editor: ""
01-14 10:03:30:098 [ info nextcloud.sync.propagator ]: Completed propagation of "todo/zoo" by OCC::PropagateDownloadFile(0x7f29340a17b0) with status OCC::SyncFileItem::Success
01-14 10:03:30:098 [ info nextcloud.sync.propagator.download ]: "alper.alimoglu#gmail.com" "alper.alimoglu#gmail.com" "alper.alimoglu#gmail.com#ppp.woelkli.com"
01-14 10:03:30:098 [ info nextcloud.sync.database ]: Updating file record for path: "todo/todo-list.org" inode: 5646008 modtime: 1673666892 type: CSyncEnums::ItemTypeFile etag: "d145e38a7f9b55a53706cc101a635779" fileId: "99382200ock6rp1oyjad" remotePerm: "WDNVR" fileSize: 148 checksum: "SHA1:d4199fbec083bd84270dcba2f62646509aa1c8f4" e2eMangledName: "" isE2eEncrypted: false lock: false lock owner type: 0 lock owner: "" lock owner id: "" lock editor: ""
01-14 10:03:30:098 [ info nextcloud.sync.propagator ]: Completed propagation of "todo/todo-list.org" by OCC::PropagateDownloadFile(0x5571c34757d0) with status OCC::SyncFileItem::Success
01-14 10:03:30:099 [ info nextcloud.sync.database ]: Updating file record for path: "todo" inode: 5646006 modtime: 1673667233 type: CSyncEnums::ItemTypeDirectory etag: "63c222a1476e3" fileId: "99376056ock6rp1oyjad" remotePerm: "DNVCKR" fileSize: 0 checksum: "" e2eMangledName: "" isE2eEncrypted: false lock: false lock owner type: 0 lock owner: "" lock owner id: "" lock editor: ""
01-14 10:03:30:099 [ info nextcloud.sync.propagator ]: PropagateDirectory::slotSubJobsFinished emit finished OCC::SyncFileItem::Success
01-14 10:03:30:099 [ info nextcloud.sync.propagator.root.directory ]: OCC::SyncFileItem::Success slotSubJobsFinished OCC::PropagatorJob::Running pending uploads 0 subjobs state OCC::PropagatorJob::Finished
01-14 10:03:30:102 [ info nextcloud.sync.propagator.root.directory ]: scheduleSelfOrChild OCC::PropagatorJob::Running pending uploads 0 subjobs state OCC::PropagatorJob::Finished
01-14 10:03:30:102 [ info nextcloud.sync.propagator ]: PropagateRootDirectory::slotDirDeletionJobsFinished emit finished OCC::SyncFileItem::Success
01-14 10:03:30:102 [ info nextcloud.sync.engine ]: Sync run took 789 ms
01-14 10:03:30:102 [ info nextcloud.sync.database ]: Closing DB "/home/alper/Nextcloud/.sync_33fdb314a5d8.db"

Media player aborts automatically (player-status-change -> status = "aborted") for an unknown reason

The integration was working perfectly for weeks, but we now have a media player status change happening for no apparent reason:
07:46:15:137 Agora-SDK [DEBUG]: Player 0 audio Status Changed Detected by Timer: init=>aborted
Chrome console extract
See below logs for more details:
onmessage # socket.js:40
EventTarget.dispatchEvent # sockjs.js:170
(anonymous) # sockjs.js:887
SockJS._transportMessage # sockjs.js:885
EventEmitter.emit # sockjs.js:86
WebSocketTransport.ws.onmessage # sockjs.js:2961
wrapFn # zone-evergreen.js:1191
invokeTask # zone-evergreen.js:391
runTask # zone-evergreen.js:168
invokeTask # zone-evergreen.js:465
invokeTask # zone-evergreen.js:1603
globalZoneAwareCallback # zone-evergreen.js:1629
client:185 ./node_modules/pdfjs-dist/build/pdf.js
Module not found: Error: Can't resolve 'zlib' in 'C:\codeRep\aboard\node_modules\pdfjs-dist\build'
warnings # client:185
onmessage # socket.js:40
EventTarget.dispatchEvent # sockjs.js:170
(anonymous) # sockjs.js:887
SockJS._transportMessage # sockjs.js:885
EventEmitter.emit # sockjs.js:86
WebSocketTransport.ws.onmessage # sockjs.js:2961
wrapFn # zone-evergreen.js:1191
invokeTask # zone-evergreen.js:391
runTask # zone-evergreen.js:168
invokeTask # zone-evergreen.js:465
invokeTask # zone-evergreen.js:1603
globalZoneAwareCallback # zone-evergreen.js:1629
meeting-board loaded
07:46:11:960 Agora-SDK [INFO]: Creating client, MODE: interop CODEC: vp8
07:46:11:962 Agora-SDK [INFO]: [23D44] Initializing AgoraRTC client, appId: 1790a8792d1e4ff9b1718dc756710f54.
07:46:11:963 Agora-SDK [INFO]: [23D44] Adding event handler on connection-state-change
07:46:11:965 Agora-SDK [INFO]: processId: process-79c97042-927c-45f6-83c0-48a79f7ecfbc
07:46:11:966 Agora-SDK [DEBUG]: Flush cached event reporting: 6
07:46:12:132 Agora-SDK [INFO]: [23D44] Added event handler on connection-state-change, network-quality
07:46:12:220 Agora-SDK [INFO]: [23D44] Adding event handler on connection-state-change
07:46:12:264 Agora-SDK [INFO]: [23D44] Added event handler on connection-state-change, network-quality
07:46:12:445 Agora-SDK [DEBUG]: Get UserAccount Successfully {uid: 10014, url: "https://sua-ap-web-1.agora.io/api/v1"}
07:46:12:445 Agora-SDK [DEBUG]: getUserAccount Success https://sua-ap-web-1.agora.io/api/v1 JXL5oSGFHauvWRdQVouA => 10014
07:46:12:453 Agora-SDK [DEBUG]: [23D44] Connect to choose_server: https://webrtc2-ap-web-1.agora.io/api/v1
07:46:12:469 Agora-SDK [DEBUG]: [23D44] Connect to choose_server: https://webrtc2-ap-web-2.agoraio.cn/api/v1
07:46:12:732 Agora-SDK [DEBUG]: [23D44] Get gateway address: (3) ["148-153-25-164.edge.agora.io:5886", "128-1-77-69.edge.agoraio.cn:5887", "128-1-78-92.edge.agora.io:5891"]
07:46:12:733 Agora-SDK [INFO]: [23D44] Joining channel: yqYjJu7a1slX8T9CxfNr
07:46:12:734 Agora-SDK [DEBUG]: [23D44] setParameter in distribution: {"event_uuid":"123"}
07:46:12:734 Agora-SDK [DEBUG]: [23D44] register client Channel yqYjJu7a1slX8T9CxfNr Uid 10014
07:46:12:735 Agora-SDK [DEBUG]: [23D44] start connect:148-153-25-164.edge.agora.io:5886
07:46:12:847 Agora-SDK [DEBUG]: [23D44] websockect opened: 148-153-25-164.edge.agora.io:5886
07:46:12:849 Agora-SDK [DEBUG]: [23D44] Connected to gateway server
07:46:12:850 Agora-SDK [DEBUG]: Turn config {mode: "auto", username: "test", credential: "111111", forceturn: false, url: "148-153-25-164.edge.agora.io", …}
07:46:12:930 Agora-SDK [INFO]: [23D44] Join channel yqYjJu7a1slX8T9CxfNr success, join with uid: JXL5oSGFHauvWRdQVouA.
07:46:12:932 Agora-SDK [DEBUG]: Create stream
07:46:12:936 Agora-SDK [DEBUG]: [JXL5oSGFHauvWRdQVouA] Requested access to local media
07:46:12:936 Agora-SDK [DEBUG]: GetUserMedia {"audio":{}}
07:46:12:937 Agora-SDK [INFO]: [23D44] Adding event handler on error
07:46:12:951 Agora-SDK [INFO]: [23D44] Added event handler on error, player-status-change, stream-added, stream-subscribed, stream-removed, peer-leave
07:46:13:132 Agora-SDK [DEBUG]: [JXL5oSGFHauvWRdQVouA] User has granted access to local media
accessAllowed
getUserMedia successfully
07:46:13:134 Agora-SDK [DEBUG]: [JXL5oSGFHauvWRdQVouA] play(). agora_local undefined
07:46:13:137 Agora-SDK [INFO]: [23D44] Publishing stream, uid JXL5oSGFHauvWRdQVouA
07:46:13:138 Agora-SDK [DEBUG]: [23D44] setClientRole to host
07:46:13:138 Agora-SDK [INFO]: [23D44] Adding event handler on stream-published
07:46:13:145 Agora-SDK [INFO]: [23D44] Added event handler on stream-published
07:46:13:172 Agora-SDK [DEBUG]: [23D44]Created webkitRTCPeerConnnection with config "{"iceServers":[{"url":"stun:webcs.agora.io:3478"},{"username":"test","credential":"111111","credentialType":"password","urls":"turn:148-153-25-164.edge.agora.io:5916?transport=udp"},{"username":"test","credential":"111111","credentialType":"password","urls":"turn:148-153-25-164.edge.agora.io:5916?transport=tcp"}],"sdpSemantics":"plan-b"}".
07:46:13:174 Agora-SDK [DEBUG]: [23D44] PeerConnection add stream : MediaStream {id: "SvvMFhU4Lx0jFjXTAK0DxYrmOLHyexu6dNKM", active: true, onaddtrack: null, onremovetrack: null, onactive: null, …}
07:46:13:190 Agora-SDK [DEBUG]: [23D44]srflx candidate : null relay candidate: null host candidate : a=candidate:2681806481 1 udp 2122262783 2a02:2788:2b4:58d:681b:8505:ebc2:9c35 50241 typ host generation 0 network-id 2 network-cost 10
07:46:13:197 Agora-SDK [DEBUG]: [23D44] SDP exchange in publish : send offer -- {messageType: "OFFER", sdp: "v=0
↵o=- 8826689166328964007 2 IN IP4 127.0.0.1
↵s…7242 label:9a3f62cf-5bd4-477a-b7b3-1fca64d6c2ca
↵", offererSessionId: 104, seq: 1, tiebreaker: 72973729}
07:46:13:218 Agora-SDK [INFO]: [23D44] Local stream published with uid JXL5oSGFHauvWRdQVouA
07:46:13:218 Agora-SDK [DEBUG]: [23D44] SDP exchange in publish : receive answer -- {answererSessionId: 106, messageType: "ANSWER", offererSessionId: 104, sdp: "v=0
↵o=- 0 0 IN IP4 127.0.0.1
↵s=AgoraGateway
↵t=0…bel:pa99gEbWHL
↵a=ssrc:44444 label:pa99gEbWHLa0
↵", seq: 1}
07:46:13:250 Agora-SDK [DEBUG]: [23D44] publish p2p connected: Map(1) {3292 => {…}}
07:46:13:251 Agora-SDK [DEBUG]: Flush cached event reporting: 4
07:46:13:251 Agora-SDK [INFO]: [23D44] Publish success, uid: JXL5oSGFHauvWRdQVouA
Publish local stream successfully
07:46:15:137 Agora-SDK [DEBUG]: Player 0 audio Status Changed Detected by Timer: init=>aborted
07:46:15:138 Agora-SDK [DEBUG]: Media Player Status Change {type: "player-status-change", playerId: 0, mediaType: "audio", status: "aborted", reason: "timer", …}
07:46:15:168 Agora-SDK [DEBUG]: Media Player Status Change {type: "player-status-change", playerId: 0, mediaType: "video", status: "play", reason: "playing", …}
Anyone ever experienced this? Any hint?
Thanks
Received an answer from Agora:
Typically, this is because of the browser's autoplay policy; you can find more information on how to work around it in Agora's documentation on autoplay.
I'll give it a try and update the post

RabbitMQ Shovel failed to connect

I have two RabbitMQ boxes, named centos (192.168.1.115) and devserver (192.168.1.126).
On 'centos' I have:
a queue named toshovel, bound to a topic exchange with routing key '#'.
I tested posting to the exchange, and messages were transferred to that queue.
On 'devserver' I have:
1. a topic exchange named bino.topic
2. a queue named bino.nms.idc3d, bound to bino.topic
This was also tested, including using pika to publish a message from 'centos' to 'devserver', so I am sure there is no firewall, permission, or authentication (user/password: esx/esx) problem.
Now I want to shovel from 'centos' to 'devserver'.
I tried adding a shovel on 'centos' per https://www.rabbitmq.com/shovel-dynamic.html:
rabbitmqctl set_parameter shovel my-shovel '{"src-protocol": "amqp091", "src-uri": "amqp://esx:esx#192.168.1.115", "src-queue": "toshovel", "dest-protocol": "amqp091", "dest-uri": "amqp://esx:esx#192.168.126/", "dest-queue": "bino.nms.idc3d"}'
but the centos log says:
From /var/log/rabbitmq/rabbit@centos.log:
2018-06-20 14:03:21.800 [info] <0.735.0> terminating static worker with {timeout,{gen_server,call,[<0.763.0>,connect,60000]}}
2018-06-20 14:03:21.800 [error] <0.735.0> ** Generic server <0.735.0> terminating
** Last message in was {'$gen_cast',init}
** When Server state == {state,undefined,undefined,undefined,undefined,{<<"/">>,<<"my-shovel">>},dynamic,#{ack_mode => on_confirm,dest => #{dest_queue => <<"bino.nms.idc3d">>,fields_fun => #Fun<rabbit_shovel_parameters.11.26683091>,module => rabbit_amqp091_shovel,props_fun => #Fun<rabbit_shovel_parameters.12.26683091>,resource_decl => #Fun<rabbit_shovel_parameters.10.26683091>,uris => ["amqp://esx:esx#192.168.126/"]},name => <<"my-shovel">>,reconnect_delay => 5,shovel_type => dynamic,source => #{delete_after => never,module => rabbit_amqp091_shovel,prefetch_count => 1000,queue => <<"toshovel">>,resource_decl => #Fun<rabbit_shovel_parameters.14.26683091>,source_exchange_key => <<>>,uris => ["amqp://esx:esx#192.168.1.115"]}},undefined,undefined,undefined,undefined,undefined}
** Reason for termination ==
** {timeout,{gen_server,call,[<0.763.0>,connect,60000]}}
2018-06-20 14:03:21.800 [warning] <0.743.0> closing AMQP connection <0.743.0> (192.168.1.115:48223 -> 192.168.1.115:5672 - Shovel my-shovel, vhost: '/', user: 'esx'):
client unexpectedly closed TCP connection
2018-06-20 14:03:21.800 [error] <0.735.0> CRASH REPORT Process <0.735.0> with 1 neighbours exited with reason: {timeout,{gen_server,call,[<0.763.0>,connect,60000]}} in gen_server2:terminate/3 line 1166
2018-06-20 14:03:21.801 [error] <0.410.0> Supervisor {<0.410.0>,rabbit_shovel_dyn_worker_sup} had child {<<"/">>,<<"my-shovel">>} started with rabbit_shovel_worker:start_link(dynamic, {<<"/">>,<<"my-shovel">>}, [{<<"dest-protocol">>,<<"amqp091">>},{<<"dest-queue">>,<<"bino.nms.idc3d">>},{<<"dest-uri">>,<<"a...">>},...]) at <0.735.0> exit with reason {timeout,{gen_server,call,[<0.763.0>,connect,60000]}} in context child_terminated
2018-06-20 14:03:21.802 [error] <0.738.0> ** Generic server <0.738.0> terminating
** Last message in was {'EXIT',<0.735.0>,{timeout,{gen_server,call,[<0.763.0>,connect,60000]}}}
** When Server state == {state,amqp_network_connection,{state,#Port<0.29325>,<<"client 192.168.1.115:48223 -> 192.168.1.115:5672">>,10,<0.744.0>,131072,<0.737.0>,undefined,false},<0.742.0>,{amqp_params_network,<<"esx">>,<<"esx">>,<<"/">>,"192.168.1.115",5672,2047,0,10,60000,none,[#Fun<amqp_uri.12.90191702>,#Fun<amqp_uri.12.90191702>],[{<<"connection_name">>,longstr,<<"Shovel my-shovel">>}],[]},2047,[{<<"capabilities">>,table,[{<<"publisher_confirms">>,bool,true},{<<"exchange_exchange_bindings">>,bool,true},{<<"basic.nack">>,bool,true},{<<"consumer_cancel_notify">>,bool,true},{<<"connection.blocked">>,bool,true},{<<"consumer_priorities">>,bool,true},{<<"authentication_failure_close">>,bool,true},{<<"per_consumer_qos">>,bool,true},{<<"direct_reply_to">>,bool,true}]},{<<"cluster_name">>,longstr,<<"rabbit#centos">>},{<<"copyright">>,longstr,<<"Copyright (C) 2007-2018 Pivotal Software, Inc.">>},{<<"information">>,longstr,<<"Licensed under the MPL. See http://www.rabbitmq.com/">>},{<<"platform">>,longstr,<<"Erlang/OTP 20.3.4">>},{<<"product">>,longstr,<<"RabbitMQ">>},{<<"version">>,longstr,<<"3.7.5">>}],none,false}
** Reason for termination ==
** "stopping because dependent process <0.735.0> died: {timeout,\n {gen_server,call,\n [<0.763.0>,connect,\n 60000]}}"
2018-06-20 14:03:21.802 [error] <0.738.0> CRASH REPORT Process <0.738.0> with 0 neighbours exited with reason: "stopping because dependent process <0.735.0> died: {timeout,\n {gen_server,call,\n [<0.763.0>,connect,\n 60000]}}" in gen_server:handle_common_reply/8 line 726
2018-06-20 14:03:21.802 [error] <0.752.0> Supervisor {<0.752.0>,amqp_channel_sup} had child channel started with amqp_channel:start_link(network, <0.738.0>, 1, <0.753.0>, {<<"client 192.168.1.115:48223 -> 192.168.1.115:5672">>,1}) at <0.755.0> exit with reason {timeout,{gen_server,call,[<0.763.0>,connect,60000]}} in context child_terminated
2018-06-20 14:03:21.802 [error] <0.752.0> Supervisor {<0.752.0>,amqp_channel_sup} had child channel started with amqp_channel:start_link(network, <0.738.0>, 1, <0.753.0>, {<<"client 192.168.1.115:48223 -> 192.168.1.115:5672">>,1}) at <0.755.0> exit with reason reached_max_restart_intensity in context shutdown
2018-06-20 14:03:21.803 [error] <0.736.0> Supervisor {<0.736.0>,amqp_connection_sup} had child connection started with amqp_gen_connection:start_link(<0.737.0>, {amqp_params_network,<<"esx">>,<<"esx">>,<<"/">>,"192.168.1.115",5672,2047,0,10,60000,none,[#Fun<am..>,...],...}) at <0.738.0> exit with reason "stopping because dependent process <0.735.0> died: {timeout,\n {gen_server,call,\n [<0.763.0>,connect,\n 60000]}}" in context child_terminated
2018-06-20 14:03:21.803 [error] <0.736.0> Supervisor {<0.736.0>,amqp_connection_sup} had child connection started with amqp_gen_connection:start_link(<0.737.0>, {amqp_params_network,<<"esx">>,<<"esx">>,<<"/">>,"192.168.1.115",5672,2047,0,10,60000,none,[#Fun<am..>,...],...}) at <0.738.0> exit with reason reached_max_restart_intensity in context shutdown
2018-06-20 14:03:26.865 [info] <0.835.0> accepting AMQP connection <0.835.0> (192.168.1.115:47801 -> 192.168.1.115:5672)
2018-06-20 14:03:26.934 [info] <0.835.0> Connection <0.835.0> (192.168.1.115:47801 -> 192.168.1.115:5672) has a client-provided name: Shovel my-shovel
2018-06-20 14:03:26.935 [info] <0.835.0> connection <0.835.0> (192.168.1.115:47801 -> 192.168.1.115:5672 - Shovel my-shovel): user 'esx' authenticated and granted access to vhost '/'
2018-06-20 14:04:26.938 [info] <0.827.0> terminating static worker with {timeout,{gen_server,call,[<0.855.0>,connect,60000]}}
2018-06-20 14:04:26.938 [error] <0.827.0> ** Generic server <0.827.0> terminating
** Last message in was {'$gen_cast',init}
** When Server state == {state,undefined,undefined,undefined,undefined,{<<"/">>,<<"my-shovel">>},dynamic,#{ack_mode => on_confirm,dest => #{dest_queue => <<"bino.nms.idc3d">>,fields_fun => #Fun<rabbit_shovel_parameters.11.26683091>,module => rabbit_amqp091_shovel,props_fun => #Fun<rabbit_shovel_parameters.12.26683091>,resource_decl => #Fun<rabbit_shovel_parameters.10.26683091>,uris => ["amqp://esx:esx#192.168.126/"]},name => <<"my-shovel">>,reconnect_delay => 5,shovel_type => dynamic,source => #{delete_after => never,module => rabbit_amqp091_shovel,prefetch_count => 1000,queue => <<"toshovel">>,resource_decl => #Fun<rabbit_shovel_parameters.14.26683091>,source_exchange_key => <<>>,uris => ["amqp://esx:esx#192.168.1.115"]}},undefined,undefined,undefined,undefined,undefined}
** Reason for termination ==
** {timeout,{gen_server,call,[<0.855.0>,connect,60000]}}
2018-06-20 14:04:26.939 [warning] <0.835.0> closing AMQP connection <0.835.0> (192.168.1.115:47801 -> 192.168.1.115:5672 - Shovel my-shovel, vhost: '/', user: 'esx'):
client unexpectedly closed TCP connection
2018-06-20 14:04:26.939 [error] <0.827.0> CRASH REPORT Process <0.827.0> with 1 neighbours exited with reason: {timeout,{gen_server,call,[<0.855.0>,connect,60000]}} in gen_server2:terminate/3 line 1166
2018-06-20 14:04:26.939 [error] <0.410.0> Supervisor {<0.410.0>,rabbit_shovel_dyn_worker_sup} had child {<<"/">>,<<"my-shovel">>} started with rabbit_shovel_worker:start_link(dynamic, {<<"/">>,<<"my-shovel">>}, [{<<"dest-protocol">>,<<"amqp091">>},{<<"dest-queue">>,<<"bino.nms.idc3d">>},{<<"dest-uri">>,<<"a...">>},...]) at <0.827.0> exit with reason {timeout,{gen_server,call,[<0.855.0>,connect,60000]}} in context child_terminated
2018-06-20 14:04:26.940 [error] <0.830.0> ** Generic server <0.830.0> terminating
** Last message in was {'EXIT',<0.827.0>,{timeout,{gen_server,call,[<0.855.0>,connect,60000]}}}
** When Server state == {state,amqp_network_connection,{state,#Port<0.29425>,<<"client 192.168.1.115:47801 -> 192.168.1.115:5672">>,10,<0.836.0>,131072,<0.829.0>,undefined,false},<0.834.0>,{amqp_params_network,<<"esx">>,<<"esx">>,<<"/">>,"192.168.1.115",5672,2047,0,10,60000,none,[#Fun<amqp_uri.12.90191702>,#Fun<amqp_uri.12.90191702>],[{<<"connection_name">>,longstr,<<"Shovel my-shovel">>}],[]},2047,[{<<"capabilities">>,table,[{<<"publisher_confirms">>,bool,true},{<<"exchange_exchange_bindings">>,bool,true},{<<"basic.nack">>,bool,true},{<<"consumer_cancel_notify">>,bool,true},{<<"connection.blocked">>,bool,true},{<<"consumer_priorities">>,bool,true},{<<"authentication_failure_close">>,bool,true},{<<"per_consumer_qos">>,bool,true},{<<"direct_reply_to">>,bool,true}]},{<<"cluster_name">>,longstr,<<"rabbit#centos">>},{<<"copyright">>,longstr,<<"Copyright (C) 2007-2018 Pivotal Software, Inc.">>},{<<"information">>,longstr,<<"Licensed under the MPL. See http://www.rabbitmq.com/">>},{<<"platform">>,longstr,<<"Erlang/OTP 20.3.4">>},{<<"product">>,longstr,<<"RabbitMQ">>},{<<"version">>,longstr,<<"3.7.5">>}],none,false}
** Reason for termination ==
** "stopping because dependent process <0.827.0> died: {timeout,\n {gen_server,call,\n [<0.855.0>,connect,\n 60000]}}"
2018-06-20 14:04:26.940 [error] <0.830.0> CRASH REPORT Process <0.830.0> with 0 neighbours exited with reason: "stopping because dependent process <0.827.0> died: {timeout,\n {gen_server,call,\n [<0.855.0>,connect,\n 60000]}}" in gen_server:handle_common_reply/8 line 726
2018-06-20 14:04:26.941 [error] <0.844.0> Supervisor {<0.844.0>,amqp_channel_sup} had child channel started with amqp_channel:start_link(network, <0.830.0>, 1, <0.846.0>, {<<"client 192.168.1.115:47801 -> 192.168.1.115:5672">>,1}) at <0.847.0> exit with reason {timeout,{gen_server,call,[<0.855.0>,connect,60000]}} in context child_terminated
2018-06-20 14:04:26.941 [error] <0.844.0> Supervisor {<0.844.0>,amqp_channel_sup} had child channel started with amqp_channel:start_link(network, <0.830.0>, 1, <0.846.0>, {<<"client 192.168.1.115:47801 -> 192.168.1.115:5672">>,1}) at <0.847.0> exit with reason reached_max_restart_intensity in context shutdown
2018-06-20 14:04:26.941 [error] <0.828.0> Supervisor {<0.828.0>,amqp_connection_sup} had child connection started with amqp_gen_connection:start_link(<0.829.0>, {amqp_params_network,<<"esx">>,<<"esx">>,<<"/">>,"192.168.1.115",5672,2047,0,10,60000,none,[#Fun<am..>,...],...}) at <0.830.0> exit with reason "stopping because dependent process <0.827.0> died: {timeout,\n {gen_server,call,\n [<0.855.0>,connect,\n 60000]}}" in context child_terminated
2018-06-20 14:04:26.942 [error] <0.828.0> Supervisor {<0.828.0>,amqp_connection_sup} had child connection started with amqp_gen_connection:start_link(<0.829.0>, {amqp_params_network,<<"esx">>,<<"esx">>,<<"/">>,"192.168.1.115",5672,2047,0,10,60000,none,[#Fun<am..>,...],...}) at <0.830.0> exit with reason reached_max_restart_intensity in context shutdown
From /var/log/rabbitmq/log/crash.log:
2018-06-20 14:04:40 =SUPERVISOR REPORT====
Supervisor: {<0.914.0>,amqp_connection_sup}
Context: shutdown
Reason: reached_max_restart_intensity
Offender: [{pid,<0.916.0>},{name,connection},{mfargs,{amqp_gen_connection,start_link,[<0.915.0>,{amqp_params_network,<<"esx">>,<<"esx">>,<<"/">>,"192.168.1.115",5672,2047,0,10,60000,none,[#Fun<amqp_uri.12.90191702>,#Fun<amqp_uri.12.90191702>],[{<<"connection_name">>,longstr,<<"Shovel my-shovel">>}],[]}]}},{restart_type,intrinsic},{shutdown,brutal_kill},{child_type,worker}]
Kindly give me a clue as to what is going wrong here.

RabbitMQ crashing during restart

When I try to start my RabbitMQ server, I get the following error in log/*-sasl.log:
=CRASH REPORT==== 10-Apr-2013::23:24:32 ===
crasher:
initial call: application_master:init/4
pid: <0.69.0>
registered_name: []
exception exit: {bad_return,
{{rabbit,start,[normal,[]]},
{'EXIT',
{undef,
[{os_mon_mib,module_info,[attributes],[]},
{rabbit_misc,module_attributes,1,
[{file,"src/rabbit_misc.erl"},
{line,760}]},
{rabbit_misc,
'-all_module_attributes/1-fun-0-',3,
[{file,"src/rabbit_misc.erl"},
{line,779}]},
{lists,foldl,3,
[{file,"lists.erl"},{line,1197}]},
{rabbit,boot_steps,0,
[{file,"src/rabbit.erl"},{line,441}]},
{rabbit,start,2,
[{file,"src/rabbit.erl"},{line,356}]},
{application_master,start_it_old,4,
[{file,"application_master.erl"},
{line,274}]}]}}}}
in function application_master:init/4 (application_master.erl, line 138)
ancestors: [<0.68.0>]
messages: [{'EXIT',<0.70.0>,normal}]
links: [<0.68.0>,<0.7.0>]
dictionary: []
trap_exit: true
status: running
heap_size: 2584
stack_size: 24
reductions: 227
neighbours:
startup_log has:
{error_logger,{{2013,4,11},{2,48,36}},std_error,"File operation error: eacces. Target: /usr/lib64/erlang/lib/otp_mibs-1.0.7/ebin. Function: read_file_info. Process: code_server."}
=ERROR REPORT==== 11-Apr-2013::02:48:36 ===
File operation error: eacces. Target: /usr/lib64/erlang/lib/otp_mibs-1.0.7/ebin. Function: read_file_info. Process: code_server.
Activating RabbitMQ plugins ...
=ERROR REPORT==== 11-Apr-2013::02:48:36 ===
File operation error: eacces. Target: /usr/lib64/erlang/lib/otp_mibs-1.0.7. Function: list_dir. Process: application_controller.
=ERROR REPORT==== 11-Apr-2013::02:48:36 ===
File operation error: eacces. Target: /usr/lib64/erlang/lib/otp_mibs-1.0.7. Function: list_dir. Process: application_controller.
Skipping /usr/lib64/erlang/lib/os_mon-2.2.8/ebin/os_mon_mib.beam (unreadable)
=ERROR REPORT==== 11-Apr-2013::02:48:36 ===
File operation error: eacces. Target: /usr/lib64/erlang/lib/otp_mibs-1.0.7. Function: list_dir. Process: systools_make.
=ERROR REPORT==== 11-Apr-2013::02:48:36 ===
File operation error: eacces. Target: /usr/lib64/erlang/lib/otp_mibs-1.0.7. Function: list_dir. Process: systools_make.
startup_err has:
Erlang has closed
Crash dump was written to: erl_crash.dump
Kernel pid terminated (application_controller) ({application_start_failure,rabbit,{bad_return,{{rabbit,start,[normal,[]]},{'EXIT',{undef,[{os_mon_mib,module_info,[attributes],[]},{rabbit_misc,module
Any help understanding the cause of the error would be great. Thanks.
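The eacces and "unreadable" messages in startup_log point at file permissions on the Erlang library directories. A minimal check, assuming RabbitMQ runs as the usual 'rabbitmq' system user (an assumption on my part), would be something like:
# can the rabbitmq user read the paths named in the errors?
sudo -u rabbitmq ls -ld /usr/lib64/erlang/lib/otp_mibs-1.0.7/ebin
sudo -u rabbitmq ls -l /usr/lib64/erlang/lib/os_mon-2.2.8/ebin/os_mon_mib.beam
# if these fail with "Permission denied", restoring world-readable permissions is one option
chmod -R o+rX /usr/lib64/erlang/lib/otp_mibs-1.0.7 /usr/lib64/erlang/lib/os_mon-2.2.8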

oVirt: create ISO/NFS storage domain error

I ran into a problem while creating a "New Domain" ISO/NFS storage domain; it prints "Error while executing action New NFS Storage Domain: Storage domain remote path not mounted" and the error code is 477.
I followed http://wiki.ovirt.org/wiki/Troubleshooting_NFS_Storage_Issues and found that the vdsm user currently cannot use mount:
"mount: only root can do that"
The versions I use:
oVirt Engine Version: 3.1.0-2.fc17
oVirt Node Hypervisor 2.5.4-0.1.fc17
The error log:
2012-11-08 09:15:00,004 INFO [org.ovirt.engine.core.bll.AutoRecoveryManager] (QuartzScheduler_Worker-77) Checking autorecoverable storage domains done
2012-11-08 09:17:28,920 WARN [org.ovirt.engine.core.bll.GetConfigurationValueQuery] (ajp--0.0.0.0-8009-2) calling GetConfigurationValueQuery (StorageDomainNameSizeLimit) with null version,
using default general for version
2012-11-08 09:17:29,333 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ValidateStorageServerConnectionVDSCommand] (ajp--0.0.0.0-8009-3) [7720b88f] START, ValidateStorageServerConnectionVDSCommand(vdsId = 12bcf124-29a4-11e2-bcba-00505680002a, storagePoolId = 00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList = [{ id: 6556c55d-42a4-4dcc-832c-4d8987ebe6bd, connection: 200.200.101.219:/usr/lwq/iso };]), log id: 52777a80
2012-11-08 09:17:29,388 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ValidateStorageServerConnectionVDSCommand] (ajp--0.0.0.0-8009-3) [7720b88f] FINISH, ValidateStorageServerConnectionVDSCommand, return: {6556c55d-42a4-4dcc-832c-4d8987ebe6bd=0}, log id: 52777a80
2012-11-08 09:17:29,392 INFO [org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand] (ajp--0.0.0.0-8009-3) [7720b88f] Running command: AddStorageServerConnectionCommand internal: false. Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: System
2012-11-08 09:17:29,404 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (ajp--0.0.0.0-8009-3) [7720b88f] START, ConnectStorageServerVDSCommand(vdsId = 12bcf124-29a4-11e2-bcba-00505680002a, storagePoolId = 00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList = [{ id: 6556c55d-42a4-4dcc-832c-4d8987ebe6bd, connection: 200.200.101.219:/usr/lwq/iso };]), log id: 36cb94f
2012-11-08 09:17:29,656 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (ajp--0.0.0.0-8009-3) [7720b88f] FINISH, ConnectStorageServerVDSCommand, return: {6556c55d-42a4-4dcc-832c-4d8987ebe6bd=477}, log id: 36cb94f
2012-11-08 09:17:29,658 ERROR [org.ovirt.engine.core.bll.storage.NFSStorageHelper] (ajp--0.0.0.0-8009-3) [7720b88f] The connection with details 200.200.101.219:/usr/lwq/iso failed because of
error code 477 and error message is: 477
2012-11-08 09:17:29,717 INFO [org.ovirt.engine.core.bll.storage.AddNFSStorageDomainCommand] (ajp--0.0.0.0-8009-11) [1661aa36] Running command: AddNFSStorageDomainCommand internal: false. En
tities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: System
2012-11-08 09:17:29,740 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (ajp--0.0.0.0-8009-11) [1661aa36] START, CreateStorageDomainVDSCommand(vdsId = 12bcf12
4-29a4-11e2-bcba-00505680002a, storageDomain=org.ovirt.engine.core.common.businessentities.storage_domain_static#4a900545, args=200.200.101.219:/usr/lwq/iso), log id: 50b803a0
2012-11-08 09:17:35,233 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (ajp--0.0.0.0-8009-11) [1661aa36] Failed in CreateStorageDomainVDS method
2012-11-08 09:17:35,234 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (ajp--0.0.0.0-8009-11) [1661aa36] Error code StorageDomainFSNotMounted and error message VDSGeneri
cException: VDSErrorException: Failed to CreateStorageDomainVDS, error = Storage domain remote path not mounted: ('/rhev/data-center/mnt/200.200.101.219:_usr_lwq_iso',)
2012-11-08 09:17:35,260 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (ajp--0.0.0.0-8009-11) [1661aa36] Command org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageD
omainVDSCommand return value
As far as I know, VDSM uses sudo to run privileged commands, but that mount error is still strange. Can you share the vdsm log?
Also, you may get more attention on the engine-users or vdsm mailing lists.
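For reference, the NFS troubleshooting page linked above essentially comes down to the export options and the ownership on the NFS server side. A rough sketch of what is usually checked, assuming the standard vdsm/kvm uid:gid of 36:36 and the export path from the log (both assumptions on my part):
# on the NFS server (200.200.101.219)
# /etc/exports
/usr/lwq/iso *(rw,sync,no_subtree_check)
# ownership and permissions that VDSM expects on the export
chown -R 36:36 /usr/lwq/iso
chmod 0755 /usr/lwq/iso
exportfs -ra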