I am trying to build the meta-qt5 krogoth branch, but I am getting the following error from qtwebengine's 'do_populate_sysroot' task.
ERROR: qtwebengine-5.6.0+gitAUTOINC+643aa579fc_8252b18aa3-r0 do_populate_sysroot: QA Issue: Qt5WebEngineCore.pc failed sanity test (tmpdir) in path /home/yusuf/yocto-krogoth/poky/qt5Toolchain/tmp/work/cortexa7hf-neon-vfpv4-poky-linux-gnueabi/qtwebengine/5.6.0+gitAUTOINC+643aa579fc_8252b18aa3-r0/sysroot-destdir/usr/lib/pkgconfig [pkgconfig]
ERROR: qtwebengine-5.6.0+gitAUTOINC+643aa579fc_8252b18aa3-r0 do_populate_sysroot: QA staging was broken by the package built above
ERROR: qtwebengine-5.6.0+gitAUTOINC+643aa579fc_8252b18aa3-r0 do_populate_sysroot: Function failed: do_qa_staging
ERROR: Logfile of failure stored in: /home/yusuf/yocto-krogoth/poky/qt5Toolchain/tmp/work/cortexa7hf-neon-vfpv4-poky-linux-gnueabi/qtwebengine/5.6.0+gitAUTOINC+643aa579fc_8252b18aa3-r0/temp/log.do_populate_sysroot.2443
ERROR: Task 878 (/home/yusuf/yocto-krogoth/poky/meta-qt5/recipes-qt/qt5/qtwebengine_git.bb, do_populate_sysroot) failed with exit code '1'
And this is the 'log.do_populate_sysroot.2443' file:
DEBUG: Executing python function sstate_task_prefunc
DEBUG: Python function sstate_task_prefunc finished
DEBUG: Executing python function do_populate_sysroot
DEBUG: Executing shell function sysroot_stage_all
0 blocks
0 blocks
0 blocks
DEBUG: Shell function sysroot_stage_all finished
DEBUG: Executing python function sysroot_strip
DEBUG: runstrip: 'arm-poky-linux-gnueabi-strip' --remove-section=.comment --remove-section=.note --strip-unneeded '/home/yusuf/yocto-krogoth/poky/qt5Toolchain/tmp/work/cortexa7hf-neon-vfpv4-poky-linux-gnueabi/qtwebengine/5.6.0+gitAUTOINC+643aa579fc_8252b18aa3-r0/sysroot-destdir/usr/lib/libQt5WebEngineWidgets.so.5.6.1'
DEBUG: runstrip: 'arm-poky-linux-gnueabi-strip' --remove-section=.comment --remove-section=.note '/home/yusuf/yocto-krogoth/poky/qt5Toolchain/tmp/work/cortexa7hf-neon-vfpv4-poky-linux-gnueabi/qtwebengine/5.6.0+gitAUTOINC+643aa579fc_8252b18aa3-r0/sysroot-destdir/usr/share/qt5/examples/webenginewidgets/markdowneditor/markdowneditor'
DEBUG: runstrip: 'arm-poky-linux-gnueabi-strip' --remove-section=.comment --remove-section=.note '/home/yusuf/yocto-krogoth/poky/qt5Toolchain/tmp/work/cortexa7hf-neon-vfpv4-poky-linux-gnueabi/qtwebengine/5.6.0+gitAUTOINC+643aa579fc_8252b18aa3-r0/sysroot-destdir/usr/share/qt5/examples/webenginewidgets/contentmanipulation/contentmanipulation'
DEBUG: runstrip: 'arm-poky-linux-gnueabi-strip' --remove-section=.comment --remove-section=.note --strip-unneeded '/home/yusuf/yocto-krogoth/poky/qt5Toolchain/tmp/work/cortexa7hf-neon-vfpv4-poky-linux-gnueabi/qtwebengine/5.6.0+gitAUTOINC+643aa579fc_8252b18aa3-r0/sysroot-destdir/usr/lib/qt5/qml/QtWebEngine/experimental/libqtwebengineexperimentalplugin.so'
DEBUG: runstrip: 'arm-poky-linux-gnueabi-strip' --remove-section=.comment --remove-section=.note '/home/yusuf/yocto-krogoth/poky/qt5Toolchain/tmp/work/cortexa7hf-neon-vfpv4-poky-linux-gnueabi/qtwebengine/5.6.0+gitAUTOINC+643aa579fc_8252b18aa3-r0/sysroot-destdir/usr/share/qt5/examples/webenginewidgets/simplebrowser/simplebrowser'
DEBUG: runstrip: 'arm-poky-linux-gnueabi-strip' --remove-section=.comment --remove-section=.note '/home/yusuf/yocto-krogoth/poky/qt5Toolchain/tmp/work/cortexa7hf-neon-vfpv4-poky-linux-gnueabi/qtwebengine/5.6.0+gitAUTOINC+643aa579fc_8252b18aa3-r0/sysroot-destdir/usr/share/qt5/examples/webengine/minimal/minimal'
DEBUG: runstrip: 'arm-poky-linux-gnueabi-strip' --remove-section=.comment --remove-section=.note '/home/yusuf/yocto-krogoth/poky/qt5Toolchain/tmp/work/cortexa7hf-neon-vfpv4-poky-linux-gnueabi/qtwebengine/5.6.0+gitAUTOINC+643aa579fc_8252b18aa3-r0/sysroot-destdir/usr/lib/qt5/libexec/QtWebEngineProcess'
DEBUG: runstrip: 'arm-poky-linux-gnueabi-strip' --remove-section=.comment --remove-section=.note '/home/yusuf/yocto-krogoth/poky/qt5Toolchain/tmp/work/cortexa7hf-neon-vfpv4-poky-linux-gnueabi/qtwebengine/5.6.0+gitAUTOINC+643aa579fc_8252b18aa3-r0/sysroot-destdir/usr/share/qt5/examples/webenginewidgets/minimal/minimal'
DEBUG: runstrip: 'arm-poky-linux-gnueabi-strip' --remove-section=.comment --remove-section=.note '/home/yusuf/yocto-krogoth/poky/qt5Toolchain/tmp/work/cortexa7hf-neon-vfpv4-poky-linux-gnueabi/qtwebengine/5.6.0+gitAUTOINC+643aa579fc_8252b18aa3-r0/sysroot-destdir/usr/share/qt5/examples/webenginewidgets/demobrowser/demobrowser'
DEBUG: runstrip: 'arm-poky-linux-gnueabi-strip' --remove-section=.comment --remove-section=.note --strip-unneeded '/home/yusuf/yocto-krogoth/poky/qt5Toolchain/tmp/work/cortexa7hf-neon-vfpv4-poky-linux-gnueabi/qtwebengine/5.6.0+gitAUTOINC+643aa579fc_8252b18aa3-r0/sysroot-destdir/usr/lib/libQt5WebEngineCore.so.5.6.1'
DEBUG: runstrip: 'arm-poky-linux-gnueabi-strip' --remove-section=.comment --remove-section=.note --strip-unneeded '/home/yusuf/yocto-krogoth/poky/qt5Toolchain/tmp/work/cortexa7hf-neon-vfpv4-poky-linux-gnueabi/qtwebengine/5.6.0+gitAUTOINC+643aa579fc_8252b18aa3-r0/sysroot-destdir/usr/lib/libQt5WebEngine.so.5.6.1'
DEBUG: runstrip: 'arm-poky-linux-gnueabi-strip' --remove-section=.comment --remove-section=.note '/home/yusuf/yocto-krogoth/poky/qt5Toolchain/tmp/work/cortexa7hf-neon-vfpv4-poky-linux-gnueabi/qtwebengine/5.6.0+gitAUTOINC+643aa579fc_8252b18aa3-r0/sysroot-destdir/usr/share/qt5/examples/webengine/quicknanobrowser/quicknanobrowser'
DEBUG: runstrip: 'arm-poky-linux-gnueabi-strip' --remove-section=.comment --remove-section=.note --strip-unneeded '/home/yusuf/yocto-krogoth/poky/qt5Toolchain/tmp/work/cortexa7hf-neon-vfpv4-poky-linux-gnueabi/qtwebengine/5.6.0+gitAUTOINC+643aa579fc_8252b18aa3-r0/sysroot-destdir/usr/lib/qt5/qml/QtWebEngine/libqtwebengineplugin.so'
DEBUG: Python function sysroot_strip finished
DEBUG: Python function do_populate_sysroot finished
DEBUG: Executing python function do_qa_staging
NOTE: QA checking staging
ERROR: QA Issue: Qt5WebEngineCore.pc failed sanity test (tmpdir) in path /home/yusuf/yocto-krogoth/poky/qt5Toolchain/tmp/work/cortexa7hf-neon-vfpv4-poky-linux-gnueabi/qtwebengine/5.6.0+gitAUTOINC+643aa579fc_8252b18aa3-r0/sysroot-destdir/usr/lib/pkgconfig [pkgconfig]
ERROR: QA staging was broken by the package built above
DEBUG: Python function do_qa_staging finished
ERROR: Function failed: do_qa_staging
What is the cause of this problem, and how do I fix it?
To be exact, the error occurs while executing the do_qa_staging() task.
A similar issue was raised on the OpenEmbedded list in March: [oe] [meta-qt5][PATCH] qtbase: fix up pkgconfig replacements. There it was responded that:
I tested with qtwebengine
PV="5.5.99+5.6.0-rc+gitAUTOINC+3f02c25de4_779a2388fc" and it is
working.
The OP then wrote that he simply removed the meta-luneui layer (which can be done by changing the value of the BBLAYERS variable in the bblayers.conf file in your build/conf/ directory).
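A minimal sketch of what that looks like (the layer paths below are only illustrative; keep whatever other layers your build actually needs):
# conf/bblayers.conf -- remove the unwanted layer's path from the list
BBLAYERS ?= " \
  /home/yusuf/yocto-krogoth/poky/meta \
  /home/yusuf/yocto-krogoth/poky/meta-poky \
  /home/yusuf/yocto-krogoth/poky/meta-qt5 \
  "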
This patch also seems to be a fix for this issue.
Since you are facing this issue, I suggest trying qtwebengine in version 5.5 and seeing what the result is. To try it, as quoted above, change the value of the PV variable in the qtwebengine_git.bb recipe.
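For example, a minimal sketch (in meta-qt5 the version prefix of PV is normally combined with git${SRCPV}, which expands to the gitAUTOINC+... part, so you would change the prefix and the matching SRCREVs rather than paste the fully expanded string; check your meta-qt5 checkout for the exact variable layout):
# qtwebengine_git.bb -- illustrative only
PV = "5.5.99+5.6.0-rc+git${SRCPV}"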
I have a playbook that checks whether the endpoint is registered to Spacewalk, using the stat module:
- name: "Check spacewalk registraton"
stat:
path: /usr/sbin/rhn_check
register: sw_registered
- debug:
msg: "{{ sw_registered }}"
Output is:
TASK [Check spacewalk registraton] *********************************************
ok: [hostname]
TASK [debug] *******************************************************************
ok: [hostname] =>
msg:
changed: false
failed: false
stat:
atime: 1670244246.6493175
attr_flags: e
attributes:
- extents
block_size: 4096
blocks: 32
charset: us-ascii
checksum: 7b22e2e756706ef1b81e50cda7c41005e15441d7
ctime: 1623819058.4283004
dev: 64768
device_type: 0
executable: true
exists: true
gid: 0
gr_name: root
inode: 143991
isblk: false
ischr: false
isdir: false
isfifo: false
isgid: false
islnk: false
isreg: true
issock: false
isuid: false
mimetype: text/x-python
mode: '0755'
mtime: 1536233638.0
nlink: 1
path: /usr/sbin/rhn_check
pw_name: root
readable: true
rgrp: true
roth: true
rusr: true
size: 15291
uid: 0
version: '290956743'
wgrp: false
woth: false
writeable: true
wusr: true
xgrp: true
xoth: true
xusr: true
So sw_registered.stat.exists has a value of true.
Further on in my role there are tasks based on this variable:
- name: "Yum update for RHEL6 and above using RedHat Satellite"
yum:
name: '*'
state: latest
exclude: rhn-client-tools
when: (ansible_distribution_major_version >= "6") and (sw_registered.stat.exists is not defined and sw_registered.stat.exists is false)
Output from that task is
TASK [QL-patching : Yum update for RHEL6 and above using RedHat Satellite] *****
skipping: [hostname]
I would expect that task to be skipped but the next task is:
- name: "Yum update for RHEL6 and above using spacewalk"
yum:
name: '*'
state: latest
disable_gpg_check: yes
when: (ansible_distribution_major_version >= "6") and (sw_registered.stat.exists is defined and sw_registered.stat.exists is true )
Output from that task is:
TASK [QL-patching : Yum update for RHEL6 and above using spacewalk] ************
skipping: [hostname]
I expect this task to be executed and not skipped. What am I missing here?
Based on the comment of Zeitounator, you may have a look at the following minimal example
---
- hosts: localhost
  become: false
  gather_facts: false

  tasks:

    - name: Test file
      stat:
        path: "/home/{{ ansible_user }}/test.file"
      register: result

    - name: Show result
      debug:
        msg: "{{ result.stat.exists }}"
resulting in an output of
TASK [Show result] ******
ok: [localhost] =>
msg: false
TASK [Show result] ******
ok: [localhost] =>
msg: true
depending on whether or not the file to test exists.
The key result.stat.exists will be defined in both cases if the stat test task executed successfully. This is because of the Return Values of the stat module. Therefore the Conditional task based on registered variables could be simplified to something like
- name: Show result
  debug:
    msg: "File exists."
  when: result.stat.exists
resulting in an output of
TASK [Show result] ******
ok: [localhost] =>
msg: File exists.
if the file is available, or skipped if not.
You may also consider to Provide default values, as also mentioned, to catch corner cases like a task that previously failed because of insufficient access rights, or a task that did not run because of Check mode. In such cases the result set could look like the output below (a guarded variant is sketched after it)
TASK [Test file] ***************************
fatal: [localhost]: FAILED! => changed=false
msg: Permission denied
...ignoring
TASK [Show result] *************************
ok: [localhost] =>
msg:
changed: false
failed: true
msg: Permission denied
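One way to guard against those corner cases is to combine ignore_errors on the stat task with a default value in the condition; a minimal sketch (the task names and the path are made up for illustration):
- name: Test file
  stat:
    path: /tmp/test.file
  register: result
  ignore_errors: true

- name: Show result
  debug:
    msg: "File exists."
  when: result.stat.exists | default(false)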
I'm trying to learn Nextflow, but it's not going very well. I started with the tutorial on this site: https://www.nextflow.io/docs/latest/getstarted.html (I'm the one who installed Nextflow).
I copied this script:
#!/usr/bin/env nextflow

params.str = 'Hello world!'

process splitLetters {

    output:
    file 'chunk_*' into letters

    """
    printf '${params.str}' | split -b 6 - chunk_
    """
}


process convertToUpper {

    input:
    file x from letters.flatten()

    output:
    stdout result

    """
    cat $x | tr '[a-z]' '[A-Z]'
    """
}

result.view { it.trim() }
But when I run it (nextflow run tutorial.nf), I get this in the terminal:
N E X T F L O W ~ version 22.03.1-edge
Launching `tutorial.nf` [intergalactic_waddington] DSL2 - revision: be42f295f4
No such variable: result
-- Check script 'tutorial.nf' at line: 29 or see '.nextflow.log' file for more details
And in the log file I have this :
avr.-20 14:14:12.319 [main] DEBUG nextflow.cli.Launcher - $> nextflow run tutorial.nf
avr.-20 14:14:12.375 [main] INFO nextflow.cli.CmdRun - N E X T F L O W ~ version 22.03.1-edge
avr.-20 14:14:12.466 [main] INFO nextflow.cli.CmdRun - Launching `tutorial.nf` [intergalactic_waddington] DSL2 - revision: be42f295f4
avr.-20 14:14:12.481 [main] DEBUG nextflow.plugin.PluginsFacade - Setting up plugin manager > mode=prod; plugins-dir=/home/user/.nextflow/plugins; core-plugins: nf-amazon#1.6.0,nf-azure#0.13.0,nf-console#1.0.3,nf-ga4gh#1.0.3,nf-google#1.1.4,nf-sqldb#0.3.0,nf-tower#1.4.0
avr.-20 14:14:12.483 [main] DEBUG nextflow.plugin.PluginsFacade - Plugins default=[]
avr.-20 14:14:12.494 [main] INFO org.pf4j.DefaultPluginStatusProvider - Enabled plugins: []
avr.-20 14:14:12.495 [main] INFO org.pf4j.DefaultPluginStatusProvider - Disabled plugins: []
avr.-20 14:14:12.501 [main] INFO org.pf4j.DefaultPluginManager - PF4J version 3.4.1 in 'deployment' mode
avr.-20 14:14:12.515 [main] INFO org.pf4j.AbstractPluginManager - No plugins
avr.-20 14:14:12.571 [main] DEBUG nextflow.Session - Session uuid: 67344021-bff5-4131-9c07-e101756fb5ea
avr.-20 14:14:12.571 [main] DEBUG nextflow.Session - Run name: intergalactic_waddington
avr.-20 14:14:12.573 [main] DEBUG nextflow.Session - Executor pool size: 8
avr.-20 14:14:12.604 [main] DEBUG nextflow.cli.CmdRun -
Version: 22.03.1-edge build 5695
avr.-20 14:14:12.629 [main] DEBUG nextflow.Session - Work-dir: /home/user/Documents/formations/nextflow/testScript/work [ext2/ext3]
avr.-20 14:14:12.629 [main] DEBUG nextflow.Session - Script base path does not exist or is not a directory: /home/user/Documents/formations/nextflow/testScript/bin
avr.-20 14:14:12.637 [main] DEBUG nextflow.executor.ExecutorFactory - Extension executors providers=[]
avr.-20 14:14:12.648 [main] DEBUG nextflow.Session - Observer factory: DefaultObserverFactory
avr.-20 14:14:12.669 [main] DEBUG nextflow.cache.CacheFactory - Using Nextflow cache factory: nextflow.cache.DefaultCacheFactory
avr.-20 14:14:12.678 [main] DEBUG nextflow.util.CustomThreadPool - Creating default thread pool > poolSize: 9; maxThreads: 1000
avr.-20 14:14:12.741 [main] DEBUG nextflow.Session - Session start invoked
avr.-20 14:14:13.423 [main] DEBUG nextflow.script.ScriptRunner - > Launching execution
avr.-20 14:14:13.446 [main] DEBUG nextflow.Session - Session aborted -- Cause: No such property: result for class: Script_6634cd79
avr.-20 14:14:13.463 [main] ERROR nextflow.cli.Launcher - #unknown
groovy.lang.MissingPropertyException: No such property: result for class: Script_6634cd79
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.unwrap(ScriptBytecodeAdapter.java:65)
at org.codehaus.groovy.runtime.callsite.PogoGetPropertySite.getProperty(PogoGetPropertySite.java:51)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callGroovyObjectGetProperty(AbstractCallSite.java:341)
at Script_6634cd79.runScript(Script_6634cd79:29)
at nextflow.script.BaseScript.runDsl2(BaseScript.groovy:170)
at nextflow.script.BaseScript.run(BaseScript.groovy:203)
at nextflow.script.ScriptParser.runScript(ScriptParser.groovy:220)
at nextflow.script.ScriptRunner.run(ScriptRunner.groovy:212)
at nextflow.script.ScriptRunner.execute(ScriptRunner.groovy:120)
at nextflow.cli.CmdRun.run(CmdRun.groovy:334)
at nextflow.cli.Launcher.run(Launcher.groovy:480)
at nextflow.cli.Launcher.main(Launcher.groovy:639)
What should I do?
Thanks a lot for your help.
Nextflow includes a new language extension called DSL2, which became the default syntax in version 22.03.0-edge. However, it's possible to override this behavior by either adding nextflow.enable.dsl=1 to the top of your script, or by setting the -dsl1 option when you run your script:
nextflow run tutorial.nf -dsl1
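or, equivalently, a one-line sketch of the in-script variant mentioned above, placed at the very top of tutorial.nf:
nextflow.enable.dsl=1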
Alternatively, roll back to the latest release (as opposed to an 'edge' pre-release). It's possible to specify the Nextflow version using the NXF_VER environment variable:
NXF_VER=21.10.6 nextflow run tutorial.nf
I find DSL2 drastically simplifies the writing of complex workflows and would highly recommend getting started with it. Unfortunately, the documentation is lagging a bit, but porting scripts over is relatively straightforward once you get the hang of things:
params.str = 'Hello world!'

process splitLetters {
    output:
    path 'chunk_*'

    """
    printf '${params.str}' | split -b 6 - chunk_
    """
}

process convertToUpper {
    input:
    path x

    output:
    stdout

    """
    cat $x | tr '[a-z]' '[A-Z]'
    """
}

workflow {
    splitLetters | flatten() | convertToUpper | view()
}
Results:
nextflow run tutorial.nf -dsl2
N E X T F L O W ~ version 21.10.6
Launching `tutorial.nf` [prickly_kilby] - revision: 0c6f835b9c
executor > local (3)
[b8/84a1de] process > splitLetters [100%] 1 of 1 ✔
[86/fd19ea] process > convertToUpper (2) [100%] 2 of 2 ✔
HELLO
WORLD!
I have set up a mail task in Ansible to send emails if yum update is marked as 'changed'.
Here is my current working code:
- name: Send mail alert if updated
  community.general.mail:
    to:
      - 'recipient1'
    cc:
      - 'recipient2'
    subject: Update Alert
    body: 'Ansible Tower Updates have been applied on the following system: {{ ansible_hostname }}'
    sender: "ansible.updates#domain.com"
  delegate_to: localhost
  when: yum_update.changed
This works great, however, every system that gets updated per host group sends a separate email. Last night for instance I had a group of 20 servers update and received 20 separate emails. I'm aware of why this happens, but my question is how would I script this to add all the systems to one email? Is that even possible or should I just alert that the group was updated and inform teams of what servers are in each group? (I'd prefer not to take the second option)
Edit 1:
I have added the code suggested and am now unable to receive any emails. Here's the error message:
"msg": "The conditional check '_changed|length > 0' failed. The error was: error while evaluating conditional (_changed|length > 0): {{ hostvars|dict2items| selectattr('value.yum_update.changed')| map(attribute='key')|list }}: 'ansible.vars.hostvars.HostVarsVars object' has no attribute 'yum_update'\n\nThe error appears to be in '/tmp/bwrap_1073_o8ibkgrl/awx_1073_0eojw5px/project/yum-update-ent_template_servers.yml': line 22, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: Send mail alert if updated\n ^ here\n",
I am also attaching my entire playbook for reference:
---
- name: Update enterprise template servers
  hosts: ent_template_servers
  tasks:

    - name: Update all packages
      yum:
        name: '*'
        state: latest
      register: yum_update

    - name: Reboot if needed
      import_tasks: /usr/share/ansible/tasks/reboot-if-needed-centos.yml

    - name: Kernel Cleanup
      import_tasks: /usr/share/ansible/tasks/kernel-cleanup.yml

    - debug:
        var: yum_update.changed

    - name: Send mail alert if updated
      community.general.mail:
        to:
          - 'email#domain.com'
        subject: Update Alert
        body: |-
          Updates have been applied on the following system(s):
          {{ _changed }}
        sender: "ansible.updates#domain.com"
      delegate_to: localhost
      run_once: true
      when: _changed|length > 0
      vars:
        _changed: "{{ hostvars|dict2items|
                      selectattr('yum_update.changed')|
                      map(attribute='key')|list }}"
...
Ansible version is: 2.9.27
Ansible Tower version is: 3.8.3
Thanks in advance!
For example, the mail task below
- debug:
    var: yum_update.changed

- community.general.mail:
    sender: ansible
    to: root
    subject: Update Alert
    body: |-
      Updates have been applied to the following system:
      {{ _changed }}
  delegate_to: localhost
  run_once: true
  when: _changed|length > 0
  vars:
    _changed: "{{ hostvars|dict2items|
                  selectattr('value.yum_update.changed')|
                  map(attribute='key')|list }}"
TASK [debug] ***************************************************************
ok: [host01] =>
yum_update.changed: true
ok: [host02] =>
yum_update.changed: false
ok: [host03] =>
yum_update.changed: true
TASK [community.general.mail] **********************************************
ok: [host01 -> localhost]
will send
From: ansible#domain.com
To: root#domain.com
Cc:
Subject: Update Alert
Date: Wed, 09 Feb 2022 16:55:47 +0100
X-Mailer: Ansible mail module
Updates have been applied to the following system:
['host01', 'host03']
Remove the condition below if you also want to receive empty lists
when: _changed|length > 0
Debug
'ansible.vars.hostvars.HostVarsVars object' has no attribute 'yum_update'
Q: "What I could try?"
A: Some of the hosts are missing the variable yum_update. You can test this:
- debug:
    msg: "{{ hostvars|dict2items|
             selectattr('value.yum_update.changed')|
             map(attribute='key')|list }}"
  run_once: true
Either make sure that the variable is defined on all hosts or use json_query. This filter tolerates missing attributes, e.g.
- debug:
    msg: "{{ hostvars|dict2items|
             json_query('[?value.yum_update.changed].key') }}"
  run_once: true
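If you prefer to stay with selectattr, another option (a sketch, not tested against your inventory) is to first filter out the hosts where the variable is missing, using the defined test:
- debug:
    msg: "{{ hostvars|dict2items|
             selectattr('value.yum_update', 'defined')|
             selectattr('value.yum_update.changed')|
             map(attribute='key')|list }}"
  run_once: true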
Q: "The 'debug' task prior to the 'mail' task gives me the same output. But it fails when the 'mail' task is executed."
A: Minimize the code and isolate the problem. For example, in the code below you can see
The variable yum_update.changed is missing on host03
The filter json_query ignores this
The filter selectattr fails
- debug:
    var: yum_update.changed

- debug:
    msg: "{{ hostvars|dict2items|
             json_query('[?value.yum_update.changed].key') }}"
  run_once: true

- debug:
    msg: "{{ hostvars|dict2items|
             selectattr('value.yum_update.changed')|
             map(attribute='key')|list }}"
  run_once: true
gives
TASK [debug] **************************************************
ok: [host01] =>
yum_update.changed: true
ok: [host02] =>
yum_update.changed: false
ok: [host03] =>
yum_update.changed: VARIABLE IS NOT DEFINED!
TASK [debug] **************************************************
ok: [host01] =>
msg:
- host01
TASK [debug] **************************************************
fatal: [host01]: FAILED! =>
msg: |-
The task includes an option with an undefined variable.
The error was: 'ansible.vars.hostvars.HostVarsVars object'
has no attribute 'yum_update'
Both filters give the same results if all variables are present
TASK [debug] **************************************************
ok: [host01] =>
yum_update.changed: true
ok: [host02] =>
yum_update.changed: false
ok: [host03] =>
yum_update.changed: true
TASK [debug] **************************************************
ok: [host01] =>
msg:
- host01
- host03
TASK [debug] **************************************************
ok: [host01] =>
msg:
- host01
- host03
We're trying to use the default broadcast-via-WebRTC webpage that comes with the Red5 Pro server:
https://hostname-here/live/broadcast.jsp?host=hostname-here
Client logs:
[live]:: Publish options:
{
  "protocol": "wss",
  "port": 8083,
  "app": "live",
  "streamMode": "live",
  "mediaElementId": "red5pro-publisher",
  "iceServers": [
    {
      "urls": "stun:stun2.l.google.com:19302"
    }
  ],
  "iceTransport": "udp",
  "bandwidth": {
    "audio": 56,
    "video": 750
  },
  "mediaConstraints": {
    "audio": true,
    "video": {
      "width": {
        "min": 640,
        "max": 640
      },
      "height": {
        "min": 480,
        "max": 480
      },
      "frameRate": {
        "min": 8,
        "max": 24
      }
    }
  },
  "host": "hostname-here",
  "streamName": "teststream2"
}
red5pro-sdk.min.js:formatted:5033 2019-04-14T18:16:58.931Z - [red5pro-sdk] debug: (RTCPublisher) [publish]
red5pro-sdk.min.js:formatted:5033 2019-04-14T18:16:58.931Z - [red5pro-sdk] debug: (R5ProPublisherSocket) [websocket:setup] wss://hostname-here:8083/live?id=teststream2.
red5pro-sdk.min.js:formatted:5033 2019-04-14T18:16:58.932Z - [red5pro-sdk] debug: (R5ProPublisherSocket) [teardown] >>
red5pro-sdk.min.js:formatted:5033 2019-04-14T18:16:58.932Z - [red5pro-sdk] debug: (R5ProPublisherSocket) [WebSocket(wss://hostname-here:8083/live?id=teststream2)] close() >>
red5pro-sdk.min.js:formatted:5033 2019-04-14T18:16:58.933Z - [red5pro-sdk] debug: (R5ProPublisherSocket) << [WebSocket(wss://hostname-here:8083/live?id=teststream2)] close()
red5pro-sdk.min.js:formatted:5033 2019-04-14T18:16:58.933Z - [red5pro-sdk] debug: (R5ProPublisherSocket) << [teardown]
red5pro-sdk.min.js:formatted:255 WebSocket connection to 'wss://hostname-here:8083/live?id=teststream2' failed: Error during WebSocket handshake: Unexpected response code: 400
createWebSocket # red5pro-sdk.min.js:formatted:255
t.create # red5pro-sdk.min.js:formatted:1830
value # red5pro-sdk.min.js:formatted:2196
value # red5pro-sdk.min.js:formatted:5680
(anonymous) # r5pro-publisher-failover.js:393
promisify # r5pro-publisher-failover.js:338
publish # r5pro-publisher-failover.js:377
(anonymous) # r5pro-publisher-failover.js:198
red5pro-sdk.min.js:formatted:5033 2019-04-14T18:17:01.727Z - [red5pro-sdk] warn: (R5ProPublisherSocket) [websocketerror]: Error from WebSocket. error.
red5pro-sdk.min.js:formatted:5033 2019-04-14T18:17:01.728Z - [red5pro-sdk] debug: ([window:orientation]) [removeOrientationChangeHandler]:: onorientationchange removed.
3r5pro-publisher-failover.js:311 [Red5ProPublisher] Connect.Failure.
r5pro-publisher-failover.js:405 [live]:: Error in publish request: [object Event]
(anonymous) # r5pro-publisher-failover.js:405
Promise.catch (async)
(anonymous) # r5pro-publisher-failover.js:403
promisify # r5pro-publisher-failover.js:338
publish # r5pro-publisher-failover.js:377
(anonymous) # r5pro-publisher-failover.js:198
red5pro-sdk.min.js:formatted:5033 2019-04-14T18:17:01.731Z - [red5pro-sdk] warn: (R5ProPublisherSocket) [websocketclose]: 1006
red5pro-sdk.min.js:formatted:5033 2019-04-14T18:17:01.732Z - [red5pro-sdk] debug: (RTCPublisher) RTCPublisher
red5pro-sdk.min.js:formatted:5033 2019-04-14T18:17:01.732Z - [red5pro-sdk] debug: (R5ProPublishPeer) [teardown]
Server logs:
[WARN] [NioProcessor-20] org.red5.net.websocket.WebSocketConnection - Closing connection with status: 1002
[WARN] [NioProcessor-20] org.red5.net.websocket.codec.WebSocketDecoder - Handshake failed
org.red5.net.websocket.WebSocketException: Handshake failed, path not enabled
at org.red5.net.websocket.codec.WebSocketDecoder.parseClientRequest(WebSocketDecoder.java:302)
at org.red5.net.websocket.codec.WebSocketDecoder.doHandShake(WebSocketDecoder.java:186)
at org.red5.net.websocket.codec.WebSocketDecoder.doDecode(WebSocketDecoder.java:98)
at org.apache.mina.filter.codec.CumulativeProtocolDecoder.decode(CumulativeProtocolDecoder.java:180)
at org.apache.mina.filter.codec.ProtocolCodecFilter.messageReceived(ProtocolCodecFilter.java:253)
at org.apache.mina.core.filterchain.DefaultIoFilterChain.callNextMessageReceived(DefaultIoFilterChain.java:641)
at org.apache.mina.core.filterchain.DefaultIoFilterChain.access$1300(DefaultIoFilterChain.java:48)
at org.apache.mina.core.filterchain.DefaultIoFilterChain$EntryImpl$1.messageReceived(DefaultIoFilterChain.java:1114)
at org.apache.mina.filter.ssl.SslHandler.flushScheduledEvents(SslHandler.java:323)
at org.apache.mina.filter.ssl.SslFilter.messageReceived(SslFilter.java:565)
at org.apache.mina.core.filterchain.DefaultIoFilterChain.callNextMessageReceived(DefaultIoFilterChain.java:641)
at org.apache.mina.core.filterchain.DefaultIoFilterChain.access$1300(DefaultIoFilterChain.java:48)
at org.apache.mina.core.filterchain.DefaultIoFilterChain$EntryImpl$1.messageReceived(DefaultIoFilterChain.java:1114)
at org.apache.mina.core.filterchain.IoFilterAdapter.messageReceived(IoFilterAdapter.java:121)
at org.apache.mina.core.filterchain.DefaultIoFilterChain.callNextMessageReceived(DefaultIoFilterChain.java:641)
at org.apache.mina.core.filterchain.DefaultIoFilterChain.fireMessageReceived(DefaultIoFilterChain.java:634)
at org.apache.mina.core.polling.AbstractPollingIoProcessor.read(AbstractPollingIoProcessor.java:539)
at org.apache.mina.core.polling.AbstractPollingIoProcessor.access$1200(AbstractPollingIoProcessor.java:68)
at org.apache.mina.core.polling.AbstractPollingIoProcessor$Processor.process(AbstractPollingIoProcessor.java:1242)
at org.apache.mina.core.polling.AbstractPollingIoProcessor$Processor.process(AbstractPollingIoProcessor.java:1231)
at org.apache.mina.core.polling.AbstractPollingIoProcessor$Processor.run(AbstractPollingIoProcessor.java:683)
at org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:64)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
The Red5 Pro server has a valid SSL certificate and a public IP. The version is 5.2.0, and ports 8081 and 8083 are open.
We have the libraries mentioned in this answer installed.
It turned out that the reason for the issue was that the WebRTC plugin was disabled.
I'm not sure why I did not figure that out right away. Maybe I was confused by the messages in the log about the WebSocket plugin and by the WebSocket error:
[INFO] [main] com.red5pro.license.LicenseManager - addListener: Red5Pro-Clustering
[INFO] [main] org.red5.server.plugin.PluginLauncher - Loaded plugin: com.red5pro.cluster.plugin.ClusterPlugin
[INFO] [main] com.red5pro.license.LicenseManager - addListener: Red5Pro-RTSP-Plugin
[INFO] [main] org.red5.server.plugin.PluginLauncher - Loaded plugin: com.red5pro.rtsp.plugin.RTSPPlugin
[INFO] [main] com.red5pro.activation.ProPluginator - Version - server: RED5/1,0,10,0 pro: 5.2.0.b271-release
[INFO] [main] com.red5pro.activation.ProPluginator - Operating system: Linux version: 4.15.0-1023-azure
[INFO] [main] com.red5pro.activation.ProPluginator - Processor arch: amd64 available: 2
[INFO] [main] com.red5pro.activation.ProPluginator - Memory - free: 190632384 total: 251002880 max: 1626734592
[INFO] [main] com.red5pro.activation.ProPluginator - Starting Red5 Professional, pluginator version 5.2.0.271-RELEASE - b22d2d1 (on: 10.12.2018 09:38)
[INFO] [main] com.red5pro.override.internal.ProvisionResolverService - setting MBR spliterator ~
[INFO] [main] com.red5pro.override.internal.ProvisionResolverService - inspecting prewire
[INFO] [main] com.red5pro.license.LicenseManager - addListener: Red5Pro-SecondScreen-Websockets
[INFO] [main] com.red5pro.activation.ProPluginator - Red5 Professional Activating
[INFO] [main] com.red5pro.activation.ProPluginator - Plugination activation waiting for server to settle...
[INFO] [main] org.red5.server.plugin.PluginLauncher - Loaded plugin: com.red5pro.activation.ProPluginator
[INFO] [main] com.red5pro.license.LicenseManager - addListener: Red5Pro-Cloudstorage
[INFO] [main] org.red5.server.plugin.PluginLauncher - Loaded plugin: com.red5pro.media.storage.CloudstoragePlugin
[INFO] [main] org.red5.server.plugin.PluginLauncher - Loaded plugin: org.red5.net.websocket.WebSocketPlugin
[INFO] [main] com.red5pro.license.LicenseManager - addListener: Red5Pro-AutoScale
[INFO] [main] org.red5.server.plugin.PluginLauncher - Loaded plugin: com.red5pro.clustering.autoscale.AutoScale
[INFO] [main] org.red5.net.websocket.WebSocketTransport - WebSocket (wss) will be bound to [0.0.0.0:8083]
Maybe that helps somebody.
cs_root_host is set up right:
grep root_host /var/lib/riak-cs/generated.configs/app.2015.07.09.13.59.07.config
{cs_root_host,"s3.example.com"},
But when I upload a file:
s3cmd put test.jpg s3://images --acl-public
I get in return:
Public URL of the object is: http://images.s3.amazonaws.com/test.jpg
Where is the issue?
Added:
Here is the output; everything looks fine except the last line:
(example.com is just a replacement for the real domain, which I don't want to make public)
s3cmd -d -c .s3cfg put newfile.jpg s3://images --acl-public
DEBUG: ConfigParser: Reading file '.s3cfg'
DEBUG: ConfigParser: access_key->YD...17_chars...U
DEBUG: ConfigParser: bucket_location->RU
DEBUG: ConfigParser: cloudfront_host->cloudfront.amazonaws.com
DEBUG: ConfigParser: cloudfront_resource->/2010-07-15/distribution
DEBUG: ConfigParser: default_mime_type->binary/octet-stream
DEBUG: ConfigParser: delete_removed->False
DEBUG: ConfigParser: dry_run->False
DEBUG: ConfigParser: encoding->UTF-8
DEBUG: ConfigParser: encrypt->False
DEBUG: ConfigParser: follow_symlinks->False
DEBUG: ConfigParser: force->False
DEBUG: ConfigParser: get_continue->False
DEBUG: ConfigParser: gpg_command->/usr/bin/gpg
DEBUG: ConfigParser: gpg_decrypt->%(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
DEBUG: ConfigParser: gpg_encrypt->%(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
DEBUG: ConfigParser: gpg_passphrase->...-3_chars...
DEBUG: ConfigParser: guess_mime_type->True
DEBUG: ConfigParser: host_base->s3.example.com
DEBUG: ConfigParser: host_bucket->%(bucket)s.s3.example.com
DEBUG: ConfigParser: human_readable_sizes->False
DEBUG: ConfigParser: list_md5->False
DEBUG: ConfigParser: log_target_prefix->
DEBUG: ConfigParser: preserve_attrs->True
DEBUG: ConfigParser: progress_meter->True
DEBUG: ConfigParser: proxy_host->127.0.0.1
DEBUG: ConfigParser: proxy_port->80
DEBUG: ConfigParser: recursive->False
DEBUG: ConfigParser: recv_chunk->4096
DEBUG: ConfigParser: reduced_redundancy->False
DEBUG: ConfigParser: secret_key->kG...37_chars...=
DEBUG: ConfigParser: send_chunk->4096
DEBUG: ConfigParser: simpledb_host->sdb.amazonaws.com
DEBUG: ConfigParser: skip_existing->False
DEBUG: ConfigParser: socket_timeout->10
DEBUG: ConfigParser: urlencoding_mode->normal
DEBUG: ConfigParser: use_https->False
DEBUG: ConfigParser: verbosity->WARNING
DEBUG: Updating Config.Config encoding -> UTF-8
DEBUG: Updating Config.Config follow_symlinks -> False
DEBUG: Updating Config.Config verbosity -> 10
DEBUG: Unicodising 'put' using UTF-8
DEBUG: Unicodising 'newfile.jpg' using UTF-8
DEBUG: Unicodising 's3://images' using UTF-8
DEBUG: Command: put
INFO: Compiling list of local files...
DEBUG: DeUnicodising u'' using UTF-8
DEBUG: DeUnicodising u'newfile.jpg' using UTF-8
DEBUG: Unicodising 'newfile.jpg' using UTF-8
DEBUG: Unicodising 'newfile.jpg' using UTF-8
INFO: Applying --exclude/--include
DEBUG: CHECK: newfile.jpg
DEBUG: PASS: newfile.jpg
INFO: Summary: 1 local files to upload
DEBUG: Content-Type set to 'image/jpeg'
DEBUG: String 'newfile.jpg' encoded to 'newfile.jpg'
DEBUG: SignHeaders: 'PUT\n\nimage/jpeg\n\nx-amz-acl:public-read\nx-amz-date:Fri, 10 Jul 2015 09:55:37 +0000\n/images/newfile.jpg'
DEBUG: CreateRequest: resource[uri]=/newfile.jpg
DEBUG: Unicodising 'newfile.jpg' using UTF-8
DEBUG: SignHeaders: 'PUT\n\nimage/jpeg\n\nx-amz-acl:public-read\nx-amz-date:Fri, 10 Jul 2015 09:55:37 +0000\n/images/newfile.jpg'
newfile.jpg -> s3://images/newfile.jpg [1 of 1]
DEBUG: get_hostname(images): images.s3.example.com
DEBUG: format_uri(): http://images.s3.example.com/newfile.jpg
32600 of 32600 100% in 0s 14.49 MB/sDEBUG: Response: {'status': 200, 'headers': {'content-length': '0', 'server': 'nginx', 'connection': 'keep-alive', 'etag': '"89e39f454c69a1ce1fadec3a222fc292"', 'date': 'Fri, 10 Jul 2015 09:55:37 GMT', 'content-type': 'text/plain'}, 'reason': 'OK', 'data': '', 'size': 32600}
32600 of 32600 100% in 0s 391.54 kB/s done
DEBUG: MD5 sums: computed=89e39f454c69a1ce1fadec3a222fc292, received="89e39f454c69a1ce1fadec3a222fc292"
Public URL of the object is: http://images.s3.amazonaws.com/newfile.jpg
This is not a Riak CS issue. s3cmd itself produces the public URL string and prints it.
For my environment, with s3cmd built from the master branch at commit
7bdefc81823699069706ea3680bfa65ec8ad3db5 (just fetched today, 2015-07-14),
it shows the (seemingly) correct URL.
% ~/g/s3cmd/build/scripts-2.7/s3cmd -c .s3cfg.15018.alice put rebar.config -P s3://test/a
rebar.config -> s3://test/a [1 of 1]
2791 of 2791 100% in 0s 196.88 kB/s done
Public URL of the object is: http://test.s3.example.com/a
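If you want to try the same thing, a rough sketch of building s3cmd from git (assuming git and Python 2 are available; the build path may differ on your machine):
git clone https://github.com/s3tools/s3cmd.git
cd s3cmd
python setup.py build
# run the freshly built script against your existing config
./build/scripts-2.7/s3cmd -c ~/.s3cfg put test.jpg s3://images --acl-public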
From the source code of s3cmd, it seems it uses the host_bucket or host_base
configuration depending on the bucket name (and possibly other settings).
Some other details on my environment:
s3cmd configuration: host_base = s3.example.com and host_bucket = %(bucket)s.s3.example.com
Server is Riak CS, develop branch (commit 1f954aaae45429923f65fdad40c7916a55ab79f3)
Riak CS configuration: cs_root_host = s3.example.com