Why does the dbt macro execute before the pre_hook statement in an incremental model? - dbt

Does anyone know why the pre_hook statement gets executed after the macro append_placekeys is called?
I am creating a temporary table in the pre_hook configuration that the macro is supposed to read from. However, the dbt logs show the macro executing before the temp table is created.
{{
    config(
        materialized='incremental',
        unique_key='DX_ID',
        pre_hook=[
            """
            CREATE OR REPLACE TEMP TABLE DXH_STORE_PLCK_IN
            AS
            SELECT DISTINCT
                s.dx_id AS msa_id,
                s.nm AS store_name,
                s.city AS city,
                s.state AS state,
                s.address AS street_addr,
                s.zip5 AS zipcode,
                s.country AS COUNTRY_CODE,
                s.latitude AS LATITUDE,
                s.longitude AS LONGITUDE,
                COALESCE(p.last_updt_dt, TO_DATE('2023-01-01')) AS last_updt_dt
            FROM {{ ref('dx_store_attr') }} s
            {%- if is_incremental() %}
            FULL JOIN {{ this }} p USING (dx_id)
            WHERE
                (
                    p.placekey IS NULL
                    OR
                    DATEDIFF(day, p.last_updt_dt::DATE, CURRENT_TIMESTAMP()::DATE) > 90
                )
            ORDER BY last_updt_dt ASC, s.dx_id ASC
            LIMIT 3000
            {%- else %}
            LIMIT 50000
            {%- endif -%}
            ;
            """
        ],
        tags=["dx", "refresh:daily"]
    )
}}

--load dx_store_plck
{{
    append_placekeys(
        tbl_in='DXH_STORE_PLCK_IN',
        tbl_out='DXH_STORE_PLCK_TEMP'
    )
}}

SELECT
    t.msa_id AS dx_id,
    t.placekey AS placekey,
    t.error AS error,
    CURRENT_TIMESTAMP() AS last_updt_dt
FROM
    {{ database }}.{{ schema }}.DXH_STORE_PLCK_TEMP AS t
DBT Logs:
[0m20:44:55.939031 [debug] [Thread-1 ]: SQL status: SUCCESS 1 in 0.19 seconds
[0m20:44:55.940927 [debug] [Thread-1 ]: Using snowflake connection "model.XXX.dx_store_plck"
[0m20:44:55.942096 [debug] [Thread-1 ]: On model.XXX.dx_store_plck: /* {"app": "dbt", "dbt_version": "1.3.0", "profile_name": "XXX", "target_name": "dev", "node_id": "model.XXX.dx_store_plck"} */
CALL DB.SCHEMA_DEV.APPEND_PLACEKEYS(
'DXH_STORE_PLCK_IN',
(
SELECT object_construct(MAPPING.*)
FROM (
SELECT
'msa_id' AS PRIMARY_KEY,
'store_name' AS LOCATION_NAME,
'city' AS CITY,
'state' AS REGION,
'street_addr' AS STREET_ADDRESS,
'zipcode' AS POSTAL_CODE,
'LATITUDE' AS LATITUDE,
'LONGITUDE' AS LONGITUDE,
'false' AS strict_address_match,
'false' AS strict_name_match
) AS MAPPING
),
'DXH_STORE_PLCK_TEMP', 'temp', 'get_placekeys_v', 1000
);
[0m20:44:56.882161 [debug] [Thread-1 ]: SQL status: SUCCESS 1 in 0.94 seconds
[0m20:44:56.888032 [debug] [Thread-1 ]: Writing injected SQL for node "model.XXX.dx_store_plck"
[0m20:44:56.891930 [debug] [Thread-1 ]: finished collecting timing info
[0m20:44:56.893285 [debug] [Thread-1 ]: Began executing node model.XXX.dx_store_plck
[0m20:44:56.897175 [debug] [Thread-1 ]: Using snowflake connection "model.XXX.dx_store_plck"
[0m20:44:56.898207 [debug] [Thread-1 ]: On model.sdna_us_project.dx_store_plck: /* {"app": "dbt", "dbt_version": "1.3.0", "profile_name": "XXX", "target_name": "dev", "node_id": "model.XXX.dx_store_plck"} */
select current_warehouse() as warehouse
[0m20:44:57.061101 [debug] [Thread-1 ]: SQL status: SUCCESS 1 in 0.16 seconds
[0m20:44:57.063617 [debug] [Thread-1 ]: Using snowflake connection "model.XXX.dx_store_plck"
[0m20:44:57.064833 [debug] [Thread-1 ]: On model.XXX.dx_store_plck: /* {"app": "dbt", "dbt_version": "1.3.0", "profile_name": "XXX", "target_name": "dev", "node_id": "model.XXX.dx_store_plck"} */
use warehouse XXX_M;
[0m20:44:57.326229 [debug] [Thread-1 ]: SQL status: SUCCESS 1 in 0.26 seconds
[0m20:44:57.356896 [debug] [Thread-1 ]: Using snowflake connection "model.XXX.dx_store_plck"
[0m20:44:57.358488 [debug] [Thread-1 ]: On model.XXX.dx_store_plck: /* {"app": "dbt", "dbt_version": "1.3.0", "profile_name": "sdna_us", "target_name": "dev", "node_id": "model.XXX.dx_store_plck"} */
USE ROLE FUNCTIONAL_ROLE_XXX;
[0m20:44:57.508780 [debug] [Thread-1 ]: SQL status: SUCCESS 1 in 0.15 seconds
[0m20:44:57.510018 [debug] [Thread-1 ]: Using snowflake connection "model.XXX.dx_store_plck"
[0m20:44:57.510897 [debug] [Thread-1 ]: On model.XXX.dx_store_plck: /* {"app": "dbt", "dbt_version": "1.3.0", "profile_name": "sdna_us", "target_name": "dev", "node_id": "model.XXX.dx_store_plck"} */
alter session set query_tag ='DB.SCHEMA_DEV.DX_STORE_PLCK';
[0m20:44:57.648295 [debug] [Thread-1 ]: SQL status: SUCCESS 1 in 0.14 seconds
[0m20:44:57.652786 [debug] [Thread-1 ]: Using snowflake connection "model.XXX.dx_store_plck"
[0m20:44:57.653900 [debug] [Thread-1 ]: On model.sdna_us_project.dx_store_plck: /* {"app": "dbt", "dbt_version": "1.3.0", "profile_name": "XXX", "target_name": "dev", "node_id": "model.XXX.dx_store_plck"} */
CREATE OR REPLACE TEMP TABLE DXH_STORE_PLCK_IN
AS
SELECT DISTINCT
s.dx_id AS msa_id,
s.nm AS store_name,
s.city AS city,
s.state AS state,
s.address AS street_addr,
s.zip5 AS zipcode,
s.country AS COUNTRY_CODE,
s.latitude AS LATITUDE,
s.longitude AS LONGITUDE,
COALESCE(p.last_updt_dt, TO_DATE('2023-01-01')) AS last_updt_dt
FROM DB.SCHEMA_DEV.store_attr s
FULL JOIN DB.SCHEMA_DEV.dx_store_plck p USING (dx_id)
WHERE
(
p.placekey IS NULL
OR
DATEDIFF(day, p.last_updt_dt::DATE, CURRENT_TIMESTAMP()::DATE) > 90
)
ORDER BY last_updt_dt ASC, s.dx_id ASC
LIMIT 3000;
[0m20:44:59.140263 [debug] [Thread-1 ]: SQL status: SUCCESS 1 in 1.49 seconds

dbt takes two passes over your model file: first to parse it and build the DAG, and then to actually execute it.
In order to properly template (parse) your model, it has to execute any macros contained in it. (Among other reasons, ref is also a macro, and is critical for this step.) This means that your append_placekeys macro gets run twice: first when the model is parsed, and again when the model is executed.
You can prevent this by using the special Jinja variable called execute. execute is False during the first parsing pass, but True during the actual model execution. I would probably edit the macro itself to add an {% if execute %} block around the database call, but you could also just gate the whole macro call in the model file:
{{
    config(
        ...
    )
}}

--load dx_store_plck
{% if execute %}
{{
    append_placekeys(
        tbl_in='DXH_STORE_PLCK_IN',
        tbl_out='DXH_STORE_PLCK_TEMP'
    )
}}
{% endif %}

SELECT
    t.msa_id AS dx_id,
    t.placekey AS placekey,
    t.error AS error,
    CURRENT_TIMESTAMP() AS last_updt_dt
FROM
    {{ database }}.{{ schema }}.DXH_STORE_PLCK_TEMP AS t
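If you would rather gate the macro itself, here is a minimal sketch of that shape. The real body of append_placekeys is not shown in the post, so the procedure call below is only a stand-in for whatever the macro actually runs:

{% macro append_placekeys(tbl_in, tbl_out) %}
    {% if execute %}
        {# only reached at execution time, never while dbt is parsing the project #}
        {% set call_sql %}
            CALL APPEND_PLACEKEYS('{{ tbl_in }}', '{{ tbl_out }}')  -- stand-in arguments
        {% endset %}
        {% do run_query(call_sql) %}
    {% endif %}
{% endmacro %}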
All that aside, this code contains a bunch of dbt antipatterns. You should rarely, if ever, need to write DDL yourself when using dbt. Both your pre-hook and the macro are relatively fragile operations, and you should rewrite your model logic so you don't need them. Finally, models should pretty much always select from {{ ref('a_model') }} or from {{ source('a_source', 'a_source_tbl') }}, not directly from a database relation as you're doing here.

This was the resulting final model. Moving the append_placekeys call into a second pre_hook entry means it is only rendered at execution time, after the first hook has created the input table:
{{
    config(
        materialized='incremental',
        unique_key='DX_ID',
        pre_hook=[
            """
            CREATE OR REPLACE TABLE {{ database }}.{{ schema }}.dxh_store_plck_in
            AS
            SELECT DISTINCT
                s.dx_id AS msa_id,
                s.btg_store_nm AS store_name,
                s.btg_city AS city,
                s.btg_state AS state,
                s.btg_address AS street_addr,
                s.btg_zip5 AS zipcode,
                'US' AS COUNTRY_CODE,
                s.latitude AS LATITUDE,
                s.longitude AS LONGITUDE,
                COALESCE(p.last_updt_dt, TO_DATE('2023-01-01')) AS last_updt_dt
            FROM {{ ref('dx_store_attr') }} s
            FULL JOIN {{ this }} p USING (dx_id)
            {%- if is_incremental() %}
            WHERE
                (
                    p.placekey IS NULL
                    OR
                    DATEDIFF(day, p.last_updt_dt::DATE, CURRENT_TIMESTAMP()::DATE) > 90
                )
            ORDER BY last_updt_dt ASC, s.dx_id ASC
            LIMIT 5000
            {%- endif -%}
            ;
            """,
            """
            {{
                append_placekeys(
                    tbl_in=database|as_text ~ '.' ~ schema|as_text ~ '.dxh_store_plck_in',
                    tbl_out=database|as_text ~ '.' ~ schema|as_text ~ '.dxh_store_plck_temp'
                )
            }}
            """
        ],
        tags=["dx", "refresh:daily"]
    )
}}

--load dx_store_plck
SELECT
    t.msa_id AS dx_id,
    t.placekey AS placekey,
    t.error AS error,
    CURRENT_TIMESTAMP() AS last_updt_dt
FROM {{ database }}.{{ schema }}.dxh_store_plck_temp AS t
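For reference, a usage sketch of building this model with the dbt 1.3 CLI seen in the logs (commands only; profile and target selection are whatever the project already uses):

dbt run --select dx_store_plck
dbt run --select dx_store_plck --full-refresh   # rebuilds the table, skipping the is_incremental() branch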

Related

How can nextcloudcmd update changes on Nextcloud itself?

Nextcloud version 3.5.4-20220806.084713.fea986309-1.0~focal1
Using Qt 5.12.8, built against Qt 5.12.8
Using 'OpenSSL 1.1.1f 31 Mar 2020'
Running on Ubuntu 20.04.5 LTS, x86_64
The issue you are facing:
I am trying to access files and make changes to them using nextcloudcmd, but I don't see the changes on Nextcloud itself; the changes are made only locally.
The way I use nextcloudcmd:
rm -rf ~/Nextcloud && mkdir ~/Nextcloud
nextcloudcmd -u ***** -p '*****' -h ~/Nextcloud https://#ppp.woelkli.com
At this point it is able to sync files, but when I make changes to the files, the changes are not pushed to Nextcloud itself. How can I push changes to the cloud?
=> Log of nextcloudcmd -u *** -p '***' -h ~/Nextcloud https://#ppp.woelkli.com:
01-14 10:03:27:344 [ info nextcloud.sync.accessmanager ]: 2 "" "https://#ppp.woelkli.com/ocs/v1.php/cloud/capabilities?format=json" has X-Request-ID "8220a8ea-876b-4532-b866-66c57b01177c"
01-14 10:03:27:344 [ info nextcloud.sync.networkjob ]: OCC::JsonApiJob created for "https://ppp.woelkli.com" + "ocs/v1.php/cloud/capabilities" ""
01-14 10:03:28:939 [ info nextcloud.sync.networkjob.jsonapi ]: JsonApiJob of QUrl("https://#ppp.woelkli.com/ocs/v1.php/cloud/capabilities?format=json") FINISHED WITH STATUS "OK"
01-14 10:03:28:939 [ debug default ] [ main(int, char**)::<lambda ]: Server capabilities QJsonObject({"activity":{"apiv2":["filters","filters-api","previews","rich-strings"]},"bruteforce":{"delay":0},"core":{"pollinterval":60,"reference-api":true,"reference-regex":"(\\s|\\n|^)(https?:\\/\\/)((?:[-A-Z0-9+_]+\\.)+[-A-Z]+(?:\\/[-A-Z0-9+&##%?=~_|!:,.;()]*)*)(\\s|\\n|$)","webdav-root":"remote.php/webdav"},"dav":{"bulkupload":"1.0","chunking":"1.0"},"external":{"v1":["sites","device","groups","redirect"]},"files":{"bigfilechunking":true,"blacklisted_files":[".htaccess"],"comments":true,"directEditing":{"etag":"c748e8fc588b54fc5af38c4481a19d20","url":"https://ppp.woelkli.com/ocs/v2.php/apps/files/api/v1/directEditing"},"undelete":true,"versioning":true},"files_sharing":{"api_enabled":true,"default_permissions":1,"federation":{"expire_date":{"enabled":true},"expire_date_supported":{"enabled":true},"incoming":true,"outgoing":true},"group":{"enabled":false,"expire_date":{"enabled":true}},"group_sharing":false,"public":{"enabled":true,"expire_date":{"enabled":false},"expire_date_internal":{"enabled":false},"expire_date_remote":{"enabled":false},"multiple_links":true,"password":{"askForOptionalPassword":false,"enforced":true},"send_mail":false,"upload":false,"upload_files_drop":false},"resharing":true,"sharebymail":{"enabled":true,"expire_date":{"enabled":true,"enforced":false},"password":{"enabled":true,"enforced":true},"send_password_by_mail":false,"upload_files_drop":{"enabled":true}},"sharee":{"always_show_unique":true,"query_lookup_default":false},"user":{"expire_date":{"enabled":true},"send_mail":false}},"metadataAvailable":{"gps":["/image\\/.*/"],"size":["/image\\/.*/"]},"notes":{"api_version":["0.2","1.3"],"version":"4.6.0"},"notifications":{"admin-notifications":["ocs","cli"],"ocs-endpoints":["list","get","delete","delete-all","icons","rich-strings","action-web","user-status"],"push":["devices","object-data","delete"]},"ocm":{"apiVersion":"1.0-proposal1","enabled":true,"endPoint":"https://ppp.woelkli.com/ocm","resourceTypes":[{"name":"file","protocols":{"webdav":"/public.php/webdav/"},"shareTypes":["user","group"]}]},"password_policy":{"api":{"generate":"https://ppp.woelkli.com/ocs/v2.php/apps/password_policy/api/v1/generate","validate":"https://ppp.woelkli.com/ocs/v2.php/apps/password_policy/api/v1/validate"},"enforceNonCommonPassword":true,"enforceNumericCharacters":true,"enforceSpecialCharacters":false,"enforceUpperLowerCase":true,"minLength":12},"provisioning_api":{"AccountPropertyScopesFederatedEnabled":true,"AccountPropertyScopesPublishedEnabled":true,"AccountPropertyScopesVersion":2,"version":"1.15.0"},"spreed":{"config":{"attachments":{"allowed":true,"folder":"/Talk"},"call":{"enabled":true},"chat":{"max-length":32000,"read-privacy":0},"conversations":{"can-create":true},"previews":{"max-gif-size":3145728},"signaling":{"hello-v2-token-key":"-----BEGIN PUBLIC KEY-----\nMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEZjYYA01asJ+h/1+YflsnNfwXBSGa\nz+4vunVgFMhBDhSRZJv51H2KyJWTszJW+n1vgdp8gjfy4KNPhyjmzPO/tw==\n-----END PUBLIC 
KEY-----\n","session-ping-limit":200}},"features":["audio","video","chat-v2","conversation-v4","guest-signaling","empty-group-room","guest-display-names","multi-room-users","favorites","last-room-activity","no-ping","system-messages","delete-messages","mention-flag","in-call-flags","conversation-call-flags","notification-levels","invite-groups-and-mails","locked-one-to-one-rooms","read-only-rooms","listable-rooms","chat-read-marker","chat-unread","webinary-lobby","start-call-flag","chat-replies","circles-support","force-mute","sip-support","sip-support-nopin","chat-read-status","phonebook-search","raise-hand","room-description","rich-object-sharing","temp-user-avatar-api","geo-location-sharing","voice-message-sharing","signaling-v3","publishing-permissions","clear-history","direct-mention-flag","notification-calls","conversation-permissions","rich-object-list-media","rich-object-delete","unified-search","chat-permission","silent-send","silent-call","send-call-notification","talk-polls","message-expiration","reactions","chat-reference-id"],"version":"15.0.2"},"theming":{"background":"https://ppp.woelkli.com/apps/theming/image/background?v=50","background-default":false,"background-plain":false,"color":"#0082c9","color-element":"#0082c9","color-element-bright":"#0082c9","color-element-dark":"#0082c9","color-text":"#ffffff","favicon":"https://ppp.woelkli.com/apps/theming/image/logo?useSvg=1&v=50","logo":"https://ppp.woelkli.com/apps/theming/image/logo?useSvg=1&v=50","logoheader":"https://ppp.woelkli.com/apps/theming/image/logo?useSvg=1&v=50","name":"wölkli","slogan":"Secure Cloud Storage in Switzerland","url":"https://woelkli.com"},"user_status":{"enabled":true,"supports_emoji":true},"weather_status":{"enabled":true}})
01-14 10:03:28:939 [ info nextcloud.sync.accessmanager ]: 2 "" "https://#ppp.woelkli.com/ocs/v2.php/apps/user_status/api/v1/user_status?format=json" has X-Request-ID "3c414428-18d2-4112-b01a-3e3653621175"
01-14 10:03:28:940 [ info nextcloud.sync.networkjob ]: OCC::JsonApiJob created for "https://ppp.woelkli.com" + "/ocs/v2.php/apps/user_status/api/v1/user_status" "OCC::UserStatusConnector"
01-14 10:03:28:940 [ info nextcloud.sync.accessmanager ]: 2 "" "https://#ppp.woelkli.com/ocs/v1.php/cloud/user?format=json" has X-Request-ID "bbc3bf4c-74fc-430d-81d7-138234341795"
01-14 10:03:28:940 [ info nextcloud.sync.networkjob ]: OCC::JsonApiJob created for "https://ppp.woelkli.com" + "ocs/v1.php/cloud/user" ""
01-14 10:03:29:078 [ info nextcloud.sync.networkjob.jsonapi ]: JsonApiJob of QUrl("https://#ppp.woelkli.com/ocs/v2.php/apps/user_status/api/v1/user_status?format=json") FINISHED WITH STATUS "OK"
01-14 10:03:29:298 [ info nextcloud.sync.networkjob.jsonapi ]: JsonApiJob of QUrl("https://#ppp.woelkli.com/ocs/v1.php/cloud/user?format=json") FINISHED WITH STATUS "OK"
01-14 10:03:29:302 [ info nextcloud.sync.engine ]: There are 74930769920 bytes available at "/home/alper/Nextcloud/"
01-14 10:03:29:302 [ info nextcloud.sync.engine ]: New sync (no sync journal exists)
01-14 10:03:29:302 [ info nextcloud.sync.engine ]: "Using Qt 5.12.8 SSL library OpenSSL 1.1.1f 31 Mar 2020 on Ubuntu 20.04.5 LTS"
01-14 10:03:29:302 [ info nextcloud.sync.database ]: sqlite3 version "3.31.1"
01-14 10:03:29:302 [ info nextcloud.sync.database ]: sqlite3 locking_mode= "exclusive"
01-14 10:03:29:302 [ info nextcloud.sync.database ]: sqlite3 journal_mode= "wal"
01-14 10:03:29:305 [ info nextcloud.sync.database ]: sqlite3 synchronous= "NORMAL"
01-14 10:03:29:312 [ info nextcloud.sync.database ]: Forcing remote re-discovery by deleting folder Etags
01-14 10:03:29:312 [ info nextcloud.sync.engine ]: NOT Using Selective Sync
01-14 10:03:29:312 [ info nextcloud.sync.engine ]: #### Discovery start ####################################################
01-14 10:03:29:312 [ info nextcloud.sync.engine ]: Server ""
01-14 10:03:29:312 [ info sync.discovery ]: STARTING "" OCC::ProcessDirectoryJob::NormalQuery "" OCC::ProcessDirectoryJob::NormalQuery
01-14 10:03:29:313 [ info nextcloud.sync.accessmanager ]: 6 "PROPFIND" "https://#ppp.woelkli.com/remote.php/dav/files/alper.alimoglu#gmail.com/" has X-Request-ID "0bd895d8-8ec8-429d-9a37-4dd85a2c89a9"
01-14 10:03:29:313 [ info nextcloud.sync.networkjob ]: OCC::LsColJob created for "https://ppp.woelkli.com" + "/" "OCC::DiscoverySingleDirectoryJob"
01-14 10:03:29:462 [ info nextcloud.sync.networkjob.lscol ]: LSCOL of QUrl("https://#ppp.woelkli.com/remote.php/dav/files/alper.alimoglu#gmail.com/") FINISHED WITH STATUS "OK"
01-14 10:03:29:463 [ warning default ]: System exclude list file could not be read: "/home/alper/Nextcloud/Nextcloud/.sync-exclude.lst"
01-14 10:03:29:463 [ info sync.discovery ]: Processing "Nextcloud" | valid: false/false/true | mtime: 0/0/1673085410 | size: 0/0/0 | etag: ""//"63b941e2324d9" | checksum: ""//"" | perm: ""//"DNVCKR" | fileid: ""//"99398452ock6rp1oyjad" | inode: 0/0/ | type: CSyncEnums::ItemTypeSkip/CSyncEnums::ItemTypeSkip/CSyncEnums::ItemTypeDirectory | e2ee: false/false | e2eeMangledName: ""/"" | file lock: not locked//not locked
01-14 10:03:29:463 [ info sync.discovery ]: OCC::SyncFileItem::LockStatus::UnlockedItem "" "" OCC::SyncFileItem::LockOwnerType::UserLock "" 0 0
01-14 10:03:29:463 [ info sync.discovery ]: Discovered "Nextcloud" CSyncEnums::CSYNC_INSTRUCTION_NEW OCC::SyncFileItem::Down CSyncEnums::ItemTypeDirectory
01-14 10:03:29:463 [ warning default ]: System exclude list file could not be read: "/home/alper/Nextcloud/Notes/.sync-exclude.lst"
01-14 10:03:29:463 [ info sync.discovery ]: Processing "Notes" | valid: false/false/true | mtime: 0/0/1673021853 | size: 0/0/0 | etag: ""//"63b8499d22325" | checksum: ""//"" | perm: ""//"DNVCKR" | fileid: ""//"99376158ock6rp1oyjad" | inode: 0/0/ | type: CSyncEnums::ItemTypeSkip/CSyncEnums::ItemTypeSkip/CSyncEnums::ItemTypeDirectory | e2ee: false/false | e2eeMangledName: ""/"" | file lock: not locked//not locked
01-14 10:03:29:463 [ info sync.discovery ]: OCC::SyncFileItem::LockStatus::UnlockedItem "" "" OCC::SyncFileItem::LockOwnerType::UserLock "" 0 0
01-14 10:03:29:464 [ info sync.discovery ]: Discovered "Notes" CSyncEnums::CSYNC_INSTRUCTION_NEW OCC::SyncFileItem::Down CSyncEnums::ItemTypeDirectory
01-14 10:03:29:464 [ warning default ]: System exclude list file could not be read: "/home/alper/Nextcloud/Talk/.sync-exclude.lst"
01-14 10:03:29:464 [ info sync.discovery ]: Processing "Talk" | valid: false/false/true | mtime: 0/0/1673021842 | size: 0/0/0 | etag: ""//"63b84992517bb" | checksum: ""//"" | perm: ""//"DNVCKR" | fileid: ""//"99376154ock6rp1oyjad" | inode: 0/0/ | type: CSyncEnums::ItemTypeSkip/CSyncEnums::ItemTypeSkip/CSyncEnums::ItemTypeDirectory | e2ee: false/false | e2eeMangledName: ""/"" | file lock: not locked//not locked
01-14 10:03:29:464 [ info sync.discovery ]: OCC::SyncFileItem::LockStatus::UnlockedItem "" "" OCC::SyncFileItem::LockOwnerType::UserLock "" 0 0
01-14 10:03:29:464 [ info sync.discovery ]: Discovered "Talk" CSyncEnums::CSYNC_INSTRUCTION_NEW OCC::SyncFileItem::Down CSyncEnums::ItemTypeDirectory
01-14 10:03:29:464 [ warning default ]: System exclude list file could not be read: "/home/alper/Nextcloud/todo/.sync-exclude.lst"
01-14 10:03:29:464 [ info sync.discovery ]: Processing "todo" | valid: false/false/true | mtime: 0/0/1673667233 | size: 0/0/0 | etag: ""//"63c222a1476e3" | checksum: ""//"" | perm: ""//"DNVCKR" | fileid: ""//"99376056ock6rp1oyjad" | inode: 0/0/ | type: CSyncEnums::ItemTypeSkip/CSyncEnums::ItemTypeSkip/CSyncEnums::ItemTypeDirectory | e2ee: false/false | e2eeMangledName: ""/"" | file lock: not locked//not locked
01-14 10:03:29:464 [ info sync.discovery ]: OCC::SyncFileItem::LockStatus::UnlockedItem "" "" OCC::SyncFileItem::LockOwnerType::UserLock "" 0 0
01-14 10:03:29:464 [ info sync.discovery ]: Discovered "todo" CSyncEnums::CSYNC_INSTRUCTION_NEW OCC::SyncFileItem::Down CSyncEnums::ItemTypeDirectory
01-14 10:03:29:464 [ info sync.discovery ]: STARTING "Nextcloud" OCC::ProcessDirectoryJob::NormalQuery "Nextcloud" OCC::ProcessDirectoryJob::ParentDontExist
01-14 10:03:29:464 [ info nextcloud.sync.accessmanager ]: 6 "PROPFIND" "https://#ppp.woelkli.com/remote.php/dav/files/alper.alimoglu#gmail.com/Nextcloud" has X-Request-ID "63605bb7-f72e-4a11-9796-af9d40240e0a"
01-14 10:03:29:464 [ info nextcloud.sync.networkjob ]: OCC::LsColJob created for "https://ppp.woelkli.com" + "/Nextcloud" "OCC::DiscoverySingleDirectoryJob"
01-14 10:03:29:464 [ info sync.discovery ]: STARTING "Notes" OCC::ProcessDirectoryJob::NormalQuery "Notes" OCC::ProcessDirectoryJob::ParentDontExist
01-14 10:03:29:465 [ info nextcloud.sync.accessmanager ]: 6 "PROPFIND" "https://#ppp.woelkli.com/remote.php/dav/files/alper.alimoglu#gmail.com/Notes" has X-Request-ID "a35d715f-29d1-4b0e-aae5-57c953d8c743"
01-14 10:03:29:465 [ info nextcloud.sync.networkjob ]: OCC::LsColJob created for "https://ppp.woelkli.com" + "/Notes" "OCC::DiscoverySingleDirectoryJob"
01-14 10:03:29:465 [ info sync.discovery ]: STARTING "Talk" OCC::ProcessDirectoryJob::NormalQuery "Talk" OCC::ProcessDirectoryJob::ParentDontExist
01-14 10:03:29:465 [ info nextcloud.sync.accessmanager ]: 6 "PROPFIND" "https://#ppp.woelkli.com/remote.php/dav/files/alper.alimoglu#gmail.com/Talk" has X-Request-ID "186d0a04-d2de-418b-8948-f4a5794788a4"
01-14 10:03:29:465 [ info nextcloud.sync.networkjob ]: OCC::LsColJob created for "https://ppp.woelkli.com" + "/Talk" "OCC::DiscoverySingleDirectoryJob"
01-14 10:03:29:465 [ info sync.discovery ]: STARTING "todo" OCC::ProcessDirectoryJob::NormalQuery "todo" OCC::ProcessDirectoryJob::ParentDontExist
01-14 10:03:29:465 [ info nextcloud.sync.accessmanager ]: 6 "PROPFIND" "https://#ppp.woelkli.com/remote.php/dav/files/alper.alimoglu#gmail.com/todo" has X-Request-ID "64b7aede-d893-423f-bf43-2431d708059a"
01-14 10:03:29:466 [ info nextcloud.sync.networkjob ]: OCC::LsColJob created for "https://ppp.woelkli.com" + "/todo" "OCC::DiscoverySingleDirectoryJob"
01-14 10:03:29:613 [ info nextcloud.sync.networkjob.lscol ]: LSCOL of QUrl("https://#ppp.woelkli.com/remote.php/dav/files/alper.alimoglu#gmail.com/Notes") FINISHED WITH STATUS "OK"
01-14 10:03:29:652 [ info nextcloud.sync.networkjob.lscol ]: LSCOL of QUrl("https://#ppp.woelkli.com/remote.php/dav/files/alper.alimoglu#gmail.com/Nextcloud") FINISHED WITH STATUS "OK"
01-14 10:03:29:789 [ info nextcloud.sync.networkjob.lscol ]: LSCOL of QUrl("https://#ppp.woelkli.com/remote.php/dav/files/alper.alimoglu#gmail.com/todo") FINISHED WITH STATUS "OK"
01-14 10:03:29:790 [ info sync.discovery ]: Processing "todo/.todo-list.org_archive" | valid: false/false/true | mtime: 0/0/1673083763 | size: 0/0/347 | etag: ""//"12af94e75dbfc087dc6b8038a1a7e131" | checksum: ""//"" | perm: ""//"WDNVR" | fileid: ""//"99398133ock6rp1oyjad" | inode: 0/0/ | type: CSyncEnums::ItemTypeSkip/CSyncEnums::ItemTypeSkip/CSyncEnums::ItemTypeFile | e2ee: false/false | e2eeMangledName: ""/"" | file lock: not locked//not locked
01-14 10:03:29:790 [ info sync.discovery ]: OCC::SyncFileItem::LockStatus::UnlockedItem "" "" OCC::SyncFileItem::LockOwnerType::UserLock "" 0 0
01-14 10:03:29:790 [ info sync.discovery ]: Discovered "todo/.todo-list.org_archive" CSyncEnums::CSYNC_INSTRUCTION_NEW OCC::SyncFileItem::Down CSyncEnums::ItemTypeFile
01-14 10:03:29:790 [ info sync.discovery ]: Processing "todo/todo-list.org" | valid: false/false/true | mtime: 0/0/1673666892 | size: 0/0/148 | etag: ""//"d145e38a7f9b55a53706cc101a635779" | checksum: ""//"" | perm: ""//"WDNVR" | fileid: ""//"99382200ock6rp1oyjad" | inode: 0/0/ | type: CSyncEnums::ItemTypeSkip/CSyncEnums::ItemTypeSkip/CSyncEnums::ItemTypeFile | e2ee: false/false | e2eeMangledName: ""/"" | file lock: not locked//not locked
01-14 10:03:29:790 [ info sync.discovery ]: OCC::SyncFileItem::LockStatus::UnlockedItem "" "" OCC::SyncFileItem::LockOwnerType::UserLock "" 0 0
01-14 10:03:29:790 [ info sync.discovery ]: Discovered "todo/todo-list.org" CSyncEnums::CSYNC_INSTRUCTION_NEW OCC::SyncFileItem::Down CSyncEnums::ItemTypeFile
01-14 10:03:29:790 [ info sync.discovery ]: Processing "todo/zoo" | valid: false/false/true | mtime: 0/0/1673667228 | size: 0/0/0 | etag: ""//"d8a5a3511a0e045530ebd37209a2af32" | checksum: ""//"SHA1:da39a3ee5e6b4b0d3255bfef95601890afd80709" | perm: ""//"WDNVR" | fileid: ""//"99781299ock6rp1oyjad" | inode: 0/0/ | type: CSyncEnums::ItemTypeSkip/CSyncEnums::ItemTypeSkip/CSyncEnums::ItemTypeFile | e2ee: false/false | e2eeMangledName: ""/"" | file lock: not locked//not locked
01-14 10:03:29:790 [ info sync.discovery ]: OCC::SyncFileItem::LockStatus::UnlockedItem "" "" OCC::SyncFileItem::LockOwnerType::UserLock "" 0 0
01-14 10:03:29:790 [ info sync.discovery ]: Discovered "todo/zoo" CSyncEnums::CSYNC_INSTRUCTION_NEW OCC::SyncFileItem::Down CSyncEnums::ItemTypeFile
01-14 10:03:29:849 [ info nextcloud.sync.networkjob.lscol ]: LSCOL of QUrl("https://#ppp.woelkli.com/remote.php/dav/files/alper.alimoglu#gmail.com/Talk") FINISHED WITH STATUS "OK"
01-14 10:03:29:849 [ info nextcloud.sync.engine ]: #### Discovery end #################################################### 536 ms
01-14 10:03:29:849 [ info nextcloud.sync.engine ]: #### Reconcile (aboutToPropagate) #################################################### 536 ms
01-14 10:03:29:849 [ info nextcloud.sync.engine ]: #### Reconcile (aboutToPropagate OK) #################################################### 536 ms
01-14 10:03:29:850 [ info nextcloud.sync.engine ]: #### Post-Reconcile end #################################################### 537 ms
01-14 10:03:29:854 [ info nextcloud.sync.propagator.root.directory ]: scheduleSelfOrChild OCC::PropagatorJob::NotYetStarted pending uploads 0 subjobs state OCC::PropagatorJob::NotYetStarted
01-14 10:03:29:854 [ info nextcloud.sync.propagator ]: Starting CSyncEnums::CSYNC_INSTRUCTION_NEW propagation of "Nextcloud" by OCC::PropagateLocalMkdir(0x5571c346afe0)
01-14 10:03:29:854 [ info nextcloud.sync.database ]: Updating file record for path: "Nextcloud" inode: 5645997 modtime: 1673085410 type: CSyncEnums::ItemTypeDirectory etag: "_invalid_" fileId: "99398452ock6rp1oyjad" remotePerm: "DNVCKR" fileSize: 0 checksum: "" e2eMangledName: "" isE2eEncrypted: false lock: false lock owner type: 0 lock owner: "" lock owner id: "" lock editor: ""
01-14 10:03:29:854 [ info nextcloud.sync.propagator ]: Completed propagation of "Nextcloud" by OCC::PropagateLocalMkdir(0x5571c346afe0) with status OCC::SyncFileItem::Success
01-14 10:03:29:858 [ info nextcloud.sync.propagator.root.directory ]: scheduleSelfOrChild OCC::PropagatorJob::Running pending uploads 0 subjobs state OCC::PropagatorJob::Running
01-14 10:03:29:858 [ info nextcloud.sync.propagator ]: Starting CSyncEnums::CSYNC_INSTRUCTION_NEW propagation of "Notes" by OCC::PropagateLocalMkdir(0x5571c3470360)
01-14 10:03:29:858 [ info nextcloud.sync.database ]: Updating file record for path: "Notes" inode: 5646004 modtime: 1673021853 type: CSyncEnums::ItemTypeDirectory etag: "_invalid_" fileId: "99376158ock6rp1oyjad" remotePerm: "DNVCKR" fileSize: 0 checksum: "" e2eMangledName: "" isE2eEncrypted: false lock: false lock owner type: 0 lock owner: "" lock owner id: "" lock editor: ""
01-14 10:03:29:858 [ info nextcloud.sync.propagator ]: Completed propagation of "Notes" by OCC::PropagateLocalMkdir(0x5571c3470360) with status OCC::SyncFileItem::Success
01-14 10:03:29:858 [ info nextcloud.sync.database ]: Updating file record for path: "Nextcloud" inode: 5645997 modtime: 1673085410 type: CSyncEnums::ItemTypeDirectory etag: "63b941e2324d9" fileId: "99398452ock6rp1oyjad" remotePerm: "DNVCKR" fileSize: 0 checksum: "" e2eMangledName: "" isE2eEncrypted: false lock: false lock owner type: 0 lock owner: "" lock owner id: "" lock editor: ""
01-14 10:03:29:858 [ info nextcloud.sync.propagator ]: PropagateDirectory::slotSubJobsFinished emit finished OCC::SyncFileItem::Success
01-14 10:03:29:861 [ info nextcloud.sync.propagator.root.directory ]: scheduleSelfOrChild OCC::PropagatorJob::Running pending uploads 0 subjobs state OCC::PropagatorJob::Running
01-14 10:03:29:862 [ info nextcloud.sync.propagator ]: Starting CSyncEnums::CSYNC_INSTRUCTION_NEW propagation of "Talk" by OCC::PropagateLocalMkdir(0x5571c34cf2e0)
01-14 10:03:29:862 [ info nextcloud.sync.database ]: Updating file record for path: "Talk" inode: 5646005 modtime: 1673021842 type: CSyncEnums::ItemTypeDirectory etag: "_invalid_" fileId: "99376154ock6rp1oyjad" remotePerm: "DNVCKR" fileSize: 0 checksum: "" e2eMangledName: "" isE2eEncrypted: false lock: false lock owner type: 0 lock owner: "" lock owner id: "" lock editor: ""
01-14 10:03:29:862 [ info nextcloud.sync.propagator ]: Completed propagation of "Talk" by OCC::PropagateLocalMkdir(0x5571c34cf2e0) with status OCC::SyncFileItem::Success
01-14 10:03:29:862 [ info nextcloud.sync.database ]: Updating file record for path: "Notes" inode: 5646004 modtime: 1673021853 type: CSyncEnums::ItemTypeDirectory etag: "63b8499d22325" fileId: "99376158ock6rp1oyjad" remotePerm: "DNVCKR" fileSize: 0 checksum: "" e2eMangledName: "" isE2eEncrypted: false lock: false lock owner type: 0 lock owner: "" lock owner id: "" lock editor: ""
01-14 10:03:29:862 [ info nextcloud.sync.propagator ]: PropagateDirectory::slotSubJobsFinished emit finished OCC::SyncFileItem::Success
01-14 10:03:29:866 [ info nextcloud.sync.propagator.root.directory ]: scheduleSelfOrChild OCC::PropagatorJob::Running pending uploads 0 subjobs state OCC::PropagatorJob::Running
01-14 10:03:29:866 [ info nextcloud.sync.propagator ]: Starting CSyncEnums::CSYNC_INSTRUCTION_NEW propagation of "todo" by OCC::PropagateLocalMkdir(0x5571c34c0730)
01-14 10:03:29:866 [ info nextcloud.sync.database ]: Updating file record for path: "todo" inode: 5646006 modtime: 1673667233 type: CSyncEnums::ItemTypeDirectory etag: "_invalid_" fileId: "99376056ock6rp1oyjad" remotePerm: "DNVCKR" fileSize: 0 checksum: "" e2eMangledName: "" isE2eEncrypted: false lock: false lock owner type: 0 lock owner: "" lock owner id: "" lock editor: ""
01-14 10:03:29:866 [ info nextcloud.sync.propagator ]: Completed propagation of "todo" by OCC::PropagateLocalMkdir(0x5571c34c0730) with status OCC::SyncFileItem::Success
01-14 10:03:29:866 [ info nextcloud.sync.database ]: Updating file record for path: "Talk" inode: 5646005 modtime: 1673021842 type: CSyncEnums::ItemTypeDirectory etag: "63b84992517bb" fileId: "99376154ock6rp1oyjad" remotePerm: "DNVCKR" fileSize: 0 checksum: "" e2eMangledName: "" isE2eEncrypted: false lock: false lock owner type: 0 lock owner: "" lock owner id: "" lock editor: ""
01-14 10:03:29:866 [ info nextcloud.sync.propagator ]: PropagateDirectory::slotSubJobsFinished emit finished OCC::SyncFileItem::Success
01-14 10:03:29:870 [ info nextcloud.sync.propagator.root.directory ]: scheduleSelfOrChild OCC::PropagatorJob::Running pending uploads 0 subjobs state OCC::PropagatorJob::Running
01-14 10:03:29:870 [ info nextcloud.sync.propagator ]: Starting CSyncEnums::CSYNC_INSTRUCTION_NEW propagation of "todo/.todo-list.org_archive" by OCC::PropagateDownloadFile(0x5571c34daf00)
01-14 10:03:29:870 [ info nextcloud.sync.accessmanager ]: 2 "" "https://#ppp.woelkli.com/remote.php/dav/files/alper.alimoglu#gmail.com/todo/.todo-list.org_archive" has X-Request-ID "4f03a59f-cc0f-4f03-9c04-6054f726bc94"
01-14 10:03:29:871 [ info nextcloud.sync.networkjob ]: OCC::GETFileJob created for "https://ppp.woelkli.com" + "/todo/.todo-list.org_archive" "OCC::PropagateDownloadFile"
01-14 10:03:29:874 [ info nextcloud.sync.propagator.root.directory ]: scheduleSelfOrChild OCC::PropagatorJob::Running pending uploads 0 subjobs state OCC::PropagatorJob::Running
01-14 10:03:29:874 [ info nextcloud.sync.propagator ]: Starting CSyncEnums::CSYNC_INSTRUCTION_NEW propagation of "todo/todo-list.org" by OCC::PropagateDownloadFile(0x5571c34757d0)
01-14 10:03:29:875 [ info nextcloud.sync.accessmanager ]: 2 "" "https://#ppp.woelkli.com/remote.php/dav/files/alper.alimoglu#gmail.com/todo/todo-list.org" has X-Request-ID "41f8110d-654b-428c-a7d6-35fed3d2667e"
01-14 10:03:29:875 [ info nextcloud.sync.networkjob ]: OCC::GETFileJob created for "https://ppp.woelkli.com" + "/todo/todo-list.org" "OCC::PropagateDownloadFile"
01-14 10:03:29:878 [ info nextcloud.sync.propagator.root.directory ]: scheduleSelfOrChild OCC::PropagatorJob::Running pending uploads 0 subjobs state OCC::PropagatorJob::Running
01-14 10:03:29:879 [ info nextcloud.sync.propagator ]: Starting CSyncEnums::CSYNC_INSTRUCTION_NEW propagation of "todo/zoo" by OCC::PropagateDownloadFile(0x7f29340a17b0)
01-14 10:03:29:879 [ info nextcloud.sync.accessmanager ]: 2 "" "https://#ppp.woelkli.com/remote.php/dav/files/alper.alimoglu#gmail.com/todo/zoo" has X-Request-ID "27d4ff9f-5e9a-46d8-82a0-372ed440001c"
01-14 10:03:29:879 [ info nextcloud.sync.networkjob ]: OCC::GETFileJob created for "https://ppp.woelkli.com" + "/todo/zoo" "OCC::PropagateDownloadFile"
01-14 10:03:29:882 [ info nextcloud.sync.propagator.root.directory ]: scheduleSelfOrChild OCC::PropagatorJob::Running pending uploads 0 subjobs state OCC::PropagatorJob::Running
01-14 10:03:30:096 [ info nextcloud.sync.checksums ]: Computing "SHA1" checksum of "/home/alper/Nextcloud/todo/..todo-list.org_archive.~29f9cf4b" in a thread
01-14 10:03:30:096 [ info nextcloud.sync.propagator.download ]: "alper.alimoglu#gmail.com" "alper.alimoglu#gmail.com" "alper.alimoglu#gmail.com#ppp.woelkli.com"
01-14 10:03:30:096 [ info nextcloud.sync.database ]: Updating file record for path: "todo/.todo-list.org_archive" inode: 5646007 modtime: 1673083763 type: CSyncEnums::ItemTypeFile etag: "12af94e75dbfc087dc6b8038a1a7e131" fileId: "99398133ock6rp1oyjad" remotePerm: "WDNVR" fileSize: 347 checksum: "SHA1:55cd297d7da6b9c4c7854805bbdb4791099e32a8" e2eMangledName: "" isE2eEncrypted: false lock: false lock owner type: 0 lock owner: "" lock owner id: "" lock editor: ""
01-14 10:03:30:097 [ info nextcloud.sync.propagator ]: Completed propagation of "todo/.todo-list.org_archive" by OCC::PropagateDownloadFile(0x5571c34daf00) with status OCC::SyncFileItem::Success
01-14 10:03:30:097 [ info nextcloud.sync.checksums ]: Computing "SHA1" checksum of "/home/alper/Nextcloud/todo/.zoo.~5c52cf8e" in a thread
01-14 10:03:30:097 [ info nextcloud.sync.checksums ]: Computing "SHA1" checksum of "/home/alper/Nextcloud/todo/.todo-list.org.~689a35f5" in a thread
01-14 10:03:30:098 [ info nextcloud.sync.propagator.download ]: "alper.alimoglu#gmail.com" "alper.alimoglu#gmail.com" "alper.alimoglu#gmail.com#ppp.woelkli.com"
01-14 10:03:30:098 [ info nextcloud.sync.database ]: Updating file record for path: "todo/zoo" inode: 5646009 modtime: 1673667228 type: CSyncEnums::ItemTypeFile etag: "d8a5a3511a0e045530ebd37209a2af32" fileId: "99781299ock6rp1oyjad" remotePerm: "WDNVR" fileSize: 0 checksum: "SHA1:da39a3ee5e6b4b0d3255bfef95601890afd80709" e2eMangledName: "" isE2eEncrypted: false lock: false lock owner type: 0 lock owner: "" lock owner id: "" lock editor: ""
01-14 10:03:30:098 [ info nextcloud.sync.propagator ]: Completed propagation of "todo/zoo" by OCC::PropagateDownloadFile(0x7f29340a17b0) with status OCC::SyncFileItem::Success
01-14 10:03:30:098 [ info nextcloud.sync.propagator.download ]: "alper.alimoglu#gmail.com" "alper.alimoglu#gmail.com" "alper.alimoglu#gmail.com#ppp.woelkli.com"
01-14 10:03:30:098 [ info nextcloud.sync.database ]: Updating file record for path: "todo/todo-list.org" inode: 5646008 modtime: 1673666892 type: CSyncEnums::ItemTypeFile etag: "d145e38a7f9b55a53706cc101a635779" fileId: "99382200ock6rp1oyjad" remotePerm: "WDNVR" fileSize: 148 checksum: "SHA1:d4199fbec083bd84270dcba2f62646509aa1c8f4" e2eMangledName: "" isE2eEncrypted: false lock: false lock owner type: 0 lock owner: "" lock owner id: "" lock editor: ""
01-14 10:03:30:098 [ info nextcloud.sync.propagator ]: Completed propagation of "todo/todo-list.org" by OCC::PropagateDownloadFile(0x5571c34757d0) with status OCC::SyncFileItem::Success
01-14 10:03:30:099 [ info nextcloud.sync.database ]: Updating file record for path: "todo" inode: 5646006 modtime: 1673667233 type: CSyncEnums::ItemTypeDirectory etag: "63c222a1476e3" fileId: "99376056ock6rp1oyjad" remotePerm: "DNVCKR" fileSize: 0 checksum: "" e2eMangledName: "" isE2eEncrypted: false lock: false lock owner type: 0 lock owner: "" lock owner id: "" lock editor: ""
01-14 10:03:30:099 [ info nextcloud.sync.propagator ]: PropagateDirectory::slotSubJobsFinished emit finished OCC::SyncFileItem::Success
01-14 10:03:30:099 [ info nextcloud.sync.propagator.root.directory ]: OCC::SyncFileItem::Success slotSubJobsFinished OCC::PropagatorJob::Running pending uploads 0 subjobs state OCC::PropagatorJob::Finished
01-14 10:03:30:102 [ info nextcloud.sync.propagator.root.directory ]: scheduleSelfOrChild OCC::PropagatorJob::Running pending uploads 0 subjobs state OCC::PropagatorJob::Finished
01-14 10:03:30:102 [ info nextcloud.sync.propagator ]: PropagateRootDirectory::slotDirDeletionJobsFinished emit finished OCC::SyncFileItem::Success
01-14 10:03:30:102 [ info nextcloud.sync.engine ]: Sync run took 789 ms
01-14 10:03:30:102 [ info nextcloud.sync.database ]: Closing DB "/home/alper/Nextcloud/.sync_33fdb314a5d8.db"

Ansible - set_fact: Set a boolean value based on register.stdout_lines containing a string

How can I set a variable to True or False using set_fact, based on whether register.stdout_lines contains a specific string?
Ansible version - 2.9
This is my stdout_lines output
{
    "msg": [
        "● confluent-server.service - Apache Kafka - broker",
        " Loaded: loaded (/usr/lib/systemd/system/confluent-server.service; enabled; vendor preset: disabled)",
        " Drop-In: /etc/systemd/system/confluent-server.service.d",
        " └─override.conf",
        " Active: active (running) since Tue 2023-01-31 20:57:00 EST; 19h ago",
        " Docs: http://docs.confluent.io/",
        " Main PID: 6978 (java)",
        " CGroup: /system.slice/confluent-server.service",
    ]
}
I want to set a variable server_running to True if the above output contains the string active (running) (which it does in the above case); otherwise it should be set to False.
I tried this, but it is not correct:
- name: success for start
  set_fact:
    start_success: >-
      "{{ confluent_status.stdout_lines | join('') | search(' active (running)') }}"
I want start_success above to hold a true or false value.
I am not yet familiar with processing Ansible output using filters, so I am trying different things found on the net.
Can I set a variable to true or false based on whether a condition holds? How would I go about it?
stdout_lines is basically a list containing an item for each line of stdout.
If you only need to check the output, you can simply use stdout instead.
The following example shows how to determine the value of a variable based on a condition:
- name: success for start
  set_fact:
    start_success: >-
      {% if ' active (running)' in confluent_status.stdout %}True{% else %}False{% endif %}
It's also possible to set stdout_callback = yaml in ansible.cfg for a better formatted output.
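The comparison can also be written directly with Jinja's in operator; a minimal sketch, assuming the same registered confluent_status variable:

- name: success for start
  set_fact:
    start_success: "{{ ' active (running)' in confluent_status.stdout }}"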
You can use regex_search to check for the string you're looking for.
Below is an example playbook.
- name: Check status
  hosts: localhost
  gather_facts: no
  vars:
    confluent_status:
      stdout_lines: [
        "● confluent-server.service - Apache Kafka - broker",
        " Loaded: loaded (/usr/lib/systemd/system/confluent-server.service; enabled; vendor preset: disabled)",
        " Drop-In: /etc/systemd/system/confluent-server.service.d",
        " └─override.conf",
        " Active: active (running) since Tue 2023-01-31 20:57:00 EST; 19h ago",
        " Docs: http://docs.confluent.io/",
        " Main PID: 6978 (java)",
        " CGroup: /system.slice/confluent-server.service",
      ]
  tasks:
    - name: set_status either to True or False
      set_fact:
        set_status: "{% if (confluent_status.stdout_lines | regex_search('active \\(running\\)')) %}True{% else %}False{% endif %}"

    - name: output set_status variable set in the previous task
      debug:
        msg: "{{ set_status }}"

    - name: just a debug that outputs the status directly, so you can use the condition in any task if needed
      debug:
        msg: "{% if (confluent_status.stdout_lines | regex_search('active \\(running\\)')) %}True{% else %}False{% endif %}"
Gives:
PLAY [Check status] ************************************************************************************************************************************************************************
TASK [set_status either to True or False] **************************************************************************************************************************************************
ok: [localhost]
TASK [output set_status variable set in the previous task] *********************************************************************************************************************************
ok: [localhost] => {
"msg": true
}
TASK [Check if status is True or false] ****************************************************************************************************************************************************
ok: [localhost] => {
"msg": true
}

django-filter looking for templates only in venv

Tried to make a custom widget for RangeFilter. Every time, Django raises
TemplateDoesNotExist
Template-loader postmortem
Django tried loading these templates, in this order:
Using engine django:
django.template.loaders.filesystem.Loader: C:\Users\Alexander\PycharmProjects\Search\venv\lib\site-packages\django\forms\templates\MyRangeWidget.html (Source does not exist)
django.template.loaders.app_directories.Loader: C:\Users\Alexander\PycharmProjects\Search\venv\lib\site-packages\django_filters\templates\MyRangeWidget.html (Source does not exist)
django.template.loaders.app_directories.Loader: C:\Users\Alexander\PycharmProjects\Search\venv\lib\site-packages\django\contrib\admin\templates\MyRangeWidget.html (Source does not exist)
django.template.loaders.app_directories.Loader: C:\Users\Alexander\PycharmProjects\Search\venv\lib\site-packages\django\contrib\auth\templates\MyRangeWidget.html (Source does not exist)
django.template.loaders.app_directories.Loader: C:\Users\Alexander\PycharmProjects\Search\venv\lib\site-packages\debug_toolbar\templates\MyRangeWidget.html (Source does not exist)
django.template.loaders.app_directories.Loader: C:\Users\Alexander\PycharmProjects\Search\venv\lib\site-packages\bootstrap5\templates\MyRangeWidget.html (Source does not exist)
Every path Django searches for the template is inside the venv. When I put my widget template at the path below, it works just fine.
venv\lib\site-packages\django_filters\templates\MyRangeWidget.html
But I believe that's not the correct way to do this.
My widget:
<div class="input-group">
    {% for widget in widget.subwidgets %}
        {% if forloop.first %}
            <span class="input-group-text"><i class="bi bi-chevron-left"></i></span>
        {% endif %}
        {% if forloop.last %}
            <span class="input-group-text"><i class="bi bi-chevron-right"></i></span>
        {% endif %}
        {% include widget.template_name %}
    {% endfor %}
</div>
filters.py
import django_filters

from .models import *
from .widgets import MyRangeWidget


class LandFilter(django_filters.FilterSet):
    price = django_filters.RangeFilter(widget=MyRangeWidget)
    size = django_filters.RangeFilter(widget=MyRangeWidget)

    class Meta:
        model = PriceChangeHistory
        fields = ['price', 'size']
widgets.py
import django_filters.widgets


class MyRangeWidget(django_filters.widgets.RangeWidget):
    template_name = 'MyRangeWidget.html'
settings.py
BASE_DIR = Path(__file__).resolve().parent.parent

INSTALLED_APPS = [
    'django_filters',
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'debug_toolbar',
    'bootstrap5',
    'search',
]

TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [BASE_DIR / 'templates'],
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.template.context_processors.debug',
                'django.template.context_processors.request',
                'django.contrib.auth.context_processors.auth',
                'django.contrib.messages.context_processors.messages',
            ],
        },
    },
]
Just to mention, I tried different paths in template_name, including an absolute path, before I noticed that Django doesn't even try to search for it anywhere except the venv.
I have no idea whether it's me or a problem with the django-filter package.
On the second day of searching for a way to fix this, right after asking it here, I found the way.
I added django.forms to INSTALLED_APPS and a FORM_RENDERER setting, like so:
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'django.forms',
    'django_filters',
    'debug_toolbar',
    'bootstrap5',
    'search',
]

FORM_RENDERER = 'django.forms.renderers.TemplatesSetting'
Now template_name in widget works as expected.

How to alert via email in Ansible

I have set up a mail task in Ansible to send emails if yum update is marked as 'changed'.
Here is my current working code:
- name: Send mail alert if updated
  community.general.mail:
    to:
      - 'recipient1'
    cc:
      - 'recipient2'
    subject: Update Alert
    body: 'Ansible Tower Updates have been applied on the following system: {{ ansible_hostname }}'
    sender: "ansible.updates#domain.com"
  delegate_to: localhost
  when: yum_update.changed
This works great; however, every system that gets updated in a host group sends a separate email. Last night, for instance, a group of 20 servers updated and I received 20 separate emails. I'm aware of why this happens, but my question is: how would I script this so that all the systems end up in one email? Is that even possible, or should I just alert that the group was updated and inform teams of which servers are in each group? (I'd prefer not to take the second option.)
Edit 1:
I have added the code suggested and am now unable to receive any emails. Here's the error message:
"msg": "The conditional check '_changed|length > 0' failed. The error was: error while evaluating conditional (_changed|length > 0): {{ hostvars|dict2items| selectattr('value.yum_update.changed')| map(attribute='key')|list }}: 'ansible.vars.hostvars.HostVarsVars object' has no attribute 'yum_update'\n\nThe error appears to be in '/tmp/bwrap_1073_o8ibkgrl/awx_1073_0eojw5px/project/yum-update-ent_template_servers.yml': line 22, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: Send mail alert if updated\n ^ here\n",
I am also attaching my entire playbook for reference:
---
- name: Update enterprise template servers
  hosts: ent_template_servers
  tasks:
    - name: Update all packages
      yum:
        name: '*'
        state: latest
      register: yum_update

    - name: Reboot if needed
      import_tasks: /usr/share/ansible/tasks/reboot-if-needed-centos.yml

    - name: Kernel Cleanup
      import_tasks: /usr/share/ansible/tasks/kernel-cleanup.yml

    - debug:
        var: yum_update.changed

    - name: Send mail alert if updated
      community.general.mail:
        to:
          - 'email#domain.com'
        subject: Update Alert
        body: |-
          Updates have been applied on the following system(s):
          {{ _changed }}
        sender: "ansible.updates#domain.com"
      delegate_to: localhost
      run_once: true
      when: _changed|length > 0
      vars:
        _changed: "{{ hostvars|dict2items|
                      selectattr('yum_update.changed')|
                      map(attribute='key')|list }}"
...
Ansible version is: 2.9.27
Ansible Tower version is: 3.8.3
Thanks in advance!
For example, the mail task below
- debug:
    var: yum_update.changed

- community.general.mail:
    sender: ansible
    to: root
    subject: Update Alert
    body: |-
      Updates have been applied to the following system:
      {{ _changed }}
  delegate_to: localhost
  run_once: true
  when: _changed|length > 0
  vars:
    _changed: "{{ hostvars|dict2items|
                  selectattr('value.yum_update.changed')|
                  map(attribute='key')|list }}"
TASK [debug] ***************************************************************
ok: [host01] =>
yum_update.changed: true
ok: [host02] =>
yum_update.changed: false
ok: [host03] =>
yum_update.changed: true
TASK [community.general.mail] **********************************************
ok: [host01 -> localhost]
will send
From: ansible#domain.com
To: root#domain.com
Cc:
Subject: Update Alert
Date: Wed, 09 Feb 2022 16:55:47 +0100
X-Mailer: Ansible mail module
Updates have been applied to the following system:
['host01', 'host03']
Remove the condition below if you also want to receive empty lists
when: _changed|length > 0
Debug
'ansible.vars.hostvars.HostVarsVars object' has no attribute 'yum_update'
Q: "What I could try?"
A: Some of the hosts are missing the variable yum_update. You can test it:
- debug:
    msg: "{{ hostvars|dict2items|
             selectattr('value.yum_update.changed')|
             map(attribute='key')|list }}"
  run_once: true
Either make sure that the variable is defined on all hosts or use json_query. This filter tolerates missing attributes, e.g.
- debug:
    msg: "{{ hostvars|dict2items|
             json_query('[?value.yum_update.changed].key') }}"
  run_once: true
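If you want to keep selectattr, a minimal sketch (assuming the same hostvars layout) is to drop hosts where the variable is undefined before testing it:

- debug:
    msg: "{{ hostvars|dict2items|
             selectattr('value.yum_update', 'defined')|
             selectattr('value.yum_update.changed')|
             map(attribute='key')|list }}"
  run_once: true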
Q: "The 'debug' task prior to the 'mail' task gives me the same output. But it fails when the 'mail' task is executed."
A: Minimize the code and isolate the problem. For example, in the code below you can see
Variable yum_update.changed is missing on host03
The filter json_query ignores this
The filter selectattr fails
- debug:
    var: yum_update.changed

- debug:
    msg: "{{ hostvars|dict2items|
             json_query('[?value.yum_update.changed].key') }}"
  run_once: true

- debug:
    msg: "{{ hostvars|dict2items|
             selectattr('value.yum_update.changed')|
             map(attribute='key')|list }}"
  run_once: true
gives
TASK [debug] **************************************************
ok: [host01] =>
yum_update.changed: true
ok: [host02] =>
yum_update.changed: false
ok: [host03] =>
yum_update.changed: VARIABLE IS NOT DEFINED!
TASK [debug] **************************************************
ok: [host01] =>
msg:
- host01
TASK [debug] **************************************************
fatal: [host01]: FAILED! =>
msg: |-
The task includes an option with an undefined variable.
The error was: 'ansible.vars.hostvars.HostVarsVars object'
has no attribute 'yum_update'
Both filters give the same results if all variables are present
TASK [debug] **************************************************
ok: [host01] =>
yum_update.changed: true
ok: [host02] =>
yum_update.changed: false
ok: [host03] =>
yum_update.changed: true
TASK [debug] **************************************************
ok: [host01] =>
msg:
- host01
- host03
TASK [debug] **************************************************
ok: [host01] =>
msg:
- host01
- host03

successful snapshot fails to load some shards, RepositoryMissingException in elasticsearch

I had a backup complete successfully to my S3 bucket in Elasticsearch:
{
  "state": "SUCCESS",
  "start_time": "2014-12-06T00:12:39.362Z",
  "start_time_in_millis": 1417824759362,
  "end_time": "2014-12-06T00:33:34.352Z",
  "end_time_in_millis": 1417826014352,
  "duration_in_millis": 1254990,
  "failures": [],
  "shards": {
    "total": 345,
    "failed": 0,
    "successful": 345
  }
}
But when I restore from the snapshot, I have a few failed shards, with the following message:
[2014-12-08 00:00:05,580][WARN ][cluster.action.shard] [Sunder] [kibana-int][4] received shard failed for [kibana-int][4],
node[_QG8dkDaRD-H1uPL_p57lw], [P], restoring[elasticsearch:snapshot_1], s[INITIALIZING], indexUUID [SAuv_EU3TBGZ71NhkC7WOA],
reason [Failed to start shard, message [IndexShardGatewayRecoveryException[[kibana-int][4] failed recovery];
nested: IndexShardRestoreFailedException[[kibana-int][4] restore failed];
nested: RepositoryMissingException[[elasticsearch] missing]; ]]
How do I reconcile the data, or, if necessary, remove the failed shards from my cluster so the restore can complete?
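A quick check worth running (a sketch, assuming the repository is supposed to be named elasticsearch, as in the error message) is to confirm that the snapshot repository is registered on the cluster performing the restore:

curl -XGET 'http://localhost:9200/_snapshot/elasticsearch?pretty'

If that returns a missing-repository error, re-registering the S3 repository with the same settings used for the backup is the usual first step before retrying the restore.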