Issue with creation of SSL key via Ansible

Can someone help me understand why this task is not working for me?
---
- name: Generation of SSL key
  openssl_privatekey:
    path: "/opt/mongodbkey"
    size: 741
    force: true
My error:
ERROR! no action detected in task. This often indicates a misspelled module name, or incorrect module path.
The error appears to have been in '/root/mongodb/roles/mongodb/tasks/ssl.yml': line 2, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
---
- name: Generation of SSL key
^ here
Also, one more quick question: is this manual command the right method?
openssl rand -base64 741 > /opt/mongodbkey
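Two things stand out here (a hedged reading, since the rest of the role isn't shown): on Ansible 2.10+ the openssl_privatekey module lives in the community.crypto collection, and an unresolved module name produces exactly this "no action detected" error; also, openssl_privatekey generates a private key (RSA by default), for which 741 is not a valid size, whereas a MongoDB keyfile is just random base64 data. A sketch of both options, assuming the community.crypto collection is installed (ansible-galaxy collection install community.crypto):

---
# Option 1: a real private key (RSA by default; 741 is not a valid key size).
- name: Generation of SSL private key
  community.crypto.openssl_privatekey:
    path: /opt/ssl.key   # hypothetical path, for illustration
    size: 2048
    force: true

# Option 2: a MongoDB keyfile, mirroring the manual command from the question.
- name: Generate MongoDB keyfile
  ansible.builtin.shell: openssl rand -base64 741 > /opt/mongodbkey
  args:
    creates: /opt/mongodbkey   # skip if the keyfile already exists

And yes, piping openssl rand -base64 to a file is the standard manual way to create a MongoDB keyfile; just remember to restrict its permissions afterwards (typically chmod 400).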

vich/uploader-bundle:1.21.1 - ReplacingFile does not create directory according to configured directory_namer

Package               Version
vich/uploader-bundle  1.21.1
Symfony               5.4.19
PHP                   7.4.30
I'm trying to use Vich\UploaderBundle\FileAbstraction\ReplacingFile (as described at https://github.com/dustin10/VichUploaderBundle/blob/1.21.1/docs/other_usages/replacing_file.md) to inject a file from the local filesystem.
But the following ErrorException is thrown upon persisting the entity with the attached file:
ErrorException: Warning: copy(/var/www/var/storage/files/da/39/da39a3ee5e6b4b0d3255bfef95601890afd80709): failed to open stream: No such file or directory in /var/www/vendor/vich/uploader-bundle/src/Storage/FileSystemStorage.php on line 25
Call Stack:
0.0037 400328 1. {main}() /var/www/bin/console:0
0.0957 406696 2. require_once('/var/www/vendor/autoload_runtime.php') /var/www/bin/console:11
0.1202 4558088 3. Symfony\Component\Runtime\Runner\Symfony\ConsoleApplicationRunner->run() /var/www/vendor/autoload_runtime.php:35
0.1202 4558088 4. Symfony\Component\Console\Application->run(...
0.1224 4567696 5. Symfony\Bundle\FrameworkBundle\Console\Application->doRun(...
0.1781 15215392 6. Symfony\Component\Console\Application->doRun(...
0.2879 22494832 7. Symfony\Bundle\FrameworkBundle\Console\Application->doRunCommand(...
0.2879 22494832 8. Symfony\Component\Console\Application->doRunCommand(...
0.2933 22638600 9. Symfony\Component\Console\Command\Command->run(...
0.2935 22638600 10. App\Command\WatcherCommand->execute(...
5.9695 23904056 11. App\Service\FileHandler->handleEvent(...
12.2081 28132368 12. Doctrine\ORM\EntityManager->persist(...
12.2081 28132368 13. Doctrine\ORM\UnitOfWork->persist(...
12.2081 28132400 14. Doctrine\ORM\UnitOfWork->doPersist(...
12.2082 28132776 15. Doctrine\ORM\UnitOfWork->persistNew(...
12.2084 28133768 16. Doctrine\ORM\Event\ListenersInvoker->invoke($...
12.2084 28133768 17. Symfony\Bridge\Doctrine\ContainerAwareEventManager->dispatchEvent(...
12.2084 28133768 18. Vich\UploaderBundle\EventListener\Doctrine\UploadListener->prePersist(...
12.2089 28144592 19. Vich\UploaderBundle\Handler\UploadHandler->upload(...
12.2135 28407936 20. Vich\UploaderBundle\Storage\AbstractStorage->upload(...
12.2152 28439984 21. Vich\UploaderBundle\Storage\FileSystemStorage->doUpload(...
19.9282 28442848 22. copy($source_file = '/var/www/var/storage/symlinks/24410740-1466-45a9-be93-cbdbde4be40a/test6.txt', $destination_file = '/var/www/var/storage/files/da/39/da39a3ee5e6b4b0d3255bfef95601890afd80709') /var/www/vendor/vich/uploader-bundle/src/Storage/FileSystemStorage.php:25
19.9282 28444520 23. Symfony\Component\ErrorHandler\ErrorHandler->handleError(...
Link to Line in Question: https://github.com/dustin10/VichUploaderBundle/blob/1.21.1/src/Storage/FileSystemStorage.php#L25
It seems that the configured directory_namer in config/packages/vich_uploader.yaml did not do its job:
vich_uploader:
    db_driver: orm
    #mappings:
    #    products:
    #        uri_prefix: /images/products
    #        upload_destination: '%kernel.project_dir%/public/images/products'
    #        namer: Vich\UploaderBundle\Naming\SmartUniqueNamer
    mappings:
        file:
            uri_prefix: null
            upload_destination: '%app.storage.files_dir%'
            namer:
                service: App\Service\UploaderFileNamer
                options: { property: 'checksum' }
            directory_namer:
                service: Vich\UploaderBundle\Naming\SubdirDirectoryNamer
                options: { chars_per_dir: 2, dirs: 2 }
            inject_on_load: true
This is my configured upload_destination:
❯ console debug:container --parameters | grep "files_dir"
app.storage.files_dir /var/www/var/storage/files
The ErrorException seems to be correct because the target directory does not exist:
php#dev-64fb4d859b-btcv9:/var/www$ ls /var/www/var/storage/files/da/39
ls: cannot access '/var/www/var/storage/files/da/39': No such file or directory
^^ directory-namer 2nd level - does not exist ^^
php#dev-64fb4d859b-btcv9:/var/www$ ls /var/www/var/storage/files/da/
ls: cannot access '/var/www/var/storage/files/da/': No such file or directory
^^ directory-namer 1st level - does not exist ^^
php#dev-64fb4d859b-btcv9:/var/www$ ls /var/www/var/storage/files/
^^ configured upload-destination - exists ^^
Is that the expected behavior, and do I need to trigger the directory_namer separately when using ReplacingFile? If so, how can I accomplish that?
In case it's a bug, is there some way to get the target path in advance so I can create the directory myself?
I was expecting the ReplacingFile to be handled like an UploadedFile, with the directory_namer creating the target directory according to the configuration.
I've tried using UploadedFile and it all worked as expected (the directory_namer did create the target directory).
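In case it helps, one possible workaround (a minimal sketch, not a confirmed fix; $entity->getChecksum() and $filesDir are hypothetical stand-ins for the entity's checksum property and the resolved %app.storage.files_dir% parameter) is to pre-create the directories that SubdirDirectoryNamer with chars_per_dir: 2 and dirs: 2 would derive, before persisting:

use Symfony\Component\Filesystem\Filesystem;

// Derive e.g. 'da/39' the way SubdirDirectoryNamer (chars_per_dir: 2,
// dirs: 2) slices the checksum-based file name, then create it.
$name   = $entity->getChecksum();                        // 'da39a3ee...'
$subDir = substr($name, 0, 2).'/'.substr($name, 2, 2);   // 'da/39'
(new Filesystem())->mkdir($filesDir.'/'.$subDir);        // mkdir is recursive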

KIBANA - WAZUH index pattern

I have a project to install Wazuh as a FIM (file integrity monitoring) tool on Linux, AIX and Windows.
I managed to install the manager and all agents on all systems, and I can see all three connected as agents on the Kibana web UI.
I created a test file on the Linux agent and I can also find it in the web interface, so the servers are connected.
Here is the test file as found in the Wazuh inventory tab.
But I am not receiving any logs when I modify this test file.
These are my settings in ossec.conf under syscheck on the agent server:
<directories>/var/ossec/etc/test</directories>
<directories report_changes="yes" check_all="yes" realtime="yes">/var/ossec/etc/test</directories>
I am now also struggling to understand the meaning of index patterns, index templates and fields.
I don't understand what they are or why we need to set them.
My settings on the manager server - /usr/share/kibana/data/wazuh/config/wazuh.yml:
alerts.sample.prefix: 'wazuh-alerts-*'
pattern: 'wazuh-alerts-*'
On the Kibana web UI I also get this error when I try to check "Events" - there are no logs under Events:
Error: The field "timestamp" associated with this object no longer exists in the index pattern. Please use another field.
at FieldParamType.config.write.write (http://MYIP:5601/42959/bundles/plugin/data/kibana/data.plugin.js:1:627309)
at http://MYIP:5601/42959/bundles/plugin/data/kibana/data.plugin.js:1:455052
at Array.forEach (<anonymous>)
at writeParams (http://MYIP:5601/42959/bundles/plugin/data/kibana/data.plugin.js:1:455018)
at AggConfig.write (http://MYIP:5601/42959/bundles/plugin/data/kibana/data.plugin.js:1:355081)
at AggConfig.toDsl (http://MYIP:5601/42959/bundles/plugin/data/kibana/data.plugin.js:1:355960)
at http://MYIP:5601/42959/bundles/plugin/data/kibana/data.plugin.js:1:190748
at Array.forEach (<anonymous>)
at agg_configs_AggConfigs.toDsl (http://MYIP:5601/42959/bundles/plugin/data/kibana/data.plugin.js:1:189329)
at http://MYIP:5601/42959/bundles/plugin/wazuh/4.2.5-4206-1/wazuh.chunk.6.js:55:1397640
Thank you.
About FIM:
Here you can find the FIM documentation in case you don't have it:
https://documentation.wazuh.com/current/user-manual/capabilities/file-integrity/fim-configuration.html
https://documentation.wazuh.com/current/user-manual/reference/ossec-conf/syscheck.html
The first requirement for this to work is to ensure that a FIM alert is triggered. Could you check the alerts.json file on your manager? It is usually located under /var/ossec/logs/alerts/alerts.json. To test this fully I would run "tail -f /var/ossec/logs/alerts/alerts.json" and make a change in your monitored directory; if no alert is generated, then we will need to check the agent configuration.
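One more thing worth double-checking in the agent configuration (a hedged suggestion, not a confirmed fix): the ossec.conf snippet above lists /var/ossec/etc/test in two separate <directories> entries, and duplicate entries for the same path can shadow each other. A single merged entry would look like this:

<directories report_changes="yes" check_all="yes" realtime="yes">/var/ossec/etc/test</directories>
<!-- then restart the agent so syscheck picks up the change, e.g. systemctl restart wazuh-agent -->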
About indexing:
Here you can find some documentation:
https://www.elastic.co/guide/en/elasticsearch/reference/current/index-templates.html
https://www.elastic.co/guide/en/kibana/current/managing-index-patterns.html#scripted-fields
https://documentation.wazuh.com/current/user-manual/kibana-app/reference/elasticsearch.html
Regarding your error, the best way to solve this is to delete the index pattern. To do this:
go to Kibana -> Stack Management -> Index Patterns and delete wazuh-alerts-* there.
Then, when you enter the Wazuh app, the health check will create it again, or you can create the index pattern yourself:
go to Kibana -> Stack Management -> Index Patterns and select Create index pattern.
Hope this information helps you.
Regards.
Thank you for your answer.
I managed to get past this issue, but I hit another error.
When I run tail -f /var/ossec/logs/alerts/alerts.json I get a never-ending stream of updates, thousands of lines with errors like:
{"timestamp":"2022-01-31T12:40:08.458+0100","rule":{"level":5,"description":"Systemd: Service has entered a failed state, and likely has not started.","id":"40703","firedtimes":7420,"mail":false,"groups":["local","systemd"],"gpg13":["4.3"],"gdpr":["IV_35.7.d"]},"agent":{"id":"003","name":"MYAGENTSERVERNAME","ip":"X.X.X.X"},"manager":{"name":"MYMANAGERSERVERNAME"},"id":"1643629208.66501653","full_log":"Jan 31 12:40:07 MYAGENTSERVERNAME systemd: Unit rbro-cbs-adapter-int.service entered failed state.","predecoder":{"program_name":"systemd","timestamp":"Jan 31 12:40:07","hostname":"MYAGENTSERVERNAME"},"decoder":{"name":"systemd"},"location":"/var/log/messages"}
But I can also find an alert when I change the monitored file (file: wazuhtest):
{"timestamp":"2022-01-31T12:45:59.874+0100","rule":{"level":7,"description":"Integrity checksum changed.","id":"550","mitre":{"id":["T1492"],"tactic":["Impact"],"technique":["Stored Data Manipulation"]},"firedtimes":1,"mail":false,"groups":["ossec","syscheck","syscheck_entry_modified","syscheck_file"],"pci_dss":["11.5"],"gpg13":["4.11"],"gdpr":["II_5.1.f"],"hipaa":["164.312.c.1","164.312.c.2"],"nist_800_53":["SI.7"],"tsc":["PI1.4","PI1.5","CC6.1","CC6.8","CC7.2","CC7.3"]},"agent":{"id":"003","name":"MYAGENTSERVERNAME","ip":"x.x.xx.x"},"manager":{"name":"MYMANAGERSERVERNAME"},"id":"1643629559.67086751","full_log":"File '/var/ossec/etc/wazuhtest' modified\nMode: realtime\nChanged attributes: size,mtime,inode,md5,sha1,sha256\nSize changed from '61' to '66'\nOld modification time was: '1643618571', now it is '1643629559'\nOld inode was: '786558', now it is '786559'\nOld md5sum was: '2dd5fe4d08e7c58dfdba76e55430ba57'\nNew md5sum is : 'd8b218e9ea8e2da8e8ade8498d06cba8'\nOld sha1sum was: 'ca9bac5a2d8e6df4aa9772b8485945a9f004a2e3'\nNew sha1sum is : 'bd8b8b5c20abfe08841aa4f5aaa1e72f54a46d31'\nOld sha256sum was: '589e6f3d691a563e5111e0362de0ae454aea52b7f63014cafbe07825a1681320'\nNew sha256sum is : '7f26a582157830b1a725a059743e6d4d9253e5f98c52d33863bc7c00cca827c7'\n","syscheck":{"path":"/var/ossec/etc/wazuhtest","mode":"realtime","size_before":"61","size_after":"66","perm_after":"rw-r-----","uid_after":"0","gid_after":"0","md5_before":"2dd5fe4d08e7c58dfdba76e55430ba57","md5_after":"d8b218e9ea8e2da8e8ade8498d06cba8","sha1_before":"ca9bac5a2d8e6df4aa9772b8485945a9f004a2e3","sha1_after":"bd8b8b5c20abfe08841aa4f5aaa1e72f54a46d31","sha256_before":"589e6f3d691a563e5111e0362de0ae454aea52b7f63014cafbe07825a1681320","sha256_after":"7f26a582157830b1a725a059743e6d4d9253e5f98c52d33863bc7c00cca827c7","uname_after":"root","gname_after":"root","mtime_before":"2022-01-31T09:42:51","mtime_after":"2022-01-31T12:45:59","inode_before":786558,"inode_after":786559,"diff":"1c1\n< dadadadadad\n---\n> dfsdfdadadadadad\n","changed_attributes":["size","mtime","inode","md5","sha1","sha256"],"event":"modified"},"decoder":{"name":"syscheck_integrity_changed"},"location":"syscheck"}
{"timestamp":"2022-01-31T12:46:08.452+0100","rule":{"level":3,"description":"Log file rotated.","id":"591","firedtimes":5,"mail":false,"groups":["ossec"],"pci_dss":["10.5.2","10.5.5"],"gpg13":["10.1"],"gdpr":["II_5.1.f","IV_35.7.d"],"hipaa":["164.312.b"],"nist_800_53":["AU.9"],"tsc":["CC6.1","CC7.2","CC7.3","PI1.4","PI1.5","CC7.1","CC8.1"]},"agent":{"id":"003","name":"MYAGENTSERVERNAME","ip":"x.x.xx.x"},"manager":{"name":"MYMANAGERSERVERNAME"},"id":"1643629568.67099280","full_log":"ossec: File rotated (inode changed): '/var/ossec/etc/wazuhtest'.","decoder":{"name":"ossec"},"location":"wazuh-logcollector"}
I can also see this alert in the messages log on the manager server:
Jan 31 12:46:10 MYMANAGERSERVERNAME filebeat[186670]: 2022-01-31T12:46:10.379+0100#011WARN#011[elasticsearch]#011elasticsearch/client.go:405#011Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0xc07610e0563729bf, ext:10888984451164, loc:(*time.Location)(0x55958e3622a0)}, Meta:{"pipeline":"filebeat-7.14.0-wazuh-alerts-pipeline"}, Fields:{"agent":{"ephemeral_id":"dd9ff0c5-d5a9-4a0e-b1b3-0e9d7e8997ad","hostname":"MYMANAGERSERVERNAME","id":"03fb57ca-9940-4886-9e6e-a3b3e635cd35","name":"MYMANAGERSERVERNAME","type":"filebeat","version":"7.14.0"},"ecs":{"version":"1.10.0"},"event":{"dataset":"wazuh.alerts","module":"wazuh"},"fields":{"index_prefix":"wazuh-alerts-4.x-"},"fileset":{"name":"alerts"},"host":{"name":"MYMANAGERSERVERNAME"},"input":{"type":"log"},"log":{"file":{"path":"/var/ossec/logs/alerts/alerts.json"},"offset":127261462},"message":"{"timestamp":"2022-01-31T12:46:08.452+0100","rule":{"level":3,"description":"Log file rotated.","id":"591","firedtimes":5,"mail":false,"groups":["ossec"],"pci_dss":["10.5.2","10.5.5"],"gpg13":["10.1"],"gdpr":["II_5.1.f","IV_35.7.d"],"hipaa":["164.312.b"],"nist_800_53":["AU.9"],"tsc":["CC6.1","CC7.2","CC7.3","PI1.4","PI1.5","CC7.1","CC8.1"]},"agent":{"id":"003","name":"xlcppt36","ip":"10.74.96.34"},"manager":{"name":"MYMANAGERSERVERNAME"},"id":"1643629568.67099280","full_log":"ossec: File rotated (inode changed): '/var/ossec/etc/wazuhtest'.","decoder":{"name":"ossec"},"location":"wazuh-logcollector"}","service":{"type":"wazuh"}}, Private:file.State{Id:"native::706-64776", PrevId:"", Finished:false, Fileinfo:(*os.fileStat)(0xc00095ea90), Source:"/var/ossec/logs/alerts/alerts.json", Offset:127262058, Timestamp:time.Time{wall:0xc076063e1f1b1286, ext:133605185, loc:(*time.Location)(0x55958e3622a0)}, TTL:-1, Type:"log", Meta:map[string]string(nil), FileStateOS:file.StateOS{Inode:0x2c2, Device:0xfd08}, IdentifierName:"native"}, TimeSeries:false}, Flags:0x1, Cache:publisher.EventCache{m:common.MapStr(nil)}} (status=400): {"type":"illegal_argument_exception","reason":"data_stream [<wazuh-alerts-4.x-{2022.01.31||/d{yyyy.MM.dd|UTC}}>] must not contain the following characters [ , ", *, \, <, |, ,, >, /, ?]"}
Here is the output from the application checks.
curl "http://localhost:9200"
{
  "version" : {
    "number" : "7.14.2",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "6bc13727ce758c0e943c3c21653b3da82f627f75",
    "build_date" : "2021-09-15T10:18:09.722761972Z",
    "build_snapshot" : false,
    "lucene_version" : "8.9.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
filebeat test output
elasticsearch: http://127.0.0.1:9200...
  parse url... OK
  connection...
    parse host... OK
    dns lookup... OK
    addresses: 127.0.0.1
    dial up... OK
  TLS... WARN secure connection disabled
  talk to server... OK
  version: 7.14.2
So, I can see alerts coming from the agent, but they are not reaching Kibana yet. On the Kibana web UI I can see the agent is active and connected.
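A note on the filebeat warning above (a hedged reading): Elasticsearch rejects the event because the computed index name itself is malformed ("data_stream [<wazuh-alerts-4.x-{2022.01.31||/d{yyyy.MM.dd|UTC}}>] must not contain the following characters ..."), which points at the filebeat index configuration on the manager rather than at the agent. Some diagnostic commands, assuming the default Wazuh filebeat setup:

# Inspect how filebeat composes the Wazuh alerts index name:
grep -E 'index|ilm' /etc/filebeat/filebeat.yml
# Check whether any wazuh-alerts indices were created at all:
curl 'http://localhost:9200/_cat/indices/wazuh-alerts-*?v'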

Use dotted YAML variables file in Ansible

I'm trying to achieve the following using Ansible:
Define a YAML file with some variables in the dotted format inside it (variables.yml)
database.hosts[0]: "db0"
database.hosts[1]: "db1"
database.hosts[2]: "db2"
foo.bar: 1
foo.baz: 2
Load the variables in variables.yml by using the include_vars module in my playbook (playbook.yml) and print them in a tree structure
- hosts: all
  gather_facts: no
  tasks:
    - name: "Loading vars"
      run_once: true
      include_vars:
        file: 'variables.yml'
    - name: "Testing"
      debug:
        msg: "{{ foo }}"
    - name: "Testing"
      debug:
        msg: "{{ database }}"
Running this results in the following error:
fatal: [host0]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'foo' is undefined\n\nThe error appears to be in '.../playbook.yml': line 9, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: \"Testing\"\n ^ here\n"}
This makes it clear that each property in the YAML file has been loaded as a separate flat variable, not as properties within two trees rooted at database and foo.
Of course, the playbook works as expected if I specify the properties as follows:
database:
  hosts:
    - "db0"
    - "db1"
    - "db2"
foo:
  bar: 1
  baz: 2
However, I need the YAML variables file to be in the dotted format instead of the classic indented format. Is there any way to achieve this? E.g. a module different from include_vars, or some configuration that I can add to the ansible.cfg file? I have already tried hash_behaviour=merge, but that didn't help.
Q: "I need the YAML variables file to be in the dotted format instead of in the classic indented format. Is there any way to achieve this?"
A: No, it's not possible. See Creating valid variable names.
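If the dotted file format is a hard requirement, the closest workaround is probably to preprocess the file into nested YAML before include_vars sees it. A minimal sketch in Python, assuming PyYAML is available and keys shaped exactly like the example above (dots for nesting, [N] suffixes for list indices):

import re
import yaml

def insert(tree, keys, value):
    """Descend one dotted segment at a time, creating dicts/lists as needed."""
    head, *rest = keys
    name, idx = re.match(r"([^\[]+)(?:\[(\d+)\])?$", head).groups()
    if idx is None:
        if rest:
            insert(tree.setdefault(name, {}), rest, value)
        else:
            tree[name] = value
    else:
        items = tree.setdefault(name, [])
        while len(items) <= int(idx):
            items.append(None)   # pad gaps if indices arrive out of order
        if rest:
            if items[int(idx)] is None:
                items[int(idx)] = {}
            insert(items[int(idx)], rest, value)
        else:
            items[int(idx)] = value

with open("variables.yml") as f:
    flat = yaml.safe_load(f)          # {'database.hosts[0]': 'db0', ...}

nested = {}
for dotted_key, value in flat.items():
    insert(nested, dotted_key.split("."), value)

with open("variables.nested.yml", "w") as f:
    yaml.safe_dump(nested, f)

The generated variables.nested.yml can then be loaded with include_vars as usual.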

Ansible docker invalid character %

I am writing some Ansible to automate my Docker deployments.
I am trying to set an environment variable to % but I get this error:
ERROR! Syntax Error while loading YAML.
found character that cannot start any token
The error appears to have been in '/vagrant/roles/zoneminder_docker/tasks/main.yml': line 21, column 24, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
MYSQL_ROOT_PASSWORD: mysqlpsswd
MYSQL_ROOT_HOST: %
^ here
Is there any way to escape this character? As if I exclude this line, my deployment fails.
Is there any way to escape this character? As if I exclude this line, my deployment fails.
The thing you are looking for is to make that value a string literal; in YAML, % is the directive indicator and cannot start a plain scalar, so it has to be quoted:
MYSQL_ROOT_PASSWORD: mysqlpsswd
MYSQL_ROOT_HOST: '%'

Unexpected error while loading data

I am getting an "Unexpected" error. I tried a few times, and I still could not load the data. Is there any other way to load data?
gs://log_data/r_mini_raw_20120510.txt.gz to 567402616005:myv.may10c
Errors:
Unexpected. Please try again.
Job ID: job_4bde60f1c13743ddabd3be2de9d6b511
Start Time: 1:48pm, 12 May 2012
End Time: 1:51pm, 12 May 2012
Destination Table: 567402616005:myvserv.may10c
Source URI: gs://log_data/r_mini_raw_20120510.txt.gz
Delimiter: ^
Max Bad Records: 30000
Schema:
zoneid: STRING
creativeid: STRING
ip: STRING
Update:
I am using the file that can be found here:
http://saraswaticlasses.net/bad.csv.zip
bq load -F '^' --max_bad_records=30000 mycompany.abc bad.csv id:STRING,ceid:STRING,ip:STRING,cb:STRING,country:STRING,telco_name:STRING,date_time:STRING,secondary:STRING,mn:STRING,sf:STRING,uuid:STRING,ua:STRING,brand:STRING,model:STRING,os:STRING,osversion:STRING,sh:STRING,sw:STRING,proxy:STRING,ah:STRING,callback:STRING
I am getting an error "BigQuery error in load operation: Unexpected. Please try again."
The same file works from Ubuntu, while it does not work from CentOS 5.4 (Final).
Does the OS encoding need to be checked?
The file you uploaded has an unterminated quote. Can you delete that line and try again? I've filed an internal BigQuery bug to handle this case more gracefully.
$grep '"' bad.csv
3000^0^1.202.218.8^2f1f1491^CN^others^2012-05-02 20:35:00^^^^^"Mozilla/5.0^generic web browser^^^^^^^^
When I run a load from my workstation (Ubuntu), I get a warning about the line in question. Note that if you were using a larger file, you would not see this warning, instead you'd just get a failure.
$bq show --format=prettyjson -j job_e1d8636e225a4d5f81becf84019e7484
...
"status": {
"errors": [
{
"location": "Line:29057 / Field:12",
"message": "Missing close double quote (\") character: field starts with: <Mozilla/>",
"reason": "invalid"
}
]
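Since the job output above pinpoints the bad row (Line:29057 / Field:12), a hedged way to act on it (assuming GNU sed and the same schema string as in the original command) is to drop that line and retry:

# Delete the line with the unterminated quote, then reload:
sed -i '29057d' bad.csv
bq load -F '^' --max_bad_records=30000 mycompany.abc bad.csv id:STRING,ceid:STRING,...   # same schema as above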
My suspicion is that you have rows or fields in your input data that exceed the 64 KB limit. Perhaps re-check the formatting of your data, check that it is gzipped properly, and if all else fails, try importing uncompressed data. (One possibility is that the entire compressed file is being interpreted as a single row/field that exceeds the aforementioned limit.)
To answer your original question, there are a few other ways to import data: you could upload directly from your local machine using the command-line tool or the web UI, or you could use the raw API. However, all of these mechanisms (including the Google Storage import that you used) funnel through the same CSV parser, so it's possible that they'll all fail in the same way.