Create VM snapshots using PowerCLI with no memory/quiesce - scripting

I need a script to run in vCenter PowerCLI for creating VM snapshots.
Below is the requirement. I am new to scripting, so can someone please help me with this?
Server names should be taken from a text/CSV file.
The snapshot should be created with the name we give, and with no memory and no quiesce (equivalent to unchecking the 'Snapshot the virtual machine's memory' option in the GUI, I believe).
VM name, created snapshot name, creation date, and status (successful or failed) should be exported to an output CSV file.
Any guidance would be much appreciated.

Do you have any code? I will not write the entire script for you, but the outline is: connect to a vCenter, use Get-VM, and then you can use:
New-Snapshot [-Name] <String> [-Description <String>] [-Memory] [-Quiesce] [-RunAsync] [-VM] <VirtualMachine[]> [-Server <VIServer[]>] [-WhatIf] [-Confirm] [<CommonParameters>]
Here is the reference: https://www.vmware.com/support/developer/PowerCLI/PowerCLI41U1/html/New-Snapshot.html
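To tie it together, here is a minimal sketch of the requested flow (untested; the vCenter name, snapshot name, and file paths are placeholders you would adjust):

Connect-VIServer -Server 'vcenter.example.local'      # placeholder vCenter name

$snapName = 'PreChange'                               # the snapshot name you are giving
$vmNames  = Get-Content -Path 'C:\temp\servers.txt'   # one VM name per line
$results  = foreach ($vmName in $vmNames) {
    try {
        $vm   = Get-VM -Name $vmName -ErrorAction Stop
        # -Memory:$false -Quiesce:$false = both GUI checkboxes unchecked
        $snap = New-Snapshot -VM $vm -Name $snapName -Memory:$false -Quiesce:$false -ErrorAction Stop
        [pscustomobject]@{ VM = $vmName; Snapshot = $snap.Name; Created = $snap.Created; Status = 'Successful' }
    }
    catch {
        [pscustomobject]@{ VM = $vmName; Snapshot = $snapName; Created = Get-Date; Status = 'Failed' }
    }
}
$results | Export-Csv -Path 'C:\temp\snapshot_report.csv' -NoTypeInformation

Since -Memory and -Quiesce are switches that default to off, simply omitting them has the same effect; writing them out makes the intent explicit.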

Related

Deploy SQL workflow with DBX

I am developing a deployment via DBX to Azure Databricks. I need a data job written in SQL to run every day. The job is located in the file data.sql. I know how to do it with a Python file; there I would do the following:
build:
  python: "pip"
environments:
  default:
    workflows:
      - name: "workflow-name"
        schedule:
          quartz_cron_expression: "0 0 9 * * ?" # every day at 9.00
          timezone_id: "Europe"
        format: MULTI_TASK
        job_clusters:
          - job_cluster_key: "basic-job-cluster"
            <<: *base-job-cluster
        tasks:
          - task_key: "task-name"
            job_cluster_key: "basic-job-cluster"
            spark_python_task:
              python_file: "file://filename.py"
But how can I change it so I can run a SQL job instead? I imagine it is the last two lines (spark_python_task: and python_file: "file://filename.py") that need to be changed.
There are various ways to do that.
(1) One of the simplest is to add a SQL query in the Databricks SQL lens, and then reference this query via sql_task as described here (see the sketch after this list).
(2) If you want to have a Python project that re-uses SQL statements from a static file, you can add this file to your Python Package and then call it from your package, e.g.:
# assumes data.sql was shipped inside your package, e.g. under my_package/ (name is illustrative)
import importlib.resources

sql_statement = importlib.resources.read_text("my_package", "data.sql")
spark.sql(sql_statement)
(3) A third option is to use the dbt framework with Databricks. In this case you would probably want to use dbt_task as described here.
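For option (1), the task section of the deployment file might look roughly like this (a sketch following the Jobs API sql_task schema; the query and warehouse IDs are placeholders you would take from your workspace):

tasks:
  - task_key: "task-name"
    sql_task:
      warehouse_id: "<warehouse-id>"   # SQL warehouse that executes the query
      query:
        query_id: "<query-id>"         # ID of the query saved in Databricks SQL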
I found a simple workaround (although it might not be the prettiest): simply change data.sql to a Python file and run the queries using spark. This way I could use the same spark_python_task.

KIBANA - WAZUH index pattern

I have a project to install Wazuh as FIM on Linux, AIX, and Windows.
I managed to install the manager and all agents on all systems, and I can see all three connected as agents on the Kibana web UI.
I created a test file on the Linux agent and I can find it on the web interface too, so the servers are connected.
Here is the test file found in the Wazuh inventory tab.
But I am not receiving any logs if I modify this test file.
These are my settings in ossec.conf under syscheck on the agent server:
<directories>/var/ossec/etc/test</directories>
<directories report_changes="yes" check_all="yes" realtime="yes">/var/ossec/etc/test</directories>
And now I am also struggling to understand the meaning of index patterns, index templates, and fields.
I don't understand what they are and why we need to set them.
My settings on manager server - /usr/share/kibana/data/wazuh/config/wazuh.yml
alerts.sample.prefix: 'wazuh-alerts-*'
pattern: 'wazuh-alerts-*'
On the Kibana web UI I also get this error when I try to check 'events' - there are no logs in the events:
Error: The field "timestamp" associated with this object no longer exists in the index pattern. Please use another field.
at FieldParamType.config.write.write (http://MYIP:5601/42959/bundles/plugin/data/kibana/data.plugin.js:1:627309)
at http://MYIP:5601/42959/bundles/plugin/data/kibana/data.plugin.js:1:455052
at Array.forEach (<anonymous>)
at writeParams (http://MYIP:5601/42959/bundles/plugin/data/kibana/data.plugin.js:1:455018)
at AggConfig.write (http://MYIP:5601/42959/bundles/plugin/data/kibana/data.plugin.js:1:355081)
at AggConfig.toDsl (http://MYIP:5601/42959/bundles/plugin/data/kibana/data.plugin.js:1:355960)
at http://MYIP:5601/42959/bundles/plugin/data/kibana/data.plugin.js:1:190748
at Array.forEach (<anonymous>)
at agg_configs_AggConfigs.toDsl (http://MYIP:5601/42959/bundles/plugin/data/kibana/data.plugin.js:1:189329)
at http://MYIP:5601/42959/bundles/plugin/wazuh/4.2.5-4206-1/wazuh.chunk.6.js:55:1397640
Thank you.
About FIM:
here you can find the FIM documentation in case you don't have it:
https://documentation.wazuh.com/current/user-manual/capabilities/file-integrity/fim-configuration.html
https://documentation.wazuh.com/current/user-manual/reference/ossec-conf/syscheck.html
The first requirement for this to work is to ensure a FIM alert is triggered. Could you check the alerts.json file on your manager? It is usually located under /var/ossec/logs/alerts/alerts.json. To test this fully I would run "tail -f /var/ossec/logs/alerts/alerts.json" and make a change in your monitored directory; if no alert is generated, then we will need to check the agent configuration.
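For example (paths as in your configuration; run the tail on the manager and the edit on the agent):

# on the manager: watch alerts arriving in real time
tail -f /var/ossec/logs/alerts/alerts.json
# on the agent: modify the monitored file to trigger a syscheck alert
echo "fim test" >> /var/ossec/etc/test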
About indexing:
Here you can find some documentation:
https://www.elastic.co/guide/en/elasticsearch/reference/current/index-templates.html
https://www.elastic.co/guide/en/kibana/current/managing-index-patterns.html#scripted-fields
https://documentation.wazuh.com/current/user-manual/kibana-app/reference/elasticsearch.html
Regarding your error, the best way to solve this is to delete the index pattern. To do this:
go to Kibana -> Stack Management -> Index Patterns and delete wazuh-alerts-* there.
Then, if you enter the Wazuh app, the health check will create it again, or you can follow this to create your index pattern yourself:
Go to Kibana -> Stack Management -> Index Patterns and select Create index pattern.
Hope this information helps you.
Regards.
Thank you for your answer.
I managed to step over this issue, but I hit another error.
When I check tail -f /var/ossec/logs/alerts/alerts.json I get a never-ending stream of updates, thousands of lines with errors like:
{"timestamp":"2022-01-31T12:40:08.458+0100","rule":{"level":5,"description":"Systemd: Service has entered a failed state, and likely has not started.","id":"40703","firedtimes":7420,"mail":false,"groups":["local","systemd"],"gpg13":["4.3"],"gdpr":["IV_35.7.d"]},"agent":{"id":"003","name":"MYAGENTSERVERNAME","ip":"X.X.X.X"},"manager":{"name":"MYMANAGERSERVERNAME"},"id":"1643629208.66501653","full_log":"Jan 31 12:40:07 MYAGENTSERVERNAME systemd: Unit rbro-cbs-adapter-int.service entered failed state.","predecoder":{"program_name":"systemd","timestamp":"Jan 31 12:40:07","hostname":"MYAGENTSERVERNAME"},"decoder":{"name":"systemd"},"location":"/var/log/messages"}
But I can also find the alert if I change the monitored file (file: wazuhtest):
{"timestamp":"2022-01-31T12:45:59.874+0100","rule":{"level":7,"description":"Integrity checksum changed.","id":"550","mitre":{"id":["T1492"],"tactic":["Impact"],"technique":["Stored Data Manipulation"]},"firedtimes":1,"mail":false,"groups":["ossec","syscheck","syscheck_entry_modified","syscheck_file"],"pci_dss":["11.5"],"gpg13":["4.11"],"gdpr":["II_5.1.f"],"hipaa":["164.312.c.1","164.312.c.2"],"nist_800_53":["SI.7"],"tsc":["PI1.4","PI1.5","CC6.1","CC6.8","CC7.2","CC7.3"]},"agent":{"id":"003","name":"MYAGENTSERVERNAME","ip":"x.x.xx.x"},"manager":{"name":"MYMANAGERSERVERNAME"},"id":"1643629559.67086751","full_log":"File '/var/ossec/etc/wazuhtest' modified\nMode: realtime\nChanged attributes: size,mtime,inode,md5,sha1,sha256\nSize changed from '61' to '66'\nOld modification time was: '1643618571', now it is '1643629559'\nOld inode was: '786558', now it is '786559'\nOld md5sum was: '2dd5fe4d08e7c58dfdba76e55430ba57'\nNew md5sum is : 'd8b218e9ea8e2da8e8ade8498d06cba8'\nOld sha1sum was: 'ca9bac5a2d8e6df4aa9772b8485945a9f004a2e3'\nNew sha1sum is : 'bd8b8b5c20abfe08841aa4f5aaa1e72f54a46d31'\nOld sha256sum was: '589e6f3d691a563e5111e0362de0ae454aea52b7f63014cafbe07825a1681320'\nNew sha256sum is : '7f26a582157830b1a725a059743e6d4d9253e5f98c52d33863bc7c00cca827c7'\n","syscheck":{"path":"/var/ossec/etc/wazuhtest","mode":"realtime","size_before":"61","size_after":"66","perm_after":"rw-r-----","uid_after":"0","gid_after":"0","md5_before":"2dd5fe4d08e7c58dfdba76e55430ba57","md5_after":"d8b218e9ea8e2da8e8ade8498d06cba8","sha1_before":"ca9bac5a2d8e6df4aa9772b8485945a9f004a2e3","sha1_after":"bd8b8b5c20abfe08841aa4f5aaa1e72f54a46d31","sha256_before":"589e6f3d691a563e5111e0362de0ae454aea52b7f63014cafbe07825a1681320","sha256_after":"7f26a582157830b1a725a059743e6d4d9253e5f98c52d33863bc7c00cca827c7","uname_after":"root","gname_after":"root","mtime_before":"2022-01-31T09:42:51","mtime_after":"2022-01-31T12:45:59","inode_before":786558,"inode_after":786559,"diff":"1c1\n< dadadadadad\n---\n> dfsdfdadadadadad\n","changed_attributes":["size","mtime","inode","md5","sha1","sha256"],"event":"modified"},"decoder":{"name":"syscheck_integrity_changed"},"location":"syscheck"}
{"timestamp":"2022-01-31T12:46:08.452+0100","rule":{"level":3,"description":"Log file rotated.","id":"591","firedtimes":5,"mail":false,"groups":["ossec"],"pci_dss":["10.5.2","10.5.5"],"gpg13":["10.1"],"gdpr":["II_5.1.f","IV_35.7.d"],"hipaa":["164.312.b"],"nist_800_53":["AU.9"],"tsc":["CC6.1","CC7.2","CC7.3","PI1.4","PI1.5","CC7.1","CC8.1"]},"agent":{"id":"003","name":"MYAGENTSERVERNAME","ip":"x.x.xx.x"},"manager":{"name":"MYMANAGERSERVERNAME"},"id":"1643629568.67099280","full_log":"ossec: File rotated (inode changed): '/var/ossec/etc/wazuhtest'.","decoder":{"name":"ossec"},"location":"wazuh-logcollector"}
I can also see this alert in the messages log on the manager server:
Jan 31 12:46:10 MYMANAGERSERVERNAME filebeat[186670]: 2022-01-31T12:46:10.379+0100#011WARN#011[elasticsearch]#011elasticsearch/client.go:405#011Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0xc07610e0563729bf, ext:10888984451164, loc:(*time.Location)(0x55958e3622a0)}, Meta:{"pipeline":"filebeat-7.14.0-wazuh-alerts-pipeline"}, Fields:{"agent":{"ephemeral_id":"dd9ff0c5-d5a9-4a0e-b1b3-0e9d7e8997ad","hostname":"MYMANAGERSERVERNAME","id":"03fb57ca-9940-4886-9e6e-a3b3e635cd35","name":"MYMANAGERSERVERNAME","type":"filebeat","version":"7.14.0"},"ecs":{"version":"1.10.0"},"event":{"dataset":"wazuh.alerts","module":"wazuh"},"fields":{"index_prefix":"wazuh-alerts-4.x-"},"fileset":{"name":"alerts"},"host":{"name":"MYMANAGERSERVERNAME"},"input":{"type":"log"},"log":{"file":{"path":"/var/ossec/logs/alerts/alerts.json"},"offset":127261462},"message":"{"timestamp":"2022-01-31T12:46:08.452+0100","rule":{"level":3,"description":"Log file rotated.","id":"591","firedtimes":5,"mail":false,"groups":["ossec"],"pci_dss":["10.5.2","10.5.5"],"gpg13":["10.1"],"gdpr":["II_5.1.f","IV_35.7.d"],"hipaa":["164.312.b"],"nist_800_53":["AU.9"],"tsc":["CC6.1","CC7.2","CC7.3","PI1.4","PI1.5","CC7.1","CC8.1"]},"agent":{"id":"003","name":"xlcppt36","ip":"10.74.96.34"},"manager":{"name":"MYMANAGERSERVERNAME"},"id":"1643629568.67099280","full_log":"ossec: File rotated (inode changed): '/var/ossec/etc/wazuhtest'.","decoder":{"name":"ossec"},"location":"wazuh-logcollector"}","service":{"type":"wazuh"}}, Private:file.State{Id:"native::706-64776", PrevId:"", Finished:false, Fileinfo:(*os.fileStat)(0xc00095ea90), Source:"/var/ossec/logs/alerts/alerts.json", Offset:127262058, Timestamp:time.Time{wall:0xc076063e1f1b1286, ext:133605185, loc:(*time.Location)(0x55958e3622a0)}, TTL:-1, Type:"log", Meta:map[string]string(nil), FileStateOS:file.StateOS{Inode:0x2c2, Device:0xfd08}, IdentifierName:"native"}, TimeSeries:false}, Flags:0x1, Cache:publisher.EventCache{m:common.MapStr(nil)}} (status=400): {"type":"illegal_argument_exception","reason":"data_stream [<wazuh-alerts-4.x-{2022.01.31||/d{yyyy.MM.dd|UTC}}>] must not contain the following characters [ , ", *, \, <, |, ,, >, /, ?]"}
Here is the output from the app checks.
curl "http://localhost:9200"
{
  "version" : {
    "number" : "7.14.2",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "6bc13727ce758c0e943c3c21653b3da82f627f75",
    "build_date" : "2021-09-15T10:18:09.722761972Z",
    "build_snapshot" : false,
    "lucene_version" : "8.9.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
filebeat test output
elasticsearch: http://127.0.0.1:9200...
  parse url... OK
  connection...
    parse host... OK
    dns lookup... OK
    addresses: 127.0.0.1
    dial up... OK
  TLS... WARN secure connection disabled
  talk to server... OK
  version: 7.14.2
So, I can see alerts coming from the agent, but they are not reaching Kibana yet. On the Kibana web UI I can see the agent as active and connected.

How do I clone the virtual disk of running VM on Hyper-V 2012

I want to clone the virtual disk of a running VM on Hyper-V 2012.
I know that snapshotting the VM will generate a diff disk (file extension .AVHDX). All subsequent writes go to this snapshot, and its parent becomes read-only.
However, that parent may also be a snapshot, which means I cannot simply copy the parent.
How do I obtain a single file that contains all the virtual hard drive’s data prior to my snapshot?
Put another way, how do I export the parent of the current snapshot to a single .VHDX file?
Ideally, I would like to know how to do this using the Hyper-V V2 WMI API.
Hyper-V 2012 supports live merge of snapshots, but not export of a running VM.
Hyper-V 2012 R2 supports live export. So: upgrade and move on, nothing to see here.
However...
If you take a snapshot of a VM, you can then add a differencing disk to the parent disk of the snapshot, create a VM from that, export that VM, then destroy the VM, then destroy the differencing disk.
It seems very round-about, but this is the only way I know of "cloning" a running Hyper-V VM (it is the VHD that is the issue; the settings can simply be copied and a new VM created, so the settings don't need to be 'cloned').
If your VM does not have any snapshots, then you need to create one to make this work. You will be 'cloning' the snapshot while the running VM state is allowed to write back to the AVHDX known as 'now'.
Since the copy that is being created is never powered on, CPU and RAM do not matter.
Because this is not a snapshot export, Hyper-V gives you the differencing disk plus the parent. (If you exported a snapshot you would get a single virtual disk, since Hyper-V does special things with AVHDX files.)
If you want a single file, you merge the diff that is in the export. The merge breaks the configuration of the exported VM, since the differencing disk is deleted, so you have to rename the merged parent afterwards.
But again, we are doing this to get a copy of the disk in a very clean (and proper) way.
Yes, it is a lot of work for an export of a running VM, but it avoids any file-locking issues with the virtual disks.
No time for complete code, but I walked through this in PowerShell just to verify it is possible:
$snap = Get-VMSnapshot "datest"
PS C:\Windows\system32> $snap.HardDrives
VMName ControllerType ControllerNumber ControllerLocation DiskNumber Path
------ -------------- ---------------- ------------------ ---------- ----
DATest IDE            0                0                              D:\DATest\Server2012.vhdx
PS C:\Windows\system32> New-VHD -Differencing -ParentPath $snap.HardDrives[0].Path -Path D:\test\test.vhdx
ComputerName : SWEETUMS
Path : D:\test\test.vhdx
VhdFormat : VHDX
VhdType : Differencing
FileSize : 4194304
Size : 42949672960
MinimumSize : 42948624384
LogicalSectorSize : 512
PhysicalSectorSize : 4096
BlockSize : 2097152
ParentPath : D:\DATest\Server2012.vhdx
FragmentationPercentage :
Alignment : 1
Attached : False
DiskNumber :
IsDeleted : False
Number :
PS C:\Windows\system32> New-VM -Name Test -Path D:\Test -VHDPath D:\test\test.vhdx
Name State CPUUsage(%) MemoryAssigned(M) Uptime   Status
---- ----- ----------- ----------------- ------   ------
Test Off   0           0                 00:00:00 Operating normally
Export-VM -Name Test -Path D:\newTest -Passthru
# Get the FullName of the parent disk for renaming later
Merge-VHD 'D:\NewTest\Test\Virtual Hard Disks\test.vhdx' -Force
# the merge leaves only the parent disk; give it the exported disk's name
Rename-Item 'D:\NewTest\Test\Virtual Hard Disks\Server2012.vhdx' -NewName 'test.vhdx'
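Condensed, the whole walkthrough might run like this (a sketch using the same 'DATest' VM and D:\ paths as above; not a polished script):

# 1. take (or reuse) a snapshot of the running VM, then diff off its read-only parent disk
$snap = Get-VMSnapshot -VMName 'DATest'
New-VHD -Differencing -ParentPath $snap.HardDrives[0].Path -Path 'D:\test\test.vhdx'
# 2. wrap the differencing disk in a throwaway VM and export it (the VM is never powered on)
New-VM -Name 'Test' -Path 'D:\Test' -VHDPath 'D:\test\test.vhdx'
Export-VM -Name 'Test' -Path 'D:\newTest'
Remove-VM -Name 'Test' -Force
# 3. merge the exported diff into its parent copy and rename the result
Merge-VHD 'D:\NewTest\Test\Virtual Hard Disks\test.vhdx' -Force
Rename-Item 'D:\NewTest\Test\Virtual Hard Disks\Server2012.vhdx' -NewName 'test.vhdx'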
I have to blog all of this now..
Using Windows 2012 R2?
You can export directly using the ExportSystemDefinition method of the Msvm_VirtualSystemManagementService class.
PowerShell example from Taylor Brown:
# Obtain Msvm_ComputerSystem object corresponding to VM to export
$vmName = "MyVirtualMachine"
$Msvm_ComputerSystem = Get-WmiObject -Namespace root\virtualization\v2 -Class Msvm_ComputerSystem -Filter "ElementName='$vmName'"
# Edit copy of the Msvm_VirtualSystemExportSettingData associated with your VM
$Msvm_VirtualSystemExportSettingData = $Msvm_ComputerSystem.GetRelated("Msvm_VirtualSystemExportSettingData","Msvm_SystemExportSettingData",$null,$null, $null, $null, $false, $null)
$Msvm_VirtualSystemExportSettingData.CopySnapshotConfiguration = 0 # 0=ExportAllSnapshots, 1=ExportNoSnapshots, 2=ExportOneSnapshot
# Call ExportSystemDefinition method of Msvm_VirtualSystemManagementService singleton
$Msvm_VirtualSystemManagementService = Get-WmiObject -Namespace root\virtualization\v2 -Class Msvm_VirtualSystemManagementService
$Msvm_VirtualSystemManagementService.ExportSystemDefinition($Msvm_ComputerSystem, "c:\export_folder", $Msvm_VirtualSystemExportSettingData.GetText(1))

Generating TPC-DS database for sql server

How do I populate the Transaction Processing Performance Council's TPC-DS database for SQL Server? I have downloaded the TPC-DS tool, but there are few tutorials on how to use it.
In case you are using Windows, you need Visual Studio 2005 or later. Unzip the kit; in the tools folder there is a dsgen2.sln file. Open it with Visual Studio and build the project, and it will generate the tables for you. I've tried that, and I loaded the tables into SQL Server manually.
I've just succeeded in generating these queries.
Here are some tips; maybe not the best, but useful:
cp ${...}/query_templates/* ${...}/tools/
add define _END = ""; to each query.tpl
${...}/tools/dsqgen -INPUT templates.lst -OUTPUT_DIR /home/query99/
Let's describe the basic steps:
Before going to the next steps, double-check that a ready-made TPC-DS kit has not already been prepared for your DB.
Download the TPC-DS Tools.
Build the tools as described in 'v2.11.0rc2\tools\How_To_Guide-DS-V2.0.0.docx' (I used VS2015).
Create the DB.
Take the DB schema described in tpcds.sql and tpcds_ri.sql (they are located in the 'v2.11.0rc2\tools\' folder) and adapt it to your DB if required.
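For SQL Server, this step might look like the following (a sketch; assumes sqlcmd, a default local instance, and Windows authentication):

# SQL Server example: create the DB and apply the schema plus referential-integrity constraints
sqlcmd -S localhost -E -Q "CREATE DATABASE tpcds"
sqlcmd -S localhost -E -d tpcds -i tpcds.sql
sqlcmd -S localhost -E -d tpcds -i tpcds_ri.sql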
Generate the data to be stored in the database:
# Windows
dsdgen.exe /scale 1 /dir .\tmp /suffix _001.dat
# Linux
dsdgen -scale 1 -dir /tmp -suffix _001.dat
Upload the data to the DB:
# example for ClickHouse
database_name=tpcds
ch_password=12345
for file_fullpath in /tmp/tpc-ds/*.dat; do
  filename=$(echo ${file_fullpath##*/})
  tablename=$(echo ${filename%_*})
  echo " - $(date +"%T"): start processing $file_fullpath (table: $tablename)"
  query="INSERT INTO $database_name.$tablename FORMAT CSV"
  cat $file_fullpath | clickhouse-client --format_csv_delimiter="|" --query="$query" --password $ch_password
done
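Since the question targets SQL Server, the equivalent load step could use bcp from PowerShell; a sketch (assumes the tpcds schema already exists, Windows authentication, and the '|' field delimiter that dsdgen emits):

# bulk-load every generated .dat file into the table matching its name prefix
$database = 'tpcds'
Get-ChildItem 'C:\tmp\tpc-ds\*.dat' | ForEach-Object {
    # file names look like <table>_001.dat; strip the numeric suffix to get the table name
    $table = $_.BaseName -replace '_\d+$', ''
    bcp "$database.dbo.$table" in $_.FullName -S localhost -T -c -t '|'
}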
Generate queries
# Windows
set tmpl_lst_path="..\query_templates\templates.lst"
set tmpl_dir="..\query_templates"
set dialect_path="..\..\clickhouse-dialect"
set result_dir="..\queries"
set tmpl_name="query1.tpl"
dsqgen /input %tmpl_lst_path% /directory %tmpl_dir% /dialect %dialect_path% /output_dir %result_dir% /scale 1 /verbose y /template %tmpl_name%
# Linux
# see for example https://github.com/pingcap/tidb-bench/blob/master/tpcds/genquery.sh
To fix the error 'Substitution .. is used before being initialized' follow this fix.

How to get Task Scheduler log information on the command line with WMIC?

I need to know whether the Task Scheduler has run a task successfully. I found it difficult to get a return value with the wmic command, because these records aren't included in the System, Security, Application, HardwareEvents, or similar log files. Here are my attempts:
wmic ntevent where "eventcode=140" get message /value
The above command returns the message: "No Instance(s) Available."
As I don't know which log file includes the Task Scheduler records, I select them by event code:
wmic ntevent where "logfile='system' and eventcode='140'" get message /value
No matter whether logfile='application' or logfile='security', in the end I can't get the result I want.
What I need to point out is that the log records live under Microsoft-Windows-TaskScheduler/Operational in the graphical Event Viewer.
Unfortunately, wmic can only be used on classic logs. List them with
wmic NTEventlog get LogfileName or with PowerShell's Get-EventLog -AsString -List:
Application
HardwareEvents
Internet Explorer
Key Management Service
PreEmptive
Security
System
TuneUp
Windows PowerShell
Switch to wevtutil: wevtutil enum-logs enumerates the available logs, and the
wevtutil qe Microsoft-Windows-TaskScheduler/Operational /q:"*[System[Provider[@Name='Microsoft-Windows-TaskScheduler'] and (EventID=140)]]" /uni:false /f:text
command could be a starting point for you. Pretty weird syntax, and the result is hard to parse. Moreover:
The primary focus of WEVTUTIL is the configuration and setup of
event logs; to retrieve event log data, the PowerShell cmdlet
Get-WinEvent is easier to use and more flexible:
Switch to PowerShell: (Get-WinEvent -ListLog *).LogName enumerates the available logs, and one can simply filter and format the result of the next command:
Get-WinEvent -LogName Microsoft-Windows-TaskScheduler/Operational
For instance, to format the result in list view to see all properties:
Get-WinEvent -LogName Microsoft-Windows-TaskScheduler/Operational | Format-List *
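For the concrete case in the question (event ID 140 in the Task Scheduler log), a filtered query might look like this; -FilterHashtable filters during retrieval and is typically faster than piping everything through Where-Object:

# query Task Scheduler events with ID 140 from the last 24 hours
Get-WinEvent -FilterHashtable @{
    LogName   = 'Microsoft-Windows-TaskScheduler/Operational'
    Id        = 140
    StartTime = (Get-Date).AddDays(-1)
} | Select-Object TimeCreated, Id, Message | Format-List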
Read more in PowerShell and the Applications and Services Logs.