PowerShell script for formatting multiple disks
I need a PowerShell script to format multiple disks. I searched for this but could not find a working script.
I tried to reproduce this in my environment and used the script below to clear and format multiple disks with PowerShell.
# List all disks
Get-Disk
# List the partitions on the disks that need to be formatted
Get-Partition -DiskNumber 2,3
# Clear the required disks
Clear-Disk -Number 3,2 -RemoveData
# Initialize the disks
Initialize-Disk -Number 2,3
# Create a partition on each disk and set a volume label
New-Partition -DiskNumber 2,3 -UseMaximumSize | Format-Volume -FileSystem NTFS -NewFileSystemLabel myDrive
# Assign a drive letter
Get-Partition -DiskNumber 3 | Set-Partition -NewDriveLetter S
I used to work with a cluster that used the SLURM scheduler, but now I am more or less forced to switch to an SGE-based cluster, and I'm trying to get the hang of it. What I was doing on the SLURM system involved running an executable with N input files, driven by a SLURM configuration file set up in this fashion:
slurmConf.conf (the SLURM configuration file):
0 /path/to/exec /path/to/input1
1 /path/to/exec /path/to/input2
2 /path/to/exec /path/to/input3
3 /path/to/exec /path/to/input4
4 /path/to/exec /path/to/input5
5 /path/to/exec /path/to/input6
6 /path/to/exec /path/to/input7
7 /path/to/exec /path/to/input8
8 /path/to/exec /path/to/input9
9 /path/to/exec /path/to/input10
And my working submission script in SLURM contains this line:
srun -n $SLURM_NNODES --multi-prog $slconf
$slconf refers to the path to that configuration file.
This setup worked as I wanted: it runs the executable with 10 different inputs at the same time on 10 nodes. Now that I have just transitioned to an SGE system, I want to do the same thing, but I read the manual and found nothing quite like the SLURM setup above. Could you please shed some light on how to achieve the same thing on an SGE system?
Thank you very much!
You could use the "job array" feature of the Grid Engine.
Create a shell script sge_job.sh
#!/bin/sh
#
# sge_job.sh -- SGE job description script
#
#$ -t 1-10
/path/to/exec /path/to/input$SGE_TASK_ID
And submit this script to SGE with qsub.
qsub sge_job.sh
Dmitri Chubarov's answer is excellent and is the most robust way to proceed, as it places less load on the submit node when submitting many jobs (>1000). Alternatively, you can wrap qsub in a for loop:
for i in {1..10}
do
echo "/path/to/exec /path/to/input${i}" | qsub
done
I sometimes use the above when whatever varies as input is not easily captured as a range of integers.
Example:
for f in /some/path/input*
do
echo "/path/to/exec ${f}" | qsub
done
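If you want to keep the robustness of a job array even when the inputs are not numbered 1 to N, one pattern is to list the input files in a text file and have each array task pick out its own line. A minimal sketch (the list file name and paths are placeholders):
#!/bin/sh
#
# sge_array_from_list.sh -- one input file per array task (sketch)
#
#$ -t 1-10
# inputs.txt lists one input file path per line (10 lines here)
INPUT=$(sed -n "${SGE_TASK_ID}p" /path/to/inputs.txt)
/path/to/exec "$INPUT"
Submit it with qsub exactly as before.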
I have a bunch of instances running in GCE. I want to programmatically get a list of the internal IP addresses of them without logging into the instances (locally).
I know I can run:
gcloud compute instances list
But are there any flags I can pass to just get the information I want?
e.g.
gcloud compute instances list --internal-ips
or similar? Or am I going to have to dust off my sed/awk brain and parse the output?
I also know that I can get the output in JSON using --format=json, but I'm trying to do this in a bash script.
The simplest way to programmatically get a list of internal IPs (or external IPs) without a dependency on any tools other than gcloud is:
$ gcloud --format="value(networkInterfaces[0].networkIP)" compute instances list
$ gcloud --format="value(networkInterfaces[0].accessConfigs[0].natIP)" compute instances list
This uses --format=value, which also requires a projection: a list of resource keys that select which resource data values to print. For any command you can use --format=flattened to get the full list of resource key/value pairs:
$ gcloud --format=flattened compute instances list
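Since the goal is to use this from a bash script, here is a minimal sketch of consuming that output; value() prints one address per line, and the loop body is only an illustration:
while read -r ip; do
  echo "internal IP: $ip"    # replace with whatever your script needs
done < <(gcloud --format="value(networkInterfaces[0].networkIP)" compute instances list)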
A few things here.
First, gcloud's default output format for listing is not guaranteed to be stable, and new columns may be added in the future. Don't script against this!
The three output modes that are accessible with the format flag, --format=json, --format=yaml, and --format=text, are based on key=value pairs and can be scripted against even if new fields are introduced in the future.
Two good ways to do what you want are to use JSON and the jq tool,
gcloud compute instances list --format=json \
| jq '.[].networkInterfaces[].networkIP'
or text format and grep plus other line-oriented tools,
gcloud compute instances list --format=text \
| grep '^networkInterfaces\[[0-9]\+\]\.networkIP:' | sed 's/^.* //g'
I hunted around and couldn't find a straight answer, probably because efficient tools weren't available when others replied to the original question. GCP constantly updates its libraries and APIs, and we can use filters and projections to extract targeted attributes.
Here I outline how to reserve an external static IP, see how its attributes are named and organised, and then export the external IP address so that I can use it in other scripts (e.g. assign it to a VM instance or authorise this network (IP address) on a Cloud SQL instance).
Reserve a static IP in a region of your choice
gcloud compute --project=[PROJECT] addresses create [NAME] --region=[REGION]
[Informational] View the details of the regional static IP that was reserved
gcloud compute addresses describe [NAME] --region [REGION] --format=flattened
[Informational] Print just the IP address value of the reserved static IP
gcloud compute addresses describe [NAME] --region [REGION] --format='value(address)'
Extract the desired value (e.g. external IP address) as a parameter
export STATIC_IP=$(gcloud compute addresses describe [NAME] --region [REGION] --format='value(address)')
Use the exported parameter in other scripts
echo $STATIC_IP
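For example, the exported value can be fed straight into other gcloud commands. A sketch only: the instance names below are placeholders, and note that --authorized-networks replaces the Cloud SQL instance's existing authorised list:
# Attach the reserved address to a new VM
gcloud compute instances create my-vm --zone=[ZONE] --address="$STATIC_IP"
# Authorise the address on a Cloud SQL instance
gcloud sql instances patch my-sql-instance --authorized-networks="$STATIC_IP"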
The best approach is to have a ready-made gcloud command that you can reuse as and when needed.
This can be achieved using the table() format option with gcloud, as shown below:
gcloud compute instances list --format='table(id,name,status,zone,networkInterfaces[0].networkIP :label=Internal_IP,networkInterfaces[0].accessConfigs[0].natIP :label=External_IP)'
What does it do for you?
Gets you the data in a clean format
Gives you the option to add or remove columns
Need additional columns? How do you find the column names before even running the above command?
Execute the following, which will give you the data in raw JSON format with each value and its key name; copy those names and add them to your table() list. :-)
gcloud compute instances list --format=json
Plus point: this is pretty much the same syntax you can use to fetch data for any other GCP resource, with gcloud, kubectl, etc.
As far as I know you can't filter on specific fields in the gcloud tool.
Something like this will work for a Bash script, but it still feels a bit brittle:
gcloud compute instances list --format=yaml | grep " networkIP:" | cut -c 14-100
I agree with @Christiaan. Currently there is no automated way to get the internal IPs using the gcloud command.
You can use the following command to print the internal IPs (4th column):
gcloud compute instances list | tail -n+2 | awk '{print $4}'
or the following one if you want to have the pair <instance_name> <internal_ip> (1st and 4th column)
gcloud compute instances list | tail -n+2 | awk '{print $1, $4}'
I hope it helps.
I want to clone the virtual disk of a running VM on Hyper-V 2012.
I know that snapshotting the VM will generate a diff disk (file extension .AVHDX). All subsequent writes are to this snapshot, and its parent is read only.
However, that parent may also be a snapshot, which means I cannot simply copy the parent.
How do I obtain a single file that contains all the virtual hard drive’s data prior to my snapshot?
Put another way, how do I export the parent of the current snapshot to a single .VHDX file?
Ideally, I would like to know how to do this using the Hyper-V V2 WMI API.
Hyper-V 2012 supports live merging of snapshots, but not exporting a running VM.
Hyper-V 2012 R2 supports live export. So, upgrade and move on, nothing to see here.
However...
If you take a snapshot of a VM, you can then add a differencing disk to the parent disk of the snapshot, create a VM from that, export that VM, then destroy the VM, then destroy the differencing disk.
It seems very round-about, but this is the only way that I know of "cloning" a running Hyper-V VM (it is the VHD that is the issue; the settings can simply be copied and a new VM created, so the settings themselves don't need to be 'cloned').
If your VM does not have any snapshots, then you need to create one to make this work. You will be 'cloning' the snapshot while the running VM state is allowed to write back to the AVHDX known as 'now'.
Since the copy that is being created is never powered on, CPU and RAM do not matter.
Because this is not a snapshot, with the export Hyper-V gives you the differencing disk plus the parent.
If you exported a snapshot you get a single virtual disk, since Hyper-V does special things with AVHDX files.
If you want a single file, then you merge the diff that is in the export. The merge breaks the configuration of the exported VM, since the differencing disk is deleted, so you have to rename the merged parent to match.
But again, we are doing this to get a copy of the disk in a very clean (and proper) way.
Yes, a lot of work for a running export. And to avoid any file locking issues with virtual disks.
No time for complete code, but I walked through this in PowerShell just to verify that it is possible.
$snap = Get-VMSnapshot "datest"
PS C:\Windows\system32> $snap.HardDrives
VMName ControllerType ControllerNumber ControllerLocation DiskNumber Path
------ -------------- ---------------- ------------------ ---------- ----
DATest IDE 0 0 D:\DATest\Server2012.vhdx
PS C:\Windows\system32> New-VHD -Differencing -ParentPath $snap.HardDrives[0].Path -Path D:\test\test.vhdx
ComputerName : SWEETUMS
Path : D:\test\test.vhdx
VhdFormat : VHDX
VhdType : Differencing
FileSize : 4194304
Size : 42949672960
MinimumSize : 42948624384
LogicalSectorSize : 512
PhysicalSectorSize : 4096
BlockSize : 2097152
ParentPath : D:\DATest\Server2012.vhdx
FragmentationPercentage :
Alignment : 1
Attached : False
DiskNumber :
IsDeleted : False
Number :
PS C:\Windows\system32> New-VM -Name Test -Path D:\Test -VHDPath D:\test\test.vhdx
Name State CPUUsage(%) MemoryAssigned(M) Uptime Status
---- ----- ----------- ----------------- ------ ------
Test Off 0 0 00:00:00 Operating normally
Export-vm -Name Test -Path d:\newTest -Passthru
# Get the FullName of the parent disk for renaming later
Merge-VHD 'D:\NewTest\Test\Virtual Hard Disks\test.vhdx' -Force
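# Merging folds test.vhdx into its parent Server2012.vhdx and deletes the diff, hence the rename below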
Rename-Item 'D:\NewTest\Test\Virtual Hard Disks\Server2012.vhdx' 'D:\NewTest\Test\Virtual Hard Disks\test.vhdx'
I have to blog all of this now..
Using Windows 2012 R2?
You can export directly using the ExportSystemDefinition method of the Msvm_VirtualSystemManagementService class.
PowerShell example from Taylor Brown:
# Obtain Msvm_ComputerSystem object corresponding to VM to export
$vmName = "MyVirtualMachine"
$Msvm_ComputerSystem = Get-WmiObject -Namespace root\virtualization\v2 -Class Msvm_ComputerSystem -Filter "ElementName='$vmName'"
# Edit copy of the Msvm_VirtualSystemExportSettingData associated with your VM
$Msvm_VirtualSystemExportSettingData = $Msvm_ComputerSystem.GetRelated("Msvm_VirtualSystemExportSettingData","Msvm_SystemExportSettingData",$null,$null, $null, $null, $false, $null)
$Msvm_VirtualSystemExportSettingData.CopySnapshotConfiguration = 0 # 0=ExportAllSnapshots, 1=ExportNoSnapshots, 2=ExportOneSnapshot
# Call ExportSystemDefinition method of Msvm_VirtualSystemManagementService singleton
$Msvm_VirtualSystemManagementService = Get-WmiObject -Namespace root\virtualization\v2 -Class Msvm_VirtualSystemManagementService
$Msvm_VirtualSystemManagementService.ExportSystemDefinition($Msvm_ComputerSystem, "c:\export_folder", $Msvm_VirtualSystemExportSettingData.GetText(1))
How do I populate the Transaction Processing Performance Council's TPC-DS database for SQL Server? I have downloaded the TPC-DS tool but there are few tutorials about how to use it.
If you are using Windows, you need Visual Studio 2005 or later. Unzip the kit; in the tools folder there is a dsgen2.sln file. Open it with Visual Studio and build the project; it will generate the tables for you. I tried that and then loaded the tables into SQL Server manually.
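If you prefer to script that load (bcp also ships for Linux), a rough sketch along these lines works, assuming pipe-delimited .dat files from dsdgen in /tmp/tpcds, a database named tpcds whose table names match the file names, and placeholder server credentials:
# Bulk-load every generated .dat file into the table of the same name
for f in /tmp/tpcds/*.dat; do
  table=$(basename "$f" .dat | sed 's/_[0-9]*$//')   # customer_001.dat -> customer
  bcp "tpcds.dbo.${table}" in "$f" -S localhost -U sa -P 'YourPassword' -c -t '|'
done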
I've just succeeded in generating these queries.
Here are some tips; maybe not the best approach, but useful.
cp ${...}/query_templates/* ${...}/tools/
add define _END = ""; to each query*.tpl file (a one-liner for this is sketched below)
${...}/tools/dsqgen -INPUT templates.lst -OUTPUT_DIR /home/query99/
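The second tip can be applied to all 99 templates in one go, for example (a sketch; $TOOLS_DIR stands in for the ${...}/tools directory above):
# Append the _END definition to every query template (run once)
for tpl in "$TOOLS_DIR"/query*.tpl; do
  echo 'define _END = "";' >> "$tpl"
done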
Let's describe the base steps:
Before going to the next steps, double-check that the required TPC-DS kit has not already been prepared for your DB
Download TPC-DS Tools
Build Tools as described in 'v2.11.0rc2\tools\How_To_Guide-DS-V2.0.0.docx' (I used VS2015)
Create DB
Take the DB schema described in tpcds.sql and tpcds_ri.sql (they are located in the 'v2.11.0rc2\tools\' folder), and adapt it to your DB if required.
Generate the data to be loaded into the database
# Windows
dsdgen.exe /scale 1 /dir .\tmp /suffix _001.dat
# Linux
dsdgen -scale 1 -dir /tmp -suffix _001.dat
Upload data to DB
# example for ClickHouse
database_name=tpcds
ch_password=12345
for file_fullpath in /tmp/tpc-ds/*.dat; do
filename=$(echo ${file_fullpath##*/})
tablename=$(echo ${filename%_*})
echo " - $(date +"%T"): start processing $file_fullpath (table: $tablename)"
query="INSERT INTO $database_name.$tablename FORMAT CSV"
cat $file_fullpath | clickhouse-client --format_csv_delimiter="|" --query="$query" --password $ch_password
done
Generate queries
# Windows
set tmpl_lst_path="..\query_templates\templates.lst"
set tmpl_dir="..\query_templates"
set dialect_path="..\..\clickhouse-dialect"
set result_dir="..\queries"
set tmpl_name="query1.tpl"
dsqgen /input %tmpl_lst_path% /directory %tmpl_dir% /dialect %dialect_path% /output_dir %result_dir% /scale 1 /verbose y /template %tmpl_name%
# Linux
# see for example https://github.com/pingcap/tidb-bench/blob/master/tpcds/genquery.sh
To fix the error 'Substitution .. is used before being initialized', the usual workaround is the define _END = ""; addition described in the tips above.
How would you create an installation setup that runs against multiple schemas taking into consideration the latest version of the database updates? Ideally: update a single file with a new version number, then send the DBAs an archive containing everything needed to perform the database update.
Here is the directory structure:
| install.sql
| install.bat
|
\---DATABASE_1.3.4.0
| README.txt
|
\---SCHEMA_01
| install.sql
| SCM1_VIEW_NAME_01_VW.vw
| SCM1_VIEW_NAME_02_VW.vw
| SCM1_PACKAGE_01_PKG.pkb
| SCM1_PACKAGE_01_PKG.pks
|
\---SCHEMA_02
install.sql
SCM2_VIEW_NAME_01_VW.vw
SCM2_VIEW_NAME_02_VW.vw
SCM2_PACKAGE_01_PKG.pkb
SCM2_PACKAGE_01_PKG.pks
The following code (sanitized and trimmed for brevity and security) is in install.sql:
ACCEPT tns
ACCEPT schemaUsername
ACCEPT schemaPassword
CONNECT &&schemaUsername/&&schemaPassword@&&tns
@@install.sql
/
The following code is in install.bat:
@echo off
sqlplus /nolog @install.sql
pause
There are several schemas, not all of which need updates each time. Those that do not need updates will not have directories created.
What I would like to do is create two files:
version.txt
schemas.txt
These two (hand-crafted) files would be used by install.sql to determine which version of scripts to run.
For example:
version.txt
1.3.4.0
schemas.txt
SCHEMA_01
SCHEMA_02
What I really would like to know is how would you read those text files from install.sql to run the corresponding install scripts? (Without PL/SQL; other Oracle-specific conventions are acceptable.)
All ideas welcome; many thanks in advance.
Here is a solution.
install.bat
@echo off
REM *************************************************************************
REM
REM This script performs a database upgrade for the application suite.
REM
REM *************************************************************************
setLocal EnableDelayedExpansion
REM *************************************************************************
REM
REM Read the version from the file.
REM
REM *************************************************************************
set /p VERSION=<version.txt
set DB=DB_%VERSION%
set SCHEMAS=%DB%\schema-order.txt
REM *************************************************************************
REM
REM Each line in the schema-order.txt file contains the name of a schema.
REM Blank lines are ignored.
REM
REM *************************************************************************
for /f "tokens=* delims= " %%a in (%SCHEMAS%) do (
if not "%%a" == "" sqlplus /nolog #install.sql %VERSION% %%a
)
Primary install.sql
ACCEPT schemaUsername CHAR DEFAULT &2 PROMPT 'Schema Owner [&2]: '
ACCEPT schemaPassword CHAR PROMPT 'Password: ' HIDE
PROMPT Verifying Database Connection
CONNECT &&schemaUsername/&&schemaPassword@&&tns
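REM &1 = version number, &2 = schema name (both passed on the sqlplus command line by install.bat);
REM &&ds (directory separator) and the ^ concatenation character are presumably
REM defined via DEFINE / SET CONCAT in the untrimmed script.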
DEFINE INSTALL_PATH = DB_&1&&ds^&2&&ds
@@&&INSTALL_PATH^install.sql
This uses a batch file to parse the files, then passes the parameters to the SQL script on the command-line.
Secondary install.sql
Each line in the file executed by the first installation script can then use the INSTALL_PATH variable to reference a file containing the actual SQL to run. This secondary script is responsible for running the individual SQL files that actually effect a change in the database.
@@&&INSTALL_PATH^DIR&&ds^SCM1_VIEW_OBJECT_VW.vw
This solution could be modified to automatically run all files in a specific order through clever use of sorting and naming of directories (i.e., the SQL files listed in a table directory run before the SQL files in a view directory).