Azcopy command issue with parameters - azure-storage

I'm using Azcopy within a shell script to copy blobs within a container from one storage account to another on Azure.
Using the following command:
azcopy copy "https://$source_storage_name.blob.core.windows.net/$container_name/?$source_sas" "https://$dest_storage_name.blob.core.windows.net/$container_name/?$dest_sas" --recursive
I'm generating the SAS token for both source and destination accounts and passing them as parameters in the command above along with the storage account and container names.
On execution, I keep getting this error:
failed to parse user input due to error: the inferred source/destination combination could not be identified, or is currently not supported
When I manually enter the storage account names, container name and SAS tokens, the command executes successfully and storage data gets transferred as expected. However, when I use parameters in the azcopy command I get the error.
Any suggestions on this would be greatly appreciated.
Thanks!

You can use the PowerShell script below.
param
(
[string] $source_storage_name,
[string] $source_container_name,
[string] $dest_storage_name,
[string] $dest_container_name,
[string] $source_sas,
[string] $dest_sas
)
.\azcopy.exe copy "https://$source_storage_name.blob.core.windows.net/$source_container_name/?$source_sas" "https://$dest_storage_name.blob.core.windows.net/$dest_container_name/?$dest_sas" --recursive=true
To execute the above script, you can run the command below.
.\ScriptFileName.ps1 -source_storage_name "<XXXXX>" -source_container_name "<XXXXX>" -source_sas "<XXXXXX>" -dest_storage_name "<XXXXX>" -dest_container_name "<XXXXXX>" -dest_sas "<XXXXX>"
I am generating the SAS tokens for both storage accounts in the Azure portal. Make sure to grant all the required permissions when creating them.
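If the copy still fails when the values come in as parameters, a quick check is to build the URLs into variables and print them before calling azcopy. This is only a debugging sketch using the same parameter names as the script above; $sourceUrl and $destUrl are helper variables introduced here.
$sourceUrl = "https://$source_storage_name.blob.core.windows.net/$source_container_name/?$source_sas"
$destUrl = "https://$dest_storage_name.blob.core.windows.net/$dest_container_name/?$dest_sas"
# Print the composed URLs; an empty account, container, or SAS segment means a parameter was not passed in.
Write-Host "Source:      $sourceUrl"
Write-Host "Destination: $destUrl"
.\azcopy.exe copy $sourceUrl $destUrl --recursive=true
An empty or malformed URL is a likely reason for azcopy reporting that it cannot infer the source/destination combination.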

Related

PowerShell script as a step in SQL job giving error

I am trying to create a SQL job which syncs users from a CSV file to AD groups.
My PowerShell script is one of the steps of this job. The issue is that my script is supposed to run on another server which has Active Directory, but I keep getting an error when I run this step.
My script is following:
invoke-Command -Session Server-Name
Import-Module activedirectory
$ADUsers = Import-csv \\Server-Name\folder\file.csv
foreach ($User in $ADUsers)
{
    $Username = $User.sAMAccountName
    $group = $user.adgroup
    if (Get-ADUser -F {SamAccountName -eq $Username})
    {
        foreach ($group in $groups) { Add-ADGroupMember -identity $group -Members $Username }
        Write-Output "$username has beeen added to group $group"
    }
}
The error I am getting is:
Executed as user: Username. A job step received an error at line 2 in a PowerShell script. The corresponding line is 'Invoke-Command -Session Server-Name. Correct the script and reschedule the job. The error information returned by PowerShell is: 'Cannot bind parameter 'Session'. Cannot convert the "Server-Name" value of type "System.String" to type "System.Management.Automation.Runspaces.PSSession". '. Process Exit Code -1. The step failed.
The server name has a '-' in it, so I need to know if that is causing the issue,
or whether I am using the wrong way to run this script on a different server from a SQL job.
Any help would be appreciated!
Jaspreet, I am not an expert on PowerShell, but it seems like you are passing the wrong parameter. Referring to the Microsoft docs, it seems you need to pass the computer name rather than -Session.
Try with this line of code at the start:
Invoke-Command -ComputerName Server-Name
For more, please refer to the Microsoft docs:
https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/invoke-command?view=powershell-6#examples
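To make that suggestion concrete, here is a rough sketch of how the script from the question could be wrapped in Invoke-Command with -ComputerName and a script block. The server name, CSV path, and column names (sAMAccountName, adgroup) are taken from the question; the rest is an untested outline, not a verified fix, and it assumes PowerShell remoting is enabled on the target server and the ActiveDirectory module is available there.
# Quoting the computer name avoids any issue with the '-' in the server name.
Invoke-Command -ComputerName "Server-Name" -ScriptBlock {
    Import-Module ActiveDirectory

    # CSV path and column names are the ones used in the question.
    $ADUsers = Import-Csv "\\Server-Name\folder\file.csv"

    foreach ($User in $ADUsers) {
        $Username = $User.sAMAccountName
        $Group = $User.adgroup

        # Only add the user to the group if the account exists.
        if (Get-ADUser -Filter "SamAccountName -eq '$Username'") {
            Add-ADGroupMember -Identity $Group -Members $Username
            Write-Output "$Username has been added to group $Group"
        }
    }
}
Running this from a SQL Agent job step also requires that the job's service account is allowed to connect to Server-Name and to modify the AD groups.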

Hana Sap B1: Execute query using hdbuserstore - * 10: invalid username or password SQLSTATE: 28000

I'm working with HANA DB on SUSE Linux Enterprise Server; my objective is to use scripting and cron jobs to automate some tasks.
Using hdbuserstore, located in /usr/sap/hdbclient, I've created a profile for a secure HANA command-line connection.
My profile works perfectly. I did the following:
Created a user named backup in HANA DB, using:
create user backup password Aa12345678
Then I granted backup privileges to it:
grant BACKUP ADMIN to backup
Later, I used ./hdbuserstore to create a profile to use via the command line:
./hdbuserstore SET back5prf localhost:30015 backup Aa12345678
Then I listed the profiles with ./hdbuserstore LIST:
DATA FILE : /root/.hdb/hanab1/SSFS_HDB.DAT
KEY BACK2PRF
ENV : NDB#hanab1:30015
USER: backups
KEY BACK3PRF
ENV : hanab1:30015
USER: backups
KEY BACK5PRF
ENV : localhost:30015
USER: backup
KEY BACKUPSTORE
ENV : localhost:30015
USER: backups
As you guys can see, the profile is ready. Finally, when I tried to use the following command:
./hdbsql -U BACK5PRF "SELECT * FROM SCHEMAS"
The system returns the following message:
* 10: invalid username or password SQLSTATE: 28000
Why am I getting this error even though the user and password are OK?
Is there another way to execute a HANA SQL query, without entering the hdbsql console, so I can automate it via scripting?
This error message can have several causes:
actual wrong user name/password
user account is locked (you need to unlock it in the user management)
user account is a restricted user and not allowed to connect via ODBC/JDBC or from anywhere "outside" SAP HANA
I suggest you make sure that the user works in SAP HANA studio and then setup the cron job.
I had the same issue; the cause was that I used a $ in my password, which gets treated as a special character in Linux (and there are more special characters).
So in this case one solution is not to use any characters that are treated as special characters, or to escape them, which in my case resulted in the following command for hdbuserstore:
hdbuserstore SET BKADMIN 1xx.x.x.xx:30015 ADMIN_BACKUP password\$12
Check the thread on answer.sap.com.

How to construct S3 URL for copying to Redshift?

I am trying to import a CSV file into a Redshift cluster. I have successfully completed the example in the Redshift documentation. Now I am trying to COPY from my own CSV file.
This is my command:
copy frontend_chemical from 's3://awssampledb/mybucket/myfile.CSV'
credentials 'aws_access_key_id=xxxxx;aws_secret_access_key=xxxxx'
delimiter ',';
This is the error I see:
An error occurred when executing the SQL command:
copy frontend_chemical from 's3://awssampledb/mybucket/myfile.CSV'
credentials 'aws_access_key_id=XXXX...'
[Amazon](500310) Invalid operation: The specified S3 prefix 'mybucket/myfile.CSV' does not exist
Details:
-----------------------------------------------
error: The specified S3 prefix 'mybucket/myfile.CSV' does not exist
code: 8001
context:
query: 3573
location: s3_utility.cpp:539
process: padbmaster [pid=2432]
-----------------------------------------------;
Execution time: 0.7s
1 statement failed.
I think I'm constructing the S3 URL wrong, but how should I do it?
My Redshift cluster is in the US East (N Virginia) region.
The Amazon Redshift COPY command can be used to load multiple files in parallel.
For example:
Bucket = mybucket
The files are in the bucket under the path data
Then refer to the contents as:
s3://mybucket/data
For example:
COPY frontend_chemical
FROM 's3://mybucket/data'
CREDENTIALS 'aws_access_key_id=xxxxx;aws_secret_access_key=xxxxx'
DELIMITER ',';
This will load all files within the data directory. You can also refer to a specific file by including it in the path, e.g. s3://mybucket/data/file.csv.
In your COPY command, s3://awssampledb/mybucket/myfile.CSV points at a bucket named awssampledb (the sample bucket from the documentation example) with the key prefix mybucket/myfile.CSV, which is why Redshift reports that the prefix does not exist. If your file is actually in a bucket called mybucket, the URL should be s3://mybucket/myfile.CSV.

Error in BQ shell Loading Datastore with write_disposition as Write append

1. I tried to load into an existing table [using a Datastore backup file].
2. The bq shell asked me to add write_disposition of WRITE_APPEND to load into the existing table.
3. If I do the above, it throws an error as follows:
load --source_format=DATASTORE_BACKUP --write_disposition=WRITE_append --allow_jagged_rows=None sample_red.t1estchallenge_1 gs://test.appspot.com/bucket/ahFzfnZpcmdpbi1yZWQtdGVzdHJBCxIcX0FFX0RhdGFzdG9yZUFkbWluX09wZXJhdGlvbhiBwLgCDAsSFl9BRV9CYWNrdXBfSW5mb3JtYXRpb24YAQw.entity.backup_info
Error parsing command: flag --allow_jagged_rows=None: ('Non-boolean argument to boolean flag',None)
I tried allow_jagged_rows = 0 and allow_jagged_rows = None; nothing works, I just get the same error.
Please advise on this.
UPDATE: As Mosha suggested, --allow_jagged_rows=false has worked. It should come before --write_disposition=WRITE_TRUNCATE. But this has led to another issue with encoding. Can anyone say what the encoding type should be for DATASTORE_BACKUP? I tried both --encoding=UTF-8 and --encoding=ISO-8859.
load --source_format=DATASTORE_BACKUP --allow_jagged_rows=false --write_disposition=WRITE_TRUNCATE sample_red.t1estchallenge_1 gs://test.appspot.com/STAGING/ahFzfnZpcmdpbi1yZWQtdGVzdHJBCxIcX0FFX0RhdGFzdG9yZUFkbWluX09wZXJhdGlvbhiBwLgCDAsSFl9BRV9CYWNrdXBfSW5mb3JtYXRpb24YAQw.entityname.backup_info
Please advise.
You should use "false" (or "true") with boolean arguments, i.e.
--allow_jagged_rows=false

Powershell 4.0 - plink and table-like data

I am running PowerShell 4.0 and the following command, interacting with a Veritas NetBackup master server on a Unix host via plink:
PS C:\batch> $testtest = c:\batch\plink blah#blersniggity -pw "blurble" "/usr/openv/netbackup/bin/admincmd/nbpemreq -due -date 01/17/2014" | Format-Table -property Status
As you can see, I attempted a "Format-Table" call at the end of this.
The resulting value of the variable ($testtest) is a string that is laid out exactly like the table in the Unix console, with Status, Job Code, Servername, Policy... all listed in order. But it populates the variable in PowerShell as just that: a vanilla string.
I want to use this in conjunction with a stored procedure on a SQL box, which would be TONS easier if I could format it into a table. How do I use PowerShell to tabulate it exactly as it is extracted from the Unix prompt via plink?
You'll need to parse it and create PS Objects to be able to use the format-* cmdlets. I do enough of it that I wrote this to help:
http://gallery.technet.microsoft.com/scriptcenter/New-PSObjectFromMatches-87d8ce87
You'll need to be able to isolate the data and write a regex to capture the bits you want.
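For illustration, here is a minimal sketch of that approach. The column layout (status, job code, server, policy as whitespace-separated columns) is only an assumption about the nbpemreq output, so the regex and property names would need adjusting to the real columns, and header or separator lines may need to be skipped.
# Capture the plink output as an array of lines instead of piping it straight to Format-Table.
$lines = c:\batch\plink blah#blersniggity -pw "blurble" "/usr/openv/netbackup/bin/admincmd/nbpemreq -due -date 01/17/2014"

$rows = foreach ($line in $lines) {
    # Hypothetical layout: status, job code, server, policy as whitespace-separated columns.
    if ($line -match '^\s*(\S+)\s+(\S+)\s+(\S+)\s+(\S+)') {
        [PSCustomObject]@{
            Status  = $Matches[1]
            JobCode = $Matches[2]
            Server  = $Matches[3]
            Policy  = $Matches[4]
        }
    }
}

$rows | Format-Table -Property Status, JobCode, Server, Policy
Once each row is a real object rather than text, it is straightforward to hand the properties to the SQL side, for example as parameters to your stored procedure.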