How do I select each 3rd instance after I split in powershell - azure-powershell

I stored the data from the Azure command in a variable: $bandwidth = Get-AzConsumptionUsageDetail.
Now when I call $bandwidth.InstanceID I have 888 objects of something like:
/subscriptions/0858ffa5-d2dd-420f-a958-85f6911c121fe/resourceGroups/AZR-AEX-SCCUSTMETRICSPROD-Development/providers/Microsoft.Storage/storageAccounts/wmstoragemboxlandingdev
/subscriptions/0858ffa5-d2dd-420f-a958-85f6911c121fe/resourceGroups/AZR-AEX-SCCUSTMETRICSPROD-Development/providers/Microsoft.Storage/storageAccounts/wmstoragemboxlandingdev
I'm trying to extract the resource group name from each line, for which I tried the approach below:
$bandwidth.InstanceID -split '/'
Now how do I select only the resource group name from each object in a single line of code?
Any help would be highly appreciated.

You could try the below:
$bandwidth.InstanceID | %{$_.split('/')[4]}
Explanation:
$bandwidth.InstanceID - your output array of InstanceID values.
% - alias for ForEach-Object.
$_ - the current element passed from the previous pipe, here each InstanceID.
split('/') - this method returns an array of the substrings between the slashes.
[4] - the array index of the split string in which the resource group name resides.
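For example, splitting one of the instance IDs above shows why index 4 is the one you want (a quick sketch using the sample path from the question):
$id = '/subscriptions/0858ffa5-d2dd-420f-a958-85f6911c121fe/resourceGroups/AZR-AEX-SCCUSTMETRICSPROD-Development/providers/Microsoft.Storage/storageAccounts/wmstoragemboxlandingdev'
$parts = $id.Split('/')
# $parts[0] = ''   (empty string before the leading slash)
# $parts[1] = 'subscriptions'
# $parts[2] = '0858ffa5-d2dd-420f-a958-85f6911c121fe'
# $parts[3] = 'resourceGroups'
# $parts[4] = 'AZR-AEX-SCCUSTMETRICSPROD-Development'   <- the resource group
$parts[4]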

Related

Using WHERE with multiple columns with different data types to satisfy a single input in bash and postgresql

Please assist with the following. I'm trying to run a script that accepts one argument, $1. The argument can be a string, a character, or an integer. I want to use the argument in the WHERE clause to search for the element in the database.
This is the table I want to search from (shown as an image in the original post).
When I use the multiple conditions with OR, it works only when the argument is either a number or text.
This is what my code looks like:
ELEMENT=$($PSQL "SELECT * FROM elements e FULL JOIN properties p USING(atomic_number) WHERE symbol = '$1' OR name = '$1' OR atomic_number = $1;")
and these are the results I get when I run it with different arguments (screenshots omitted).
Please help.
Thank you in advance
This will always fail on any non-numeric argument.
You are passing in H for hydrogen, but the script takes whatever was passed in and uses it in the atomic_number comparison as an unquoted value, which the DB engine then has to make sense of. H isn't a number, and it isn't a quoted string, so it must be the name of a column... but it isn't, so you are using invalid syntax.
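For example, with H as the argument the shell expands the query to:
SELECT * FROM elements e FULL JOIN properties p USING(atomic_number) WHERE symbol = 'H' OR name = 'H' OR atomic_number = H;
and that bare, unquoted H is what the engine chokes on.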
I don't have a postgres available right now, but try something like this -
ELEMENT=$( $PSQL "
  SELECT *
  FROM elements e
  FULL JOIN properties p USING(atomic_number)
  WHERE symbol = '$1'
     OR name = '$1'
     OR CAST(atomic_number AS TEXT) = '$1'; " )
Also, as an aside... avoid all-capital variable names.
As a convention, those are supposed to be system vars.
And please - please don't embed images except as helpful clarification.
Never rely on them to provide information if it can be avoided. Paste actual formatted text that people can copy into their own testing.
An alternate way to construct the query (requires bash):
looks_like_a_number() {
  # only contains digits
  [[ "$1" == +([[:digit:]]) ]]
}
sanitize() {
  # at a minimum, handle embedded single quotes
  printf '%s' "${1//\'/\'\'}"
}
if looks_like_a_number "$1"; then
  field="atomic_number"
  value=$1
elif [[ ${#1} -eq 1 ]]; then
  field="symbol"
  printf -v value "'%s'" "$(sanitize "$1")"
else
  field="name"
  printf -v value "'%s'" "$(sanitize "$1")"
fi
q="SELECT *
FROM elements e
FULL JOIN properties p USING(atomic_number)
WHERE $field = $value;"
printf '%s\n' "$q"
result=$("$PSQL" "$q")
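For illustration, assuming the script is saved as element.sh (a name chosen just for this example), the printf line shows the query it builds:
$ ./element.sh H
SELECT *
FROM elements e
FULL JOIN properties p USING(atomic_number)
WHERE symbol = 'H';
$ ./element.sh 1
SELECT *
FROM elements e
FULL JOIN properties p USING(atomic_number)
WHERE atomic_number = 1;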

Arguments mismatch using where IN clause in query

I have a column in a Hive table like below:
testing_time
2018-12-31 14:45:55
2018-12-31 15:50:58
Now I want to get the distinct values into a variable so I can use them in another query.
I have done it like below:
abc=`hive -e "select collect_set(testing_time) from db.tbl";`
echo $abc
["2018-12-31 14:45:55","2018-12-31 15:50:58"]
xyz=${abc:1:-1}
When I do
hive -e "select * from db.tbl where testing_time in ($xyz)"
I get the below error:
Arguments for IN should be the same type! Types are {timestamp IN (string, string)
What is the mistake I am making?
What is the correct way of achieving my result?
Note: I know I can use a subquery for this scenario, but I would like to use a variable to achieve my result.
The problem is that you're comparing a timestamp (column testing_time) with strings (i.e. "2018-12-31 14:45:55"), so you need to convert each string to a timestamp, which you can do via TIMESTAMP(string).
Here's a bash script that adds the conversion:
RES="" # here we will save the resulting SQL
IFS=","
read -ra ITEMS <<< "$xyz" # split timestamps into array
for ITEM in "${ITEMS[#]}"; do
RES="${RES}TIMESTAMP($ITEM)," # add the timestamp to RES variable,
# surrounded by TIMESTAMP(x)
done
unset IFS
RES="${RES%?}" # delete the extra comma
Then you can run the constructed SQL query:
hive -e "select * from db.tbl where testing_time in ($RES)"

SSIS - Using flat file as a Parameter/Variable

I would like to know how to use a flat file (with only one value, say a datetime) as a Parameter/Variable. Instead of feeding a value from an Execute SQL Task into a variable, I want to save it to a flat file and then load it again as a Parameter/Variable.
This can be done using a Script Task.
1 Set ReadOnlyVariables = the variable holding the file name.
2 Set ReadWriteVariables = the variable name you have to populate.
3 In the script, write the logic to find the value (read the file and get the value), then set the variable:
this.Dts.Variables["sFileContent"].Value = streamText;
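Putting it together, a rough C# sketch of the Script Task's Main method, assuming the file path is supplied in a read-only variable User::sFilePath and the value goes back into User::sFileContent (both names are placeholders for this example):
public void Main()
{
    // Path to the flat file that holds the single value (e.g. a datetime string).
    string path = Dts.Variables["User::sFilePath"].Value.ToString();

    // The file contains only the one value, so read it all and trim the whitespace.
    string streamText = System.IO.File.ReadAllText(path).Trim();

    // Hand the value back to the package variable listed under ReadWriteVariables.
    Dts.Variables["User::sFileContent"].Value = streamText;

    Dts.TaskResult = (int)ScriptResults.Success;
}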

Pig Latin - Extracting fields meeting two different filter criteria from chararray line and grouping in a bag

I am new to Pig Latin.
I want to extract all lines that match a filter criterion (contain the word "line_token") from log files, and then from these matching lines extract two different fields meeting two separate field match criteria. Since the lines aren't structured well, I am loading them as a chararray.
When I try to run the following code, I get an error:
"Invalid resource schema: bag schema must have tuple as its field"
I have tried to perform an explicit cast to a tuple but that does not work.
input_lines = LOAD '/inputdir/' AS ( line:chararray);
filtered_lines = FILTER input_lines BY (line MATCHES '.*line_token1.*' );
tokenized_lines = FOREACH filtered_lines GENERATE FLATTEN(TOKENIZE(line)) AS tok_line;
my_wordbag = FOREACH tokenized_lines {
    word1 = FILTER tok_line BY ( $0 MATCHES '.*word_token1.*' ) ;
    word2 = FILTER tok_line BY ( $0 MATCHES '.*word_token2.*' ) ;
    GENERATE word1 , word2 as my_tuple ;
    -- I also tried --> GENERATE (word1 , word2) as my_tuple ;
}
dump my_wordbag;
I suppose I am taking a very wrong approach.
Please note that my logs aren't structured well, so I can't mend the way I load them.
After loading and the initial filtering for lines of interest (which is straightforward), I guess I need to do something different rather than tokenize each line and iterate through the fields trying to find them.
Or maybe I should use joins?
Also, if I know the structure of the line beforehand (all text fields), will loading it differently (not as a chararray) make it an easier problem?
For now I made a compromise: I added an extra filter clause to my original line filter and settled for picking just one field from the line. When I get back to it I will try joins and post that code. Here's my working code that gets me useful output, but not all that I want.
-- read input lines from poorly structured log
input_lines = LOAD '/log-in-dir-in-hdfs' AS ( line:chararray) ;
-- Filter for line filter criteria and date interested in passed as arg
filtered_lines = FILTER input_lines BY (
( line MATCHES '.*line_filter1*' )
AND ( line MATCHES '.*line_filter2.*' )
AND ( line MATCHES '.*$forDate.*' )
) ;
-- Tokenize every line
tok_lines = FOREACH filtered_lines
GENERATE TOKENIZE(line) AS tok_line;
-- Pick up a specific field from the tokenized line based on column filter criteria
fnames = FOREACH tok_lines {
    fname = FILTER tok_line BY ( $0 MATCHES '.*field_selection.*' ) ;
    GENERATE FLATTEN(fname) as nnfname;
}
-- Count occurrences of that field and store it with the field name
-- My original intent is to store another field name as well
-- I will do that once I figure out how to put both of them in a tuple
flgroup = FOREACH fnames
GENERATE FLATTEN(TOKENIZE((chararray)$0)) as cfname;
grpfnames = group flgroup by cfname;
readcounts = FOREACH grpfnames GENERATE COUNT(flgroup), group ;
STORE readcounts INTO '/out-dir-in-hdfs';
As I understand it, after the FLATTEN operation you have a single line (tok_line) in each row and you want to extract 2 words from each line. REGEX_EXTRACT will help you achieve this. I'm not a REGEX expert, so I will leave writing the REGEX part up to you.
data = FOREACH tokenized_lines
    GENERATE
        REGEX_EXTRACT(tok_line, <first word regex goes here>, 1) as firstWord,
        REGEX_EXTRACT(tok_line, <second word regex goes here>, 1) as secondWord;
I hope this helps.
You must refer to the alias, not the column.
So:
word1 = FILTER tokenized_lines BY ( $0 MATCHES '.*word_token1.*' ) ;
word1 and word2 are going to be aliases as well, not columns.
What do you need the output to look like?
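In the meantime, here is a rough sketch that reuses the nested-FOREACH pattern from your working code, keeping the TOKENIZE output as a bag and filtering it twice (word_token1 and word_token2 stand in for your real criteria):
input_lines = LOAD '/inputdir/' AS (line:chararray);
filtered_lines = FILTER input_lines BY (line MATCHES '.*line_token1.*');
-- keep the tokens as a bag instead of flattening them right away
tok_lines = FOREACH filtered_lines GENERATE TOKENIZE(line) AS tok_bag;
my_words = FOREACH tok_lines {
    word1 = FILTER tok_bag BY ($0 MATCHES '.*word_token1.*');
    word2 = FILTER tok_bag BY ($0 MATCHES '.*word_token2.*');
    -- FLATTEN turns the two single-column bags into plain fields;
    -- if a filter matches several tokens you get their cross product,
    -- and a line where either filter matches nothing produces no row
    GENERATE FLATTEN(word1), FLATTEN(word2);
}
dump my_words;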

How can I use the COUNT value obtained from a call to mksqlite()?

I'm using mksqlite to create and access an SQL database from matlab, and I want to get the number of rows in a table. I've tried this:
num = mksqlite('SELECT COUNT(*) FROM myTable');
, but the returned value isn't very helpful. If I put a breakpoint in my script and examine the variable, I find that it's a struct with a single field, called 'COUNT(_)', which seems to actually be an invalid name for a field, so I can't access it:
K>> class(num)
ans =
struct
K>> num
num =
COUNT(_): 0
K>> num.COUNT(_)
??? num.COUNT(_)
|
Error: The input character is not valid in MATLAB statements or expressions.
K>> num.COUNT()
??? Reference to non-existent field 'COUNT'.
K>> num.COUNT
??? Reference to non-existent field 'COUNT'.
Even the MATLAB IDE can't access it. If I try to double click the field in the variable editor, this gets spat out:
??? openvar('num.COUNT(_)', num.COUNT(_));
|
Error: The input character is not valid in MATLAB statements or expressions.
So how can I access this field?
You are correct that the problem is that mksqlite somehow manages to create an invalid field name that can't be read. The simplest solution is to add an AS clause to your SQL so that the field has a sensible name:
>> num = mksqlite('SELECT COUNT(*) AS cnt FROM myTable')
num =
cnt: 0
Then to remove the extra layer of indirection you can do:
>> num = num.cnt;
>> num
num =
0