Table Format Alphabetically - formatting

I'm running the following and would like to sort the output by Name (alphabetically), but for some reason I'm having difficulty:
get-wmiobject Win32_Product | Format-Table Name, IdentifyingNumber, LocalPackage -AutoSize
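A likely fix is to sort before formatting: Format-Table only renders the objects it receives in the order they arrive, so the ordering has to happen upstream with Sort-Object. A minimal sketch:
get-wmiobject Win32_Product | Sort-Object Name | Format-Table Name, IdentifyingNumber, LocalPackage -AutoSize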

Related

Use of LIKE in PostgreSQL with brackets

I'll try to be as specific as possible.
I'm currently using MS SQL Server 2012.
A simplified table PlanMission contains these rows:
| Bold_Id | MCurrentStates        |
|---------|-----------------------|
| 10776   | [original][scheme]    |
| 10777   | [operative][inproc]   |
| 10778   | [operative][closed]   |
| 10779   | [operative][planopen] |
The Bold_Id column is just a unique ID.
The MCurrentStates column is a VARCHAR containing states as substrings.
A state is simply a string surrounded by brackets, like [planopen].
So the column may be empty or hold several states, as in the example above.
In MS SQL, if I do this:
SELECT Bold_Id, MCurrentStates
FROM PlanMission
WHERE MCurrentStates LIKE '%[planopen]%'
it doesn't work; it just lists all rows with a non-empty MCurrentStates, because SQL Server treats [...] in a LIKE pattern as a character class. It is solved by inserting [] to escape the opening bracket:
SELECT Bold_Id, MCurrentStates
FROM PlanMission
WHERE MCurrentStates LIKE '%[[]planopen]%'
That works fine.
Now I want to do the same in PostgreSQL.
A simple solution would be to remove the brackets from the data.
But my question is: how can I do this with one query that works in both MS SQL and PostgreSQL?
Try an explicit escape character, which both SQL Server and PostgreSQL support via the standard ESCAPE clause:
SELECT Bold_Id, MCurrentStates
FROM PlanMission
WHERE MCurrentStates LIKE '%/[planopen/]%' ESCAPE '/';
With ESCAPE '/', the brackets lose their special meaning in SQL Server and are matched literally, so the same pattern behaves identically in both databases.

Vertica database: how to get a distinct count of "first names" where first name and last name are stored in a single column?

I'm new to Vertica DB and am facing a problem.
It is mostly like SQL. I have a Customer table:
Customer Table
NAME         | AGE | SEX
JOHN KENY    | 26  | M
JOHN CENA    | 32  | M
JOHN MCCAIN  | 35  | M
PETER PAN    | 33  | M
SELENA GOMEZ | 24  | F
Now I would like a query to run on Vertica DB that fetches the DISTINCT customer first names, i.e.
NAME
JOHN
PETER
SELENA
I'm trying the SPLIT_PART() function in Vertica, but I'm not able to get the query right.
SELECT DISTINCT NAME FROM
(SELECT SPLIT_PART(NAME,' ',1) from Customer );
gives
ERROR SYNTAX error at or near "Select"
I also tried
SELECT SPLIT_PART(SELECT DISTINCT NAME FROM Customer,' ',1);
resulting in
ERROR SYNTAX error at or near "Select"
but
SELECT SPLIT_PART('JOHN KENY',' ',1) ;
outputs
JOHN
The following query should do the job:
select distinct SPLIT_PART(NAME,' ',1) from Customer
However, note that this is fragile. If this is a production environment (and not a simple exercise), I bet you'll end up with names containing spaces that will break your query.
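For reference, a likely reason the original nested attempt failed is that Vertica, like most databases, requires an alias on a derived table. With aliases added, the subquery form works too (a sketch):
SELECT DISTINCT first_name
FROM (
    SELECT SPLIT_PART(NAME, ' ', 1) AS first_name
    FROM Customer
) AS t;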

How to use rex command to extract two fields and chart the count for both in one search query?

I have a log statement like 2017-06-21 12:53:48,426 INFO transaction.TransactionManager.Info:181 -{"message":{"TransactionStatus":true,"TransactioName":"removeLockedUser-1498029828160"}}.
How can I extract TransactioName and TransactionStatus and print a table of each TransactioName with its count?
I tried the query below but didn't get any success; it always gives me 0.
sourcetype=10.240.204.69 "TransactionStatus" | rex field=_raw ".*TransactionStatus (?<status>.*)" | stats count(eval(status=true)) as success_count
Solved it with this:
| makeresults
| eval _raw="2017-06-21 12:53:48,426 INFO transaction.TransactionManager.Info:181 -{\"message\":{\"TransactionStatus\":true,\"TransactioName\":\"removeLockedUser-1498029828160\"}}"
| rename COMMENT AS "Everything above generates sample event data; everything below is your solution"
| rex "{\"TransactionStatus\":(?[^,]),\"TransactioName\":\"(?[^\"])\""
| chart count OVER TransactioName BY TransactionStatus
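Since the payload is JSON, another option (a sketch, not from the original answer) is to isolate the JSON part with rex and let spath parse it, which holds up better than a hand-written regex when fields change:
| rex "(?<json>\{.*\})"
| spath input=json path=message.TransactioName output=TransactioName
| spath input=json path=message.TransactionStatus output=TransactionStatus
| chart count over TransactioName by TransactionStatus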

Splunk - Match different fields in different events from same data source

I have a data source in which I need to return all pairs of events (event1, event2), where field1 from event1 matches field2 from event2.
For example, I need to return pairs of events where the field id from event1 matches the field referrer_id from event2, to produce the following report:
Adam Anderson referred Betty Burger on 2016-01-02 08:00:00.000
Adam Anderson referred Carol Camp on 2016-01-03 08:00:00.000
Betty Burger referred Darren Dougan on 2016-01-04 08:00:00.000
In SQL I can do this quite easily with the following query:
select a.first_name as first1, a.last_name as last1, b.first_name as first2,
b.last_name as last2, b.date as date
from myTable a
inner join myTable b on a.id = b.referrer_id;
which returns exactly the data I need.
Now, I've been attempting to replicate this in a Splunk query and have run into quite a few issues. First I attempted to use the transaction command, but that aggregated all of the related events together, as opposed to matching them a pair at a time.
Next, I attempted a subsearch with map: first finding the id, then searching for the first event by id, then appending the second event by referral_id. Since append creates a new row instead of extending the same row, I then used stats to aggregate the resulting rows by the matching id field. I also tried appendcols, but that didn't return anything for me.
...
| table id
| map search="search id=$id$
| fields first_name, last_name, id
| rename first_name as first1
| rename last_name as last1
| rename id as match_id
| append [search $id$
| search referral_id=$id$
| fields first_name, last_name, referral_id, date
| rename first_name as first2
| rename last_name as last2
| rename referral_id as match_id]"
| fields first1, last1, first2, last2, match_id, date
| stats values(first1) as first1, values(last1) as last1, values(first2) as first2,
values(last2) as last2, values(date) as date by match_id
The above query works for me and gives me the table I need, but it is incredibly slow due to the repeated searches over the entire time frame, and it is also limited by map's maxsearches setting, which, for whatever reason, cannot be set to unlimited.
This seems like an overly complicated solution, especially in comparison to the SQL query. Surely there must be a simpler, faster way to do this that isn't limited by arbitrary settings or multiple repeated search queries. I would greatly appreciate any help.
I ended up using append. Using join gave me faster results but didn't return every matching pair; for my example it returned two rows instead of three, pairing Adam with Betty but not Adam with Carol.
Using append returned the full list, and stats by id gave me what I was looking for: a full list of each matching pair. It also produced extra empty fields, so I had to remove those and then expand the resulting multivalue fields into their own rows. Splunk doesn't offer a multi-field mvexpand, so I used a workaround.
...
| rename id as matchId, first_name as first1, last_name as last1
| table matchId, first1, last1
| append [
search ...
| rename referrer_id as matchId, first_name as first2, last_name as last2
| table matchId, first2, last2, date]
| stats list(first1) as first1, list(last1) as last1, list(first2) as first2, list(last2) as last2, list(date) as date by matchId
| search first1!=null last1!=null first2!=null last2!=null
| eval zipped=mvzip(mvzip(first2, last2, ","), date, ",")
| mvexpand zipped
| makemv zipped delim=","
| eval first2=mvindex(zipped, 0)
| eval last2=mvindex(zipped, 1)
| eval date=mvindex(zipped, 2)
| fields - zipped
This is faster than using map with multiple subsearches, and it gives all of the results. It is still limited by the maximum size of a subsearch, but at least it provides the necessary data.
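For what it's worth, there is also a subsearch-free pattern when each event carries either id or referrer_id but not both (an assumption that may not hold for this data): unify the key with coalesce and aggregate in a single pass, which sidesteps both the subsearch size limit and map's maxsearches limit. A sketch:
...
| eval matchId=coalesce(id, referrer_id)
| eval first1=if(isnotnull(id), first_name, null()), last1=if(isnotnull(id), last_name, null())
| eval first2=if(isnotnull(referrer_id), first_name, null()), last2=if(isnotnull(referrer_id), last_name, null())
| stats values(first1) as first1, values(last1) as last1, list(first2) as first2, list(last2) as last2, list(date) as date by matchId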

Looping through a PostgreSQL table in bash

I have a PostgreSQL table of the following format:
uid | defaults | settings
----+----------+---------
abc | ab, bc   | -
pqr | pq, ab   | -
xyz | xy, pq   | -
I am trying to list all the uids which contain ab in the defaults column. In the above case, abc and pqr must be listed.
How do I form the query, and how do I loop over the table in bash to check each row?
@user000001 already provided the bash part. And the query could be:
SELECT uid
FROM tbl1
WHERE defaults LIKE '%ab%'
But this is inherently unreliable, since it would also find 'fab' or 'abz'. It is also hard to support with a fast index.
Consider normalizing your schema: add a second table tbl2 in a 1:n relationship, with one row per individual default and a foreign key to tbl1. Then your query could be:
SELECT uid
FROM tbl1 t1
WHERE EXISTS
(SELECT 1
FROM tbl2 t2
WHERE t2.def = 'ab' -- "default" = reserved word in SQL, so I use "def"
AND t2.tbl1_uid = t1.uid);
Or at least use an array for defaults. Then your query would be:
SELECT uid
FROM tbl1
WHERE 'ab' = ANY (defaults);
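To make the array variant concrete, here is a hypothetical setup (reusing the names from above), together with a containment query that can use a GIN index, which plain LIKE cannot:
CREATE TABLE tbl1 (uid text PRIMARY KEY, defaults text[], settings text);
INSERT INTO tbl1 VALUES
  ('abc', '{ab,bc}', NULL),
  ('pqr', '{pq,ab}', NULL),
  ('xyz', '{xy,pq}', NULL);
SELECT uid FROM tbl1 WHERE defaults @> ARRAY['ab'];  -- returns abc and pqr
CREATE INDEX ON tbl1 USING gin (defaults);           -- makes @> fast on big tables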
It's not really about bash, but you can run your query using psql. You can try this format:
psql -U username -d database_name -c "SELECT uid FROM table_name WHERE defaults LIKE 'ab, %' OR defaults LIKE '%, ab'"
Or maybe simply:
psql -U username -d database_name -c "SELECT uid FROM table_name WHERE defaults LIKE '%ab%'"
-U username is optional.
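To actually loop over the returned uids in bash, a sketch (-A gives unaligned output and -t strips headers and footers):
while read -r uid; do
    echo "found: $uid"    # per-row processing goes here
done < <(psql -At -U username -d database_name -c "SELECT uid FROM table_name WHERE defaults LIKE '%ab%'")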
Use awk:
awk -F\| '$2~/ab/{print $1}' file
Explanation:
The -F\| option sets the field separator to the | character.
With $2~/ab/ we filter the lines that contain "ab" in the second column.
With print $1 we print the first column for the lines matched.
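The two approaches combine naturally: pipe unaligned psql output straight into awk (a sketch, reusing the names from above; -F'|' keeps | as the column separator):
psql -At -F'|' -U username -d database_name -c "SELECT uid, defaults FROM table_name" | awk -F\| '$2~/ab/{print $1}'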