In Splunk I need to match the client IPs in my search results against an input lookup CSV file, knownip.csv. I want only the results that did not match the CSV file.
Step 1. Created a list of verified known IPs as a CSV file saved on my local system.
Step 2. Navigated to Manager > Lookups > Add New > Lookup Table File.
Step 3. Uploaded my file and named it KnownIP.csv. Then, under Manager > Lookups > Add New > Lookup Definition, I have name=clientIP, lookup-file=KnownIP.csv.
Step 4. Defined my search query like this:
search NOT[|inputlookup Lookupfile | fields Name ] index=* serverName>servername113
| rex field=clientIP "(?<clientIP>\d+.\d+.\d+)"
| stats count by clientIP
| search NOT [|inputlookup append=t KnownIP.csv|fields clientIP]
As I noted, I need help getting this search to match the CSV file.
There are a couple of ways to do what you're asking for.
The first is to use the lookup table as a filter in your initial search, like this:
index=ndx sourcetype=srctp NOT [|inputlookup mylookup | fields ip]
Or, do a lookup, and keep all the entries that have a null value in the lookup's other field ... like this:
index=ndx sourcetype=srctp
| lookup mylookup ip OUTPUT otherfield AS filterfield
| where isnull(filterfield)
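Applied to the setup in the question (lookup file KnownIP.csv, assuming its column is also named clientIP; the index and serverName filters are placeholders copied from the question, and the rex has been tightened to capture a full dotted IPv4 with the dots escaped), the whole thing would look roughly like this:
index=* serverName=servername113
| rex field=clientIP "(?<clientIP>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
| stats count by clientIP
| search NOT [| inputlookup KnownIP.csv | fields clientIP]
Note there is no append=t on the inputlookup; the subsearch only needs to return the clientIP values to exclude.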
I'm trying to create a new BigQuery destination on Airbyte with the Octavia CLI.
When launching:
octavia apply
I receive:
Error: {"message":"The provided configuration does not fulfill the specification. Errors: json schema validation failed when comparing the data to the json schema. \nErrors:
$.loading_method.method: must be a constant value Standard
Here is my conf:
# Configuration for airbyte/destination-bigquery
# Documentation about this connector can be found at https://docs.airbyte.com/integrations/destinations/bigquery
resource_name: "BigQueryFromOctavia"
definition_type: destination
definition_id: 22f6c74f-5699-40ff-833c-4a879ea40133
definition_image: airbyte/destination-bigquery
definition_version: 1.2.12
# EDIT THE CONFIGURATION BELOW!
configuration:
  dataset_id: "airbyte_octavia_thibaut" # REQUIRED | string | The default BigQuery Dataset ID that tables are replicated to if the source does not specify a namespace. Read more here.
  project_id: "data-airbyte-poc" # REQUIRED | string | The GCP project ID for the project containing the target BigQuery dataset. Read more here.
  loading_method:
    ## -------- Pick one valid structure among the examples below: --------
    # method: "Standard" # REQUIRED | string
    ## -------- Another valid structure for loading_method: --------
    method: "GCS Staging" # REQUIRED | string
    credential:
      ## -------- Pick one valid structure among the examples below: --------
      credential_type: "HMAC_KEY" # REQUIRED | string
      hmac_key_secret: ${AIRBYTE_BQ1_HMAC_KEY_SECRET} # SECRET (please store in environment variables) | REQUIRED | string | The corresponding secret for the access ID. It is a 40-character base-64 encoded string. | Example: 1234567890abcdefghij1234567890ABCDEFGHIJ
      hmac_key_access_id: ${AIRBYTE_BQ1_HMAC_KEY_ACCESS_ID} # SECRET (please store in environment variables) | REQUIRED | string | HMAC key access ID. When linked to a service account, this ID is 61 characters long; when linked to a user account, it is 24 characters long. | Example: 1234567890abcdefghij1234
      gcs_bucket_name: "airbyte-octavia-thibaut-gcs" # REQUIRED | string | The name of the GCS bucket. Read more here. | Example: airbyte_sync
      gcs_bucket_path: "gcs" # REQUIRED | string | Directory under the GCS bucket where data will be written. | Example: data_sync/test
      # keep_files_in_gcs-bucket: "Delete all tmp files from GCS" # OPTIONAL | string | This upload method is supposed to temporary store records in GCS bucket. By this select you can chose if these records should be removed from GCS when migration has finished. The default "Delete all tmp files from GCS" value is used if not set explicitly.
  credentials_json: ${AIRBYTE_BQ1_CREDENTIALS_JSON} # SECRET (please store in environment variables) | OPTIONAL | string | The contents of the JSON service account key. Check out the docs if you need help generating this key. Default credentials will be used if this field is left empty.
  dataset_location: "europe-west1" # REQUIRED | string | The location of the dataset. Warning: Changes made after creation will not be applied. Read more here.
  transformation_priority: "interactive" # OPTIONAL | string | Interactive run type means that the query is executed as soon as possible, and these queries count towards concurrent rate limit and daily limit. Read more about interactive run type here. Batch queries are queued and started as soon as idle resources are available in the BigQuery shared resource pool, which usually occurs within a few minutes. Batch queries don’t count towards your concurrent rate limit. Read more about batch queries here. The default "interactive" value is used if not set explicitly.
  big_query_client_buffer_size_mb: 15 # OPTIONAL | integer | Google BigQuery client's chunk (buffer) size (MIN=1, MAX = 15) for each table. The size that will be written by a single RPC. Written data will be buffered and only flushed upon reaching this size or closing the channel. The default 15MB value is used if not set explicitly. Read more here. | Example: 15
It was an indentation issue on my side:
gcs_bucket_name: "airbyte-octavia-thibaut-gcs" # REQUIRED | string | The name of the GCS bucket. Read more here. | Example: airbyte_sync
gcs_bucket_path: "gcs" # REQUIRED | string | Directory under the GCS bucket where data will be written. | Example: data_sync/test
These two lines should be one level up, under loading_method rather than credential (this wasn't clear in the commented template, hence the error, and other people will likely make the same mistake).
Here is the full final conf:
# Configuration for airbyte/destination-bigquery
# Documentation about this connector can be found at https://docs.airbyte.com/integrations/destinations/bigquery
resource_name: "BigQueryFromOctavia"
definition_type: destination
definition_id: 22f6c74f-5699-40ff-833c-4a879ea40133
definition_image: airbyte/destination-bigquery
definition_version: 1.2.12
# EDIT THE CONFIGURATION BELOW!
configuration:
  dataset_id: "airbyte_octavia_thibaut" # REQUIRED | string | The default BigQuery Dataset ID that tables are replicated to if the source does not specify a namespace. Read more here.
  project_id: "data-airbyte-poc" # REQUIRED | string | The GCP project ID for the project containing the target BigQuery dataset. Read more here.
  loading_method:
    ## -------- Pick one valid structure among the examples below: --------
    # method: "Standard" # REQUIRED | string
    ## -------- Another valid structure for loading_method: --------
    method: "GCS Staging" # REQUIRED | string
    credential:
      ## -------- Pick one valid structure among the examples below: --------
      credential_type: "HMAC_KEY" # REQUIRED | string
      hmac_key_secret: ${AIRBYTE_BQ1_HMAC_KEY_SECRET} # SECRET (please store in environment variables) | REQUIRED | string | The corresponding secret for the access ID. It is a 40-character base-64 encoded string. | Example: 1234567890abcdefghij1234567890ABCDEFGHIJ
      hmac_key_access_id: ${AIRBYTE_BQ1_HMAC_KEY_ACCESS_ID} # SECRET (please store in environment variables) | REQUIRED | string | HMAC key access ID. When linked to a service account, this ID is 61 characters long; when linked to a user account, it is 24 characters long. | Example: 1234567890abcdefghij1234
    gcs_bucket_name: "airbyte-octavia-thibaut-gcs" # REQUIRED | string | The name of the GCS bucket. Read more here. | Example: airbyte_sync
    gcs_bucket_path: "gcs" # REQUIRED | string | Directory under the GCS bucket where data will be written. | Example: data_sync/test
    # keep_files_in_gcs-bucket: "Delete all tmp files from GCS" # OPTIONAL | string | This upload method is supposed to temporary store records in GCS bucket. By this select you can chose if these records should be removed from GCS when migration has finished. The default "Delete all tmp files from GCS" value is used if not set explicitly.
  credentials_json: ${AIRBYTE_BQ1_CREDENTIALS_JSON} # SECRET (please store in environment variables) | OPTIONAL | string | The contents of the JSON service account key. Check out the docs if you need help generating this key. Default credentials will be used if this field is left empty.
  dataset_location: "europe-west1" # REQUIRED | string | The location of the dataset. Warning: Changes made after creation will not be applied. Read more here.
  transformation_priority: "interactive" # OPTIONAL | string | Interactive run type means that the query is executed as soon as possible, and these queries count towards concurrent rate limit and daily limit. Read more about interactive run type here. Batch queries are queued and started as soon as idle resources are available in the BigQuery shared resource pool, which usually occurs within a few minutes. Batch queries don’t count towards your concurrent rate limit. Read more about batch queries here. The default "interactive" value is used if not set explicitly.
  big_query_client_buffer_size_mb: 15 # OPTIONAL | integer | Google BigQuery client's chunk (buffer) size (MIN=1, MAX = 15) for each table. The size that will be written by a single RPC. Written data will be buffered and only flushed upon reaching this size or closing the channel. The default 15MB value is used if not set explicitly. Read more here. | Example: 15
I get the request body data from an Excel file.
I have already converted the Excel file to CSV format.
I have been able to find a partial solution, but it is not working 100%: the jsonBody format is not fetched correctly, and the data imported from the CSV by the collection runner shows forward slashes.
Request Body
{{jsonBody}}
Set global variable jsonBody
When I run the collection and select the data file as a CSV file (as per the screenshot), the request body appears with forward slashes.
After running the collection I'm getting an incorrect version of the body, with forward slashes.
The screenshot below shows the correct version of the CSV data; I need to remove the forward slashes from the CSV data.
I had a similar issue with Postman and realized my problem was more of a syntax issue.
Let's say your CSV file has the following columns:
userId | mid | platform | type | ...etc
row1 94J4J | 209444894 | NORTH | PT | ...
row2 324JE | 934421903 | SOUTH | MB | ...
row3 966RT | 158739394 | EAST | PT | ...
This is how you want your JSON request body to look:
{
"userId" : "{{userId}}",
"mids":[{
"mid":"{{mid}}",
"platform":"{{platform}}"
}],
"type":["{{type}}"],
.. etc
}
Make sure your column names match the variables {{variableName}}.
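With that body, the data file you feed to the collection runner is just those columns flattened into CSV, for example (same sample rows as above):
userId,mid,platform,type
94J4J,209444894,NORTH,PT
324JE,934421903,SOUTH,MB
966RT,158739394,EAST,PT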
The data coming from CSV is already in a stringified format so you don't need to do anything in pre-request.
Example: let the CSV be
| jsonBody |
| {"name":"user"}|
Now in the Postman request just use:
{{jsonBody}}
since {{column_name}} is treated as a data variable, which in your case is {{jsonBody}}.
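If a cell itself contains JSON, standard CSV quoting applies: wrap the whole cell in double quotes and double every quote inside it. A minimal sample file (illustrative only, not the asker's actual data):
jsonBody
"{""name"":""user"",""type"":""PT""}"
The runner then substitutes the raw JSON for {{jsonBody}} in the request body without any extra escaping.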
I have a simple CSV file loaded in Splunk called StandardMaintenance.csv, which looks like this...
UnderMaintenance
NO
We currently get bombarded with alerts during our maintenance window. At the start of maintenance, I want to be able to change this to YES to stop the alerts (I have an easy way to do this). I am looking for something standard to add to all alert queries that checks this CSV for status (a lookup, as I understand it) and makes the query return nothing if UnderMaintenance = YES, thus not generating a match.
It is basically a binary, ON or OFF. I would appreciate any help you could provide.
NOTE: You cannot disable the alert by executing a Splunk query, because the REST API requires a POST action.
Step 1: Maintain a CSV file of all your saved searches with their owners by using the query below. You can schedule the query at your convenience. For example, the search below creates maintenance.csv and replaces all of its contents whenever it is executed.
| rest /servicesNS/-/search/saved/searches | table title eai:acl.owner | outputlookup maintenance.csv
This file would get created in $SPLUNK_HOME/etc/apps/<app name>/lookups
Step 2: Write a script to read the data from the maintenance.csv file and execute the command below to disable the searches (run before the maintenance window; see the sketch after Step 3).
curl -X POST -k -u admin:pass https://<splunk server>:8089/servicesNS/<owner>/search/saved/searches/<search title>/disable
Step 3: Do the same thing to enable all searches; just change the command to the one below (run after the maintenance window).
curl -X POST -k -u admin:pass https://<splunk server>:8089/servicesNS/<owner>/search/saved/searches/<search title>/enable
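For Step 2, a minimal shell sketch (it assumes the lookup lives under the search app and that the CSV columns are title and eai:acl.owner as produced by the outputlookup above; adjust paths, credentials and server name, and note that titles containing commas or spaces need proper quoting/URL-encoding):
#!/bin/sh
# Disable every saved search listed in maintenance.csv.
# Swap /disable for /enable after the maintenance window.
CSV="$SPLUNK_HOME/etc/apps/search/lookups/maintenance.csv"
tail -n +2 "$CSV" | while IFS=, read -r title owner; do
  curl -X POST -k -u admin:pass \
    "https://<splunk server>:8089/servicesNS/$owner/search/saved/searches/$title/disable"
done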
EDIT 1:
Create StandardMaintenance.csv file under $SPLUNK_HOME/etc/apps/search/lookups.
The StandardMaintenance.csv file contains :
UnderMaintenance
"No"
Use the search query below to get results for existing saved searches only if UnderMaintenance = No:
| rest /servicesNS/-/search/saved/searches
| eval UnderMaintenance = "No"
| table title eai:acl.owner UnderMaintenance
| join UnderMaintenance
[| inputlookup StandardMaintenance.csv ]
| table title eai:acl.owner
Hope this helps!
Before each query create a variable (say it's called foo) that you set to true if maintenance is NO and that you do not set otherwise, as below:
... | eval foo=case(maintenance=="NO","true")
Then you put the below at the end of your query:
| eval foo=$foo$
This will make your query execute only if maintenance is NO.
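If you prefer to keep everything in SPL, here is one way to write that gate as a clause you can paste onto the end of any alert search (a sketch, assuming StandardMaintenance.csv always contains a single UnderMaintenance row):
... your alert search ...
| eval joiner=1
| join type=left joiner
    [| inputlookup StandardMaintenance.csv | eval joiner=1 | fields joiner UnderMaintenance]
| where coalesce(UnderMaintenance, "NO") != "YES"
When the flag is YES every row is filtered out, so the alert fires on nothing; when it is NO (or the row is absent) the results pass through unchanged.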
I have data that looks like the following:
assetnum | assetdesc
123 | sampledesc
432 | sample desc2
I want to insert another row with four fields so it looks like the following:
SYSNAME | OBJSTRUC | AddChange | En
assetnum | assetdesc
123 | sampledesc
432 | sample desc2
However I am unsure how to do this. Does anyone know how?
I have tried generating rows, but I am unsure how to merge them so that it looks like this. I have also thought of adding headers, but I am unsure how to specify the header (without it being created automatically). I am quite new to Pentaho.
Thanks.
Here is a hack. Assume StepA writes the actual data into a file, fileA. Before writing anything into fileA, have a Text file output step and, in its Content tab, set the Add Ending line of file field to the custom row you need to insert. Since the file is empty at the beginning, your last line will become the first line. Once that is done, you can write the other data from your original source using the Append flag. To set the dependency, use a Block until steps finish step to hold back the actual write in StepA.
I am trying to obtain a list of all the DB Tables that will give me visibility on what tables I may need to JOIN for running SQL scripts.
For example, in TCL when I run "LIST.DICT" it returns "Name of File:" for input. I then enter "PRODUCT" and it returns a list of all available fields.
However, where can I get a list of all my available tables, or a list of the options I can enter after "Name of File:"?
Here is what I am trying to achieve. In the screenshot below, I would like to run a SQL script that gives me the latest Log File Activity: Date - Time - Description. I would like the script to return '8/13/14 08:40am BR: 3;BuyPkg'.
Thank you in advance for your help.
From TCL within the database account containing your database files, type: LISTF
Sample output:
FILES in your vocabulary 03:21:38pm 29 Jun 2015 Page 1
Filename........................... Pathname...................... Type Modulo
File - Contains all logical device names
DICT &DEVICE& /u1/uv/D_&DEVICE& 2 1
DATA &DEVICE& /u1/uv/&DEVICE& 2 3
File - Used by MAKE.MAP.FILE
DICT &MAP& /u1/uv/D_&MAP& 2 1
DATA &MAP& /u1/uv/&MAP& 6 7
File - Contains all parts of Distributed Files
DICT &PARTFILES& /u1/uv/D_&PARTFILES& 2 1
DATA &PARTFILES& /u1/uv/&PARTFILES& 18 7
DICT &PH& D_&PH& 3 1
DATA &PH& &PH& 1
DICT &SAVEDLISTS& D_&SAVEDLISTS& 3 1
DATA &SAVEDLISTS& &SAVEDLISTS& 1
File - Used by uniVerse to access the current directory.
DICT &UFD& /u1/uv/D_UFD 2 1
DATA &UFD& . 19 1
DICT &XML& D_&XML& 18 3
DATA &XML& &XML& 19 1
Firstly, UniVerse has no Log File Activity date and time.
However, you can still obtain the table's modified/accessed date from the file system.
To do this:
1. Create a subroutine that accepts the path of a table and returns a date or a time, e.g. SUBROUTINE GET.FILE.MOD.DATE(DAT.MOD, S.FILE.PATH). Inside the subroutine you can use EXECUTE to run a shell command such as istat to get this information on Unix (see the sketch after this list). Be aware that a dynamic file has Data and Overflow parts under a directory; you should compare the dates obtained and return only the latest one.
2. Globally catalog the subroutine.
3. Create an I-descriptor in the VOC, e.g. I.FILE.MOD.DATE, with SUBR("*GET.FILE.MOD.DATE",F2) in the field definition and a conversion code of "D/MDY2".
4. Create another I-descriptor, e.g. I.FILE.MOD.TIME.
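A minimal sketch of such a subroutine (UniVerse BASIC on a Unix host; the shell command and the date conversion are assumptions to adapt to your platform, e.g. on AIX you would call istat and parse its Last modified line instead):
SUBROUTINE GET.FILE.MOD.DATE(DAT.MOD, S.FILE.PATH)
* Sketch only: shell out for the file's last-modified timestamp.
* Assumes 'stat -c %y' prints something like "2015-06-29 15:21:38.000000000 +0000".
  DAT.MOD = ''
  EXECUTE 'SH -c "stat -c %y ':S.FILE.PATH:'"' CAPTURING OUT.LINES RETURNING RTN.CODES
  RAW = OUT.LINES<1>
  IF RAW # '' THEN
    * Convert the leading YYYY-MM-DD part to an internal date for the I-descriptor
    DAT.MOD = ICONV(FIELD(RAW, ' ', 1), 'D-YMD[4,2,2]')
  END
  RETURN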
Finally, you can
LIST VOC I.FILE.MOD.DATE I.FILE.MOD.TIME DESC WITH TYPE LIKE "F..."
Alternatively, in SQL:
SELECT I.FILE.MOD.DATE, I.FILE.MOD.TIME, VOC.DESC FROM VOC WHERE TYPE LIKE "F%";