In Splunk, how to create a private lookup table for an individual user?

I am working on a network security project. I need to create a private lookup table for each user, such that no user can see the contents of another user's lookup table.
I created the lookup table with:
curl -k -u username:pwd https://localhost:8089/servicesNS/nobody/*appname*/data/lookup-table-files -d 'eai:data=/opt/splunk/var/run/splunk/lookup_tmp/april.csv' -d 'name=12_april_lookup.csv'
This created the 12_april_lookup.csv file inside the .../my_app/lookup/ folder. At this point the lookup table's permission is private.
But when I add some data to the lookup table with the search command below:
| makeresults | eval name="xyz" | eval token="12345"| outputlookup 12_april_lookup.csv append=True createinapp=True
then the file gets created in another app's folder and becomes globally shared, so every user can view the file contents with:
|inputlookup 12_april_lookup.csv

You need to run the command below from the same app's search context.
Because the command was run at the global app level, the file was created at the global level with global permissions.
In Splunk, every app has its own search context, and the lookup file is created in the lookup folder of whichever app's context the search runs in.
So make sure that every search you run in Splunk is executed in the correct app context, then run:
| makeresults | eval name="xyz" | eval token="12345"| outputlookup 12_april_lookup.csv append=True createinapp=True
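If the lookup still ends up shared too widely, one option is to set its permissions back to private for the owning user. This is a hedged sketch that assumes the standard Splunk REST ACL endpoint (POST to /acl on the lookup-table-files entry); appname and username are placeholders as above:
curl -k -u username:pwd https://localhost:8089/servicesNS/nobody/*appname*/data/lookup-table-files/12_april_lookup.csv/acl -d 'sharing=user' -d 'owner=username'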

Related

How to use Snowflake identifier function to reference a stage object

I can describe a stage with the identifier function:
desc stage identifier('db.schema.stage_name');
But I get an error when I try to use the stage with the at-symbol syntax.
I have tried these variations, but no luck so far:
list @identifier('db.schema.stage_name');
list identifier('@db.schema.stage_name');
list identifier('db.schema.stage_name');
list identifier(@'db.schema.stage_name');
list identifier("@db.schema.stage_name");
The use of IDENTIFIER suggests the need to query or list the contents of a stage whose name is provided as a variable.
An alternative approach could be to use directory tables:
Directory tables store a catalog of staged files in cloud storage. Roles with sufficient privileges can query a directory table to retrieve file URLs to access the staged files, as well as other metadata.
Enabling directory table on the stage:
CREATE OR REPLACE STAGE test DIRECTORY = (ENABLE = TRUE);
ALTER STAGE test REFRESH;
Listing content of the stage:
SET var = '@public.test';
SELECT * FROM DIRECTORY($var);
Output:
+---------------+------+---------------+-----+------+----------+
| RELATIVE_PATH | SIZE | LAST_MODIFIED | MD5 | ETAG | FILE_URL |
+---------------+------+---------------+-----+------+----------+
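As a usage sketch (assuming the test stage above and the default directory-table columns), the listing can be filtered like any other table:
SELECT relative_path, size, file_url
FROM DIRECTORY(@public.test)
WHERE relative_path LIKE '%.csv';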

How to detach or remove a data catalog tag from a BigQuery table using gcloud command

Could anyone please share an ETL tag template in GCP Data Catalog?
I'd like to refresh a tag value with its ETL status every time a BigQuery table is updated. I'm trying to use gcloud commands to create a tag template. I need to remove the tag from the template using the gcloud command and add another tag to that template, so that I can maintain the ETL status through automation.
I am able to remove the tag manually through the UI. I need the corresponding gcloud command for the same.
The procedure is explained in the Data Catalog documentation:
Suppose that you have the following tag template created:
gcloud data-catalog tag-templates create demo_template \
--location=us-central1 \
--display-name="Demo Tag Template" \
--field=id=source,display-name="Source of data asset",type=string,required=TRUE \
--field=id=num_rows,display-name="Number of rows in the data asset",type=double \
--field=id=has_pii,display-name="Has PII",type=bool \
--field=id=pii_type,display-name="PII type",type='enum(EMAIL_ADDRESS|US_SOCIAL_SECURITY_NUMBER|NONE)'
You need to look up the Data Catalog entry created for your BigQuery table:
ENTRY_NAME=$(gcloud data-catalog entries lookup '//bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET/tables/TABLE' --format="value(name)")
Once you have the entry name, you can:
Create a tag if it wasn't created on the entry or if it was deleted earlier:
cat > tag_file.json << EOF
{
  "source": "BigQuery",
  "num_rows": 1000,
  "has_pii": true,
  "pii_type": "EMAIL_ADDRESS"
}
EOF
gcloud data-catalog tags create --entry=${ENTRY_NAME} --tag-template=demo_template --tag-template-location=us-central1 --tag-file=tag_file.json
The command returns (among other fields) the tag name, which can be used with update or delete.
Delete a tag from the entry:
gcloud data-catalog tags delete TAG_NAME
Update a tag, so that you don't have to delete and recreate it:
gcloud data-catalog tags update --entry=${ENTRY_NAME} --tag-template=demo_template --tag-template-location=us-central1 --tag-file=tag_file.json
If you lost the tag name, use the list command:
gcloud data-catalog tags list --entry=${ENTRY_NAME}
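To tie this into automation, here is a hedged sketch that captures the tag name and deletes it non-interactively; it assumes the demo_template above and that --filter and --format behave here as in other gcloud list commands:
TAG_NAME=$(gcloud data-catalog tags list --entry=${ENTRY_NAME} --filter="template:demo_template" --format="value(name)")
gcloud data-catalog tags delete ${TAG_NAME} --quiet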

vNIC disable CSV output

I am using a PowerCLI script to disable vNICs on VMs and to export the status of the vNICs. I am trying to export ConnectionState.
I am using Get-VM $vm | Get-NetworkAdapter | Select Name, ConnectionState to extract output to a CSV file.
The generated CSV has the vNIC name and ConnectionState, but I want to add the VM name for the respective VMs. I tried various options but had no luck.
There should be an additional property available as part of the output from Get-NetworkAdapter called "Parent". This property should be the name of the VM.
Your command should be updated to look like:
Get-VM $vm | Get-NetworkAdapter | Select-Object Parent, Name, ConnectionState
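To write that straight to a CSV with an explicit VM name column, here is a hedged sketch; the calculated property name VMName and the output path are assumptions:
# Export vNIC state per VM; VMName comes from the adapter's Parent property
Get-VM $vm | Get-NetworkAdapter |
    Select-Object @{Name='VMName';Expression={$_.Parent.Name}}, Name, ConnectionState |
    Export-Csv -Path .\vnic_status.csv -NoTypeInformation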

Splunk - Disabling alerts during maintenance window

I have a simple CSV file loaded in Splunk called StandardMaintenance.csv which looks like this:
UnderMaintenance
NO
We currently get bombarded with alerts during our maintenance window. At the start of maintenance, I want to be able to change this to YES to stop the alerts (I have an easy way to do this). I am looking for something standard to add to all alert queries that checks this CSV for the status (a lookup, as I understand it) so that the query returns nothing if UnderMaintenance = YES and thus does not trigger the alert.
It is basically a binary, ON or OFF. I would appreciate any help you could provide.
NOTE:
You cannot disable an alert by executing a Splunk search query, because the REST API requires a POST action.
Step 1: Maintain a CSV file of all your saved searches with their owners by using the query below. You can schedule the query as you see fit. For example, the search below creates maintenance.csv and replaces all of its contents whenever it is executed.
| rest /servicesNS/-/search/saved/searches | table title eai:acl.owner | outputlookup maintenance.csv
This file gets created in $SPLUNK_HOME/etc/apps/<app name>/lookups.
Step 2: Write a script that reads the data from maintenance.csv and executes the command below to disable the searches (run it before the maintenance window); see the sketch after the command.
curl -X POST -k -u admin:pass https://<splunk server>:8089/servicesNS/<owner>/search/saved/searches/<search title>/disable
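A minimal sketch of such a script, assuming maintenance.csv has the two columns title,eai:acl.owner produced by the Step 1 search, lives in the search app's lookups folder, and contains titles that need no URL encoding:
#!/bin/bash
# Disable every saved search listed in maintenance.csv (skip the header row)
tail -n +2 "$SPLUNK_HOME/etc/apps/search/lookups/maintenance.csv" | while IFS=, read -r title owner; do
  curl -X POST -k -u admin:pass "https://<splunk server>:8089/servicesNS/${owner}/search/saved/searches/${title}/disable"
done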
Step 3: Do the same thing to enable all searches; just change the command to the one below (run it after the maintenance window).
curl -X POST -k -u admin:pass https://<splunk server>:8089/servicesNS/<owner>/search/saved/searches/<search title>/enable
EDIT 1:
Create StandardMaintenance.csv file under $SPLUNK_HOME/etc/apps/search/lookups.
The StandardMaintenance.csv file contains :
UnderMaintenance
"No"
Use the search query below to get results for the existing saved searches only if UnderMaintenance = No:
| rest /servicesNS/-/search/saved/searches
| eval UnderMaintenance = "No"
| table title eai:acl.owner UnderMaintenance
| join UnderMaintenance
[| inputlookup StandardMaintenance.csv ]
| table title eai:acl.owner
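The same pattern can be appended to any alert search so that it returns nothing while the lookup says YES; a hedged sketch, where the first line stands for your actual alert search:
<your alert search>
| eval UnderMaintenance = "No"
| join UnderMaintenance
    [| inputlookup StandardMaintenance.csv ]
| fields - UnderMaintenance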
Hope this helps!
Before each query create a variable (say it's called foo) that you set to true if maintenance is NO and that you do not set otherwise, as below:
... | eval foo=case(maintenance=="NO","true")
Then you put the below at the end of your query:
| eval foo=$foo$
This will make your query execute only if maintenance is NO.

DynamoDB Local to DynamoDB AWS

I've built an application using DynamoDB Local and now I'm at the point where I want to set it up on AWS. I've gone through numerous tools but have had no success finding a way to take my local DB, set up the schema, and migrate the data into AWS.
For example, I can get the data into a CSV format but AWS has no way to recognize that. It seems that I'm forced to create a Data Pipeline... Does anyone have a better way to do this?
Thanks in advance
As was mentioned earlier, DynamoDB Local is there for testing purposes. However, you can still migrate your data if you need to. One approach would be to save the data in some format, such as JSON or CSV, store it in S3, and then use something like Lambda functions or your own server to read from S3 and write into your new DynamoDB table. As for setting up the schema, you can use the same code you used to create your local table to create the remote table via the AWS SDK.
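For small tables, here is a hedged sketch of the same idea using only the AWS CLI and jq (MyTable is a placeholder table name, and batch-write-item accepts at most 25 items per call, so larger tables need chunking and scan pagination):
# Dump all items from the local table
aws dynamodb scan --table-name MyTable --endpoint-url http://localhost:8000 --output json > items.json
# Wrap them as PutRequests and write them to the table in AWS (max 25 items per batch)
jq '{"MyTable": [.Items[] | {PutRequest: {Item: .}}]}' items.json > batch.json
aws dynamodb batch-write-item --request-items file://batch.json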
You can create a standalone application that gets the list of tables from the local DynamoDB and creates them in your AWS account; after that you can read all the data from each table and save it there as well.
I'm not sure which language you're familiar with, but I'll explain some APIs in Java that might help you.
DynamoDB.listTables();
DynamoDB.createTable(CreateTableRequest);
An example of how to create a table using the above API:
ProvisionedThroughput provisionedThroughput = new ProvisionedThroughput(1L, 1L);
try {
    CreateTableRequest groupTableRequest = mapper.generateCreateTableRequest(Group.class); //1
    groupTableRequest.setProvisionedThroughput(provisionedThroughput); //2
    // groupTableRequest.getGlobalSecondaryIndexes().forEach(index -> index.setProvisionedThroughput(provisionedThroughput)); //3
    Table groupTable = client.createTable(groupTableRequest); //4
    groupTable.waitForActive(); //5
} catch (ResourceInUseException e) {
    log.debug("Group table already exist");
}
1 - creates the CreateTableRequest from the mapped class
2 - sets the provisioned throughput, which will vary depending on your requirements
3 - if the table has global secondary indexes, you can use this line (optional)
4 - the actual table is created here
5 - the thread blocks until the table becomes active
I didn't mention the APIs related to data access (insert, etc.); I assume you're familiar with them since you already use them against the local DynamoDB.
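If you also want to copy the data in the same application, here is a hedged sketch (imports omitted like the snippet above; it assumes a @DynamoDBTable-annotated Group class, SDK v1 clients, and a table small enough to scan in one pass):
// Hypothetical helper: copy all Group items from DynamoDB Local to the table in AWS
AmazonDynamoDB localClient = AmazonDynamoDBClientBuilder.standard()
        .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration("http://localhost:8000", "us-east-1"))
        .build();
AmazonDynamoDB awsClient = AmazonDynamoDBClientBuilder.defaultClient();
DynamoDBMapper localMapper = new DynamoDBMapper(localClient);
DynamoDBMapper awsMapper = new DynamoDBMapper(awsClient);
// Scan everything from the local table and batch-write it to AWS
List<Group> items = localMapper.scan(Group.class, new DynamoDBScanExpression());
awsMapper.batchSave(items);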
I did a little work setting up my local dev environment. I use SAM to create the DynamoDB tables in AWS. I didn't want to do the work twice, so I ended up copying the schema from AWS to my local instance. The same approach can work the other way around.
aws dynamodb describe-table --table-name chess_lobby \
| jq '.Table' \
| jq 'del(.TableArn)' \
| jq 'del(.TableSizeBytes)' \
| jq 'del(.TableStatus)' \
| jq 'del(.TableId)' \
| jq 'del(.ItemCount)' \
| jq 'del(.CreationDateTime)' \
| jq 'del(.GlobalSecondaryIndexes[].IndexSizeBytes)' \
| jq 'del(.ProvisionedThroughput.NumberOfDecreasesToday)' \
| jq 'del(.GlobalSecondaryIndexes[].IndexStatus)' \
| jq 'del(.GlobalSecondaryIndexes[].IndexArn)' \
| jq 'del(.GlobalSecondaryIndexes[].ItemCount)' \
| jq 'del(.GlobalSecondaryIndexes[].ProvisionedThroughput.NumberOfDecreasesToday)' > chess_lobby.json
aws dynamodb create-table \
--cli-input-json file://chess_lobby.json \
--endpoint-url http://localhost:8000
The top command uses the aws dynamodb describe-table CLI command to get the schema as JSON. Then I use jq to delete all the unneeded keys, since create-table is strict with its parameter validation. Then I can use create-table to create the table in the local environment by passing the --endpoint-url parameter.
You can instead use the --endpoint-url parameter on the top command to fetch your local schema, and then run create-table without the --endpoint-url parameter to create the table directly in AWS.
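Concretely, for the question's local-to-AWS direction that would look something like the sketch below (chess_lobby is still a placeholder table name; it assumes no global secondary indexes, otherwise keep the GlobalSecondaryIndexes filters from the pipeline above and drop any other read-only fields that create-table rejects):
# Export the schema from DynamoDB Local and strip the read-only keys
aws dynamodb describe-table --table-name chess_lobby \
--endpoint-url http://localhost:8000 \
| jq '.Table' \
| jq 'del(.TableArn, .TableSizeBytes, .TableStatus, .TableId, .ItemCount, .CreationDateTime, .ProvisionedThroughput.NumberOfDecreasesToday)' \
> chess_lobby.json
# Create the table directly in AWS (no --endpoint-url)
aws dynamodb create-table --cli-input-json file://chess_lobby.json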