Using the filebeat 6.8 open source version, I'm trying to use the field rename feature. I'm not seeing any errors at startup or during processing, but the field isn't getting renamed. The logs are JSON formatted. Am I missing something in my config, or is this combination not supported yet?
filebeat.yml
processors:
  - rename:
      fields:
        - from: "a"
          to: "b"

filebeat.inputs:
  - type: log
    enabled: true
    json.keys_under_root: true
    fields_under_root: true
Sample log:
{
  "a": "blah"
}
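The result I expect is an indexed event carrying b: "blah" instead of a.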
Edit:
According to the official documentation, processors can be placed at top level or under an input.
So, perhaps what your configuration is missing is the file paths to prospect.
I'm running filebeat 7.6.2 and use this feature without any issues, but I'm fairly sure it works on version 6.8 just as well, since these fields are documented for that version. See here.
Have you tried:
filebeat.inputs:
  - type: log
    paths:
      - /path/to/your/logs
    enabled: true
    json.keys_under_root: true
    fields_under_root: true

processors:
  - rename:
      fields:
        - from: "a"
          to: "b"
Processors should be declared after inputs, AFAIK. As for not seeing any errors, I'm not sure how far beyond simple YAML linting filebeat goes when validating your configuration file.
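If it still doesn't rename, one variant worth trying is declaring the processor under the input itself, which the documentation also allows. A minimal sketch (the path is a placeholder); ignore_missing and fail_on_error are documented options of the rename processor that make failures visible in the logs instead of silent:

filebeat.inputs:
  - type: log
    paths:
      - /path/to/your/logs
    json.keys_under_root: true
    processors:
      - rename:
          fields:
            - from: "a"
              to: "b"
          ignore_missing: false  # complain if the "a" field is absent
          fail_on_error: true    # log an error instead of continuing silently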
My goal is to update all includes in my gitlab-ci.yml files. By default, without any custom configuration, Renovate creates an MR with an include update based on gitlab-release.
But today I have some includes based only on a tag, and there isn't any release associated with that tag. I'm looking for a way to update these includes as well.
To explain: if I have a release for myproject named 1.2.3, plus tags 1.2 and 1:
include:
  # Bash template
  - project: "myproject"
    ref: "1.2.2"
    file: "templates/gitlab-ci.yml"
Renovate detects that there is a new release, 1.2.3. That's OK.
If I have:
include:
  # Bash template
  - project: "myproject"
    ref: "1.1"
    file: "templates/gitlab-ci.yml"
Renovate doesn't detect the tag named 1.2 for myproject.
Have you tried regex managers?
Here's an example; change it for your needs:
"regexManagers": [
{
"fileMatch": ["(^|/)\\.?gitlab-ci\\.yml$"],
"matchStringsStrategy": "combination",
"matchStrings": [
"\\s\\sCHART_SOURCES_URL: \"(?<depName>.*?)\"\n",
"\\s\\sCHART_SOURCES_VERSION: \"(?<currentValue>.*?)\"\n"
],
"datasourceTemplate": "git-tags"
}
],
I believe you could also set enabled=false for the actual gitlab-ci manager using matchManagers in packageRules.
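Something like this sketch should do it (matchManagers and enabled are standard packageRules options; gitlabci-include is, as far as I know, the name of the manager that handles GitLab CI includes):

"packageRules": [
  {
    "matchManagers": ["gitlabci-include"],
    "enabled": false
  }
]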
I am doing the dbt hello world tutorial found here, and I have created my first project on a Windows machine. My profiles.yml file looks like this:
my-bigquery-db:
target: dev
outputs:
dev:
type: bigquery
method: service-account
project: abiding-operand-286102
dataset: xxxDBTtestO1
threads: 1
keyfile: C:\Users\xxx\.dbt\abiding-operand-286102-ed9e3c9c13cd.json
timeout_seconds: 300
when I execute dbt run I get:
Running with dbt=0.17.2 Encountered an error while reading profiles: ERROR Runtime Error
dbt encountered an error while trying to read your profiles.yml
file.
Profile my-bigquery-db in profiles.yml is empty
Defined profiles:
my-bigquery-db
target
outputs
dev
type
method
project
dataset
threads
keyfile
timeout_seconds
Any idea?
At first glance, from both your code and the source walkthrough, this is just a YAML config problem. YAML is whitespace-sensitive, and the error listing gives it away: target, outputs, dev, and so on are reported as separate top-level profiles, which means nothing is actually nested under my-bigquery-db. Just looking at the example you may have pulled from, it doesn't look appropriately indented to me.
I'm not sure if you can simply copy from the below, but it might be worth a shot.
my-bigquery-db:
  target: dev
  outputs:
    dev:
      type: bigquery
      method: service-account
      project: abiding-operand-286102
      dataset: xxxDBTtestO1
      threads: 1
      keyfile: C:\Users\xxx\.dbt\abiding-operand-286102-ed9e3c9c13cd.json
      timeout_seconds: 300
Basically, your dbt profiles.yml needs to be set up with the sections nested at the right levels (not unlike Python indentation or any other whitespace scheme).
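Once the indentation is fixed, running dbt debug is a quick sanity check: it parses profiles.yml and attempts a connection with the selected target, so whitespace problems like this surface immediately.

dbt debug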
So I'm pretty new to CloudFormation and also to the Serverless framework. I've been working through some exercises (such as an automatic thumbnail generator) and then creating some simple projects that I can hopefully generalize for my own purposes.
Right now I'm attempting to create a stack/function that creates two S3 buckets and has the Lambda function take a CSV file from one, perform some simple transformations, and place it in the receiving bucket.
Building off the exercise I've done, I created a YAML file with the following code:
provider:
  name: aws
  runtime: python3.8
  region: us-east-1
  profile: serverless-admin
  timeout: 10
  memorySize: 128
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "s3:*"
      Resource: "*"

custom:
  assets:
    targets:
      - bucket1: csvbucket1-08-16-2020
        pythonRequirements:
          dockerizePip: true
      - bucket2: csvbucket2-08-16-2020
        pythonRequirements:
          dockerizePip: true

functions:
  protomodel-readcsv:
    handler: handler.readindata
    events:
      s3:
        - bucket: ${self:custom.bucket1}
          event: s3:ObjectCreated:*
          suffix: .csv
        - bucket: ${self:custom.bucket2}

plugins:
  - serverless-python-requirements
  - serverless-s3-deploy
However, when I do a serverless deploy from my command prompt, I get:
Serverless Warning --------------------------------------
A valid service attribute to satisfy the declaration 'self:custom.bucket1' could not be found.
Serverless Warning --------------------------------------
A valid service attribute to satisfy the declaration 'self:custom.bucket2' could not be found.
Serverless Error ---------------------------------------
Events for "protomodel-readcsv" must be an array, not an object
I've tried to make events an array in protomodel-readcsv by adding a -, but I then get a bad indentation error that for some reason I cannot reconcile. More fundamentally, I'm not exactly sure why that item needs to be an array anyway, and I wasn't clear about the warnings for the buckets either.
Sorry about a pretty newbie question, but running tutorials/examples online leaves a lot to figure out when trying to generalize/customize them.
custom:
  assets:
    targets:
      - bucket1

I guess you need self:custom.assets.targets.bucket1, though I'm not sure this nested assets structure will work.
Please check the example below, which is supposed to work:
service: MyService

custom:
  deploymentBucket: s3_my_bucket

provider:
  name: aws
  deploymentBucket: ${self:custom.deploymentBucket}
  stage: dev
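For the events error specifically: events must be a list, so each s3 event needs its own dash. A minimal sketch, assuming you flatten the bucket names into plain custom keys (the names here just mirror the question), and with the suffix filter under rules as in the Serverless docs:

custom:
  bucket1: csvbucket1-08-16-2020
  bucket2: csvbucket2-08-16-2020

functions:
  protomodel-readcsv:
    handler: handler.readindata
    events:
      - s3:
          bucket: ${self:custom.bucket1}
          event: s3:ObjectCreated:*
          rules:
            - suffix: .csv
      - s3:
          bucket: ${self:custom.bucket2}
          event: s3:ObjectCreated:*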
I am using the npm package https://www.npmjs.com/package/serverless-step-functions-offline for running Step Functions offline. However, I get this output from serverless:
Function "SaveSlotDetails" does not presented in serverless.yml
I have followed the steps exactly as per the documentation, but I am not able to run the step function locally.
Below is the relevant content of my serverless.yml:
custom:
  stepFunctionsOffline:
    SaveSlotDetails: CreateSubscription
functions: # add 4 functions for CRUD
  createSubscription:
    handler: handlers/subscriptions.create
    name: CreateSubscription
    events:
      - http:
          path: subscriptions # path will be domain.name.com/dev/subscriptions
          method: post
          cors: true
stepFunctions:
  stateMachines:
    SlotCheckingMachine:
      name: ProcessSlotAvailabilityStateMachine
      definition:
        StartAt: SaveSlotDetails
        TimeoutSeconds: 3600
        States:
          SaveSlotDetails:
            Type: Task
            Resource: "arn:aws:lambda:us-east-1:269266452438:function:CreateSlot"
            Next: "SearchSubscriptions"
I have tried using both function names, createSubscription and CreateSubscription, but nothing helps. I checked previously raised issues, but they don't help much.
I tried using versions 2.1.2 and 2.1.1, but neither works. Any help would be appreciated.
I've been struggling to get api-server 1.2.2 to run with etcd secured with TLS.
I am upgrading from 1.1.2 to 1.2.2.
In 1.1.2 I was using the --etcd-config flag and had a file that looked like:
{
  "cluster": {
    "machines": [
      "https://XXX.XXX.XXX.XXX:2379",
      "https://XXX.XXX.XXX.XXY:2379",
      "https://XXX.XXX.XXX.XXZ:2379"
    ]
  },
  "config": {
    "certFile": "/etc/ssl/etcd/etcd-peer.cert.pem",
    "keyFile": "/etc/ssl/etcd/private/etcd-peer.key.pem",
    "caCertFiles": [
      "/etc/ssl/etcd/ca-chain.cert.pem"
    ],
    "consistency": "STRONG_CONSISTENCY"
  }
}
Now this is no longer supported, so I switched to using the flags:
--etcd-cafile="/etc/ssl/etcd/ca-chain.cert.pem"
--etcd-certfile="/etc/ssl/etcd/etcd-peer.cert.pem"
--etcd-keyfile="/etc/ssl/etcd/private/etcd-peer.key.pem"
--etcd-servers="https://XXX.XXX.XXX.XXX:2379, https://XXX.XXX.XXX.XXY:2379,https://XXX.XXX.XXX.XXZ:2379"
Now I am getting this error:
F0421 00:54:40.133777 1 server.go:291] Invalid storage version or misconfigured etcd: open "/etc/ssl/etcd/etcd-peer<nodeIP>.cert.pem": no such file or directory
So it seems like it cannot find the cert file.
The file paths and names are the same as before, and they are mounted with hostPath the exact same way as with v1.1.2, so I don't understand why api-server would not find them.
I have been trying to figure out what is going on with the file paths by simply switching the command in the pod from
- /hyperkube
- api-server
...
to
- /bin/sleep
- 60
but kubelet won't start this pod, for some reason I don't understand.
Does it have to do with the yaml file name or something? I just don't understand why kubelet won't run with this command.
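In hindsight, the sleep experiment probably failed for an unrelated reason: in a pod spec, command must be a list of strings, and an unquoted 60 is parsed by YAML as an integer, which the API server rejects. Quoting it should work:

command:
  - /bin/sleep
  - "60"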
Any help with this would be greatly appreciated.
Thanks
UPDATE
I was able to get into the running container after replacing the command with /hyperkube scheduler.
I can cat the files that apiserver is complaining about, so I don't understand why they're not found.
Well, the culprit was as simple as the double quotes (""). This:
--etcd-cafile="/etc/ssl/etcd/ca-chain.cert.pem"
--etcd-certfile="/etc/ssl/etcd/etcd-peer.cert.pem"
--etcd-keyfile="/etc/ssl/etcd/private/etcd-peer.key.pem"
--etcd-servers="https://XXX.XXX.XXX.XXX:2379, https://XXX.XXX.XXX.XXY:2379,https://XXX.XXX.XXX.XXZ:2379"
is WRONG
but this works (note the stray space in the server list is gone too):
--etcd-cafile=/etc/ssl/etcd/ca-chain.cert.pem
--etcd-certfile=/etc/ssl/etcd/etcd-peer.cert.pem
--etcd-keyfile=/etc/ssl/etcd/private/etcd-peer.key.pem
--etcd-servers=https://XXX.XXX.XXX.XXX:2379,https://XXX.XXX.XXX.XXY:2379,https://XXX.XXX.XXX.XXZ:2379
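The likely explanation, assuming these flags are passed as args in the pod manifest: no shell is involved there, so nothing strips the double quotes and they become literal characters in the flag values. The apiserver then looks for a file whose name literally starts with a quote, which is exactly what the open "..." error above shows. The stray space after the first comma in --etcd-servers would similarly have ended up inside the second URL.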