Logstash can't communicate with Elasticsearch through the ReadonlyREST Elasticsearch plugin - authentication

I am trying to connect Logstash to Elasticsearch with authentication, but this configuration gives me the following error: [401] Forbidden by ReadonlyREST ES plugin {:class=>"Elasticsearch::Transport::Transport::Errors::Unauthorized", :level=>:error}
Configuration files are given below:
[Elasticsearch conf file]
http.cors.enabled: true
http.cors.allow-origin: /https?:\/\/localhost(:[0-9]+)?/
readonlyrest:
  enable: true
  response_if_req_forbidden: Forbidden by ReadonlyREST ES plugin
  access_control_rules:
  - name: "Logstash can write and create its own indices"
    auth_key: logstash:logstash
    type: allow
    actions: ["indices:data/read/*","indices:data/write/*","indices:admin/template/*","indices:admin/create"]
    indices: ["logstash-*", "<no_index>"]
  - name: Kibana Server (we trust this server side component, full access granted via HTTP authentication)
    auth_key: admin:pass3
    type: allow
  - name: Developer (reads only logstash indices, but can create new charts/dashboards)
    auth_key: dev:dev
    type: allow
    kibana_access: ro+
    indices: ["<no-index>", ".kibana*", "logstash*", "default"]
[logstash conf file]
input {
  file {
    path => "/var/log/site.log"
    start_position => beginning
  }
}
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  geoip {
    source => "clientip"
  }
}
output {
  elasticsearch {
    user => "logstash"
    password => "logstash"
  }
}

Specify the output in the Logstash config file as below, including the hosts option:
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    user => "logstash"
    password => "logstash"
  }
}
Related

Unable to validate the following destination configurations (s3 to SNS)

I am trying to set up an event notification system on S3 that publishes a notification to SNS when a file is uploaded to S3. Here's how I implemented it via CDK:
import * as sns from "monocdk/aws-sns";
import * as iam from "monocdk/aws-iam";
import {
  GAMMA_ACCOUNT,
  PROD_ACCOUNT,
  UAT1_ACCOUNT,
  UAT2_ACCOUNT,
  PERFECT_MILE_ACCOUNT,
} from "../utils/constants/awsAccounts";
import { Construct } from "monocdk";
import * as s3 from "monocdk/aws-s3";
import * as s3n from "monocdk/aws-s3-notifications";
import { CommonResourceStackProps, Stage } from "../stack/CommonResourcesStack";

export class S3NotificationToSNSCustomResource extends Construct {
  constructor(
    scope: Construct,
    id: string,
    bucket: s3.IBucket,
    stackProps: CommonResourceStackProps
  ) {
    super(scope, id);
    const topic = new sns.Topic(this, "Topic", {
      displayName: "Sherlock-s3-Event-Notifications-Topic",
      topicName: "Sherlock-s3-Event-Notifications-Topic",
    });
    const topicPolicy = new sns.TopicPolicy(this, "TopicPolicy", {
      topics: [topic],
    });
    const s3ServicePrincipal = new iam.ServicePrincipal("s3.amazonaws.com");
    topicPolicy.document.addStatements(
      new iam.PolicyStatement({
        sid: "0",
        actions: ["sns:Publish"],
        principals: [s3ServicePrincipal],
        resources: [topic.topicArn],
        conditions: {
          StringEquals: {
            "AWS:SourceOwner":
              stackProps.stage == Stage.Prod
                ? PROD_ACCOUNT
                : stackProps.stage == Stage.Gamma
                ? GAMMA_ACCOUNT
                : stackProps.stage == Stage.UAT1
                ? UAT1_ACCOUNT
                : UAT2_ACCOUNT,
          },
          ArnLike: { "AWS:SourceArn": bucket.bucketArn },
        },
      }),
      new iam.PolicyStatement({
        sid: "1",
        actions: ["sns:Subscribe"],
        principals: [new iam.AccountPrincipal(PERFECT_MILE_ACCOUNT)],
        resources: [topic.topicArn],
      })
    );
    bucket.addEventNotification(
      s3.EventType.OBJECT_CREATED,
      new s3n.SnsDestination(topic),
      { prefix: "output/reportingData/openItems/", suffix: "_SUCCESS" }
    );
  }
}
But when I try to deploy this, I get the following error: An error occurred (InvalidArgument) when calling the PutBucketNotificationConfiguration operation: Unable to validate the following destination configurations
Can anyone help me with it?
I read this post (https://aws.amazon.com/premiumsupport/knowledge-center/unable-validate-destination-s3/), but its resolution uses templates, while I am implementing this with the CDK package. Also, I have added all the access policies needed to publish and subscribe.
aws:SourceAccount and aws:SourceOwner are condition keys that are not supported by all services. Amazon S3 notifications use aws:SourceAccount. Refer to https://docs.aws.amazon.com/sns/latest/dg/sns-access-policy-use-cases.html#source-account-versus-source-owner
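Applied to the construct above, that means swapping the condition key in the first policy statement. A sketch of the relevant fragment (sourceAccount is a stand-in for the stage-dependent account lookup in the original code, not a name from the post):

```typescript
// Use aws:SourceAccount, which S3 notifications actually send,
// instead of aws:SourceOwner (unsupported for this service).
conditions: {
  StringEquals: { "aws:SourceAccount": sourceAccount },
  ArnLike: { "aws:SourceArn": bucket.bucketArn },
},
```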

Deploying Synapse Workspace with Managed Vnet Enabled (Bicep), but cannot assign private endpoints in UI

Situation:
I am deploying a Synapse workspace instance in Bicep with Managed Virtual Network enabled.
I can see that the Managed Vnet is enabled from the UI:
However, when I enter the workspace, my integration runtimes are not enabled for virtual network access and I cannot create managed private endpoints.
I'm using the following code for the Bicep deployment:
resource synapse_workspace 'Microsoft.Synapse/workspaces@2021-06-01' = {
  name: synapse_workspace_name
  location: location
  tags: {
    Workload: '####'
    Environment: envName
    Classification: 'Confidential'
    Criticality: 'Low'
  }
  identity: {
    type: 'SystemAssigned'
  }
  properties: {
    // Git Repo
    workspaceRepositoryConfiguration: {
      accountName: '#####'
      collaborationBranch: 'main'
      projectName: '####'
      repositoryName: '#############'
      rootFolder: '/synapse/syn-data-${envName}'
      tenantId: '####################'
      type: 'WorkspaceVSTSConfiguration'
    }
    defaultDataLakeStorage: {
      resourceId: storage_account_id
      createManagedPrivateEndpoint: true
      accountUrl: ###################
      filesystem: ################
    }
    encryption: {
      cmk: {
        kekIdentity: {
          useSystemAssignedIdentity: true
        }
        key: {
          name: 'default'
          keyVaultUrl: '#########################'
        }
      }
    }
    managedVirtualNetwork: 'default'
    connectivityEndpoints: {
      web: 'https://web.azuresynapse.net?workspace=%2fsubscriptions%######################
      dev: 'https://##############.dev.azuresynapse.net'
      sqlOnDemand: '################-ondemand.sql.azuresynapse.net'
      sql: '################.sql.azuresynapse.net'
    }
    managedResourceGroupName: guid('synapseworkspace-managed-resource-group-${envName}')
    sqlAdministratorLogin: 'sqladminuser'
    privateEndpointConnections: []
    managedVirtualNetworkSettings: {
      preventDataExfiltration: true
      allowedAadTenantIdsForLinking: []
    }
    publicNetworkAccess: 'Disabled'
    cspWorkspaceAdminProperties: {
      initialWorkspaceAdminObjectId: '#########################'
    }
    trustedServiceBypassEnabled: false
  }
}
I get no errors in the deployment regarding the virtual network or any associated settings, but the default integration runtime is still set to "Public" rather than "Managed Virtual Network".
Is this a limitation in Bicep, or am I missing a parameter?
Any help would be great.
Joao

Not able to specify a type for fields in an Elasticsearch index

I am using Logstash, Elasticsearch and Kibana.
The input file is in .csv format.
I first created the following mapping through Dev Tools > Console in Kibana:
PUT /defects
{
  "mappings": {
    "type_name": {
      "properties": {
        "Detected on Date": {
          "type": "date"
        },
        "DEFECTID": {
          "type": "integer"
        }
      }
    }
  }
}
It was successful. Then I created a Logstash conf file and ran it.
Here is my logstash.conf file:
input {
  file {
    path => ["E:/d drive/ELK/data/sample.csv"]
    start_position => "beginning"
    sincedb_path => "/dev/nul"
  }
}
filter {
  csv {
    columns => ["DEFECTID","Detected on Date","SEVERITY","STATUS"]
    separator => ","
  }
}
output {
  elasticsearch {
    action => "index"
    hosts => "localhost:9200"
    manage_template => false
    template_overwrite => true
    index => "defects"
  }
}
I created the index pattern defects* in Kibana. When I look at the type of the fields, all are shown as string. Please let me know what I am missing.

Logstash S3 input - filtering log types

I'm centralizing logs with the ELK Stack (Elasticsearch, Logstash and Kibana). It works fine, but..
There are some types of logs in my S3 bucket:
elasticbeanstalk-access-logs
error logs
tomcat7 access-logs
stacktrace logs
I'm using the S3 input plugin in my Logstash config file:
input {
  s3 {
    secret_access_key => "..."
    access_key_id => "..."
    region => "eu-central-1"
    bucket => "bucket_name"
    prefix => "resources/environments/logs/publish"
    codec => "plain"
  }
}
And I'm using some filter plugins:
filter {
  if [type] == "access" {
    mutate { replace => { type => "apache_access" } }
    grok { match => { "message" => "%{COMBINEDAPACHELOG}" } }
    date { match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ] }
  } else {
    multiline {
      #type => "all" # no type means for all inputs
      pattern => "(^.+Exception: .+)|(^\s+at .+)|(^\s+... \d+ more)|(^\s*Caused by:.+)"
      what => "previous"
    }
    grok {
      match => [ "message", "(?m)%{TIMESTAMP_ISO8601:timestamp} \[%{HOSTNAME:thread}\] %{LOGLEVEL:severity} %{GREEDYDATA:message}" ]
      overwrite => [ "message" ]
    }
    date {
      match => [ "timestamp" , "yyyy-MM-dd HH:mm:ss,SSS" ]
    }
  }
}
The problem: there are 4 log types. How can I use the 'if's to filter the logs? I used http://grokconstructor.appspot.com to test my grok filters, and each one works for a single type of log.
The solution should be something like this:
if [type] == "access" {
  # my grok filter
} else if [type] == "stacktrace" {
  # my grok filter
} else if [type] == "tomcat7" {
  # my grok filter
} ...
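One common way to get such a type onto each event (a sketch, not from the original post; the per-type prefixes are assumptions about the bucket layout) is to declare one s3 input per log location and tag it with the type option, which every Logstash input supports:

```
input {
  s3 {
    bucket => "bucket_name"
    prefix => "resources/environments/logs/publish/access/"   # hypothetical prefix
    type   => "access"
  }
  s3 {
    bucket => "bucket_name"
    prefix => "resources/environments/logs/publish/stacktrace/"   # hypothetical prefix
    type   => "stacktrace"
  }
}
```

Each event then carries [type] set by its input, so the if/else chain in the filter block can dispatch to the matching grok pattern.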
Tomcat catalina.out log:
2016-04-07 15:27:28,459 [http-bio-8080-exec-33] ERROR v1.PaymentTxController - Cannot get property 'attrs' on null object
java.lang.NullPointerException: Cannot get property 'attrs' on null object
at com.b2boost.payment.provider.paybox.PayboxPaymentProviderService.createSubscriptionAndPay(PayboxPaymentProviderService.groovy:206)
at com.b2boost.payment.provider.paybox.PayboxPaymentProviderService$__tt__pay_closure9.doCall(PayboxPaymentProviderService.groovy:82)
at com.b2boost.commons.error.AppError.safe(AppError.groovy:53)
at com.b2boost.commons.error.AppError.safe(AppError.groovy:60)
at com.b2boost.payment.provider.paybox.PayboxPaymentProviderService.$tt__pay(PayboxPaymentProviderService.groovy:73)
at com.b2boost.payment.PaymentService$__tt__pay_closure8.doCall(PaymentService.groovy:52)
at com.b2boost.commons.error.AppError.safeWithEither(AppError.groovy:70)
at com.b2boost.commons.error.AppError.safeWithEither(AppError.groovy:64)
at com.b2boost.payment.PaymentService.$tt__pay(PaymentService.groovy:43)
at com.b2boost.users.api.v1.PaymentTxController$_save_closure1.doCall(PaymentTxController.groovy:49)
at com.b2boost.users.api.v1.BaseController.documentWithAuthorization(BaseController.groovy:101)
at com.b2boost.users.api.v1.PaymentTxController.save(PaymentTxController.groovy:45)
at grails.plugin.cache.web.filter.PageFragmentCachingFilter.doFilter(PageFragmentCachingFilter.java:177)
at grails.plugin.cache.web.filter.AbstractFilter.doFilter(AbstractFilter.java:63)
at com.odobo.grails.plugin.springsecurity.rest.RestTokenValidationFilter.processFilterChain(RestTokenValidationFilter.groovy:99)
at com.odobo.grails.plugin.springsecurity.rest.RestTokenValidationFilter.doFilter(RestTokenValidationFilter.groovy:66)
at grails.plugin.springsecurity.web.filter.GrailsAnonymousAuthenticationFilter.doFilter(GrailsAnonymousAuthenticationFilter.java:53)
at com.odobo.grails.plugin.springsecurity.rest.RestAuthenticationFilter.doFilter(RestAuthenticationFilter.groovy:108)
at grails.plugin.springsecurity.web.authentication.logout.MutableLogoutFilter.doFilter(MutableLogoutFilter.java:82)
at com.odobo.grails.plugin.springsecurity.rest.RestLogoutFilter.doFilter(RestLogoutFilter.groovy:63)
at com.brandseye.cors.CorsFilter.doFilter(CorsFilter.java:82)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Error log:
[Tue Apr 12 10:01:01 2016] [notice] Apache/2.2.29 (Unix) DAV/2 configured -- resuming normal operations
Stacktrace log
2015-11-13 16:02:28,524 [MonitoringThread-118] ERROR StackTrace - Full Stack Trace:
com.notnoop.exceptions.ApnsDeliveryErrorException: Failed to deliver notification with error code 8
at com.notnoop.apns.internal.ApnsConnectionImpl$2.run(ApnsConnectionImpl.java:189)
at java.lang.Thread.run(Thread.java:745)

Puppet. Change any config/text file in most efficient way

I am learning how to use Puppet, and now I am trying to change the config file for nscd. I need to change the following lines:
server-user nscd
paranoia yes
And let's suppose that the full config looks like this:
$ cat /etc/nscd/nscd.conf
logfile /var/log/nscd.log
threads 4
max-threads 32
server-user nobody
stat-user somebody
debug-level 0
reload-count 5
paranoia no
restart-interval 3600
Previously I wrote the following module for replacing the needed lines:
include nscd

class nscd {
  define line_replace ($match) {
    file_line { $name:
      path   => '/etc/nscd/nscd.conf',
      line   => $name,
      match  => $match,
      notify => Service["nscd"],
    }
  }

  anchor { 'nscd::begin': }
  ->
  package { 'nscd':
    ensure => installed,
  }
  ->
  line_replace {
    "1": name => "server-user nscd", match => "^\s*server-user.*$";
    "2": name => "paranoia yes", match => "^\s*paranoia.*$";
  }
  ->
  service { 'nscd':
    ensure => running,
    enable => "true",
  }
  ->
  anchor { 'nscd::end': }
}
Is it possible to do the same in a more efficient way, e.g. with arrays or something similar?
I recommend using the inifile Puppet module to easily manage INI-style files like this, but you can also take advantage of the create_resources function:
include nscd

class nscd {
  $server_user_line = 'server-user nscd'
  $paranoia_line = 'paranoia yes'
  $defaults = {
    'path'   => '/etc/nscd/nscd.conf',
    'notify' => Service["nscd"],
  }
  $lines = {
    $server_user_line => {
      line  => $server_user_line,
      match => "^\s*server-user.*$",
    },
    $paranoia_line => {
      line  => $paranoia_line,
      match => "^\s*paranoia.*$",
    }
  }

  anchor { 'nscd::begin': }
  ->
  package { 'nscd':
    ensure => installed,
  }
  ->
  create_resources(file_line, $lines, $defaults)
  ->
  service { 'nscd':
    ensure => running,
    enable => "true",
  }
  ->
  anchor { 'nscd::end': }
}
So I wrote the following code:
class nscd($parameters) {
  define change_parameters() {
    file_line { $name:
      path => '/etc/nscd.conf',
      line => $name,
      # @name.split[0..-2] removes the last element,
      # no matter how many elements are in the line
      match => inline_template('<%= "^\\s*" + (@name.split[0..-2]).join("\\s*") + ".*$" %>'),
    }
  }

  anchor { 'nscd::begin': }
  ->
  package { 'nscd':
    ensure => installed,
  }
  ->
  change_parameters { $parameters: }
  ->
  service { 'nscd':
    ensure     => 'running',
    enable     => true,
    hasrestart => true,
  }
  ->
  anchor { 'nscd::end': }
}
And the class can be launched by passing a list/array to it:
class { 'nscd':
  parameters => [
    ' server-user nscd',
    ' paranoia yes',
    ' enable-cache hosts yes smth',
    ' shared hosts yes',
  ]
}
Then each element of the array is passed to the change_parameters defined type as the $name argument, after which the inline_template function generates a regexp with a one-line piece of Ruby code.
The same happens for every element of the list/array.
But anyway, I think it is better to use an ERB template for this kind of change.
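A minimal sketch of that template-based alternative (the module layout, class parameters, and template path are assumptions, not from the original): manage the whole file with a file resource rendered from an ERB template, instead of patching individual lines.

```puppet
# modules/nscd/manifests/init.pp (sketch; parameter names are hypothetical)
class nscd($server_user = 'nscd', $paranoia = 'yes') {
  file { '/etc/nscd/nscd.conf':
    content => template('nscd/nscd.conf.erb'),
    notify  => Service['nscd'],
  }
}

# modules/nscd/templates/nscd.conf.erb would then contain lines such as:
#   server-user <%= @server_user %>
#   paranoia    <%= @paranoia %>
```

This replaces the whole file on every change, so every setting is under Puppet's control, at the cost of having to carry the full config in the template.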