How do you configure UdpInput to work with the heka-flood udp test

I am trying to test sending data to Heka's UdpInput with no success. I decided to try the heka-flood tool to mimic UDP traffic, also with no success. I am using version 0.10 of Heka. My heka.toml:
[UdpInput]
address = "127.0.0.1:4880"
net = "udp"
splitter = "udp_splitter"
decoder = "ProtobufDecoder"
set_hostname = true
# I have also tried not setting this as well
[udp_splitter]
type = "HekaFramingSplitter"
[ProtobufDecoder]
[LogOutput]
type = "LogOutput"
message_matcher = "Logger == 'UdpInput'"
encoder = "PayloadEncoder"
and my flood.toml:
[udp_proto]
ip_address = "127.0.0.1:4880"
sender = "udp"
pprof_file = ""
encoder = "protobuf"
num_messages = 1000
corrupt_percentage = 0.0001
signed_percentage = 0.00011
variable_size_messages = false
ascii_only = true
max_message_size = 32000
If I add another input, say a log tailer, and add it to the message matcher for the LogOutput, those messages end up being logged. I never see anything from the UdpInput. What am I doing wrong?
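As a sanity check (this stripped-down config is an assumption, not part of the original setup), a plain-text UdpInput on a separate port can confirm that UDP packets reach Heka at all, independently of the protobuf framing:
# Illustrative plain-text UDP input; the port and section names are placeholders
[PlainUdpInput]
type = "UdpInput"
address = "127.0.0.1:4881"
splitter = "TokenSplitter"

[PlainLogOutput]
type = "LogOutput"
message_matcher = "Logger == 'PlainUdpInput'"
encoder = "PayloadEncoder"

[PayloadEncoder]
Sending something like echo "hello" | nc -u -w1 127.0.0.1 4881 should then show up in Heka's output; if it does, the UDP path works and the problem lies on the framing/protobuf side of the original config.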

Can I bind a Lambda Layer directly to a static ARN instead of a zip file

I want to use an AWS-provided layer in a Lambda function. In Terraform, what is the preferred way to bind it? Also, can the ARN be bound directly to the layers property of the module, bypassing the need for defining the layer resource?
resource "aws_lambdas_layer" "lambda_layer"{
#filename = "python32-pandas.zip"
layer_name= "aws-pandas-py38-layer"
arn = "arn:aws:lambda:us-east-1:xxxxxx:AWSSDKPandas-Python38:1" #? Is this valid
}
module "lambda_test" {
source = "git::https://git.my-custom-aws-lambda.git"
application = var.application
service = "${var.service}-test"
file_path = "lambda_function.zip"
publicly_accessible = false
data_classification = "confidential"
handler = "lambda_function.lambda_handler"
runtime = "python3.8"
tfs_releasedefinitionname = ""
tfs_releasename = "0"
vpc_enabled = true
vpc_application_tag = "aws-infra"
promote = true
rollback = false
create_cwl_group = true
cwl_prefix = "my-project"
create_cwl_subscription = false
#Could layers an arn?
layers = [aws_lambda_layer_version.lambda_layer.arn]
timeout = 600 ####10 mins
memory_size = 1024 #### 1GB
environment = {
variables = {
destination_bucket_name = "us-east-1-my-sbx-${terraform.workspace}"
}
}
}
Doh! The layers property is an array. Minor lapse of reading comprehension on my part :/
The solution is to set layers to an array of ARN strings pointing to the AWS-provided or custom layer version(s).
layers = ["arn:aws:lambda:us-east-1:336392948345:layer:AWSSDKPandas-Python39:1"]

Requested module experienced an error while loading - Server - Data:11

So, I'm coding a Roblox game and this script gets an error every time. I even restarted Roblox Studio to try to fix it, but that didn't work, and I tried messing around with the code but couldn't figure it out. Can someone please help?
Script:
local PetModule = require(ServerModules.PetModule)
Module Code:
local module = {}
local ReplicatedStorage = game:GetService("ReplicatedStorage")
local Pet = ReplicatedStorage.Pet
function module.EquipPet(Player, PetName)
    local PetModel = Pet:FindFirstChild(PetName)
    if PetModel then
        PetModel = PetModel:Clone()
        PetModel.Parent = workspace.Pet:FindFirstChild((Player.Name))
        if Player then
            local Character = Player.Character
            if Character then
                if not Character.HumanoidRootPart:FindFirstChild("PetAttachments") then
                    local PetAttachments = Instance.new("Folder")
                    PetAttachments.Name = "PetAttachments"
                    PetAttachments.Parent = Character.HumanoidRootPart
                    local PetAttachments = Character.HumanoidRootPart:FindFirstChild("PetAttachments")
                    if PetAttachments then
                        local att0 = Instance.new("Attachment")
                        att0.Name = "Attachment1"
                        att0.Position = PetModel:FindFirstChild("AttachmentPosition").Value
                        att0.Parent = Character.HumanoidRootPart
                        local att1 = Instance.New("Attachment")
                        att1.Name = "Attachment2"
                        att1.Parent = PetModel.PrimaryPart
                        local AlignPosition = Instance.new("AlignPosition")
                        AlignPosition.Attachment0 = att0
                        AlignPosition.Attachment1 = att1
                        AlignPosition.RigidityEnabled = false
                        AlignPosition.MaxForce = PetModel.MaxForce.Value
                        AlignPosition.Responsiveness = PetModel.Responsiveness.Value
                        AlignPosition.Parent = PetModel.PrimaryPart
                        local AlignOrientation = Instance.new("AlignOrientation")
                        AlignOrientation.Attachment0 = att0
                        AlignOrientation.Attachment1 = att1
                        AlignOrientation.RigidityEnabled = false
                        AlignOrientation.MaxTorque = PetModel.MaxForce.Value
                        AlignOrientation.Responsiveness = PetModel.Responsiveness.Value
                        AlignOrientation.Parent = PetModel.PrimaryPart
                        game:GetService("RunService").Heartbeat:Connect(function()
                            att0.Position = PetModel.AttachmentPosition.Value
                            AlignPosition.MaxForce = PetModel.MaxForce.Value
                            AlignOrientation.MaxTorque = PetModel.MaxForce.Value
                            AlignPosition.Responsiveness = PetModel.Responsiveness
                        end)
                    end
                end
            end
        end
    end
function module.UnequipPet(Player)
end
function module.UnequipAllPet(Player)
end
return module
end
If anyone could help me fix this it would be great.
Make sure to put "return module" at the end!
Your return module is within the EquipPet function, because that function is never closed with its own end. Put it free at the end of the code, as the last top-level statement of the ModuleScript.
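In other words, close EquipPet before declaring the other functions, so the module looks like this trimmed sketch (the body of EquipPet is elided):
local module = {}

function module.EquipPet(Player, PetName)
    -- pet setup code from the question goes here
end -- this end closes EquipPet

function module.UnequipPet(Player)
end

function module.UnequipAllPet(Player)
end

return module -- last top-level statement; no stray end after this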
I'm not sure, but I think it's because you're creating an event connection within the module; I had this problem just now. I stopped creating event connections inside the module and the problem went away.
Try removing this from the code:
game:GetService("RunService").Heartbeat:Connect(function()
att0.Position = PetModel.AttachmentPosition.Value
AlignPosition.MaxForce = PetModel.MaxForce.Value
AlignOrientation.MaxTorque = PetModel.MaxForce.Value
AlignPosition.Responsiveness = PetModel.Responsiveness
end)
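A hypothetical way to follow that suggestion, assuming EquipPet is changed to return the instances the loop needs, is to run the Heartbeat update from the calling Script instead of the module:
-- In a server Script; assumes EquipPet returns att0, PetModel, AlignPosition, AlignOrientation
-- Player and PetName come from wherever EquipPet was previously called (e.g., a RemoteEvent handler)
local RunService = game:GetService("RunService")
local PetModule = require(ServerModules.PetModule)

local att0, PetModel, AlignPosition, AlignOrientation = PetModule.EquipPet(Player, PetName)
RunService.Heartbeat:Connect(function()
    att0.Position = PetModel.AttachmentPosition.Value
    AlignPosition.MaxForce = PetModel.MaxForce.Value
    AlignOrientation.MaxTorque = PetModel.MaxForce.Value
    AlignPosition.Responsiveness = PetModel.Responsiveness.Value
end)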

OpenIO swift deny host headers

OpenIO 7.2.0.
I have an OpenIO cluster with Keystone (Queens) auth.
By default any user can configure their own ACLs and public URL.
I would like to restrict users to only reading and writing containers and objects.
Apparently deny_host_headers in proxy-server.conf can do the job, but it does not seem to be working: nothing happens.
I didn't find any "super admin" ACLs.
Any idea?
My proxy-server.conf:
# OpenIO managed
[DEFAULT]
use_stderr = False
bind_ip = ip
bind_port = port
workers = 72
max_clients = 1024
user = openio
log_facility = /dev/log
log_header = true
log_level = INFO
log_name = OIO,OPENIO,oioswift,0
eventlet_debug = false
sds_namespace = OPENIO
sds_proxy_url = http://ip:port
sds_default_account = openio
sds_connection_timeout = 5
sds_read_timeout = 35
sds_write_timeout = 35
sds_pool_connections = 500
sds_pool_maxsize = 500
sds_max_retries = 0
sds_tls = False
[pipeline:main]
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache bulk proxy-logging authtoken keystoneauth proxy-logging copy container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server
[filter:catch_errors]
use = egg:swift#catch_errors
[filter:gatekeeper]
use = egg:swift#gatekeeper
[filter:healthcheck]
use = egg:oioswift#healthcheck
[filter:proxy-logging]
use = egg:swift#proxy_logging
access_log_headers = false
access_log_headers_only =
[filter:cache]
use = egg:swift#memcache
memcache_servers = ip:port
memcache_max_connections = 10
oio_cache = False
oio_cache_ttl = 0
[filter:bulk]
use = egg:swift#bulk
#[filter:tempurl]
#use = egg:swift#tempurl
#[filter:swift3]
#use = egg:swift3#swift3
#force_swift_request_proxy_log = True
#s3_acl = True
#check_bucket_owner = True
#location = us-east-1
#max_bucket_listing = 1000
#max_multi_delete_objects = 1000
#max_upload_part_num = 10000
#log_s3api_command = False
#bucket_db_enabled = True
#bucket_db_prefix = s3bucket:
#storage_domain = s3.openio.io
#bucket_db_master_name = OPENIO-master-1
#bucket_db_sentinel_hosts = ip:port
#[filter:tempauth]
#use = egg:oioswift#tempauth
#user_demo_demo = DEMO_PASS .admin
[filter:copy]
use = egg:oioswift#copy
object_post_as_copy = False
[filter:container-quotas]
use = egg:swift#container_quotas
[filter:account-quotas]
use = egg:swift#account_quotas
[filter:slo]
use = egg:oioswift#slo
max_manifest_segments = 10000
concurrency = 10
[filter:dlo]
use = egg:swift#dlo
[filter:versioned_writes]
use = egg:oioswift#versioned_writes
allow_versioned_writes = True
[app:proxy-server]
use = egg:oioswift#main
object_post_as_copy = False
allow_account_management = True
account_autocreate = True
sds_chunk_checksum_algo =
deny_host_headers = x-container-sync-key, x-container-sync-to, x-account-meta-temp-url-key, x-account-meta-temp-url-key-2, x-container-meta-temp-url-key, x-container-meta-temp-url-key-2, x-account-access-control
[filter:authtoken]
auth_type = password
#username = swift
username = user
project_name = user
region_name = region
user_domain_id = domain
memcache_secret_key = memcache_secret_key
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
insecure = True
cache = swift.cache
delay_auth_decision = True
token_cache_time = 300
auth_url = http://ip:port
include_service_catalog = False
www_authenticate_uri = http://ip:port
memcached_servers = ip:port
password = password
revocation_cache_time = 60
memcache_security_strategy = ENCRYPT
project_domain_id = dommain
[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = role
reseller_admin_role = role
Setting delay_auth_decision = False in the [filter:authtoken] section of proxy-server.conf does the job.
delay_auth_decision: defaults to False, but leaving it as False will prevent other auth systems, staticweb, tempurl, formpost, and ACLs from working. This value must be explicitly set to True.
Now only file owners can view/create/edit containers/objects; ACLs and sharing won't work.
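For clarity, the only line that changes from the configuration above is in the authtoken filter:
[filter:authtoken]
# ... keep the existing authtoken settings ...
delay_auth_decision = False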

Apache Flume with 2 different interceptors on same source

I am trying to add two different interceptors on the same source and send the intercepted data to two different channels.
But I was not able to configure this, and couldn't find any documentation about it. I am also having some issues with the channel selectors: I am not sure how to select a channel for each of the interceptors.
Here is my code so far:
a1.sources = syslog_udp
a1.channels = chan1 chan2
a1.sinks = sink1 sink2 //both are different kafka sinks
a1.sources.syslog_udp.type = syslogudp
a1.sources.syslog_udp.port = 514
a1.sources.syslog_udp.host = 0.0.0.0
a1.sources.syslog_udp.keepFields = true
a1.sources.syslog_udp.interceptors = i1 i2
a1.sources.syslog_udp.interceptors.i1.type = regex_filter
a1.sources.syslog_udp.interceptors.i1.regex = '<regex_string1>'
a1.sources.syslog_udp.interceptors.i1.excludeEvents = false
a1.sources.syslog_udp.interceptors.i2.type = regex_filter
a1.sources.syslog_udp.interceptors.i2.regex = '<regex_string1>'|'<regex_string2>'
a1.sources.syslog_udp.interceptors.i2.excludeEvents = false
a1.sources.syslog_udp.selector.type = multiplexing
a1.sources.syslog_udp.channels = chan1 chan2
a1.channels.chan1.type = memory
a1.channels.chan1.capacity = 200
a1.channels.chan2.type = memory
a1.channels.chan2.capacity = 200
It seems there is no straightforward setup for this.
A workaround for this kind of layout is to have a single, wider interceptor in one agent, pipe its output to an Avro sink, set up a second agent with an Avro source, and configure the second interceptor on that agent, as sketched below.
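A minimal sketch of that two-agent layout, assuming both agents run on the same host; the Avro port, regexes, and Kafka settings are placeholders:
# Agent a1: original syslog source, first (wider) interceptor, Avro handoff
a1.sources = syslog_udp
a1.channels = chan1
a1.sinks = avro_out

a1.sources.syslog_udp.type = syslogudp
a1.sources.syslog_udp.port = 514
a1.sources.syslog_udp.host = 0.0.0.0
a1.sources.syslog_udp.interceptors = i1
a1.sources.syslog_udp.interceptors.i1.type = regex_filter
a1.sources.syslog_udp.interceptors.i1.regex = <regex_string1>
a1.sources.syslog_udp.channels = chan1

a1.channels.chan1.type = memory
a1.sinks.avro_out.type = avro
a1.sinks.avro_out.hostname = 127.0.0.1
a1.sinks.avro_out.port = 4545
a1.sinks.avro_out.channel = chan1

# Agent a2: Avro source, second interceptor, Kafka sink
a2.sources = avro_in
a2.channels = chan2
a2.sinks = sink2

a2.sources.avro_in.type = avro
a2.sources.avro_in.bind = 0.0.0.0
a2.sources.avro_in.port = 4545
a2.sources.avro_in.interceptors = i2
a2.sources.avro_in.interceptors.i2.type = regex_filter
a2.sources.avro_in.interceptors.i2.regex = <regex_string2>
a2.sources.avro_in.channels = chan2

a2.channels.chan2.type = memory
a2.sinks.sink2.type = org.apache.flume.sink.kafka.KafkaSink
a2.sinks.sink2.channel = chan2
# Kafka broker/topic settings as in the original sinks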

How to use regex_extractor selector and multiplexing interceptor together in flume?

I am testing Flume to load data into HBase and am thinking about parallel data loading using Flume's selector and interceptor, because of the speed gap between source and sink.
So, what I want to do with Flume is:
create an Event header with an interceptor of the regex_extractor type
multiplex Events by that header to two or more channels with a selector of the multiplexing type
all in one source-channel-sink flow.
I tried the configuration below.
agent.sources = tailsrc
agent.channels = mem1 mem2
agent.sinks = std1 std2
agent.sources.tailsrc.type = exec
agent.sources.tailsrc.command = tail -F /home/flumeuser/test/in.txt
agent.sources.tailsrc.batchSize = 1
agent.sources.tailsrc.interceptors = i1
agent.sources.tailsrc.interceptors.i1.type = regex_extractor
agent.sources.tailsrc.interceptors.i1.regex = ^(\\d)
agent.sources.tailsrc.interceptors.i1.serializers = t1
agent.sources.tailsrc.interceptors.i1.serializers.t1.name = type
agent.sources.tailsrc.selector.type = multiplexing
agent.sources.tailsrc.selector.header = type
agent.sources.tailsrc.selector.mapping.1 = mem1
agent.sources.tailsrc.selector.mapping.2 = mem2
agent.sinks.std1.type = file_roll
agent.sinks.std1.channel = mem1
agent.sinks.std1.batchSize = 1
agent.sinks.std1.sink.directory = /var/log/flumeout/1
agent.sinks.std1.rollInterval = 0
agent.sinks.std2.type = file_roll
agent.sinks.std2.channel = mem2
agent.sinks.std2.batchSize = 1
agent.sinks.std2.sink.directory = /var/log/flumeout/2
agent.sinks.std2.rollInterval = 0
agent.channels.mem1.type = memory
agent.channels.mem1.capacity = 100
agent.channels.mem2.type = memory
agent.channels.mem2.capacity = 100
But it doesn't work!
When the selector part is removed, there are some interceptor debugging messages in Flume's log, but when the selector and interceptor are used together, there is nothing.
Is there a wrong expression or something I missed?
Thanks for reading. :)
I found it.
In the Flume log, there was a warning message as below:
2013-10-10 16:34:20,514 (conf-file-poller-0) [WARN - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.validateSources(FlumeConfiguration.java:571)] Removed tailsrc due to Failed to configure component!
So I added the line below:
agent.sources.tailsrc.channels = mem1 mem2
and then it works!
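For reference, the complete source block with the missing channels line in place:
agent.sources.tailsrc.type = exec
agent.sources.tailsrc.command = tail -F /home/flumeuser/test/in.txt
agent.sources.tailsrc.batchSize = 1
agent.sources.tailsrc.channels = mem1 mem2
agent.sources.tailsrc.interceptors = i1
agent.sources.tailsrc.interceptors.i1.type = regex_extractor
agent.sources.tailsrc.interceptors.i1.regex = ^(\\d)
agent.sources.tailsrc.interceptors.i1.serializers = t1
agent.sources.tailsrc.interceptors.i1.serializers.t1.name = type
agent.sources.tailsrc.selector.type = multiplexing
agent.sources.tailsrc.selector.header = type
agent.sources.tailsrc.selector.mapping.1 = mem1
agent.sources.tailsrc.selector.mapping.2 = mem2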