How to increase request timeout for http4s

I have a request that talks to a backend DB that takes a long time to respond, and http4s is throwing a request timeout. Is there a property to increase the request timeout?
Thanks
Saad.

Server Timeouts
BlazeBuilder can easily be adjusted. The default implementation is:
import java.net.InetSocketAddress
import org.http4s._
import org.http4s.server.blaze._
import scala.concurrent.duration._

BlazeBuilder(
  socketAddress = InetSocketAddress.createUnresolved(LoopbackAddress, 8080),
  serviceExecutor = DefaultPool, // see org.http4s.util.threads - ExecutorService
  idleTimeout = 30.seconds,
  isNio2 = false,
  connectorPoolSize = math.max(4, Runtime.getRuntime.availableProcessors() + 1),
  bufferSize = 64 * 1024,
  enableWebSockets = true,
  sslBits = None,
  isHttp2Enabled = false,
  maxRequestLineLen = 4 * 1024,
  maxHeadersLen = 40 * 1024,
  serviceMounts = Vector.empty
)
We can start from the default and change only what we need, since the builder implements a copy method.
import org.http4s._
import org.http4s.server.blaze._
import scala.concurrent.duration._

BlazeBuilder.copy(idleTimeout = 5.minutes)
You can then proceed with your server however you would like, adding your services and then serving.
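For example, a minimal sketch against the pre-0.17 blaze API shown above (the service, route, and bind address are illustrative, not from the original answer):
import org.http4s._
import org.http4s.dsl._
import org.http4s.server.blaze._
import scala.concurrent.duration._

// Illustrative service; any HttpService will do.
val helloService = HttpService {
  case GET -> Root / "hello" => Ok("hi")
}

// Start from the default builder, raise the idle timeout, mount, and serve.
val server = BlazeBuilder
  .copy(idleTimeout = 5.minutes)
  .bindHttp(8080, "localhost")
  .mountService(helloService, "/")
  .run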
Client Timeouts
BlazeClient takes a config case class called BlazeClientConfig. The default is:
import org.http4s._
import org.http4s.client._
import scala.concurrent.duration._

BlazeClientConfig(
  idleTimeout = 60.seconds,
  requestTimeout = Duration.Inf,
  userAgent = Some(
    `User-Agent`(AgentProduct("http4s-blaze", Some(BuildInfo.version)))
  ),
  sslContext = None,
  checkEndpointIdentification = true,
  maxResponseLineSize = 4 * 1024,
  maxHeaderLength = 40 * 1024,
  maxChunkSize = Integer.MAX_VALUE,
  lenientParser = false,
  bufferSize = 8 * 1024,
  customExecutor = None,
  group = None
)
However, a default config already exists, and since it is a case class you are probably better off modifying the default. Use PooledHttp1Client in most cases.
import scala.concurrent.duration._
import org.http4s.client._

val longTimeoutConfig =
  BlazeClientConfig
    .defaultConfig
    .copy(idleTimeout = 5.minutes)

val client = PooledHttp1Client(
  maxTotalConnections = 10,
  config = longTimeoutConfig
)
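You can then use the client as usual; a quick usage sketch (continuing from the snippet above, with an illustrative URL):
// A call that may take a long time; the 5-minute idleTimeout now applies.
val body: String = client.expect[String]("http://localhost:8080/slow-report").unsafePerformSync

// Release the connection pool once the client is no longer needed.
client.shutdownNow()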

Related

Can I bind a Lambda Layer directly to a static ARN instead of a zip file

I want to use an AWS-provided Layer in a Lambda function. In Terraform, what is the preferred way to bind it? Also, can the ARN be bound directly to the layers property of the module, bypassing the need for defining the layer?
resource "aws_lambdas_layer" "lambda_layer"{
#filename = "python32-pandas.zip"
layer_name= "aws-pandas-py38-layer"
arn = "arn:aws:lambda:us-east-1:xxxxxx:AWSSDKPandas-Python38:1" #? Is this valid
}
module "lambda_test" {
source = "git::https://git.my-custom-aws-lambda.git"
application = var.application
service = "${var.service}-test"
file_path = "lambda_function.zip"
publicly_accessible = false
data_classification = "confidential"
handler = "lambda_function.lambda_handler"
runtime = "python3.8"
tfs_releasedefinitionname = ""
tfs_releasename = "0"
vpc_enabled = true
vpc_application_tag = "aws-infra"
promote = true
rollback = false
create_cwl_group = true
cwl_prefix = "my-project"
create_cwl_subscription = false
#Could layers an arn?
layers = [aws_lambda_layer_version.lambda_layer.arn]
timeout = 600 ####10 mins
memory_size = 1024 #### 1GB
environment = {
variables = {
destination_bucket_name = "us-east-1-my-sbx-${terraform.workspace}"
}
}
}
Doh! The layers property is an array. Minor lapse of reading comprehension on my part :/
The solution is to set layers to an array of ARN strings pointing to the AWS-provided or custom layer(s):
layers = ["arn:aws:lambda:us-east-1:336392948345:layer:AWSSDKPandas-Python39:1"]

How to receive information in Kotlin from a server in Python (socketserver)

I have tried almost everything to receive text from Python. I don't know whether the problem comes from the client or from the server.
Server:
try:
    llamadacod = self.request.recv(1024)
    llamada = self.decode(llamadacod)
    print(f"{color.A}{llamada}")
    time.sleep(0.1)
    if llamada == "conectado":
        msg = "Hello"
        msgcod = self.encode(msg)
        print(f"{color.G}{msg}")
        self.request.send(msgcod)
Client:
val thread = Thread(Runnable {
    try {
        val client = Socket("localhost", 25565)
        client.setReceiveBufferSize(1024)
        client.outputStream.write("conectado".toByteArray())
        val text = InputStreamReader(client.getInputStream())
        recibir = text.toString()
        client.outputStream.write("Client_desconect".toByteArray())
        client.close()
    } catch (e: Exception) {
        e.printStackTrace()
    }
})
I already solved it. The solution was very simple: you just have to make sure that the server and the client use the same way of framing the messages.
Client:
val input = DataInputStream(client.getInputStream())
id = input.readUTF()
Server:
self.request.send(len(msg).to_bytes(2, byteorder='big'))
self.request.send(msg)
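This works because DataInputStream.readUTF() expects a two-byte big-endian length prefix followed by the UTF-8 bytes, which is exactly what the server now sends. A small server-side sketch of that framing (the send_utf helper is illustrative, not part of the original code):
def send_utf(sock, text):
    # Frame a string the way DataInputStream.readUTF() expects:
    # a 2-byte big-endian length, then the UTF-8 encoded bytes.
    data = text.encode("utf-8")
    sock.send(len(data).to_bytes(2, byteorder="big"))
    sock.send(data)

# inside the socketserver handler:
# send_utf(self.request, "Hello")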

How to dynamically update parameters of an existing Airflow (1.9 version) Connection within code?

I have defined an SSH connection via the Airflow Admin UI. However, I am only defining a service account, host, and port in the UI. I am retrieving the password in the first task instance, and I need to update the SSH connection with the password in the second task instance and use it in the third task instance:
t1: call an R function to retrieve the password for the svc account (stored via xcom_push)
t2: update the SSH connection with this password (I am using SSHHook): ssh02.password = password (retrieved via xcom_pull)
t3: call a server using the previously updated connection (ssh02)
Currently t1 and t2 work as expected; however, t3 fails since the password is not getting updated and it is looking for .ssh key-file-based authentication. Can someone please suggest how this can be implemented?
Here is my code snippet :
from airflow import models
from airflow.contrib.operators.ssh_operator import SSHOperator
from airflow.operators.python_operator import PythonOperator
from airflow.operators.bash_operator import BashOperator
from datetime import datetime, timedelta
from airflow.contrib.hooks.ssh_hook import SSHHook
from airflow.models import Variable
from airflow.models import Connection
from airflow.settings import Session
from airflow.utils import db
from airflow.utils.db import provide_session
from airflow import DAG
import logging
import os
svcpassword = 'XXXX'
logging.getLogger().setLevel(logging.DEBUG)
ssh01 = SSHHook(ssh_conn_id='ssh_conn1')
ssh02 = SSHHook(ssh_conn_id='ssh_conn2')
default_args = {
    'owner': 'user',
    'depends_on_past': False,
    'start_date': datetime.now(),
    'email': ['abcd@gmail.com'],
    'email_on_failure': True,
    'email_on_retry': True,
    'retries': 1,
    'retry_delay': timedelta(minutes=1)
}
dag = DAG('dag_POC', default_args=default_args, schedule_interval="@once")
path1 = '/home/user/R_samplescript'
t1 = SSHOperator(
    task_id='SSHTask',
    command='Rscript ' + path1 + '.R',
    ssh_hook=ssh01,
    params={},
    retries=1,
    do_xcom_push=True,
    dag=dag
)
def create_new_connection(**kwargs):
    ti = kwargs['ti']
    pwd = ti.xcom_pull(task_ids='SSHTask')
    password = str(pwd).replace("\\n", "\n")
    password = password[password.find(' ') + 1:]
    password = password.strip()
    svcpassword = password
    db.merge_conn(models.Connection(
        conn_id='ssh_conn2', conn_type='SSH',
        host='server_name', port='XXXX',
        login='account_name', password=svcpassword))
t2 = PythonOperator(
    task_id='Create_Connection',
    python_callable=create_new_connection,
    provide_context=True,
    dag=dag
)
t3 = SSHOperator(
    task_id='RemoteCallTest',
    command="R command",
    ssh_hook=SSHHook().get_conn('ssh_conn2'),
    do_xcom_push=False,
    retries=1,
    dag=dag
)
t1 >> t2 >> t3
You need to leverage the session wrapper to persist changes to the DB:
@provide_session
def set_password(session=None):
    conn = MyHook().get_conn(conn_id)
    conn.set_password(my_password)
    session.add(conn)
    session.commit()
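Applied to the question's setup, that could look roughly like this (a sketch, assuming the conn_id 'ssh_conn2' from the question; it updates the Connection row directly rather than going through a hook):
from airflow.models import Connection
from airflow.utils.db import provide_session

@provide_session
def update_conn_password(conn_id, password, session=None):
    # Load the existing connection row, update its password,
    # and commit so downstream tasks see the change.
    conn = session.query(Connection).filter(Connection.conn_id == conn_id).one()
    conn.set_password(password)
    session.add(conn)
    session.commit()

# e.g. inside create_new_connection():
# update_conn_password('ssh_conn2', svcpassword)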

Why does cacheTimeout setting of renderView() in ColdBox application have no effect?

I'm developing a ColdBox application with modules and wanted to use its caching functionality to cache a view for some time.
component {
    property name="moduleConfig" inject="coldbox:moduleConfig:mymodule";
    ...
    function widget(event, rc, prc){
        var viewData = this.getData();
        return renderView(
            view = "main/widget",
            args = viewData,
            cache = true,
            cacheSuffix = ":" & moduleConfig.entryPoint,
            cacheTimeout = 2
        );
    }
}
I tried to set the default caching config by adding the following info to my Cachebox.cfc and removed the cacheTimeout from the code above:
cacheBox = {
    defaultCache = {
        objectDefaultTimeout = 1, // two hours default
        objectDefaultLastAccessTimeout = 1, // 30 minutes idle time
        useLastAccessTimeouts = false,
        reapFrequency = 5,
        freeMemoryPercentageThreshold = 0,
        evictionPolicy = "LRU",
        evictCount = 1,
        maxObjects = 300,
        objectStore = "ConcurrentStore", // guaranteed objects
        coldboxEnabled = false
    },
    caches = {
        // Named cache for all ColdBox event and view template caching
        template = {
            provider = "coldbox.system.cache.providers.CacheBoxColdBoxProvider",
            properties = {
                objectDefaultTimeout = 1,
                objectDefaultLastAccessTimeout = 1,
                useLastAccessTimeouts = false,
                reapFrequency = 5,
                freeMemoryPercentageThreshold = 0,
                evictionPolicy = "LRU",
                evictCount = 2,
                maxObjects = 300,
                objectStore = "ConcurrentSoftReferenceStore" // memory sensitive
            }
        }
    }
};
Though that didn't have any influence on the caching. I've also tried adding the config above to my Coldbox.cfc.
Even if I create a completely new test app via CommandBox (coldbox create app MyApp), then only set the caching in Cachebox.cfc to one minute, set viewCaching = true in Coldbox.cfc, and set event.setView( view="main/index", cache=true ) in Main.cfc, it doesn't work as expected.
No matter what I do, the view is always cached for at least 5 minutes.
Is there something I am missing?
Make sure you have enabled view caching in your ColdBox configuration. Go to the /config/ColdBox.cfc file and add this key:
coldbox = {
    // Activate view caching
    viewCaching = true
}
Also, did you mistype the name of the CFC you changed for the caching above? Those changes should be in the /config/CacheBox.cfc file, not in /config/ColdBox.cfc.
Also, the reapFrequency in /config/CacheBox.cfc needs to be set to a smaller value in order for the cache entry to be removed earlier.
Though, as the documentation states:
The delay in minutes to produce a cache reap (Not guaranteed)
It is not guaranteed that the cached item is really removed right at that time, so the entry may only disappear after, say, 3 minutes even when reapFrequency is set to 1.
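A minimal sketch of the relevant keys in /config/CacheBox.cfc (only the keys shown matter for this issue; values are in minutes):
cacheBox = {
    caches = {
        template = {
            provider = "coldbox.system.cache.providers.CacheBoxColdBoxProvider",
            properties = {
                objectDefaultTimeout = 1, // expire cached views after one minute
                reapFrequency = 1 // attempt a reap every minute (not guaranteed)
            }
        }
    }
};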

How do you configure UDPInput to work with heka-flood udp test

I am trying to test sending data to heka's UdpInput with no success. I decided to try the heka-flood tool to mimic UDP traffic, also with no success. I am using version 0.10 of heka. My heka.toml:
[UdpInput]
address = "127.0.0.1:4880"
net = "udp"
splitter = "udp_splitter"
decoder = "ProtobufDecoder"
set_hostname = true
# I have also tried not setting this as well
[udp_splitter]
type = "HekaFramingSplitter"
[ProtobufDecoder]
[LogOutput]
type = "LogOutput"
message_matcher = "Logger == 'UdpInput'"
encoder = "PayloadEncoder"
and my flood.toml:
[udp_proto]
ip_address = "127.0.0.1:4880"
sender = "udp"
pprof_file = ""
encoder = "protobuf"
num_messages = 1000
corrupt_percentage = 0.0001
signed_percentage = 0.00011
variable_size_messages = false
ascii_only = true
max_message_size = 32000
If I add another input, say a log tailer, and add it to the message matcher for the LogOutput, those messages end up being logged. I never see anything from the UdpInput. What am I doing wrong?