Aerospike TTL updates

I am trying to use .put() to update bins in existing records in Aerospike using the Python client.
I was excited to discover that you can put into existing records without updating the TTL, as shown here (meta params): https://www.aerospike.com/apidocs/python/client.html?highlight=ttl#aerospike.Client.put . I used to do this by reading the TTL and then re-setting it on the new put.
However, I discovered that when you use the aerospike.TTL_DONT_UPDATE option, the TTL resets to the cold-start-evict-ttl that is set on the namespace. How do I avoid that and just keep the previous TTL unchanged?
Crossposted to Aerospike forums: https://discuss.aerospike.com/t/aerospike-ttls-on-put/4993
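For reference, this is roughly the old workaround of reading the TTL and re-setting it, as a minimal sketch (assuming a connected client and a key tuple as in the linked docs):
(key, meta) = client.exists(key)  # meta is a dict like {'gen': ..., 'ttl': ...}
client.put(key, {'z': 26}, meta={'ttl': meta['ttl']})  # carry the old TTL forward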

Not posting sample code in a 'bug report' is the equivalent of not washing your hands after using the toilet. It would also help if you simply provided the version of your server and the version of your Python client.
ttl-test.py
from __future__ import print_function
import aerospike
import sys
from aerospike import exception as ex
from time import sleep

config = { 'hosts': [ ('127.0.0.1', 3000) ] }
try:
    client = aerospike.client(config).connect()
except ex.ClientError as e:
    print("Error: {0} [{1}]".format(e.msg, e.code))
    sys.exit(1)

key = ('test', 'demo', 'foo')
try:
    # Write a record with a 4-second TTL
    client.put(key, {
        'name': 'Hey Joe',
        'age': 66
    }, meta={'ttl': 4})
except ex.RecordError as e:
    print("Error: {0} [{1}]".format(e.msg, e.code))

# Update the record without touching its TTL
client.put(key, {'z': 26}, meta={'ttl': aerospike.TTL_DONT_UPDATE})

try:
    (key, meta) = client.exists(key)
    print(meta)
except ex.RecordError as e:
    print("Error: {0} [{1}]".format(e.msg, e.code))

sleep(5)

try:
    (key, meta) = client.exists(key)
    print(meta)
except ex.RecordError as e:
    print("Error: {0} [{1}]".format(e.msg, e.code))

client.close()
And now to test it:
$ python ttl-test.py
{'gen': 2, 'ttl': 4}
None
Seems to work. Note that I'm using a server version >= 3.10.1.

Related

How does Python Eventlet make concurrent requests?

The client uses eventlet with the following code.
import eventlet
import urllib.request

urls = [
    "http://localhost:5000/",
    "http://localhost:5000/",
    "http://localhost:5000/",
]

def fetch(url: str) -> str:
    return urllib.request.urlopen(url).read()

pool = eventlet.GreenPool(1000)
for body in pool.imap(fetch, urls):
    print("got body", len(body), body)
The server side is built with FastAPI. So that we can tell whether the client is really making concurrent requests, we add a delay:
from fastapi import FastAPI
import uvicorn
import time

app = FastAPI()

@app.get('/')
def root():
    time.sleep(3)
    return {"message": "Hello World"}

if __name__ == '__main__':
    uvicorn.run("api:app", host="0.0.0.0", port=5000)
Start the server first, then run the client code, using Linux's time command to see how long it takes.
got body 25 b'{"message": "Hello World"}'
got body 25 b'{"message": "Hello World"}'
got body 25 b'{"message": "Hello World"}'
python -u "013.py" 0.25s user 0.01s system 2% cpu 9.276 total
As you can see, it took 9 seconds instead of 3, so this eventlet run is not concurrent. Am I using it wrong? What is the correct way to use it?
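A likely explanation (my assumption): urllib.request performs blocking socket I/O, and eventlet can only switch greenthreads at green (non-blocking) I/O points, so the standard library has to be monkey-patched before urllib.request is used. A minimal sketch:
import eventlet
eventlet.monkey_patch()  # make socket, ssl, time, etc. cooperative

import urllib.request  # now uses the patched, non-blocking socket module

urls = ["http://localhost:5000/"] * 3

def fetch(url):
    return urllib.request.urlopen(url).read()

pool = eventlet.GreenPool(1000)
for body in pool.imap(fetch, urls):
    print("got body", len(body))
With the patch in place, the three requests should overlap and the run should take about 3 seconds rather than 9.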

Example for using Locust with Boto3

I am trying to use Locust together with Boto3 for load testing of S3, in particular for PutObject.
I found a tutorial and another github project, but both have been written for the older locustio package and seem incompatible with the current version locust 2.5.1.
https://medium.com/@allankp/populating-dashboards-with-boto3-and-locust-ff38b113349a
https://github.com/twosigma/locust-s3
I have made a short attempt at adapting this code to newer versions but this seems to require a good understanding of locust which I do not possess yet.
Does anyone have a simple beginner's example to share? Or would you rather recommend using the old locustio package or another tool altogether?
Many thanks.
Helped by this example in the official docs and Cyberwiz's answer, I created this bare-bones solution for my specific problem.
import time
import boto3
from locust import User, task, constant, events

class BotoClient:
    def __init__(self):
        self.s3_client = boto3.client(
            's3',
            aws_access_key_id="ACCESS_KEY",
            aws_secret_access_key="SECRET_KEY"
        )

    def send(self, f, bucket_name, object_name):
        request_meta = {
            "request_type": "upload file",
            "name": "S3",
            "start_time": time.time(),
            "response_length": 0,
            "response": None,
            "context": {},
            "exception": None,
        }
        start_perf_counter = time.perf_counter()
        try:
            self.s3_client.upload_fileobj(f, bucket_name, object_name)
        except Exception as e:
            request_meta['exception'] = e
        request_meta["response_time"] = (time.perf_counter() - start_perf_counter) * 1000
        events.request.fire(**request_meta)  # report the result to Locust's statistics

class BotoUser(User):
    abstract = True
    wait_time = constant(1)

    def __init__(self, env):
        super().__init__(env)
        self.client = BotoClient()

class MyUser(BotoUser):
    wait_time = constant(1)

    @task
    def send_request(self):
        with open("FILE_NAME", 'rb') as f:
            self.client.send(f, "BUCKET_NAME", "OBJECT_NAME")
The changes needed to adapt the code to version 1+ of Locust are very small.
Quoting the documentation changelog:
Locust class renamed to User
We’ve renamed the Locust and HttpLocust classes to User and HttpUser. The locust attribute on TaskSet instances has been renamed to user.
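With the file above saved as, say, locustfile.py (name assumed), a quick headless smoke test looks like this:
$ locust -f locustfile.py --headless -u 1 -r 1
Locust then prints the request statistics gathered from the events.request.fire() calls.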

Deep Security API - Intrusion Prevention Rules - Error

I have a Python script that queries the Deep Security API to gather all the Intrusion Prevention Rules and the IDs of the computers associated with each, but after around 14,000 records I get the following error:
An exception occurred when calling ComputerIntrusionPreventionRuleDetailsApi.list_intrusion_prevention_rules_on_computer: (500)
Reason:
HTTP response headers: HTTPHeaderDict({'X-Frame-Options': 'SAMEORIGIN', 'X-XSS-Protection': '1;mode=block', 'Cache-Control': 'no-cache,no-store', 'Pragma': 'no-cache', 'X-DSM-Version': 'Deep Security/12.0.296', 'Content-Type': 'application/json', 'Content-Length': '35', 'Date': 'Fri, 16 Oct 2020 14:04:02 GMT', 'Connection': 'close'})
HTTP response body: {"message":"Internal server error"}
My script is the following:
# -*- coding: utf-8 -*-
from __future__ import print_function
import sys, warnings
import pymssql
import datetime
import deepsecurity
import json
import requests
import urllib3
from deepsecurity.rest import ApiException
from urllib3.exceptions import InsecureRequestWarning
from pprint import pprint

urllib3.disable_warnings(InsecureRequestWarning)
if not sys.warnoptions:
    warnings.simplefilter("ignore")

configuration = deepsecurity.Configuration()
configuration.host = "Server/api/"

# Authentication
configuration.api_key['api-secret-key'] = 'Key'

# Initialization
# Set Any Required Values
conn = pymssql.connect("localhost", "", "", "DeepSecurity")
cursor = conn.cursor()
cursor2 = conn.cursor()

api_instance = deepsecurity.ComputerIntrusionPreventionRuleDetailsApi(deepsecurity.ApiClient(configuration))
api_instance2 = deepsecurity.ComputersApi(deepsecurity.ApiClient(configuration))
api_version = 'v1'
overrides = False

try:
    recorddt = datetime.datetime.now()
    api_response2 = api_instance2.list_computers(api_version, overrides=overrides)
    for y in api_response2.computers:
        api_response = api_instance.list_intrusion_prevention_rules_on_computer(y.id, api_version, overrides=overrides)
        for x in api_response.intrusion_prevention_rules:
            strCVE = x.cve
            clean_cve = str(strCVE).replace("['", "").replace("']", "").replace("'", "")
            cursor.executemany("INSERT INTO ip_rules VALUES (%d, %s, %s, %s, %s) ", [(x.id, x.name, clean_cve, recorddt, y.id)])
            conn.commit()
except ApiException as e:
    print("An exception occurred when calling ComputerIntrusionPreventionRuleDetailsApi.list_intrusion_prevention_rules_on_computer: %s\n" % e)
I guess it happened while looping over list_intrusion_prevention_rules_on_computer with different computer IDs (y.id).
Deep Security Manager seems to be able to identify the exception and return a 500 Internal Server Error (along with header information). So you might want to check for exceptions in server0.log, where you might find some clues.
You may also want to identify which computer(s) failed to return their assigned prevention rules, and retry those.
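Building on that last suggestion, here is a minimal sketch (the failed_ids list and the final print are my own illustration, not from the original script) of how the loop could record failing computers and keep going instead of aborting:
failed_ids = []
for y in api_response2.computers:
    try:
        api_response = api_instance.list_intrusion_prevention_rules_on_computer(y.id, api_version, overrides=overrides)
    except ApiException:
        failed_ids.append(y.id)  # remember this computer and move on
        continue
    # ... insert the rules for this computer into ip_rules as before ...

print("Computers that returned errors:", failed_ids)  # candidates for a retry pass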

ContextualVersionConflict while using snowflake-sqlalchemy

I am trying to write a pandas DataFrame to Snowflake using df.to_sql(), but I am getting an error:
ContextualVersionConflict: (idna 2.10 (/mnt/shared//conda), Requirement.parse('idna<2.10'), {'snowflake-connector-python'})
My code:
# Imports assumed for this snippet; sf_schema, password, oauth_url, snf_url,
# proxyDict, account, role, warehouse, database, sf_table_name and df are
# defined elsewhere.
import json
import requests
import urllib3
import snowflake.connector
from sqlalchemy import create_engine
from snowflake.sqlalchemy import URL

urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
try:
    headers = {'content-type': 'application/x-www-form-urlencoded'}
    data = {
        'grant_type': 'password',
        'scope': 'SESSION:ROLE-ANY',
        'username': sf_schema,
        'password': password,
        'client_id': snf_url
    }
    response = requests.post(oauth_url, data=data, headers=headers, verify=False, proxies=proxyDict)
except Exception as e:
    print(e)

oauth_token = str(json.loads(response.text)['access_token']).strip()

engine = create_engine(URL(
    account=account,
    role=role,
    user=sf_schema.lower() + "@<domain>.COM",
    warehouse=warehouse.upper(),
    database=database.upper(),
    schema=sf_schema.upper(),
    authenticator="oauth",
    token=oauth_token))
connection = engine.connect()

try:
    df.to_sql(sf_table_name, con=engine, index=False, chunksize=15000, if_exists='replace', method='multi')
    connection.close()
    engine.dispose()
except snowflake.connector.errors.ProgrammingError as e:
    print(e)
    print('Error {0} ({1}): {2} ({3})'.format(e.errno, e.sqlstate, e.msg, e.sfqid))
    connection.close()
    engine.dispose()
I am able to read data from Snowflake into pandas using snowflake.connector, so the connection is not the issue here.
The problem here is that snowflake-connector-python requires a lower version of the idna package. You can reinstall the correct version of idna with:
pip install 'idna==2.9' --force-reinstall
I don't know whether this solves your issue or not, but I would recommend leveraging the write_pandas() function instead of the to_sql() function, or the to_sql() function with the pd_writer method.
Both will leverage a PUT and COPY INTO to get the data into Snowflake, and avoid INSERT statements. These should be much faster (and possibly avoid the error you are getting).
https://docs.snowflake.com/en/user-guide/python-connector-pandas.html#writing-data-from-a-pandas-dataframe-to-a-snowflake-database
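For example, a minimal sketch of the pd_writer route, reusing the engine from the question (pd_writer ships with the connector's pandas extras, i.e. pip install "snowflake-connector-python[pandas]"):
from snowflake.connector.pandas_tools import pd_writer

# pd_writer stages the DataFrame with PUT and loads it with COPY INTO,
# instead of the row-by-row INSERTs generated by method='multi'
df.to_sql(sf_table_name, con=engine, index=False, if_exists='replace', method=pd_writer)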

How do you reply to a RabbitMQ RPC client with multiple messages?

I'm trying to use RabbitMQ in an RPC environment where each remote call will take a significant amount of time, producing results continually. I want the results to be delivered to the client as they are generated.
I started with the standard tutorial RPC example, then modified it to use "Direct Reply-to". I publish all the intermediate results back to an anonymous exclusive callback queue, without acknowledging the original request. When processing is complete, I send a final message back to the client and then acknowledge the original request. But the client only sees the first intermediate message. My client happens to be in PHP and my server in Python, but I suspect that is not relevant. Does anyone have the magic to make this work? I can post code, but it's pretty basic stuff straight from the cookbook.
Answering my own question. The following worked:
PHP client:
#!/usr/bin/php
<?php
require_once __DIR__ . '/vendor/autoload.php';

use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Message\AMQPMessage;

class RpcClient {
    private $connection;
    private $channel;
    private $callback_queue;
    private $response;
    private $corr_id;

    public function __construct() {
        $this->connection = new AMQPStreamConnection(
            'localhost', 5672, 'guest', 'guest'
        );
        $this->channel = $this->connection->channel();
        list($this->callback_queue, ,) = $this->channel->queue_declare(
            "", false, false, true, false
        );
        # For direct reply-to, need to consume amq.rabbitmq.reply-to, a special queue name
        # Unclear what happens to the declare above
        $this->channel->basic_consume(
            $this->callback_queue, '', false, true,
            false, false, array($this, 'onResponse')
        );
    }

    # This is going to be called once for each message coming back
    public function onResponse($rep) {
        if ($rep->get('correlation_id') == $this->corr_id) {
            $response = json_decode($rep->body, true);
            echo print_r($response['line'], true);
            if ($response['type'] == 'final') {
                $this->response = $rep->body;
            }
        }
    }

    public function call($message_array) {
        $this->response = null;
        $this->corr_id = uniqid();
        $jsonm = json_encode($message_array);
        $msg = new AMQPMessage(
            $jsonm,
            array(
                'correlation_id' => $this->corr_id,
                ### Not sure which of the next two lines is the correct one... if either....
                ##'reply_to' => 'amq.rabbitmq.reply-to' # This is when using direct reply-to
                'reply_to' => $this->callback_queue
            )
        );
        $this->channel->basic_publish($msg, '', 'ansiblePB_rpc_queue');
        while (!$this->response) {
            $this->channel->wait();
        }
        return intval($this->response);
    }
}

$ansiblepb_rpc = new RpcClient();
$response = $ansiblepb_rpc->call(array('userID' => 'jb1234',
                                       'user_display_name' => 'Joe Bloe',
                                       'limit' => '24000'));
echo ' [.] Got ', $response, "\n";
?>
Python server:
#!/usr/bin/env python
""" RPC server that streams multiple replies per request back to the client """
import json
import subprocess

import pika
import yaml

class RMQmultireply(object):
    """ Generic class to support ansible_playbook on a RabbitMQ RPC queue """
    def __init__(self, channel, method, props):
        self.channel = channel
        self.method = method
        self.props = props

    def run(self, userID, username, limit):
        """ Run the main guts of the service """
        cmd = ['/home/dhutchin/devel/rmq/multilineoutput']
        proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        for line in proc.stdout.readlines():
            intermediate_json_result = json.dumps({'type': 'intermediate', 'line': line})
            # Publish each line of output back to the requestor's reply queue,
            # tagged with the original correlation_id
            self.channel.basic_publish(exchange='',
                                       routing_key=self.props.reply_to,
                                       properties=pika.BasicProperties(
                                           correlation_id=self.props.correlation_id),
                                       body=str(intermediate_json_result))
            #self.channel.basic_ack(delivery_tag=self.method.delivery_tag)
        proc.wait()
        return proc.returncode

def on_request(channel, method, props, jsonstring):
    """ Request has just come in to run ansible_playbook """
    playbook = RMQmultireply(channel, method, props)
    # Fork and exec a playbook.
    # Receive each line of output and send them as received back
    # to the requestor.
    # .run does not return until the playbook exits.
    # Use the "Direct Reply-to" mechanism to return multiple messages to
    # our client.
    request = yaml.load(jsonstring)  # Yes, yaml works better than JSON
    returncode = playbook.run(request['userID'], request['user_display_name'], request['limit'])
    final_json_result = json.dumps({'type': "final", 'line': '', 'rc': returncode})
    channel.basic_publish(exchange='',
                          routing_key=props.reply_to,
                          properties=pika.BasicProperties(
                              correlation_id=props.correlation_id),
                          body=str(final_json_result))
    # Acknowledge the original message so that RabbitMQ can remove it
    # from the ansiblePB_rpc_queue queue
    channel.basic_ack(delivery_tag=method.delivery_tag)

def main():
    """ It's kinda obvious what this does """
    try:
        connection = pika.BlockingConnection(
            pika.ConnectionParameters(host='localhost'))
    except Exception:
        print("pika.BlockingConnection failed... maybe RabbitMQ is not running")
        quit()
    channel = connection.channel()
    channel.queue_declare(queue='ansiblePB_rpc_queue')
    channel.basic_qos(prefetch_count=1)
    # auto_ack is turned off by default, so we don't need to specify auto_ack=False
    channel.basic_consume(queue='ansiblePB_rpc_queue', on_message_callback=on_request)
    print(" [x] Awaiting RPC requests")
    channel.start_consuming()

if __name__ == '__main__':
    main()
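A note on the commented-out lines in the PHP client: per the RabbitMQ documentation, true Direct Reply-to means skipping the queue_declare entirely, consuming the pseudo-queue amq.rabbitmq.reply-to in no-ack mode on the same channel before publishing, and setting reply_to to that same name. A minimal pika sketch of such a client (my illustration, not part of the original answer):
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

def on_response(ch, method, props, body):
    print("reply:", body)

# Must start consuming amq.rabbitmq.reply-to with auto_ack=True
# on this channel *before* publishing the request
channel.basic_consume(queue='amq.rabbitmq.reply-to',
                      on_message_callback=on_response,
                      auto_ack=True)

channel.basic_publish(exchange='',
                      routing_key='ansiblePB_rpc_queue',
                      properties=pika.BasicProperties(reply_to='amq.rabbitmq.reply-to'),
                      body=json.dumps({'userID': 'jb1234',
                                       'user_display_name': 'Joe Bloe',
                                       'limit': '24000'}))

channel.start_consuming()  # intermediate and final replies all arrive here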