In Redis mode, the data sent is different from the data received - Ignite

In Redis mode, the data I send is different from the data I receive. The same problem occurs with memcached. Here is example code for Redis:
import pickle

import redis

REDIS = {
    'host': 'localhost',
    'port': 6379,
    'db': 0,
}

IGNITE = {
    'host': 'localhost',
    'port': 11211,
    'db': 0,
}
def test_connection(redis_connection):
    d = {
        'a': 1,
        'b': 'AS213dfdsfфывфывфывфа',
        'c': None,
    }
    pickle_dumps = pickle.dumps(d)
    print(pickle_dumps)
    redis_connection.set('foo', pickle_dumps)
    print(redis_connection.get('foo'))
    print('-----------------------')

test_connection(redis.StrictRedis(**IGNITE))
test_connection(redis.StrictRedis(**REDIS))
This is how I run Ignite:
docker run -p 11211:11211 -it -e "CONFIG_URI=https://raw.githubusercontent.com/apache/ignite/master/examples/config/redis/example-redis.xml" apacheignite/ignite
Output:
b'\x80\x03}q\x00(X\x01\x00\x00\x00aq\x01K\x01X\x01\x00\x00\x00bq\x02X \x00\x00\x00AS213dfdsf\xd1\x84\xd1\x8b\xd0\xb2\xd1\x84\xd1\x8b\xd0\xb2\xd1\x84\xd1\x8b\xd0\xb2\xd1\x84\xd0\xb0q\x03X\x01\x00\x00\x00cq\x04Nu.'
b'\xef\xbf\xbd\x03}q\x00(X\x01\x00\x00\x00aq\x01K\x01X\x01\x00\x00\x00bq\x02X \x00\x00\x00AS213dfdsf\xd1\x84\xd1\x8b\xd0\xb2\xd1\x84\xd1\x8b\xd0\xb2\xd1\x84\xd1\x8b\xd0\xb2\xd1\x84\xd0\xb0q\x03X\x01\x00\x00\x00cq\x04Nu.'
-----------------------
b'\x80\x03}q\x00(X\x01\x00\x00\x00aq\x01K\x01X\x01\x00\x00\x00bq\x02X \x00\x00\x00AS213dfdsf\xd1\x84\xd1\x8b\xd0\xb2\xd1\x84\xd1\x8b\xd0\xb2\xd1\x84\xd1\x8b\xd0\xb2\xd1\x84\xd0\xb0q\x03X\x01\x00\x00\x00cq\x04Nu.'
b'\x80\x03}q\x00(X\x01\x00\x00\x00aq\x01K\x01X\x01\x00\x00\x00bq\x02X \x00\x00\x00AS213dfdsf\xd1\x84\xd1\x8b\xd0\xb2\xd1\x84\xd1\x8b\xd0\xb2\xd1\x84\xd1\x8b\xd0\xb2\xd1\x84\xd0\xb0q\x03X\x01\x00\x00\x00cq\x04Nu.'
-----------------------
How can I fix this?

I can see that \x80 is replaced by \xef\xbf\xbd.
The latter is the UTF-8 encoding of the Unicode replacement character.
I expect this is the result of writing \x80 (a control character) into a Unicode stream inside Ignite. I found a question about pickle with a similar problem.
It also advises against using pickle in a network environment. Why don't you use e.g. JSON instead?
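A minimal sketch of the JSON approach (assuming the values are JSON-serializable, as the dictionary above is): json.dumps produces plain text with non-ASCII characters escaped, so no raw control bytes like \x80 ever pass through Ignite's Redis layer.

import json

import redis

def test_connection_json(redis_connection):
    d = {
        'a': 1,
        'b': 'AS213dfdsfфывфывфывфа',
        'c': None,
    }
    # Serialize to a plain-text JSON string instead of binary pickle.
    redis_connection.set('foo', json.dumps(d))
    # Decode the returned bytes and parse the JSON back into a dict.
    restored = json.loads(redis_connection.get('foo').decode('utf-8'))
    assert restored == d
    print(restored)

# Reusing the REDIS and IGNITE connection settings from the question:
test_connection_json(redis.StrictRedis(**IGNITE))
test_connection_json(redis.StrictRedis(**REDIS))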

Related

Redis performance with a Digital Ocean managed instance

I'm using Node.js to connect a DigitalOcean droplet (Ubuntu 20.04) to a DigitalOcean managed Redis instance, via the ioredis npm library.
Consider the trivial code below. It works perfectly with the public network name, albeit taking around 400ms. If I use the private network name, the entire script hangs. I've also tried the private IP 10...* but that doesn't work either.
Does anyone have any experience or insight on how to connect directly over the VPC? Is there a specific way to use the private network name?
const Redis = require("ioredis");

(async () => {
  // Spin up a redis client
  const redis = new Redis({
    host: "db-redis-**************-0.b.db.ondigitalocean.com",
    port: *****,
    username: "******",
    password: "**********",
    tls: {
      key: "",
      cert: "",
    },
  });

  console.time("Total time to write/read a 10 character string to redis");

  // Generate a random string
  const generateRandomString = (length = 6) =>
    Math.random().toString(20).substr(2, length);

  // Save data to the redis server with a TTL of 2 minutes
  redis.set("redisTest", generateRandomString(10), "EX", 120);

  // Now read it back
  await redis.get("redisTest", function (err, result) {
    if (err) {
      console.error(err);
    } else {
      console.log("Data retrieved: ", result);
    }
  });

  // Done
  console.log("Done.");
  console.timeEnd("Total time to write/read a 10 character string to redis");
})();
If using the private network address hangs during opening the connection, it's likely because your Droplet is not in the same VPC as your Redis database. In your case, it turned out that the Droplet and Redis were in different regions, so moving them to the same region (and ensuring they're in the same VPC within that region) should resolve the issue.

How to use `ioredis` to connect to a Redis instance (AWS ElastiCache) across an ssh tunnel with SSL?

This seems to be something about ioredis and its support for TLS. This is all on a Mac (Catalina).
I have an ElastiCache Redis instance running inside a VPC. I tunnel to it with ssh:
ssh -L 6379:clustercfg.my-test-redis.amazonaws.com:6379 -N MyEC2
The following doesn't work with node 12.9, ioredis 4.19.4
> const Redis = require("ioredis");
> const redis = new Redis('rediss://127.0.0.1:6379');
[ioredis] Unhandled error event: Error [ERR_TLS_CERT_ALTNAME_INVALID]: Hostname/IP does not match certificate's altnames: IP: 127.0.0.1 is not in the cert's list:
at Object.checkServerIdentity (tls.js:287:12)
<repeated ... many times>
This doesn't work either:
> const Redis = require("ioredis");
> const redis = new Redis('redis://127.0.0.1:6379');
> redis.status
'connect'
> redis.set('fooo','barr').then(console.log).catch(console.error)
Promise { <pending> }
> redis.status
'connect'
Is there a way to let me do this with ioredis? This is just for debugging. If the first form is correct, is there a setting to allow "non-strict" validation of the cert or something?
This works (on a Mac):
% openssl s_client -connect localhost:6379
set "fred" "Mary"
+OK
get "fred"
$4
Mary
This works (with redis installed via pip3):
#!/usr/bin/env python3
import redis
r = redis.Redis(host='127.0.0.1', ssl=True, port=6379)
r.set('foo', 'bar')
print(r.get('foo'))
While I wouldn't recommend this for production, you said this was for debugging.
You need to disable the server identity check. You can do that by overriding checkServerIdentity in the tls options with a no-op:
const Redis = require("ioredis");

const redis = new Redis("rediss://127.0.0.1:6379", {
  tls: {
    checkServerIdentity: () => undefined,
  },
});

Unable to create Neptune cluster using boto3

We are not able to create a Neptune cluster using the Python boto3 library; the call is given below.
**Function:**
import boto3

client = boto3.client('neptune')
response = client.create_db_cluster(
    AvailabilityZones=[
        'us-west-2c', 'us-west-2b',
    ],
    BackupRetentionPeriod=1,
    DatabaseName='testdcluster',
    DBClusterIdentifier='testdb',
    DBClusterParameterGroupName='default.neptune1',
    VpcSecurityGroupIds=[
        'sg-xxxxxxxxx',
    ],
    DBSubnetGroupName='profilex',
    Engine='neptune',
    EngineVersion='1.0.1.0',
    Port=8182,
    Tags=[
        {
            'Key': 'purpose',
            'Value': 'test'
        },
    ],
    StorageEncrypted=False,
    EnableIAMDatabaseAuthentication=False,
    DeletionProtection=False,
    SourceRegion='us-west-2'
)
The error message is given below.
**Error message:**
when calling the CreateDBCluster operation: The parameter DatabaseName is not valid for engine: neptune
Could you please help fix this?
Rather than using DatabaseName, just use DBClusterIdentifier, and that will become the name of your cluster. The DatabaseName parameter is not needed when creating a Neptune cluster.
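A sketch of the corrected call, identical to the question's except that DatabaseName is removed (the security group ID and other values are the question's placeholders):

import boto3

client = boto3.client('neptune')

# Same call as above minus DatabaseName, which the Neptune engine
# rejects; DBClusterIdentifier is what names the cluster.
response = client.create_db_cluster(
    AvailabilityZones=['us-west-2c', 'us-west-2b'],
    BackupRetentionPeriod=1,
    DBClusterIdentifier='testdb',
    DBClusterParameterGroupName='default.neptune1',
    VpcSecurityGroupIds=['sg-xxxxxxxxx'],
    DBSubnetGroupName='profilex',
    Engine='neptune',
    EngineVersion='1.0.1.0',
    Port=8182,
    Tags=[{'Key': 'purpose', 'Value': 'test'}],
    StorageEncrypted=False,
    EnableIAMDatabaseAuthentication=False,
    DeletionProtection=False,
    SourceRegion='us-west-2',
)
print(response['DBCluster']['Status'])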

How to define an amazon-cloudwatch boto3.client in Python user-defined functions

I'm working on a Python 3 script designed to get S3 space utilization statistics from AWS CloudWatch using the Boto3 library.
I started with the AWS CLI and found I could get what I'm after with a command like this:
aws cloudwatch get-metric-statistics --metric-name BucketSizeBytes --namespace AWS/S3 --start-time 2017-03-06T00:00:00Z --end-time 2017-03-07T00:00:00Z --statistics Average --unit Bytes --r
I then translated it into the following Python code:
from datetime import datetime, timedelta

import boto3

seconds_in_one_day = 86400  # used for granularity

cloudwatch = boto3.client('cloudwatch')
response = cloudwatch.get_metric_statistics(
    Namespace='AWS/S3',
    Dimensions=[
        {
            'Name': 'BucketName',
            'Value': 'foo-bar'
        },
        {
            'Name': 'StorageType',
            'Value': 'StandardStorage'
        }
    ],
    MetricName='BucketSizeBytes',
    StartTime=datetime.now() - timedelta(days=7),
    EndTime=datetime.now(),
    Period=seconds_in_one_day,
    Statistics=[
        'Average'
    ],
    Unit='Bytes'
)
print(response)
If I execute the above code it returns JSON output, but I want to wrap everything from the cloudwatch client onwards in a function and make it parameterized. The problem is that when I define a function, I get an error saying the response variable is not defined.
Please suggest how to do this in a function.
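A minimal sketch of the parameterization being asked for (the function name and parameters are illustrative assumptions, not from the original). Returning response from the function avoids the "response variable not defined" error, which comes from referencing it outside the function's scope:

from datetime import datetime, timedelta

import boto3

def get_bucket_size_bytes(bucket_name, storage_type='StandardStorage', days=7):
    """Return average BucketSizeBytes datapoints for one S3 bucket."""
    cloudwatch = boto3.client('cloudwatch')
    response = cloudwatch.get_metric_statistics(
        Namespace='AWS/S3',
        Dimensions=[
            {'Name': 'BucketName', 'Value': bucket_name},
            {'Name': 'StorageType', 'Value': storage_type},
        ],
        MetricName='BucketSizeBytes',
        StartTime=datetime.now() - timedelta(days=days),
        EndTime=datetime.now(),
        Period=86400,  # one day of granularity
        Statistics=['Average'],
        Unit='Bytes',
    )
    # response is local to this function, so return it to the caller
    # rather than printing a global that no longer exists.
    return response

print(get_bucket_size_bytes('foo-bar'))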

RabbitMQ and Pika

I'm using the Python library pika to work with RabbitMQ.
RabbitMQ is running and listening on 0.0.0.0:5672. When I try to connect to it from another server, I get an exception:
socket.timeout: timed out
The Python code is from the official RabbitMQ documentation ("Hello, World!").
I have tried disabling iptables.
But if I run the script with host "localhost", everything works fine.
My /etc/rabbitmq/rabbitmq.config
[
  {rabbit, [
    {tcp_listeners, [{"0.0.0.0", 5672}]}
  ]}
].
Code:
#!/usr/bin/env python
import pika

# Connect to the broker on the remote host with explicit credentials.
connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='192.168.10.150',
    port=5672,
    virtual_host='/',
    credentials=pika.credentials.PlainCredentials('user', '123456')))
channel = connection.channel()

channel.queue_declare(queue='task_queue', durable=True)

message = "Hello World!"
channel.basic_publish(exchange='',
                      routing_key='task_queue',
                      body=message,
                      properties=pika.BasicProperties(
                          delivery_mode=2,  # make message persistent
                      ))
print(" [x] Sent %r" % message)
connection.close()
Since you are connecting from another server, you should check the server's firewall settings: a socket.timeout during connect usually means the TCP packets are being dropped before they ever reach RabbitMQ.
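As a quick, hypothetical way to confirm that from the client machine (independent of pika), try a raw TCP connection to the broker port:

import socket

# A raw TCP probe of the broker port: a timeout points at a
# firewall/network problem, not at RabbitMQ or pika configuration.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(5)
try:
    sock.connect(('192.168.10.150', 5672))
    print('TCP connection succeeded; the port is reachable.')
except socket.timeout:
    print('Timed out: packets are likely being dropped by a firewall.')
except ConnectionRefusedError:
    print('Refused: the host is reachable but nothing listens on 5672.')
finally:
    sock.close()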