Can django-redis use dbsize?

django-redis source: https://github.com/jazzband/django-redis/tree/master/django_redis
My problem is that I cannot find a method to get the number of keys in the Redis database; the Redis command for this is called DBSIZE. The methods that are available are set, get, add, delete, delete_pattern, delete_many, clear, get_many, set_many, incr, decr, has_keys, keys, iter_keys, ttl, pttl, persist, expire, expire_at, pexpire, pexpire_at, lock, close, touch.
How can I use the DBSIZE Redis command with the django-redis library?
environment:
django: 3.2.10
django-redis: 5.2.0

I found the solution to the question:
from django_redis import get_redis_connection
REDIS = get_redis_connection("default")  # "default" is the cache alias
REDIS.dbsize()  # get the number of keys in the currently-selected database
This solution gives you access to the native Redis commands, even though the django-redis cache wrapper itself doesn't expose a dbsize() method.
WARNING: Not all pluggable clients support this feature.
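For completeness, a minimal sketch of the full flow, assuming a typical django-redis setup (the alias "default" and the connection URL below are illustrative, not taken from the question):

# settings.py (illustrative configuration)
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/1",
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient",
        },
    }
}

# anywhere in application code
from django_redis import get_redis_connection

raw = get_redis_connection("default")  # raw redis-py client, bypasses the cache API
key_count = raw.dbsize()               # any native Redis command is available here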


How can I configure a specific serialization method to use only for Celery ping?

I have a celery app which has to be pinged by another app. This other app uses json to serialize celery task parameters, but my app has a custom serialization protocol. When the other app tries to ping my app (app.control.ping), it throws the following error:
"Celery ping failed: Refusing to deserialize untrusted content of type application/x-stjson (application/x-stjson)"
My whole codebase relies on this custom encoding, so I was wondering if there is a way to configure JSON serialization only for this ping, while continuing to use the custom encoding for the other tasks.
These are the relevant celery settings:
accept_content = [CUSTOM_CELERY_SERIALIZATION, "json"]
result_accept_content = [CUSTOM_CELERY_SERIALIZATION, "json"]
result_serializer = CUSTOM_CELERY_SERIALIZATION
task_serializer = CUSTOM_CELERY_SERIALIZATION
event_serializer = CUSTOM_CELERY_SERIALIZATION
Changing any of the last 3 to [CUSTOM_CELERY_SERIALIZATION, "json"] causes the app to crash, so that's not an option.
Specs: celery=5.1.2
python: 3.8
OS: Linux docker container
Any help would be much appreciated.
Changing any of the last 3 to [CUSTOM_CELERY_SERIALIZATION, "json"] causes the app to crash, so that's not an option.
That's because result_serializer, task_serializer, and event_serializer don't accept a list, only a single str value, unlike e.g. accept_content.
A list is possible for e.g. accept_content because, with 2 items, Celery can check whether the content type of an incoming message matches either of them. It isn't possible for e.g. result_serializer because, if there were 2 items, which one should be chosen to serialize the result of task-A? (thus the need for a single value)
This means that if you set result_serializer = 'json', it has a global effect: the results of all tasks (the returned values, which can be retrieved by calling e.g. response.get()) are serialized/deserialized using the JSON serializer. It might work for the ping, but not for tasks whose results can't be directly serialized/deserialized to/from JSON and really need the custom stjson serializer.
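To make that global effect concrete, a small sketch (the add task and its module are hypothetical, used only for illustration):

from myapp.tasks import add   # hypothetical task

response = add.delay(2, 3)    # tasks go out encoded with task_serializer
print(response.get())         # 5, decoded using the configured result_serializer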
As of Celery 5.1.2, a task-specific result_serializer doesn't seem to be possible, so we can't have a single task encoded as 'json' rather than 'stjson' without changing the setting globally for all tasks; I assume the same applies to ping.
Open request to add result_serializer option for tasks
A short discussion in another question
Not the best solution, but a workaround: instead of fixing it on your app's side, you may opt to add support for serializing/deserializing content of type 'application/x-stjson' in the other app.
other_app/celery.py
import ast
from celery import Celery
from kombu.serialization import register

# This is just a possible implementation. Replace with the actual
# serializer/deserializer for stjson in your app.
def stjson_encoder(obj):
    return str(obj)

def stjson_decoder(obj):
    obj = ast.literal_eval(obj)
    return obj

# Teach kombu (Celery's messaging layer) about the custom content type.
register(
    'stjson',
    stjson_encoder,
    stjson_decoder,
    content_type='application/x-stjson',
    content_encoding='utf-8',
)

app = Celery('other_app')
app.conf.update(
    accept_content=['json', 'stjson'],
)
Your app continues to accept and respond in the stjson format, but the other app is now configured to parse that format as well.
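With that registration in place, the other app's ping should be able to deserialize the reply. A minimal check (the timeout value is illustrative):

from other_app.celery import app

replies = app.control.ping(timeout=2.0)  # broadcasts a ping and collects worker replies
print(replies)                           # e.g. [{'worker1@host': {'ok': 'pong'}}]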

Targeting multiple grains in Salt minions using API

In our infrastructure, we set multiple grains on the minions, including an "environment" and an "app" grain.
When we use the cli, we can get the correct minions using:
salt -C "G@app:middle_tier_1 and G@environment:dev" test.ping
But if we try to use the cherrypy api, we only get a result if we set a single target, like:
[{"client":"local","tgt_type":"grain","fun":"test.ping","tgt":"G@app:middle_tier_1"}]
or
[{"client":"local","tgt_type":"grain","fun":"test.ping","tgt":"G@environment:dev"}]
With multiple targets, we don't get anything:
[{"client":"local","tgt_type":"grain","fun":"test.ping","tgt":"G@app:middle_tier_1 and G@environment:dev"}]
[{"client":"local","tgt_type":"grain","fun":"test.ping","tgt":["G@app:middle_tier_1","G@environment:dev"]}]
According to the documentation, I can use a list in the tgt parameter.
I have looked through their documentation fairly extensively and have found no examples of this type of minion targeting.
Is this even possible, and if so, how would I go about doing it?
Extra info:
salt-master 2018.3.2 (Oxygen)
salt-api 2018.3.2 (Oxygen)
Thanks in advance!
If you want to target on multiple grains, the tgt_type is compound, not grain.
See: https://docs.saltstack.com/en/latest/ref/clients/#salt-s-client-interfaces, https://docs.saltstack.com/en/latest/topics/targeting/compound.html
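Applied to the payload from the question, that would look something like this (a sketch reusing the same grains):
[{"client":"local","tgt_type":"compound","fun":"test.ping","tgt":"G@app:middle_tier_1 and G@environment:dev"}]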

How to set Neo4J config keys in gremlin-scala?

When running a Neo4J database server standalone (on Ubuntu 14.04), configuration options are set for the global installation in /etc/neo4j/neo4j.conf or possibly $NEO4J_HOME/conf/neo4j.conf.
However, when instantiating a Neo4j database from Java or Scala using Apache's Neo4jGraph class (org.apache.tinkerpop.gremlin.neo4j.structure.Neo4jGraph), there is no global installation, and the constructor does not (as far as I can tell) look for any configuration files.
In particular, when running the test suite for my application, I end up with many simultaneous instances of Neo4jGraph, which ends up throwing a java.net.BindException: Address already in use because all of these instances are trying to communicate over a small range of ports for online backup, which I don't actually need. These channels are set with config options dbms.backup.address (default value: 127.0.0.1:6362-6372) and dbms.backup.enabled (default value: true).
My problem would be solved by setting dbms.backup.enabled to false, or expanding the port range.
Things that have not worked:
Creating /etc/neo4j/neo4j.conf containing the line dbms.backup.enabled=false.
Creating the same file in my project's src/main/resources directory.
Creating the same file in src/main/resources/neo4j.
Manually setting the configuration property inside the Scala code:
val db = new Neo4jGraph(dataDirectory)
db.configuration.addProperty("dbms.backup.enabled",false)
or
db.configuration.addProperty("neo4j.conf.dbms.backup.enabled",false)
or
db.configuration.addProperty("gremlin.neo4j.conf.dbms.backup.enabled",false)
How should I go about setting this property?
Neo4jGraph configuration through TinkerPop is accomplished by a pass-through of configuration keys. In TinkerPop 3.x, that means all Neo4j keys prefixed with gremlin.neo4j.conf that are provided via a Configuration object to Neo4jGraph.open() or GraphFactory.open() will be passed down directly to the Neo4j instance. You can see examples of this in the TinkerPop documentation on high availability configuration.
In TinkerPop 2.x, the same approach was taken; however, the key prefix was blueprints.neo4j.conf.* instead, as discussed here.
Manipulating db.configuration after the database connection had already been opened was definitely futile.
stephen mallette's answer was on the right track, but this particular configuration doesn't appear to pass through in the way his linked example does. There is a naming mismatch between the configuration keys expected in neo4j.conf and those expected in org.neo4j.backup.OnlineBackupKernelExtension. Instead of dbms.backup.address and dbms.backup.enabled, that class looks for config keys online_backup_server and online_backup_enabled.
I was not able to get these keys passed down to the underlying Neo4jGraphAPI instance correctly. What I had to do instead was the following:
import org.neo4j.tinkerpop.api.impl.Neo4jFactoryImpl
import scala.collection.JavaConverters._

val factory = new Neo4jFactoryImpl()
val config = Map(
  "online_backup_enabled" -> "true",
  "online_backup_server" -> "0.0.0.0:6350-6359"
).asJava
val db = Neo4jGraph.open(factory.newGraphDatabase(dataDirectory, config))
With this initialization, the instance correctly listened for backups on port 6350; changing "true" to "false" disabled backup listening.
Using Neo4j 3.0.0, the following disables port listening for me (Java code):
import org.apache.commons.configuration.BaseConfiguration;
import org.apache.tinkerpop.gremlin.neo4j.structure.Neo4jGraph;

BaseConfiguration conf = new BaseConfiguration();
conf.setProperty(Neo4jGraph.CONFIG_DIRECTORY, "/path/to/db");
conf.setProperty(Neo4jGraph.CONFIG_CONF + "." + "dbms.backup.enabled", "false");
graph = Neo4jGraph.open(conf);

Using Redis as Cache and C# client

I'm new to Redis and trying to figure out a simple way to use Redis as a local cache for my C# app. I've downloaded and run the redis-server from https://github.com/MSOpenTech/redis/releases
I can successfully store a key value and retrieve it as follows:
var redisManager = new PooledRedisClientManager("localhost:6379");
using (var redis = redisManager.GetClient())
{
    redis.Set("mykey_1", 15, TimeSpan.FromSeconds(3600));
    // get typed value from cache
    int valueFromCache = redis.Get<int>("mykey_1"); // must be 15
}
I want to limit the amount of memory Redis uses on my server, and I also want Redis to automatically purge values when memory fills up. I tried the maxmemory command, but in the redis-cli program maxmemory is not found.
Will Redis automatically purge old values for me? (I assume not.) If not, is there a way to make that the default behavior of Redis with the Set method I'm using above?
If I'm heading down the wrong path, please let me know.
The answer to your question is described here: What does Redis do when it runs out of memory?
Basically, you set maxmemory in the config file rather than as a bare maxmemory command in redis-cli (at runtime you can use CONFIG SET maxmemory <bytes> instead). You can also specify a maxmemory-policy, which tells Redis what to do when it runs out of the specified memory. According to that config file, there are a total of 6 policies that Redis can use when it runs out of memory:
volatile-lru -> remove the key with an expire set using an LRU algorithm
allkeys-lru -> remove any key according to the LRU algorithm
volatile-random -> remove a random key with an expire set
allkeys-random -> remove a random key, any key
volatile-ttl -> remove the key with the nearest expire time (minor TTL)
noeviction -> don't expire at all, just return an error on write operations
You can set those behaviours using the maxmemory-policy directive that you find in the LIMITS section of redis.conf file (above the maxmemory directive).
So you can set an expire time on every key that you store in Redis (a large expire time) and also set the volatile-ttl policy. That way, when Redis runs out of memory, the key with the smallest TTL (which is also the oldest one) is removed, according to the policy you've set.
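For reference, a minimal sketch of the relevant directives in redis.conf (the 100mb limit is illustrative):

maxmemory 100mb
maxmemory-policy volatile-ttl

The same settings can also be changed at runtime from redis-cli with CONFIG SET maxmemory 100mb and CONFIG SET maxmemory-policy volatile-ttl.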

getting a "need project id error" in Keen

I get the following error when I run this:
Keen.delete(:iron_worker_analytics, filters: [{:property_name => 'start_time', :operator => 'eq', :property_value => '0001-01-01T00:00:00Z'}])
Keen::ConfigurationError: Keen IO Exception: Project ID must be set
However, when I set the value, I get the following:
warning: already initialized constant KEEN_PROJECT_ID
iron.io/env.rb:36: warning: previous definition of KEEN_PROJECT_ID was here
Keen works fine when I run the app and load the values from an env.rb file, but from the console I cannot get past this.
I am using the ruby gem.
I figured it out. The documentation is confusing. Per the documentation:
https://github.com/keenlabs/keen-gem
The recommended way to set keys is via the environment. The keys you can set are KEEN_PROJECT_ID, KEEN_WRITE_KEY, KEEN_READ_KEY and KEEN_MASTER_KEY. You only need to specify the keys that correspond to the API calls you'll be performing. If you're using foreman, add this to your .env file:
KEEN_PROJECT_ID=aaaaaaaaaaaaaaa
KEEN_MASTER_KEY=xxxxxxxxxxxxxxx
KEEN_WRITE_KEY=yyyyyyyyyyyyyyy
KEEN_READ_KEY=zzzzzzzzzzzzzzz
If not, make a script to export the variables into your shell or put it before the command you use to start your server.
But I had to set it explicitly from the console, e.g. Keen.project_id = 'aaaaaaaaaaaaaaa', which I found by inspecting Keen.methods.
It's sort of confusing, since from the docs I assumed I just needed to set the environment variables. Maybe I'm misunderstanding the docs, but it was confusing to me at least.