I am using Redis in a Node application for caching data, and now I want to access and modify the stored data from a Django application on the same server, but I can't access the data.
Django connection:
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/0",
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient",
        }
    }
}
Using the KEYS * command in the terminal:
$ redis-cli
127.0.0.1:6379> keys *
1) "sess:Ok0eYOko5WaV7njfX04qgqG1oYe0xiL1" -> this key is set in node
2) ":1:from-django" -> this key is set in django
Accessing keys in the Django application:
from django.core.cache import cache

keys = cache.keys('*')
print(keys)  # prints only one key => ['from-django']
I can't access the first key, which was set by the Node application, and the keys stored by Django are prefixed with :1: by default!
I want to share all keys between Node and Django, but each application only sees its own keys.
Any idea?
You can access all of the data from anywhere, but you are working with Redis through a cache framework, and every cache system has its own key layout (django-redis, for example, prepends a version prefix such as :1: to each key). To share raw keys between applications, work with Redis as a plain database and scan it yourself.
Use the Python redis package to access the full Redis keyspace from your application.
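As a minimal sketch, assuming a local Redis on the default port and the redis-py package, you can bypass Django's cache layer entirely and see every key, including the ones written by Node:

import redis

# Connect to the same logical database the Node app uses (db 0 here).
r = redis.Redis(host="127.0.0.1", port=6379, db=0, decode_responses=True)

# SCAN sees every key, regardless of which application created it.
for key in r.scan_iter("*"):
    print(key)

# Plain string values (such as the Node session above) can be read directly.
print(r.get("sess:Ok0eYOko5WaV7njfX04qgqG1oYe0xiL1"))

Bear in mind that values written through django-redis are pickled by default, so if Node needs to read Django-written values (or vice versa), both sides should agree on a plain serialization format such as JSON.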
I followed a tutorial on the web to set up Spring Cache with Redis; my function looks like this:
@Cacheable(value = "post-single", key = "#id", unless = "#result.shares < 500")
@GetMapping("/{id}")
public Post getPostByID(@PathVariable String id) throws PostNotFoundException {
    log.info("get post with id {}", id);
    return postService.getPostByID(id);
}
As I understand it, the value inside @Cacheable is the cache name and key is the cache key inside that cache name. I also know Redis is an in-memory key/value store. But now I'm confused about how Spring stores the cache name in Redis, because it looks like Redis only manages keys and values, not cache names.
Looking for anyone who can explain this to me.
Thanks in advance
Spring uses the cache name as the key prefix when storing your data. For example, when you call your endpoint with id=1 you will see this key in Redis:
post-single::1
You can customize the prefix format through the CacheKeyPrefix class.
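You can verify this layout from any Redis client; as a small sketch with Python's redis-py against a default local Redis:

import redis

r = redis.Redis(host="localhost", port=6379)

# Spring's default key layout is "<cacheName>::<key>".
print(r.keys("post-single::*"))  # e.g. [b'post-single::1']

The value under that key is whatever your cache manager's serializer produced (JDK serialization by default), so it may not be human-readable; the key layout is usually the interesting part here.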
We use an Azure Data Factory copy pipeline to transfer data from REST APIs to an Azure SQL Database, and it is doing some strange things. Because we loop over a set of APIs that need to be transferred, the mapping in the copy activity is left empty.
But for one API the automatic mapping goes wrong, even though the destination table is created with all the needed columns and correct datatypes based on the received metadata. When we run the pipeline for that specific API, the following message is shown:
{ "errorCode": "2200", "message": "ErrorCode=SchemaMappingFailedInHierarchicalToTabularStage,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Failed to process hierarchical to tabular stage, error message: Ticks must be between DateTime.MinValue.Ticks and DateTime.MaxValue.Ticks.\r\nParameter name: ticks,Source=Microsoft.DataTransfer.ClientLibrary,'", "failureType": "UserError", "target": "Copy data1", "details": [] }
As a test, we did the mapping for that API manually by using the "Import Schema" option on the Mapping page; there we see that all the fields are correctly mapped. When we execute the pipeline again using that mapping, everything works fine.
But of course, we don't want to use a manual mapping, because the pipeline is used in a loop for the other APIs as well.
I am using the Orion Context Broker, an IoT Agent and Cygnus to handle the data of several devices and persist it into a MongoDB. It's working, but I don't know if I'm doing it the Fiware way, because after reading the documentation I am still confused about some things:
I don't completely understand the difference between an Entity and an IoT Entity (or device?). My guess is that it is a matter of how they provide context data and the nature of the entity being modelled, but I would be grateful if someone could clarify it. I am especially confused because the creation of each entity type is different (it seems that I can't initialize an IoT Entity at creation time, which I can when dealing with a regular Entity).
I can only persist the data of IoT Entities. Is it possible to have a Short Term History of a regular Entity?
I don't understand why the STH data repeats attributes that have not changed. If I have an IoT Entity with two attributes, 'a' and 'b', and I modify both of them, an STH entry is created for each one, which is fine. However, if I then change the value of attribute 'b', two more records are created: one for 'a' (which hasn't changed and reflects the same value it already had) and one for 'b'. Could someone explain this behavior to me?
1. Entities vs IoT Entities
I assume that what you mean by an IoT entity is the entry made by the IoT Agent upon receiving a sensor reading from a provisioned device.
Logically there is no difference between an entity created and maintained by an IoT Agent and an entity created and maintained by any other service making NGSI requests to the context broker.
Your so-called IoT Entity is merely a construct where an IoT Agent does all the heavy lifting for you and converts the data coming from a device in a proprietary format into the NGSI standard.
2. Short Term History of a regular Entity
To create Short Term History you will need a separate Generic Enabler such as STH-Comet or QuantumLeap. Both of these enablers receive updates from Orion using the subscriptions mechanism. If you set up your IoT data using one fiware-service header and set up your non-IoT data using another fiware-service you can easily set up a subscription to differentiate between the two.
e.g. the following subscription:
curl -iX POST \
'http://localhost:1026/v2/subscriptions/' \
-H 'Content-Type: application/json' \
-H 'fiware-service: iotdata' \
-H 'fiware-servicepath: /' \
-d '<body>'
This subscription will only apply to entities created under the iotdata service (set via the fiware-service header), which is the service used when you provision your IoT devices.
3. Repeating attributes that have not changed.
The <body> of the subscription can be used to narrow down the conditions under which the historical data is persisted.
The entities, condition and attrs fields are the important parts of the body:
subject": {
"entities": [
{
"idPattern": "Motion.*"
}
],
"condition": {
"attrs": [
"count"
]
}
},
"notification": {
"http": {
"url": "http://quantumleap:8668/v2/notify"
},
"attrs": [
"count"
],
"metadata": ["dateCreated", "dateModified"]
},
"throttling": 1
}'
The subscription defined above will only fire if the count attribute changes, and it will only persist the count attribute. If you do not limit attrs, then multiple lines will be persisted to the database. Similarly, if you do not limit the condition, then multiple entries of count will be persisted when other attributes are updated.
There are 3 hashes in my Redis database:
set:recentbooks
set:badbooks
set:funnybooks
All hashes contain book IDs as keys.
I want to remove the book with ID 234 from all hashes.
How can I do this:
Lua Scripting
Pipeline
Other?
Using the ServiceStack Redis client API, you can pipeline your delete requests like this:
var client = new RedisClient("localhost", 6379);
using (var pipeline = client.CreatePipeline())
{
    pipeline.QueueCommand(r => r.RemoveEntryFromHash("set:recentbooks", "234"));
    pipeline.QueueCommand(r => r.RemoveEntryFromHash("set:badbooks", "234"));
    pipeline.QueueCommand(r => r.RemoveEntryFromHash("set:funnybooks", "234"));
    // All deletes will be sent at once.
    pipeline.Flush();
}
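If you are working from Python instead, the same pipelined deletes can be sketched with redis-py (assuming a local Redis on the default port):

import redis

r = redis.Redis(host="localhost", port=6379)

# Queue the three HDELs and send them to the server in one round trip.
with r.pipeline(transaction=False) as pipe:
    for key in ("set:recentbooks", "set:badbooks", "set:funnybooks"):
        pipe.hdel(key, "234")
    pipe.execute()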
Using a Lua script, it's easy. Note that the book ID is data rather than a key name, so it belongs in ARGV, not KEYS:
EVAL "redis.call('HDEL', KEYS[1], ARGV[1]);
redis.call('HDEL', KEYS[2], ARGV[1]);
redis.call('HDEL', KEYS[3], ARGV[1]);"
3 set:recentbooks set:badbooks set:funnybooks 234
I've never used ServiceStack, but with the info above you have what's required to invoke the Redis client in ServiceStack to delete the entries.
You can also write the Lua script in a file and call it from the shell with parameters (the $(cat ...) substitution is done by the shell, so this runs outside the redis-cli prompt):
$ redis-cli EVAL "$(cat myscript.lua)" 3 set:recentbooks set:badbooks set:funnybooks 234
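As an illustrative sketch, the same script can also be driven from Python with redis-py, which handles the KEYS/ARGV plumbing for you:

import redis

r = redis.Redis(host="localhost", port=6379)

# Delete one hash field (the book ID in ARGV[1]) from every hash passed in KEYS.
script = """
for i = 1, #KEYS do
    redis.call('HDEL', KEYS[i], ARGV[1])
end
"""
remove_book = r.register_script(script)
remove_book(keys=["set:recentbooks", "set:badbooks", "set:funnybooks"], args=["234"])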
A User has a DisplayName, and it must be unique across Users.
I want to create a User, but first I have to check the display name (a DisplayName cannot be duplicated between Users).
I've checked the ServiceStack examples and I could not find a transactional insert/update with a validation check.
How can I do this? I don't want to write "validation tasks" for the Redis DB, and I don't want inconsistency in the DB.
The ServiceStack.Redis client does have support for Redis's WATCH and transactions, where these Redis commands:
WATCH mykey
test = EXISTS mykey
MULTI
SET mykey $val
EXEC
Can be accomplished with:
var redis = new RedisClient();
redis.Watch("mykey");
// Abort if the name is already taken; WATCH guards against a race with other clients.
if (redis.ContainsKey("mykey")) return;
using (var trans = redis.CreateTransaction())
{
    trans.QueueCommand(r => r.Set("mykey", "val"));
    // Commit fails if the watched key was modified since WATCH.
    trans.Commit();
}
This way it is possible to perform Redis check-and-set transactions. More information here.
There is a better example using PHP here.
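For reference, here is a minimal sketch of the same check-and-set pattern in Python with redis-py; the user:displayname:* key scheme is a hypothetical example, not something prescribed by ServiceStack:

import redis

r = redis.Redis(host="localhost", port=6379)

def create_user_if_name_free(display_name, user_id):
    key = f"user:displayname:{display_name}"  # hypothetical key scheme
    with r.pipeline() as pipe:
        while True:
            try:
                pipe.watch(key)        # EXEC will fail if this key changes
                if pipe.exists(key):
                    pipe.unwatch()
                    return False       # name already taken
                pipe.multi()
                pipe.set(key, user_id)
                pipe.execute()         # raises WatchError on a race
                return True
            except redis.WatchError:
                continue               # another client grabbed it; retry

The retry loop is the standard redis-py idiom for WATCH: if another client modifies the key between WATCH and EXEC, the transaction is discarded and the check is repeated, so no duplicate DisplayName can slip through.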