Implementing a following stream - Redis

I am developing a photo-sharing app with a follow system: whoever follows user X should see user X's photos in their following feed.
I am storing my data in Redis as follows:
sadd redis_key+user_id photo_id
set redis_key+photo_id+data data_of_photo
sadd redis_key+follow+user_id follower_id
Now I want to get all the photo_ids for a user's following feed directly, without looping.

This is a classic fan-out problem, which you cannot easily do with Redis directly.
You can do it with Lua, but be aware that you will block Redis for the duration of the script.
I have an open-source project which does the same thing, but I do it in application code as someone creates a new post; I would imagine a new photo works just the same.
https://github.com/pjuu/pjuu/blob/master/pjuu/posts/backend.py#L252
I use sorted sets, though, with the Unix timestamp as the score so the feed is always in order.
As user1 creates a new photo, you look up the list of their followers. If you are using a sorted set you can get this via:
followers = ZRANGE followers:user1 0 -1
Then simply loop over every entry in that list:
for each follower in followers:
    ZADD feed:<follower> <timestamp> <photo_id>
This way the new photo is pushed out to the feed of every user who follows user1.
If you want this done on the fly then, bad news: you will need relational data and a way to query within the values, which you can't do in Redis. That is SQL, Mongo, Couch, etc. territory.
This is only pseudocode, as you did not mention which language you use.
EDIT: As per the question, this is to be done on the Redis side, so here it is as a Lua script:
-- KEYS[1]: the zset holding the poster's followers
-- ARGV[1]: the score (unix timestamp), ARGV[2]: the photo/post id
local followers = redis.call('zrange', KEYS[1], 0, -1)
for _, follower in ipairs(followers) do
    redis.call('zadd', 'items:'..follower, ARGV[1], ARGV[2])
end
return true
This takes the key of the user's followers zset to iterate over, plus a zset score and value, and adds that entry to the items feed of each follower. You will need to change it to suit your exact needs. If you want to use plain sets instead you will need SSCAN or similar; zsets are easier, though, and keep everything in order.
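For illustration, here is a rough sketch of invoking that script from application code. It assumes the StackExchange.Redis client for C# and the key names used above (followers:user1, items:<follower>); adjust them to your own schema.

// Sketch only: invoking the Lua fan-out script above with StackExchange.Redis.
// The key names mirror the examples in this answer and are assumptions, not a fixed schema.
using System;
using StackExchange.Redis;

class FanOutExample
{
    // The same script as shown above, embedded as a string constant.
    const string FanOutScript = @"
        local followers = redis.call('zrange', KEYS[1], 0, -1)
        for _, follower in ipairs(followers) do
            redis.call('zadd', 'items:'..follower, ARGV[1], ARGV[2])
        end
        return true";

    static void Main()
    {
        ConnectionMultiplexer redis = ConnectionMultiplexer.Connect("localhost");
        IDatabase db = redis.GetDatabase();

        long photoId = 42;                                      // hypothetical new photo id
        long score = DateTimeOffset.UtcNow.ToUnixTimeSeconds(); // unix timestamp as the zset score

        // KEYS[1] = the poster's followers zset; ARGV[1] = score; ARGV[2] = photo id.
        db.ScriptEvaluate(
            FanOutScript,
            new RedisKey[] { "followers:user1" },
            new RedisValue[] { score, photoId });
    }
}

ScriptEvaluate runs the script server-side in a single call, so the whole fan-out happens inside Redis (and, as noted above, blocks it while it runs).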

Related

How to tag a key in Redis so that later I can remove all keys that match this tag?

Today we save data like this:
redisClient->set($uniquePageID, $data);
and read the data back like this:
redisClient->get($uniquePageID);
But now we need to remove the data based on a userID, so we need something like this:
redisClient->set($uniquePageID, $data)->tag($userID);
so that we can remove all the keys related to this userID only, for example:
redisClient->tagDel($userID);
Can Redis do something like that?
Thanks
There's no built-in way to do that. Instead, you need to tag these pages by yourself:
When setting a page-data pair, also put the page id into a SET of the corresponding user.
When you want to remove all pages of a given user, scan the SET of the user to get the page ids of this user, and delete these pages.
When scanning the SET, you can use either the SMEMBERS or the SSCAN command, depending on the size of the SET. If it's a big SET, prefer SSCAN to avoid blocking Redis for a long time.
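As a rough sketch of that pattern (the question uses PHP/Predis, but this illustration uses the StackExchange.Redis client in C#; the key prefixes page: and user-pages: and the PageTagStore class are made up):

// Sketch of the tag-a-key-with-a-SET pattern described above.
using StackExchange.Redis;

class PageTagStore
{
    private readonly IDatabase _db;
    public PageTagStore(IDatabase db) { _db = db; }

    // Store the page data and record the page id in the user's tag set.
    public void SetPage(string userId, string pageId, string data)
    {
        _db.StringSet("page:" + pageId, data);
        _db.SetAdd("user-pages:" + userId, pageId);
    }

    // Delete every page tagged with this user, then drop the tag set itself.
    public void DeletePagesForUser(string userId)
    {
        string tagKey = "user-pages:" + userId;
        // SetScan uses SSCAN under the hood, so big sets are walked incrementally
        // instead of blocking Redis with one huge SMEMBERS call.
        foreach (RedisValue pageId in _db.SetScan(tagKey))
        {
            _db.KeyDelete("page:" + (string)pageId);
        }
        _db.KeyDelete(tagKey);
    }
}

The same two calls (SADD on write, SSCAN + DEL on cleanup) translate directly to Predis if you stay in PHP.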
I used HSET and HDEL to store and delete items like this:
$this->client = new Predis\Client(array(/* connection parameters */));
// store $value in the hash $key under the field (tag) $tag
$this->client->hset($key, $tag, $value);
// remove one or more fields (tags) from the hash
$this->client->hdel($key, $tags);
and if you want to delete the whole key, no matter which tags or values it contains, you can use DEL; it works with any data type, including hashes:
$this->client->del($key);

Add a Redis expire to a whole bunch of namespaced keys?

Say I have namespaced keys for user + id:
- a lastMessages list
- an isNice attribute
So it goes like this:
>lpush user:111:lastMessages a
>lpush user:111:lastMessages b
>lpush user:111:lastMessages c
ok
let's add the isNice prop:
>set user:111:isNice 1
So, let's see all keys for 111:
> keys user:111*
result :
1) "user:111:isNice"
2) "user:111:lastMessages"
OK, but I want to expire the namespaced entry as a whole! (So on timeout, all the keys should go at once. I don't want to start managing each namespaced key and the time it has left, because not all props are added at the same time, but I want all props to die at the same time.)
Question:
Does that mean I have to set an expire for each namespaced key?
If not, what is the correct way of doing it?
Yes, the way you have it set up, these are all just separate keys. You can think of the namespace as an understanding you have with all the people who will access the Redis store:
Okay guys, here's the deal. We're all going to use keys that look like this:
user:{user_id}:lastMessages
That way, we all understand where to look to get user number 325's last messages.
But really, there's nothing shared between user:111:lastMessages and user:111:isNice.
The fix
A way you can do what you're describing is to use a hash. You create a hash whose key is user:111 and then add the fields lastMessages and isNice.
> hset user:111 lastMessages "you are my friend!"
> hset user:111 isNice true
> expire user:111 1000
Or, all at once,
> hmset user:111 lastMessages "you are my friend!" isNice true
> expire user:111 1000
Here is a page describing redis' data types. Scroll down to where it says 'Hashes' for more information.
Edit
Ah, I hadn't noticed you were using a list.
If you don't have too many messages (under 20, say), you could serialize them into JSON and store them as one string. That's not a very good solution though.
The cleanest way might just be to set two expires.
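As a sketch of that (assuming the StackExchange.Redis client; ExpireNamespace and ExpireUser are made-up names), you can queue both EXPIRE commands in one MULTI/EXEC transaction so the two keys always share the same deadline:

// Sketch of the "two expires" option: give both namespaced keys the same TTL
// in one transaction so they disappear together.
using System;
using StackExchange.Redis;

static class ExpireNamespace
{
    public static void ExpireUser(IDatabase db, string userId, TimeSpan ttl)
    {
        ITransaction tran = db.CreateTransaction();
        // Both commands are queued locally and sent together inside MULTI/EXEC.
        tran.KeyExpireAsync($"user:{userId}:lastMessages", ttl);
        tran.KeyExpireAsync($"user:{userId}:isNice", ttl);
        tran.Execute();
    }
}

Calling ExpireUser(db, "111", TimeSpan.FromSeconds(1000)) then makes both user:111:* keys go away at the same time.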

Use an AppReceiptId to verify a user's identity in a Windows Store App?

I want to be able to use the AppReceiptId from the result of CurrentApp.GetAppReceiptAsync() and tie it to a username in my backend service, to verify that the user has actually purchased the app.
I know I'm supposed to use CurrentAppSimulator in place of CurrentApp, but CurrentAppSimulator.GetAppReceiptAsync() always returns a different, random value for AppReceiptId. This makes it difficult to test with my service.
Is there a way to make it always return the same value, other than just using a hardcoded one? I'm worried that when I replace CurrentAppSimulator with CurrentApp and submit it to the store, it won't behave the way I expect it to. In the real world, the AppReceiptId won't ever change, right?
The code I use to get the AppReceiptId:
var receiptString = await CurrentAppSimulator.GetAppReceiptAsync();
XmlDocument doc = new XmlDocument();
doc.LoadXml(receiptString);

// Walk Receipt -> AppReceipt and read its Id attribute.
var ReceiptNode = (from s in doc.ChildNodes
                   where s.NodeName == "Receipt"
                   select s).Single();
var AppReceiptNode = (from s in ReceiptNode.ChildNodes
                      where s.NodeName == "AppReceipt"
                      select s).Single();
var idNode = (from s in AppReceiptNode.Attributes
              where s.NodeName == "Id"
              select s).Single();
string id = idNode.NodeValue.ToString();
id will always be some random Guid.
CurrentApp.GetAppReceiptAsync().Id is a unique ID for the actual purchase. Although it does technically represent a unique purchase made by a single Windows ID, it doesn't represent the user themselves and I don't think there's any guarantee on the durability of that ID.
Would you be better suited using the Windows Live SDK to track the actual user identity across devices?
At any rate, to answer your original question, no I don't believe there's any way to make it return the same ID all the time. The only logical place for that functionality would be in the WindowsStoreProxy.xml file, and I don't see anything in the schema that would allow you to specify this information.

NHibernate AliasToBeanResultTransformer & Collections

I would like to return a DTO from my data layer which would also contain child collections...such as this:
Audio
  - Title
  - Description
  - Filename
  - Tags
    - TagName
  - Comments
    - PersonName
    - CommentText
Here is a basic query so far, but I'm not sure how to transform the child collections from my entity to the DTO.
var query = Session.CreateCriteria<Audio>("audio")
    .SetProjection(
        Projections.ProjectionList()
            .Add(Projections.Property<Audio>(x => x.Title))
            .Add(Projections.Property<Audio>(x => x.Description))
            .Add(Projections.Property<Audio>(x => x.Filename)))
    .SetResultTransformer(new AliasToBeanResultTransformer(typeof(AudioDto)))
    .List<AudioDto>();
Is this even possible, or is there another recommended way of doing this?
UPDATE:
Just want to add a little more information about my scenario... I want to return a list of Audio items to the currently logged-in user along with some associated entities such as tags, comments, etc. These are fairly straightforward using MultiQuery / Future.
However, when displaying the audio items to the user, I also want to display three other things:
Whether they have added this audio item to their list of favourites
Whether they have given this audio the 'thumbs up'
Whether the logged-in user is 'following' the owner of this audio
Favourites : Audio -> HasMany -> AudioUserFavourites
Thumbs Up : Audio -> HasManyToMany -> UserAccount
Following Owner : Audio -> References -> UserAccount -> ManyToMany -> UserAccount
Hope this makes sense... if not I'll try and explain again. How can I eagerly load these extra details for each Audio entity returned? I also need all this information in pages of 20.
I looked at batch fetching, but this appears to fetch ALL thumbs-ups for each Audio entity, rather than checking whether only the logged-in user has thumbed it.
Sorry for rambling :-)
Paul
If you want to fetch your Audio objects with both the Tags and Comments collections populated, have a look at Ayende Rahien's blog: http://ayende.com/blog/4367/eagerly-loading-entity-associations-efficiently-with-nhibernate.
You don't need to use DTOs for this; you can get back a list of Audio with its collections even if the collections are lazily loaded by default. You would create two future queries; the first will fetch Audio joined to Tags, and the second will fetch Audio joined to Comments. It works because by the time the second query result is being processed, the session cache already has the Audio objects in it; NHibernate grabs the Audio from the cache instead of rehydrating it, and then fills in the second collection.
You don't need to use future queries for this; it still works if you just execute the two queries sequentially, but using futures will result in just one round trip to the database, making it faster.
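As a rough sketch of those two future queries (criteria API; the collection property names Tags and Comments, the AudioQueries wrapper, and the DistinctRootEntity transformer are assumptions, not taken from the question):

// Sketch only: two future criteria queries that each join one collection.
using System.Collections.Generic;
using System.Linq;
using NHibernate;
using NHibernate.Transform;

public static class AudioQueries
{
    public static IList<Audio> GetAudioWithTagsAndComments(ISession session)
    {
        // First future: Audio joined to Tags (deferred, nothing hits the database yet).
        var withTags = session.CreateCriteria<Audio>()
            .SetFetchMode("Tags", FetchMode.Join)
            .SetResultTransformer(Transformers.DistinctRootEntity)
            .Future<Audio>();

        // Second future: Audio joined to Comments.
        session.CreateCriteria<Audio>()
            .SetFetchMode("Comments", FetchMode.Join)
            .SetResultTransformer(Transformers.DistinctRootEntity)
            .Future<Audio>();

        // Enumerating a future triggers one round trip that runs both queries;
        // the session cache merges both collections onto the same Audio instances.
        return withTags.ToList();
    }
}

DistinctRootEntity simply keeps the joined rows from producing duplicate Audio roots in each result.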

Advice on polling for new documents in RavenDB

I want to poll for new documents in my RavenDB. What is the recommended way of doing this? Can I use the IndexTimestamp, or can I rely on the order of the documents?
I guess I want to do it either in two steps:
1. Check if there is anything new, and if so:
1.1. Get the latest X documents.
Or in one step: get the latest X documents and have it either return those or tell me that there's nothing new, according to some argument I sent.
FYI: I have no CLR objects corresponding to the documents.
I would not poll for it; I would use the Changes API included with RavenDB to get a continuous stream of document changes from RavenDB.
Check out the Changes API here: http://ravendb.net/docs/2.0/client-api/changes-api
I personally would use the Changes API with some kind of message bus (RabbitMQ) to make sure every change is processed and the whole thing stays resilient.
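A rough sketch of such a subscription, assuming the RavenDB 2.x client (ChangesListener and Listen are hypothetical names; adapt to your setup):

// Sketch only: push document changes instead of polling.
using System;
using Raven.Client;

public static class ChangesListener
{
    public static IDisposable Listen(IDocumentStore store)
    {
        // ForAllDocuments() raises a notification for every changed document;
        // the handler could forward change.Id to a message bus (e.g. RabbitMQ).
        return store.Changes()
            .ForAllDocuments()
            .Subscribe(change => Console.WriteLine("{0}: {1}", change.Type, change.Id));
    }
}

Dispose the returned subscription when you no longer want notifications.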
If you still want to poll, just create an index on your date/time field and sort in descending order:
var result = session.Query<Orders>()
    .OrderByDescending(x => x.Created)
    .Take(10)
    .ToList();
If you need to process every document, you might want to create marker documents that include the id of each document you get, so you can make sure none of them is processed twice.
To do that:
marker id : polling/processed/order/1
Index:
from o in orders
let processed = LoadDocument("polling/processed/" + o.Id)
select new {
    WasProcessed = processed != null,
    Created = o.Created
}
A few options for you, hope that helps :)