I was doing some first steps with gundb and it looks nice. But I'm having trouble coming up with a solution I need for an application I'm planning.
It is actually a pretty common use case: there should be a group of users who are allowed to write posts, but all users should be able to read them.
The documentation says a lot about how to handle read access, but I couldn't find anything about how to handle write access to some data.
Is there a code example somewhere for this? And how does gun handle write permissions in general (some documentation / explanation)?
#gwildu restricted write access with public read is currently easier (beta) in SEA (Security, Encryption, Authorization, GUN's permissions framework) than private read access (although private read is perfectly possible already by using SEA's lower-level utility API directly, docs here), and we're working on improving its ease of use, abstractions/API, and more.
So the first thing you need to know is summarized in this fairly short guide:
https://gun.eco/docs/Auth
That will introduce you to GUN's cryptographic concepts, as well as how easy it is to create a user and log in.
The next step: you'll see that the gun.user() object has the same API as gun, but writes to it automatically default to being write-restricted (to that user, so nobody else can tamper with it) and publicly readable.
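For example, a minimal sketch (assuming Node.js with the gun package; the alias, passphrase, and data below are made up):

// Create a user, log in, and write signed, publicly readable data.
const Gun = require('gun');
require('gun/sea'); // loads SEA so gun.user() is available

const gun = Gun();
const user = gun.user();

user.create('alice', 'long passphrase', () => {
  user.auth('alice', 'long passphrase', () => {
    // Only this user's keypair can write here; anyone can read it.
    user.get('posts').get('first').put({ title: 'hello', body: 'world' });
  });
});

// Another peer can read Alice's data by her public key (obtained out of band):
// gun.user(alicePublicKey).get('posts').get('first').on(data => console.log(data));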
So as of today (2018/6/27) you can use this to achieve what you want. All you need to know is which users you want a write to come from, and then subscribe to them. With a bit of extra code, you can do this today (hit up https://gitter.im/amark/gun for help!)
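That extra code might look roughly like this (a sketch only; the 'posts' path and the list of trusted public keys are assumptions, and you would still merge and validate the results yourself):

// Subscribe to the same path on each trusted writer's signed graph.
const Gun = require('gun');
require('gun/sea');
const gun = Gun();

// Public keys of the writers you trust, obtained out of band (placeholders here).
const trustedPubKeys = ['<alicePublicKey>', '<davePublicKey>'];

trustedPubKeys.forEach((pub) => {
  gun.user(pub)   // read-only view of that user's graph
    .get('posts') // whatever path the writers use
    .map()        // iterate the set under 'posts'
    .on((post, id) => {
      console.log('update from', pub, id, post);
    });
});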
In the future, you can expect an API like this:
user.get('thingOthers').get('canWriteTo').trust(alice)
Now when you subscribe to the data, you'd get back realtime updates from those who you have allowed to write to the data:
user.get('thingOthers').get('canWriteTo').on(data => console.log(data))
Depending upon your use case (and possibly a slightly different API, but hopefully not), this would also work for others reading your data.
For instance, assume you are user Bob (so Bob added Alice as write-access), and some third-party viewer, say Carl, wants to read Bob's data. If Carl were to:
bob.get('thingOthers').get('canWriteTo').on(data => console.log(data))
It would subscribe to realtime updates from only those allowed to write to the data.
Why is there a difference here? What is the nuance you should be aware of?
A) A person may want to trust what somebody else says but NOT trust somebody else to speak on their behalf.
B) If you do trust somebody to speak on your behalf, viewers (like Carl) now must trust an authorizer (Bob) that Alice is permitted to say something. This could become dangerously centralized!!! (From a trust/permission model, even if the algorithms are P2P/decentralized underneath, which GUN is.)
So while (A) may be a slightly more complicated nuance to understand or manage, it may be the better route. (B) is a little bit easier to think about, but could lead to centralized authority (as in, all the Carls in the world just trust Bob to make decisions for them, when it would be better if the Carls [using (A)] decided who they think should have authority).
So for instance, Bob added Alice as write-access, but Carl could too! That way if Bob and Carl ever disagree on who should have write access, they don't have to trust each other! They would both see what they allow, but not what the other allows.
Obviously, for a lot of applications, it makes sense for everybody to be "on the same page" and have all the Carls just trust Bob to determine who can write data. So (B) is still a good option; just know the implications, and know there are alternative models for trust!
OK, my last question got no answers, so I suspect I'm going about this the wrong way.
I'm developing a REST web API for a mobile application, and with regard to REST best practices I don't know how to handle a many-to-many relationship.
I have two tables, Wallets and Categories, with a many-to-many relationship between them, since a category may be associated with several wallets and a wallet may own several categories.
Currently this database is used by a non-REST website:
when a user creates a new category, he chooses from the list of his own wallets which ones to connect it to, and with this single POST call the category is created and connected to the wallets.
I don't think that replicating this behaviour is compliant with REST best practices.
My first idea was to "expose" the connection between categories and wallets with this form:
http://localhost:8000/categories/77/wallets/4
but I had the problem I wrote about in my previous question, and I don't think this is the right way.
Does anyone have a valid method to manage a many-to-many relationship in accordance with REST best practices?
Thanks in advance.
Namespacing wallets by a category is fine, as in /categories/77/wallets/4. You can also consider a more concise scheme like /categories/77/4 or /wallets/77/4 if there are only wallets in a category.
However, you don't have to namespace. Your wallets presumably have their own IDs, so you could also just expose them as /wallets/4.
Is it worth the effort? I think it can be a good practice if your URLs are also on a public website (in which case you would probably want to support slug IDs as well, e.g. /categories/luxury/wallets/acme). If not, you should be aware it will be a little more configuration work on the server-side and a little more work for clients (clients will have to be aware of 2 IDs instead of 1).
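As a rough illustration, a sketch using Express (the route shapes and the join-table lookup are assumptions, not a prescribed implementation):

// The two addressing schemes side by side, plus the link resource itself.
const express = require('express');
const app = express();

// Nested (namespaced) form: a wallet addressed within a category.
app.get('/categories/:categoryId/wallets/:walletId', (req, res) => {
  const { categoryId, walletId } = req.params;
  // look up the (categoryId, walletId) pair in the join table here
  res.json({ categoryId, walletId });
});

// Flat form: wallets have their own globally unique IDs.
app.get('/wallets/:walletId', (req, res) => {
  res.json({ walletId: req.params.walletId });
});

// The many-to-many link itself can be created and removed as its own resource:
app.put('/categories/:categoryId/wallets/:walletId', (req, res) => {
  // insert a row into the category_wallets join table (sketch only)
  res.sendStatus(204);
});

app.delete('/categories/:categoryId/wallets/:walletId', (req, res) => {
  // delete the corresponding join-table row (sketch only)
  res.sendStatus(204);
});

app.listen(8000);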
I'm a total noob when it comes to cryptography but I believe this falls under the "zero knowledge" category.
I have two associated pieces of information:
tag - Known by both parties. Unique per scenario.
identity - Known by only one party. Potentially associated to multiple tags. Comes from a pool known by both parties.
I need a way to prevent the party with the association from changing the value of the identity. There are around one hundred concurrent associations per scenario. The pool of potential identities can be relatively small, even smaller than the number of tags.
The most primitive option would be to hash the tag and identity together but with such a small pool of potential identities I fear it would be trivial to brute force the hash...
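To make that concern concrete, here is a minimal sketch (Node.js crypto; the tag, identity names, and pool are made up) of the primitive option and why a bare hash over a small identity pool is trivial to brute-force:

const crypto = require('crypto');

const commit = (tag, identity) =>
  crypto.createHash('sha256').update(`${tag}:${identity}`).digest('hex');

// The knowing party publishes this up front:
const published = commit('tag-42', 'identity-7');

// Later, anyone holding the tag and the small identity pool can simply try them all:
const pool = ['identity-1', 'identity-2', 'identity-7'];
const guessed = pool.find((id) => commit('tag-42', id) === published);
console.log(guessed); // 'identity-7', so the bare hash leaks the identity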
During the scenario, more and more of these associations will become public. At that point, at the latest, I should be able to confirm that the other party did not modify the association. I don't really have to confirm this before then, because unrevealed associations are not relevant. I just need to prevent the knowing party from picking and choosing what to reveal.
Is such a thing even possible? How could it be done? How difficult would it be to implement?
You shouldn't implement this yourself.
You have two parties, Alice and Bob. Alice has the (identity, tag) pair. Bob only has the tag.
Alice wants to prove to Bob that she has an identity in that pair which she did not change, but she does not want to reveal that identity to Bob.
What you want is a "signature scheme with efficient protocols". I know of no APIs that expose this functionality. However, these are widely used in anonymous credential systems, which can be used for your purposes.
Thankfully, there are two systems that support this type of thing. One is IBM Idemix which uses the above technique and is where you should look first. The other is Microsoft's U-Prove.
In an ideal RESTful API that supports multiple accounts, should each resource have its own unique identifier across the entire system, or is it OK if that identifier is only unique within the specific account it belongs to?
Are there any pros and cons for each scenario?
To give an example.
Would this be fine from the REST principles?
http://api.example.com/account/1/users/1
...
http://api.example.com/account/50/users/1
or would this approach be recommended?
http://api.example.com/account/1/users/{UNIQUE_IDENTIFIER}
...
http://api.example.com/account/50/users/{ANOTHER_UNIQUE_IDENTIFIER}
You reveal valid user numbers by always having the first user as 1. Someone then knows that any account will also have a user 1. I'm not saying that you should hide user IDs just through obscurity but why make it easy for someone to find the user IDs in another account?
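If you want identifiers that can't be guessed or enumerated, one common approach is to mint a random UUID when the user is created and use that in the URL. A sketch (the store persistence layer is an assumption):

// Assign a random, non-enumerable identifier at creation time.
const { randomUUID } = require('crypto');

function createUser(accountId, attributes, store) {
  const user = { id: randomUUID(), accountId, ...attributes };
  store.save(user); // store is a placeholder for your persistence layer
  return user;      // exposed as /account/{accountId}/users/{user.id}
}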
All that really matters is that each resource has a unique identifier. Both of your examples accomplish that, so you seem to be okay (RESTfully speaking).
I don't see any compelling reason to use one over the other. I'd choose whatever makes more sense for your implementation.
From the perspective of an external system using your REST API, the entire address should be considered the "identifier" for that resource object, so your first example is fine.
In my community, every user should only have one account.
So I need a solution to verify that the specific account is the only one the user owns. For the time being, I use email verification. But I don't really need the users' email addresses; I just try to prevent multiple accounts per person.
But this doesn't work, of course. People create temporary email addresses, or they own several addresses anyway. So they register using different email addresses and thus get more than one account, which is not allowed.
So I need a better solution than the (easy to circumvent) email verification. By the way, I do not want to use OpenID, Facebook Connect etc.
The requirements:
verification method must be accessible for all users
there should be no costs for the user (or at most $1)
the verification has to be safe (safer than the email approach)
the user should not be required to expose too many private details
...
Do you have ideas for good approaches? Thank you very much in advance!
Additional information:
My community is a browser game, namely a soccer manager game. The thing which makes multiple accounts attractive is that users can trade their players. So if you have two accounts, you can buy weak players for excessive prices which no "real" buyer would pay. So your "first account" gets huge amounts of money while the "second account" becomes poor. But you don't have to care: Just create another account to make the first one richer.
You should ask for something more unique than an email address. But there is no way to be absolutely sure a player doesn't own two accounts.
The IP solution is not a solution, as people playing from a company/school/3G connection will share the same IP. Also, changing your IP is easy (reset the router, use a proxy, switch between 3G and Wi-Fi).
Some websites (job offers, ...) ask you for an official ID number (national ID, passport, social security, driver's licence, Visa card (without the security number, so people will feel safe that you won't charge them), ...).
This solution has a few drawbacks:
minors don't always have an ID / Visa card
people don't like to give away this kind of info (in fact, it depends where you live: in Spain, for example, it is very common to be asked for an ID number)
people may own more than one Visa card.
it is possible to generate valid ID / Visa card numbers.
An alternative way:
ask for a fee of $1
to be allowed to trade more than X players / spend more than X money
people who pay the fee get some advantages: fewer ads, extra players, ...
paying a fee will limit the creation of multiple accounts
the fee can be paid using a premium-rate phone number (some companies provide an international system)
the payment medium could be used as an ID (Visa card number)
Put some restrictions on new accounts (like SO does).
e.g. "you have to play at least 1 hour before trading a player"
e.g. "you have to play at least 3 hours before trading more than 3 players"
Use logic to detect multiple accounts
use cookies to detect multiple accounts
check the last connection times of both players before a transaction (if player A logs out 1 minute before player B logs in, something is going on; see the sketch below)
My recommendation:
Use a mix of all those methods, but keep the user experience fluid, without any "fill in this form now to continue" step.
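A rough sketch of that last-connection-time check (the sessions helpers and the two-minute threshold are assumptions about your own data layer):

// Flag trades where the seller logs out right before the buyer logs in.
const MS_PER_MINUTE = 60 * 1000;

function looksLikeMultiAccount(trade, sessions) {
  // sessions.lastLogout / sessions.lastLogin are placeholders for your data layer,
  // returning timestamps in milliseconds (or null if unknown).
  const sellerLogout = sessions.lastLogout(trade.sellerId);
  const buyerLogin = sessions.lastLogin(trade.buyerId);
  if (!sellerLogout || !buyerLogin) return false;

  // A gap of under two minutes is suspicious; the threshold is made up.
  return Math.abs(buyerLogin - sellerLogout) < 2 * MS_PER_MINUTE;
}

// Usage: flag the trade for a game master to review rather than blocking it outright.
// if (looksLikeMultiAccount(trade, sessionStore)) flagForReview(trade);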
Very interesting question! The basic problem here is multi-part:
1. Opening an account is trivial (because creating new email IDs is trivial).
2. But the effect of opening an account in the game is NOT trivial. Opening a new account basically gives you a certain sum of money with which to buy players.
3. Transferring money to another account is trivial (by trading players).
Combining 1 & 2, you have the problem that new players have an unfair advantage (which they would not have in the real world). This is probably okay, as it drives new users to your site.
However, adding 3 to the mix, you have the problem that new players can easily transfer their advantage to the old players. This allows old users to game the system, ruining the fun for others.
The solution can be removing any of 1, 2, or 3.
Remove 1 - This is the part you are focusing on. As others have suggested, this is impossible to do with 100% accuracy. But there are ways that will be good enough, depending on how stringent your criterion for "good enough" is. I think the best compromise is to ask the user for their mobile phone numbers. It's effective and allows you to contact your users in one more way. Another way would be to make your service "invite only" - assuring that there is a well defined "trail" of invites that can uniquely identify users.
Remove 2 - No one has suggested this which is a bit surprising. Don't give new users a bunch of money just for signing up! Make them work for it, similar to raising seed capital in the real world. Does your soccer simulation have social aspects? How about only giving the users money once their "friend" count goes above a certain number (increasing the number of potential investors who will give them money)?
Remove 3 - Someone else has already posted the best solution for this. Adopt an SO like strategy where a new user has to play for 3 hours before they are allowed to transfer players. Or maybe add a "training" stage to your game which forces a new player to prove their worth by making enough money in a simulated environment before they are allowed to play with the real users.
Or any combination of the above! Combined with heuristics like matching IP addresses and looking for suspicious transactions, it is possible to make cheating on the game completely unviable.
Of course a final thing you need to keep in mind is that it is just a game. If someone goes to a lot of trouble just to gain a little bit of advantage in your simulation, they probably deserve to keep it. As long as everyone is having fun!
I know this is probably not what you expected, but...
My suggestion would be to discourage people from creating another account by offering some bonuses if they use the same account for a longer period, a kind of loyalty program. For some reason using a new account gives some advantages; let's eliminate them. There are a lot of smart people here, so if you share more details on the advantages, someone could come up with an idea. I am fully convinced this is on-topic on SO, though.
We have implemented this by hiding the registration form. Our customers only see the login form, where we use their mobile number as the username and send the password by text message.
The backend systems match the mobile number against our master customer database, which enforces that the mobile number is unique.
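A sketch of that flow (the sendSms and db helpers and the code format are assumptions; any real SMS gateway API will differ):

// The login form posts a mobile number; we mint a one-time password,
// tie it to the (unique) customer record, and text it to the user.
const { randomInt } = require('crypto');

async function requestLoginCode(mobileNumber, db, sendSms) {
  // The master customer DB enforces one account per mobile number.
  const customer = await db.customers.findByMobile(mobileNumber);
  if (!customer) throw new Error('unknown mobile number');

  const code = String(randomInt(100000, 1000000)); // 6-digit one-time password
  await db.loginCodes.save({ customerId: customer.id, code, createdAt: Date.now() });

  await sendSms(mobileNumber, `Your login code is ${code}`);
}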
Here is an idea:
Store a UUID in a cookie on the client. On each user login, store the UUID from the cookie in relation to the account entity in the database.
Do the same with the IP addresses instead of the UUID.
After that, write an interface for your game masters that:
shows different account names with the same IP (within the last x hours)
shows different account names with the same UUID (no matter how long ago)
highlights datasets from the two points above where actions (like player transfers) happened that can be abused by using multiple accounts
I do not think you should solve this problem by preventing people from having two or more accounts. That is neither possible nor effective. Make it easier to find such abusive activity and (automatically, temporarily) ban these people.
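A sketch of the cookie part (using Express with cookie-parser; the cookie name, db layer, and retention period are assumptions):

// Tag each browser with a long-lived UUID cookie and record it on login.
const express = require('express');
const cookieParser = require('cookie-parser');
const { randomUUID } = require('crypto');

const app = express();
app.use(cookieParser());

// Middleware: ensure every client carries a device UUID cookie.
app.use((req, res, next) => {
  let deviceId = req.cookies.deviceId;
  if (!deviceId) {
    deviceId = randomUUID();
    res.cookie('deviceId', deviceId, { maxAge: 365 * 24 * 3600 * 1000, httpOnly: true });
  }
  req.deviceId = deviceId;
  next();
});

// On login, record (accountId, deviceId, ip) so game masters can later query
// accounts that share a device UUID or an IP address. db is a placeholder.
async function recordLogin(db, req, accountId) {
  await db.logins.insert({
    accountId,
    deviceId: req.deviceId,
    ip: req.ip,
    at: new Date(),
  });
}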
It's impossible to accomplish this with a program.
The closest you can do is to check the IP address. But it can change, and proxies exist.
Then you could get the computer's MAC address, but a network card can be changed, and so can the computer.
Then there is one way to do this, but you need to see the people face to face: hand them a piece of paper with a unique code. They can only sign up if they have the code.
The most effective solution might be the use of keystroke biometrics: a person can be identified by the way they type a sentence.
This company provides a product which can be used to implement your requirements: http://www.psylock.com/en
I think 1 account per email address should be good enough for your needs. After all, account verification doesn't have to end right after signup.
You can publish the IP address of the computer each message was posted from to help your users detect when someone is using multiple accounts from the same computer, and you can use a ranking system to discourage people from using temporary accounts.
Do your game dynamics allow for you to require that both users be online for a trade to occur? If so, you can verify the IP addresses of both users involved in a trade, which would be the same unless the user was paying for multiple internet connections and accessing two accounts from separate machines.
Address the exact scenario that you're saying is a problem.
Keep track of the expected/fair trade value of players and prevent blatantly lopsided trades, especially for new accounts. Assume the vast majority of users in your system are non-cheaters.
You can also do things like trickle in funds/points for non-trading actions or automatically over time, etc.
Have them enter their phone number and send a text message to it. Then keep a unique list of all the cell phone numbers. Most people have one cell phone, and they aren't going to borrow a friend's just to create a second account.
http://en.wikipedia.org/wiki/List_of_SMS_gateways
I would suggest an approach using two initiatives:
1) Don't allow brand new accounts to perform trades. Accounts must go through a waiting period and prove they are legitimate by performing some non-trade actions.
2) Publicize the fact that cheaters will be disqualified and punished. Periodically perform searches for accounts being used to dump bad players and investigate. Ban/disqualify cheaters and publicize the bans so that people know the rules are being enforced.
No method would be foolproof but the threat of punishment should minimize cheating.
Actually, you can use fingerprintjs to track every user: compute the fingerprint with JS in the browser, encrypt it there, and decrypt it on the server.
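On the client side, a minimal sketch with the open-source FingerprintJS library might look like this (the /api/fingerprint endpoint is made up, and the encryption step mentioned above is left out):

// Compute a browser fingerprint and report it alongside the account.
import FingerprintJS from '@fingerprintjs/fingerprintjs';

async function reportFingerprint(accountId) {
  const fp = await FingerprintJS.load();
  const { visitorId } = await fp.get(); // reasonably stable ID for this browser

  await fetch('/api/fingerprint', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ accountId, visitorId }),
  });
}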