Is there any limit on issuing passes using a PassType ID certificate? - passbook

We have a requirement to create event passes. Once a PassType ID certificate is created, is there any limit on signing passes with that certificate?

There is no limit to the number of passes that can be created with a single PassType ID certificate.
However, the certificate plays a significant role in how passes are grouped together in a user's Passbook.
If you anticipate issuing passes for multiple events and your users will hold passes for several events at once, you may wish to use a separate certificate for each event, to prevent passes for different events from being stacked on top of one another.

Related

How to prevent usage of signatures that were given by fake websites after signing eth message?

There are plenty of sites where you have to sign their 'sign in' message in order to get a JWT from them. For example, https://www.cryptokitties.co uses such a login system. It verifies the signature on the back-end and sends a JWT back if the address matches. It works well, but this approach worries me from a security standpoint.
Assume that someone has created a fake website that is absolutely identical to CryptoKitties. The user hasn't noticed that the domain is different, signs the same message ("To avoid digital cat burglars, sign below to authenticate with CryptoKitties"), and at this point he has already provided the scammer with his signature and address; since the message is the same, it will work on the original website. So basically you can lose your account by signing the same message on a completely different site. The saddest part is that you cannot reset the private key, which means that your account is gone for good.
I'm not an expert, but it seems to me like a huge security hole.
The solution I'm thinking about is to encrypt the signature on the client before sending it to the back-end. With this approach, the back-end will only send you a JWT if you've signed a message on our front-end. First the back-end decrypts the signature, then it verifies the message and address. Signatures created on other sites will be rejected because the decryption will fail.
So far we have eliminated the fake-website problem. But there is another one: an attacker can intercept an already encrypted signature and use it on our site. And once again there is no way to reset the signature; it will remain the same. So what I came up with is that the signature must be disposable, usable only once. Before signing a message, the client requests from the back-end a special random number linked to the corresponding wallet. Based on this number we build the message, like this: "To avoid digital cat burglars, sign below to authenticate with CryptoKitties #564324". First the back-end decrypts the signature, verifies the address, and then checks whether the specified random number exists in the database. Once login succeeds, the random number is deleted from the database. Now, even if the user loses his signature, it can't be used by an attacker, because it has already expired.
What do you think? Does the described approach make sense?
You have the right idea with "signature must be disposable". The concept is called a nonce (a value used to protect private communications by preventing replay attacks).
The rest of your logic is correct as well, except that you don't need to delete the nonce from the database; rather, rotate it, i.e. update the value to a new pseudo-random (or at least hard-to-guess) value.
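For illustration, here is a minimal Python sketch of issuing and rotating a per-address nonce; the in-memory store, the message wording, and the recover_signer callback are assumptions rather than any particular library's API (eth_account's Account.recover_message is one real option for the recovery step).

    import secrets

    # In-memory nonce store keyed by wallet address; a real service would use a database.
    nonces = {}

    def issue_nonce(address: str) -> str:
        """Create (or rotate) the nonce the client must embed in the message it signs."""
        nonces[address] = secrets.token_hex(8)
        return nonces[address]

    def login_message(address: str) -> str:
        # The wording is illustrative; the point is that the current nonce is embedded.
        return f"To avoid digital cat burglars, sign below to authenticate #{nonces[address]}"

    def verify_login(address: str, signature: str, recover_signer) -> bool:
        # recover_signer is a placeholder for your library's public-key recovery.
        ok = recover_signer(login_message(address), signature).lower() == address.lower()
        if ok:
            issue_nonce(address)  # rotate so the same signature cannot be replayed
        return ok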

Lightning network, identity channel

I want to make a service that allows a user's private channels to be bound to their account.
The user opens a private channel with my node, but I don't know which user it is.
To identify the channel, I plan to ask the user for a BOLT11 payment request with a unique identifier in the description, like an SMS code.
BOLT11 specifies the destination of the payment. I will find a route for this request and thus determine the user's channel.
Is such a scheme safe? Can a fraudster create a BOLT11 request for a channel that does not belong to him? Can you suggest a better identification scheme?
Can a fraudster create a BOLT11 request for a channel that does not belong to him?
Bolt 11 states:
The recovery ID allows public-key recovery, so the identity of the payee node can be implied
I'm not sure how every implementation matches the specification here. I would assume that all of them perform signature verification, but they might not expose public-key recovery functionality. Your idea of finding a path without actually paying the invoice might work.
However, it seems that what you actually need is to identify an existing private channel between your node and the user's. Private channels should be included as routing hints in the invoice, so it might be even easier to get the channel from there by just decoding the invoice.
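As an illustration of the routing-hint approach, here is a rough Python sketch that shells out to lnd's lncli decodepayreq; the JSON field names (route_hints, hop_hints, chan_id) follow lnd's output and are assumptions that may not carry over to other implementations or versions.

    import json
    import subprocess

    def channel_hints_from_invoice(payment_request: str) -> list:
        """Decode a BOLT11 invoice and return any private-channel routing hints."""
        decoded = json.loads(
            subprocess.check_output(["lncli", "decodepayreq", payment_request])
        )
        hints = []
        for route_hint in decoded.get("route_hints", []):
            for hop in route_hint.get("hop_hints", []):
                hints.append({"node_id": hop.get("node_id"),
                              "chan_id": hop.get("chan_id")})
        return hints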
An alternative option would be to ask the user to pay an invoice generated by your node; a single msat would be enough. She certainly won't be able to craft such a payment from a node she doesn't control.
The safest way would be for the user to sign a message with one of the keys he uses to sign commitment transactions. This would definitively bind his identity to the channel. However, current implementations don't offer that API, but that does not mean it would not be possible.

How to design a secure token authentication protocol using a 6-digit number?

I have a security number generator device, small enough to go on a key-ring, which has a six digit LCD display and a button. After I have entered my account name and password on an online form, I press the button on the security device and enter the security code number which is displayed.
I get a different number every time I press the button and the number generator has a serial number on the back which I had to input during the account set-up procedure.
I would like to incorporate similar functionality in my website. As far as I understand, these are the main components:
Generate a unique N-digit alphanumeric sequence during registration and assign it to the user (permanently)
Allow the user to generate an N (or M?) digit alphanumeric sequence remotely
For now, I don't care about the hardware side; I am only interested in how I might choose a suitable algorithm that will allow the user to generate an N (or M?) long alphanumeric sequence - presumably using his unique ID as a seed
Identify the user from the number generated in step 2 (which decryption method is the most robust for this?)
I have the following questions:
Have I identified all the steps required in such an authentication system? If not, please point out what I have missed and why it is important.
What are the most robust encryption/decryption algorithms I can use for steps 1 through 3 (preferably using 64 bits)?
Your server has a table of client IDs and keys. Each client also knows its own key.
The server also maintains a counter for each client, initialised to zero. Each client maintains a counter, also initialised to zero.
When the button is pressed on the client, it generates an HMAC of the current counter value, using its key as the HMAC key. It derives an alphanumeric code from the HMAC output and displays that to the user (to send to the server). The client then increments its counter value.
When an authentication request is received by the server, it repeats the same operations as the client, using the stored key and counter for that client. It compares the alphanumeric code it generated with the one received from the client - if they match, the client is authenticated. If they do not match, the server increments its counter for that client and repeats the process, for a small number of repetitions (say, ~10). This allows the server to "catch up" if the client counter has been incremented without contacting the server.
If the counter rolls over to zero, the server should not accept any more authentication requests for that client ID, until it is issued a new key.
There are extensions to this basic protocol: For example, instead of a counter, you can use synchronised clocks on the server and client (with the value changing every N seconds instead of every button press).
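For concreteness, here is a minimal Python sketch of this counter-based scheme. It uses HMAC-SHA1 with RFC 4226-style dynamic truncation and renders the result as six decimal digits (matching the device in the question) rather than an alphanumeric string; treat it as an illustration, not a vetted implementation.

    import hashlib
    import hmac
    import struct

    def hotp(key: bytes, counter: int, digits: int = 6) -> str:
        """HMAC the 8-byte big-endian counter and truncate it to a short numeric code."""
        mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F                                # dynamic truncation
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    def server_verify(key: bytes, stored_counter: int, submitted: str, window: int = 10):
        """Try a small look-ahead window so the server can catch up with the token."""
        for c in range(stored_counter, stored_counter + window):
            if hmac.compare_digest(hotp(key, c), submitted):
                return c + 1   # new counter value to persist; None means rejection
        return None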
What you're describing is called an HOTP, or HMAC-based One-Time Password. The implementation is described in RFC 4226, and unless you have a compelling reason not to, I'd strongly suggest implementing it as-is, since it's been vetted by cryptographers and is believed secure. Using this will also give you compatibility with existing systems - you should be able to find HOTP-compatible tokens and software apps, like Google Authenticator for Android.

Best way to seamlessly & silently authenticate with a second webapp while logged in to a first?

Third party app (A) needs to link users to our app (B) and log them in behind the scenes.
Both apps work independently with their own auth systems. Users share a common unique ID, but have different authentication tokens (username/password/key etc) at each app.
The two complicating factors are as follows:
One app B user may be associated with two app A users (e.g. both of those app A accounts would redirect and log in to the same app B account)
The app B user may not actually have any existing auth tokens, only their personal record and user ID, but we still want to be able to log them in if they are coming from app A.
My first thoughts were OAuth - but I don't think it will work as some users don't have app B accounts and thus won't be able to log in to grant app A access (see point 2 above).
The simplest way I have come up with is:
Each app has a pre-shared key e.g. "LOLS"
A common hash algorithm lets each app independently generate identical tokens, e.g. hash(PSK + UID) (see the sketch after this list)
App B stores hashed tokens for each user
App A sends POST with UID and hashed token to App B, which uses it to identify and auth against a user
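A minimal sketch of that derivation, assuming SHA-256 (the hash choice is illustrative); as the next paragraph notes, anyone who knows the pre-shared key and a UID can reproduce it.

    import hashlib

    PRE_SHARED_KEY = "LOLS"  # the pre-shared key from the example above

    def derive_token(uid: str) -> str:
        # Both apps compute hash(PSK + UID) independently and obtain the same token,
        # which is exactly why knowledge of the PSK and a UID is enough to forge it.
        return hashlib.sha256((PRE_SHARED_KEY + uid).encode()).hexdigest()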
The problem with this is that it's hideously insecure. Anyone with knowledge of the pre-shared key (any system admin) and a user's ID (once again, any system admin) would be able to authenticate as ANY user, which is unacceptable.
Does anyone have any solutions? I'd prefer existing standards but am open to customised implementations. We can't really do much to app B other than to get them to use whatever API we provide.
I've faced situations similar to this many times. We've explored a variety of solutions; here's one of them.
You produce a web service for them to call. This could be something you lock down however you like, including by limiting access to their IP address at the firewall. They POST the UID to your web service, which inserts it into a table on your end and hands back some sort of random token (we randomly generated a GUID). Your table associates the token with the UID (in plaintext) they sent and a timestamp.
Their application sends the random token to you instead of the UID; you use it to look up the UID, and use the timestamp to make sure random tokens expire after a minute or so. Even if someone somehow looks through your table to get the list of recently attempted UIDs, it doesn't let them authenticate unless they can pull it off really fast!
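A rough Python sketch of that token exchange, with an in-memory dictionary and a one-minute expiry standing in for the database table described above; the function names are illustrative.

    import time
    import uuid

    _tokens = {}            # token -> (uid, issued_at)
    TOKEN_TTL_SECONDS = 60  # expire tokens after about a minute

    def issue_token(uid: str) -> str:
        """Called on app A's server-to-server request: store the UID, hand back a token."""
        token = str(uuid.uuid4())
        _tokens[token] = (uid, time.time())
        return token

    def redeem_token(token: str):
        """Called when the user arrives with the token: return the UID once, or None."""
        record = _tokens.pop(token, None)          # one-time use
        if record is None:
            return None
        uid, issued_at = record
        if time.time() - issued_at > TOKEN_TTL_SECONDS:
            return None                            # expired
        return uid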

When working with most APIs, why do they require two types of authentication, namely a key and a secret?

I have been working with APIs and I've always wondered why you have to use a key and a secret?
Why do you need two types of authentication?
When a server receives an API call, it needs to know two things: Who is making the call, and whether or not the call is legitimate.
If you just had one item ("key"), and included it with every call, it would answer both questions. Based on the "key" the server knows who you are, and because only you know the key it proves that the call is actually coming from you. But including the key with every call is bad security practice: if someone can read even one of your messages in transit, your key is compromised, and someone can pretend to be you. So unless you're using HTTPS, this approach doesn't work.
Instead, you can include a digital signature with every call, signed with some "secret" number. (The "secret" number itself is not sent). If an attacker manages to read your message, they won't be able to figure out this "secret" number from the signature. (This is how digital signatures work: they are one-way).
But this doesn't solve the identification question: In the latter case, how does the server know who is making the call? It could try to verify the signature against the "secret" of every single user, but of course this would be very time-consuming.
So, here's what we do: Send both a "key" (that identifies the user), and a signature created using the "secret" number (that proves that the message is legitimate). The server looks up the user based on the key, and then validates the signature using that user's "secret" number.
This is a bit like when you write a check: It has an account number on it (to identify you) and your signature (to prove that you're you). Having just the account number wouldn't prove that you actually wrote the check. Having just the signature without the account number would force the bank to compare your check against all of its signatures for all of its accounts, which would obviously be inefficient.
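A minimal sketch of this key-plus-signature pattern, assuming an HMAC-SHA256 signature over the request body; the header names and the secret-lookup function are illustrative, not any particular API's convention.

    import hashlib
    import hmac

    def sign_request(api_key: str, api_secret: bytes, body: bytes) -> dict:
        """Client side: send the key in the clear plus a signature made with the secret."""
        signature = hmac.new(api_secret, body, hashlib.sha256).hexdigest()
        return {"X-Api-Key": api_key, "X-Signature": signature}

    def verify_request(headers: dict, body: bytes, lookup_secret) -> bool:
        """Server side: look up the secret for the claimed key, recompute and compare."""
        secret = lookup_secret(headers.get("X-Api-Key", ""))   # e.g. a database lookup
        if secret is None:
            return False
        expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, headers.get("X-Signature", ""))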