ADFS 2.0 Token-Signing certificate grace period - adfs2.0

My ADFS token-signing (and token-decrypting) certificate is in the process of auto-rolling over - the secondary cert got generated last night and now shows in the ADFS console. The option to promote it to Primary (right-click on the cert, "Set as Primary") is greyed out, I assume because AutoCertificateRollover is enabled.
I know I have a 5-day grace period, at the end of which the Secondary will be promoted to Primary. My question is: does the secondary cert actually get used during this 5-day stretch, or does it only start getting used at the end, when it gets promoted? We have a few RPs that we need to update with the new CER manually, and I want to know whether this can happen now (inside the 5-day grace period) or at the end, when the secondary gets promoted. The former would be nice, because if it is the latter, I will have to update the RPs as soon as it rolls over; otherwise, if I am not mistaken, there will be an outage.
Thanks!

Both certs will be in the metadata, and WIF allows you to have more than one cert in the web.config.
So the old one will be used until the switchover, then the new one will be used.
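To illustrate, a relying party using WIF's ConfigurationBasedIssuerNameRegistry can trust both thumbprints during the grace period, so the switchover causes no outage. A minimal sketch of the web.config section (the thumbprints and issuer name below are placeholders, not values from your farm):

```xml
<microsoft.identityModel>
  <service>
    <issuerNameRegistry type="Microsoft.IdentityModel.Tokens.ConfigurationBasedIssuerNameRegistry, Microsoft.IdentityModel">
      <trustedIssuers>
        <!-- current (primary) token-signing cert -->
        <add thumbprint="AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA" name="http://adfs.example.com/adfs/services/trust" />
        <!-- newly generated (secondary) cert, trusted ahead of the switchover -->
        <add thumbprint="BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB" name="http://adfs.example.com/adfs/services/trust" />
      </trustedIssuers>
    </issuerNameRegistry>
  </service>
</microsoft.identityModel>
```

With both entries in place, you can add the new thumbprint now (inside the grace period) and remove the old one at your leisure after promotion.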

Optimization for GetSecret with Azure Keyvault

Our main goal for now is optimizing a processing service.
The service has a system-assigned managed identity with access policies that allow it to get secrets.
This service makes 4 calls to a Key Vault. The first one takes a lot longer than the others. I'm scratching my head, because the managed identity token takes 91 µs to obtain (per the Application Insights trace).
I changed the way the tokens were obtained: the program only obtains one once and keeps using that same token for other round trips. I did this by registering the credential class as AddScoped.
I assume the question is why the first call takes more time. If yes, just throwing in a couple of generic reasons which might contribute:
HTTPS handshake on the first call might take time
The client might need to create a connection pool
In case of Azure Key Vault, the first call does two round-trips AFAIK, first for auth, the second for the real payload
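The token-reuse approach the question describes can be sketched generically. This is illustrative Python, not the Azure SDK; `fetch_token` stands in for whatever call actually acquires the managed-identity token, and the class simply caches the result until shortly before expiry:

```python
import threading
import time

class CachedCredential:
    """Acquire an access token once and reuse it until near expiry.
    fetch_token is any callable returning (token, expires_on_epoch)."""

    def __init__(self, fetch_token, now=time.time):
        self._fetch = fetch_token
        self._now = now
        self._lock = threading.Lock()
        self._token = None
        self._expires_on = 0.0

    def get_token(self, skew=300):
        # Refresh only when missing or within `skew` seconds of expiry;
        # every other call is served from the cache.
        with self._lock:
            if self._token is None or self._now() >= self._expires_on - skew:
                self._token, self._expires_on = self._fetch()
            return self._token
```

Registering one such instance per scope (or as a singleton) means only the first call pays the acquisition cost; the HTTPS handshake and connection-pool warmup on the first real request remain, as the answer notes.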

Azure IoT Hub blocked for two consecutive days, will not let me change to a paid tier

I have a free tier subscription to Azure IoT hub with only two edge devices connected to it, one of them mostly off. Yesterday, it looks like my hub recorded a slew of messages--within 45 minutes (5 to 5:45 pm PST), 25K messages were recorded by the hub. A few related issues.
I'm not sure what these messages were. I'll add message storage for the future, but wondering if there's a way to debug this.
Ever since then, I haven't been able to use the IoT hub. I get a "message count exceeded" error. That made sense until around 5 pm PST today (same day UTC), but I'm not sure why it is still blocking me after that.
I tried to change my F1 hub to the Basic tier, but that wasn't allowed because I am apparently "not allowed to downgrade".
Any help with any of these?
1.I'm not sure what these messages were. I'll add message storage for the future, but wondering if there's a way to debug this.
IoT Hub operations monitoring enables you to monitor the status of operations on your IoT hub in real time. You can use it to monitor device identity operations, device telemetry, cloud-to-device messages, connections, file uploads and message routing.
2. Ever since then, I haven't been able to use the IoT hub. I get a "message count exceeded" error. That made sense until around 5 pm PST today (same day UTC), but I'm not sure why it is still blocking me after that.
IoT Hub Free edition enables you to transmit up to a total of 8,000 messages per day, and register up to 500 device identities. The device identity limit is only present for the Free Edition.
3. I tried to change my F1 hub to the Basic tier, but that wasn't allowed because I am apparently "not allowed to downgrade".
You cannot switch from Free to one of the paid editions. The free edition is meant to test out proof-of-concept solutions only.
Confirming the earlier answer, the only solution is to delete the old hub and create a new free one, which is simple enough.
I still haven't figured out what those specific error messages were, but I do notice that when there are errors such as CA certificate auth failures, lots of messages get sent up. I'm still working with MSFT support on the CA certificate signing issues, but this one is a side effect.
For future reference, look at your hub's metrics, and note that (i) the quota gets reset at midnight UTC, but (ii) net violations do not.
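Since the reset happens at midnight UTC rather than local midnight, it can be handy to compute how long a blocked hub has left. A small sketch (pure stdlib, illustrative only):

```python
from datetime import datetime, timedelta, timezone

def seconds_until_quota_reset(now=None):
    """IoT Hub's daily message quota resets at midnight UTC;
    return how many seconds remain until the next reset."""
    now = now or datetime.now(timezone.utc)
    next_midnight = (now + timedelta(days=1)).replace(
        hour=0, minute=0, second=0, microsecond=0)
    return (next_midnight - now).total_seconds()
```

For example, at 23:00 UTC the function reports 3600 seconds until the quota resets.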

Cryptography: Verifying Signed Timestamps

I'm writing a peer-to-peer network protocol based on private/public key pair trust. To verify and deduplicate messages sent by a host, I use timestamp verification. A host does not trust another host's message if the signed timestamp has a delta (relative to the current time) of greater than 30 seconds or so.
I just ran into the interesting problem that my test server and my second client are about 40 seconds out of sync (fixed by updating ntp).
I was wondering what an acceptable time difference would be, and whether there is a better way of preventing replay attacks? Supposedly I could have one client supply a random text to hash and sign, but unfortunately this won't work, as in this situation I have to write messages once.
A host does not trust another host's message if the signed timestamp has a delta (to the current) of greater than 30 seconds or so.
Time based is notoriously difficult. I can't tell you the problems I had with mobile devices that would not or could not sync their clock with the network.
Counter based is usually easier and does not DoS itself.
I was wondering what an acceptable time difference would be...
Microsoft's Active Directory uses 5 minutes.
if there is a better way of preventing replay attacks
Counter based with a challenge/response.
I could have one client supply a random text to hash and sign, but unfortunately this won't work as in this situation I have to write messages once...
Perhaps you could use a {time, nonce} pair. If the nonce has not been previously recorded, then act on the message if it's within the time delta. Then hold the message (with its {time, nonce}) for a window (5 minutes?).
If you encounter the same nonce again, don't act on it. If you encounter an unseen nonce but it's out of the time delta, then don't act on it. Purge your list of nonces on occasion (every 5 minutes?).
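The {time, nonce} scheme above can be sketched as follows. This is an illustrative Python model under the assumptions stated in the answer (a skew bound, a retention window, periodic purging), not production-hardened code:

```python
import time

class ReplayGuard:
    """Accept a message only if its nonce is unseen and its timestamp
    is within `max_skew` seconds of local time; nonces are remembered
    for `window` seconds so replays inside the window are rejected."""

    def __init__(self, max_skew=30, window=300, now=time.time):
        self._max_skew = max_skew
        self._window = window
        self._now = now
        self._seen = {}  # nonce -> time first seen

    def accept(self, timestamp, nonce):
        now = self._now()
        # Purge nonces older than the retention window.
        self._seen = {n: t for n, t in self._seen.items()
                      if now - t <= self._window}
        if abs(now - timestamp) > self._max_skew:
            return False  # outside the allowed clock skew
        if nonce in self._seen:
            return False  # replayed within the window
        self._seen[nonce] = now
        return True
```

Note the retention window must be at least as long as the skew bound, otherwise a replayed message could be purged from the seen-set while its timestamp is still acceptable.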
I'm writing a peer to peer network protocol based...
If you look around, then you will probably find a protocol in the academic literature.

Sometimes DataCache.GetObjectsInRegion() returns an empty list while objects are present in the region

I'm using AppFabric caching in a WCF service hosted in WAS.
I must be doing something wrong, because sometimes GetObjectsInRegion() returns an empty list while objects are indeed present in the region.
Unfortunately, I'm not able to identify the context in which the problem is reproducible.
It seems, though, that if the web service is restarted, existing regions are seen as empty by the service.
I'm sure that this is not tied to a timeout problem.
I'll update the question if there is any progress on my side.
Any help appreciated.
This one was a bug on my side.
I was not explicitly setting an expiration timeout in some circumstances. The cache cluster was configured with default expiration settings, where the TTL is 10 minutes, so objects were being automatically removed from the cache.
The takeaway is: always set an explicit expiration timeout when putting objects in the cache.
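The failure mode can be sketched generically. This is an illustrative Python model, not AppFabric's API: a put that omits the TTL silently inherits a cluster-configured default, and the item later vanishes:

```python
import time

class TtlCache:
    """Minimal cache illustrating the pitfall above: if the caller
    omits a TTL, a configured default (10 minutes here, like the
    AppFabric cluster default) applies and items silently expire."""

    DEFAULT_TTL = 600.0  # seconds

    def __init__(self, now=time.time):
        self._now = now
        self._items = {}  # key -> (value, expires_at)

    def put(self, key, value, ttl=None):
        # Omitting ttl falls back to the default -- the bug in question.
        ttl = self.DEFAULT_TTL if ttl is None else ttl
        self._items[key] = (value, self._now() + ttl)

    def get(self, key):
        entry = self._items.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if self._now() >= expires_at:
            del self._items[key]
            return None  # expired under the default TTL
        return value
```

After ten minutes, items stored without an explicit TTL return None while those stored with a longer explicit TTL are still present, which matches the symptom of regions appearing empty.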

NServiceBus Distributor: Preventing extra entries in StorageQueue after Client restart

For simplicity, I'll refer to both the distributor's ControlInputQueue and its StorageQueue as the same. I understand how the distributor's client notifies it of its availability by writing an entry to the ControlInputQueue, and how the distributor moves the entry to its StorageQueue to track which clients are available to do work. It's just easier to explain if I treat them as the same. So...
I've created a proof of concept to demonstrate the behavior of the NServiceBus distributor. As expected, when a client starts up, it adds an entry to the distributor's StorageQueue. When a message comes in to the distributor (via its InputQueue), the distributor removes an entry from its StorageQueue and forwards the message to the indicated client. The client performs its work, and then adds an entry back to the distributor's StorageQueue. Thus there is at most one entry (per client) in the distributor's StorageQueue.
My problem occurs when a client is shut down, either manually or unexpectedly (like the server explodes). The client's entry still exists in the distributor's StorageQueue, so as far as the distributor knows, that client is still available. This is fine, except that when the client starts up again, it adds another entry to the StorageQueue. So now there are two entries in the StorageQueue for a single client.
Is there any way to ensure that the distributor only ever has one StorageQueue entry for any given client?
In the interests of providing an "official" answer to this question... Per Andreas' comment above, it seems that there isn't a way to prevent these duplicate entries in NServiceBus v2.6, but there is in v3.0. So the solution is to upgrade. ;-)
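The availability-tracking protocol described above, and the deduplication that v3.0 effectively provides, can be modeled with a toy sketch. This is illustrative Python, not NServiceBus code; the names are hypothetical:

```python
from collections import deque

class StorageQueue:
    """Toy model of the distributor's StorageQueue. signal_ready()
    deduplicates: a restarting client replaces any stale entry it
    left behind instead of adding a second one."""

    def __init__(self):
        self._ready = deque()

    def signal_ready(self, client):
        # Drop any stale entry left behind by a crashed/restarted client,
        # then record the client as available exactly once.
        self._ready = deque(c for c in self._ready if c != client)
        self._ready.append(client)

    def take_worker(self):
        # The distributor pops an entry when dispatching a message.
        return self._ready.popleft() if self._ready else None

    def __len__(self):
        return len(self._ready)
```

Without the filtering step in signal_ready(), a client that crashes and restarts would be represented twice, which is exactly the duplicate-entry problem described in the question.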