what would cause the same UUID to be generated over and over again? - flask-sqlalchemy

We are using the Flask development server which I know we need to change. It seems that when two requests come in very close to each other bad things happen. On one of those occasions we got this sqlalchemy.exc.InvalidRequestError upon commit.
The response object had already been created and assigned a UUID:
class ServiceResponse(db.Model):
    service_response_id = db.Column(
        db.String(32),
        unique=True,
        nullable=False,
        primary_key=True,
        default=lambda: uuid.uuid4().hex
    )
    status_code = db.Column(db.SmallInteger, nullable=False)
The problem is that all subsequent requests kept assigning the same UUID to the corresponding response object. This caused integrity errors because a response object with the same UUID already existed in the DB. After I deleted the offending row from the DB the same value was generated one more time, didn't cause an integrity error anymore, and since then different values have been generated as expected.
I'm aware that the type could be changed to UUID but my question is why is uuid.uuid4() generating the same value over and over again for different requests?
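For reference, the classic cause of this symptom (not necessarily what happened here, since the model above already uses a lambda) is passing an already-evaluated value as the default, which is computed once at class-definition time instead of once per insert:

```python
import uuid

# Anti-pattern (shown for illustration): this expression is evaluated once,
# when the class body runs, so every row would share this same hex string.
stale_default = uuid.uuid4().hex

# What the model above actually does: pass a callable, which SQLAlchemy
# invokes per INSERT, yielding a fresh value each time.
fresh_default = lambda: uuid.uuid4().hex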

ASP.NET Core clear session issue

I have an application where I save some information on the session that later I assign to the model when I save it to the DB.
For example I have the following model saved by User1:
...
MyModel model = new MyModel();
model.name = mypostedModel.name;
model.type = HttpContext.Session.GetString("UniqueTypeForThisUser");
...
After I save the model in my DB, at the end of the post method, I clear the session with this line:
HttpContext.Session.Clear();
Let's say at the same time there's a User2 creating a new model and I have saved another value in the session with a unique key for User2. Same way as before, at the end of the post method I clear the session with the Clear() method.
Does this clear session method clear the session for all users, or only for one user? If, for example, User1 saves his model first and clears the session for all users, then User2 will get his session variable cleared (lost) and a null value will be assigned to the 'type' column of my model.
From the documentation this was not clear to me. Thanks
You can remove specific keys:
HttpContext.Session.Remove("YourSessionKey");
The session object that you can access for example through HttpContext.Session is specific to a single user. Everything you do there will only affect the user that belongs to this session and there is no mix between sessions of other users.
That also means that you do not need to choose session key names that are somehow specific to a user. So instead of using GetString("UniqueTypeForThisUser"), you can just refer to the values using a general constant name:
var value1 = HttpContext.Session.GetString("Value1");
var value2 = HttpContext.Session.GetString("Value2");
Each user session will then have these values independently. As a result, calling Session.Clear() will also only clear the session storage for that session that is specific to its user.
If you actually do need different means for storing state, be sure to check out the docs on application state. For example, things that should be stored independently of the user can be stored using an in-memory cache.
Does this clear session method clear the session for all users, or only for one user.
The HttpContext is the one for the current request. Since every user has a different request, it follows that clearing the session on the current request only clears it for that request's user, not all users.

How to list all object's versions from a bucket and their respective metadata (x-amz-meta-version)

To achieve what's in the title I am doing a couple of steps (using Python & the AWS SDK), which I will list below. The error I am getting is a 412 "At least one of the preconditions you specified did not hold" on the second iteration of the get_object call, when I pass it the parameters Bucket, Key and IfMatch (it fails on that line).
List all object's versions with the following code
s3 = boto3.client('s3')
response = s3.list_object_versions(
    Bucket='my-bucket',
    Prefix='file.exe'
)
obj_versions = response["Versions"]
This totally works, but I need the versions I set in metadata (x-amz-meta-version). To get each object's metadata version I am trying the following:
obj_info = []
for obj_version in obj_versions:
    obj = s3.get_object(
        Bucket='my-bucket',
        Key='file.exe',
        IfMatch=obj_version['ETag']
    )
    obj_info.append(obj['Metadata']['version'])
And that's it. Oddly enough, it only works for the first iteration; on the second it always fails with a 412 "At least one of the preconditions you specified did not hold" on the s3.get_object (IfMatch) line. I know for sure the error is in the IfMatch precondition, but I have no idea what's wrong... I have printed every ETag it receives and they're all valid, so it should be able to get the object.
Thank you for reading my post.
s3.get_object() retrieves the current version of the object unless you include the VersionId of another version, so the behavior you are observing is correct: the ETag of the current version does not match the ETag value you're passing, and thus the precondition fails. ETag isn't a lookup key, and IfMatch isn't a selector; it's a conditional request ("don't give me the object unless the precondition matches"). In any event, multiple versions of an object can have the same ETag if the versions have identical payloads (depending on the type of encryption you are using on the bucket -- the standard only requires ETags to differ if payloads differ; it does not technically require them to match when the payloads match).
Note also that if you only want the metadata, then for both cost and performance reasons you should use s3.head_object() to avoid fetching the object payload.
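Putting both points together, a sketch of the fix (the client is passed in; bucket and key names are assumed): fetch each version explicitly by its VersionId via head_object, rather than filtering on ETag:

```python
def collect_version_metadata(s3, bucket, key):
    """Return the x-amz-meta-version value for every version of `key`,
    fetched per VersionId via head_object (no payload download)."""
    response = s3.list_object_versions(Bucket=bucket, Prefix=key)
    info = []
    for obj_version in response.get('Versions', []):
        head = s3.head_object(
            Bucket=bucket,
            Key=obj_version['Key'],
            VersionId=obj_version['VersionId'],
        )
        # boto3 strips the x-amz-meta- prefix from custom metadata keys.
        info.append(head['Metadata'].get('version'))
    return info

# Usage (assumes versioning is enabled on the bucket):
# import boto3
# versions = collect_version_metadata(boto3.client('s3'), 'my-bucket', 'file.exe')
```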

Adding authenticated attributes using MS CryptoApi

I'm struggling to add authenticated attributes (OCSP data) to my message using CryptoApi. I first used CryptoApi's simplified message functions, but have now switched to the low-level message functions, thinking that I would be able to control the message structure better. But I am once again stuck. My process is as follows:
Initialize CMSG_SIGNER_ENCODE_INFO and CMSG_SIGNED_ENCODE_INFO structure
I create a CRYPT_ATTRIBUTE for the OCSP data and specify it in the CMSG_SIGNER_ENCODE_INFO structure
I then call CryptMsgCalculateEncodedLength to get the size
CryptMsgOpenToEncode with CMSG_SIGNED as the message type
CryptMsgUpdate, to insert my content into the message
CryptMsgGetParam with CMSG_CONTENT_PARAM to get the encoded blob
CryptMsgClose, I'm done with the message for now.
I open the message again to get the CMSG_ENCRYPTED_DIGEST, which is sent to a TSA, and the result is added as an unauthenticated attribute using CryptMsgControl.
I'm using this to sign signature tags in Adobe. So when there are no authenticated attributes, I receive three green checks from Adobe:
The document has not been modified...
The document is signed by the current user
The signature includes an embedded timestamp (and the timestamp is validate)
But as soon as the authenticated attribute is added, the signer's identity is invalidated and the timestamp data is incorrect. The CMSG_COMPUTED_HASH_PARAM differs depending on whether authenticated attributes are added. Should it not be the same, since the document digest is of the content of the document and not of the authenticated attributes?
Is there another way to add authenticated attributes? I've tried to add it as a signer using CryptMsgControl, but that did not help either...
How about this approach to adding an authenticated attribute for signing, for example the signing time:
// First call with a NULL buffer to get the required encoded size.
CryptEncodeObject(PKCS_7_ASN_ENCODING, szOID_RSA_signingTime, &curtime, NULL, &szTime);
pTime = (BYTE *)LocalAlloc(GPTR, szTime);
// Second call performs the actual encoding into the allocated buffer.
CryptEncodeObject(PKCS_7_ASN_ENCODING, szOID_RSA_signingTime, &curtime, pTime, &szTime);
time_blob.cbData = szTime;
time_blob.pbData = pTime;
attrib[0].pszObjId = szOID_RSA_signingTime;
attrib[0].cValue = 1;
attrib[0].rgValue = &time_blob;
CosignerInfo.cAuthAttr = 1;
CosignerInfo.rgAuthAttr = attrib;
CosignerInfo here is declared as CMSG_SIGNER_ENCODE_INFO CosignerInfo;

ActiveRecord (Rails 3.0.1): API Can't Handle Too Many Requests?

I have an API that services a web-based plugin for processing email. The API is responsible for two things:
Creating SessionIDs so the plugin can setup a dynamic link; and
Once an email is sent, for receiving that SessionID, the email recipients and subject line, to store the information into a new session.
Imagine the scenario where the plugin sends a request to the API:
PUT http://server.com/api/email/update/<SessionID> -d "to=<address1,address2>&subject=<subject>"
In testing this works fine: the data is saved normally. However, the plugin can't help but send that request several times a second, bombarding my server with identical requests. The result is that I get my EmailSession object saving multiple copies of the recipients.
In terms of my database schema, I have an EmailSession model, which has_many EmailRecipients.
Here's the relevant part of the update method in my API's controller:
@email_session = EmailSession.find_or_create_by_session_id(:session_id => params[:id], :user_id => @user.id)
if opts[:params][:cm_to].blank? == false
  self.email_recipients.destroy_all
  unless opts[:params][:cm_to].blank?
    opts[:params][:cm_to].strip.split(",").each do |t|
      self.email_recipients << EmailRecipient.create(:recipient_email => t)
    end
  end
end
Admittedly, the "find_or_create" dynamic method is new to me, and I wonder if there's something about that screwing up the works.
The symptoms I'm seeing include:
ActiveRecord errors complaining about attempts to save a non-unique key into the database (I have an index on the SessionId)
Duplicate recipients ending up in the EmailRecipients collection
In the case of multiple users employing the plugin, I get recipients from other emails ending up in the wrong email session collections.
I've attempted to use delayed_job to serialize these requests somehow. I haven't had much luck with it, thanks to various bugs in the current release. But I'm wondering if there's a more fundamental problem with my approach to this solution? Any help would be appreciated.
I'm still not sure I understand what you're doing, but here's my advice.
First off, I don't think you are using find_or_create_by properly. This method has slightly confusing semantics (which is why 3.2 introduces some clearer alternatives), but as it stands it isn't using the user_id to find the record (although it does set user_id if a record is created). I don't think this is what you wanted. Instead use find_or_create_by_session_id_and_user_id.
This can still raise a duplicate key error, since between find_or_create checking for the record and creating it there is time for someone else to create it. If you weren't doing anything other than creating email session rows, then rescuing this duplicate key error and retrying should take care of that: on the retry you'll find the row that blocked your insert.
However when you then go on to add recipients you still have a potential issue because 2 things could be trying to remove recipients and add them to the same email session at the same time. This might be a good usecase for pessimistic locking. 
begin
  EmailSession.transaction do
    session = EmailSession.lock(true).find_or_create_by_bla_bla(...)
    # use the session object here, add recipients etc.
  end
rescue ActiveRecord::StatementInvalid => e
  # if e looks like a deadlock, retry a limited number of times
end
What is happening here is that when the email session is retrieved from the db, the row is locked (even if it doesn't exist yet - effectively you can lock the gap where the record would go). This means that anyone else wanting to add recipients or do any other manipulation has to wait for the lock to be released. Locks last as long as the transaction in which they occur, so all your work should happen in there (even if in the second part you are not actually changing the email session object any more).
You may end up with deadlocks - I don't know what else is going on in your app but you should be prepared for them if you are using transactions. That's what the rescue block is for: if the error message looks like a deadlock then you should probably retry some limited number of times.
Locks are (at least on MySQL) row-level locks: as long as you have an index on (session_id, user_id), one of your instances having one email session object locked doesn't stop another instance from using a different one.

nhibernate 'save' -> 'get' problem

Hello,
I'm using NHibernate and have problems regarding user registration on my site.
When a user registers, I create a new user record in the database and immediately after that the system logs the user in.
Well, here lies the problem... When creating the user record I'm using
NHibernateSession.Save(entity); // does not save the user object to the database immediately; it's kept in the session
And when I want to log the user in, I load the user by his user name, and then I get a null user object.
Why am I getting a null object and how can I make it work?
Thanks
OK, I just tested this:
ISession session = s.CreateSession();
User user = new User();
user.Number = 122;
user.UserName = "u";
user.Id = 1;
session.Save(user);
User user1 = session.CreateCriteria<User>().Add(Restrictions.Eq("UserName", "u")).UniqueResult<User>();
session.Flush();
First the SELECT from CreateCriteria is executed, and only on Flush is the INSERT executed. So that's why it's not finding anything.
I also tested with Get<User>(1), and it returns the entity passed to the Save method - no query is executed.
Still - why query the database when you have the entity right there?
Also, you say you use Get and then say you want to load by the UserName - is UserName the primary key? Get loads by the primary key.
If your Save and Get are done from different sessions, then the Get will return null because the object only exists in the other session's internal cache until it is flushed.
I'm not sure if an L2 cache would make a difference (I don't know whether the L2 cache is written at Save or at Flush).