Record insertion with a primary key while load testing on Locust

I have to load test a POST API with multiple users. The API relies on primary key constraints when inserting records into tables, and Locust runs into problems with that primary key constraint approach.

I'm not 100% sure I understand exactly what you need, but it sounds like you need each Locust user to use different data in the POST requests you're making. The easiest way would be to generate the data randomly. It could be based on some sort of pattern, or if it has to be absolutely unique you could generate a UUID (or combine the two, like locust-user-{timestamp}-{UUID}, so you can tell in your back end or elsewhere that it's test data).
But Locust just runs whatever Python code you give it and automates running it concurrently. In most cases, if you can write a simple Python script that does what you want, you can drop it into a Locust task and it should work. You can do whatever you need to get unique or otherwise different data for your POST requests and have Locust users do that in your tests.
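As a minimal sketch of that idea, assuming the API exposes a /records endpoint that takes a JSON body with an "id" field (the host, path, and payload fields are placeholders for whatever your API actually expects):

```python
import time
import uuid

from locust import HttpUser, task


class ApiUser(HttpUser):
    # Placeholder host; override here or with --host on the command line.
    host = "http://localhost:8000"

    @task
    def create_record(self):
        # Build a value that is unique per request so the primary key
        # constraint is never violated, and that is easy to recognize
        # later as load-test data.
        unique_id = f"locust-user-{int(time.time())}-{uuid.uuid4()}"
        # "/records" and the payload fields are assumptions; adjust them
        # to match the real API.
        self.client.post("/records", json={"id": unique_id, "name": "load-test"})
```

Run it with the usual locust -f locustfile.py and every simulated user sends a distinct id on every request.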

Related

Hitting the same API to get a different number at the same time using concurrency?

I have created a web API that generates a sequence number each time it is hit. What I need now is to make it safe for multiple concurrent users, so that when several users hit the API at the same time, each one still gets a different number.
This depends a lot on what you are trying to do and for what reason.
Generating unique sequence numbers is very difficult in an environment where you can have multiple users hitting that endpoint at the same time.
If you are trying to give them an ID to use for some sort of data insert, then I suggest you don't offer integers; offer GUIDs instead.
The issue with inserting data based on this kind of mechanism is that sometimes no data is actually inserted, for various reasons: users change their mind, end up requesting another ID, or the subsequent call simply fails, so you end up with holes in your data.
Instead, hand back GUIDs, and if the follow-up call eventually does come in, use the GUID then.
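A minimal sketch of that suggestion, using Flask purely for illustration (the /next-id route name is made up):

```python
import uuid

from flask import Flask, jsonify

app = Flask(__name__)


@app.route("/next-id", methods=["GET"])
def next_id():
    # Every caller gets a globally unique GUID, so concurrent requests
    # never have to coordinate on a shared counter, and unused IDs
    # leave no holes in any sequence.
    return jsonify({"id": str(uuid.uuid4())})


if __name__ == "__main__":
    app.run()
```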

Async Message Testing

Here is the problem I am facing with respect to Asynchronous Testing. The Problem statement is as below
I get a big batch of XML with data for multiple candidates. We do some validations and split that big XML into multiple XMLs, one per candidate. Each XML is persisted to the file-structured database with a unique identifier. A unique identifier is generated for each message that gets persisted to the database, and each of those unique identifiers is published to the queue for subscription.
I am working on developing the automation test framework, and I am looking for a way to let the test class know that the unique identifier has been consumed by the next step in data processing.
I have read up on this problem, and most of what I found suggests thread sleeps and timers. The problem is that when we run a large number of test cases, that takes an enormously long time.
I have read about Awaitility and had some hopes for it. Any ideas, or has anyone faced a similar situation? Please help.
Thanks
DevAutotester
You could use Awaitility to wait until all IDs exist in the DB or queue (if I understand it correctly) and then continue with the validation afterwards. You will have to provide Awaitility with a supplier that checks that all IDs are present; Awaitility will then wait for that condition to become true.
/Johan
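Awaitility itself is a Java library, but the underlying idea is just polling with a timeout. A rough Python sketch of the same approach, where fetch_persisted_ids is a hypothetical callable returning the identifiers currently visible in the database or queue:

```python
import time


def wait_for_ids(expected_ids, fetch_persisted_ids, timeout=30.0, poll_interval=0.5):
    """Poll until every expected unique identifier has shown up, or time out.

    fetch_persisted_ids is a placeholder for whatever lookup your framework
    uses (a DB query, a queue inspection call, etc.).
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        if set(expected_ids) <= set(fetch_persisted_ids()):
            return True
        time.sleep(poll_interval)
    raise TimeoutError("Not all identifiers were processed within the timeout")
```

This keeps each test's wait proportional to how long the processing actually takes, instead of a fixed sleep.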

validate grails domain classes against a database

What's the best way to validate that the Grails domain classes are in sync with a database? It's a legacy database and I can't build it from the domain classes. An interesting idea here implies fetching one row of each domain. However, it doesn't feel like a complete solution, mainly because the test database I validate against may not be data-rich enough to have rows in every table.
Thanks in advance for taking time to read/reply.
That's a nice approach and it should work even for empty tables - if a table is empty, you have no legacy data to worry about validating, right? Or, if you want to test Grails constraints for compatibility with DB constraints, create a new instance of the class and try to save() it in a transaction - and always roll the transaction back.
If the database is small, I'd even remove max: 1 from list() to validate every record, because only some of the records may violate constraints.
I'd also replace println "${it}" with assert it.validate().
One last optimization: I'd limit the classes tested to only those that I know can violate some constraints. This will save a good part of such a test - and the test is going to take plenty of time, you know - reading the whole database with GORM overhead.

Redis and Object Versioning

How are people coping with changes to Redis object schemas - adding or removing properties from objects?
Sharing from my own experience (a one-year-old project with thousands of user requests per second).
Usually, there were three scenarios for me:
1. Add new information to existing structures (like an "email" field for a user)
2. Remove or change existing values in existing structures (like changing the format of some field)
3. Drop stuff from the database
For 1, I follow a simple strategy: degrade gracefully, e.g. if a user doesn't have an email record, treat it as an empty email. This has worked every time.
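A tiny sketch of that graceful degradation with redis-py, assuming users are stored as hashes under keys like user:<id> (the key layout is an assumption):

```python
import redis

r = redis.Redis()


def get_email(user_id):
    # Users created before the "email" field existed simply come back
    # with an empty email instead of raising an error.
    raw = r.hget(f"user:{user_id}", "email")
    return raw.decode() if raw is not None else ""
```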
For 2 and 3 it depends on whether the data can be changed/calculated/fixed before the release or only after it. I run a job on the database that does all the work for me; for a few million keys it takes a considerable time (minutes). If that job can only be run after I release the new code, then degrading gracefully helps a lot: I simply release and then run the job.
PS: If you affect a lot of keys in Redis, then it is very important to use http://redis.io/topics/pipelining - it saves a lot of time.
Collect all affected keys or records (i.e. everything you want to fix in any way) and queue the reads in a pipeline.
Do whatever you want with them. If possible, queue the writing operations into a pipeline too.
Send the queued operations to Redis.
It is also very important to keep indexes of your structures. I keep sets with IDs and then simply iterate over SMEMBERS(set_with_ids). That is much, much better than iterating with the KEYS command; the sketch below pulls both ideas together.
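A minimal sketch of such a migration job with redis-py, assuming a set users:ids indexes every user and users live in hashes at user:<id> (both names are assumptions):

```python
import redis

r = redis.Redis()

# The index set gives us every id without resorting to KEYS.
user_ids = [uid.decode() for uid in r.smembers("users:ids")]

# Read phase: queue all reads in one pipeline instead of one round trip per key.
read_pipe = r.pipeline()
for uid in user_ids:
    read_pipe.hgetall(f"user:{uid}")
users = read_pipe.execute()

# Write phase: queue the fixes, then send them to Redis in one shot.
write_pipe = r.pipeline()
for uid, data in zip(user_ids, users):
    if b"email" not in data:
        write_pipe.hset(f"user:{uid}", "email", "")
write_pipe.execute()
```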
For extremely simple versioning, you could use different database numbers. This can be quite limiting in cases where almost everything is the same between two versions, but it's also a very clean way to do it if it works for you.
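For example (a sketch; the choice of databases 0 and 1 is arbitrary):

```python
import redis

# Schema version 1 lives in logical database 0, version 2 in database 1,
# on the same Redis instance.
v1 = redis.Redis(host="localhost", port=6379, db=0)
v2 = redis.Redis(host="localhost", port=6379, db=1)

v2.set("user:1:email", "someone@example.com")  # writes go to the new version only
```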

Versioning data in SQL Server so user can take a certain cut of the data

I have a requirement that in a SQL Server-backed website, which is essentially a large CRUD application, the user should be able to 'go back in time' and export the data as it was at a given point in time.
My question is what is the best strategy for this problem? Is there a systematic approach I can take and apply it across all tables?
Depending on what exactly you need, this can be relatively easy or hell.
Easy: make a history table for every table and copy data there pre-update or post-insert/update (i.e. new data is captured too). Never delete from the original table; use logical deletes instead.
Hard: keep a database version that counts up on every change, with every data item correlated to a start and end version. This requires very fancy primary key mangling.
Just to add a little comment to the previous answers: if you need to go back in time for all users, you can use database snapshots.
The simplest solution is to save a copy of each row whenever it changes. This can be done most easily with a trigger. Then your UI must provide search abilities to go back and find the data.
This does produce an explosion of data, which gets worse when tables are updated frequently, so the next step is usually some kind of date-based purge of older data.
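A self-contained sketch of that trigger-plus-history-table idea, using SQLite here just so it runs anywhere (the table names are made up; the same pattern translates to SQL Server triggers in T-SQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer (
        id    INTEGER PRIMARY KEY,
        name  TEXT,
        email TEXT
    );

    -- History table: one row per change, stamped with the change time.
    CREATE TABLE customer_history (
        id         INTEGER,
        name       TEXT,
        email      TEXT,
        changed_at TEXT DEFAULT (datetime('now'))
    );

    -- Copy the old row into the history table before every update.
    CREATE TRIGGER customer_before_update
    BEFORE UPDATE ON customer
    BEGIN
        INSERT INTO customer_history (id, name, email)
        VALUES (OLD.id, OLD.name, OLD.email);
    END;
""")

conn.execute("INSERT INTO customer (id, name, email) VALUES (1, 'Ann', 'ann@old.example')")
conn.execute("UPDATE customer SET email = 'ann@new.example' WHERE id = 1")

# The pre-update state is now preserved in the history table.
print(conn.execute("SELECT * FROM customer_history").fetchall())
```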
An implementation you could look at is Team Foundation Server. It has the ability to perform historical queries (using the WIQL keyword ASOF). The backend is SQL Server, so there might be some clues there.