Selenium: properties file content keeps shuffling after each run

I've organized the application.properties file content in a specific sequence, but every time I run scripts the content shuffles. Is there a way to stop this from happening? I want to read or write the config key values, but the locations of the config keys should not change.
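For what it's worth, java.util.Properties is backed by a Hashtable, so store() writes keys in effectively arbitrary order; that is why the file shuffles. One way around it is to bypass Properties and keep the entries in a LinkedHashMap. A minimal sketch (class name, file name, and keys are illustrative; note it drops comment lines on save):

```java
import java.io.*;
import java.nio.file.*;
import java.util.*;

public class OrderedProps {
    // Read key=value lines into a LinkedHashMap, preserving file order.
    // Blank lines and #-comments are skipped (and therefore lost on save).
    static Map<String, String> load(Path file) throws IOException {
        Map<String, String> props = new LinkedHashMap<>();
        for (String line : Files.readAllLines(file)) {
            int eq = line.indexOf('=');
            if (eq > 0 && !line.startsWith("#")) {
                props.put(line.substring(0, eq).trim(), line.substring(eq + 1).trim());
            }
        }
        return props;
    }

    // Write the entries back in their original insertion order.
    static void save(Map<String, String> props, Path file) throws IOException {
        List<String> lines = new ArrayList<>();
        for (Map.Entry<String, String> e : props.entrySet()) {
            lines.add(e.getKey() + "=" + e.getValue());
        }
        Files.write(file, lines);
    }

    public static void main(String[] args) throws IOException {
        Path f = Files.createTempFile("application", ".properties");
        Files.write(f, Arrays.asList("browser=chrome", "timeout=30", "baseUrl=http://example.com"));
        Map<String, String> props = load(f);
        props.put("timeout", "60");          // update a value in place
        save(props, f);
        System.out.println(Files.readAllLines(f));
        // [browser=chrome, timeout=60, baseUrl=http://example.com]
    }
}
```

Another common trick is to subclass Properties and override keys()/entrySet() to return an ordered view, but the manual approach above is simpler to reason about.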

Related

Upload the same file with a different name each time in LoadRunner

I need to upload an Excel file in a LoadRunner HTTP/HTML script with a unique filename each time. The file must be present in the directory. Copying files and renaming them would be a manual task. Can anyone tell me whether there is an automated way to do this, or can LoadRunner itself perform such tasks? Thank you.
On each of your load generators, make sure that you have a RAM drive for the file I/O for the new files. You are going to have tens, perhaps hundreds or thousands of virtual users on your load generator. You do not want contention for the read/write heads of a physical hard drive acting as a drag anchor on the performance of your entire load generator. It is for this same reason that logging is minimized during test execution.
Include the base file as part of your virtual user.
Use the appropriate language API to make a copy of the file from the virtual user directory to the RAM drive on your load generator under a new name. I might recommend a name which includes the virtual user number and iteration number at the end, to ensure uniqueness across your virtual user population.
Upload your file from the RAM drive as the source.
Delete your newly created file to return to the same initial condition as at the beginning of the iteration.
As you will be engaging a large amount of file i/o for the virtual users it is highly recommended that you monitor the load generators just as you would monitor your application under test. If you are new to LoadRunner and performance testing then this is an excellent opportunity for your mentor/trainer to guide you on a monitoring strategy.
Assuming the upload is done using an HTML form...
Use web_submit_data() with the FilePath argument.
But first let's create some parameters to get a truly unique filename (very important):
Create a parameter VUSERID which outputs the current vuser id.
Get/save the current timestamp:
web_save_timestamp_param("TIMESTAMP", LAST);
and here is the request:
web_submit_data("i1",
    "Action=https://{LR_SERVER}/{LR_JUNCTION}/upload",
    "Method=POST",
    "EncType=multipart/form-data",
    "Snapshot=t1.inf",
    "Mode=HTML",
    ITEMDATA,
    "Name=FIELDNAME", "Value={VUSERID}{TIMESTAMP}_LOADTEST.xlsx", "File=yes",
    "FilePath=REALFILEPATH.xlsx", "ContentType=WHATEVERCONTENTTYPE", ENDITEM,
    LAST);
The Value={VUSERID}{TIMESTAMP}_LOADTEST.xlsx will be the new (unique) filename. (It is unique for each vuser and iteration, which is very important!)
The FilePath points to the real file, whose content will be uploaded.

Is there an easy way to temporarily turn off parts of the scenario?

My aim is to temporarily turn off some of the Text Sinks for a specific batch run. My motive is that I want to save processing time and disk space. My wider aim is to easily switch not only between different text sinks but also parameter files, data loaders, etc.
A few things I've tried:
manually putting the XML files linked to the text sinks in a different folder --> this creates an error message (that possibly can be ignored?) and does not serve my wider aim of having different charts/data loaders/displays/etc.
creating a completely new scenario tree by copying the .rs folder and creating a new Run Configuration for that .rs folder --> if I want to change the parameters in all the scenarios at once, I need to do it manually in each one
trying to create a new scenario.xml file (i.e., scenario2.xml) in the hope this would turn up as an alternative in the scenario tree --> nothing showed up in the GUI
Thus: Is there another easy way to temporarily turn off parts of the scenario?
What we've done in the past is create different scenarios for each type of run (your second option). Regarding the parameters in the scenario folders, you could run a script to copy the version you want into all the scenario folders so you don't have to adjust each one manually.
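That copy script can be only a few lines. A sketch in Java (the folder layout and file names are assumptions; it copies one master parameter file into every *.rs scenario folder under a project root):

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.Arrays;

public class SyncScenarioParams {
    // Copy one master parameter file into every *.rs scenario folder;
    // returns how many folders were updated.
    static int sync(Path master, Path projectRoot) throws IOException {
        int copied = 0;
        try (DirectoryStream<Path> dirs =
                 Files.newDirectoryStream(projectRoot, "*.rs")) {
            for (Path scenarioDir : dirs) {
                Files.copy(master, scenarioDir.resolve(master.getFileName()),
                           StandardCopyOption.REPLACE_EXISTING);
                copied++;
            }
        }
        return copied;
    }

    public static void main(String[] args) throws IOException {
        // Demo with a temporary layout; in practice point these at your project.
        Path root = Files.createTempDirectory("project");
        Files.createDirectory(root.resolve("scenarioA.rs"));
        Files.createDirectory(root.resolve("scenarioB.rs"));
        Path master = root.resolve("parameters.xml");
        Files.write(master, Arrays.asList("<sweep/>"));
        System.out.println(sync(master, root) + " folders updated");
        // 2 folders updated
    }
}
```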

Amazon S3: How to safely upload multiple files?

I have two client programs which are using S3 to communicate some information. That information is a list of files.
Let's call the clients the "uploader" and "downloader":
The uploader does something like this:
upload file A
upload file B
upload file C
upload a SUCCESS marker file
The downloader does something like this:
check for SUCCESS marker
if found, download A, B, C.
else, get data from somewhere else
and both of these programs are being run periodically. The uploader will populate a new directory when it is done, and the downloader will try to get the latest versions of A,B,C available.
Hopefully the intent is clear — I don't want the downloader to see a partial view, but rather get all of A,B,C or skip that directory.
However, I don't think that works, as written. Thanks to eventual consistency, the uploader's PUTs could be reordered into:
upload file B
upload a SUCCESS marker file
upload file A
...
And at this moment, the downloader might run, see the SUCCESS marker, and assume the directory is populated (which it is not).
So what's the right approach, here?
One idea is for the uploader to first upload A,B,C, then repeatedly check that the files are stored, and only after it sees all of them, then finally write the SUCCESS marker.
Would that work?
I stumbled upon a similar issue in my project.
If the intention is to guarantee cross-file consistency (between files A, B, C), the only possible solution (purely within S3) is:
1) put them as NEW objects, and
2) do not explicitly check for existence using a HEAD or GET request prior to the PUT.
These two constraints are required for fully consistent read-after-write behavior (https://aws.amazon.com/about-aws/whats-new/2015/08/amazon-s3-introduces-new-usability-enhancements/).
Each time you update the files, you need to generate a unique prefix (folder) name and put this name into your marker file (the manifest) which you are going to UPDATE.
The manifest will have a stable name but will be eventually consistent. Some clients may get the old version and some may get the new one.
The old manifest will point to the old “folder” and the new one will point to the new “folder”. Thus each client will read only old files or only new files, but never mixed, so cross-file consistency will be achieved. Still, different clients may end up having different versions. If the clients keep pulling the manifest and getting updated on change, they will eventually become consistent too.
A possible solution for client inconsistency is to move the manifest metadata out of S3 into a consistent database (such as DynamoDB).
A few obvious caveats with the pure S3 approach:
1) requires full set of files to be uploaded each time (incremental updates are not possible)
2) needs eventual cleanup of old obsolete folders
3) clients need to keep pulling manifest to get updated
4) clients may be inconsistent between each other
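The unique-prefix-plus-manifest protocol described above can be sketched as follows. A plain in-memory map stands in for S3 here, so this shows only the ordering of the writes, not real SDK calls (key names are illustrative):

```java
import java.util.*;

public class ManifestProtocolDemo {
    // A plain Map stands in for S3; in real code these would be
    // putObject/getObject calls against a bucket.
    static final Map<String, String> store = new HashMap<>();

    // Uploader: write A, B, C under a brand-new prefix, then update the manifest.
    static void upload(String a, String b, String c) {
        String prefix = "batch-" + UUID.randomUUID();   // never reused: puts are "new objects"
        store.put(prefix + "/A", a);
        store.put(prefix + "/B", b);
        store.put(prefix + "/C", c);
        store.put("manifest", prefix);                  // the only object ever overwritten
    }

    // Downloader: resolve the prefix via the manifest, then read that batch's files.
    static List<String> download() {
        String prefix = store.get("manifest");
        if (prefix == null) return null;                // no complete batch yet
        return Arrays.asList(store.get(prefix + "/A"),
                             store.get(prefix + "/B"),
                             store.get(prefix + "/C"));
    }

    public static void main(String[] args) {
        upload("a1", "b1", "c1");
        upload("a2", "b2", "c2");
        System.out.println(download());   // a matching set, never mixed old/new
    }
}
```

Because a prefix is only ever advertised after all three files under it exist, whichever manifest version a client observes points at a complete, internally consistent batch.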
It is possible to do this with single copies in S3. Each file (A, B, C) will have a unique hash or version code prepended to it [e.g. an md5sum generated from the concatenation of all three files].
In addition the hash value will be uploaded to the bucket as well into a separate object.
When consuming the files, first read the hash file and compare it to the last hash successfully consumed. If it has changed, read the files and check the hash value within each. If they all match, the data is valid and may be used. If not, the downloaded files should be discarded and downloaded again (after a suitable delay).
This will catch the occasional race condition between write and read across multiple objects.
This works because the hash is repeated in all objects. The hash file is actually optional, serving as a low-cost and fast short cut for determining if the data is updated.
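A sketch of that hash-tagging scheme in Java (the exact field layout is an assumption; here the batch hash is simply prepended as the first line of each object):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class HashTaggedFiles {
    // md5 of the concatenated payloads, hex-encoded.
    static String batchHash(String... payloads) {
        try {
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            for (String p : payloads) md5.update(p.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : md5.digest()) hex.append(String.format("%02x", b));
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);   // MD5 is always available in the JDK
        }
    }

    // Prepend the hash as a first line so every object carries its batch id.
    static String tag(String hash, String payload) {
        return hash + "\n" + payload;
    }

    // Consumer side: check that a downloaded object belongs to the expected batch.
    static boolean matches(String expectedHash, String tagged) {
        return tagged.startsWith(expectedHash + "\n");
    }

    public static void main(String[] args) {
        String h = batchHash("A-data", "B-data", "C-data");
        String a = tag(h, "A-data");
        System.out.println(matches(h, a));   // true
    }
}
```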

How to delete a large file in Grails using Apache Camel

I am using Grails 2.5 and we are using Camel. I have a folder called GateIn with a polling delay of 3 minutes, so every 3 minutes Camel looks into the folder for a file and, if a file exists, starts to process it. If the file is processed within 3 minutes, it gets deleted automatically. But suppose my file takes 10 minutes: the file is not deleted, and the same file is processed again and again. How can I make the file get deleted whether it is a small or a bulk file? I have used noop=true to stop reuse of the file, but I want the file deleted once it is processed too. Please give me some suggestions.
You can check the file size using the Camel file language and decide what to do next.
Usually, when a large file has to be processed on such a short polling interval, it is better to have a separate processing zone (a physical directory) and move the file there immediately after consuming it.
You can have separate logic or a Camel route to process the file. After successful processing, you can delete it or take the appropriate step according to your requirement. Hope it helps!
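A minimal sketch of that two-zone idea in Camel's Java DSL (route fragments only, not a runnable class; option names are per the Camel file component, and bean:fileProcessor is a placeholder for your own processing step):

```java
// Poll GateIn every 3 minutes and immediately move each file to a work zone,
// so the next poll never sees a file that is still being processed.
from("file:GateIn?delay=180000&move=../work")
    .log("moved ${file:name} to the work zone");

// Process files from the work zone; delete=true removes the file only after
// the route has completed successfully, however long the processing takes.
from("file:work?delete=true&readLock=changed")
    .to("bean:fileProcessor");
```

readLock=changed makes the consumer wait until the file has stopped growing before picking it up, which also helps with large files that are still being written.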

Infinispan Configurations Using property file

Is it possible to load values for the infinispan-config.xml file from a properties file, so that we can get rid of hard-coded values? If so, can somebody show me how to load a property file into infinispan-config.xml, because there is no predefined tag for configuration.
This is possible by setting the respective system properties.
For example here is one specific Infinispan configuration file which is using this approach: https://github.com/infinispan/infinispan/blob/master/core/src/test/resources/configs/string-property-replaced.xml
and here is a test which is working with that file: https://github.com/infinispan/infinispan/blob/master/core/src/test/java/org/infinispan/config/StringPropertyReplacementTest.java
This looks to be the most straightforward way to achieve this.
The last thing which needs to be done is to simply read all lines of your properties file and set them as system properties.
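A minimal sketch of that last step, assuming your infinispan-config.xml uses ${...} placeholders as in the linked example (the property name, attribute, and file path below are illustrative):

```java
import java.io.FileReader;
import java.io.IOException;
import java.nio.file.*;
import java.util.Arrays;
import java.util.Properties;

public class ConfigProps {
    // Load every entry of a .properties file into the JVM system properties,
    // so that ${...} placeholders in infinispan-config.xml get substituted,
    // e.g. an XML attribute like: clusterName="${cluster.name}"
    static void loadIntoSystemProperties(String path) throws IOException {
        Properties props = new Properties();
        try (FileReader reader = new FileReader(path)) {
            props.load(reader);
        }
        props.forEach((k, v) -> System.setProperty((String) k, (String) v));
    }

    public static void main(String[] args) throws IOException {
        // Demo with a generated file; in practice point this at your own file.
        Path demo = Files.createTempFile("infinispan", ".properties");
        Files.write(demo, Arrays.asList("cluster.name=demo-cluster"));
        loadIntoSystemProperties(demo.toString());
        System.out.println(System.getProperty("cluster.name"));   // demo-cluster
    }
}
```

This must run before the Infinispan configuration is parsed, otherwise the placeholders will not be resolved.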