I need to change the default time for the automatic cache clearing in PhantomJS, if such a feature exists. Any ideas?
This should be the feature you are looking for:
https://github.com/ariya/phantomjs/issues/10357
page.clearMemoryCache()
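If your PhantomJS build includes the method proposed in that issue, usage would look roughly like this (a minimal sketch; whether page.clearMemoryCache() is actually available depends on your PhantomJS version):

var page = require('webpage').create();

page.open('http://example.com/', function (status) {
    // Drop the in-memory cache once this page has been handled,
    // assuming the build ships the clearMemoryCache() API from the issue above.
    page.clearMemoryCache();
    phantom.exit();
});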
Each PhantomJS process has its own in-memory cache, so there is no need to clear it between script executions. You can also let PhantomJS save the cache on disk, so that it persists across executions; see the --disk-cache option.
There is no way to clear the cache during a script execution.
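For example, a typical invocation with the disk cache enabled might look like this (the size is in KB and is just an illustrative value):

phantomjs --disk-cache=true --max-disk-cache-size=50000 script.js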
localStorage, on the other hand, is persisted every time and you cannot turn that off. So you may need to add the following snippet before exiting PhantomJS:
page.evaluate(function () {
    // Clear persisted localStorage so it does not leak into the next run.
    localStorage.clear();
});
My Cypress test cases work fine when I run them from my machine against QA, but the scheduled builds from CI fail randomly because sometimes the page takes longer to load.
I've tried cy.wait(1500); it works sometimes and fails other times. So I was wondering: is there a command in Cypress that waits until all components on the page are loaded, instead of me trying different values inside cy.wait(), which will eventually fail anyway?
By default, Cypress waits automatically for elements to load and for the page to render. This section of the docs confirms that: https://docs.cypress.io/guides/core-concepts/introduction-to-cypress.html#Cypress-is-Not-Like-jQuery
Instead of inserting cy.wait() in each and every step (best practice is to minimise its use), you can set maximum timeouts in your cypress.json file:
"defaultCommandTimeout": 30000,
"pageLoadTimeout": 60000,
"requestTimeout": 60000,
If it still fails, then something is wrong with your test environment, and it might be a good time for a developer to check its performance; you don't want an environment that takes that long to load, especially if it is a replica of the production / live site.
I have IDEA Ultimate 2018.1 with Flow (flow-bin) configured and all the checkboxes selected. I followed this guide: https://www.jetbrains.com/help/idea/2017.2/flow-type-checker.html
The type checking takes a long time to run. When I change something in my code (reverting a wrong annotation, or creating a wrong one), I have to wait around 30 seconds before IDEA triggers the Flow server to analyse the files and update the editor accordingly. That is quite a lot.
Can I trigger that type checking analysis manually inside IDEA to get the editor updated? Or can I change the auto-running interval?
As Kraus noticed, my version of flow-bin was old.
I was using version 0.26.0 instead of the current 0.74.0, mainly because when I updated Flow I was not using flow-bin but flow...
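For reference, updating the package is usually just a matter of (assuming an npm-managed project):

npm install --save-dev flow-bin@latest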
Thanks. Now IDEA and flow are fast.
I was having issues with submitting a document into Solr on Google Cloud, and I read somewhere that the issue should be resolved by committing.
I couldn't figure out how to commit in Solr (I'm a noob) and pressed a button called Reload. The error went away, but I'm afraid I messed something else up. Can anyone explain what Reload does compared to Commit, or confirm whether Reload was fine?
No, reload isn't fine if you want a commit.
The reload command tells Solr to reload a core based on new configuration (solrconfig.xml, the schema and other config files). Even if it worked in your case, it is not meant to serve as a commit.
The commit command tells Solr that the data sent to it should become searchable as soon as possible. That is probably what you're looking for.
For this you can configure automatic commits and/or soft commits in solrconfig.xml. There is also a URL you can call to achieve it, something like: http://localhost:8983/solr/mycollection/update?commit=true
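For example, an autoCommit / autoSoftCommit block in solrconfig.xml looks roughly like this (the intervals are illustrative; tune them to your indexing load):

<autoCommit>
  <maxTime>15000</maxTime>          <!-- hard commit every 15s: flushes data to disk -->
  <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
  <maxTime>1000</maxTime>           <!-- soft commit every 1s: makes new documents searchable -->
</autoSoftCommit>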
I recommend you read these docs:
Commit
Reload
I have a very large tar file (>1GB) that needs to be checked out and is a precondition for executing any tests.
I cannot have a dedicated build server for my tests, since the tests are going to be executed on slave machines, which are disposable.
Checking out a file of that size is not optimal, since the precondition would increase the overall test execution time. What is the optimal way to solve this problem?
I would dedicate a location on the slaves for that file.
Then in your tests, check if the file is in that location. If not, check it out and move it there. Since this location is outside your normal work area it won't get cleaned, and the file will stay there for the next test execution to use, and you won't need to check it out again.
Of course if the file changes you have to clear that cache. A first option would be to do this manually; alternatively, you can create a hash of the file and keep that hash both in the cache and in your version control. You would then compare only the hashes, and only if those change would you check out the file again (see the sketch below).
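A minimal sketch of that hash check in Node (the paths and the expected-hash file are hypothetical; the same idea works in any scripting language available on the slaves):

var fs = require('fs');
var crypto = require('crypto');

// Hypothetical locations: the dedicated cache outside the workspace, and the
// expected hash, which is small enough to keep in version control.
var cachedFile = '/var/cache/test-fixtures/fixtures.tar';
var expectedHash = fs.readFileSync('fixtures.tar.sha256', 'utf8').trim();

function sha256(file) {
    // For very large files a streaming hash would be gentler on memory.
    return crypto.createHash('sha256').update(fs.readFileSync(file)).digest('hex');
}

if (!fs.existsSync(cachedFile) || sha256(cachedFile) !== expectedHash) {
    console.log('Cache miss: check the tar file out and move it to ' + cachedFile);
    // ...perform the expensive checkout/copy here...
} else {
    console.log('Cache hit: reusing ' + cachedFile);
}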
Of course this requires that you have the ability to checkout all the rest of your code without the big file. How to do that obviously depends on the version control system in use.
I am testing DiskCache with S3Reader2 for an image collection and it caches very nicely and seamlessly.
However, I have a hard time finding any info on clearing the disk cache for a single image, which I need to do after the image is updated.
Is that possible via the query string, or via C# code?
Thanks
S3Reader2 supports invalidation in V4+ with a sliding expiration window.
You can also take the easy approach and append "&cachebreaker=x" when your underlying file changes. This has the benefit of working immediately and across CDNs.
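For example (hypothetical image URL):

http://mysite.example/images/photo.jpg?width=300&cachebreaker=20180601

Bump the cachebreaker value whenever the source file in S3 changes, and both the disk cache and any CDN will treat the request as a new resource.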