How can I automate my spider runs using scrapyd? - scrapy

I know this probably seems ridiculous. I have given up on a Windows scrapyd setup, set up an Ubuntu machine instead, and got everything working just great. I have 3 projects, each with its own spider. I can run my spiders from the terminal using:
curl http://localhost:6800/schedule.json -d project=myproject -d spider=spider2
Everything seems to work in the web UI as well; the items scraped when I run the command above show up in the correct places.
I want to run project 1 every day at 12:00 am, project 2 every second day at 2:00 am, and project 3 every 2 weeks at 4:00 am. Please help me learn how to do this.
Is scrapyd even an appropriate solution for this task?

Scheduled tasks seem to do the trick. I have to say I'm left wondering whether it was really worth the effort of setting up an Ubuntu machine for the sole purpose of running scrapyd when I could have just scheduled the scrapy crawl commands with schtasks on Windows.
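For reference, a minimal crontab sketch that matches the schedule in the question, assuming cron on the Ubuntu box, scrapyd's default port, and placeholder project/spider names (edit with crontab -e):

SHELL=/bin/bash
# m  h   dom mon dow  command
0   0   *   *   *    curl http://localhost:6800/schedule.json -d project=project1 -d spider=spider1
0   2   */2 *   *    curl http://localhost:6800/schedule.json -d project=project2 -d spider=spider2
0   4   *   *   1    [ $(( 10#$(date +\%V) \% 2 )) -eq 0 ] && curl http://localhost:6800/schedule.json -d project=project3 -d spider=spider3

Two caveats: */2 in the day-of-month field restarts at each month boundary, so "every second day" is only approximate, and cron has no fortnightly field, so the last job runs weekly (Monday here, an arbitrary choice) and the week-parity guard skips every other run. Note that % must be escaped as \% inside a crontab entry.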

Related

Runner got stuck after running the test cases; I am using version 1.1.0 [duplicate]

Karate Tests Stuck on Running Forever

We currently have about 200 test features. We have started to see something strange: most of the time the tests just get stuck and do not proceed when we run the mvn test command as follows:
mvn clean test -Dcucumber.options="--tags $tags" -Dtest=TestRunner -Dkarate.env=$env
Some tests run perfectly fine, but at some point the rest just get stuck and hang.
We run the tests in parallel using 10 threads.
It gets stuck like this.
Has anybody experienced something similar? Any ideas about what could have gone wrong?
Thanks
This should be fixed in 0.9.5.RC3 - it is stable to use for API testing, so I recommend you upgrade.
If anyone faces this problem with any other version of Karate, please understand that the best (and possibly only) way to troubleshoot or solve it is to follow this process: https://github.com/karatelabs/karate/wiki/How-to-Submit-an-Issue
I actually have the same problem, but I can't comment because of reputation. My project uses Gradle, and I'm using IntelliJ IDEA with JDK 1.8 (at one point before all this I tried the JetBrains SDK 11 but had the same problem; I downgraded to Java 8 and everything worked again). This time I did as Peter said and upgraded to 0.9.5.RC4, but some of my features still never finish. For example, I'm currently working on a very simple feature that calls another feature for login. That login feature works for many other features, but for this one it appears to reach the end of its execution and never return to the calling feature. As I was running out of options, I made a new, simple project, copied in the resources folder where I store my features along with my parallel runner class, and tried again, but it behaves the same way: the execution never ends.
I'll upload an image of my screen while it executes; as you can see, it has been running for 15 minutes.
(screenshot: projectView)
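As a side note, narrowing the run can help pin down which feature hangs. A sketch reusing only the flags already shown above, where @login and dev are placeholders for your own tag and environment:

mvn clean test -Dtest=TestRunner -Dkarate.env=dev -Dcucumber.options="--tags @login"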

How to run e2e tests automatically?

I really don't know how to phrase this question for a search engine, so excuse me if it is naive.
Our team is developing an SPA application in ReactJS; we also do the back-end programming in NodeJS. Our project recently gained more e2e tests, written with the webdriver.io packages. Everything works as expected, but roughly 30 tests take about 50 minutes to run. That is too long to pause a developer's work and ask them to run the tests.
We came up with the idea that, now that we have so many tests, we need to run them on a separate computer (other than a developer's laptop; below I call it the e2e-laptop).
So I wrote a bash script and installed Ubuntu on the e2e-laptop. My idea is that a developer who wants to run the e2e tests logs in to the e2e-laptop over ssh, runs the script with arguments (e.g. --rev= the git revision the tests should run against, --email= where to send the Allure report) and logs out. After the tests are done, the Allure report arrives in their mailbox.
This all works, but it feels like a dirty MVP. What I would really like to give my team is a browser-based UI offering the features my script has. I imagine this software hosted on the e2e-laptop, so every developer can open its web address in a local browser. After authorization there would be options: run all specs, run chosen specs, send the report, and more. Ideally it would also allow tests commissioned by multiple developers to run simultaneously.
What software do I need?
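For context, a rough sketch of what the described script might look like (the repo path, config file names, and the allure and mail commands are assumptions, not the asker's actual setup, and the Allure CLI is assumed to be installed):

#!/usr/bin/env bash
# usage: ./run-e2e.sh --rev=<git revision> --email=<recipient>
set -euo pipefail

for arg in "$@"; do
  case "$arg" in
    --rev=*)   REV="${arg#--rev=}" ;;
    --email=*) EMAIL="${arg#--email=}" ;;
  esac
done

cd ~/project                      # assumed checkout location on the e2e-laptop
git fetch --all && git checkout "$REV"
npm ci                            # clean install of dependencies
npx wdio run wdio.conf.js || true # run the suite; don't abort, so the report is still generated
npx allure generate allure-results --clean -o allure-report   # build the Allure report
echo "Allure report for $REV is ready on the e2e-laptop" | mail -s "e2e results ($REV)" "$EMAIL"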
You need a continuous integration tool. https://stackify.com/top-continuous-integration-tools/
I recommend Jenkins.
I would first try to run your selenium tests headless in a docker container on your laptop. Once you are able to do that, use that same configuration in your docker container running in Bitbucket pipelines. It could actually be the same container and the same scripts. Then, developers can just make a branch and work with the tests on that branch. If only a certain subset of tests need to run, then the developer can make the necessary changes on his or her local branch to run those tests and push it up to Bitbucket. This should help with the configuration https://github.com/SeleniumHQ/docker-selenium.
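A minimal sketch of that headless setup, assuming the selenium/standalone-chrome image from the linked repository and a recent webdriver.io CLI (the container name and shm size are just common defaults):

# start headless Chrome with a WebDriver endpoint on localhost:4444
docker run -d --name selenium --shm-size=2g -p 4444:4444 selenium/standalone-chrome

# point wdio.conf.js at hostname 'localhost', port 4444, then run the suite
npx wdio run wdio.conf.js

The same image can then be declared as a service in bitbucket-pipelines.yml, so the local run and the CI run share one configuration.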

Run a php script through terminal in prestashop

We are currently migrating a site from Joomla to Prestashop. We created a PHP migration script to migrate all the categories, products, variants and combinations. The script works fine for categories, products and variants, but while adding the combinations it fails with a 504 gateway error. To fix this we increased the execution time and memory limit, but it did not help. So we tried to run our migration script through the terminal, but we don't know how to configure the script to run there. Kindly advise us.
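A rough sketch of running such a script from the command line, assuming the PHP CLI is installed and that the script bootstraps Prestashop itself (e.g. by requiring config/config.inc.php); the script path is a placeholder:

# the CLI ignores the web server's timeout; lift PHP's own limits for this run only
php -d max_execution_time=0 -d memory_limit=1024M /var/www/prestashop/scripts/migrate_combinations.php

Because the 504 comes from the gateway in front of PHP rather than from PHP itself, running the script this way sidesteps the timeout entirely; for very long runs it can be started under nohup or screen so it survives the SSH session.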

gvim - omnicompletion on remote ftp crashes

Hello, I have a problem using the latest Vim with omni completion while editing a file located remotely on FTP. When I open the file with :e and an ftp:// address, it just hangs on "searching" and then, after a long time, reports that a file is missing; the second time I try, Vim crashes.
I've looked through :help and googled, but it does not seem to be a common problem. I would appreciate any suggestions.
When editing locally it works great.
Thanks in advance.
To whomever this may interest:
After searching and consulting, I found that this happened because Vim tries to look up information on the server, which requires a long round trip for each file.
I found another setup for Windows that works well:
There is a program called WinSCP.
It isn't the only option, but it does the job perfectly.
It has a feature that syncs any changes made in a local directory and uploads them to the FTP server.
I use all the Vim features locally, and it works fast and well. :)
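For reference, a rough sketch of the same sync done through WinSCP's scripting interface instead of the GUI (host, credentials and paths are placeholders):

winscp.com /command "open ftp://user:password@ftp.example.com/" "synchronize remote C:\work\site /public_html/site" "exit"

The interactive equivalent is WinSCP's "Keep remote directory up to date" feature, which watches a local folder and uploads changes as they happen.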