When I am scraping something, I see output on screen of everything that Scrapy is pulling. Is there a way to not show this and instead just show some kind of status that the spider is running?
I think you mean the logs. They are probably set to DEBUG by default, so to show only important Scrapy information you should set the level to INFO. Just go to settings.py and add:
LOG_LEVEL = 'INFO'
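If you only need the quieter output for one spider, Scrapy also supports per-spider overrides via custom_settings; a minimal sketch (the spider name is hypothetical):

import scrapy

class MySpider(scrapy.Spider):
    name = 'myspider'  # hypothetical spider name
    # overrides settings.py for this spider only
    custom_settings = {'LOG_LEVEL': 'INFO'}

You can also override it for a single run from the command line: scrapy crawl myspider -s LOG_LEVEL=INFO.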
This is a well-known bug with 'solutions' all over the internet. When creating a pywhatkit auto-chat on WhatsApp -->
The tab opens
The message populates
The 'send' button is greyed out
I know this is a common bug, and everyone tries to get around it with something like the code below (pyautogui, pynput, etc.), but it's simply not working for me.
My dog has gone missing and I need to build a WhatsApp chatbot asap, can anyone help?
Ex. (with the imports it needs):

import time
import pyautogui
from pynput.keyboard import Key, Controller

keyboard = Controller()

pyautogui.click()            # focus the WhatsApp Web message box
time.sleep(2)
keyboard.press(Key.enter)    # try to send the message
keyboard.release(Key.enter)
time.sleep(5)
pyautogui.click()            # click and retry Enter
time.sleep(2)
keyboard.press(Key.enter)
keyboard.release(Key.enter)
I expected the pynput approach to work.
I entered a Focus Keyphrase and Meta Description in the Yoast SEO meta box for my Home Page. But when I exit the editor and go back to the Home Page, the Focus Keyphrase and Meta Description have disappeared.
Does anyone know how I can keep them?
Thanks
I spent easily 5 hours trying to track down and fix an issue with Yoast meta fields not saving properly. The only real clue I had was this console error:
Uncaught TypeError: Cannot read properties of undefined (reading 'registerPlugin') at window.wpseoScriptData.window.wpseoScriptData.featuredImage.i.registerPlugin (post-edit.js)
Based on this Yoast GitHub issue, it appears that the YoastSEO JavaScript app is not there yet when /wp-content/plugins/wordpress-seo/js/dist/post-edit.js calls it during page load, so it bonks.
So there may be something somewhere that is dequeuing the JavaScript that builds the YoastSEO object. One thing to check is to search your code base (excluding the Yoast plugin itself) for instances of wpseo, yoast, etc. If you find some code that is dequeuing/removing something, that might be it. If a plugin turns up in the results, deactivate it if you can and check whether that fixes anything.
I also noticed that the SEO Analysis drawer never loads and has the loading spinner running indefinitely. The javascript that controls the setting fields seems to work, but when you save it doesn't actually save, it just goes into the void.
All that said... I wasn't able to figure out what was wrong, but I did get it to start working again. Here's what worked for me...
After reading through a lot of different issue threads, I stumbled across some reports of security plugins blocking parts of the install process. I have my Wordfence settings dialed up pretty strict, so I deactivated it, per those recommendations, while re-installing Yoast.
After making sure your security plugin(s) are deactivated, deactivate and uninstall Yoast. Apparently you don't need to remove the plugin data, just uninstall. (This is also a good time to update your other plugins if you haven't already.)
Once you reinstall Yoast, you should have a button available to reindex the "indexables"; click it. It might be on the Dashboard, I don't remember. If you don't see it, you can go to Search Appearance > General > Rewrite tables, enable Force rewrite titles, and save changes. I'm not 100% positive, but it seemed like after I did that, a panel appeared in the General settings Dashboard, or somewhere like that, saying something along the lines of "Looks like some settings have changed, you should re-index your indexables now," with a button to start the process.
I also had to turn "Show SEO settings for Posts/Pages?" back on under Search Appearance > Content types; after I reinstalled, those were disabled. So maybe that helped Yoast know to load the JavaScript properly.
I'm not super familiar with Yoast, as you can probably tell, but after doing all this, it's working.
I believe you can also trigger the reindexing process from the CLI if you want, and that it's recommended if you have a large site with over 10k posts or whatnot.
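If memory serves, the WP-CLI command for that is:

wp yoast index

but double-check the Yoast documentation for your version before relying on it.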
Hope all this helps!
I need to simulate a file download in the Chrome browser, and the links below point to solutions that do what I'm looking for.
http://ardesco.lazerycode.com/index.php/2012/07/how-to-download-files-with-selenium-and-why-you-shouldnt/
https://github.com/Ardesco/Ebselen/blob/master/ebselen-core/src/main/java/com/lazerycode/ebselen/customhandlers/FileDownloader.java
I'm not able to use this code as it requires an href attribute, and in my case the button has a reactid which triggers an endpoint call. Please refer to the attached screenshot.
Can somebody please tell me what changes I need to make for the code in the above links to work?
Thanks in advance.
Regards,
Vikram
To be able to download a file, you need to find a way to get its link. The best way to do that is to talk to the developer who wrote the code and find out how it works.
Clicking on the button will trigger some sort of JavaScript event; you need to know what that event is so that you can replicate it to get the download link.
Bear in mind that this is probably not a test that's worth performing in Selenium; it's probably a unit test in JavaScript land.
Since you can't get the link to the downloadable file from the HTML and verify the HTTP status code (in your case the download is triggered by a JavaScript method), the only way to verify the download is to actually click the element and check that the file was downloaded.
You also need to set the capability in ChromeDriver to download to the default directory without asking.
Chrome Web Driver download files
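For reference, here is roughly what that looks like with Selenium's Python bindings; a minimal sketch where the page URL, button selector, download directory, and file name are all assumptions:

import os
import time
from selenium import webdriver
from selenium.webdriver.common.by import By

download_dir = "/tmp/downloads"  # assumed target directory
options = webdriver.ChromeOptions()
options.add_experimental_option("prefs", {
    "download.default_directory": download_dir,  # save files here
    "download.prompt_for_download": False,       # never show the save dialog
})
driver = webdriver.Chrome(options=options)
driver.get("https://example.com")                        # hypothetical page
driver.find_element(By.CSS_SELECTOR, "button").click()   # hypothetical download button

# crude verification: poll until the expected file shows up
for _ in range(30):
    if os.path.exists(os.path.join(download_dir, "report.pdf")):  # hypothetical file name
        break
    time.sleep(1)
driver.quit()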
I would like to test that my code does not exhibit a race condition where css files are served slowly. Is there a tool that can help me test this?
Fiddler has "simulate modem speeds", but I can't specify that only css files should be delayed.
Any other quick suggestions? (I don't want to spend too much time on setting this up, this should be a drop in tool that just does the job).
You can disable CSS styles with the Web Developer Toolbar (there are separate Chrome and Firefox downloads).
If this isn't acceptable you could reference this StackOverflow question to put some JavaScript on the page:
How to load up CSS files using Javascript?
and wrap it in a
setTimeout(function() { // code here }, millisecondsDelay);
to delay the loading however long you'd like.
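If you'd rather not touch the page's JavaScript, another drop-in option is a tiny local static server that delays only .css responses; a minimal sketch in Python (the port and the 5-second delay are assumptions, run it from your site's document root):

import time
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

class SlowCSSHandler(SimpleHTTPRequestHandler):
    def do_GET(self):
        # delay only stylesheets; everything else is served immediately
        if self.path.endswith(".css"):
            time.sleep(5)
        super().do_GET()

ThreadingHTTPServer(("localhost", 8000), SlowCSSHandler).serve_forever()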
Fiddler Delayed Responses Extension seems to be what I want.
I need to create a screenshot of a page by providing the page URL to a command line tool. I found the following application: Convert HTML To Image. This tool is OK, but I want a more flexible application. I need the ability to perform the following:
Go to a given page.
Click a button.
Take a screenshot and save it.
I want to create an application that will test a site by going to a URL, taking shots, and then sending the images by email.
Does anybody have experience in solving such problems?
WatiN can capture screenshots:
ie.CaptureWebPageToFile(@"c:\tmp\watin main page.jpg");
More info:
http://watin.sourceforge.net/releasenotes-1-2-0-4000.html
http://fwdnug.com/blogs/ddodgen/archive/2008/06/19/watin-api-capturewebpagetofile.aspx
I am a contributor to the WatiN project and the author of the WatiN Test Recorder. To do what you want, I'd suggest using something like csExWB2 (http://code.google.com/p/csexwb2/). The demo will give you the basic browser, and you can add screen shots where you like. Emailing is not covered, but that should be fairly easy.
I know this is a very old post, but I want to leave a message for visitors of this post.
PhantomJS is one option (http://www.phantomjs.org).
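For anyone landing here now, the go-to-URL / click / screenshot flow can also be scripted with Selenium's Python bindings; a minimal sketch (the URL and button selector are hypothetical):

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/page")               # hypothetical URL
driver.find_element(By.CSS_SELECTOR, "#go").click()  # hypothetical button
driver.save_screenshot("page.png")                   # writes a PNG to disk
driver.quit()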
According to the WatiN features page:
Supports creating screenshots of webpages
I would direct you to more specific documentation, but the documentation site doesn't work well with Firefox, so I can't search it.