Logging GIMP 2.10 commands for script development? - scripting

I'm trying to write some GIMP 2.10+ scripts as "actions" to help speed up my processes. I did this back in 2.4 era, then took a long break and now I'm very out-of-date.
Is it possible to log the GIMP commands (plus arguments) fired click-by-click to a file or window somewhere, as an assist to GIMPScript development? I of course have the procedure browser and "search and run a command" but it's very slow going when I'm not sure what I even need to look for in some cases.

Not as far as I know. It wouldn't be too useful anyway, because the mapping between UI actions and API entries is usually not that direct (especially in 2.10, with all the GEGL-based plugins). But the procedure browser has rarely deceived me, and in the dire cases you can always ask here.

Related

Listening for Events in a VB.Net Script?

I am in need of a solution, and I am not quite sure I have enough knowledge to properly ask the question, so please bear with me. I am working with a CAD application that has its own API supporting the .NET Framework 4.5. I wanted to develop some customized functionality for the application using VB.NET, but because of work restrictions I am not allowed to install custom programs or run custom executables. I am, however, allowed to use the CAD application's scripting environment (which also supports the .NET Framework). I am limited in what I can achieve with scripting because, as far as I know, I can't listen for an event in a script, since the script's run time ends so quickly. Is there a way to extend the run time of a script until certain events occur? If anybody is curious, the CAD application I am using is Siemens NX. Any ideas?
I do not know how far this answer will help you.
When you do not have events to listen for, try to rely on return variables and assert statements (or at least if statements).
That is, proceed to the next step only once something has happened, which is the very traditional way.
Also, if you want to prolong the run time of the script, you can use something like "sleep" or "delay" statements (typically taking milliseconds as input).
That is, while something is happening (tracked by a variable), sleep until that action is complete.
Or check for the action's completion status in an infinite while loop and exit when it is done.
In simple terms, the traditional way of doing things helps in your situation.
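The poll-and-sleep idea above can be sketched in Python (the same structure translates directly to VB.NET with `Threading.Thread.Sleep`); `wait_until` and the flag below are illustrative names, not part of the NX API:

```python
import time

def wait_until(condition, timeout=10.0, interval=0.1):
    """Poll `condition` until it returns True or `timeout` seconds elapse.

    Returns True if the condition was met, False on timeout. This is the
    "check a status variable in a loop, sleep between checks" pattern.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# Example: some external action flips a flag; the script waits for it.
state = {"done": False}
state["done"] = True  # stand-in for the event you cannot listen for

print(wait_until(lambda: state["done"], timeout=1.0))  # True
```

A real script would also check for error states inside the loop, so a failed action cannot keep it spinning until the timeout.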

Which of phantomjs or webdriver would be easier/more-appropriate for scraping my linkedin account?

I have a personal need to scrape/automate-access-to my linkedin account (copy my contacts, etc), and obviously the site is too ajaxy to just use wget, urllib, etc.
I cannot use the LinkedIn API, as it happens to restrict some use cases I'm interested in.
I am proficient in Python and Javascript. I've used webdriver in the past for small scraping projects, but it was long ago enough that there's probably a similar overhead in re-learning it vs learning phantomjs.
I am not planning to run any kind of high-volume, cluster-based scraping operation; this is all going to run on my local machine at some appropriate rate limit so as not to piss off LinkedIn. It's mostly just for personal convenience, automation, etc.
I've heard good things about phantomjs, but I'd like to understand what if any advantage it has over webdriver (or vice versa). I guess phantomjs is "headless", meaning it doesn't actually have to run a browser, which I guess makes it easier to write command line scripts or consume fewer resources or some other property that I would love to have explained to me!
I can appreciate the argument that webscraping programs should be javascript, since that's more of a browser-native language, but would love to hear if that's a primary reason why people are using phantomjs (or one of its cousins)
I've used both Selenium and Phantom/Casper in scraping jobs, and also used both in functional testing jobs. If I was going to do what you describe I would choose CasperJS. I would choose CasperJS over PhantomJS because:
Easier to describe the flow of steps. (You have to deal with all the async callbacks when using PhantomJS directly.)
SlimerJS can be swapped in to have it use Gecko (i.e. Firefox), with no additional effort. (I don't think this will matter with LinkedIn, but PhantomJS 1.9.x is based on a fairly old WebKit, so when sites use newer HTML5 features it can sometimes fail.)
Reasons to choose CasperJS over Selenium:
The flow of steps is quite easy to describe in CasperJS.
Selenium feels more like hard work. This might be because PHP is my preferred glue language, and since Selenium 2.0, PHP has been treated as an outsider. But also it has the philosophy of only allowing actions that a user could do in the browser with keyboard and mouse. This is sometimes not flexible enough.
Selenium breaks each time Firefox gets updated, and I have to install the latest version. Irritating. (PhantomJS and SlimerJS have their browser internally, so are cleanly independent of system updates to your desktop browser.)
As you are proficient in both Python and JavaScript, I would say none of the above are killer reasons. It doesn't really matter which you choose, the effort is going to be roughly the same.
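The "flow of steps" point above can be illustrated abstractly (in Python rather than JavaScript, purely to show the shape): CasperJS queues named steps and runs them in order, while raw PhantomJS pushes the same flow into nested callbacks. The functions here are stand-ins, not any real Casper or Phantom API:

```python
# Casper-style: declare a linear list of steps; a tiny runner executes
# them in order. Easy to read, reorder, and extend.
def run_steps(steps):
    for step in steps:
        step()

trace = []
run_steps([
    lambda: trace.append("open login page"),
    lambda: trace.append("submit credentials"),
    lambda: trace.append("read contacts"),
])

# Phantom-style: each async action takes a continuation, so the same
# flow ends up nested one callback inside another.
def open_page(on_loaded):    # stand-in for page.open(url, callback)
    on_loaded()

def submit_form(on_done):    # stand-in for page.evaluate + callback
    on_done()

nested = []
open_page(lambda: (
    nested.append("open login page"),
    submit_form(lambda: (
        nested.append("submit credentials"),
        nested.append("read contacts"),
    )),
))

print(trace == nested)  # True: same work, but only one version reads linearly
```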

Automation scripts: autoitscript vs ptfbpro

I am trying to use these 2 projects for primitive GUI testing automation:
http://www.ptfbpro.com/
http://www.autoitscript.com/
And I can't make my choice.
Can somebody explain to me (in 2 or 3 lines) why they use one of them (or another; please specify)?
I use AutoIt...
because it's free, well documented (not only from inside the SciTE editor), and you can easily compile your script into a small executable or even create a complete GUI, and there is a very good community in the forums and around here. Its BASIC-like syntax is really easy to understand; there are functions, a for-each syntax, dynamic arrays, and lots of additional functions from other users. There's good integration with other programming languages, and thanks to the many exposed WinAPI functions there is very little you can't do. It can automate IE usage without even displaying a browser window, send network packets, and send keystrokes like a user sitting in front of your screen. There's the AU3Record tool, which lets you just record a macro and replay it, or save it as a script that you can then easily optimize and edit for your needs. Or use the AutoIt Window Info tool to see all the possible handles for your application; you can interact with any kind of program output/display according to different algorithms you may invent.
Enough facts? ;-)
Go with AutoIt3. It's a lot more reliable, and you get a complete scripting language. PTFBPro is only a tool (and not free), nothing more. AutoIt3 has a lot of contributors who can help you in your process; PTFBPro is dead.
If you want a script that really does what you want, just go for AutoIt. PTFBPro can't be used as a professional tool.
Autoit3 as well. You really can't beat it for being free and so easy to use.

Scripting Hardware Profile Creation?

Is there any way to Script Hardware Profile Creation?
I have to set up a ton of laptops with the same 3 hardware profiles (LAN, WiFi, and Modem). It takes forever, and doing it by hand seems pointless if I could only find a script.
Powershell or .bat or any language is fine.
Perhaps you can use SIKULI to automate this. SIKULI is an automation tool that can automate any task you can do through the GUI. Their demo on the main page looks similar to what you're trying to do...
EDIT: The problem here would be to run the script without installing anything, e.g. from a USB drive. Should be possible. Obviously this is unworkable otherwise... This gives some details on how that can be done.

Is there an equivalent of Don Libes's *expect* tool for scripting interaction with web pages?

In the bad old days of interactive console applications, Don Libes created a tool called Expect, which enabled you to write Tcl scripts that interacted with these applications, much as a user would. Expect had two tremendous benefits:
It was possible to script interactions that otherwise would have had to be repeated by hand, tediously. A classic example was dialup Internet access hell (from the days before PPP).
It was possible to write scripts to test one's own interactive applications, programmatically, as part of a regression suite.
Today most interactive applications are on the web, not on the console. Hence my question: is there any tool that provides the ability to interact with web pages and web forms programmatically, much as Expect provides the ability to interact with console applications programmatically?
(The closest thing I am aware of is Chickenfoot.)
You might be looking for Selenium.
I've used Selenium RC in conjunction with Python to drive web page interactions programmatically. This has allowed me to write pretty extensive user tests in which forms and inputs are driven and their results are measured.
Check out the Selenium IDE on Firefox (as mentioned above). It allows you to record tests in the browser and play them back, either using the IDE itself, or the Remote Control app.
Perl Mechanize works pretty well for this exact issue.
HTTPS and some authentication issues are tricky at times. I will be posting a couple of questions about those in the future.
I did a ton of Expect work in a former life and always thought Don Libes' Expect book was one of the best-written and most enlightening technical books I'd ever seen.
Hands down I would say that Perl's WWW::Mechanize library is what you want. I note above that you were having trouble finding documentation. There is good documentation for it! Look up the module's distribution on search.cpan.org and see what all is packaged with it. There's a FAQ, Cookbook with examples, etc. Plus I've always been able to get help on the web. If you can't get it here, try at use.perl.org or perlmonks.org. WWW::Mechanize's author, Andy Lester, is present on Stack Overflow. (He's also an all around friendly and helpful guy.)
I believe WWW::Mechanize also has a program that is analogous to Expect's autoexpect program: you set up a proxy process running this program as a server, point your browser to it as a proxy, perform the actions you want to automate, and then the proxy program gives you a WWW::Mechanize program for you to use as a base for your project. (If it works like autoexpect, you will certainly want to make modifications from there.)
As mentioned above, WWW::Mechanize is a browser (to be more exact, it is a web client or http client) that happens to be programmable. The last time I looked, there was even work in progress to make it support JavaScript.
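The kind of form handling WWW::Mechanize automates can be sketched in Python with only the standard library. The HTML below is a made-up example; a real script would fetch the page with `urllib.request` and POST the filled-in fields back to the form's action URL:

```python
from html.parser import HTMLParser

class FormParser(HTMLParser):
    """Collect each form's action URL and its named input fields,
    roughly what a Mechanize-style client does before submitting."""
    def __init__(self):
        super().__init__()
        self.forms = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "form":
            self.forms.append({"action": attrs.get("action", ""),
                               "fields": {}})
        elif tag == "input" and self.forms and "name" in attrs:
            # Record the field with its default value (e.g. hidden tokens).
            self.forms[-1]["fields"][attrs["name"]] = attrs.get("value", "")

html = """
<form action="/login" method="post">
  <input name="user" value="">
  <input name="pass" value="">
  <input name="csrf" value="abc123" type="hidden">
</form>
"""

parser = FormParser()
parser.feed(html)
form = parser.forms[0]
form["fields"]["user"] = "me"      # fill in fields, Mechanize-style
form["fields"]["pass"] = "secret"
print(form["action"], form["fields"])
```

Note the hidden `csrf` field is carried along automatically, which is exactly why a form-aware client beats hand-built POST requests.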
In addition to Selenium, if you're doing the Ruby/Rails thing, there's Webrat.