I have done data-driven automation testing using Excel and XML.
Is there any other input file format that we can use for data-driven automation testing with Selenium WebDriver?
A Google Docs spreadsheet is good. You could try that; multiple people will be able to manage the data in the spreadsheet at the same time.
Yes, you can use any file format for your data.
You have to write code to read and write that file format in your preferred language, and then pass the data into your Selenium code.
You can design your automation testing framework using one of the test frameworks available for your preferred language; for Java, you can use JUnit, TestNG, etc.
You can use any input file format that Java (or any other language you have chosen) supports. In order to achieve this, you need to write utility functions to read/write the data from/to the file.
For this purpose you can use built-in Java classes such as Scanner, or you can use external libraries to do the work for you.
Specific to your question: other than Excel, I have used the OpenCSV parser library for Java to read and write data in .csv files. It is a simple yet powerful library for working with CSV files.
You can go through this article for a better understanding - openCSV
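As a rough sketch of how that fits together (assuming the OpenCSV and Selenium WebDriver jars are on the classpath; the file name, URL, and field names below are made up for illustration):

import java.io.FileReader;
import java.util.List;

import com.opencsv.CSVReader;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class CsvDataDrivenTest {
    public static void main(String[] args) throws Exception {
        // Read every row of a hypothetical data file; each row holds
        // one set of test inputs, e.g. username,password
        CSVReader reader = new CSVReader(new FileReader("testdata.csv"));
        List<String[]> rows = reader.readAll();
        reader.close();

        WebDriver driver = new FirefoxDriver();
        try {
            for (String[] row : rows) {
                driver.get("http://example.com/login"); // placeholder URL
                driver.findElement(By.name("username")).sendKeys(row[0]);
                driver.findElement(By.name("password")).sendKeys(row[1]);
                driver.findElement(By.name("submit")).click();
                // ...assert on the result for this row here...
            }
        } finally {
            driver.quit();
        }
    }
}

Each CSV row drives one pass through the test, which is the essence of the data-driven approach; the same pattern works for any file format you can parse.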
Is it possible to make a DLL plugin for the programming editor called EditPlus?
I wish to extend the editor like in the EditPlug text editor or Notepad++, where you create a DLL that lets you talk to the editor. Another example is Total Commander, where you create a DLL to talk to the program from your own code in Delphi, Visual C++, or any tool that can create a Windows DLL.
Or is there no way to make a plugin for EditPlus because they have not implemented a plugin system?
I do not see any kind of plugin architecture mentioned in EditPlus's feature list. If it does not expose a plugin API, then you cannot write a plugin for it. All you can do is create a DLL that is injected into EditPlus's address space by an external process and then uses OS API calls to directly manipulate EditPlus's UI and raw memory as needed.
EditPlus does not have a plugin system, so you cannot extend it in any way.
EditPlus has no plugin system!
In EditPlus, you can only use the Text Filter feature to do something like what plugins do.
A text filter can execute a script file or an executable file.
You can use Perl, Java, Python, VBScript, JavaScript, or any command-line application that supports standard input and standard output to write and run a text filter.
A text filter can only change the text content in the editor area.
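For example (a minimal sketch, not taken from the EditPlus documentation), a text filter written in Java just reads the editor text from standard input and writes the replacement text to standard output:

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class UpperCaseFilter {
    public static void main(String[] args) throws Exception {
        // EditPlus pipes the editor text to the filter's standard
        // input and replaces it with the filter's standard output.
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(line.toUpperCase());
        }
    }
}

You would then configure the compiled program (e.g. java UpperCaseFilter) as the text filter command in EditPlus.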
I'd be very glad if a real plugin system came to EditPlus.
See also:
Writing a text filter for EditPlus
Some Text Filters for EditPlus
Yes, it is possible. It's hard (not so very hard, but still).
I explain here how to extend EditPlus with PHP:
https://stackoverflow.com/a/61254718/5781320
I came across this question while looking for methods simpler than mine (just out of curiosity and fun). I wrote the fastest PHP framework in the world, and I'd be glad to make it "talk" with EditPlus.
It is possible to compile servers in PureBASIC (purebasic.com) that interact with Android applications written in B4A (now open source; see b4x.com), which in turn can interact with Google speech recognition. So yes, it is possible to talk from your phone to a server DLL or EXE that interacts with EditPlus, Total Commander, and many other programs.
I did it myself because I was curious whether there was any delay.
EditPlus doesn't offer this possibility directly, but Notepad++ does: on https://www.purebasic.fr/english/viewtopic.php?f=12&t=65680&hilit=notepad+plugin there is a plugin, built with the same PureBASIC I use, that works on that version of Notepad++ and can be modified however you like. On the current version of Notepad++ I tried it myself and it doesn't work: the plugin is obsolete and gets rejected. So whichever method you choose, it is hard to implement the system you need amid the constant churn of open-source development.
What open source software should I use to write scripts to test for no errors on a site?
Could I/we write something better ourselves if there were a limited number of goals outlined, yet flexible enough to take on new rules, etc.?
The only consistent response we want is no errors, period.
I know Java, ASP and scripting languages if that helps.
Thanks!
Selenium is a good automated website testing tool. It allows macros as well as hand-written scripts, and it has support for the Firefox browser.
An understanding of Java should suffice.
You can check it out at http://seleniumhq.org/
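For a taste of the hand-written style (a minimal sketch using the Selenium WebDriver Java bindings; the URL and the error text being checked are placeholders you would adapt to your site):

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class SmokeTest {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("http://example.com"); // placeholder URL
            // Crude "no errors" checks: the page rendered a title and
            // contains no obvious error text
            if (driver.getTitle().isEmpty()) {
                throw new AssertionError("page rendered no title");
            }
            if (driver.getPageSource().contains("Server Error")) {
                throw new AssertionError("error text found on page");
            }
        } finally {
            driver.quit();
        }
    }
}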
Another good open-source alternative is HTMLUnit: http://htmlunit.sourceforge.net/
Again, this requires knowledge of Java.
You might want to consider Robot Framework, combined with the Selenium2Library keyword library. It lets you write very human-readable tests and produces very nice reports. It integrates nicely with Jenkins. Robot Framework is written in Python and can be extended with Python. It allows you to create data-driven tests, BDD-style tests, or more traditional procedural tests.
I'm looking for info on how to write SQL scripts to automate the creation of a versioned feature class in ArcSDE. I want to be able to automate the process itself, as well as put the scripts under version control. Can anyone point me to a resource that explains how to do this?
Is this even possible? It seems like there are lots of interrelationships between tables and data when a feature class is added.
P.S. It doesn't have to be pure SQL, but it should be some kind of scripting so we can save to version control and run outside of ESRI desktop tools.
It would be exceedingly difficult to do this in SQL without breaking your database. As you indicated, there are a lot of relationships between the tables, and if you don't get it right, then your database is messed up.
If you're looking at a scripted solution, you might want to look at creating a Python script to create the versioned feature class. There are a few ways to do it, from creating a raw Python script in Notepad, to creating a geoprocessing model in ArcCatalog and exporting the model to a Python script.
Here's a link to the ESRI help on geoprocessing scripts: http://webhelp.esri.com/arcgisdesktop/9.3/index.cfm?TopicName=An_overview_of_writing_geoprocessing_scripts
Earlier I asked a question about command-line parameters to automate processing of a file in InfoPath. I'll probably get the Tumbleweed badge for that one.
Instead of attempting a batch solution through the command line, can someone suggest a good resource for developing a solution that will open an application and then perform actions through the application's user interface like opening a file, printing it, and closing the file?
I've seen a legacy application do this in the past where it would open Attachmate and perform I/O operations through Attachmate's interface - but I never saw the code.
One constraint is that the process will be initiated from an existing .NET solution (i.e. processing 10,000 files). I am also unable to rely on traditional Office macros like those found in Excel - InfoPath does not appear to support them.
One option for automating a GUI based application is to use AutoIT. It will allow you to script the actions that are necessary for clicking menu interfaces, working with dialogs, etc.
Depending on your needs, you can create an AutoIT script on your dev machine, compile it to a standard EXE, and deploy it with the .NET project's compiled artifacts. To pass data to it, either have your AutoIT script take command-line parameters, or have the .NET solution write all the input file parameters to a file and have the AutoIT script read that file in to process it. Given the number of files you mention in the question (10,000), I'd go with writing to a file.
Since you are already on .NET you might want to give the new UI Automation framework a try. I haven't tried it yet, but it is supposed to work with WPF and native Win32 applications.
MSDN also has some samples: UI Automation Control Pattern Samples
Attachmate has a scripting language, an API and all kinds of other stuff to help with automating it. So this may not have been a typical application.
On the other hand, Attachmate products are (IMO) horrible to the extreme and I will go to great lengths to avoid working with them in the first place.
Let me preface this by saying I don't care what language this solution gets written in as long as it runs on windows.
My problem is this: there is a site that has data which is frequently updated that I would like to get at regular intervals for later reporting. The site requires JavaScript to work properly so just using wget doesn't work. What is a good way to either embed a browser in a program or use a stand-alone browser to routinely scrape the screen for this data?
Ideally, I'd like to grab certain tables on the page but can resort to regular expressions if necessary.
You could probably use web app testing tools like Watir, Watin, or Selenium to automate the browser to get the values from the page. I've done this for scraping data before, and it works quite well.
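As an illustration of the idea (a sketch with the Selenium WebDriver Java bindings; the URL and the XPath are placeholders for your target page and table):

import java.util.List;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;

public class TableScraper {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("http://example.com/data"); // placeholder URL
            // The browser has run the page's JavaScript by this point,
            // so the table rows are present in the DOM
            List<WebElement> rows =
                driver.findElements(By.xpath("//table[@id='data']//tr"));
            for (WebElement row : rows) {
                System.out.println(row.getText());
            }
        } finally {
            driver.quit();
        }
    }
}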
If JavaScript is a must, you can try instantiating Internet Explorer via ActiveX (CreateObject("InternetExplorer.Application")) and using its Navigate2() method to open your web page.
Set ie = CreateObject("InternetExplorer.Application")
ie.Visible = True
ie.Navigate2 "http://stackoverflow.com"
' Wait for the page to finish loading before touching the DOM
Do While ie.Busy Or ie.ReadyState <> 4 : WScript.Sleep 100 : Loop
MsgBox ie.Document.title
After the page has finished loading (check document.ReadyState), you have full access to the DOM and can use whatever methods to extract any content you like.
You can look at Beautiful Soup; being open-source Python, it is easily programmable. Quoting the site:
Beautiful Soup is a Python HTML/XML parser designed for quick turnaround projects like screen-scraping. Three features make it powerful:
Beautiful Soup won't choke if you give it bad markup. It yields a parse tree that makes approximately as much sense as your original document. This is usually good enough to collect the data you need and run away.
Beautiful Soup provides a few simple methods and Pythonic idioms for navigating, searching, and modifying a parse tree: a toolkit for dissecting a document and extracting what you need. You don't have to create a custom parser for each application.
Beautiful Soup automatically converts incoming documents to Unicode and outgoing documents to UTF-8. You don't have to think about encodings, unless the document doesn't specify an encoding and Beautiful Soup can't autodetect one. Then you just have to specify the original encoding.
I would recommend Yahoo Pipes; that's exactly what it was built to do. Then you can get the Yahoo Pipes data as an RSS feed and do as you want with it.
If you are familiar with Java (or perhaps another language that runs on a JVM, such as JRuby, Jython, etc.), you can use HTMLUnit. HTMLUnit simulates a complete browser: HTTP requests, creating a DOM for each page, and running JavaScript (using Mozilla's Rhino).
Additionally, you can run XPath queries on documents loaded in the simulated browser, simulate events, etc.
http://htmlunit.sourceforge.net
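A minimal sketch of that workflow (the URL is a placeholder; assumes the HtmlUnit jar and its dependencies are on the classpath):

import java.util.List;

import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlAnchor;
import com.gargoylesoftware.htmlunit.html.HtmlPage;

public class HtmlUnitScraper {
    public static void main(String[] args) throws Exception {
        WebClient webClient = new WebClient();
        try {
            // getPage fetches the document and runs its JavaScript
            HtmlPage page = webClient.getPage("http://example.com"); // placeholder URL
            System.out.println(page.getTitleText());

            // XPath query against the resulting DOM
            List<?> anchors = page.getByXPath("//a");
            for (Object a : anchors) {
                System.out.println(((HtmlAnchor) a).getHrefAttribute());
            }
        } finally {
            webClient.close();
        }
    }
}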
Give Badboy a try. It's meant to automate the system testing of your websites, but you may find its regular expression rules handy enough to do what you want.
If you have Excel then you should be able to import the data from the webpage into Excel.
From the Data menu select Import External Data and then New Web Query.
Once the data is in Excel then you can either manipulate it within Excel or output it in a format (e.g. CSV) you can use elsewhere.
To complement Whaledawg's suggestion, I was going to suggest using an RSS scraper application (do a Google search); then you can get nice raw XML to consume programmatically instead of a response stream. There may even be a few open-source implementations that would give you more of an idea if you wanted to implement it yourself.
You could use the Perl module LWP together with the JavaScript module. While this may not be the quickest to set up, it should work reliably. I would definitely not have this be your first foray into Perl, though.
I recently did some research on this topic. The best resource I found is this Wikipedia article, which gives links to many screen scraping engines.
I needed something that I could use as a server and run in batch, and from my initial investigation I think Web Harvest is quite good as an open-source solution. I have also been impressed by Screen Scraper, which seems very feature-rich and can be used with different languages.
There is also a newer project called Scrapy; I haven't checked it out yet, but it's a Python framework.