I know my question may get marked as too broad, but I couldn't think of a better place to ask it. I work in QA, and a month ago I started learning Protractor on my own so I can test our projects with it. There was no one to guide me; I learned everything by myself and with Google. I would like you to review the code I wrote and give me some suggestions. Does it look the way it should? It gets the job done, but I feel it is still at a beginner level, and since there is no one to guide me I don't know what level I am at. Any suggestions are welcome. My config file is pretty basic:
exports.config = {
    framework: 'jasmine2',
    seleniumAddress: 'http://localhost:4444/wd/hub',
    specs: ['page.js'],
    onPrepare: function() {
        // jasmine-reporters is loaded here, but no reporter is registered with jasmine yet
        var jasmineReporters = require('jasmine-reporters');
    }
};
My specification file:
/*
##############################################
FUNCTIONS in helper_functions:
getsite();
logIn();
mainPageItem(); | 0 - Recipes | 1 - Collections | 2 - Profiles |
openRecipe(index);
createRecipe (name,description,step,numStep,ingName,numIngr,addToCollections,share);
deleteRecipe(recipeName); //must use getMainPageAndRefresh()
openRecipe(index);
browseRecipe();
openCollection(index);
createCollection(name,description);
openUser(index);
getNotifications();
goToProfile();
browseProfile();
getMainPageAndRefresh();
goToTimeline();
openLegal(index);
useSearch(textString);
createPost(postName);
logOut();
##############################################
*/
var functions = require('./helper_functions.js');
var uName = 'asd';
var pass = 'asd';
describe('Hooray', function() {
    it('Gets site', function() {
        functions.getSite();
    });
    it('Logs in', function() {
        functions.logIn(uName, pass);
    });
    it('Recipe options', function() {
        browser.sleep(9000);
    });
    it('Logs out', function() {
        functions.getMainPageAndRefresh();
        functions.logOut();
    });
});
I am also using another file where I created helper functions so my tests look more readable; this is where the main code is: http://pastebin.com/KgXCh74m (I uploaded it to Pastebin because it is about 500 lines). My idea was to call a function from page.js whenever it is needed in a test.
Looking at your code, it seems you have put all your elements in a function and are accessing them in your specs. In general, this way of accessing elements leads to flaky tests, and you will not be able to run your test scripts confidently and error-free.
The Protractor community widely encourages the use of Page Objects. Protractor has good documentation, and it specifically includes a style guide to get you started: http://www.protractortest.org/#/style-guide
Some tips from my experience writing Protractor tests:
Avoid browser.sleep for page loads; instead use browser.manage().timeouts().pageLoadTimeout(10000);
Use Expected Conditions (elementToBeClickable, elementToBeSelected, etc.) when interacting with web elements, as in the sketch below.
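A minimal sketch; the locator here is only an illustration, not one of your real elements:
var EC = protractor.ExpectedConditions;
var loginButton = element(by.id('login')); // illustrative locator
browser.wait(EC.elementToBeClickable(loginButton), 5000); // wait up to 5 seconds
loginButton.click();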
Use Page Objects efficiently. Avoid accessing or performing actions on elements inside page objects; that belongs in the specs. Your page.js should look like this:
var somePage = function() {
    // Here we are only defining the elements
    this.username = element(by.id('someId'));
    this.password = element(by.id('someId'));
    this.button = element(by.css('someCss'));

    // If we use functions in the page object, they just work with the elements defined
    // above rather than defining them inside the function (reusability & decoupling).
    this.login = function(uid, pwd) {
        this.username.sendKeys(uid);
        this.password.sendKeys(pwd);
        this.button.click();
    };
};

module.exports = somePage;
In the above page object, if the username or password element changes, you only have to update the element definition; there is no need to touch the login function.
Your spec.js should look like this:
var SomePage = require('path to your page.js');
describe('page', function() {
    var page = new SomePage();
    it('should test page', function() {
        // Here we are performing the actions on the elements.
        page.username.sendKeys('username');
        page.password.sendKeys('password');
        page.button.click();
        // ...or simply reuse the helper defined in the page object:
        // page.login('username', 'password');
    });
});
Maintain test data in a separate JSON file, as in the sketch below.
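Something along these lines; the file name and fields are made up for illustration:
// testdata.json
{ "username": "asd", "password": "asd" }

// in the spec
var testData = require('./testdata.json');
functions.logIn(testData.username, testData.password);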
Set browser.ignoreSynchronization = true when dealing with non-Angular pages.
Use CSS selectors as much as you can to identify elements, e.g. the sketch below.
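Illustrative selectors only:
element(by.css('.recipe-list .recipe-title'));
element(by.css('button[type="submit"]'));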
Since Protractor is community driven, follow Protractor on GitHub, Stack Overflow and Gitter. You will get all the updates and issues from these platforms, so you can solve your own problems or ask for help; these awesome folks will help you!
I have tried every single thing to perform dragAndDrop using webdriverio, but nothing works. I have also posted a question in the webdriverio Gitter but got no response. The code posted below is one of the ways I tried; it's supposed to work, but it just doesn't!
await this.driver.moveToObject(source);
await sleep(2000);
await this.driver.buttonDown(0);
await sleep(2000);
await this.driver.moveToObject(destination);
await sleep(2000);
await this.driver.buttonUp(0);
I'm not sure what properties are on the source and destination objects you are using, but here is an example of how I was able to get it to work using the same commands you are trying. In my example I have a table with columns that can be re-ordered by dragging and dropping them wherever I want them to be. First I get the two column headers I want to switch:
let docIdHeader = browser.element('div[colid="documentid1"]');
let pageCountHeader = browser.element('div[colid="_PAGE_COUNT1"]');
If I log these objects out to the console I can see the properties stored in them.
> docIdHeader
{ sessionId: 'e35ae3e81f1bcf95bbc09f120bfb36ae',
value:
{ ELEMENT: '0.3568346822568915-1',
'element-6066-11e4-a52e-4f735466cecf': '0.3568346822568915-1' },
selector: 'div[colid="documentid1"]',
_status: 0 }
> pageCountHeader
{ sessionId: 'e35ae3e81f1bcf95bbc09f120bfb36ae',
value:
{ ELEMENT: '0.3568346822568915-2',
'element-6066-11e4-a52e-4f735466cecf': '0.3568346822568915-2' },
selector: 'div[colid="_PAGE_COUNT1"]',
_status: 0 }
Now, using the same technique you are using and the selector property from these objects, I can get it to work in two ways.
browser.dragAndDrop(docIdHeader.selector, pageCountHeader.selector);
Or
browser.moveToObject(docIdHeader.selector)
browser.buttonDown(0)
browser.moveToObject(pageCountHeader.selector)
browser.buttonUp(0)
I ran this in the REPL interface so I know it works as I could see each step being executed after I sent the commands. If you are not familiar with how to use the REPL I highly recommend learning. You can play around with commands in the console until you figure something out and then add those commands to your tests.
Also, as I stated in my comments above, dragAndDrop() and moveToObject() are going to be deprecated soon, and you will likely see a lot of warnings when you use them. The correct way to implement a drag-and-drop action going forward is to use browser.actions(). Unfortunately, I don't have an example of how to do it that way, as I haven't played with it yet. If no one provides an example by tonight I will try to put one together for you.
I also faced this issue, where the cursor doesn't move to the destination object after buttonDown; calling moveToObject twice on the destination worked for me:
await this.driver.moveToObject(source);
await this.driver.buttonDown(0);
await this.driver.moveToObject(destination);
await this.driver.moveToObject(destination);
await this.driver.buttonUp(0);
This was originally posted on discuss.emberjs.com. See:
http://discuss.emberjs.com/t/what-is-the-proper-use-of-store-filter-store-find-for-infinite-scrolling/3798/2
but that site seems to be getting worse and worse in terms of content quality these days, so I'm hoping Stack Overflow can rescue me.
Intent: build a page in Ember with ember-data that implements infinite scrolling.
Background knowledge: based on the emberjs.com API docs for ember-data, specifically the store.filter and store.find methods (see http://emberjs.com/api/data/classes/DS.Store.html#method_filter), I should be able to set the model hook of a route to the promise of a store filter operation. That promise should resolve to a filtered record array, an array of items from the store filtered by a filter function, which is supposed to be updated whenever new items are pushed into the store. By combining this with the store.find method, which pushes items into the store, the filtered record array should automatically update with the new items, thus updating the model and resulting in new items showing on the page.
For instance, assume we have a Questions Route, Controller and a model of type Question.
App.QuestionsRoute = Ember.Route.extend({
    model: function (urlParams) {
        return this.get('store').filter('question', function (q) {
            return true;
        });
    }
});
Then we have a controller with some method that will call store.find. This could be triggered by some event or action, whether that is detecting scroll events or the user explicitly clicking to load more; regardless, this method is called to load more questions.
Example:
App.QuestionsController = Ember.ArrayController.extend({
    ...
    loadMore: function (offset) {
        return this.get('store').find('question', { skip: offset });
    }
    ...
});
And the template to render the items:
...
{{#each question in controller}}
{{question.title}}
{{/each}}
...
Notice that with this method we do NOT have to add a callback to the store.find promise that explicitly calls this.get('model').pushObjects(questions). In fact, trying to do that once you have already returned a filtered record array as the model does not work. Either we manage the content of the model manually, or we let ember-data do the work, and I would very much like to let ember-data do the work.
This is a very clean API; however, it does not seem to work the way I've written it. Based on the documentation I cannot see anything wrong.
Using the Ember Inspector tool in Chrome I can see that the new questions from the second find call are loaded into the store under the 'question' type, but the page does not refresh until I change routes and come back. It seems like there is simply a problem with observers, which made me think this would be a bug in ember-data, but I didn't want to jump to conclusions like that until I asked to see if I'm using ember-data as intended.
If someone doesn't know exactly what is wrong but knows how to use store.push/pushMany to recreate this scenario in a JSBin, that would also help. I'm just not familiar with how to use the lower-level methods on the store.
Help is much appreciated.
I just made this pattern work for myself, but in the "traditional" way, i.e. without using store.filter().
I managed the "loadMore" part in the route itself:
actions: {
    loadMore: function () {
        var model = this.controller.get('model'), route = this;
        if (!this.get('loading')) {
            this.set('loading', true);
            this.store.find('question', { offset: model.get('length') }).then(function (records) {
                model.addObjects(records);
                route.set('loading', false);
            });
        }
    }
}
Since you already tried the traditional way (from what I see in your post on discuss), it seems that the key part is to use addObjects() instead of pushObjects() as you did.
For the record, here is the relevant part of my view that triggers the loadMore action:
didInsertElement: function() {
    var controller = this.get('controller');
    $(window).on('scroll', function() {
        if ($(window).scrollTop() > $(document).height() - ($(window).height() * 2)) {
            controller.send('loadMore');
        }
    });
},
willDestroyElement: function() {
    $(window).off('scroll');
}
I am now looking to move the loading property to the controller so that I get a nice loader for the user.
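In case it is useful, a minimal sketch of what I have in mind (assuming the route simply toggles a loading property on the controller and the template binds to it):
App.QuestionsController = Ember.ArrayController.extend({
    loading: false
});

// in the route's loadMore action, toggle the controller property instead of the route's:
// this.controller.set('loading', true);  ...  this.controller.set('loading', false);

// template
{{#if loading}}
    Loading more questions...
{{/if}}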
I'm learning Titanium to make iPhone/Android apps, using the Alloy MVC framework. I never used JavaScript before, apart from simple scripts in HTML to access the DOM or something like that, so I never needed to structure my code before.
Now, with Titanium, I must write a lot of JS code and I was looking for ways to structure it. Basically I found 3 ways to do it: prototype, namespace, and functions inside functions.
A simple example of each:
Prototype:
NavigationController = function() {
    this.windowStack = [];
};

NavigationController.prototype.open = function(windowToOpen) {
    // add the window to the stack of windows managed by the controller
    this.windowStack.push(windowToOpen);
    // grab a copy of the current nav controller for use in the callback
    var that = this;
    windowToOpen.addEventListener('close', function() {
        if (that.windowStack.length > 1) {
            that.windowStack.pop();
        }
    });
    if (Ti.Platform.osname === 'android') {
        windowToOpen.open();
    } else {
        this.navGroup.open(windowToOpen);
    }
};

NavigationController.prototype.back = function(w) {
    // store a copy of all the current windows on the stack
    if (Ti.Platform.osname === 'android') {
        w.close();
    } else {
        this.navGroup.close(w);
    }
};

module.exports = NavigationController;
Using it as:
var NavigationController = require('navigator');
var navController = new NavigationController();
Namespace (or I think it is something like that, because of the use of me = {}):
exports.createNavigatorGroup = function() {
    var me = {};
    if (OS_IOS) {
        var navGroup = Titanium.UI.iPhone.createNavigationGroup();
        var winNav = Titanium.UI.createWindow();
        winNav.add(navGroup);
        me.open = function(win) {
            if (!navGroup.window) {
                // First time call, add the window to the navigator and open the navigator window
                navGroup.window = win;
                winNav.open();
            } else {
                // All other calls, open the window through the navigator
                navGroup.open(win);
            }
        };
        me.setRightButton = function(win, button) {
            win.setRightNavButton(button);
        };
        me.close = function(win) {
            if (navGroup.window) {
                // Close the window on this nav
                navGroup.close(win);
            }
        };
    }
    return me;
};
Using it as:
var ui = require('navigation');
var nav = ui.createNavigatorGroup();
Functions inside functions:
function foobar() {
    this.foo = function() {
        console.log('Hello foo');
    };
    this.bar = function() {
        console.log('Hello bar');
    };
}
// expose foobar to other modules
exports.foobar = foobar;
Using it as:
var foobar = require('foobar').foobar
var test = new foobar();
test.bar(); // 'Hello bar'
And now my question is: which one is better for keeping the code clean and clear? It seems that prototype is clear and easy to read/maintain. Namespace confuses me a bit, but it only needs the initial function to be executed to become "available" (no use of new when declaring it, I suppose because it returns the object/namespace "me"). Finally, functions inside functions is similar to the last one, so I don't know exactly what the difference is, but it is useful to export only the main function and have all the inner functions available for later use.
Maybe the last two possibilities are the same, and I'm mixing up concepts.
Remember that I'm searching for a good way to structure the code and have functions available to other modules and also inside the module itself.
I appreciate any clarification.
Appcelerator appears to follow the non-prototype approach in the examples they have released; you can see it in https://github.com/appcelerator/Field-Service-App.
I've seen a lot of different approaches to structuring applications in Titanium before Alloy. Since Alloy, I've found following the development team's examples helpful to me.
With that being said, it seems to me that all of this is still under interpretation and open to change and community development. Before Alloy there were some great community suggestions on structuring an app and I believe that it is still open with Alloy. Often when I find someone's example code I see something they did with it that appears to organize it a bit better than I thought of. It seems to make it a bit easier.
I think you should structure your application in a way that makes sense to you. You may stumble on to a better and easier way of developing applications with Alloy, because you are looking at it critically.
I haven't found a lot of extensive Alloy examples, but Field-Service-App makes sense to me. They have a nice separation of the elements in the application beyond MVC. Check it out.
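For what it's worth, that non-prototype style is essentially a plain CommonJS module: module-level state plus exported functions. A rough sketch (the file name and function names are only illustrative, not taken from Field-Service-App):
// app/lib/navigation.js
var windowStack = [];

exports.open = function (win) {
    windowStack.push(win);
    win.open();
};

exports.back = function () {
    var win = windowStack.pop();
    if (win) {
        win.close();
    }
};

// used elsewhere as:
// var navigation = require('navigation');
// navigation.open(someWindow);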
I'm trying to test an AngularJS directive that uses a templateUrl. For the life of me I can't get the compiler to actually load the templateUrl, even when it has been put into the templateCache. I realize Karma preprocesses all the template contents and creates modules for each that preload the templateCache, but I expected this to be equivalent.
Here is a fiddle that demonstrates what's going on: http://jsfiddle.net/devshorts/cxN22/2/
angular.module("app", [])
.directive("test", function(){
return {
templateUrl:"some.html",
replace:true
}
});
//--- SPECS -------------------------
describe("template url test", function() {
var element, scope, manualCompiledElement;
beforeEach(module('app'));
beforeEach(inject(function($rootScope, $controller, $compile, $templateCache){
$templateCache.put("some.html", "<div>hello</div>");
scope = $rootScope.$new();
element = $compile(angular.element('<test></test>'))(scope);
manualCompiledElement = $compile(angular.element($templateCache.get('some.html')))(scope);
scope.$digest();
}));
it("has hello", function() {
expect(element.text()).toContain('hello');
});
it("template cache has contents", function(){
expect(manualCompiledElement.text()).toContain('hello');
});
});
What am I missing?
I realise you no longer necessarily need to know, but it looks to me like there are two problems contributing to this.
The first was pointed out by #Words-Like-Jared. You are defining the directive as restricted to attributes (default) but using it as an element. So you need restrict: 'E'.
The second problem is that your template is never actually retrieved, so your compile/link never completes. The request for the contents of the template within the directive is asynchronous, so a digest on the root scope is needed to resolve the promise and return them, similar to this answer for another question.
When you perform your manual compilation, the result is not asynchronous and the template is retrieved immediately. Actually, the compilation in your manual compilation doesn't do a lot, as you are compiling the contents of your template, which doesn't have any directives in it.
Now at the end of your beforeEach where you use
$scope.$digest()
you are digesting on the current scope and its children. When you use $rootScope.$digest() or $scope.$apply() you will perform a digest across all scopes. So changing this to
$rootScope.$digest()
// or
$scope.$root.$digest()
in the line after your compilation means both of your tests will now pass. I have updated the fiddle here.
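Putting the two fixes together, a sketch of what the directive and test from the question would look like (based only on the changes described above):
angular.module("app", [])
    .directive("test", function() {
        return {
            restrict: 'E',            // fix 1: allow the directive to be used as an element
            templateUrl: "some.html",
            replace: true
        };
    });

describe("template url test", function() {
    var element, scope;

    beforeEach(module('app'));

    beforeEach(inject(function($rootScope, $compile, $templateCache) {
        $templateCache.put("some.html", "<div>hello</div>");
        scope = $rootScope.$new();
        element = $compile('<test></test>')(scope);
        $rootScope.$digest();         // fix 2: digest on the root scope so the template promise resolves
    }));

    it("has hello", function() {
        expect(element.text()).toContain('hello');
    });
});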
Here is a variation of the solution in CoffeeScript:
expect = chai.expect

app = angular.module('TM_App')

app.directive "test", ()->
  templateUrl: "some.html"
  replace    : true

describe '| testing | templateUrl', ->

  element = null

  beforeEach ->
    module('TM_App')

  beforeEach ->
    inject ($compile, $rootScope, $templateCache)->
      $templateCache.put "some.html", "<div>hello {{name}}</div>"
      scope   = $rootScope.$new()
      element = $compile('<test/>')(scope)
      scope.name = 'John'
      scope.$digest()

  it "has hello", ()->
    expect(element.text()).to.equal 'hello John'
    expect(element[0].outerHTML).to.equal '<div class="ng-binding">hello John</div>'
I've been trying to send data from my background page to a content script in my Chrome extension, and I can't seem to get it to work. I've read a few posts online, but they're not really clear and seem quite high-level. I've managed to get OAuth working using the OAuth contacts example from the Chrome samples. The authentication works; I can get the data and display it in an HTML page by opening a new tab.
I want to send this data to a content script.
I'm having a lot of trouble with this and would really appreciate it if someone could outline the explicit steps you need to follow to send data from a background page to a content script, or even better some code. Any takers?
The code for my background page is below (I've excluded the OAuth parameters and other details):
function onContacts(text, xhr) {
    contacts = [];
    var data = JSON.parse(text);
    var realdata = data.contacts;
    for (var i = 0, person; person = realdata.person[i]; i++) {
        var contact = {
            'name' : person['name'],
            'emails' : person['email']
        };
        // this "contacts" array is read by the contacts.html page when opened in a new tab
        contacts.push(contact);
    }
    // sending data to the new tab
    chrome.tabs.create({ 'url' : 'contacts.html' });
    // maybe this may work?
    //chrome.tabs.executeScript(null, { file: "contentscript.js" });
};

function getContacts() {
    oauth.authorize(function() {
        console.log("on authorize");
        setIcon();
        var url = "http://mydataurl/";
        oauth.sendSignedRequest(url, onContacts);
    });
};

chrome.browserAction.onClicked.addListener(getContacts);
As I'm not quite sure how to get the data into the content script, I won't bother posting the multiple versions of my failed content scripts. If I could just get a sample of how to request the "contacts" array from my content script, and how to send the data from the background page, that would be great!
You have two options for getting the data into the content script:
Using Tab API:
http://code.google.com/chrome/extensions/tabs.html#method-executeScript
Using Messaging:
http://code.google.com/chrome/extensions/messaging.html
Using Tab API
I usually use this approach when my extension will just be used once in a while, for example setting an image as my desktop wallpaper. People don't set a wallpaper every second or every minute; they usually do it once a week or even once a day. So I just inject a content script into that page. It is pretty easy to do; you can do it either by file or by code, as explained in the documentation:
chrome.tabs.executeScript(tab.id, { file: 'inject_this.js' }, function() {
    console.log('Successfully injected script into the page');
});
Using Messaging
If you constantly need information from your websites, it would be better to use messaging. There are two types of messaging: long-lived and single-request. Your content script (which you define in the manifest) can listen for extension requests:
chrome.extension.onRequest.addListener(function(request, sender, sendResponse) {
    if (request.method == 'ping')
        sendResponse({ data: 'pong' });
    else
        sendResponse({});
});
And your background page could send a message to that content script through messaging. As shown below, it will get the currently selected tab and send a request to that page.
chrome.tabs.getSelected(null, function(tab) {
    chrome.tabs.sendRequest(tab.id, { method: 'ping' }, function(response) {
        console.log(response.data);
    });
});
Which method to use depends on your extension. I have used both. For an extension that will be used constantly, I use messaging (long-lived). For an extension that will not be used all the time, you don't need the content script in every single page; you can just use the Tab API executeScript, because it will inject a content script only when you need to. A long-lived connection would look roughly like the sketch below.
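This is only a sketch using the runtime.connect port API; the port name and message shapes are made up for illustration:
// content script: open a long-lived port to the background page
var port = chrome.runtime.connect({ name: 'contacts-channel' });
port.onMessage.addListener(function(msg) {
    console.log('from background:', msg);
});
port.postMessage({ method: 'getContacts' });

// background page: accept the connection and answer over the same port
chrome.runtime.onConnect.addListener(function(port) {
    port.onMessage.addListener(function(msg) {
        if (msg.method === 'getContacts') {
            port.postMessage({ contacts: contacts }); // "contacts" built in the OAuth callback above
        }
    });
});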
Hope that helps! Do a search on Stackoverflow, there are many answers to content scripts and background pages.
To follow up on Mohamed's point:
If you want to pass data from the background script to the content script at initialisation, you can generate another simple script that contains only JSON and execute it beforehand.
Is that what you are looking for?
Otherwise, you will need to use the message passing interface
In the background page:
// Subscribe to onVisited event, so that injectSite() is called once at every pageload.
chrome.history.onVisited.addListener(injectSite);
function injectSite(data) {
    // get custom configuration for this URL in the background page.
    var site_conf = getSiteConfiguration(data.url);
    if (site_conf) {
        chrome.tabs.executeScript({ code: 'PARAMS = ' + JSON.stringify(site_conf) + ';' });
        chrome.tabs.executeScript({ file: 'site_injection.js' });
    }
}
In the content script page (site_injection.js)
// read config directly from background
console.log(PARAMS.whatever);
I thought I'd update this answer for current and future readers.
According to the Chrome API, chrome.extension.onRequest is "[d]eprecated since Chrome 33. Please use runtime.onMessage."
See this tutorial from the Chrome API for code examples on the messaging API.
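For current readers, the equivalent of the onRequest/sendRequest snippets above using the newer API would look roughly like this (a sketch, not an excerpt from the tutorial):
// content script
chrome.runtime.onMessage.addListener(function(request, sender, sendResponse) {
    if (request.method === 'ping') {
        sendResponse({ data: 'pong' });
    }
});

// background page
chrome.tabs.query({ active: true, currentWindow: true }, function(tabs) {
    chrome.tabs.sendMessage(tabs[0].id, { method: 'ping' }, function(response) {
        console.log(response.data);
    });
});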
Also, there are similar (newer) SO posts, such as this one, which are more relevant for the time being.