Is there a way to automate Google Web Optimizer approvals?

Google Web Optimizer uses URLs for its A/B testing experiments. In addition to the production URL, we also have several pre-production environments that software releases are pushed through.
We're only running our first experiment now, but we've set up five experiments in GWO -- one for each environment (and thus URL).
It's a bit of a pain to set up all these experiments manually -- especially when verifying the pages.
Is there some kind of API or other automated way to set these up?

No; as of May 2011, the GWO API has been discontinued. A future version of the API will supposedly appear at some point, though, so this may become possible again.

I think this is what you're after: Website Optimizer API

Related

Why is this website so slow on mobile [shopify]

I'm trying to help a client with their slow-ish website https://www.dp-tools.de. If I use Google PageSpeed for mobile I can see that it takes 7 seconds to become interactive, but nothing in the reported problems really tells me why it is actually that slow. I also tried Chrome Lighthouse and couldn't really see all that much.
Is there another way of checking or maybe anyone here sees why it is so slow?
Open up the client's Shopify Admin, and examine the theme. Does it have any strange slow code in it? Examine the Apps installed that directly affect theme. Are any of them crap, old, broken junk?
The best way to debug a slow theme is to just start hacking out any junk the client may have added to their theme. A lot of themes are so bad that, for example, they load jQuery 3 times. Likely you have one bad apple in there: a call that is blocking and takes way too long to time out or respond. Developer tools can point these out in your console. Mobile versions come with a developer console you can inspect too, right?

How to properly benchmark / stress test a single-page web application

I am somewhat familiar with benchmarking/stress testing traditional web applications, and I find it relatively easy to start estimating the maximum load for them. With the tools I am familiar with (Apache ab, Apache JMeter) I can get a rough estimate of the number of requests/second a server with a standard app can handle. I can come up with a user story, create a list of pages I would like to check, and benchmark them separately. A lot of information can be found on the internet about how to go from a novice like me to a master.
But in my opinion a lot of things are different when benchmarking a single-page application. The main entry point is the most expensive request, because the user loads the majority of things needed for a proper app experience (or at least my app works that way). After that, navigating elsewhere is just an AJAX request, waiting for JSON, and templating. So time to window load is not important anymore.
To add to the problem, I was not able to find any resources on how people do this properly.
In my particular case I have a SPA written with Knockout, sitting on an Apache server (most probably this is irrelevant). I would like a rough estimate of how many users my app can handle on a particular server. I am not looking for a tool recommendation (although it would be nice); I am looking for an experienced person to share their insight about the benchmarking process.
I suggest you test this application just like you would test any other web application: as you said, identify the common use cases, prepare scripts for them, run them in the appropriate mix, and analyze the results.
Web applications can break in many different ways for different reasons. You are speculating that the first page load is heavy and the rest is just small AJAX. From experience I can tell you that this is sometimes misleading: for example, you can find that the heavy page is coming from cache and the server is not working hard for it, while a small AJAX response requires a lot of computing power, runs a long database query, or has some locking in the code that causes it to break or be slow under load. That's why we do load testing.
You can do this with any load testing tool, ideally one that can handle those types of scripts with many dynamic values. My personal preference is WebLOAD by RadView.
I am dealing with a similar scenario: a SPA where the first page loads, and thereafter everything is done by requesting other HTML pages and/or making web service calls to get the data.
My goal is to stress test the web server and DB server.
My solution is to just create requests for those HTML pages (a very minor performance concern, IMO, since they are static and can be cached in the browser for a very long time) and for the web service calls. The biggest load will come from the requests for data/processing via the web service calls.
Capture all the HTML and web service requests using a tool like Fiddler, and use any load testing tool (like JMeter) to replay these requests with as many virtual users as you want to test your application with.
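If it helps to see the shape of such a replay, here is a minimal sketch in Python (not a tool recommendation; the base URL, endpoints, and payloads are hypothetical placeholders for whatever you captured in Fiddler) that fires the captured calls with a pool of virtual users and prints rough throughput and latency:

```python
# Minimal load-test sketch: replay captured web service calls with N virtual users.
# The endpoint list and payloads below are hypothetical placeholders -- substitute
# the requests you captured with Fiddler (or your browser's dev tools).
import time
import statistics
import concurrent.futures
import requests

BASE_URL = "https://app.example.com"          # hypothetical SPA backend
CAPTURED_CALLS = [
    ("GET",  "/api/dashboard", None),
    ("POST", "/api/search", {"query": "test"}),
]
VIRTUAL_USERS = 50
REQUESTS_PER_USER = 20

def one_virtual_user(_):
    session = requests.Session()              # keep cookies, like a real browser would
    timings = []
    for _ in range(REQUESTS_PER_USER):
        for method, path, payload in CAPTURED_CALLS:
            start = time.perf_counter()
            resp = session.request(method, BASE_URL + path, json=payload, timeout=30)
            resp.raise_for_status()
            timings.append(time.perf_counter() - start)
    return timings

if __name__ == "__main__":
    started = time.perf_counter()
    with concurrent.futures.ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
        all_timings = [t for user in pool.map(one_virtual_user, range(VIRTUAL_USERS)) for t in user]
    elapsed = time.perf_counter() - started
    print(f"{len(all_timings)} requests in {elapsed:.1f}s "
          f"({len(all_timings) / elapsed:.1f} req/s), "
          f"median {statistics.median(all_timings) * 1000:.0f} ms")
```

A dedicated tool like JMeter adds ramp-up control, assertions, and reporting that a sketch like this lacks; the point is only that it is the SPA's web service calls, not the initial page load, that you replay under load.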

Best way to manage updates on an iOS client/server app

I have a logistical question: I'm trying to figure out the best way to manage APIs getting out of sync with an app. The best way to explain it is with an example:
Let's say MyApp Version 1.0 posts to a "submit_feedback" API that requires first_name, last_name, and email.
I then submit MyApp Version 2.0 to the App Store. That version is designed to post first_name, last_name, gender, and email to the API. All of these are required fields on the API.
The problem I have:
- If I update the API before the new App is live, it will break Version 1.0
- If I wait until Version 2.0 is live and remotely cripple 1.0, I have to time it correctly.
I'm going to guess that the 'right answer' is to maintain two different APIs. But if both APIs post to the same live database, that makes things a bit awkward.
Does anyone have suggestions on how to model this?
This question may share some aspects with iOS consuming API design.
The right answer is definitely to provide two APIs (at least for a short period of time while users adjust). You do not have to maintain two versions at the same time: once a newer version is released you can maintain that one, and simply provide the old one for legacy users. The only real changes you may have to make to the old one are things like security patches or major issues. Major changes (such as you deciding to restructure your entire database) may lead to the old version not working any more; however, updates to newer API versions should be designed to allow previous versions to still function.
The other question I linked you to gives an answer about how you can have different versions of your app access the correct version of the API.
Another note is that it may be easier for you (depending on what framework you're using) to design your APIs as engines or sub-apps, and simply mount them at different end points. I know that this is easily doable in Rails by using Engines, and in Node with Express using app.use() with sub-applications.
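To make the mounting idea concrete, here is a minimal sketch in Python using Flask blueprints (an analogue of Rails Engines or Express sub-apps, not something from the question; the route and field names are borrowed from the example above and the persistence helper is hypothetical):

```python
# Sketch: each API version lives in its own blueprint and is mounted at its own prefix,
# so /v1/submit_feedback and /v2/submit_feedback can coexist against the same database.
from flask import Flask, Blueprint, request, jsonify

v1 = Blueprint("v1", __name__)
v2 = Blueprint("v2", __name__)

@v1.route("/submit_feedback", methods=["POST"])
def submit_feedback_v1():
    data = request.get_json()
    # v1 clients never send gender; store a default so the shared schema stays consistent
    save_feedback(data["first_name"], data["last_name"], data["email"], gender=None)
    return jsonify(status="ok")

@v2.route("/submit_feedback", methods=["POST"])
def submit_feedback_v2():
    data = request.get_json()
    save_feedback(data["first_name"], data["last_name"], data["email"], data["gender"])
    return jsonify(status="ok")

def save_feedback(first_name, last_name, email, gender):
    """Hypothetical shared persistence layer used by both versions."""
    ...

app = Flask(__name__)
app.register_blueprint(v1, url_prefix="/v1")
app.register_blueprint(v2, url_prefix="/v2")
```

Both versions can share the same database code; only the thin request-handling layer is duplicated, which keeps the "two APIs" burden small.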
I would use a web service/HTTP endpoint for the communication with your app. If you prefer to maintain the same URL across all versions of the app, then include a version number in all the requests/posts to the server so it knows how to handle them. This will also make development and testing easier, as new versions can be tested against the new API on the server.
So, on any function you can call in the web service/server, add a single variable with the version number. A byte ought to be enough, as I think you could start over and "kill support for v1.0" once you hit 256 versions of the same function (if ever).
Once the server receives a request/post with data, you can just code a simple switch/case structure in the server API so that both versions are supported.
If they do something similar but, e.g., swap the parameters, you can handle all of this server-side, and the BAL/DAL (n-tier structure) can be maintained on the server part of the solution.
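A minimal sketch of that switch/case dispatch, in Python purely for illustration (the handler and field names are hypothetical; the point is that one endpoint reads the version variable and routes to the matching implementation):

```python
# Sketch: a single endpoint that accepts a version number with every request
# and dispatches to the matching handler, so old and new clients share one URL.
def submit_feedback_v1(payload):
    return save_feedback(payload["first_name"], payload["last_name"],
                         payload["email"], gender=None)

def submit_feedback_v2(payload):
    return save_feedback(payload["first_name"], payload["last_name"],
                         payload["email"], payload["gender"])

HANDLERS = {1: submit_feedback_v1, 2: submit_feedback_v2}

def submit_feedback(request_body):
    version = int(request_body.get("api_version", 1))   # the "single variable" sent by the client
    handler = HANDLERS.get(version)
    if handler is None:
        raise ValueError(f"Unsupported API version: {version}")
    return handler(request_body)

def save_feedback(first_name, last_name, email, gender):
    """Hypothetical shared business/data layer (the BAL/DAL mentioned above)."""
    ...
```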
By the way, my answer is not just for iOS or smart devices, but merely a client/server approach for a "work-in-progress" production setup where everything has to stay online while still being under development and maintenance.
Hope it makes sense, otherwise, comment on it and I shall try to explain it further.
Just FYI, I use CodeIgniter. I'm using the REST Controller provided at https://github.com/philsturgeon/codeigniter-restserver. I ultimately ended up settling on having different end-points for every version. Basically I'd check out a new repository for each release and put it into a unique directory (i.e. http://www.mysite.com/1.0/api/method, http://www.mysite.com/1.1/api/method, etc.). Trying to maintain multiple versions of an API under one code-base just sounded too scary. At least when I released a version, I knew it was locked in stone and I didn't have to worry about breaking it. (Note: I had to use a special .htaccess tweak to get multiple CodeIgniter instances running from the same domain. I can share it if you like.)

Yslow alternatives - Optimisations for small websites

I am developing a small intranet based web application. I have YSlow installed and it suggests I do several things but they don't seem relevant for me.
e.g. I do not need a CDN.
My application is slow so I want to reduce the bandwidth of requests.
What rules of YSlow should I adhere to?
Are there alternative tools for smaller sites?
What is the check list I should apply before rolling out my application?
I am using ASP.net.
Bandwidth on intranet sites shouldn't be an issue at all (unless you have VPN users, that is). If you don't, and it's still crawling, it probably has more to do with the backend than the front-facing structure.
If you are trying to optimise for remote users, some of the same things apply to try and optimise the whole thing:
Don't use 30 stylesheets - cat them into one
Don't use 30 JS files - cat them into one (a build-step sketch follows this list)
Consider compressing both JS and CSS using minifiers or the YUI compressor.
Consider using sprites (images with multiple versions in one file - e.g. button-up and button-down, one above the other)
Obviously, massive images are a no-no
Make sure you send expires headers to make sure stylesheets/js/images/etc are all cached for a sensible amount of time.
Make sure your pages aren't ridiculously large. If you're in a controlled environment and you can guarantee JS availability, you might want to page data with AJAX.
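For the combining advice in the first two points, here is a trivial build-time sketch in Python (the file names are placeholders; you would still run a minifier such as the YUI Compressor over the result):

```python
# Trivial build-step sketch: concatenate many CSS (or JS) files into one bundle
# so the page makes a single request instead of thirty. File names are placeholders.
from pathlib import Path

STYLESHEETS = ["reset.css", "layout.css", "forms.css", "print.css"]  # hypothetical

def bundle(files, output):
    with open(output, "w", encoding="utf-8") as out:
        for name in files:
            out.write(f"/* --- {name} --- */\n")       # keep a marker for debugging
            out.write(Path(name).read_text(encoding="utf-8"))
            out.write("\n")

if __name__ == "__main__":
    bundle(STYLESHEETS, "site.bundle.css")
```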
To begin, limit the number of HTTP requests made for images, scripts and other resources by combining where possible. Consider minifying them too. I would recommend Fiddler for debugging HTTP.
Be mindful of the size of ViewState: set EnableViewState = false where possible. For example, for dropdown list controls that never have their list of items changed, disable ViewState and populate them in Page_Init or override OnLoad. TRULY Understanding ViewState is a must-read article on the subject.
Oli posted an answer while I was writing this, and I have to agree that bandwidth considerations should be secondary or tertiary for an intranet application.
I've discovered Page Speed since asking this question. It's not specifically for smaller sites, but it's another great Firebug plug-in.
Update: As of June 2015, the Page Speed plugins for Firefox and Chrome are no longer maintained or available; instead, Google suggests the web version.
Pingdom tools provides a quick test for any publicly accessible web page.

How would you go about making an application that automatically retrieves your bank account balance twice a day?

I'm building a utility that will hopefully keep my wife in tune with how much money we have available.
I need a simple secure way of logging into my bank account and retrieving the balance.
Something like mechanize is the only method I can think of. I'm not even sure if that would work given the properly authenticated https that banks use.
Any ideas?
Write a Perl script using LWP::UserAgent. It supports HTTPS connections. The only issue might be if the site requires JavaScript.
Web Client Programming with Perl has a few examples to get you started if you're not too familiar with Perl.
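For comparison, the same session-plus-scrape idea sketched in Python with the requests library rather than Perl (the login URL, form field names, and balance markup are entirely hypothetical, and this cannot execute any JavaScript the bank's login may require):

```python
# Illustrative sketch only: log in with a session (so cookies persist) and scrape a balance.
# The URL, form field names, and HTML structure are hypothetical placeholders; a real bank
# will differ and may rely on JavaScript, which this approach cannot execute.
import re
import requests

LOGIN_URL = "https://online.example-bank.com/login"        # hypothetical
BALANCE_URL = "https://online.example-bank.com/accounts"   # hypothetical

def fetch_balance(username, password):
    session = requests.Session()
    resp = session.post(LOGIN_URL, data={"username": username, "password": password}, timeout=30)
    resp.raise_for_status()

    page = session.get(BALANCE_URL, timeout=30)
    page.raise_for_status()
    # Hypothetical markup: <span class="balance">$1,234.56</span>
    match = re.search(r'class="balance">\$([\d,]+\.\d{2})<', page.text)
    if not match:
        raise RuntimeError("Balance not found -- the page layout has probably changed")
    return float(match.group(1).replace(",", ""))

if __name__ == "__main__":
    print(fetch_balance("your_username", "your_password"))
```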
If you really want to go there, get these extensions for Firefox: Live HTTP Headers, Firebug, FireCookie, and HttpFox. Also download cURL and a scripting language that can run cURL command-line tasks (or a scripting language like PHP or Perl that has access to cURL libraries directly).
I've started down this road for some idempotent GET tasks, like getting PDFs of the S&P reports (for the stocks I track) from my online brokerage and downloading the check images for my bank account. Both tasks are repetitive, slow ways of downloading data to my computer, and the financial institutions don't provide any way of making it easier.
Here's why you shouldn't: (as a shortcut I'm going to call the archetypal large bank, brokerage, or other financial institution "BloatBank")
BloatBank is not likely to make public their API for accessing this kind of information. So it can change any time and all your hard work will be for naught. Whenever they change their mechanism, you'll have to adapt.
If BloatBank finds out you've been using automatic scripting to try to access your account information, they may ban you because you've violated their terms of service.
You might screw up, and the interaction between the hodgepodge of scripts on BloatBank's server, and your scripts that access your account, might cause a Bad Thing like closing your account. Testing this kind of script is tremendously difficult because you don't have any documentation about how their online service works, and you don't have a test account you can mess with.
(a variant of the above) You think you're safe because you're issuing GET requests. But BloatBank is just a crazy bank that doesn't know anything about REST, so there are some GET requests that can mess up your account.
If someone else does use your script to maliciously sniff your online password or mess with your account, any liability coverage from BloatBank may disappear because you've opened a security hole.
Why don't you teach your wife how to login to the bank herself? Or use Quicken (or Mint, etc) and teach her how to use the auto-download feature?
Have you checked out Watir? It is fantastic for automating web-browser actions. And since it's written in Ruby, you can take the results and store them in a DB (or email them to yourself) if needed.
If you are open to AIR, I'd say build an AIR app. I have worked with mechanize and I think it's cool. AIR gives you similar features with a richer GUI (see HTMLLoader and DOM manipulation of the webpage).
If I were you, I'd simply pull the page and manipulate the DOM to suit my visual needs.
Please, if you find this easy to do for your bank please post your bank's name. If I have the same one I'll be closing my account.
More to your question: the process of loading a web page inside your code rather than in a browser can be a black art, especially if there is any JavaScript involved. Your best bet would probably be embedding the IE Web Browser control in your app and then simulating keystrokes and mouse clicks to arrive at your balance page. Then scrape the HTML for the balance.
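As a sketch of that browser-driving approach, here is roughly what it looks like in Python with Selenium standing in for an embedded IE control (the URLs, element IDs, and selector are hypothetical placeholders):

```python
# Sketch: drive a real browser so the bank's JavaScript runs, then scrape the balance.
# All URLs, element IDs, and CSS selectors are hypothetical placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

def fetch_balance_with_browser(username, password):
    driver = webdriver.Chrome()                      # or Firefox/Edge, whatever is installed
    try:
        driver.get("https://online.example-bank.com/login")   # hypothetical
        driver.find_element(By.ID, "username").send_keys(username)
        driver.find_element(By.ID, "password").send_keys(password)
        driver.find_element(By.ID, "login-button").click()
        driver.implicitly_wait(10)                   # crude wait for the account page to render
        return driver.find_element(By.CSS_SELECTOR, ".account-balance").text
    finally:
        driver.quit()
```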
I could try paying for Quicken and letting it do the balance downloading. Then I'd just need to find a way to get the number out of the software automatically.
This way I'm not violating any terms of service and I'm also reducing security risk since all "hacking" goes on locally.