Probably my use case is specific, but I'm sure I'm not the only one.
I have a fairly big Rails application, full of RSpec/Cucumber tests. It usually takes 30-40 minutes to run everything from scratch on an Intel i5. Yes, we are using Guard, so it isn't everything from the very beginning each time. But it's annoying anyway, and I want to distribute the load somehow.
I also have another development workstation with an i7, and my idea is to run the Guard loop on it. So I need something that automates running the RSpec/Cucumber tests via Guard on the remote machine, while the general behaviour stays the same: I change something, and Guard runs the tests for the changed part on the remote workstation without any extra action on my side. I don't want to push to the repo during development; of course we use CI, but a local CI would not be very reasonable. And of course we use parallel_tests, so my question is not about sharing load between CPU cores.
Ideas and suggestions are very welcome.
You could share the files with the faster computer (via SMB, for example), run the tests on the remote computer, and check the results via SSH?
You could mount your project working directory on the remote machine and start Guard there, preferably over SSH so you can see the console output. In addition, you could use the GNTP notifier to send notifications from the remote machine to your development machine:
Ruby
notification :gntp, :host => 'development.local', :password => 'secret'
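To make that concrete, here is a rough sketch of the remote side, assuming the project is mounted at the same path on the i7 machine (e.g. via sshfs or an SMB share) and that guard-rspec is used; the watch patterns below are illustrative placeholders, not your actual setup:
Ruby
# Guardfile on the remote machine: watches the mounted project and sends
# GNTP notifications back to the development machine.
notification :gntp, :host => 'development.local', :password => 'secret'

guard 'rspec', :cli => '--color' do
  watch(%r{^spec/.+_spec\.rb$})
  watch(%r{^app/(.+)\.rb$}) { |m| "spec/#{m[1]}_spec.rb" }
end
One caveat: filesystem events often don't propagate over network mounts, so you may need to start Guard in polling mode (bundle exec guard --force-polling) on the remote machine.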
In production, we have a web infrastructure as follows:
Load Balancer (haproxy)
API Server (PHP + apache)
Frontend Server (Javascript + nginx)
MySql Server
Redis Server
I'd love to start using Vagrant to make development environments exactly the same as production, and to make it easy for a new developer to jump straight into the job.
The big question is: how should I build the box?
Should I put everything in one box or should I build more boxes? And how many?
It depends on the convention you've agreed on with your developers. Ask yourself one question: what type of structure do you want to work in, distributed or centralized?
If the answer is "distributed", you can make one box per project. You won't get confused when you need to bring up a project that was last modified a long time ago. But this method takes a lot of memory and storage space, and it sometimes doesn't make sense if most of your projects are based on the same production environment.
If the answer is "centralized", then one box for all projects built on the same environment is enough for you. It saves plenty of time, but it's also easy to get confused when you're looking for an old project. You can set up a Docker container per project inside your Vagrant box.
Additionally, I'd suggest using Packer for box building. It's a tool well suited to this goal: it can build "ready to work" Vagrant boxes for every virtualization environment and execute shell and configuration-management scripts. Just put everything essential for the production environment into the box; later, developers can add package dependencies through Vagrant provisioning and share them via the Vagrantfile settings.
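To make the distributed option concrete, a minimal multi-machine Vagrantfile sketch mirroring the stack from the question could look like this; the base box, IP addresses, and provisioning script paths are placeholders, not a recommended configuration:
Ruby
# One VM per production role, all defined in a single Vagrantfile.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"  # placeholder base box

  {
    "haproxy"  => "192.168.50.10",
    "api"      => "192.168.50.11",  # PHP + Apache
    "frontend" => "192.168.50.12",  # JavaScript + nginx
    "mysql"    => "192.168.50.13",
    "redis"    => "192.168.50.14"
  }.each do |name, ip|
    config.vm.define name do |node|
      node.vm.hostname = name
      node.vm.network "private_network", ip: ip
      # Per-role provisioning script (placeholder path)
      node.vm.provision "shell", path: "provision/#{name}.sh"
    end
  end
end
A developer can then bring up only the machines they need, e.g. vagrant up api mysql.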
We are new to test automation, and we finally automated our setup (black-box testing). A brief overview of our setup:
Each setup consists of two Linux PCs and one Windows PC; the PCs communicate with one another via an embedded board.
Previously we used to run test cases manually from the Linux machine; this required typing on the Linux machine as well as some operations on the Windows PC. Now we have written a C wrapper from which you can trigger any number of test cases, and thanks to AutoIt the Windows PC operations are automated as well.
Now we have multiple such setups, and I want a central test controller that:
Given a set of test cases (and the corresponding executable to be run on the embedded board), can distribute and trigger them in parallel across setups
Keeps track, during overnight runs, of which test cases have been executed and which are pending
Can quarantine test cases
Supports continuous integration (we use CVS)
etc.
Basically, a powerful test-harness application running on a PC; this PC is connected to all the setups through a router.
Any suggestions for open-source (free) projects for such software? Rather than something that has all the features mentioned above, I want something that does most of it; I can code and add the remaining functionality as needed.
I tried browsing online and did find some projects, but they all seem to be aimed at testing websites, and I'm not sure they would suit my use case. I would really appreciate input in this regard.
Thanks
OK, I am going to ignore most of the stuff you described, which it looks like you have automated already. Correct me if I am wrong, but I think what you are looking for is a controller for all these automated jobs.
I would say that Jenkins CI is the ideal solution for you.
In Jenkins, everything is controlled by a master machine; this machine in turn controls slaves (your other PCs or Linux machines) via Java agents, which gives you an overview of the whole system.
You can then create jobs and restrict where they can run; these jobs can do pretty much anything, including taking parameters that control what to run. You can also create matrix configuration jobs, which allow one setup to be run simultaneously on however many slaves you need.
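For example, a parameterized job's build step could be a single shell line invoking your existing C wrapper; Jenkins exposes job parameters as environment variables (the wrapper name and flags below are made-up placeholders):
./test_wrapper --board "$BOARD_IP" --suite "$TEST_SUITE"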
You can start these jobs on a timer, on a trigger, on an external or internal build event, and so on.
Also, I am pretty sure Jenkins has dedicated plugins for working with CVS, some of which are part of the out-of-the-box setup.
Jenkins is the way to go here.
I've been working on a WebDriver framework for a while now; I guess it is keyword-driven at this point. We would like a central place for users to store tests, preferably on a wiki, but when the tests are run they should open the browser on the user's local machine.
I originally started out using FitNesse, which works great for storing the tests. However, now that we host it on a server, when a user tries to run a test it opens the browser on the server, which the user can't see. Does anyone know a way I could force FitNesse to open the user's local browser, or to display the browser to the user? Or do you know another framework/way to store tests in a central place but run them locally?
I've been looking at passing the local user's IP through a fixture to start up the initial framework; I was hoping that FitNesse would already know the IP.
Thanks,
James
You can either find a framework that does what you want, or, at the bare minimum, create a thin wrapper that copies the test DLLs and executable to a machine and uses PsExec to run the tests on the remote machine. You could probably write the entire thing in maybe 20 lines of code.
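A hedged PowerShell sketch of such a wrapper, where the machine name, share, and paths are placeholders and PsExec is assumed to be on the PATH:
PowerShell
# Copy the test binaries to the remote machine, then run them there via PsExec.
Copy-Item .\bin\* \\TESTBOX\c$\tests\ -Recurse -Force
psexec \\TESTBOX -i cmd /c c:\tests\RunTests.bat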
If you look at the Linux ecosystem (especially the Ubuntu and Alestic EC2 images) there is a common technique where the VMs are pre-configured to look at the EC2 user-data and use it as a boot script. The nice thing about this approach is that you can write a boot script that further provisions your machine, allowing you to avoid making a new image every time your software that runs on the machine changes.
I want to do the same thing for Windows, but given that I'm a Mac and Linux guy, I'm a bit lost on where to start. My requirements are:
This must run on Windows Server 2008
A bootstrap script needs to start when the machine boots up and read the user-data file by pulling down the contents of http://169.254.169.254/1.0/user-data
The bootstrap script then needs to run the contents of that file as if it were a script
The script embedded in the user-data needs to run in such a way that it has access to the desktop environment (i.e., it can launch a browser, etc.)
I'm not quite sure how services work in Windows or if I need to enable auto-login, so any advice here would be appreciated. The ultimate goal is to run a Java program that launches some custom software that in turn launches a web browser (IE, Firefox, etc) and is capable of taking screenshots.
The screenshot part is interesting, because in the past when I've tried this the only way I could get something other than a black screen was to have UltraVNC or RealVNC boot up as a service, though I don't know why that helped.
I'm looking for answers to three specific questions, as well as any general advice:
Should I be focussing on a Windows service or auto-login + bat file in the "Startup" folder?
If I use a Windows service, is there anything special that I need to do to make sure desktop access and/or screenshots are available?
Do you recommend any tools for common Linux commands, like curl or wget? Last time I used Windows I used Cygwin a lot, but is there something more appropriate to use here?
I have not tried auto-login on Windows instances in EC2, but here's the support document on how to enable it.
We bootstrap our Windows instances using a custom AMI with a custom Windows 'install' service already installed. The bootstrap installer reads a URL from user-data at startup. The URL points to a ZIP file stored in S3. The installer then downloads, unzips, and executes the actual application installer -- in our case a simple CMD file.
This setup allows us to have one base AMI and easily overlay 15+ different application configurations (without having to rebuild the AMI). If you only have one application configuration, this may be overkill for your situation.
The only trouble we ran into was our installer service starting too early; changing the service startup mode to "Automatic (Delayed Start)" fixed that issue.
We wrote our bootstrap installer in Java, launched via YAJSW, because we're comfortable with it. If you just want a few simple Unix tools, most are available pre-compiled for Windows -- wget, for example.
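For reference, the core fetch-and-run step itself is small. A hedged PowerShell sketch of the pattern described above, with placeholder paths and no error handling (Expand-Archive needs PowerShell 5+; older hosts can shell out to an unzip tool instead):
PowerShell
# Read the installer URL from EC2 user-data, download the ZIP, unpack, and run it.
New-Item -ItemType Directory -Force "C:\bootstrap" | Out-Null
$wc  = New-Object System.Net.WebClient
$url = $wc.DownloadString("http://169.254.169.254/1.0/user-data").Trim()
$wc.DownloadFile($url, "C:\bootstrap\app.zip")
Expand-Archive "C:\bootstrap\app.zip" -DestinationPath "C:\bootstrap\app" -Force
& "C:\bootstrap\app\install.cmd"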
For something completely different, you could try PsExec to configure the instance after it has booted.
You can try using RightScale's free developer account to create plain PowerShell scripts and associate them with your Windows instances to run at boot time. The RightScale dashboard solves exactly the problems you are trying to solve above.
DISCLAIMER: I work for RightScale.
As for screen capture, CutyCapt is a simple tool you can point at a URL to generate an image.
UnxUtils is a great solution for those looking for Unix tools on Windows. It has the wget.exe that you're looking for; however, using PowerShell to download things is not bad either:
# Download a page using .NET's WebClient; note that a relative path like
# "test.html" is resolved against the process's current directory.
$wc = New-Object System.Net.WebClient
$wc.DownloadFile("http://stackoverflow.com", "test.html")
If you can write a batch file to do your setup, then you can run it at startup of the VM by doing this:
1. Run REGEDT32.EXE.
2. Modify the following value within HKEY_CURRENT_USER:
   Software\Microsoft\Windows NT\CurrentVersion\Winlogon\ParseAutoexec
   1 = autoexec.bat is parsed
   0 = autoexec.bat is not parsed
As an answer to #3, I would say that you can do just about anything you need in a batch file, including downloading from an FTP server (though not from an HTTP server). I am really interested in this stuff, so if you have questions, try asking me.
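For example, a tiny batch sketch using the built-in ftp.exe, where the server, credentials, and file names are placeholders:
rem Write an FTP command script, then run it non-interactively.
echo open ftp.example.com> ftpcmds.txt
echo user myuser mypassword>> ftpcmds.txt
echo binary>> ftpcmds.txt
echo get setup.zip>> ftpcmds.txt
echo bye>> ftpcmds.txt
ftp -n -s:ftpcmds.txt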
If you use Elastic Beanstalk, you can use this:
Customizing the Software on EC2 Instances Running Windows
It uses YAML formatting standards, e.g.
packages:
  msi:
    mysql: http://dev.mysql.com/get/Downloads/Connector-Net/mysql-connector-net-6.6.5.msi/from/http://cdn.mysql.com/
or
sources:
  "c:/myproject/myapp": http://s3.amazonaws.com/mybucket/myobject.zip
I know this is a little late to help with the original post, but for anyone who is still reading, one solution is to use the http://cloudinitnet.codeplex.com/ project. The service is easily installed using a PowerShell script and will create a local administrator account to use while running.
The goal for this project was to replace the Cloud-Init project used in Amazon Linux and Ubuntu.
Typically I develop my websites on trunk, then merge changes to a testing branch where they are put on a 'beta' website, and then finally they are merged onto a live branch and put onto the live website.
With a Facebook application, things are a bit tricky. As you can't view a Facebook application through a normal web browser (it has to go through the Facebook servers), you can't easily give each developer their own version of the website to work with and test.
I have not come across anything about the best way to develop and test a Facebook application while continuing to have a stable live website that users can use. My question is this: what is the best practice for organising the development and testing of a Facebook application?
Try updating your hosts file (for Windows users, at c:\windows\System32\Drivers\etc\hosts) with an entry that will route all requests for your live domain back to your machine.
So 127.0.0.1 mywebappthatusesfacebook.com.
Then make sure that your app is running at the root of your web server (http://localhost/). Then go to mywebappthatusesfacebook.com in your browser, and it should resolve right back to your local machine. Facebook won't know the difference. Hope this helps.
The way my partner and I did it was that we each made our own private Facebook application that pointed to the IP address we were working from. Since we worked in the same place, we each picked a different port and had our router forward that port to our local IP address. It was kind of slow to refresh a page, but it worked very nicely.
You'll have to add both trunk and test versions as different applications and test them using test accounts. You may also use a single application and switch its target URL between cycles.
Testing FB apps is still a rather primitive process.
I generally set up a test application inside the FB developer environment that is a complete copy of the production settings and uses an SSH tunnel to point at my development server. You can set up as many applications as you need inside FB; I generally have a development application, a staging app, and production. Staging and production are both on "live" servers rather than behind an SSH tunnel.
In your application, you then use whatever language/framework/server tools are at your disposal to switch the FB configuration based on the server. In Rails, the Facebooker gem actually has built-in support for different FB configurations.
Once all of that is done, testing is, unfortunately, still a matter of running the app within FB itself. I use Selenium to automate as much of this as possible.
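The tunnel itself can be a single SSH reverse forward. A sketch with a placeholder host and ports (the public host needs GatewayPorts enabled for the bound port to be reachable from outside):
ssh -N -R 8080:localhost:3000 user@public-host.example.com
The development FB application's URL then points at http://public-host.example.com:8080/, and Facebook's requests get forwarded to the local server on port 3000.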
Best way to do this:
Remove 'App Domain' from 'Basic Info'
Set the website's 'Site URL' to "http://localhost/".
It's that simple.
(This only applies if you don't have a live system running in parallel with the test environment. In that case, get yourself another key.)
We have it set up much like Toby describes: a series of config files, one per developer, each holding that developer's Facebook app ID info (a different app for each developer); separate pages where the app is hosted; and Git ignoring the config files. We're LAMP with CodeIgniter, and it's similar to Rails in that we can set the environment in one file, which points to the config with the Facebook constants.
We're branching out into Selenium, and using unit tests for model testing.
For local testing, we simply use a different app than for the server. In our case, the canvas URL is set to localhost.local:8000.
You only have to make sure that when you use Facebook Connect you type localhost.local into the browser's address field, and not just localhost.
For testing a canvas or tab app, it is faster to use Firefox's 'open iframe in new tab' command. This way the session and cookies from Facebook are preserved.
Another solution is ngrok:
https://ngrok.com/
It opens a public tunnel to your local app.
For example, on my Rails application, by simply typing
./ngrok 3000
I get
http://630066fe.ngrok.com -> 127.0.0.1:3000