I'm near the end of Chapter 5 of Hartl's Rails Tutorial. In the previous section (5.4), I created a user signup page. Now I need to check that I have created the signup page correctly by running rspec:
$ bundle exec rspec spec/
and I get this notice:
Pending:
StaticPagesHelper add some examples to (or delete) /Users/kelvinyu/rails_projects/sample_app/spec/helpers/static_pages_helper_spec.rb
# No reason given
# ./spec/helpers/static_pages_helper_spec.rb:12
static_pages/help.html.erb sample
# No reason given
# ./spec/views/static_pages/help.html.erb_spec.rb:4
static_pages/home.html.erb add some examples to (or delete) /Users/kelvinyu/rails_projects/sample_app/spec/views/static_pages/home.html.erb_spec.rb
# No reason given
# ./spec/views/static_pages/home.html.erb_spec.rb:4
Finished in 0.28953 seconds
16 examples, 0 failures, 3 pending
Randomized with seed 27698
I'm not sure what this "Pending" status really means, or whether I have an error. If so, what is the best way to fix it? Please let me know if more information is needed.
Pending means that the example is not yet implemented or finished.
In your case it means there are 3 more examples to implement.
How can you mark examples as pending? Usually you just omit the block when defining the example (via the it method), or you can use the pending method. You can find more information here.
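For example, here is a minimal sketch of both ways in RSpec 2 syntax (the version the Rails Tutorial uses at this point; the example names are illustrative):

describe "pending examples" do
  # An example defined with no block is automatically marked as pending
  it "has no block yet, so it is pending"

  # Calling `pending` marks the example as pending and skips the rest of it
  it "calls pending explicitly" do
    pending "not implemented yet"
  end
end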
Those are just the tests that are auto-generated along with the spec files when you create a model or controller. They're basically just placeholders until you write your own tests.
You're safe to (and should) delete them.
In recent versions, a newly generated controller spec includes a skip call, which will also leave the test pending.
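For reference, a skip call in RSpec 3 reads like this (the message text is illustrative); everything after the call is not executed:

it "creates a user" do
  skip "add a valid attributes hash first"
  # nothing below here runs
end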
We have a concurrency issue in our system. It occurs mainly during burst load through our API from an external system and is not reproducible manually.
So I would like to create a Gatling test to 1) reproduce it whenever I want and 2) check that we have solved the issue.
1) The first point is done. I have created two requests checking for status 201, and I run them with many users.
2) The issue allows the creation of two resources with the same unique value. The expected behaviour is that one is created and the others fail with status 409. But I have no idea how to check that exactly one of the requests completes with 201 while all the others fail with 409.
Can we do a kind of post-check on all requests with Gatling?
Thanks
Store results already seen in a global ConcurrentHashMap and compute the expected value in the is check with a function, based on presence in the map (201 if the key is missing, 409 if it already exists).
I don't think you can achieve what you're after with a check on the call itself, as Gatling users have no visibility of the results returned to other users, so you have no way of knowing whether the successful (201) request has already been made (short of some very messy hacking using a check transformer).
But you could use simulation-level assertions to do this.
So you have your request, where you assert that you expect a 201 response:
http("my request")
.get("myUrl")
.check(status.is(201))
This should result in all but one of these requests failing in the simulation, which you can then verify using a simulation-level assertion:
setUp(
  myScenario.inject(
    ...
  )
).assertions(
  details("my request").successfulRequests.count.is(1)
)
We are trying to automate E2E test cases for a booking application, and each test case involves around 60+ steps. Whenever there is a failure in the final steps, the traditional retry option is very time consuming, since the test case is executed from step 1 again. The application has some logical steps which can be marked somehow, through which we would like to resume the test case from a logical point before the failed step.

For example, among the 60 steps, say every 10th step is a logical point from which the script can be resumed instead of retrying from step 1. If the failure is at step 43, then with the help of the booking reference number the test can be resumed from step 41, since validation has been completed through step 40 (step 40 is a logical closure point).

You might suggest splitting the test case into smaller modules, but that will not work for me, since it is an E2E test case for the application which we want to keep in a single Geb Spec. The framework is built using Geb & Spock for web application automation. Please share your thoughts on how we can build recovery scenarios for this case. Thanks for your support!
As of now I have not been able to find a solution for this kind of problem.
Below are a few things which can be done to achieve this, but before we talk about solutions, we should also talk about the issues they will create. You are running E2E test cases, and if they fail at step 10 they should be restarted from scratch, not from step 10, because you can miss important integration defects that only occur when you perform the 10th step after running the first 9. For example, if you create an account and then immediately search for a hotel, your application might throw an error because it is a newly created account; but if you retry from the step where you search for hotel rooms, it might work because of the time that passed between the test failure and the restart, and you will never notice the issue.
Now, if you must achieve this:
Write a log entry every time you reach a checkpoint; this can be a simple text file recording the test case name and checkpoint number. Then use a retry analyzer to run the failed tests: inside the test, look for the text file with the test case name, and if it exists, simply skip ahead to the checkpoint mentioned in the file. The same idea can be used in different ways: for example, if your E2E test goes through 3 applications, the file can hold the test case name and the last passed application name; or, if you have used page objects, you can write the last successful page object name to the file and use that to continue the test.
The above solution is just an idea, because I don't think there are any existing solutions for this issue.
Hope this gives you an idea of how to start working on the problem.
A possible solution to your problem is to first define the way in which you want to write your tests.
I would recommend treating one test Spec (class) as one E2E test containing multiple features.
I would also recommend the open-source Spock-Retry project available on GitHub; after applying RetryOnFailure, your final code should look like this:
@RetryOnFailure(times = 2) // times is the number of retry attempts, default = 0
class MyEndtoEndTest1 extends GebReportingSpec {

    @Shared def bookingRefNumber // @Shared so the value survives across feature methods

    def "First feature block, which covers the test up to a logical step"() {
        // your test steps here
        bookingRefNumber = // assign your booking ref here
    }

    def "Second feature, which covers a set of subsequent logical steps"() {
        // use the bookingRefNumber generated in the first feature block
    }

    def "Third set of logical steps"() {
        // your test steps here
    }

    def "End of the E2E test"() {
        // your final test steps here
    }
}
The passing of all the Feature blocks (methods) will signify a successful E2E test execution.
It sounds like your end-to-end test case is too big and too brittle. What's the reasoning behind needing it all in one script?
You've already stated you can use the booking reference to continue at a later step if the test fails; this seems like a logical place to split your tests.
Do the first bit and output the booking reference to a file. Read the booking reference in the second test and complete the journey; if it fails, a retry won't take anywhere near as long.
If you're using your tests to provide quick feedback after a build and they keep failing, I would look to split the journey into smaller smoke tests and, if required, run some overnight end-to-end tests with as many retries as you like.
The fact that it keeps failing suggests your tests, environment, or build are brittle.
I've recently tried to perform a simple task: install a package if it does not exist, by pulling the distribution out of a web location (an in-house repo) and deleting it once it is no longer needed.
Having learned about the :before notification, I came up with the following elegant code (in this example the variable "pkg" holds the name of the distribution image, "pkg_src_location" is the URL of my web repository, and "name_of_package" names the installed package):
local_image = "#{Chef::Config['file_cache_path']}/#{pkg}"

remote_file 'package_image' do
  path local_image
  source "#{pkg_src_location}/#{pkg}"
  action :nothing
end

package name_of_package do
  source local_image
  notifies :create, 'remote_file[package_image]', :before
  notifies :delete, 'remote_file[package_image]', :delayed
end
I was quite surprised that it does not work... Actually, the 'package' resource is converged without 'remote_file' being created, and it fails because the source local_image is not in place...
I did a simple test:
log 'before' do
  action :nothing
end

log 'after' do
  action :nothing
end

log 'at-the-end' do
  action :nothing
end

log 'main' do
  notifies :write, 'log[before]', :before
  notifies :write, 'log[at-the-end]', :delayed
  notifies :write, 'log[after]', :immediately
end
What I learned is that 'main' is actually converged twice! Once when first encountered, and once again after the 'before' resource is converged...
Recipe: notify_test::default
* log[before] action nothing (skipped due to action :nothing)
* log[after] action nothing (skipped due to action :nothing)
* log[at-the-end] action nothing (skipped due to action :nothing)
* log[main] action write
* log[before] action write
* log[main] action write
* log[after] action write
* log[at-the-end] action write
Is this a bug or a feature? If it's a 'feature', it is a really bad one and Chef shouldn't have it at all. It is simply useless the way it works and only wastes people's time...
Can anyone with more in-depth Chef understanding comment on this? Is there any way to make ':before' work? Maybe I'm just doing something wrong here?
To be a bit more specific: the :before timing uses the "why run" system to guess whether the resource needs to be updated. In this case, the package resource is invalid to begin with, so why-run can't tell that an update is needed.
After a little rethink, I get what's going wrong here:
The :before notification is fired only if the actual resource will have to be updated; internally, this means loading the current resource state to compare it to the desired resource state (load_current_resource in providers).
Here you want to install a package, so Chef asks the system about this package and its version, and then compares the result with the source you provided.
And here comes the problem: it can't do that comparison, because the source file is not in place yet.
For your case, the best bet is to leave the file on the system and get rid of the notifications.
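A minimal sketch of that approach, reusing the same pkg, pkg_src_location, and name_of_package variables from your example (the cached image is simply left on disk):

local_image = "#{Chef::Config['file_cache_path']}/#{pkg}"

# Download with the default :create action, so the file exists before the
# package resource is converged; re-runs are idempotent
remote_file local_image do
  source "#{pkg_src_location}/#{pkg}"
end

# Install from the cached file and leave the image in the cache afterwards
package name_of_package do
  source local_image
end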
The :before notification could be of interest if, for example, you want to launch a database backup before upgrading the DB system, or, as mentioned in the RFC for :before, to stop a service before upgrading its package.
But it should not be used to trigger another resource that provides one of the "calling" resource's properties.
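As a sketch of that legitimate use case (the 'myapp' service and package names are hypothetical):

service 'myapp' do
  action :nothing
end

package 'myapp' do
  action :upgrade
  # Stop the service only if the package actually needs upgrading,
  # then start it again once the upgrade has been applied
  notifies :stop, 'service[myapp]', :before
  notifies :start, 'service[myapp]', :delayed
end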
I'm writing a book on Rails 3 at the moment, and past-me wrote in Chapter 3 or so that when a specific feature is run, a routing error is generated. Now, it's unlike me to write things that aren't true, so I'm pretty sure this happened once in the past.
I haven't yet been able to duplicate the scenario myself, but I'm pretty confident it's one of the forgotten settings in the environment file.
To duplicate this issue:
Generate a new rails project
important: Remove the public/index.html file
Add cucumber-rails and capybara to the "test" group in your Gemfile
run bundle install
run rails g cucumber:skeleton
Generate a new feature, call it features/creating_projects.feature
Inside this feature put:
Feature: Creating projects
In order to value
As a role
I want feature
Scenario: title
Given I am on the homepage
When you run this feature using bundle exec cucumber features/creating_projects.feature it should fail with a "No route matches /" error, because you didn't define the root route. However, what I and others are seeing is that it doesn't.
Now, I've set a setting in test.rb that gets this exception page to show, but I would rather Rails did a hard raise of the exception so that it showed up in Cucumber as a failing step, like I'm pretty sure it used to, rather than a passing step.
Does anybody know what could have changed since May-ish of last year for Rails to not do this? I'm pretty confident it's some setting in config/environments/test.rb, but for the life of me I cannot figure it out.
After investigating the Rails source code, it seems that the ActionDispatch::ShowExceptions middleware, which is responsible for raising the ActionController::RoutingError exception, is missing in the test environment. This is confirmed by running rake middleware and rake middleware RAILS_ENV=test.
You can see in https://github.com/josh/rack-mount/blob/master/lib/rack/mount/route_set.rb#L152 that it returns an 'X-Cascade' => 'pass' header, and it is ActionDispatch::ShowExceptions's responsibility to pick it up (in https://github.com/rails/rails/blob/master/actionpack/lib/action_dispatch/middleware/show_exceptions.rb#L52).
So the reason your test case is passing is that rack-mount is returning the "Not Found" text with status 404.
I'll git blame people and get it fixed for you. It's this conditional here: https://github.com/rails/rails/blob/master/railties/lib/rails/application.rb#L159. If the setting is true, the error gets translated correctly, but we get the error page output. If it's false, then this middleware doesn't get loaded at all. Hold on ...
Update: To clarify the previous block, you're hitting a dead end here. If you set action_dispatch.show_exceptions to false, that middleware won't be loaded, and the 404 error from rack-mount gets rendered. Whereas if you set action_dispatch.show_exceptions to true, the middleware will be loaded, but it will rescue the error and render a nice "exception" page for you.
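For reference, the setting under discussion lives in config/environments/test.rb; a sketch of the two dead-end options described above (the application name SampleApp is a placeholder, yours will differ):

SampleApp::Application.configure do
  # true:  ActionDispatch::ShowExceptions is loaded; it rescues the
  #        RoutingError and renders an exception page, so the Cucumber
  #        step still "passes"
  # false: the middleware is not loaded at all, and rack-mount's bare
  #        404 "Not Found" response is rendered instead
  config.action_dispatch.show_exceptions = false
end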
I need some help figuring out the best way to proceed with creating a Rails 3 engine (or plugin, and/or gem).
Apologies for the length of this question...here's part 1:
My company uses an email service provider to send all of our outbound customer emails. They have created a SOAP web service and I have incorporated it into a sample Rails 3 app. The goal of creating an app first was so that I could then take that code and turn it into a gem.
Here's some of the background: The SOAP service has 23 actions in all and, in creating my sample app, I grouped similar actions together. Some of these actions involve uploading/downloading mailing lists and HTML content via the SOAP WS and, as a result, there is a MySQL database with a few tables to store HTML content and lists as a sort of "staging area".
All in all, I have 5 models to contain the SOAP actions (they do not inherit from ActiveRecord::Base) and 3 models that interact with the MySQL database.
I also have a corresponding controller for each model and a view for each SOAP action that I used to help me test the actions as I implemented them.
So...I'm not sure where to go from here. My code needs a lot of DRY-ing up. For example, the WS requires that the user authentication info be sent in the envelope body of each request. That means each method in the model has the same auth info hard-coded into it, which is extremely repetitive; obviously I'd like for that to be cleaner. Looking back through the code, I also see that the requests themselves are repetitive and could probably be consolidated.
All of that I think I can figure out on my own, but here is something that seems obvious yet I can't figure out: how can I create methods that can be used in all of my models (thinking specifically of the user auth part of the equation)?
Here's part 2:
My intention from the beginning has been to extract my code and package it into a gem in case any of my ESP's other clients could use it (plus I'll be using it in several different apps). However, I'd like for it to be very configurable. There should be a default minimal configuration (i.e. just models that wrap the SOAP actions) created just by adding the gem to a Gemfile. However, I'd also like there to be some tools available (like generators or Rake tasks) to get a user started. What I have in mind are options to create migration files, models, controllers, or views (or the whole nine yards if they want).
So, here's where I'm stuck on knowing whether I should pursue the plugin or engine route. I read Jordan West's series on creating an engine and I really like the thought of that, but I'm not sure if that is the right route for me.
So if you've read this far and I haven't confused the hell out of you, I could use some guidance :)
Thanks
Let's answer your question in parts.
Part One
Ruby's flexibility means you can share code across all of your models extremely easily. Are they extending any sort of class? If they are, simply add the methods to the parent object like so:
class SOAPModel
  def request(action, params)
    # request code goes in here
  end
end
Then it's simply a case of calling request in your respective models. Alternatively, you could define it as a class method (def self.request) and call it statically with SOAPModel.request. It's really up to you. Otherwise, if (for some bizarre reason) you can't touch a parent object, you could define the methods dynamically:
[User, Post, Message, Comment, File].each do |model|
  model.send :define_method, :request, proc { |action, params|
    # request code goes in here
  }
end
It's Ruby, so there are tons of ways of doing it.
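As a concrete sketch that also ties back to your auth question from part 1, the shared request method is the natural place to centralize the credentials. Everything below is hypothetical (the constants, the elided SOAP call, and the MailingList model):

class SOAPModel
  USERNAME = 'esp_user'     # hypothetical; load from config in real code
  PASSWORD = 'esp_password'

  def request(action, params)
    # Every subclass goes through here, so the auth info lives in one place
    payload = params.merge(username: USERNAME, password: PASSWORD)
    # ... build the SOAP envelope from `action` and `payload` and post it ...
  end
end

class MailingList < SOAPModel
  def upload(entries)
    request('UploadList', entries: entries)
  end
end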
Part Two
Gems are more than flexible enough to handle your problem; both Rails and Rake are pretty smart and will look inside your gem (as long as it's in your environment file and Gemfile). Create a generators directory (under lib) with name/name_generator.rb inside it, where name is the name of your generator. Then just run rails g name and you're there. The same goes for Rake (tasks).
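As a rough sketch of what such a generator could look like (the gem name my_gem, the generator name setup, and the file it copies are all hypothetical):

# lib/generators/my_gem/setup_generator.rb
require 'rails/generators'

module MyGem
  module Generators
    class SetupGenerator < Rails::Generators::Base
      source_root File.expand_path('../templates', __FILE__)

      desc 'Copies a starter initializer into the host application'

      def copy_initializer
        # copy_file resolves 'initializer.rb' relative to source_root
        copy_file 'initializer.rb', 'config/initializers/my_gem.rb'
      end
    end
  end
end

Running rails g my_gem:setup in the host app would then invoke copy_initializer.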
I hope that helps!