I'm confused about routing in Rails. I have custom actions on a controller called UsersController, such as login.
In my routes.rb, if I do:
resource :users do
  collection do
    get 'login'
    post 'login'
    get 'logout'
  end
end
I can link to the login action of UsersController no problem, but then going to localhost:3000/users gives me the error:
Couldn't find User without an ID
But if I do
resources :users
localhost:3000/users gives me the expected listing.
I tried putting both, but only whichever version appears first in the file works as expected.
How can I add routes to the default ones?
You have a typo in your routes. Try resources, not resource (the singular resource doesn't create the #index route). Take a look at the manual.
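For completeness, a sketch of what the corrected routes might look like; it is the question's code with only the singular/plural fixed, so the seven default routes (including #index) are generated alongside the custom ones:

resources :users do
  collection do
    get 'login'
    post 'login'
    get 'logout'
  end
end

With this in place, GET /users maps to UsersController#index and GET /users/login maps to UsersController#login.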
I am working on a Rails 5 app and have a model called Offer. In the Offers controller I have an action where I want to redirect to a specific offer I got from the database (offer id 14 in the examples below).
Working in the development environment, if I use redirect_to offer_path(@offer.id) in the controller, the browser correctly displays the offer at the https://dev.example.com/offers/14 URL. Notice the dev part in the URL. So far so good.
However, if I use redirect_to @offer in the controller, the browser tries to open the https://example.com/offers/14 URL (that's the production URL) and the page shows an error ("We're sorry, but something went wrong. If you are the application owner check the logs for more information.").
I would like to use redirect_to @offer, but first, I think I need to understand why one redirect method behaves differently from the other. Thanks for any insight.
This question is old, but I am answering anyway as it might help other Rails users in the future.
In the Offers controller, redirect_to @offer and redirect_to offer_path(@offer) resolve to the same path, /offers/:id, be it production or development.
I think the offer with id 14 does not exist in the production database. If the OP is using find without rescuing the exception, the show action errors while trying to fetch the offer with id 14 from the production database; since the ActiveRecord::RecordNotFound exception is not rescued, Rails shows its default error response.
We don't have the controller code posted by the OP, but to me this seems to be the most logical explanation.
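A minimal sketch of what I mean, assuming a conventional show action (the rescue clause and the redirect target are illustrative, not the OP's actual code):

class OffersController < ApplicationController
  def show
    # Offer.find raises ActiveRecord::RecordNotFound when id 14 is absent
    @offer = Offer.find(params[:id])
  rescue ActiveRecord::RecordNotFound
    # without a rescue like this, production shows the generic error page
    redirect_to offers_path, alert: "Offer not found"
  end
end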
I have a stack system that passes page tokens in the URL. My pages are also dynamically created content, so I have one PHP page that serves the content based on parameters.
index.php?grade=7&page=astronomy&pageno=2&token=foo1
I understand the search indexing goal to be: have only one URL per unique set of data on your website.
Bing has a way to specify particular parameters to ignore.
Google, it seems, uses rel="canonical", but is it possible to use this to tell Google to ignore the token parameter? My URLs (without tokens) can be anything like:
index.php?grade=5&page=astronomy&pageno=2
index.php?grade=6&page=math&pageno=1
index.php?grade=7&page=chemistry&page2=combustion&pageno=4
If there is no solution for Google... other possible solutions:
If I provide a sitemap for each base page, I can supply base URLs, but any crawling of that page's links will create tokens on the resulting pages. Plus, I would have to constantly recreate the sitemap to cover new pages (e.g. at 25 posts per page, post 26 is on page 2).
One idea I've had is to identify bots on page load (I do this already) and disable all tokens for bots. Since (I'm presuming) bots don't use session data between pages anyway, the back buttons and editing features are useless. Is it feasible (or is it crazy) to write custom code for bots?
Thanks for your thoughts.
You can use the Google Webmaster Tools to tell Google to ignore certain URL parameters.
This is covered on the Google Webmaster Help page.
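As for the rel="canonical" idea from the question: yes, you can point every tokenized variant at the token-free URL. A sketch using one of the question's example URLs (the domain is a placeholder):

<link rel="canonical" href="http://www.example.com/index.php?grade=7&page=astronomy&pageno=2">

Google then treats index.php?grade=7&page=astronomy&pageno=2&token=foo1 and the token-free URL as the same page.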
The owner of a site that I am working on has asked me to make the About Us page editable (by her, through a web interface). In fact, there are 5 pages in total that she wants to make editable - About Us, Terms of Service, and so on.
In the old implementation, when these pages were static view files, I had all the URLs coded into routes.rb
scope :controller => :home do
  get :about
  get :terms
  # etc ...
end
Now that these different actions are fetching data from the DB (or wherever) it seems like the standard RESTful approach might be to make a Pages resource and consolidate all the views into a show action.
That doesn't feel quite right. Individual resources aren't usually hardwired into the site the way an About Us page is - the contents of the page might change, but the page itself isn't going anywhere, and there are links to it in the footer, in some of our emails, etc.
Specifically, factoring out the individual routes from the PagesController would raise the following problems:
I couldn't use named route helpers like about_path
The routes for permanent pages on the site would be stored in the database, which means that...
maintenance would probably be a headache, since that is not the normal place to keep routes.
So currently I think that the best approach is to leave these URLs coded into routes.rb, and have separate controller actions, each of which would fetch its own page from the DB.
Can anyone share some insight? How do you deal with data that's not totally static but still needs to be hard-wired into the site?
If you are going to allow Markdown, I like the idea of a Pages controller and model. If all five pages should have a similar feel, then I'd go with one template populated with the user-generated content and the appropriate navigation.
My choice would be to set the routes, make the views, and populate them with the user-generated Markdown.
Without knowing more about your site, it's hard to say, but my preference is not to allow users to generate pages that reflect the site identity (About, terms, etc.) unless that's what they are paying for.
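A sketch of that hybrid (static routes, DB-backed content); the Page model and its slug column are assumptions for illustration:

# config/routes.rb
scope :controller => :pages do
  get :about
  get :terms
end

# app/controllers/pages_controller.rb
class PagesController < ApplicationController
  def about
    render_page "about"
  end

  def terms
    render_page "terms"
  end

  private

  # each action renders the same template with its own DB-backed content
  def render_page(slug)
    @page = Page.find_by!(slug: slug)
    render :show
  end
end

You keep about_path and friends, the URLs stay in routes.rb, and the owner only ever edits page content, never routes.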
I want to create a website builder. What I'm thinking is to have one server as the main web server.
My concept is as follows:
1 - The user enters a URL (http://www.userdomain.com).
2 - It masks and redirects to one of my custom domains (http://www.myapp.userdomain.com).
3 - From the custom domain (myapp.userdomain), my application will identify the website.
4 - According to the website, it will render pages.
My concerns are:
1 - Is this the proper way of doing something like this (an online website builder)?
2 - Since I'm masking the URL, I will not be able to do something like http://www.myapp.userdomain.com/products, and if the user refreshes the page it goes to the home page (http://www.myapp.userdomain.com). How do I avoid that?
3 - I'm thinking of using Rails and Liquid for this. Will that be a good option?
Thanks in advance.
Masking domains with redirects is going to get messy, plus all those redirects may not play nicely with SEO. Rails doesn't care if you host everything under a common domain name. It's just as easy to detect the requested domain name as it is the requested subdomain.
I suggest pointing all of your end-user domains directly to the IP of your main server so that redirects are not required. Use the :domain and :subdomain conditions in the Rails router, or parse them in your application controller, to determine which site to actually render based on the hostname the user requested. This gives you added flexibility later, as you could tell Apache or Nginx which domains to listen for and set up different instances of your application to support rolling upgrades and things like that.
Sounds like this was @wukerplank's approach, and I agree. A custom constraint that looks at the domain name of the current request keeps the rest of your application simple.
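A minimal sketch of that approach, assuming a Site model with a host column and an associated pages table (all illustrative names):

# config/routes.rb
root "sites#show"
get "*path", to: "sites#show"

# app/controllers/sites_controller.rb
class SitesController < ApplicationController
  def show
    # request.host is the domain the visitor actually typed
    @site = Site.find_by!(host: request.host)
    @page = @site.pages.find_by(path: params[:path].presence || "home")
    render "sites/show"
  end
end

Because every customer domain points at the same app, no redirect or masking is needed; the lookup by request.host picks the right site.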
You may get some more help by looking at the details of existing online site builders; for example Wix, Weebly, EcoSiteBuilder, WordPress, and many others.
On one of my sites I have a lot of restricted pages which are only available to logged-in users; for everyone else it outputs a default "you have to be logged in ..." view.
The problem is that a lot of these pages are listed on Google with the not-logged-in view, and it looks pretty bad when 80% of the pages in the results have the same title and description/preview.
Would it be a good choice to send a 401 Unauthorized header along with my default not-logged-in view? And would this stop Google (and other engines) from indexing these pages?
Thanks!
(and if you have another (better?) solution I would love to hear about it!)
Use a robots.txt to tell search engines not to index the not logged in pages.
http://www.robotstxt.org/
Ex.
User-agent: *
Disallow: /error/notloggedin.html
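Note that in your case every restricted URL serves the not-logged-in view at its own address, so you would need to disallow the restricted paths themselves rather than a single error page. If they share a common prefix (a hypothetical /members/ here), one rule covers them all:

User-agent: *
Disallow: /members/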
401 Unauthorized is the response code for requests that require user authentication, so this is exactly the response code you want and have to send. See the HTTP Status Code Definitions.
EDIT: Your previous suggestion, response code 403, is for requests where authentication makes no difference, e.g. disabled directory browsing.
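The question doesn't name a framework, but as a Rails-flavored sketch (current_user and the view path are assumptions), sending the 401 along with the default view looks like:

class ApplicationController < ActionController::Base
  before_action :require_login

  private

  def require_login
    return if current_user # assumed authentication helper
    # same "you have to be logged in" view, but with a 401 status code
    render "sessions/not_logged_in", status: :unauthorized
  end
end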
Here are the status codes Googlebot understands, with recommendations:
http://www.google.com/support/webmasters/bin/answer.py?hl=en&answer=40132
In your case, an HTTP 403 would be the right one.