Rename session cookies in Rails

Since I'd like the session cookie to reflect the URL rather than the app name, I'd like to rename the cookie.
The current session cookie is named _APPNAME_session. Is there a way to rename it to _somethingelse_session?
I can see the name when I run
curl -i <appurl>
which returns a header like
Set-Cookie: _APPNAME_session=...

Rails >= 6.0.0, in config/application.rb, add the following line:
config.session_store :cookie_store, key: '_somethingelse_session'
Rails >= 5.0.0, in config/initializers/session_store.rb, set/change the following line:
Rails.application.config.session_store :cookie_store, key: '_somethingelse_session'
Rails < 5.0.0, in config/initializers/session_store.rb, set/change the following line:
<APPNAME>::Application.config.session_store :cookie_store, key: '_somethingelse_session'
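For context, a minimal sketch of where that line sits in config/application.rb on Rails 6+ (Appname is a placeholder for your application's module). Note that after the key changes the app stops reading the old _APPNAME_session cookie, so existing sessions are effectively reset:
# config/application.rb (Rails >= 6.0) -- minimal sketch; "Appname" is a placeholder
module Appname
  class Application < Rails::Application
    config.load_defaults 6.0

    # Rename the session cookie; the old _APPNAME_session cookie is no longer
    # read, so users start with a fresh session after this is deployed.
    config.session_store :cookie_store, key: '_somethingelse_session'
  end
end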

Related

Is there a way to add additional configurable settings in OpsCenter 6.0.2 Lifecycle Manager config profiles?

I would really like to add the following settings to our spark-defaults.conf using OpsCenter 6.0.2 in order to avoid configuration drift. Is there a way to add these config items to the config profile template?
spark.cores.max 4
spark.driver.memory 2g
spark.executor.memory 4g
spark.python.worker.memory 2g
NOTE: As Mike Lococo has pointed out in the comments for this answer -- this answer may work to update the config profile values but will not result in those values being written to spark-defaults.conf.
The following is not a solution!
You can; you have to update the config profile via the LCM Config Profile API (https://docs.datastax.com/en/opscenter/6.0/api/docs/lcm_config_profile.html#lcm-config-profile).
First, identify the config profile that needs updating:
$ curl http://localhost:8888/api/v1/lcm/config_profiles
Get the href for the specific config profile that needs updating, request it, and save the response body to a file:
$ curl http://localhost:8888/api/v1/lcm/config_profiles/026fe8e3-0bb8-49c1-9888-8187b1624375 > profile.json
Now, in the profile.json file you just saved, add or edit the key at json > spark-defaults-conf to include the following entries:
"spark-defaults-conf": {
"spark-cores-max": 4,
"spark-python-worker-memory": "2g",
"spark-ssl-enabled": false,
"spark-drivers-memory": "2g",
"spark-executor-memory": "4g"
}
Save the updated profile.json. Finally, execute an HTTP PUT to the same config profile URL, using the edited file as the request data:
$ curl -X PUT http://localhost:8888/api/v1/lcm/config_profiles/026fe8e3-0bb8-49c1-9888-8187b1624375 -d @profile.json
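As an aside, if you'd rather script the round trip than hand-edit JSON, here is a minimal Ruby sketch of the same GET, edit, PUT flow against the endpoint above. The nesting under json > spark-defaults-conf and the field names are taken from the answer; treat them as assumptions to verify against your own profile.json.
require 'net/http'
require 'json'
require 'uri'

# Config profile URL from the steps above
uri = URI('http://localhost:8888/api/v1/lcm/config_profiles/026fe8e3-0bb8-49c1-9888-8187b1624375')

# GET the current profile and merge in the spark-defaults-conf settings
profile = JSON.parse(Net::HTTP.get(uri))
profile['json']['spark-defaults-conf'] ||= {}
profile['json']['spark-defaults-conf'].merge!(
  'spark-cores-max'            => 4,
  'spark-python-worker-memory' => '2g',
  'spark-drivers-memory'       => '2g',
  'spark-executor-memory'      => '4g'
)

# PUT the edited profile back
request = Net::HTTP::Put.new(uri, 'Content-Type' => 'application/json')
request.body = profile.to_json
response = Net::HTTP.start(uri.host, uri.port) { |http| http.request(request) }
puts response.code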

Rails: how to retain session when redirecting to canonical domain (e.g. company.example.com -> example.com)

Rails 3.2.12, ruby 1.9.3
We allow users to specify the company they are with using a subdomain, like mycompany.example.com, but we redirect to the canonical example.com and need to remember that the user is from mycompany.
We have our environment set up so that config.session_store contains :domain => 'example.com' (an alternative that also works is :domain => :all, :tld_length => 2), and this is supposed to allow sharing of session information between subdomains. There are a number of great posts on this, such as this one: Share session (cookies) between subdomains in Rails?
But before the redirect I am sending session.inspect to the log, and it's clearly getting a different session (two separate session ids, etc.). So the most basic issue is that I cannot use the session to remember the mycompany part before I strip it off.
I can work around that, but there are a number of cases where the same user will be from multiple companies (and part of this is our support team who needs to be able to switch companies).
I have tried this on Chrome and Safari on OS X. I am using "pow" so my local development environment has a domain like example.dev which helps rule out several issues (vs. normal localhost:3000 server).
Am I missing something? Is it indeed possible to share a cookie across domains?
UPDATE:
Example code called in a before_filter defined in ApplicationController:
def redirect_to_canonical_if_needed
  logger.debug "Starting before_filter. session contains: #{session}"
  if request.host != 'example.com'
    session[:original_domain] = "Originally came from #{request.host}"
    logger.debug "Redirecting, session contains: #{session}"
    redirect_to 'http://example.com', :status => :moved_permanently
  end
end
Setting added to config/environments/production.rb and removed from config/initializers/session_store.rb
config.session_store = { :key => 'example_session', :secret => "secret", :domain => :all, :tld_length => 2 }
or
config.session_store = { :key => 'example_session', :secret => "secret", :domain => 'example.com' }
And logging result, if I start from a fresh environment where no session exists going to the url a.example.com:
Starting before_filter, session contains: {}
Redirecting, session contains: {"session_id"=>"4de9b56fb540f7295cd3192cef07ba63", "original_domain"=>"a.example.com"}
Filter chain halted as :redirect_to_canonical_if_needed rendered or redirected
Completed 301 Moved Permanently in 2294ms (ActiveRecord: 855.7ms)
Started GET "/" for 123.456.789.123 at 2013-07-12 09:41:12 -0400
Processing by HomeController#index as HTML
Parameters: {}
Starting before_filter, session contains: {}
So the before filter fires on each new request. On the first request there's no session yet, hence the empty {} in the log. The redirect test is true, I put something in the session, and it gets an id plus what I stored. I do the redirect. A new request occurs on the root domain, the before filter fires again, and here's the issue: the session is empty again instead of carrying over what was set before the redirect.
This should work fine between the two. I have set up the following on my dev environment:
The application is at example.dev
I view and set a session variable at a.example.dev, then visit b.example.dev and it is still set, as long as (as you describe) you set :domain to 'example.dev' for the session store.
This code in my root controller/action does exactly what you're describing:
unless request.subdomain.to_s == 'another'
  session[:original_domain] = request.subdomain.to_s
  redirect_to 'http://another.' + request.domain.to_s
end
And after the redirect, original_domain is available in the session.
If you add your example code to the question, I can have a look for any pitfalls.
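For reference, a minimal sketch of the initializer this answer assumes (Rails 3.2 syntax; MyApp and the key are placeholders). Note that config.session_store is a method call taking the store name plus options, not an assignment:
# config/initializers/session_store.rb -- minimal sketch; MyApp is a placeholder
# :domain => :all with :tld_length => 2 scopes the cookie to the last two
# labels of the host (".example.dev"), so a.example.dev, b.example.dev and
# example.dev all read the same session cookie.
MyApp::Application.config.session_store :cookie_store,
  :key        => '_example_session',
  :domain     => :all,
  :tld_length => 2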

Doctrine (with Symfony2) only tries connecting to the DB using root@localhost

The error (occurring in the prod env):
request.CRITICAL: PDOException: SQLSTATE[28000] [1045] Access denied for user 'root'@'localhost' (using password: YES) (uncaught exception) at /srv/inta/current/vendor/doctrine-dbal/lib/Doctrine/DBAL/Driver/PDOConnection.php line 36 [] []
What I've tried so far
The weird thing is that I actually have access using the root user, and the provided password. Logging in as root via the console works great.
I'm using the following parameters.yml file located in app/config/
parameters:
    database_driver: pdo_mysql
    database_host: localhost
    database_port: ~
    database_name: int_apartments
    database_user: root
    database_password: pw goes here
    mailer_transport: smtp
    mailer_host: localhost
    mailer_user: ~
    mailer_password: ~
    locale: en
    secret: ThisTokenIsNotSoSecretChangeIt
As you can see, it is quite standard with only the name of the db, user and password changed.
In my config.yml located in app/config (the relevant portions)
imports:
    - { resource: security.yml }
    - { resource: parameters.yml }
...
doctrine:
    dbal:
        driver: %database_driver%
        host: %database_host%
        port: %database_port%
        dbname: %database_name%
        user: %database_user%
        password: %database_password%
        charset: UTF8
        dbname: int_apartments
    orm:
        auto_generate_proxy_classes: %kernel.debug%
        auto_mapping: true
        mappings:
            StofDoctrineExtensionsBundle: false
Now, I wanted to start at "step 1" and verify that the parameters.yml file is actually being imported, so I changed the host to "localhos" or the user to "tom" or whatever, and the error message in app/logs/prod.log stays exactly the same - the location doesn't change and the user doesn't change.
So I checked my config_prod.yml located in app/config
imports:
    - { resource: config.yml }
#doctrine:
#    metadata_cache_driver: apc
#    result_cache_driver: apc
#    query_cache_driver: apc
monolog:
    handlers:
        main:
            type: fingers_crossed
            action_level: error
            handler: nested
        nested:
            type: stream
            path: %kernel.logs_dir%/%kernel.environment%.log
            level: debug
...and everything seems standard!
Summary of what's going on
So here is the quick version.
Authentication error exists for root@localhost
Verified my authentication credentials by logging in as that user via the console
Want to check if the parameters.yml file is being loaded
Changed some values - none affected the error message
(small) Edit:
What I actually want to do is connect to the DB as a completely different user with a different password. Even when I enter different credentials into my parameters.yml file, Doctrine still spits out the "root@localhost" error.
Ideas?
Silly mistake; it turned out to be a bad user/group/owner configuration on the server.
The app/cache directory is owned by "root", but when I run
app/console cache:clear --env=prod --no-debug
I am running as another user (not root). So there were issues clearing the cache, and Doctrine had been using a very old configuration left in the cache files.
Lessons learned:
Always try running as root (as a last resort)
Use a properly configured web server to avoid ownership issues
I solved my problem by renaming the prod folder I uploaded to prod_old, because the system could not delete the folder for some reason.

Couchdb bad_utf8_character_code

I am using CouchDB through CouchRest in a Ruby on Rails application. When I try to use Futon, it alerts with a message box saying bad_utf8_character_code. If I try to access records from the Rails console using Model.all, it raises either a 500 internal server error,
RestClient::ServerBrokeConnection: Server broke connection: or Errno::ECONNREFUSED: Connection refused - connect(2)
Could anyone help me sort out this issue?
I ran into this issue. I tried various curl calls to delete, modify, and even just view the offending document. None of them worked. Finally, I decided to pull the documents down to my local machine one at a time, skip the "bad" one, and then replicate from my local out to production.
Disable app (so no more records are being written to the db)
Delete and recreate local database (run these commands in a shell):
curl -X DELETE http://127.0.0.1:5984/mydb
curl -X PUT http://127.0.0.1:5984/mydb
Pull down documents from live to local using this Ruby script (a net/http variant of the same loop is sketched after the steps below)
require 'bundler'
require 'json'

# List every document id in the remote (production) database.
all_docs = JSON.parse(`curl http://server.com:5984/mydb/_all_docs`)
docs = all_docs['rows']
ids = docs.map { |doc| doc['id'] }

# Skip the document(s) carrying the bad UTF-8 data.
bad_ids = ['196ee4a2649b966b13c97672e8863c49']
good_ids = ids - bad_ids

good_ids.each do |curr_id|
  # Fetch each document, strip CouchDB metadata, and escape it for the shell.
  curr_doc = JSON.parse(`curl http://server.com:5984/mydb/#{curr_id}`)
  curr_doc.delete('_id')
  curr_doc.delete('_rev')
  data = curr_doc.to_json.gsub("\\'", "\'").gsub('"', '\"')

  # Re-create the document in the local database under the same id.
  cmd = %Q~curl -X PUT http://127.0.0.1:5984/mydb/#{curr_id} -d "#{data}"~
  puts cmd
  `#{cmd}`
end
Destroy (delete) and recreate production database (I did this in futon)
Replicate
curl -X POST http://127.0.0.1:5984/_replicate -d '{"target":"http://server.com:5984/mydb", "source":"http://127.0.0.1:5984/mydb"}' -H "Content-Type: application/json"
Restart app
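As an aside, here is a minimal sketch of the same pull loop using Ruby's stdlib net/http instead of shelling out to curl, which avoids the shell-quoting gymnastics around the document body. The hosts, database name, and bad id mirror the script above and are assumptions to adjust for your setup.
require 'net/http'
require 'json'
require 'uri'

SOURCE  = 'http://server.com:5984/mydb'  # production database with the bad document
TARGET  = 'http://127.0.0.1:5984/mydb'   # freshly created local database
BAD_IDS = ['196ee4a2649b966b13c97672e8863c49']

# List all ids on the source and drop the ones we want to skip.
all_docs = JSON.parse(Net::HTTP.get(URI("#{SOURCE}/_all_docs")))
ids = all_docs['rows'].map { |row| row['id'] } - BAD_IDS

ids.each do |id|
  # Fetch the document and strip CouchDB metadata before re-creating it.
  doc = JSON.parse(Net::HTTP.get(URI("#{SOURCE}/#{id}")))
  doc.delete('_id')
  doc.delete('_rev')

  uri = URI("#{TARGET}/#{id}")
  req = Net::HTTP::Put.new(uri, 'Content-Type' => 'application/json')
  req.body = doc.to_json
  res = Net::HTTP.start(uri.host, uri.port) { |http| http.request(req) }
  puts "#{id}: #{res.code}"
end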

Rails mailer edit_user_url uses http not https

My entire app is https, no http.
If I add a link using edit_user_url to any of the views,
I get an "edit user" link pointing to
https://localhost:3000/user/2/edit
But when I place the same line in a mailer view, the email contains
http://localhost:3000/user/2/edit
Notice the "http" instead of "https"?
Using
rails 3.0.5 and ruby 1.8.7
I believe that you have to put in your config/environments/production.rb:
config.action_mailer.default_url_options = {:protocol => 'https'}
Editing my config/environments/development.rb file with
host = "hostaddress.io"
config.action_mailer.default_url_options = { host: host, protocol: 'https' }
worked for me on Rails 4.2.2.
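For completeness, a minimal sketch combining both answers for a production setup (MyApp and the host are placeholders). Mailer views have no request to infer the scheme from, so the protocol has to be set explicitly here:
# config/environments/production.rb -- minimal sketch; MyApp and the host are placeholders
MyApp::Application.configure do
  # URL helpers in mailers fall back to these defaults because there is no
  # request context; setting :protocol forces https links in emails.
  config.action_mailer.default_url_options = {
    :host     => 'www.example.com',
    :protocol => 'https'
  }
end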