Elixir's module attributes in Phoenix production

As far as I know, module attributes are evaluated at compile time. I was trying to follow this post about mocking an API in Elixir:
defmodule Example.User do
  @github_api Application.get_env(:example, :github_api)

  def get(username) when is_binary(username) do
    @github_api.make_request(:get, "/users/#{username}")
  end
end
And I'm wondering whether that's going to work in production at all. As far as I understand, when this module is compiled there is no access to the application environment. So my question is: can I use module attributes to store config values that come from Application.get_env?

You absolutely can. As long as the application was compiled with MIX_ENV set to the environment you want the application to run under, and as long as that call evaluates to what you expect for that environment, it will all work fine.
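As a minimal sketch of what that looks like end to end (the config key and the Example.GithubApi module below are hypothetical), the attribute captures whatever the application environment holds at compile time; if you need the lookup to happen at call time instead, read the environment inside the function:

# config/config.exs (hypothetical)
use Mix.Config
config :example, :github_api, Example.GithubApi

defmodule Example.User do
  # Evaluated once, at compile time; the configured module name is baked
  # into the compiled .beam file.
  @github_api Application.get_env(:example, :github_api)

  def get(username) when is_binary(username) do
    @github_api.make_request(:get, "/users/#{username}")
  end

  # Runtime alternative: looks the value up on every call, so it can
  # change without recompiling.
  def get_at_runtime(username) when is_binary(username) do
    api = Application.get_env(:example, :github_api)
    api.make_request(:get, "/users/#{username}")
  end
end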
For a deeper look at how module attributes are affected by compilation, in an almost identical case to the one you've described, take a look at this blog post.

Related

Julia-lang: Extending a method from an installed module, "using" order

I'm trying to extend a method from the JSON.jl module; specifically, I'm extending 'JSON.lower', as documented, to allow serialisation of my own types.
More generally, I'm asking what the current Julia "best practices" are for extending a method from another module, especially when that method is in charge of some policy of that module (as 'JSON.lower' is in JSON.jl).
Is there actually a way to guarantee that the updated method will be used by the imported module, other than patching it?
Let's demonstrate both desired and sinister behaviour:
module TestProgram

using JSON
import JSON.lower # not sure whether this is needed

JSON.print(Set([1, 2, 1]))
## output:
## {"dict":{"2":null,"1":null}}
## not the output I want or expect
## in this case it might be a good idea to patch JSON.jl and pull-request
## and 'JSON.print' is now compiled with this default behaviour

JSON.lower{T}(v::Set{T}) = collect(v) # overloading 'JSON.lower'

println(json(Set([1, 2, 1])))
## output:
## [2,1]

JSON.print(Set([1, 2, 1]))
## output still:
## {"dict":{"2":null,"1":null}}

end # module
If, anywhere earlier during runtime, some piece of code has already run this method, json(v::Set{T}), then the method is already compiled and will not use my extension of 'JSON.lower'.
This actually happened to me and was hard to debug, as the json() call was behaving as expected while JSON.print() was behaving differently (just from a change in some run flow).
Since any module (JSON.jl here) can also be used by another package I'm using, I have no way of knowing that my code actually runs first and extends the method correctly.
So, what are the best practices to ensure that?
[Currently tested this behaviour with Julia 0.4.6 and 0.5.0]
Yes, your analysis is correct on Julia 0.5 and prior. Many developers know this issue number by heart (#265). A large amount of work went into fixing this for the upcoming version 0.6; Julia now tracks the callers of every function and re-compiles them as needed.
In general, though, the best advice here is to patch the library directly and push your change upstream. This is true on both 0.5 and 0.6, even with the run-time correctness fix. It's advised that you shouldn't extend imported functions with types you didn't define yourself, since that changes the behavior for all other packages that depend upon them. This has come to be colloquially referred to as "type piracy", since you're commandeering the method for your own purposes, purposes that may not suit other callers.
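A minimal sketch of the piracy-free alternative (the SetWrapper type name is hypothetical): define a type you own and extend the method for that type only, so no other package's behaviour changes:

using JSON
import JSON.lower

# A type we own, so extending JSON.lower for it is not piracy.
immutable SetWrapper{T}
    s::Set{T}
end

# Affects only SetWrapper, never plain Set, so other callers are untouched.
JSON.lower{T}(w::SetWrapper{T}) = collect(w.s)

println(json(SetWrapper(Set([1, 2, 1]))))
## output: [2,1] (element order may vary)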
As a temporary workaround, you could try adding the definition to your ~/.juliarc.jl file, which gets executed at startup and so has a better chance of being defined before any other methods get compiled. Even this isn't bullet-proof, though, since the package could use precompilation to speed up its loading.
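For example, with the same Set overload as above (still subject to the precompilation caveat):

# ~/.juliarc.jl -- runs at Julia startup, before any project code
using JSON
import JSON.lower
JSON.lower{T}(v::Set{T}) = collect(v)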

Initializer does not execute when models reload on Rails 3.1 development environment

We are currently using Ruby 1.9.3 and Rails 3.1 (I know; we're working hard to upgrade all our applications).
We're using a module (let's call it 'OurModule') to add a method (let's call it 'OurAddOnMethod') to a model defined in a gem (let's call that 'GemModel'). We have that module file living in the 'config/initializers' directory.
That file defines the module, and then calls this to include it in the model:
# Include the extension
GemModel.send(:include, OurModule)
When developing, things mostly work well, but periodically we get an error that basically says "Undefined method 'OurAddOnMethod' in 'GemModel'". Restarting the server resolves the issue (for a while).
I'm assuming this happens because the models are periodically reloaded as changes are made in the development environment, and it appears that the initializers do not also get re-run at that time...? It seems like this may not be the best way to set things up; it is quite frustrating to deal with.
Can anyone enlighten me on a better way to achieve this?
I ended up wrapping the code in the following, and keeping it in initializers:
ActionDispatch::Callbacks.to_prepare do
  # configure stuff or initialize
end
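Put together, the initializer might look like this (a sketch; OurModule and GemModel are the placeholder names from the question, and our_add_on_method stands in for 'OurAddOnMethod'):

# config/initializers/gem_model_extension.rb
module OurModule
  def our_add_on_method
    # ...
  end
end

# to_prepare blocks run before every request in development (after code
# reloading) and once at boot in production, so the freshly reloaded
# GemModel gets the module mixed in again each time.
ActionDispatch::Callbacks.to_prepare do
  GemModel.send(:include, OurModule) unless GemModel.include?(OurModule)
end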
I feel really bad; I completely missed this question, which seems to cover mine completely (linking to the answer that I used):
https://stackoverflow.com/a/8636163/287516

What is the difference between "package" and "module" in Frege?

Hi, I've been playing around a bit with Frege, and I just noticed in some examples that package and module are used interchangeably:
package MyModuleOne where
and sometimes:
module MyModuleTwo where
When importing from one or the other I don't see any difference in the behavior of my program. Is there something I should keep in mind when using the package or module keywords?
Yes. Frege started out with package, but later I realized this was an obstacle when porting Haskell code, which uses module. Hence I added module, and currently module and package are the same keyword, just spelled differently.
But the intention is, of course, to retire package sooner or later. So my advice would be to use module only.
(This reminds me that I probably have to update the lang spec with regard to this. Never mind.)

Rails3 Engine helper over-ride

So I have a Rails 3.0 Engine (gem).
It provides a controller at app/controllers/advanced_controller.rb, and a corresponding helper at app/helpers/advanced_helper.rb. (And some views, of course.)
So far so good: the controller, helper, and views are automatically available in the application using the gem. Great.
But I want to let the local application selectively over-ride helper methods from AdvancedHelper in the engine (and ideally be able to call 'super'). That's a pretty reasonable thing to want to allow, right? A perfectly reasonable (and I'd think common) design?
Problem is, I can't seem to find any way to make it work. If the application defines its own app/helpers/advanced_helper.rb (AdvancedHelper), then the one from the engine never gets loaded at all -- so that would work if you wanted to replace ALL the helper methods in there (without calling super), but not if you just want to over-ride one.
That kind of makes sense, actually, so I pick a different name. Let's call my local one ./app/helpers/local_advanced_helper.rb (LocalAdvancedHelper). This helper DOES get loaded, and if I put a method in there that wasn't in the original engine's AdvancedHelper, it is available to views.
But if I put a method in there with the same name as one in the engine's AdvancedHelper... my local one NEVER gets called. It's as if the AdvancedHelper (from the engine) is earlier in the call chain than the LocalAdvancedHelper (from the app). Indeed, if I turn on the debugger and look at helpers.ancestors, that's exactly what's going on: they're in the reverse of the order I'd want in the ancestor chain. So AdvancedHelper (from the engine) could theoretically call 'super' to call up to LocalAdvancedHelper (from the app) -- but that of course wouldn't make a lot of sense to do; you'd never want to do that.
But what I would want to do... I can't do.
Anyone have any ideas, is there any way to provide this design, which seems perfectly reasonable to me, where an app can selectively over-ride a helper method from an Engine?
Anyone have any explanation of why it's working the way it is? I tried looking at the actual Rails source code, but got lost pretty quickly; the code around this stuff is awfully abstract and split amongst a bunch of places.
This is a pretty esoteric question, and I'm pessimistic that anyone will have any ideas, but I hope you surprise me!
== Update
Okay, in order to understand what part of the Rails code is being called where, I put a "def self.included ; debugger ; end" on each of my helpers; then in the debugger I can raise an exception to see a stack trace.
That still isn't really helping me get to the bottom of it; the Rails code jumps all over the place and is pretty confusing.
But it's clear that a helper with the 'standard' name (i.e. WidgetHelper for WidgetController) is included in the 'master' view helper module for a given controller by different Rails code than other helpers are. I'm wondering whether giving the helper a different name and then manually including it in my controller (with "helper OtherNamedAdvancedHelper") will change the load order.
We can use Module#class_eval to override.
In the main app:
MountedEngineHelper.class_eval do
  def advanced_helper
    # ...
  end
end
This way, the other methods defined in the engine helper are still available.
Thanks for your elaboration. I think this really is a problem. And it is still present in Rails 3.2.3, so I filed an issue.
The least-smelling workaround I came up with is to do a "half alias method chain":
module MountedEngineHelper
  def advanced_helper
    # ...
  end
end

module MyHelper
  def advanced_helper_with_extra_behavior
    advanced_helper
    extra_behavior
  end
end
The obvious drawback is that you have to change your templates so that your new helper is called. At least you make the existence of the extra behavior explicit there.
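For instance, in a (hypothetical) view, the wrapper now has to be called explicitly:

<%# app/views/widgets/show.html.erb -- hypothetical template name %>
<%= advanced_helper_with_extra_behavior %>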
These release notes from Rails 4 seem enticingly related to this problem, and potentially note that it's been fixed:
http://edgeguides.rubyonrails.org/upgrading_ruby_on_rails.html#helpers-loading-order

How to locally test cross-domain builds?

Using the dojo toolkit, what is the proper way of locally testing code that will be executed as cross-domain, without making the actual build?
As it appears, there are three possible options (each with its own drawbacks):
Using a local (non-XD) XMLHttpRequest dojo.require
This option does not really test the XD behavior, since it dojo.require[s] the js synchronously via XHR.
djConfig.debugAtAllCosts = true;
Although this option does load the required code asynchronously (via the 'script' tag), it also pulls the code in via XHR, parses the dojo.require[s] inside it, and pulls those in. This (using the loader_debug), again, is not what the loader_xd does. More info on this topic in a different question.
Creating a cross-domain build
This approach requires a build, which is not possible in the environment in which I'm running the code. (We're using our own on-the-fly build process, which includes only the js that is necessary for a particular page. That process is not suitable for development.)
Thus, my question: is there a way to use the loader_xd, which does not require an xd build (which adds the xd prefix / suffix to every file)?
The second way (using debugAtAllCosts) also makes me question the motivation for pre-parsing the dojo.require[s]. If the loader_xd will not (or rather cannot) pre-parse, why does the method that was created for testing/debugging do so?
peller has described the situation. If you wanted to just generate the .xd.js files for your modules, you could look at util/buildscripts/jslib/buildUtilXd.js and its buildUtilXd.xdgen() function.
It would take a bit of work to make your own script, but you could look at util/buildscripts/build.js for pointers.
I am hoping that in the future (maybe in the Dojo 2.x timeframe) we can switch to a loader that just uses script tags, with a module format that has a function wrapper around the module, something that is coded by the developer. This would allow the same module format to work in both the local and XD cases.
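A sketch of what such a wrapped module could look like (this is essentially the AMD define() pattern that Dojo later adopted in version 1.7; the module name and contents here are illustrative):

// my/module.js -- the whole file is one function wrapper, so a plain
// <script> tag can load it both locally and cross-domain, with no XHR
// and no eval involved.
define(["dojo/_base/declare"], function (declare) {
    return declare(null, {
        greet: function (name) {
            return "Hello, " + name;
        }
    });
});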
I don't think there's any way to do XD loading without building and deploying it. Your analysis of the various options seems about right.
debugAtAllCosts is there specifically to solve a debugging problem: until recently, most browsers could not do anything intelligent with code brought in through eval. Even today, Firefox will report exceptions in the console as appearing at the eval site (bootstrap.js), with a line number offset from the eval rather than from the actual eval buffer, and normally that eval buffer is anonymous. Firebug was the first debugger to jump through some hoops to enhance the debugging experience, and it permitted special metadata that Dojo's loader injects between the XHR and the eval to determine a filepath to the source. WebKit/Safari have recently implemented this as well. I believe debugAtAllCosts pre-dates the XD loader.