Validations using simple_form: validated object lost after hitting create - ruby-on-rails-3

We have a form created by several controllers' new actions, which we reuse via render :new in the create action to display validation error messages. I believe this is the way to go for simple_form and validations. Correct me if I'm wrong here.
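To make the setup concrete, the controller pattern looks roughly like this (a minimal sketch; OrdersController and Order are made-up names, not our actual models):

class OrdersController < ApplicationController
  def new
    @order = Order.new
  end

  def create
    @order = Order.new(params[:order])
    if @order.save
      redirect_to @order
    else
      # failed validation: re-render the simple_form with the invalid
      # @order so its error messages show up (the URL is now POST /orders)
      render :new
    end
  end
end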
We also have a general language-switching mechanism that redirects to the current URL with a different locale.
The problem:
After a failed validation and the second rendering of the new form, the language selection throws an error (which would be very misleading to post here). The problem is that the create action expects the validated object, which our language selection does not pass to the current URL again.
How would you tackle this problem?
We could try to teach our language switcher about "create" and have it send another POST request with the same params, but this seems awful. There would have to be a lot of logic in our little helper, and where would we store the objects (at least one kind of them is not persisted at all)?
Someone mentioned (ab)using a flash message to recreate the object, but it's a huge form with up to 50 validations, and this gets uglier with size, I guess.
Storing the object in the session in these cases and having the helper post it again if it exists might work. I like this one the most, but it's far from feeling right either.
We could try to have simple_form use the "new" action instead of just rendering "new", but this seems really bad.
We could disable language switching for create actions altogether, with an alert saying this one step has to be finished in the chosen language.
Do you have any opinions, other suggestions? I'd be very grateful.
Thanks,
Andy

So we changed the language helper to send the same POST request again if it is on a page created by a POST. It ended up looking like this; not a lot of code added:
def language_link(language)
  url_options = { locale: language }
  if request.request_method == 'POST'
    # re-submit the same params as a POST so the create action gets its object back
    link_to(language, url_for(params.merge(url_options)), method: :post)
  else
    link_to(language, url_for(url_options))
  end
end
We were careful to make sure we don't end up sending valid data a second time; creating a second payment or a second order would be quite bad here, for example. We need to keep this in mind in the future as well, when we're creating new POST routes accessible on a part of our application where the language is changeable. That's the main problem here.
It does not consider PUT requests now because we don't have any edit/update functionality on the part of the app where language is selectable.
We can live with this version in our code, so I post this as an answer. But I'd still be happy to see a better (less dangerous) version, or any thoughts on this at all.
Cheers,
Andy

Related

Is it good practice to attach an event-related parameter to an object's model as a variable?

This is about an API handling validation while saving an object: the front-end client sends a request to a specific endpoint of the API, and on the back-end the API creates a new object if the right conditions are met.
Right now the regular method we use is that the models have a ruleset for each field, and the validation is invoked when the save function is invoked; technically the validation is done right before the object is saved into the database.
Then during today's code review I came across a solution which I wasn't sure was good practice or not. It requires the front-end to send a specific parameter to the API every time. This is because other APIs use our API as well, and we needed to know whether the request was sent as an API request or a browser request. If this parameter is present, then we want to execute an extra validation function on a specific field.
(1) If I had to implement it, I would check the incoming parameter at the service-handler or controller level, and if I got one, I would invoke the validation right away and throw an error if it fails.
(2) The implementation I saw, however, adds an extra variable to the model, sets that variable when there is an incoming parameter, and then validates only when the save function is invoked on the object (which first validates the ruleset defined on the object's fields, then saves the object into the database).
So my problem with (2) is that the object has now grown by an extra variable that is only related to a specific event, so I would say it's better to implement (1). But (2) also has an advantage: when you create the object on a different endpoint by parsing the parameters, the validation will work there as well, even if the developer forgets to update the code there.
Now this may seem like a silly question - why would I care about just one extra variable? - but this is the bedrock of something good or bad. If I say this is OK, then from now on the models will start growing with extra variables that are only related to specific events, which I think should be handled at the controller/service-handler level. On the other hand, the code would be more reliable if it's not the developer who has to remember all 6712537 functionalities and keep them in mind when changing something somewhere. Let's say all the devs get a heart attack tomorrow from the excitement of an amazing discovery, and a new developer has to work on the project without knowing about these small details; when he has to change something in the code related to this functionality, that new feature should be supported by this old one as well.
So my question is: is there any good practice for this, and what do you think the best approach would be?
So I spent some time thinking about the solution, and I think the best approach is to have an array of acceptable trigger variables in the model class. Then, when the parameters are passed to the model at the controller level, the loader function can be modified so that it takes the trigger variables from the parameters and saves them in the model's associative-array variable that stores the trigger variables.
By default this array is empty, and it doesn't matter how many new variables need to be created; it will only contain the necessary ones when those are used.
Then of course the loader function needs to be modified so it can filter out the non-trigger variables, as is done for the regular fields, and there can even be a validation ruleset on the trigger variables if necessary.
This solves both the problem of overgrowing the object with unnecessary variables and the centralized-validation concern, because now the validation can always be done in the model instead of the controller.
And since the loader function is modified to store the trigger variables in the model's trigger-variables array, the developer never has to remember that this functionality was created. Which is good, because in the future, when he creates a new related function or endpoint that should handle object creation, he will not forget to validate it against the old functionality; the loader function that he modified in the past will handle it for him.
It needs to be noted, though, that since the loader function doesn't differentiate between the parameters, and decides where to load them only by checking the parameter names against the filter functions, these parameter names should be distinct from each other; otherwise buggy functionality can be created accidentally. For example, if you forget that a model attribute with the same name was used, you can accidentally trigger an event that was programmed to fire when the trigger variable with that name is present. This can be solved by prefixing the trigger variables, for example.
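To make it concrete, here is roughly the idea sketched out (the class, field and trigger names are invented for illustration; our actual project is structured differently):

class ApiModel
  # whitelist of trigger variables this model understands (hypothetical name)
  TRIGGER_VARIABLES = %w[from_browser].freeze

  attr_accessor :name      # a regular field, just for the example
  attr_reader :triggers

  def initialize
    @triggers = {}         # empty by default; only filled when a trigger is passed in
  end

  # the "loader": splits regular fields from trigger variables
  def load(params)
    params.each do |key, value|
      key = key.to_s
      if TRIGGER_VARIABLES.include?(key)
        @triggers[key] = value
      elsif respond_to?("#{key}=")
        public_send("#{key}=", value)
      end
    end
    self
  end

  def save
    errors = []
    errors << "name is required" if name.to_s.empty?                                    # regular ruleset
    errors << "extra check failed" if triggers["from_browser"] && name.to_s.length < 3  # trigger-only rule
    raise ArgumentError, errors.join(", ") unless errors.empty?
    true   # persisting is out of scope for this sketch
  end
end

# ApiModel.new.load("name" => "ab").save                         # passes the regular ruleset
# ApiModel.new.load("name" => "ab", "from_browser" => "1").save  # raises: the trigger-only rule kicks in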

TheTVDB API - Starting out

I'm looking for assistance for the bare minimum code to pull some information from the TheTVDB API (v3).
I've never coded anything to do with APIs before.
I tried to shortcut using TVDBSharper, but that uses asynchronous routines, and tasks, etc. which I just can't get my head around at the moment, given the documentation is for C#, and I clearly don't understand how "await" works in VB.
I've tried searching for API examples, but most are about creating an API.
The first thing TheTVDB API documentation says is:
"Users must POST to the /login route with their API key and credentials in the following format in order to obtain a JWT token."
^ I don't know how to POST. Any examples I've seen are very long and confusing, and mostly in C#.
So (and I apologise for this drivel, but I've tried on and off for months now)…
Could someone please show me the minimal amount of VB.NET code to pull the show name from, for example series ID 73739 (Lost). Hopefully from there, I can start to figure some things out.
I have a valid API Key from the TheTVDB.
Mostly you don't need to understand async/await in any great detail but I was once where you are now, and though I don't claim to be an expert, I did manage to get my head around it like this:
You know how, if you had something that threw an exception and you never caught it:
Sub Main(arguments)
    Whatever()
End Sub

Sub Whatever()
    StuffBefore()
    OtherWhateverThrowsException()
    StuffAfter()
End Sub

Sub OtherWhateverThrowsException()
    StuffBefore()
    Throw New Exception("Blah")
End Sub
As soon as you threw that exception, your VB thread would stop what it was doing and wind its way back up through the call stack until it popped out of the main and crashed to the command line - a Matrix-style "return to the source", if you like.
Async/Await is a bit like that. When there's some method that is going to take a long time to do its work (downloading strings from TVDB), we could make our code sit around doing nothing, having a cup of coffee and waiting for TVDB's slow server. This makes things easy to understand, because if we sit and wait, we wait 30 seconds, then we get the response, and we process the response. Obviously we can't process the response before we get it, so we have to sit around and wait for it, and this is always true.
But it'd be better if we could let our thread nip back out the way it came in, "go back to the source", do something else for someone else, and then call it(or another one of its coworkers, we probably don't care) back to carry on working for us when TVDB's server responds. This is what Async Await does for us. Methods that are marked Async are treated differently by the compiler, something like saving your progress on your xbox game. If you reach a point where you want to wait, you can issue the waiting command, the thread that was doing our work performs a savegame, goes off and works for someone else, then when we're ready it comes back, loads the game again and carries on where it left off.
The save game file is manifest as a Task; methods that once upon a time were subs (didn't return anything) should now be Functions that return a Task (a savegame with no associated data). Methods that once upon a time returned something like a string, should now be marked as returning a Task(Of String) - the Task part is to save the state of play (data that VB wants to work with), the string is the data your app wants to work with.
Once you mark something as Async, it needs to contain an Await statement. Await is that SaveYourGameAndGoDoSomethingElseWhileThisFinishes. Typically, while you're awaiting something your program won't have any other stuff it needs the thread to do, so it's not just your Function that calls TVDB's API that needs to Await/be marked Async - every single function in the chain, all the way up and out of your code, needs to be marked as Async, and typically you'll Await at every step of the way back up:
Sub DownloadTVDBButton_Click(arguments)
    DoStuff()
End Sub

Sub DoStuff()
    StuffBefore()
    GetFromTVDB()
    StuffAfter()
End Sub

Sub GetFromTVDB()
    Dim i = 1
    GetDataFromTVDBServer() 'wait 30s for TVDB
    ParseDataFromTVDB()
End Sub

Sub ParseDataFromTVDB()
End Sub
Becomes:
Async Sub DownloadTVDBButton_Click(arguments) 'windows forms event handlers are always subs. Do not use async subs in your own code
    Await DoStuffAsync()
End Sub

Async Function DoStuffAsync() As Task
    StuffBefore()
    Await GetFromTVDBAsync()
    StuffAfter()
End Function

Async Function GetFromTVDBAsync() As Task
    Dim i = 1
    Await GetDataFromTVDBServerAsync() 'go back up, and do something else for 30s
    ParseDataFromTVDB()
End Function

Sub ParseDataFromTVDB() 'downstream; doesn't need to be async/await
End Sub
We switched to using TVDB's async data call, so we await it. When we await, the thread goes back up to the previous function, DoStuffAsync. Because we're awaiting that, the thread goes back up a level again into the button click handler. Because we're awaiting that also, the thread goes back up again and out of your code. It goes back to its regular day job of drawing the UI, making it look like the program is still responding, etc. When the TVDB call completes, the thread comes back to the point just after it (ready to run ParseData), and it has all the data back from TVDB, and the savegame has been reloaded so everything it knew before/the state is as it was (variable i exists and is 1; you could imagine it would otherwise have been lost when the thread went off to do something else).
In essence, async/await has allowed us to work exactly as we would have done without it; it's just that it built a little savegame mechanism so our thread could go off and do something else while TVDB was busy getting our data, rather than having to sit around doing nothing while we waited.
It may also help to think of Await as a device that unpacks a savegame and gets your data out of it. If a GetSomething() sits for 30s then returns a String you want, then GetSomethingAsync() will instantly return a Task that (in 30s, when the work is done) encloses that same String you want, and Await GetSomethingAsync() will wait until the Task is done and then get the string you want out of it.
Methods that are named like "...Async" should be thought of as "behaving in an asynchronous way". They DON'T have to be marked with the Async modifier; Async is only needed if a method uses the Await keyword, but I'm recommending you use Await on everything that returns a Task (i.e. is awaitable) all the way up and down your call tree. When you get more confident you don't always have to Await SomethingAsync, but honestly the overhead of doing so is minimal and the consequences of not doing so are occasionally disastrous. Developers who follow convention name their stuff ...Async if it behaves in an async way; you should adopt this too, and make sure you name all your Async methods with an "Async" at the end of the name.
I don't know how to POST
You don't really need to. The TVDB API has a Swagger endpoint; Swagger is a way of describing a REST service programmatically so that your Visual Studio can build a set of classes to use it and provide you with nicely named things. Whipping out a WebClient and manually creating some JSON is very old-school/low-level.
TVDB's swagger descriptor is at https://api.thetvdb.com/swagger.json
You're supposed to be able to right-click your project and choose Add... REST API Client:
Paste https://api.thetvdb.com/swagger.json in as the url and pick a namespace (an organizational unit) for all the generated classes to go in.
At the moment something in TVDB's API is causing AutoRest (the tool that VS uses to parse the API spec) to choke, but ordinarily it would work out and you'd get a bunch of generated objects to work with that would do all the POSTing etc. for you (AutoRest generates C#; you'd be best off generating the C# into a new project and then adding a reference to that project from your VB).
As noted, my VS can't process the TVDB API at the moment and I don't have enough time today to figure out why, but you could certainly post a question on AutoRest's GitHub (or on SO) asking "why does https://api.thetvdb.com/swagger.json cause an 'Input string not in correct format' error?" and get some more help.
You asked (maybe implicitly) a couple of follow up questions in the comments:
I don't know about REST/swagger (I've heard of it though), and can't see any way to add it to the project as you described, and I'm no closer to getting info from TheTVDB. However, it might have helped me use functions in TVDBSharper. I will just have to try a few things with it. Thanks again
Yes; sorry - I should have been more explicit that "Add REST API client" is only available in a C# project because it relies on a tool that generates C#. This isn't a blocker though - you can just make a C# project and add it to your VB solution alongside your VB project; the two languages are totally interoperable. Your VB can tell your C# what to do.
However, there isn't much point in trying at the moment, because the tool that is supposed to do it can't handle what TVDB is putting out; my VS can successfully ask the TVDB API to describe itself, but it doesn't seem able to understand the response.
In a nutshell; VS has a bug that means it can't use TVDB API directly, you're best off trying via TvDbSharper. The https://github.com/HristoKolev/TvDbSharper readme has some examples in. They're C# but basically "remove the semicolons and they'll pretty much work in VB"
Now, a bit about the headline terms here - background understanding, if you like. API, REST and Swagger are easy enough to explain:
API
An API is effectively a website (in this case run by TVDB), intended for software to consume rather than humans. It takes raw data in and chucks raw data out - unlike a normal website intended for our eyes, nothing about it is presentational in the slightest.
REST
REST as a phrase and a concept is a source of confusion for many; a lot of the time when you try to read about what REST means, the blogs quickly get bogged down in details and make it too complex, with all these funky examples. They kind of forget to explain the REST part because it's come to mean not much at all - it's something so obvious and nondescript that we don't think about it any more.
In essence, something is RESTful if the server doesn't have to remember anything about what you did before in order to service a request you make now - every request stands on its own and can be serviced completely without reference to anything else. This is a different workflow to other forms where you might want to change the name of something by issuing an editname('newname') command. What name actually gets edited depends on whether you first did selectshow() or selectactor(), and also which show or which actor - a workflow like that means the server has to remember whether you selected a show or an actor, and which show/actor was selected, before it can process the editname() command. If you selected show 123, the edit would change the name of show 123. If you selected actor 456, it would change the name of actor 456.
Critically, if you replayed the same editname() at a different time, a different thing would get edited, because the state of your dialogue with the server changes. It's kind of dumb to make the server remember all that, for everyone, when really we could push the job of identifying whether we want to name an actor or a show, and which one, onto the client.
By making it editactorname(123, 'Jon Wayne'), you're transferring all the info the server needs to perform the request: your credentials, the actor id, the new name, and the fact that it's an actor name and not a show name. All of this goes in the one request, and you can replay this request as many times as you like at any time and it always has the same effect; things that happened before don't affect it (well... apart from authentication).
It gets a bit woolly if taken literally - "well, if the server doesn't remember anything, how does it even remember I changed the name of actor 123 to Jon Wayne so it can service my later request of getactorname(123)?" - but that's more about the state of the data in the server, not the state of your interaction with the server. Things that are truly stateless are mostly purely calculatory and not too useful; something somewhere needs to be able to remember something or there is nothing to calculate. Things are rarely completely stateless; even TVDB's API requires you to authenticate first, using a user/password/apikey, and then the server issues a token that becomes your username/password/apikey equivalent for every subsequent request - the server has to start remembering that token, or every time you quote it it will say "can't edit actor name; not authorized". So, yeah... when viewed holistically something usually has to be remembered at some point, otherwise nothing works. REST things are rarely 100% truly stateless, but mostly they are - and it's really about that "when you want to edit the actor name, send a) that you want to edit an actor name, b) which actor, c) what name, d) your credentials to prove you're allowed to" - everything the server needs in the one hit.
Swagger
Now called OpenAPI, swagger is a protocol for describing an API: when an api has some actions that take some data, and return some data, it's helpful to know what the actions are called (setactoryearsactive), what type of data they take (date, date), what sort of things you should put in it (the from date, the to date or null if still active), what they return (boolean) and what the return means (true if success, false if not).
If we have a standardized way of describing these things then we can build standard software that reads the standard description of the API and writes a bunch of standard code that uses the API. This is software that writes a description so other software can read it and write software that uses the first set of software. It's an API API.
There is a lot of software here:
The API is software (TVDB),
The thing that generates the description of the API is software (Swagger),
The thing that consumes the description of the API and creates a client is software (AutoRest),
And the thing that uses the client is software (your app).
You could code your app to hit the API directly - the API's just responding to HTTP requests, which are just text files formatted in a particular way sent to port 80 of the web server that hosts the API. You could write one such request in Notepad and use telnet to send it and get a valid response. You could code your app to do it (you were just about to). You could use someone else's library (TvDbSharper) which does it somehow. You could use some software that generates something like TvDbSharper; it reads the description of the API and generates classes for you to use, and those classes will make the HTTP requests. Everything can be done at any level; you could write all your apps in assembler, the lowest of the low. It takes ages and it is boring - this is why we use ever higher levels of abstraction.
We make something and then make it do a thousand things and then realize that listing the same code over and over and changing one bit each time is boring, and repetitive and something a computer should do, so we devise ways of making it so software can write the boring repetitive code so that we can do the interesting things.
Swagger and AutoRest are those kinds of things; Swagger inspects all the methods, what they take and return, and generates a regular, consistent description. AutoRest reads it and generates a regular, consistent set of client classes. Then the human uses the client classes to do the interesting things. The AutoRest part doesn't work out for us at the moment; it's written by different people than the Swagger team, so some differences arise: Swagger describes something and AutoRest can't understand it. It will one day, I'm sure (in this game of walls and ladders); such is the nature of open source - everyone has a different set of priorities.
Right now we could probably get AutoRest working by finding the one thing it is choking on and removing it. There may be no need, though; if the TvDbSharper guys have written a set of client classes complete enough that you can do all your necessary things with TvDbSharper, then it is effectively already the set of client classes AutoRest would have built, maybe more; use TvDbSharper.
The idea behind Swagger and AutoRest is that a TvDbSharper shouldn't need to exist: it's a very specific application that only works with TVDB and only works in .NET.
If we put effort into making Swagger able to generate a description of any API written in any language, and we put effort into making AutoRest able to consume that description and output any language, then we have something more useful than TvDbSharper and no need for TvDbSharper, because we can generate something that does the same (of course, specific applications can be superior, just like bespoke tailored suits are superior, but that's another philosophy for another time).

Why is Mage_Persistent breaking /api/wsdl?soap

I get the following error within Magento CE 1.6.1.0
Warning: session_start() [<a href='function.session-start'>function.session-start</a>]: Cannot send session cookie - headers already sent by (output started at /home/dev/env/var/www/user/dev/wdcastaging/lib/Zend/Controller/Response/Abstract.php:586) in /home/dev/env/var/www/user/dev/wdcastaging/app/code/core/Mage/Core/Model/Session/Abstract/Varien.php on line 119
when accessing /api/soap/?wsdl
Apparently, a session_start() is being attempted after the entire contents of the WSDL file have already been output, resulting in the error.
Why is magento attempting to start a session after outputting all the datums? I'm glad you asked. So it looks like controller_front_send_response_after is being hooked by Mage_Persistent in order to call synchronizePersistentInfo(), which in turn ends up getting that session_start() to fire.
The interesting thing is that this wasn't always happening; initially the WSDL loaded just fine for me. I racked my brains to try to see what customization may have been made to our install to cause this, but the tracing I've done seems to indicate that this is all happening entirely inside of core.
We have also experienced a tiny bit of (completely unrelated) strangeness with Mage_Persistent which makes me a little more willing to throw my hands up at this point and SO it.
I've done a bit of searching on SO and have found some questions related to the whole "headers already sent" thing in general, but not this specific case.
Any thoughts?
Oh, and the temporary workaround I have in place is simply disabling Mage_Persistent via the persistent/options/enable config data. I also did a little bit of digging as to whether it might be possible to observe an event in order to disable this module only for the WSDL controller (since that seems to be the only one having problems), but it looks like that module relies exclusively on this config flag to determine its enabled status.
UPDATE: Bug has been reported: http://www.magentocommerce.com/bug-tracking/issue?issue=13370
I'd report this as a bug to the Magento team. The Magento API controllers all route through standard Magento action controller objects, and all of these objects inherit from the Mage_Api_Controller_Action class. This class has a preDispatch method
class Mage_Api_Controller_Action extends Mage_Core_Controller_Front_Action
{
    public function preDispatch()
    {
        $this->getLayout()->setArea('adminhtml');
        Mage::app()->setCurrentStore('admin');
        $this->setFlag('', self::FLAG_NO_START_SESSION, 1); // Do not start standart session
        parent::preDispatch();
        return $this;
    }
    //...
}
which includes setting a flag to ensure normal session handling doesn't start for API methods.
$this->setFlag('', self::FLAG_NO_START_SESSION, 1);
So, it sounds like there's code in synchronizePersistentInfo that assumes the existence of a session object, and when it uses it, the session is initialized, resulting in the error you've seen. Normally this isn't a problem, as every other controller has initialized a session at this point, but the API controllers explicitly turn it off.
As far as fixes go, your best bet (and probably the quick answer you'll get from Magento support) will be to disable the persistent cart feature for the default configuration setting, but then enable it for the specific stores that need it. This will let carts persist in the stores that need the feature while keeping it out of the API's way.
Coming up with a fix on your own is going to be uncharted territory, and I can't think of a way to do it that isn't terribly hacky/unstable. The most straightforward way would be a class rewrite of synchronizePersistentInfo that calls its parent method unless you've detected that this is an API request.
This answer is not meant to replace the existing answer. But I wanted to drop some code in here in case someone runs into this issue, and comments don't really allow for code formatting.
I went with a simple local code pool override of Mage_Persistent_Model_Observer_Session to exit out of the function for any URL routes that are within /api/*
Not expecting this fix to need to be very long-lived or upgrade-friendly, because I'm expecting them to fix this in the next release or so.
public function synchronizePersistentInfo(Varien_Event_Observer $observer)
{
    ...
    if ($request->getRouteName() == 'api') {
        return;
    }
    ...
}

Is it REST if I pass the following URI /apps/{id}?control=start

I'm in the process of designing a REST API for our web app.
POST > /apps > Creates an app
PUT > /apps/{id} > Updates the app
I want to start the apps.
Is this REST and if not, how can I make it more RESTful?
POST > /apps/{id}?control=start
Sun Cloud API does this: http://kenai.com/projects/suncloudapis/pages/CloudAPISpecificationResourceModels
Or is it better to:
2. PUT /apps/{id} and include a status parameter in the response Json/XML?
3. POST /apps/{id} and include a status parameter in the response Json/xml?
4. POST /apps/start?app={id}
I think the right question here is more whether the HTTP verbs are being used as intended rather than whether the application is or is not as RESTful as possible. However, these days the two concepts are pretty much the same.
The thing about PUT is that whatever you PUT you should be able to immediately GET. In other words, PUT does a wholesale replacement of the resource. If the resource stored at apps/5 is something that has a "control" attribute as part of its state, then the control=start part should be part of the representation you put. If you want to send just the new piece of the resource, you are doing a PATCH, not a PUT.
PATCH is not widely supported, so IMHO you should use a POST. POST has no requirements of safety or idempotency; generally you can do whatever you want with a POST (more or less), including patching parts of a resource. After all that is what you do when you create a new item in a collection with a POST. Updating a portion of a resource is not really much different.
Generally though you POST new data in the request body, not as query parameters. Query parameters are used mostly for GETs, because you are, well, querying. :)
Does starting an app change its state (to "running", for example)? If it does, what you're actually doing is updating the state of the resource (the application). That seems like a good use for the PUT operation. Although, as Ray said, if control is part of the state of the resource, the body of the PUT request should contain the state you're updating. I believe a partial update would be possible (CouchDB uses this).
On the other hand, if starting an app means creating a new resource (representing the app execution, for example), the POST method would be a great fit. You could have something like this:
POST /app/1/start
Which would result in an HTTP/1.1 201 Created. Then, to access the information on the created execution, you could use a URL like this:
GET /app/1/execution/1
To me, this would seem like a good "Restful" approach. For more information, check out this article.
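In Rails terms (purely as an illustration; the route and model names below are assumptions, not something from your app), that could look like:

# config/routes.rb
resources :apps do
  resources :executions, only: [:index, :show]
  member do
    post :start              # POST /apps/1/start
  end
end

# app/controllers/apps_controller.rb
class AppsController < ApplicationController
  def start
    app = App.find(params[:id])
    execution = app.executions.create!   # the new resource representing this run
    head :created, location: app_execution_url(app, execution)
  end
end

The POST creates a subordinate resource (the execution), returns 201 Created, and points the client at it via the Location header.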
PUT apps/{id}
I would PUT the app to update its status from off to on
I like to do something like,
POST /runningapps?url=/app/1

Need guidance in creating Rails 3 Engine/Plugin/Gem

I need some help figuring out the best way to proceed with creating a Rails 3 engine (or plugin, and/or gem).
Apologies for the length of this question...here's part 1:
My company uses an email service provider to send all of our outbound customer emails. They have created a SOAP web service and I have incorporated it into a sample Rails 3 app. The goal of creating an app first was so that I could then take that code and turn it into a gem.
Here's some of the background: The SOAP service has 23 actions in all and, in creating my sample app, I grouped similar actions together. Some of these actions involve uploading/downloading mailing lists and HTML content via the SOAP WS and, as a result, there is a MySQL database with a few tables to store HTML content and lists as a sort of "staging area".
All in all, I have 5 models to contain the SOAP actions (they do not inherit from ActiveRecord::Base) and 3 models that interact with the MySQL database.
I also have a corresponding controller for each model and a view for each SOAP action that I used to help me test the actions as I implemented them.
So...I'm not sure where to go from here. My code needs a lot of DRY-ing up. For example, the WS requires that the user authentication info be sent in the envelope body of each request. So, that means each method in the model has the same auth info hard coded into it which is extremely repetitive; obviously I'd like for that to be cleaner. I also look back now through the code and see that the requests themselves are repetitive and could probably be consolidated.
All of that I think I can figure out on my own, but here is something that seems obvious yet I can't work out: how can I create methods that can be used in all of my models (thinking specifically of the user-auth part of the equation)?
Here's part 2:
My intention from the beginning has been to extract my code and package it into a gem in case any of my ESP's other clients could use it (plus I'll be using it in several different apps). However, I'd like for it to be very configurable. There should be a default minimal configuration (i.e. just models that wrap the SOAP actions) created just by adding the gem to a Gemfile. However, I'd also like there to be some tools available (like generators or Rake tasks) to get a user started. What I have in mind is options to create migration files, models, controllers, or views (or the whole nine yards if they want).
So, here's where I'm stuck on knowing whether I should pursue the plugin or engine route. I read Jordan West's series on creating an engine and I really like the thought of that, but I'm not sure if that is the right route for me.
So if you've read this far and I haven't confused the hell out of you, I could use some guidance :)
Thanks
Let's answer your question in parts.
Part One
Ruby's flexibility means you can share code across all of your models extremely easily. Are they extending any sort of class? If they are, simply add the methods to the parent object like so:
class SOAPModel
  def request(action, params)
    # Request code goes in here
  end
end
Then it's simply a case of calling request in your respective models. Alternatively, you could access this method statically with SOAPModel.request. It's really up to you. Otherwise, if (for some bizarre reason) you can't touch a parent object, you could define the methods dynamically:
[User, Post, Message, Comment, File].each do |model|
  model.send :define_method, :request, proc { |action, params|
    # Request code goes in here
  }
end
It's Ruby, so there are tons of ways of doing it.
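Either way, a concrete model then only states what it wants and lets the shared request method deal with the envelope and the auth details (the action and parameter names here are invented for illustration):

class MailingList < SOAPModel
  def upload(name, members)
    # the shared request method can merge in the auth info,
    # so it's no longer hard-coded into every model
    request(:upload_mailing_list, name: name, members: members)
  end
end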
Part Two
Gems are more than flexible enough to handle your problem; both Rails and Rake are pretty smart and will look inside your gem (as long as it's in your environment file and Gemfile). Create a generators directory and a /name/name_generator.rb where name is the name of your generator. Then just run rails g name and you're there. Same goes for Rake tasks.
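As a rough sketch (the gem and generator names are placeholders, not a prescription), a generator that drops an initializer into the host app could look like this:

# lib/generators/my_esp/install/install_generator.rb
module MyEsp
  module Generators
    class InstallGenerator < Rails::Generators::Base
      source_root File.expand_path('../templates', __FILE__)

      def copy_initializer
        # copies templates/my_esp.rb into the host app's config/initializers
        copy_file 'my_esp.rb', 'config/initializers/my_esp.rb'
      end
    end
  end
end

Running rails g my_esp:install in the host app then copies the file into place; migration, model, and controller generators follow the same shape.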
I hope that helps!