How to identify the django-nonrel Django fork?

I'm contributing to a third-party Django package and trying to make it compatible with django-nonrel. The only real issue is that a model contains a ManyToManyField using the "through" attribute, which is not supported by nonrel. So I want to add a condition that only adds the field to the model if the Django framework is NOT nonrel.
How can I identify that the Django framework is nonrel? I don't necessarily want to depend on it being GAE, Mongo, or some other specific usage; just plain nonrel.

This is the solution, courtesy of @aburgel:
from django.db import connection

if connection.features.supports_joins:
    # sql stuff...
    pass
else:
    # NoSQL, i.e., nonrel stuff
    pass
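In the package itself, that check can then guard the field declaration. A minimal sketch, assuming hypothetical Article/Tag/ArticleTag models (the real package's models will differ):
# Only declare the through-model ManyToManyField when the backend supports joins;
# on nonrel backends the field is simply omitted. Model names are illustrative.
from django.db import connection, models

class Article(models.Model):
    title = models.CharField(max_length=200)

    if connection.features.supports_joins:
        # Relational backend: a ManyToManyField with "through" is safe to define.
        tags = models.ManyToManyField('Tag', through='ArticleTag')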

Related

Forward compatibility in GraphQL

GraphQL is well known for making it easy to maintain backward compatibility in the APIs defined with it. You're supposed to just add new fields/types whenever you need them and never remove old ones, only mark them with the @deprecated directive. Thus, your server can evolve independently of its clients' versions.
However, I have quite the opposite problem: we have many independent servers, some of which may never be updated, and a client could potentially connect to any one of them. So, when a client adopts new fields in API types that were introduced in a newer server and then connects to an older one, it gets an error, because it tries to query fields that do not exist on that server.
So the question is: is there a known approach for handling this kind of situation in GraphQL?
The only thing I came up with is a top-level query field that returns the list of supported types as strings. Then, whenever you want to add a new field to the existing type foo, you instead add a new type foo2 and add it to the list of supported types. The client can then decide which types it can use and, accordingly, which features to show. However, this looks quite scary due to, well, the graph nature of GraphQL: it is very hard to guarantee that a client's query won't reach some unsupported type via some quirky path.
The other solution is, of course, to just version the whole API and treat any schema change as an incompatible API version. But this seems either too rigid or too laborious to maintain.
P.S. I suppose GraphQL may just not be a good fit for this kind of situation, but, as usually happens, we committed to GraphQL long before we could foresee these use cases.
... usually there is no such thing as servers that "could never be updated".
How would this not affect REST servers? They would just respond with 404 for /api/Vxxx, which clearly signals that the new version is not supported. Is that better DX than with GraphQL? I don't think so.
Possible solutions:
provide an API that reports its version (plus a loadable schema), and ask client developers to query it when the app starts (e.g. alongside the login query), as sketched after this list;
note "field introduced in API version xxx" in the docs;
keep a list of servers with supported API versions;
provide a service/server that can be queried for the [nearest] server with at least API version xxx.
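A rough client-side sketch of the first option, assuming a hypothetical apiVersion field that newer servers expose (any plain GraphQL-over-HTTP client would do the same):
# Probe the server's API version once at startup; decide afterwards which
# newer fields are safe to query. The apiVersion field name is made up here.
import requests

def fetch_api_version(endpoint):
    response = requests.post(endpoint, json={"query": "{ apiVersion }"})
    response.raise_for_status()
    payload = response.json()
    if "errors" in payload:
        return None  # an old server that predates the apiVersion field
    return payload["data"]["apiVersion"]

version = fetch_api_version("https://example.com/graphql")
use_new_fields = version is not None and version >= 2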

API Model Definitions

Typically when I am making API calls I am using JavaScript (Ajax). JSON doesn't carry the value types of properties, so everything is effectively passed as a string.
Since I manage my own API, I create requestable models that tell you the definition of a model.
For example, Id has value type int and StartDate has value type date.
I use the property types to automate form creation.
Is there a standard as to how to do this? My way works, but I'd prefer to be doing this by the book if it already exists.
OpenAPI is a standard you could follow. If you also make use of Swagger, it will allow you to produce a JSON schema which can be used in generating forms.
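To make that concrete, here is a minimal sketch that describes a model with JSON Schema (the type vocabulary OpenAPI builds on) and validates a payload against it; the Id/StartDate fields mirror the question's example, and the Python jsonschema package is just one common way to check such a schema server-side:
# A model definition expressed as JSON Schema: each property carries its type,
# so clients no longer have to guess that Id is an int and StartDate is a date.
from jsonschema import ValidationError, validate

MODEL_SCHEMA = {
    "type": "object",
    "properties": {
        "Id": {"type": "integer"},
        "StartDate": {"type": "string", "format": "date"},
    },
    "required": ["Id", "StartDate"],
}

def matches_model(payload):
    """Return True if the payload conforms to the published model definition."""
    try:
        validate(instance=payload, schema=MODEL_SCHEMA)
        return True
    except ValidationError:
        return False

# The same schema document can be served from an endpoint so the front end
# can build and validate forms from it.
print(matches_model({"Id": 1, "StartDate": "2020-01-01"}))  # True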
The hard part is that typings are applied at compile time, and JS does that in the browser.
You could use a typed schema layer such as GraphQL that adds definitions for those types ahead of time. Those definitions can then be dynamically fetched and enforced using TypeScript and a tool like Apollo.
If you don't want to use TypeScript or GraphQL, you could use something like a Mongoose schema, expose the schema on an endpoint, and have your front end rebuild the schema dynamically and check types by casting when creating new objects.
Personally, I've done this the old-fashioned way: write my own form schema and enforce the form types strictly on the front end by interpreting the field types:
// returned from the API somewhere
const fields = [{
  type: 'input',
  name: 'firstName',
  rank: 0,
  validation: '/^[a-zA-Z\\s]+$/'  // backslash doubled so \s survives in the string
}]
Edit:
Found this great library that exports typed interfaces based on GraphQL models:
https://github.com/avantcredit/gql2ts

API design for machine learning models

I'm using Django REST Framework to build an API web service that serves many already-trained machine learning models. Some models can predict on a batch size of 1, i.e., one image at a time. Others need a history of data (timelines) to predict/forecast. These timelines can usually hardly fit in, or be passed as, a request parameter. Given that, we want to give the requester the ability to choose either:
sending the data to predict (small batches) as a parameter, or
passing a database id/reference as a parameter, in which case the API queries the database and makes the predictions.
So the question is: what would be the best API design for identifying which approach the requester chose? Some approaches we considered:
Add /db to the endpoint path, e.g. POST models/<X>/db. The problem with this approach is that two endpoints are generated for each model.
Add a boolean db parameter to each request. The problem with this approach is that it adds overhead to each request just to check which mode was chosen, and it makes the code less readable.
Set a global flag per requester when they sign up for an API token. The problem is that it restricts the requester to one mode, which is not convenient.
What would be the best approach for this case?
The fact that you currently have more than one source would cause me to seriously consider abstracting the "source" component as much as possible, to allow all manner of sources. For example, suppose that future users would like to pull data out of MongoDB instead of whatever DB you currently use? Or from some other storage structure? Or from a third party? Or, or, or...
In any case the question is now "how much do they all have in common, and what should they all implement?"
# http_library and sql_library are placeholders for whatever routing and DB layers you use.
class Source(object):
    def __get_batch__(self, batch_size=1):
        raise NotImplementedError()  # each source needs to implement this on its own

@http_library.POST_endpoint("/db")
class DBSource(Source):
    def __init__(self, post_data):
        if post_data["table"] in ["data1", "data2"]:
            self.table = post_data["table"]
        else:
            raise Exception("Must use a predefined table to prevent SQL injection")

    def __get_batch__(self, batch_size=1):
        return sql_library.query("SELECT * FROM {} LIMIT ?".format(self.table), batch_size)

@http_library.POST_endpoint("/local")
class LocalSource(Source):
    def __init__(self, post_data):
        self.data = post_data["data"]
        self.i = 0  # read position for batching

    def __get_batch__(self, batch_size=1):
        batch = self.data[self.i:self.i + batch_size]
        self.i += batch_size
        return batch
This is just an example. However, if a fixed part of your path designates "the source", then you have left yourself open to scale this indefinitely.
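To make the dispatch concrete, here is a hypothetical sketch of how both endpoints could funnel into the same prediction code; run_model and the handler names are placeholders, not part of the example above:
# Hypothetical glue code: each endpoint builds a Source and hands it to the same
# shared prediction path, so only the source selection differs between routes.
def predict_from_source(source, batch_size=1):
    batch = source.__get_batch__(batch_size=batch_size)
    return run_model(batch)  # run_model is a placeholder for the actual model call

def handle_db_predict(post_data):     # would back POST models/<X>/db
    return predict_from_source(DBSource(post_data))

def handle_local_predict(post_data):  # would back POST models/<X>/local
    return predict_from_source(LocalSource(post_data))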
Add /db to the path of the endpoint, e.g. POST models/<X>/db. The problem with this approach is that two endpoints are generated for each model.
Inevitable. DRY out common code to sub-methods.
Add a boolean db parameter to each request. The problem with this approach is that it adds overhead to each request just to check which mode was chosen, and it makes the code less readable.
There won't be any additional overhead (matching a URL to a function/method is what your underlying framework does anyway). However, these are two separate functionalities; I would keep them separate, so I would prefer the first approach.
Set a global flag per requester when they sign up for an API token. The problem is that it restricts the requester to one mode, which is not convenient.
Yikes! Unless you provide a UI that lets users select their preference and apply it globally (I don't think any UX designer would agree to that).
That being said, the API design should be driven by asking who masters (or owns) the data. If it's the application, and the user already knows the ID of the entity, then you shouldn't be asking the user for the data.
If it's the user, and the data won't fit in a POST body, then I would say a real-time API may not be the right solution; think about message-queue or pub/sub based systems.
If you need a hybrid solution, as you asked in the question, then I would prefer the first approach.

Generate interactive API docs from Tornado web server code

I have a Tornado web server that exposes some endpoints in its API.
I want to be able to document my handlers (endpoints) in code, including description, parameters, examples, response structure, etc., and afterwards generate interactive documentation that lets one "play" with my API, easily make requests, and see the responses in a sandbox environment.
I know Swagger, and particularly its Swagger UI, is one of the best tools for that, but I'm confused about how it works. I understand that I need to feed the Swagger UI engine a .yaml file that defines my API, but how do I generate it from my code?
Many GitHub libraries I found aren't good enough or only support Flask...
Thanks
To my understanding, Swagger UI depends on a Swagger/OpenAPI specification.
So it boils down to generating that specification in a clean and elegant manner.
Did you get a chance to look at apispec?
It is an active project with a plugin for Tornado.
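Roughly, generating the specification from handler docstrings with apispec looks something like the sketch below; it assumes the TornadoPlugin from the separate apispec-webframeworks package, so double-check the exact API against the apispec docs:
# A minimal sketch: apispec builds the OpenAPI document from the YAML block
# after "---" in each handler docstring; Swagger UI can then consume its output.
from apispec import APISpec
from apispec_webframeworks.tornado import TornadoPlugin
from tornado.web import RequestHandler

spec = APISpec(
    title="My Tornado API",
    version="1.0.0",
    openapi_version="3.0.2",
    plugins=[TornadoPlugin()],
)

class ItemHandler(RequestHandler):
    def get(self, itemid):
        """Get item data.
        ---
        description: Fetch one item by id.
        responses:
            200:
                description: The requested item.
        """
        self.write({"id": itemid})

# The urlspec is the same (pattern, handler) tuple you would pass to tornado.web.Application.
spec.path(urlspec=(r"/item/(?P<itemid>\d+)", ItemHandler))

# to_yaml() (or to_dict()) produces the document you feed to Swagger UI.
print(spec.to_yaml())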
Here's how we are doing it in our project. We made our own module and we are still actively developing this. For more info: https://pypi.org/project/tornado-swirl/
import tornado.ioloop
import tornado.web
import tornado_swirl as swirl

@swirl.restapi(r'/item/(?P<itemid>\d+)')
class ItemHandler(tornado.web.RequestHandler):
    def get(self, itemid):
        """Get Item data.

        Gets Item data from database.

        Path Parameter:
            itemid (int) -- The item id
        """
        pass

@swirl.schema
class User(object):
    """This is the user class.

    Your usual long description.

    Properties:
        name (string) -- required. Name of user
        age (int) -- Age of user
    """
    pass

def make_app():
    return swirl.Application(swirl.api_routes())

if __name__ == "__main__":
    app = make_app()
    app.listen(8888)
    tornado.ioloop.IOLoop.current().start()

Need guidance in creating Rails 3 Engine/Plugin/Gem

I need some help figuring out the best way to proceed with creating a Rails 3 engine (or plugin, and/or gem).
Apologies for the length of this question...here's part 1:
My company uses an email service provider to send all of our outbound customer emails. They have created a SOAP web service and I have incorporated it into a sample Rails 3 app. The goal of creating an app first was so that I could then take that code and turn it into a gem.
Here's some of the background: The SOAP service has 23 actions in all and, in creating my sample app, I grouped similar actions together. Some of these actions involve uploading/downloading mailing lists and HTML content via the SOAP WS and, as a result, there is a MySQL database with a few tables to store HTML content and lists as a sort of "staging area".
All in all, I have 5 models to contain the SOAP actions (they do not inherit from ActiveRecord::Base) and 3 models that interact with the MySQL database.
I also have a corresponding controller for each model and a view for each SOAP action that I used to help me test the actions as I implemented them.
So...I'm not sure where to go from here. My code needs a lot of DRY-ing up. For example, the WS requires that the user authentication info be sent in the envelope body of each request. So, that means each method in the model has the same auth info hard coded into it which is extremely repetitive; obviously I'd like for that to be cleaner. I also look back now through the code and see that the requests themselves are repetitive and could probably be consolidated.
All of that I think I can figure out on my own, but here is something that seems obvious yet I can't figure out: how can I create methods that can be used in all of my models (thinking specifically of the user-auth part of the equation)?
Here's part 2:
My intention from the beginning has been to extract my code and package it into a gem in case any of my ESP's other clients could use it (plus I'll be using it in several different apps). However, I'd like for it to be very configurable. There should be a default minimal configuration (i.e. just models that wrap the SOAP actions) created just by adding the gem to a Gemfile. I'd also like for there to be some tools available (like generators or Rake tasks) to get a user started. What I have in mind is options to create migration files, models, controllers, or views (or the whole nine yards if they want).
So, here's where I'm stuck on knowing whether I should pursue the plugin or engine route. I read Jordan West's series on creating an engine and I really like the thought of that, but I'm not sure if that is the right route for me.
So if you've read this far and I haven't confused the hell out of you, I could use some guidance :)
Thanks
Let's answer your question in parts.
Part One
Ruby's flexibility means you can share code across all of your models extremely easily. Are they extending any sort of class? If they are, simply add the methods to the parent object like so:
class SOAPModel
  def request(action, params)
    # Request code goes in here
  end
end
Then it's simply a case of calling request in your respective models. Alternatively, you could access this method statically with SOAPModel.request. It's really up to you. Otherwise, if (for some bizarre reason) you can't touch a parent object, you could define the methods dynamically:
[User, Post, Message, Comment, File].each do |model|
  model.send :define_method, :request, proc { |action, params|
    # Request code goes in here
  }
end
It's Ruby, so there are tons of ways of doing it.
Part Two
Gems are more than flexible enough to handle your problem; both Rails and Rake are pretty smart and will look inside your gem (as long as it's in your environment file and Gemfile). Create a generators directory and a /name/name_generator.rb, where name is the name of your generator. Then just run rails g name and you're there. The same goes for Rake tasks.
I hope that helps!