I developed a Flask app that uses SQLAlchemy as its database layer. Since it needs user registration and login, I added Flask-Security-Too (https://flask-security-too.readthedocs.io/en/stable/).
The problem I am facing is that when a user runs multi-device sessions (i.e. logs in on one device and updates data on another), the SQLAlchemy ORM raises a conflict because the same model instance ends up shared among different database sessions:
SqlAlchemy: Insert New Row and Modify Another: "Can't attach instance <ObjectT>; another instance with key is already present in this session"
Regarding that, I have read about merging objects and transient objects, and neither seems to be a valid solution.
Without going into the code, what I am trying to figure out is: what strategy does a multi-user, multi-device application need to follow when using SQLAlchemy?
Should I be using scoped sessions?
Is SQLAlchemy a good fit?
Is there any alternative to Flask-Security-Too that avoids this kind of problem?
More Data:
My database instantiation:
import os
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker
from sqlalchemy.ext.declarative import declarative_base

BONOS_SQLALCHEMY_DATABASE_URI = os.environ.get('BONOS_SQLALCHEMY_DATABASE_URI')
engine = create_engine(BONOS_SQLALCHEMY_DATABASE_URI, pool_size=14)
SessionFactory = scoped_session(sessionmaker(bind=engine))
session = SessionFactory(expire_on_commit=True)
Base = declarative_base()
The code that generates the problem when I try to save changes in a session:
class SQLAlchemyDatastore(Datastore):
    def commit(self):
        self.db.session.commit()

    def put(self, model):
        # merge the (possibly detached) instance into the current session,
        # commit, then re-add the original object
        self.db.session.merge(model)
        self.db.session.commit()
        self.db.session.add(model)
        return model
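For reference, the pattern usually suggested here is one session per request, handed out by the scoped_session registry and removed at the end of the request, combined with always continuing to work on the object returned by merge() rather than the original detached instance. Below is a minimal sketch of that idea, assuming the SessionFactory defined above; the MergingDatastore class is made up for illustration and is not the Flask-Security-Too implementation.

from flask import Flask

app = Flask(__name__)

@app.teardown_appcontext
def cleanup_session(exception=None):
    # hand the request-local session back to the scoped_session registry so
    # the next request starts fresh and no detached instances linger
    SessionFactory.remove()

class MergingDatastore(object):
    """Illustrative datastore that always works on the merged copy."""

    def put(self, model):
        session = SessionFactory()       # the current request's session
        merged = session.merge(model)    # reattach/copy state into this session
        session.commit()
        return merged                    # callers must keep using this object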
I would like to maintain comma-separated lists of entries of the form <ip>:<app>, indexed by account ID. There would be one such list for each user, keyed by their account ID, with the number of users in the millions. This is mainly to track which server in a cluster a user of a given application is connected to.
Since all servers are written in Java, with Redisson I'm currently doing:
RSet<String> set = client.getSet(accountKey);
and then I can modify the set using some typical Java container APIs supported by Redisson. I basically need three types of updates to these comma separated lists:
Client connects to a new application = append
Client reconnects with existing application to new endpoint = modify
Client disconnects = remove
A new connection would require a change to a field like:
1.1.1.1:foo,2.2.2.2:bar -> 1.1.1.1:foo,2.2.2.2:bar,3.3.3.3:baz
A reconnect would require an update like:
1.1.1.1:foo,2.2.2.2:bar -> 3.3.3.3:foo,2.2.2.2:bar
A disconnect would require an update like:
1.1.1.1:foo,2.2.2.2:bar -> 2.2.2.2:bar
As mentioned the fields would be keyed by the account ID of the user.
My question is the following: without using Redisson, how can I implement this "directly" on top of Redis commands? The goal is to allow rewriting certain components in a language other than Java. The cluster handles close to a million requests per second.
I'm actually quite curious how Redisson implements an RSet under the hood, but I haven't had time to dig into it. I guess one option would be to use Lua, but I've never used it with Redis. Any ideas on how to implement these operations efficiently on top of Redis in a manner that is easily supported by multiple languages, i.e. without relying on a specific library?
Having actually thought about the problem properly, it can be solved directly with a Redis hash (HSET), where <app> is the field name, the value is the IP endpoint, and the key is the user's account ID.
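To make that concrete, here is a minimal sketch built on plain HSET/HDEL/HGETALL, shown with the redis-py client but usable from any language with a Redis driver; the conn:<accountId> key naming is just an assumption for the example.

import redis

r = redis.Redis()

def connect(account_id, app, endpoint):
    # append or modify: HSET overwrites the field if it already exists, so
    # the same command covers both "new connection" and "reconnect"
    r.hset('conn:%s' % account_id, app, endpoint)

def disconnect(account_id, app):
    # remove: HDEL drops only that application's entry
    r.hdel('conn:%s' % account_id, app)

def connections(account_id):
    # read back all <app> -> <ip> pairs for the user
    return r.hgetall('conn:%s' % account_id)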
We are creating an online platform and exposing a Julia API via an embedded code editor. The user can access the API and run some analysis on our web app. I have a question related to controlling access to the API and its objects.
The API right now contains a database handle and other objects that are exposed to the user and could be used to compromise the internal system.
Below is the current architecture:
UserProgram.jl
function doanalysis()
    data = getdata()
    # some analysis on data
end
InternalProgram.jl
const client = MongoClient()
const collection = MongoCollection(client, "dbname", "collectionName")

function getdata()
    data = # some function to get data from collection
    return data
end

# after parsing the user program
doanalysis()
To run the user's analysis, we pass the user program as a command-line argument (using the ArgParse module) and run the internal program as follows:
$ julia InternalProgram.jl --file Userprogram.jl
With this architecture, the user potentially gets access to "client" and "collection" and can modify the internal databases.
Is there a better way to solve this problem without exposing the objects?
I hope someone has an answer to this.
You will be exposing yourself to multiple types of vulnerabilities - as a general rule, executing user-supplied code is a VERY BAD IDEA.
1/ like you said, you'll potentially allow users to execute arbitrary code against your database.
2/ your users will have access to all the power of Julia to do things on your server (for example, download files they can later execute, or access other servers and services on the machine [MySQL, email, etc.]). Depending on the level of access of the Julia process, think unauthorized access to your file system, installing key loggers, running spam servers, etc.
3/ users will be able to pull in Julia packages and get you into a lot of trouble - for example, add/use the Requests.jl package and launch DoS attacks on other servers.
If you really want to go this way, I recommend that you:
A/ set proper (minimal) permissions for the MongoDB user configured to be used in the app (ex: http://blog.mlab.com/2016/07/mongodb-tips-tricks-collection-level-access-control/)
B/ execute each user's code into a separate sandbox / container that only exposes the minimum necessary software
C/ have your containers running on a managed platform where tooling exists (firewalls) to monitor incoming and outgoing traffic (for example to block spam or DoS attacks)
In order to achieve B/ and C/, my recommendation is to use JuliaBox. I haven't used it myself, but it seems to be exactly what you need: https://github.com/JuliaCloud/JuliaBox
Once you get that running, you can also use https://github.com/JuliaWeb/JuliaWebAPI.jl
Following a comment and the criticism that the question is too broad, I'll try to make it more specific.
Environment - Server: Autobahn|Python with Twisted, WAMP v2
Given that:
a) I have a class which extends RouterSession and authenticates the user, and at the moment of authentication knows who the user accessing the WAMP service is;
and
b) I have a class which extends ApplicationSession and on creation exposes several RPC methods through WAMP.
How do I access the user data in the exposed RPC method? By user data I mean the identity that I verified at the beginning of a specific client connection through RouterSession.
The difficulty is that ApplicationSessions are instantiated only once and don't have a clue about the caller (from what I have seen in the debugger).
The reason I need this is to execute the RPC call in the context of the calling user, where the method's result might depend on specific properties of that user's profile.
I am probably looking for something that could represent per-connection Application instances (which could then hold a reference to the authorization result and data), the way most server Protocols operate in Twisted.
-----------------ORIGINAL POST-----------
Brief question: imagine a scenario where user rights are based not on the method but on the object. Example: I may have the right to edit my own profile account and the profile accounts of my subordinates, but not any others. This leads to a situation where I would expose the "com.myorg.profile.update_profile_details" RPC through WAMP to everyone, but would have to check which user is trying to modify which profile.
I see there is a validate mechanism in the WAMP router for dealing with request validation - but it seems to lack a reference to the previous authentication result or session.
I would really like to avoid passing session keys (or, god forbid, auth tokens) back and forth through WAMP; what's the suggested approach for this?
----------------END OF ORIGINAL POST----------
After debugging traces back and forth, I found a solution that fits me - the trick is to pass an additional options parameter when registering RPC methods:
self.register(self.test_me_now, 'com.ossnet.testme', options = RegisterOptions(details_arg = 'details', discloseCaller = True))
and voila - the registered RPC method receives a new incoming parameter, 'details', with the following contents:
CallDetails: CallDetails(progress = None, caller = 774234234575675677138, authid = johan.gram, authrole = user, authmethod = ticket)
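For completeness, a minimal sketch of how the registration and the RPC method fit together (the procedure body is a placeholder, and CallDetails field names and RegisterOptions keyword spellings vary between Autobahn releases):

from twisted.internet.defer import inlineCallbacks
from autobahn.twisted.wamp import ApplicationSession
from autobahn.wamp.types import RegisterOptions

class BackendSession(ApplicationSession):

    @inlineCallbacks
    def onJoin(self, details):

        def update_profile_details(profile_id, changes, details=None):
            # 'details' is the CallDetails object the router injects because
            # of details_arg='details'; it carries the caller's session id
            # and the authid/authrole established during authentication
            caller_authid = details.authid if details else None
            # ...check here whether caller_authid may edit profile_id...
            return {'updated_by': caller_authid, 'profile': profile_id}

        yield self.register(update_profile_details,
                            'com.myorg.profile.update_profile_details',
                            options=RegisterOptions(details_arg='details',
                                                    discloseCaller=True))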
Introduction
I'm currently building an access control system in my DataMapper ORM installation (with CodeIgniter 2.*). I have the initial injection of the User's rights (Root/Anonymous layers too) working perfectly. When a User logs in, the DataMapper calls made in the system are automatically marked with the Userrights that User has.
Up to this point it works perfectly, but now I'm in a bit of a bind. The problem is that I need some way to catch and filter each method call on the object that is instantiated.
I also have two special calls so I can disable the Userrights checks. This is particularly handy at the exact moment I want to log in a User and need to do the initial checks;
DataMapper::disable_userrights();
$this->_user = new User($this->session->userdata('_user_id'));
$this->_userrights = ($this->_user ? $this->_user->userrights(TRUE) : NULL);
DataMapper::enable_userrights();
The above makes sure I can do the initial injection of the User (and its Userrights). Inside the DataMapper library I use $CI =& get_instance(); to access the _ globals I use. The general rule in this installation is that $this->_ is reserved for a "globals" system that always gets loaded (or can sometimes be NULL/FALSE) so I can easily access information that's almost always required on each page/call.
Details
Ok, so imagine that with the above my logged-in User has the Userrights Create/Read/Update on the User Entity. So now I call a simple:
$test = new User();
$test->get_where('name', 'Allendar');
The $_rights array inside the DataMapper instance will then know that the currently logged-in User is allowed to perform certain tasks on "this" instance;
protected $_rights = array(
    'Create' => TRUE,
    'Read'   => TRUE,
    'Update' => TRUE,
    'Delete' => FALSE,
);
The issue
Now comes my problem. I want to enforce these Userrights by validating them on each action that is performed. I have the following ideas;
Super redundant: make a global validation method that is executed at the start of every other method in the DataMapper class.
Problem 1: I have to spam the whole DataMapper class with the same calls
Problem 2: I have no control over DataMapper extension methods
Problem 3: How do I detect relational includes? They should be validated too
Low-level binding on certain core DataMapper calls where I can clearly detect what kind of action is executed on the database (C/R/U/D).
So I'm aiming for option 2, as it will also solve option 1's Problem 2.
The problem is that DataMapper is massive and it's pretty complex to discern what actually happens, and when, at its deepest calling level. Furthermore it looks like the methods are very scattered and hardly ever use each other ($this->get() is often not used to do the eventual call that fetches a dataset).
So my goal is:
User (logged-in, Anonymous, Root) makes a DataMapper instance
$user_test = new User;
User wants to get $user_test (Read)
$user_test->get(1);
DataMapper will validate the actual call that is done at the database
IF it is only SELECT; OK
IF it is something other than SELECT (or it JOINs to other Models that the User doesn't have access to), it will fail with a clear error message
IF JOINed Models also validate; OK
Return the actual instance;
IF OK: continue DataMapper's normal workflow
IF not OK: inform the User and return the normal empty DataMapper instance of that Model
Furthermore, for this system I think I will need to add some customization for the raw_sql (etc.) calls, so that I either inject the rights manually for that SQL statement or only allow the Root User to do those things.
Recap
I'm curious whether someone has ever attempted something like this in DataMapper, or has some hints on how I can use/intercept those lowest-level calls in DataMapper.
If I can get some clarity on the deepest level of DataMapper's actual final query call, I can probably get a long way myself too.
I would suggest not doing this in DataMapper itself (mainly due to the complexity of the code, as you have already noticed yourself).
Instead, use a base model and have that extend DataMapper. Then add the code required for your ACL checks to the base model, and overload every DataMapper method that needs an ACL check. Have it call your ACL, deal with an access denied, and if access is granted, simply return the result of parent::method();.
Instead of extending DataMapper, your application models should then extend this base model, so they will inherit the ACL features.
I am developing a site using App Engine and Webapp2.
I understand the concepts of OO and, more or less, how they are applied in Python. However, I am confused about how App Engine uses OO. When an instance of my app is created, is one instance of each class created and reused for every user, or are separate instances created for each user? This will decide whether I should use instance or class variables.
So, to be even more specific: when should I use self. variables (instance variables) and when should I leave out self. (class variables)?
Thanks for your help. :)
I would separate the concepts of object orientation (OO) and request handling. First and foremost, App Engine is based on a request-driven model: a request is the basis for most actions triggered on App Engine.
Second of all, be aware of the difference between an App Engine instance [0], which is like a container for your application and is provided by the App Engine infrastructure, and a webapp2.WSGIApplication [1], which is an object instance of a class you defined.
To simplify things, I assume your app has only one webapp2.WSGIApplication. Now let's start with the first request your application gets. Before that, nothing of your app exists except the code and configuration available on App Engine machines. Once the request reaches App Engine, a new App Engine instance [0] is created. Once the App Engine instance is in place and set up, it will instantiate a webapp2.WSGIApplication [1]. Now you have both relevant "instances" in place, the object being a part of the container. Next, the incoming request is routed to your webapp2.WSGIApplication instance, which handles it according to the implementation you have done.
The App Engine system will create new App Engine instances for you depending on the load. If a single instance is not able to handle all the requests that come in, a new instance is created (first [0], then [1] within the former) and the load is spread. If that's still not enough, a third instance is created, and so on. The same is true when load decreases: if your application is currently running on 3 instances but 2 would be enough to handle the load, 1 instance will be killed. In addition, you don't know which particular instance will handle which request.
And this leads us to your second question: should you depend on instance variables? Because App Engine creates and kills instances as it sees fit and you don't know which instance handles a request, you should always assume instances to be stateless. While that is not always the case in practice, potentially every request can be handled by a completely new instance.
If you need to keep state, use memcache (volatile), the datastore (persistent), or some other data backend (blobstore, the files API, and so on).
[0] https://developers.google.com/appengine/docs/adminconsole/instances
[1] http://webapp-improved.appspot.com/guide/app.html
PS: people do use instance memory to optimize requests, but beginners who are just starting to learn about App Engine should consider this an advanced technique.
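As a minimal sketch of what that statelessness looks like in a webapp2 handler (the handler name and cache key are invented for this example): anything assigned to self. lives only for the single request it was created for, while anything that must survive across requests goes to memcache or the datastore instead of class or module-level variables.

import webapp2
from google.appengine.api import memcache

class CounterHandler(webapp2.RequestHandler):
    def get(self):
        # instance variable: exists only for this one request, on whichever
        # App Engine instance happened to receive it
        self.greeting = 'Hello'

        # cross-request state: kept in memcache, not in a class variable,
        # because each App Engine instance has its own copy of the class
        visits = memcache.incr('visits', initial_value=0) or 0  # None on cache error

        self.response.write('%s, visit number %d' % (self.greeting, visits))

app = webapp2.WSGIApplication([('/', CounterHandler)])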