Relay node lost after refetch

I'm struggling with some weird behaviour of Relay. Basically I have a query like
query {
  user {
    friends {
      [Connection of friendType]
    }
    friend(id: [...]) {
      [gives the friendType]
    }
  }
}
So there is a connection of friends, and also the possibility to access one friend by id (to display the friend's profile with some additional information). The connection works nicely until a specific friend gets fetched by the component that shows the profile (this happens using a RelayRenderer).
The fetching of the friend works and it gets displayed... nice. But afterwards, the friend's data no longer appears in the connection. The object is still present, but I can't access any of the data fetched by the component that displays the profile.
Also, even if the data displayed in the profile is the same as in the list (i.e. the fragments coincide), the profile component will fetch the friend again.
I think this is related to the data masking enforced by Relay: two components display the same fragment of one particular object (the friend), but only the one that actually fetched the data is allowed to see it.
This is counter-intuitive in my opinion, as it makes caching effectively useless: I'd have to fetch the whole connection of friends again after displaying one of them...
Any thoughts as to what I am doing wrong here?
EDIT: I've been doing some research, and this is what I've seen:
The node data only gets lost if it's part of a fragment, i.e. if I have
fragment on user {
  friends {
    edges {
      node {
        id,
        name
      }
    }
  }
}
and in another place
fragment on user {
  friend(id: [...]) {
    id,
    name
  }
}
then querying the second fragment breaks the first one: the list (which uses the first fragment) no longer gets any access to node.name. If on the other hand both fragments use a nested
fragment on friend {name}
then it works.
This however is very tedious, as it means splitting up Relay containers down to the level of individual Texts (in some cases), as in the sketch below.
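Concretely, the nested-fragment workaround looks roughly like this (a rough sketch against the Relay Classic API; the FriendName component and the first: 10 connection argument are only illustrative):

import * as React from 'react';
import Relay from 'react-relay'; // Relay Classic (pre-1.0 API)

// Leaf component whose container owns the shared fragment.
class FriendName extends React.Component<{ friend: { name: string } }> {
  render() {
    return <span>{this.props.friend.name}</span>;
  }
}

const FriendNameContainer = Relay.createContainer(FriendName, {
  fragments: {
    friend: () => Relay.QL`
      fragment on friend {
        name
      }
    `,
  },
});

// Both the list and the profile compose the same container fragment
// instead of declaring `name` themselves:
const friendsListFragment = Relay.QL`
  fragment on user {
    friends(first: 10) {
      edges {
        node {
          id
          ${FriendNameContainer.getFragment('friend')}
        }
      }
    }
  }
`;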

Related

SignalR, how to ensure that only one user can edit a given form at a time?

I have a dashboard with a list of items and a finite number of users. I want to show "an item is being edited" near said item to avoid simultaneous edits and overwrites of data.
This seems to me like updating a flag in the database plus a relatively simple SignalR implementation, with the JavaScript simply adding/removing a CSS class.
I have seen this:
Prevent multiple people from editing the same form
which describes a method of posting every X minutes and clearing the flag from the database when there are no more update messages from the user.
The issue is:
I was wondering if there is a SignalR method (like disconnect; I know it exists, but I don't know if it fits this scenario) to do that elegantly, rather than running a timer function. If so, is it possible for the server to miss the event and permanently leave the item flagged as "editing" when it is not?
You could implement a hub for this. Here is an example:
public class ItemAccessHub : Hub
{
    public override Task OnConnectedAsync()
    {
        // your logic to lock the object, set a state in the db
        return base.OnConnectedAsync();
    }

    public override Task OnDisconnectedAsync(Exception exception)
    {
        // your logic to unlock the object
        return base.OnDisconnectedAsync(exception);
    }
}
To get information from the query string you can access the HttpContext:
Context.GetHttpContext().Request.Query.TryGetValue("item-id", out var itemId)
So you could start a connection when the user is accessing the form and send the id of the item in the query string:
/hub/itemAccess?item-id=ITEM_ID
and when the user closes the form, disconnect the connection (see the client-side sketch below).
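For illustration, a minimal browser client along those lines, sketched with the @microsoft/signalr package (the hub URL and query-parameter name are taken from above; the openForm/closeForm wrappers and the lack of error handling are simplifications):

import * as signalR from "@microsoft/signalr";

const itemId = "ITEM_ID"; // id of the item the user is about to edit

const connection = new signalR.HubConnectionBuilder()
  .withUrl(`/hub/itemAccess?item-id=${itemId}`)
  .build();

// Opening the form: start() triggers OnConnectedAsync on the hub,
// which locks the item.
export function openForm(): Promise<void> {
  return connection.start();
}

// Closing the form: stop() triggers OnDisconnectedAsync, which unlocks it.
export function closeForm(): Promise<void> {
  return connection.stop();
}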
With this method the item is also unlocked when the user loses their network connection.
The OnDisconnectedAsync method is always invoked when a client disconnects, so you can do your cleanup in this method.
In this hub you can then also implement the update function.
I hope this is what you are looking for.

Flux without data caching?

Almost all examples of Flux involve a data cache on the client side; however, I don't think I would be able to do this for a lot of my application.
In the system I am thinking about using React/Flux for, a single user can have hundreds of thousands of the main piece of data we store (and one record probably has at least 75 data properties). Caching this much data on the client side seems like a bad idea and would probably make things more complex.
If I were not using Flux, I would just have an ORM-like system that can talk to a REST API, in which case a request like userRepository.getById(123) would always hit the API regardless of whether I had requested that data on the previous page. My idea is to just have the store expose these methods.
Does Flux consider it bad if a request for data always hits the API and never pulls from a local cache instance? Can I use Flux in a way where the majority of the data-retrieval requests always hit an API?
The closest you can sanely get to no caching is to reset any store state to null or [] when an action requesting new data comes in. If you do this you must emit a change event, or else you invite race conditions.
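As a rough illustration, here is a minimal hand-rolled store following that reset-and-emit pattern (the store and action names are invented, and EventEmitter stands in for whatever change-emitter your Flux implementation uses):

import { EventEmitter } from "events";

interface User { id: number; name: string; }

type UserAction =
  | { type: "USERS_REQUESTED" }
  | { type: "USERS_RECEIVED"; users: User[] };

class UserStore extends EventEmitter {
  users: User[] | null = null;

  handleAction(action: UserAction): void {
    switch (action.type) {
      case "USERS_REQUESTED":
        // Reset state as soon as a new request starts...
        this.users = null;
        // ...and emit a change so views never render stale data.
        this.emit("change");
        break;
      case "USERS_RECEIVED":
        this.users = action.users;
        this.emit("change");
        break;
    }
  }
}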
As an alternative to Flux, you can simply use promises and a simple mixin with an API to modify state. For example, with Bluebird:
var promiseStateMixin = {
  thenSetState: function(updates, initialUpdates){
    // promisify setState
    var setState = this.setState.bind(this);
    var setStateP = function(changes){
      return new Promise(function(resolve){
        setState(changes, resolve);
      });
    };
    // if we have initial updates, apply them and ensure the state change happens
    return Promise.resolve(initialUpdates ? setStateP(initialUpdates) : null)
      // wait for our main updates to resolve (Bluebird's Promise.props turns
      // an object of promises into a promise for an object of values)
      .then(function(){
        return Promise.props(updates);
      })
      // apply our unwrapped updates
      .then(function(resolvedUpdates){
        return setStateP(resolvedUpdates);
      })
      .bind(this);
  }
};
And in your components:
handleRefreshClick: function(){
  this.thenSetState(
    // users is Promise<User[]>
    {users: Api.Users.getAll(), loading: false},
    // we can't do our own setState due to unlikely race conditions;
    // instead we supply our own here, but don't worry, the
    // getAll request is already running.
    // this argument is optional
    {users: [], loading: true}
  ).catch(function(error){
    // `error` is the rejection reason of the getAll() promise;
    // `this` is our component instance here (thanks to .bind)
    this.setState({error: error, loading: false});
  });
}
Of course this doesn't prevent you from using Flux when/where it makes sense in your application. For example, react-router is used in many, many React projects, and it uses Flux internally. React and related libraries/patterns are designed to only help where desired, and never to control how you write each component.
I think the biggest advantage of using Flux in this situation is that the rest of your app doesn't have to care that data is never cached, or that you're using a specific ORM system. As far as your components are concerned, data lives in stores, and data can be changed via actions. Your actions or stores can choose to always go to the API for data or cache some parts locally, but you still win by encapsulating this magic.
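To make that concrete, here is a sketch of an action creator that always goes to the API while stores and components stay oblivious (the endpoint URL, action names, and the use of the flux package's Dispatcher are all just illustrative):

import { Dispatcher } from "flux";

interface User { id: number; name: string; }

type AppAction =
  | { type: "USER_REQUESTED"; id: number }
  | { type: "USER_RECEIVED"; user: User };

const dispatcher = new Dispatcher<AppAction>();

// Every call hits the REST API; no local cache is consulted.
// Stores listening to the dispatcher update as usual, so components
// never need to know where the data came from.
async function loadUser(id: number): Promise<void> {
  dispatcher.dispatch({ type: "USER_REQUESTED", id });
  const user: User = await fetch(`/api/users/${id}`).then(r => r.json());
  dispatcher.dispatch({ type: "USER_RECEIVED", user });
}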

Deep level access control in DataMapper ORM

Introduction
I'm currently building an access control system in my DataMapper ORM installation (with CodeIgniter 2.*). I have the initial injection of the User's rights (Root/Anonymous layers too) working perfectly. When a User logs in, the DataMapper calls done in the system are automatically marked with the Userrights that User has.
Up to this point it works perfectly, but now I'm in a bit of a bind. The problem is that I need some way to catch and filter each method call on the Object that is instantiated.
I have two special calls so I can disable the Userrights checks too. This is particularly handy at the exact moment I want to log in a User and need to do initial checks:
DataMapper::disable_userrights();
$this->_user = new User($this->session->userdata('_user_id'));
$this->_userrights = ($this->_user ? $this->_user->userrights(TRUE) : NULL);
DataMapper::enable_userrights();
The above makes sure I can do the initial User (and its Userrights) injection. Inside the DataMapper library I use $CI =& get_instance(); to access the _ globals I use. The general rule in this installation I'm building is that $this->_ is reserved for a "globals" system that always gets loaded (or can sometimes be NULL/FALSE) so I can easily access information that's almost always required on each page/call.
Details
OK, so imagine that with the above, my logged-in User has the Userrights Create/Read/Update on the User Entity. Now if I call a simple:
$test = new User();
$test->get_where('name', 'Allendar');
the $_rights Array inside the DataMapper instance will know that the currently logged-in User is allowed to perform certain tasks on "this" instance:
protected $_rights = array(
    'Create' => TRUE,
    'Read'   => TRUE,
    'Update' => TRUE,
    'Delete' => FALSE,
);
The issue
Now comes my problem. I want to control these Userrights by validating them on each action that is performed. I have the following ideas:
1. Super redundant: make a global validation method that is executed at the start of every other method in the DataMapper class.
Problem 1: I have to spam the whole DataMapper class with the same calls
Problem 2: I have no control over DataMapper extension methods
Problem 3: How do I detect relational includes? They should be validated too
2. Low-level binding on certain core DataMapper calls where I can clearly detect what kind of action is executed on the database (C/R/U/D).
So I'm aiming for Option 2, as it will also solve Problem 2 of Option 1.
The problem is that DataMapper is so massive, and it's pretty complex to discern what actually happens, and when, at its deepest calling level. Furthermore, it looks like all methods are very scattered and hardly ever use each other ($this->get() is often not used to do an eventual call to get a dataset).
So my goal is:
User (logged-in, Anonymous, Root) makes a DataMapper instance
$user_test = new User;
User wants to get $user_test (Read)
$user_test->get(1);
DataMapper will validate the actual call that is done at the database:
IF it is only a SELECT: OK
IF it is something other than a SELECT (or it JOINs to other Models the User doesn't have access to), it will fail with a clear error message
IF the JOINed Models also validate: OK
Return the actual instance:
IF OK: continue DataMapper's normal workflow
IF not OK: inform the User and return the normal empty DataMapper instance of that Model
Furthermore, for this system I think I will need to add some customization for the raw_sql (etc.) SQL calls, so that I either have to inject the rights related to that SQL statement manually, or only allow the Root User to do those things.
Recap
I'm curious whether someone has ever attempted something like this in DataMapper, or has some hints on how I can use/intercept those lowest-level calls in DataMapper.
If I can get some clarity on the deepest level of DataMapper's actual final query call, I can probably get a long way myself too.
I would suggest not doing this in DataMapper itself (mainly due to the complexity of the code, as you have already noticed yourself).
Instead, use a base model, and have that extend DataMapper. Then add the code required for your ACL checks to that base model, and overload every DataMapper method that needs an ACL check. Have it call your ACL, deal with an access denied, and if access is granted, simply return the result of parent::method();.
Instead of extending DataMapper, your application models should then extend this base model, so they will inherit the ACL features.

WebDriver / Read elements into variables and re-use them

I have a big issue with WebDriver (Selenium 2).
In my test code, I find all the elements at the beginning of my test and do some actions on them (like click(), checking attributes, etc.). My problem is that the page is refreshed, my elements get reloaded, and WebDriver doesn't recognize the elements again.
I know that I can find my elements again, but in some functions I don't know the XPaths/IDs; I only have the WebElements, not the XPaths/IDs.
Am I right in saying that it's not possible to read elements into variables and re-use them?
WebElement element = driver.findElement(By.id("xyz"));
The above line will store the element object in element. You can certainly pass this element to other functions to make use of it there.
We generally follow a pattern called the Page Object pattern, where we create all objects of a page as members of a class and instantiate them at once. This way we can use them anywhere in our project. For example, all objects on the login page will be created as public static variables in a class called LoginPage. The constructor of the LoginPage class will find the elements and store them.
The next time you want to access an object of LoginPage anywhere, you access it as below (assuming that you have created the elements userName and submit)...
LoginPage.userName.sendKeys("buddha");
LoginPage.submit.click();
However, as Robie mentioned, there is a chance for these objects to become inaccessible through the previously created object after a page refresh. You can use the modified approach below to ensure these objects are always found.
Instead of creating the objects as member variables, create a get method for each object that you may need to use.
class LoginPage
{
    public static WebElement getUserName()
    {
        return driver.findElement(By.id("xyz"));
    }
}
Once LoginPage is defined that way, the next time you want to use userName you use the syntax below. This way you don't have to give locators to the functions that need to use these objects.
LoginPage.getUserName().sendKeys("buddha");
By using this approach, you can ensure that the objects are always accessible.
Buddha is incorrect in the following statement:
You can reuse it any number of times, however, it only works as long as the id doesn't change.
As you have correctly observed, if the page reloads, then elements become stale, even if the original object is still displayed on screen. In fact, refreshing of HTML via AJAX calls can also make objects stale even if the URL has not changed.
This is how Selenium works, and you have to understand this when deciding how to implement a test framework.
You can store elements, reuse them and pass them to functions, but understand when they will become stale and need to be refound.
In my current project, I have a very AJAX-heavy application in which objects are continually becoming stale, so I have extended WebElement to find and store its HTML id when constructed, then refind the element by id if a stale element exception occurs and re-perform the method that failed. However, this was achieved using Ruby and is very specific to my application, as I know every object has a unique HTML id. I do not believe this approach would work for most applications under test.
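To illustrate the idea in a more portable form, here is a rough sketch of such a refind-on-stale wrapper using the selenium-webdriver Node bindings (the RefindingElement class is invented, and it still relies on the element having a stable, unique HTML id):

import { By, WebDriver, WebElement, error } from "selenium-webdriver";

class RefindingElement {
  constructor(private driver: WebDriver, private htmlId: string) {}

  private find(): Promise<WebElement> {
    return this.driver.findElement(By.id(this.htmlId));
  }

  // Run an action against the element; on StaleElementReferenceError,
  // refind the element by id once and retry the action.
  async perform<T>(action: (el: WebElement) => Promise<T>): Promise<T> {
    try {
      return await action(await this.find());
    } catch (e) {
      if (e instanceof error.StaleElementReferenceError) {
        return action(await this.find());
      }
      throw e;
    }
  }
}

// Usage: the click survives a refresh between lookup and action.
// const userName = new RefindingElement(driver, "xyz");
// await userName.perform(el => el.sendKeys("buddha"));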
I would also question whether storing elements in public static variables populated on construction is actually following the Page Object pattern. I have never seen it implemented this way before, and I can see lots of potential pitfalls. Lazy instantiation may be a better approach when following the Page Object pattern.

NHibernate, Databinding to DataGridView, Lazy Loading, and Session management - need advice

My main application form (WinForms) has a DataGridView, that uses DataBinding and Fluent NHibernate to display data from a SQLite database. This form is open for the entire time the application is running.
For performance reasons, I set the convention DefaultLazy.Always() for all DB access.
So far, the only way I've found to make this work is to keep a Session (let's call it MainSession) open all the time for the main form, so NHibernate can lazy load new data as the user navigates with the grid.
Another part of the application can run in the background and Save to the DB. Currently (after considerable struggle), my approach is to call MainSession.Disconnect(), create a disposable Session for each Save, and call MainSession.Reconnect() after finishing the Save. Otherwise SQLite will throw "The database file is locked" exceptions.
This seems to be working well so far, but past experience has made me nervous about keeping a session open for a long time (I ran into performance problems when I tried to use a single session for both Saves and Loads - the cache filled up, and bogged down everything - see Commit is VERY slow in my NHibernate / SQLite project).
So, my question - is this a good approach, or am I looking at problems down the road?
If it's a bad approach, what are the alternatives? I've considered opening and closing my main session whenever the user navigates with the grid, but it's not obvious to me how I would do that - hook every event from the grid that could possibly cause a lazy load?
I have the nagging feeling that trying to manage my own sessions this way is fundamentally the wrong approach, but it's not obvious what the right one is.
Edit
It's been more than a year since I asked this question...and it turns out that keeping a main session open for the lifetime of the app has indeed led to performance problems.
There seem to be a lot more NH users on SO these days - anyone want to suggest a better approach?
Yeah, it's me again. ;-)
Stumbling upon your new question reminds me of the following: did you understand the principle of lazy loading, or are you mistaking lazy loading for pagination? NHibernate also provides functionality for the latter.
If you just want to display some defined properties within your grid that are of course within the object graph, I think you should retrieve the whole data at once using fetched joins. If the row count of the data is too high, you can think about pagination; as far as I know it's also possible with DataGridView and binding.
Lazy loading results in multiple database calls - in your case I'd think at least one per row. This seems not to be the best-performing solution.
If you instead use paging with FetchType.Join, you can get rid of the long-running session, and all your problems should be solved. So how about that?
I had a project where there was a main grid for selection.
I had a class which paged through the set, and I called session.Clear() every time I got a new page.
class MyList : IList<Data>
{
    private int _pagesize = 50;
    private ISession _session;        // from ctor
    private int _firstresult = int.MinValue;
    private IList<Data> cached;

    public Data this[int index]
    {
        get
        {
            // fetch a new page if the index falls outside the cached one
            // (Between is a small extension-method helper)
            if (!index.Between(_firstresult, _firstresult + cached.Count))
            {
                _firstresult = index;
                GetData();
            }
            if (!index.Between(_firstresult, _firstresult + cached.Count))
                throw new IndexOutOfRangeException();
            return cached[index - _firstresult];
        }
    }

    void GetData()
    {
        // clear the session so entities from earlier pages don't accumulate
        _session.Clear();
        cached = _session.QueryOver<Data>()
            .Skip(_firstresult)
            .Take(_pagesize)
            .List();
    }

    // remaining IList<Data> members omitted
}
If you need Databinding, maybe implement IBindingList.