How can I encapsulate the session/transaction acquisition into the lazy-init of relations in Squeryl? - orm

I am trying to implement a One-To-Many relation using Squeryl, and following the instructions on their site.
The documentation gives the following example:
object SchoolDb extends Schema {
  val courses = table[Course]
  val subjects = table[Subject]

  val subjectToCourses =
    oneToManyRelation(subjects, courses).
      via((s, c) => s.id === c.subjectId)
}

class Course(val subjectId: Long) extends SchoolDb2Object {
  lazy val subject: ManyToOne[Subject] = SchoolDb.subjectToCourses.right(this)
}

class Subject(val name: String) extends SchoolDb2Object {
  lazy val courses: OneToMany[Course] = SchoolDb.subjectToCourses.left(this)
}
I find that any call to Course.subject or Subject.courses needs to be wrapped in a transaction. However, one of my goals in using an ORM is to hide these details from callers, so I don't want the calling code to have to wrap access to these fields in a transaction.
It seems that if I modify the example to wrap the lazy init function in a transaction, like so:
class Subject(val name: String) extends SchoolDb2Object {
  lazy val courses: OneToMany[Course] = {
    inTransaction {
      SchoolDb.subjectToCourses.left(this)
    }
  }
}
I get the following exception:
Exception in thread "main" java.lang.RuntimeException: no session is bound to current thread, a session must be created via Session.create
and bound to the thread via 'work' or 'bindToCurrentThread'
at scala.Predef$.error(Predef.scala:58)
at org.squeryl.Session$$anonfun$currentSession$1.apply(Session.scala:111)
at org.squeryl.Session$$anonfun$currentSession$1.apply(Session.scala:111)
at scala.Option.getOrElse(Option.scala:104)
at org.squeryl.Session$.currentSession(Session.scala:110)
at org.squeryl.dsl.AbstractQuery.org$squeryl$dsl$AbstractQuery$$_dbAdapter(AbstractQuery.scala:116)
at org.squeryl.dsl.AbstractQuery$$anon$1.<init>(AbstractQuery.scala:120)
at org.squeryl.dsl.AbstractQuery.iterator(AbstractQuery.scala:118)
at org.squeryl.dsl.DelegateQuery.iterator(DelegateQuery.scala:9)
But, like I said, if I wrap the caller in a transaction, then everything works.
So, how can I encapsulate the fact that this object is backed by a database in the object itself?

I assume you get this error in calls on the courses object?
I don't know very much about how Squeryl works, but I believe that the OneToMany[Course] is a live object. That means that the calls on the courses object need a session since any call may lazily go to the database to fetch data.
How you organise this depends on the type of application. In a web application it often makes sense to add a filter (the first point of entry) to start and stop the transaction. In a GUI client, say a Swing application, a good solution is to start the transaction at the point where you receive the user interaction. That way your transactions are not too long and still stretch over calls which you expect to be performed atomically (either fully or not at all).
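A minimal sketch of the web-filter approach (not from the original answer), assuming a servlet container and that SessionFactory.concreteFactory has been configured at application startup:
// Wrap every request in one Squeryl transaction so that lazy relations such as
// Subject.courses can be walked anywhere downstream without the caller opening
// a session itself.
import javax.servlet.{Filter, FilterChain, FilterConfig, ServletRequest, ServletResponse}
import org.squeryl.PrimitiveTypeMode._

class SquerylTransactionFilter extends Filter {
  def init(config: FilterConfig): Unit = {}
  def destroy(): Unit = {}

  def doFilter(req: ServletRequest, res: ServletResponse, chain: FilterChain): Unit =
    inTransaction {
      chain.doFilter(req, res)
    }
}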

Related

How to convert Flux&lt;Item&gt; to List&lt;Item&gt; by blocking

Background
I have a legacy application where I need to return a List<Item>
There are many different Service classes each belonging to an ItemType.
Each service class calls a few different backend APIs and collects the responses to create a SubType of the Item.
So we can say each service class implementation returns an Item.
All backend API access code uses WebClient, which returns a Mono of some type, and I can zip all the Monos within the service to create an Item.
The user should be able to look up many different types of items in one call, which requires many backend calls.
So for performance's sake, I wanted to make this all asynchronous using Reactor, so I introduced Spring Reactive code.
Problem
If my endpoint had to return Flux&lt;Item&gt;, this code would work fine.
But this is service code which is used by other legacy callers,
so eventually I want to return a List&lt;Item&gt;. When I try to convert my Flux into the List I get an error:
"message": "block()/blockFirst()/blockLast() are blocking,
which is not supported in thread reactor-http-nio-3",
Here is the service, which is calling a few other service classes.
Flux<Item> itemFlux = Flux.fromIterable(searchRequestByItemType.entrySet())
        .flatMap(e -> getService(e.getKey()).searchItems(e.getValue()))
        .subscribeOn(Schedulers.boundedElastic());

List<Item> items = itemFlux
        .collectList()
        .block(); // This line throws the error
Here is what the above service is calling
default Flux<Item> searchItems(List<SingleItemSearchRequest> requests) {
    return Flux.fromIterable(requests)
            .flatMap(this::searchItem)
            .subscribeOn(Schedulers.boundedElastic());
}
Here is what a single-item search is which is used by above
public Mono<Item> searchItem(SingleItemSearchRequest sisr) {
    return Mono.zip(backendApi.getItemANameApi(sisr.getItemIdentifiers().getItemId()),
                    sisr.isAddXXXDetails()
                            ? backendApi.getItemAXXXApi(sisr.getItemIdentifiers().getItemId())
                            : Mono.empty(),
                    sisr.isAddYYYDetails()
                            ? backendApi.getItemAYYYApi(sisr.getItemIdentifiers().getItemId())
                            : Mono.empty())
            .map(tuple3 -> Item.builder()
                    .name(tuple3.getT1())
                    .xxxDetails(tuple3.getT2())
                    .yyyDetails(tuple3.getT3())
                    .build());
}
Sample project to replicate the problem:
https://github.com/mps-learning/spring-reactive-example
I’m new to spring reactor, feel free to pinpoint ALL errors in the code.
UPDATE
As per Patrick Hooijer's Bonus suggestion, I updated the Mono.zip entries to always contain some default.
@Override
public Mono<Item> searchItem(SingleItemSearchRequest sisr) {
    System.out.println("\t\tInside " + supportedItem() + " searchItem with thread " + Thread.currentThread().toString());
    // TODO: how to make these XXX YYY calls conditionals In clear way?
    return Mono.zip(getNameDetails(sisr).defaultIfEmpty("Default Name"),
                    getXXXDetails(sisr).defaultIfEmpty("Default XXX Details"),
                    getYYYDetails(sisr).defaultIfEmpty("Default YYY Details"))
            .map(tuple3 -> Item.builder()
                    .name(tuple3.getT1())
                    .xxxDetails(tuple3.getT2())
                    .yyyDetails(tuple3.getT3())
                    .build());
}

private Mono<String> getNameDetails(SingleItemSearchRequest sisr) {
    return mockBackendApi.getItemCNameApi(sisr.getItemIdentifiers().getItemId());
}

private Mono<String> getYYYDetails(SingleItemSearchRequest sisr) {
    return sisr.isAddYYYDetails()
            ? mockBackendApi.getItemCYYYApi(sisr.getItemIdentifiers().getItemId())
            : Mono.empty();
}

private Mono<String> getXXXDetails(SingleItemSearchRequest sisr) {
    return sisr.isAddXXXDetails()
            ? mockBackendApi.getItemCXXXApi(sisr.getItemIdentifiers().getItemId())
            : Mono.empty();
}
Edit: Below answer does not solve the issue, but it contains useful information about Thread switching. It does not work because .block() is no problem for non-blocking Schedulers if it's used to switch to synchronous code.
This is because the block operator inherited the reactor-http-nio-3 Thread from backendApi.getItemANameApi (or one of the other calls in Mono.zip), which is non-blocking.
Most operators continue working on the Thread on which the previous operator executed, because the Thread is linked to the emitted item. There are two groups of operators where the Thread of the output item differs from that of the input:
flatMap, concatMap, zip, etc: Operators that emit items from other Publishers will keep the Thread link they received from this inner Publisher, not from the input.
Time based operators like delayElements, interval, buffer(Duration), etc. will schedule their tasks on the provided Scheduler, or Schedulers.parallel() if none provided. The emitted items will then be linked to the Thread the task was scheduled on.
In your case, Mono.zip emits items from backendApi.getItemANameApi linked to reactor-http-nio-3, and that link is propagated downstream, out of both the flatMap in searchItems and the one in itemFlux, until it reaches your block operator.
You can solve this by placing a .publishOn(Schedulers.boundedElastic()), either in searchItem, searchItems or itemFlux. This will cause the item to switch to a Thread in the provided Scheduler.
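Purely to illustrate where such a hop would go in the service from the question (note the edit above: in this particular application the hop alone did not make the block() error disappear):
// Illustrative sketch only: publishOn switches which Scheduler emits items
// downstream, so everything after it is linked to a boundedElastic thread
// instead of reactor-http-nio.
Flux<Item> itemFlux = Flux.fromIterable(searchRequestByItemType.entrySet())
        .flatMap(e -> getService(e.getKey()).searchItems(e.getValue()))
        .publishOn(Schedulers.boundedElastic());

List<Item> items = itemFlux.collectList().block();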
Bonus: Since you requested to pinpoint errors: Your Mono.zip will not work if sisr.isAddXXXDetails() is false, as Mono.zip discards any element it could not zip. Since you return a Mono.empty() in that case, no items can be zipped and it will return an empty Mono.
If we have only spring-boot-starter-webflux defined as an application dependency, then Spring Boot spins up a Netty server,
and one is not expected to block() in a reactive application running on a non-blocking server.
However, once we add the spring-boot-starter-web dependency, then even with spring-boot-starter-webflux present, Spring Boot spins up a Tomcat server, which is a thread-per-request model and is allowed to make blocking calls.
So to solve my problem, all I had to do was add the spring-boot-starter-web dependency to pom.xml. After that the application starts in Tomcat,
and with Tomcat .collectList().block() works in the Controller class to return the List<Item>.
Whereas with the Netty server I could only return Flux<Item>, not List<Item>, which is expected.
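For reference, the Maven coordinates of that dependency (the version is managed by the Spring Boot parent, so none is declared here):
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>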

Neo4j 3.5's embedded database does not seem to persist data

I am trying to build a small command line tool that will store data in a Neo4j graph. To do this I have started experimenting with Neo4j 3.5's embedded databases. After putting together the following example I have found that either the nodes I am creating are not being saved to the database or the method of database creation is overwriting my previous run.
The Example:
fun main() {
    // Spin up the database
    val graphDBFactory = GraphDatabaseFactory()
    val graphDB = graphDBFactory.newEmbeddedDatabase(File("src/main/resources/neo4j"))
    registerShutdownHook(graphDB)

    val tx = graphDB.beginTx()
    graphDB.createNode(Label.label("firstNode"))
    graphDB.createNode(Label.label("secondNode"))
    val result = graphDB.execute("MATCH (a) RETURN COUNT(a)")
    println(result.resultAsString())
    tx.success()
}
private fun registerShutdownHook(graphDb: GraphDatabaseService) {
    // Registers a shutdown hook for the Neo4j instance so that it
    // shuts down nicely when the VM exits (even if you "Ctrl-C" the
    // running application).
    Runtime.getRuntime().addShutdownHook(object : Thread() {
        override fun run() {
            graphDb.shutdown()
        }
    })
}
I would expect that every time I run main the resulting query count will increase by 2.
That is currently not the case and I can find nothing in the docs that references a different method of opening an already created embedded database. Am I trying to use the embedded database incorrectly or am I missing something? Any help or info would be appreciated.
Build info:
Kotlin JVM 1.4.21
Neo4j Community 3.5.35
Transactions in Neo4j 3.x have a three-stage model:
create
success / failure
close
You missed the third stage, close, which is what actually commits or rolls back.
You can use Kotlin's use, since Transaction is AutoCloseable.
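A minimal sketch of the fixed transaction handling for the question's main, using the same graphDB as above:
// Transaction is AutoCloseable, so use {} guarantees the close() step;
// success() before close() commits, otherwise the transaction rolls back.
graphDB.beginTx().use { tx ->
    graphDB.createNode(Label.label("firstNode"))
    graphDB.createNode(Label.label("secondNode"))
    println(graphDB.execute("MATCH (a) RETURN COUNT(a)").resultAsString())
    tx.success()   // mark for commit; the actual commit happens when use closes the tx
}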

Spring data - webflux - Chaining requests

I use the reactive Mongo drivers and the WebFlux dependencies, and I have code like the following.
public Mono<Employee> editEmployee(EmployeeEditRequest employeeEditRequest) {
    return employeeRepository.findById(employeeEditRequest.getId())
            .map(employee -> {
                BeanUtils.copyProperties(employeeEditRequest, employee);
                return employeeRepository.save(employee);
            });
}
Employee Repository has the following code
Mono<Employee> findById(String employeeId)
Does the thread actually block when findById is called? I understand the portion within map actually blocks the thread.
if it blocks, how can I make this code completely reactive?
Also, in this reactive paradigm of writing code, how do I handle that given employee is not found?
Yes, map is a blocking, synchronous operation whose execution time is expected to be deterministic.
map should be used when you want to transform an object or data in fixed time, i.e. for operations that are done synchronously, e.g. your BeanUtils.copyProperties call.
flatMap should be used for non-blocking operations, or in short anything that returns a Mono or Flux.
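A tiny illustration of that split, using the calls from the question (employeeMono stands in for the result of findById):
// map: a synchronous, in-memory transformation of the emitted value
Mono<Employee> updated = employeeMono.map(employee -> {
    BeanUtils.copyProperties(employeeEditRequest, employee);
    return employee;
});

// flatMap: the mapper itself returns a Mono (the reactive save), which gets flattened
Mono<Employee> saved = updated.flatMap(employeeRepository::save);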
"how do I handle that given employee is not found?" -
findById returns empty mono when not found. So we can use switchIfEmpty here.
Now let's come to what changes you can make to your code:
public Mono<Employee> editEmployee(EmployeeEditRequest employeeEditRequest) {
    return employeeRepository.findById(employeeEditRequest.getId())
            .switchIfEmpty(Mono.defer(() -> {
                //do something
            }))
            .map(employee -> {
                BeanUtils.copyProperties(employeeEditRequest, employee);
                return employee;
            })
            .flatMap(employee -> employeeRepository.save(employee));
}
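For the not-found branch, one common option is to signal an error (ResponseStatusException is standard Spring, but its use here is an assumption, not part of the original answer):
public Mono<Employee> editEmployee(EmployeeEditRequest employeeEditRequest) {
    return employeeRepository.findById(employeeEditRequest.getId())
            // emit a 404-style error when findById completes empty
            .switchIfEmpty(Mono.error(new ResponseStatusException(
                    HttpStatus.NOT_FOUND,
                    "Employee not found: " + employeeEditRequest.getId())))
            .map(employee -> {
                BeanUtils.copyProperties(employeeEditRequest, employee);
                return employee;
            })
            .flatMap(employeeRepository::save);
}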

nhibernate lazy loading

I have a session problem when I use lazy loading in the UI layer.
My piece of code (in the DAO layer)
public List<Visites> GetVisitesClientQuery(string idClient)
{
    using (ISession session = Repository.TSession())
    {
        var results = (from v in session.Query<Visites>()
                       where v.Clients.Idclient == idClient
                       select v);
        return results.ToList<Visites>();
    }
}
I call it in the UI layer :
var visites = VisiteManager.Instance.GetVisitesClientQuery(lstClients.SelectedValue.ToString());
foreach (Visites v in visites)
{
    foreach (Factures f in v.Factures)
    {
        ...
    }
}
v.Factures is a collection.
If I enumerate the collection inside the using block it works (the session is still open), but called like this it doesn't work and I get this error:
Initializing[NHibernateTest.BusinessObjects.Visites#036000007935]-
failed to lazily initialize a collection of role:
NHibernateTest.BusinessObjects.Visites.Factures, no session or session was closed
Is it possible to handle a lazy-loading call in the UI layer?
The problem here is that you handle your session management inside your repository (DAO layer), which isn't a good idea.
An ISession implementation in NHibernate represents a 'Unit of Work'. A Unit of Work needs to know the 'context' of the 'use case' to be able to use this concept successfully.
Your repository, however, has no notion of the 'context' of the use case in which it (the repository) is used.
Therefore, it is not your DAO layer that should decide when to open an ISession; it is your 'application layer' (or even your UI layer, if you do not have an application layer) which should do that, since that is the layer that knows your context.
By doing so, you can indeed effectively use the Session as a Unit of Work. In order to save an entity, you'll have to save it with the same session that you used to load it (otherwise, you'll need to 'lock' the entity into the session).
Next to that, it will also solve your lazy loading problem. :)
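A rough sketch of that idea, with the session (the unit of work) opened by the calling layer rather than by the repository (the repository signature and names here are assumptions, not the question's code):
// The application/UI layer owns the session and transaction; the repository
// only uses the session it is handed, so lazy collections stay loadable.
using (ISession session = sessionFactory.OpenSession())
using (ITransaction tx = session.BeginTransaction())
{
    var visites = visiteRepository.GetVisitesClient(session, idClient);

    foreach (Visites v in visites)
    {
        foreach (Factures f in v.Factures)   // session still open, lazy load succeeds
        {
            // ...
        }
    }

    tx.Commit();
}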
You should eager-load the collections you know you will be using, with
session.Query&lt;Visites&gt;().Fetch(x => x.Factures).
That will load all Visites together with their Factures.
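A minimal sketch of the eager-loading version of the DAO method from the question; FetchMany is the NHibernate LINQ extension for fetching a collection (the counterpart of the Fetch call mentioned above, and it requires the NHibernate.Linq namespace):
// Join-fetch the Factures collection while the session is still open, so the
// UI layer can enumerate it without any lazy loading afterwards.
public List<Visites> GetVisitesClientQuery(string idClient)
{
    using (ISession session = Repository.TSession())
    {
        return session.Query<Visites>()
                      .Where(v => v.Clients.Idclient == idClient)
                      .FetchMany(v => v.Factures)
                      .ToList();
    }
}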

Flushing in NHibernate

This question is a bit of a dupe, but I still don't understand the best way to handle flushing.
I am migrating an existing code base, which contains a lot of code like the following:
private void btnSave_Click()
{
    SaveForm();
    ReloadList();
}

private void SaveForm()
{
    var foo = FooRepository.Get(_editingFooId);
    foo.Name = txtName.Text;
    FooRepository.Save(foo);
}

private void ReloadList()
{
    fooRepeater.DataSource = FooRepository.LoadAll();
    fooRepeater.DataBind();
}
Now that I am changing the FooRepository to NHibernate, what should I use for the FooRepository.Save method? Should the FooRepository always flush the session when the entity is saved?
I'm not sure if I understand your question, but here is what I think:
Think in "putting objects to the session" instead of "getting and storing data". NH will store all new and changed objects in the session without any special call to it.
Consider this scenarios:
Data change:
Get data from the database with any query. The entities are now in the NH session
Change entities by just changing property values
Commit the transaction. Changes are flushed and stored to the database.
Create a new object:
Call a constructor to create a new object
Store it to the database by calling "Save". It is in the session now.
You still can change the object after Save
Commit the changes. The latest state will be stored to the database.
If you work with detached entities, you also need Update or SaveOrUpdate to put detached entities to the session.
Of course you can configure NH to behave differently. But it works best if you follow this default behaviour.
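A small sketch of that default behaviour, using raw NHibernate calls (the entity shape and the id variable are assumptions based on the question's Foo):
// Changes to loaded entities and objects passed to Save() are tracked by the
// session and flushed when the transaction commits.
using (ISession session = sessionFactory.OpenSession())
using (ITransaction tx = session.BeginTransaction())
{
    // data change: the loaded Foo is already in the session, editing it is enough
    var foo = session.Get<Foo>(editingFooId);
    foo.Name = "new name";

    // new object: Save puts it into the session; it can still be changed afterwards
    var bar = new Foo { Name = "brand new" };
    session.Save(bar);
    bar.Name = "changed before commit";

    tx.Commit();   // flushes the update and the insert to the database
}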
It doesn't matter whether or not you explicitly flush the session between modifying a Foo entity and loading all Foos from the repository. NHibernate is smart enough to auto-flush itself if you have made changes in the session that may affect the results of the query you are trying to run.
Ideally I try to use one session per "unit of work". This means one cohesive piece of work which may involve several smaller steps. If you feel that you do not have a seam in your architecture where you can achieve this, then managing the session inside the repository will also work. Just be aware that you are missing out on some of the power that NHibernate provides you.
I'd vote up Stefan Moser's answer if I could - I'm still getting to grips with NHibernate myself, but I think it's nice to be able to write code like this:
private void SaveForm()
{
    using (var unitofwork = UnitOfWork.Start())
    {
        var foo = FooRepository.Get(_editingFooId);
        var bar = BarRepository.Get(_barId);

        foo.Name = txtName.Text;
        bar.SomeOtherProperty = txtBlah.Text;

        FooRepository.Save(foo);
        BarRepository.Save(bar);

        UnitOfWork.CommitChanges();
    }
}
This way either the whole action succeeds or it fails and rolls back, keeping flushing/transaction management outside of the repositories.