Spring WebFlux Reactive MongoDB - how to combine two change streams?

In my app I want to combine two change streams: one listening for changes to a filter object, and one listening for inserts/deletes on the collection actually in question (the one to which I apply the filter stored in the other collection).
I therefore have approximate code like this:
public Flux<List<Object>> findAll(String userId){
<some code here>
return service.findByUserId(userId)
< here i get initial set of data when change streams do not act >
.concatWith(reactiveMongoTemplate
.changeStream("Filter",options,Filter.class)
< here i listen to changes to the filter >
)
.concatWith(reactiveMongoTemplate
.changeStream("Object",options,Object.class)
< here i listen to changes to the collection itself >
);
}
The problem is that the second change stream does not work. I can tell by swapping them: whichever changeStream sits in the first concatWith works, and the one below it does not.
The question
Have you faced such behavior?
What would be a better approach for this kind of complex listening on two or more collections, to deliver the changes to the UI?
Update
This is the whole method I have, with one changeStream working and the other not:
public Flux<List<EventDTO>> findAll(String userId){
Aggregation fluxAggregation = changeStreamHelper.createAggregationBasedOnUserId(userId);
ChangeStreamOptions options = changeStreamHelper.createChangeStreamOpts(fluxAggregation);
return eventListFilterService.findEventListFilterByUserId(userId).flatMap(fltr -> Mono.just(activeEventsFilter.applyCriterion(null, fltr))
.flatMap(criteria -> eventFilteredRepository.findEventsByCriteria(criteria)
.flatMap(eventMapper::toDto).collectList()))
.concatWith(reactiveMongoTemplate
.changeStream("EventListFilter", options, EventListFilter.class).map(ChangeStreamEvent::getBody)
.map(fltr -> activeEventsFilter.applyCriterion(null, fltr))
.flatMap(criteria -> eventFilteredRepository.findEventsByCriteria(criteria)
.flatMap(eventMapper::toDto).collectList())
)
.concatWith(reactiveMongoTemplate
.changeStream("Event", changeStreamHelper.createSimpleChangeStreamOpts(), Event.class).map(ChangeStreamEvent::getBody)
.flatMap(event -> eventListFilterService.findEventListFilterByUserId(userId).flatMap(fltr -> Mono.just(activeEventsFilter.applyCriterion(null, fltr))
.flatMap(criteria -> eventFilteredRepository.findEventsByCriteria(criteria)
.flatMap(eventMapper::toDto).collectList())
)));
}
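A likely explanation: concatWith only subscribes to the next publisher after the previous one completes, and a MongoDB change stream is an infinite Flux that never completes, so the second concatWith is never subscribed. That matches the symptom that whichever stream comes first wins. One possible fix, sketched below with the names from the method above (untested), is to combine the two infinite streams with Flux.merge, which subscribes to all of its sources at once:
public Flux<List<EventDTO>> findAll(String userId) {
    Aggregation fluxAggregation = changeStreamHelper.createAggregationBasedOnUserId(userId);
    ChangeStreamOptions options = changeStreamHelper.createChangeStreamOpts(fluxAggregation);
    return eventListFilterService.findEventListFilterByUserId(userId)
        .flatMap(fltr -> Mono.just(activeEventsFilter.applyCriterion(null, fltr))
            .flatMap(criteria -> eventFilteredRepository.findEventsByCriteria(criteria)
                .flatMap(eventMapper::toDto).collectList()))
        // merge() subscribes to both change streams eagerly;
        // concat would wait for the first one to complete, which never happens
        .concatWith(Flux.merge(
            reactiveMongoTemplate
                .changeStream("EventListFilter", options, EventListFilter.class)
                .map(ChangeStreamEvent::getBody)
                .map(fltr -> activeEventsFilter.applyCriterion(null, fltr))
                .flatMap(criteria -> eventFilteredRepository.findEventsByCriteria(criteria)
                    .flatMap(eventMapper::toDto).collectList()),
            reactiveMongoTemplate
                .changeStream("Event", changeStreamHelper.createSimpleChangeStreamOpts(), Event.class)
                .map(ChangeStreamEvent::getBody)
                .flatMap(event -> eventListFilterService.findEventListFilterByUserId(userId)
                    .flatMap(fltr -> Mono.just(activeEventsFilter.applyCriterion(null, fltr))
                        .flatMap(criteria -> eventFilteredRepository.findEventsByCriteria(criteria)
                            .flatMap(eventMapper::toDto).collectList())))));
}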

Related

Slick plain SQL query with pagination

I have something like this, using Akka, Alpakka + Slick
Slick
.source(
sql"""select #${onlyTheseColumns.mkString(",")} from #${dbSource.table}"""
.as[Map[String, String]]
.withStatementParameters(rsType = ResultSetType.ForwardOnly, rsConcurrency = ResultSetConcurrency.ReadOnly, fetchSize = batchSize)
.transactionally
).map( doSomething )...
I want to update this plain SQL query to skip the first N elements.
But that is very DB specific.
Is it possible to have the pagination bit generated by Slick? (Like for type-safe queries, where one can just drop, filter, take?)
ps: I don't have the schema, so I cannot go the type-safe way; I just want all tables as Maps and to filter, drop, etc. on them.
ps2: at the Akka level, flow.drop works, but it's not optimal/slow, because it still consumes the rows.
Cheers
Since you are using plain SQL, you have to provide workable SQL in the code snippet. Plain SQL may not be type-safe, but it is flexible.
BTW, the most efficient way is to skip the first N elements in the database itself, such as with LIMIT/OFFSET in MySQL.
Depending on your database engine, you could use something like:
val page = 1
val pageSize = 10
val query = sql"""
select #${onlyTheseColumns.mkString(",")}
from #${dbSource.table}
limit #${pageSize + 1}
offset #${pageSize * (page - 1)}
"""
The pageSize + 1 part tells you whether a next page exists.
I want to update this plain sql query with skipping the first N-th element. But that is very DB specific.
As you're concerned about changing the SQL for different databases, I suggest you abstract away that part of the SQL and decide what to do based on the Slick profile being used.
If you are working with multiple database products, you've probably already abstracted away from any specific profile, perhaps using JdbcProfile. In that case you could place your "skip N elements" helper in a class and use the active slickProfile to decide on the SQL to use. (Alternatively, you could of course check via some other means, such as an environment value you set.)
In practice that could be something like this:
case class Paginate(profile: slick.jdbc.JdbcProfile) {
// Return the correct LIMIT/OFFSET SQL for the current Slick profile
def page(size: Int, firstRow: Int): String =
if (profile.isInstanceOf[slick.jdbc.H2Profile]) {
s"LIMIT $size OFFSET $firstRow"
} else if (profile.isInstanceOf[slick.jdbc.MySQLProfile]) {
s"LIMIT $firstRow, $size"
} else {
// And so on... or a default
// Danger: I've no idea if the above SQL is correct - it's just placeholder
???
}
}
Which you could use as:
// Import your profile
import slick.jdbc.H2Profile.api._
val paginate = Paginate(slickProfile)
val action: DBIO[Seq[Int]] =
sql""" SELECT cols FROM table #${paginate.page(100, 10)}""".as[Int]
In this way, you get to isolate (and control) RDBMS-specific SQL in one place.
To make the helper more usable, and as slickProfile is implicit, you could instead write:
def page(size: Int, firstRow: Int)(implicit profile: slick.jdbc.JdbcProfile) =
// Logic for deciding on SQL goes here
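For illustration, here is a minimal completion of that variant, reusing the same two profiles as the class above (with the same placeholder caveat about the SQL):
def page(size: Int, firstRow: Int)(implicit profile: slick.jdbc.JdbcProfile): String =
  if (profile.isInstanceOf[slick.jdbc.H2Profile]) s"LIMIT $size OFFSET $firstRow"
  else if (profile.isInstanceOf[slick.jdbc.MySQLProfile]) s"LIMIT $firstRow, $size"
  else ??? // and so on, or a default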
I feel obliged to comment that using a splice (#$) in plain SQL opens you to SQL injection attacks if any of the values are provided by a user.
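For comparison, a minimal sketch of the difference, assuming a user-supplied name value: plain $name creates a JDBC bind parameter, while #$name splices the raw string into the SQL text.
// Safe: name is sent to the database as a bind parameter
val safe = sql"SELECT id FROM users WHERE name = $name".as[Int]
// Unsafe: name is pasted into the SQL string as-is (injection risk)
val unsafe = sql"SELECT id FROM users WHERE name = '#$name'".as[Int]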

Terraform: How Do I Set Up a Resource Based on Configuration

So here is what I want as a module in Pseudo Code:
IF UseCustom, Create AWS Launch Config With One Custom EBS Device and One Generic EBS Device
ELSE Create AWS Launch Config With One Generic EBS Device
I am aware that I can use the 'count' function within a resource to decide whether it is created or not... So I currently have:
resource "aws_launch_configuration" "basic_launch_config" {
  count = var.boolean ? 0 : 1
  blah
}
resource "aws_launch_configuration" "custom_launch_config" {
  count = var.boolean ? 1 : 0
  blah
  blah
}
Which is great, now it creates the right Launch configuration based on my 'boolean' variable... But in order to then create the AutoScalingGroup using that Launch Configuration, I need the Launch Configuration Name. I know what you're thinking, just output it and grab it, you moron! Well of course I'm outputting it:
output "name" {
description = "The Name of the Default Launch Configuration"
value = aws_launch_configuration.basic_launch_config.*.name
}
output "name" {
description = "The Name of the Custom Launch Configuration"
value = aws_launch_configuration.custom_launch_config.*.name
}
But from the higher level, where I call the module that creates the Launch Configuration and then the Auto Scaling Group, how the heck do I know which output to use for passing into the ASG?
Is there a different way to grab the value I want that I'm overlooking? I'm new to Terraform, and the lack of real conditionals is really throwing me for a loop.
This seemed to be the cleanest way I could find, using a ternary operator:
output "name {
description = "The Name of the Launch Configuration"
value = "${(var.booleanVar) == 0 ? aws_launch_configuration.default_launch_config.*.name : aws_launch_configuration.custom_launch_config.*.name}
}
Let me know if there is a better way!
You can use the same variable you used to decide which resource to enable to select the appropriate result:
output "name" {
value = var.boolean ? aws_launch_configuration.custom_launch_config[0].name : aws_launch_configuration.basic_launch_config[0].name
}
Another option, which is a little more terse but arguably also a little less clear to a future reader, is to exploit the fact that you will always have one list of zero elements and one list with one element, like this:
output "name" {
value = concat(
aws_launch_configuration.basic_launch_config[*].name,
aws_launch_configuration.custom_launch_config[*].name,
)[0]
}
Concatenating these two lists will always produce a single-item list due to how the count expressions are written, and so we can use [0] to take that single item and return it.
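As an aside, not part of the original answers: on Terraform 0.15 or later, the one() function expresses the same idea and fails loudly if the list ever ends up with more than one element. A sketch:
output "name" {
  # one() returns the single element of a one-element list,
  # and raises an error if there are several
  value = one(concat(
    aws_launch_configuration.basic_launch_config[*].name,
    aws_launch_configuration.custom_launch_config[*].name,
  ))
}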

Finding the Size of a Model in Google App Maker

I've been trying to figure out how Google App Maker works with models by writing a simple button to return the length (number of records) of a model I've created and loaded temporary data into (which should have about 150 records).
I'm working with a model called Generic Logs that has ten different fields.
app.models.GenericLogs.fields._values.length - Returns 10
alert(app.models.GenericLogs.fields.Id.maxValue) - Returns null
alert(app.models._values.length) - Returns 2 (I have a second model)
alert(app.models.GenericLogs.datasources._values.length) - Returns 1
I definitely want to get the 150+ count covering all of the records (non-unique).
Option 1:
Set your datasource's limit setting to 0 and do console.log(app.datasources.YourDatasource.items.length). The downside is that all records will be returned to the client, which might slow down your UI.
Option 2:
Create a server function -
function YourFunction() {
  // A query with limit 0 has no record limit, so it returns every record
  var query = app.models.YourModel.newQuery();
  query.limit = 0;
  var results = query.run();
  return results.length;
}
Create a button in your client and attach the following to its onClick event:
google.script.run.withSuccessHandler(function (serverresult) { console.log(serverresult); }).YourFunction();
Reference: https://developers.google.com/appmaker/scripting/server#querying_records

How to clear datatable filters?

I'm using custom filtering for my DataTable using the method:
$.fn.dataTableExt.afnFiltering.push("custom filter function");
This function adds a filter to my DataTable.
The problem is that when I use Ajax to create another DataTable object, this filter persists and is applied to the other table, which should have nothing to do with this filter. How do I clear the filter, or bind it to the first DataTable only?
Since you push onto $.fn.dataTableExt.afnFiltering, it is an array. So when you receive your data, you can remove the filter reference from this array using:
delete $.fn.dataTableExt.afnFiltering[index or key];
This sets the element to undefined,
or by using JavaScript's splice method:
$.fn.dataTableExt.afnFiltering.splice(index, 1);
This removes the element from the array.
So
var index = $.fn.dataTableExt.afnFiltering.indexOf("custom filter function");
$.fn.dataTableExt.afnFiltering.splice(index, 1);
should resolve your problem.
(See Javascript - remove an array item by value for the details, as indexOf is not supported by IE < 9.)
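As for the other half of the question (binding the filter to one table only): a custom search function receives the settings object of the table currently being filtered, so it can simply ignore every other table. A minimal sketch against the 1.10+ API; the table id firstTable and the column check are invented examples:
$.fn.dataTable.ext.search.push(function (settings, data, dataIndex) {
  // Skip every table except the one this filter was written for
  if (settings.nTable.id !== 'firstTable') {
    return true; // keep all rows of other tables untouched
  }
  return data[0] === 'someValue'; // the actual filtering logic (example)
});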
If you are going to use the 1.10+ version of DataTables in the future, custom filters are handled by the search plug-in, documented here:
Search plug-in development
To reset the filter for version 1.10+, simply use either of the following:
$.fn.dataTable.ext.search = [];
$.fn.dataTable.ext.search.pop();
After this, redraw the table:
table.draw();
$.fn.dataTableExt.afnFiltering.pop();
Credit to @DrewT.
As mentioned by @kthorngren, there is no built-in way of tracking whether, or how many, custom searches are active.
If you are sure that only one custom search is active, a
$.fn.dataTableExt.afnFiltering.pop();
will work. But there is a big BUT:
$.fn.dataTable.ext.search is an array which contains the search settings for custom searches and for searchPanes.
Erasing this array with $.fn.dataTable.ext.search = [];, or doing two pop()s although only one custom search is active, will break searchPanes.
e.g. if you have three panes active, this would mean:
$.fn.dataTable.ext.search[0] -> SearchPane Col1
$.fn.dataTable.ext.search[1] -> SearchPane Col2
$.fn.dataTable.ext.search[2] -> SearchPane Col3
$.fn.dataTable.ext.search[3] -> Custom Search -> safe to delete
$.fn.dataTable.ext.search[4] -> Custom Search -> safe to delete
The following code does the job in my case:
// Number of search functions registered by searchPanes
let lenOfSearchPanes = dt.settings()[0]._searchPanes.c.columns.length;
let lenOfSearchArr = $.fn.dataTable.ext.search.length;
let diff = lenOfSearchArr - lenOfSearchPanes;
// Drop only the trailing custom searches, keeping the searchPanes entries
if (diff > 0) {
  $.fn.dataTable.ext.search = $.fn.dataTable.ext.search.slice(0, -diff);
}

Calling deleteRecord() too often or too fast at a time?

I want to delete all records of the model "Article" (around five records). I'm doing it like this:
CMS.ArticlesController = Em.ArrayController.extend
  deleteAll: ->
    @get("content").forEach (article) ->
      article.deleteRecord()
However, while executing, it says after three articles:
Uncaught TypeError: Cannot call method 'deleteRecord' of undefined
It works though when using a little delay:
CMS.ArticlesController = Em.ArrayController.extend
  deleteAll: ->
    @get("content").forEach (article) ->
      setTimeout((->
        article.deleteRecord()
      ), 500)
Why is that?
(I'm using Ember.js-rc.1 and Ember Data rev 11 together with the ember-localstorage-adapter by @rpflorence, but I don't think that matters, since I haven't called commit() yet...)
Update 1
Just figured out it also works with Ember.run.once...
Update 2
I opened a GitHub issue: https://github.com/emberjs/data/issues/772
As discussed on GitHub, the forEach() loop breaks because removing items shifts the indexes mid-iteration.
The solution:
"Copy" it in another array using toArray():
#get("content").toArray().forEach(article) ->
article.deleteRecord()
The nicer approach, if there were a function like forEachInReverse, would be to loop backwards: even though items are removed, the shifting indexes wouldn't hurt the loop.
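Such a helper is easy to write yourself; a minimal sketch in plain JavaScript (the name forEachInReverse is hypothetical, not an Ember API):
function forEachInReverse(array, callback) {
  // Walk backwards so removals never shift an index we still have to visit
  for (var i = array.length - 1; i >= 0; i--) {
    callback(array[i], i);
  }
}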
I still had issues with the above answer. Instead, I used a reverse for loop:
for(var i = items.length - 1; i >= 0; i--) {
items.objectAt(i).destroyRecord(); // or deleteRecord()
}
This destroys each item without disrupting the index.