How to read the result of a Flow in an Android Compose @Composable function for non-GUI consumption (e.g. for writing to a repository)

I am introducing DataStore in my Android Compose app for storing user preferences. While I am not happy about keeping the DataStore instance as an attribute of the Context instance - because the Context is accessible only from a @Composable (and not from, e.g., a repository) - I am still going for it.
Let's assume (following the referenced tutorial) that getEmail is the function that reads the DataStore key-value pair and returns a Flow instance.
My intention is to put the following (approximate) code into one of my top-level @Composable functions, which receives the AppContainer as an argument - such composables are at the very top level:
var email = getEmail.collectAsState("") // or should I use single()?
appContainer.salesOrderRepository.setEmail(email)
But I am afraid of doing it in the very crude way in which I have written it above. E.g., I am concerned about the following things:
Should I put this code in some scope (because its collectAsState can block the GUI thread), like:
val scope = rememberCoroutineScope()
scope.launch {
    var email = getEmail.collectAsState("") // or should I use single()?
    appContainer.salesOrderRepository.setEmail(email)
}
Can I even use the construction var email = getEmail.collectAsState("")? The email value may not be accessible immediately, because it is assigned asynchronously. That is why I may need something like this (just my imagination):
getEmail.collectAsState("").onReadingDone { it -> appContainer.salesOrderRepository.setEmail(it) }
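(While writing this, I realize the shape I am imagining can perhaps be approximated with Flow's first() inside a LaunchedEffect - just a sketch, where EMAIL_KEY and the dataStore extension on Context are placeholder names from my setup:)
LaunchedEffect(Unit) {
    // first() suspends until the DataStore emits its first value
    val email = context.dataStore.data           // context = LocalContext.current
        .map { prefs -> prefs[EMAIL_KEY] ?: "" } // EMAIL_KEY = stringPreferencesKey("email")
        .first()
    appContainer.salesOrderRepository.setEmail(email)
}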
And, of course, I am eager to execute this code as early as possible. Almost all of my repositories need configuration data, and if the application starts up and moves ahead while the configuration data is still being read from the DataStore, the app will not work as expected.
So - I am trying to do just one thing: read the DataStore (1. as an attribute of the Context, because there is no other proper global instance on which I could create such an attribute; 2. inside a @Composable, because the Context can be accessed from a @Composable only) and assign the read value to the attributes of one or more repositories. But I have observed that this operation is very complex and involves many concerns, which I listed above. That is why my question is quite long and complex, even though it effectively tries to tackle one simple thing only - reading + assigning.

You don't have to use the Context helper. Instead, I would look into integrating Hilt and injecting your data store into your repository.
Here is a blog about the technique.
https://medium.com/androiddevelopers/datastore-and-dependency-injection-ea32b95704e3
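A minimal sketch of what that wiring might look like (names such as "user_prefs", the email key, and SalesOrderRepository are placeholders following the question, not a definitive implementation):
import android.content.Context
import androidx.datastore.core.DataStore
import androidx.datastore.preferences.core.PreferenceDataStoreFactory
import androidx.datastore.preferences.core.Preferences
import androidx.datastore.preferences.core.stringPreferencesKey
import androidx.datastore.preferences.preferencesDataStoreFile
import dagger.Module
import dagger.Provides
import dagger.hilt.InstallIn
import dagger.hilt.android.qualifiers.ApplicationContext
import dagger.hilt.components.SingletonComponent
import javax.inject.Inject
import javax.inject.Singleton
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.map

@Module
@InstallIn(SingletonComponent::class)
object DataStoreModule {
    // The DataStore no longer hangs off the Context seen by composables;
    // Hilt owns it as an application-scoped singleton.
    @Provides
    @Singleton
    fun provideDataStore(@ApplicationContext context: Context): DataStore<Preferences> =
        PreferenceDataStoreFactory.create(
            produceFile = { context.preferencesDataStoreFile("user_prefs") }
        )
}

@Singleton
class SalesOrderRepository @Inject constructor(
    private val dataStore: DataStore<Preferences>
) {
    private val emailKey = stringPreferencesKey("email")

    // The repository reads its own configuration; no composable is involved.
    val email: Flow<String> = dataStore.data.map { prefs -> prefs[emailKey] ?: "" }
}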

Related

Scoped properties for Application Insights

I would like to know if there is an elegant way to add scoped properties to Application Insights, something similar to Serilog:
var yearEnricher = new PropertyEnricher("Year", year);
using (LogContext.PushProperties(yearEnricher))
{
    // ...
}
In the previous example every log created within the using block will have the property Year stamped on it.
I figured out how to do this when I want the property to be present within the whole request pipeline:
var requestTelemetry = context.Features.Get<RequestTelemetry>();
requestTelemetry?.Properties.Add(propertyName, propertyValue.ToString());
Sometimes I want to create a logging scope in code that is not related to the web context, so it doesn't make sense to rely on the IHttpContextAccessor. I acknowledge I could leverage OperationTelemetry and TelemetryClient.StartOperation to achieve my goal, but it is cumbersome, as I'd have to implement a few properties in which I have no interest (such as Name, Success, Duration...).
Is there a better way than relying on OperationTelemetry?
If you don't want to use OperationTelemetry, you might want to look into implementing your own ITelemetryInitializer (see documentation here).
It should be fairly easy to implement a stack-like global structure to hold the properties you want to push, and pop the stack on your Dispose method.
Note that you'll probably need to utilize CallContext in order for your stacks to be thread safe.
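To illustrate the stack-and-Dispose shape (sketched here in Kotlin with a ThreadLocal purely to show the push/pop pattern; in Application Insights the real implementation would live in an ITelemetryInitializer and use CallContext/AsyncLocal rather than a ThreadLocal):
// A global stack of scoped properties; close() pops, mirroring C#'s Dispose().
object LogScope {
    private val stack = ThreadLocal.withInitial { ArrayDeque<Pair<String, String>>() }

    fun push(name: String, value: String): AutoCloseable {
        stack.get().addLast(name to value)
        return AutoCloseable { stack.get().removeLast() }
    }

    // A telemetry-initializer equivalent would read this to stamp properties.
    fun current(): Map<String, String> = stack.get().associate { it }
}

fun main() {
    LogScope.push("Year", "2016").use {
        println(LogScope.current()) // {Year=2016}: stamped on everything in scope
    }
    println(LogScope.current()) // {}: popped when the scope closed
}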

Updating a running workflow with DynamicUpdateMap

I have a workflow running and I'm trying to update it dynamically. It is a Flowchart, and I'm trying to change the Next property of a FlowStep.
The problem is that when loading the instance with the map via WorkflowApplication.Load(workflowApplicationInstance, map);, I get the error:
In order for an implementation map to be directly applied to a workflow instance, the root of the definition must not have any public/imported children or public/imported delegates.
I tried saving the map to a file and to the database, because I saw in other examples that the map is saved with the extension .map, not .xaml or .xml. Anyway, it was useless; it still did not load.
Solved it. The problem was that when calling the PrepareForUpdate and CreateUpdateMap methods from the API, I was passing an ActivityBuilder parameter when it should have been an Activity. So, having the ActivityBuilder of a workflow, you can obtain its activity like this:
ActivityBuilder workflowDefinition;
Activity flowchartWorkflow = workflowDefinition.Implementation as Flowchart;
assuming your workflow definition has a Flowchart as its root.

Posting a Task to the Web Console's Execution (Management) Context

In the Apache Brooklyn web interface, we would like to display some content for the system managers. The content is too long to be served as a simple sensor value.
Our idea was to create a task, write the content into the output stream of the task, and then offer the REST-based URL to the managers like this:
/v1/activities/{task}/stream/stdout (with the link masked with some nice text, of course)
The stream and task are created like this:
LOG.info("{} Creating Activity for ClusterReport Feed", this);
activity = Tasks.builder().
displayName("clusterReportFeed").
description("Output for the Cluster Report Feed").
body(new Runnable() {
#Override
public void run() {
//DO NOTHING
}
}).
parallel(true).
build();
LOG.info("{} Task Created with Id: " + activity.getId(), this);
Entities.submit(server, activity).getUnchecked();
The task seems to be created and the interaction works perfectly fine.
However, when I want to access the task's output stream from my browser using the prepared URL, I get an error saying that the task does not exist.
Our suspicion is that we are not in the right Management/Execution Context. The web page is running in a different context than the entities and their sensors. How can we submit a task so that it is also visible to the web console's context?
Alternatively, is it possible to write the content into a file and then offer it for download via Jetty (Brooklyn's web server)? That would be a much simpler way.
Many tasks in Brooklyn default to being transient - i.e. they are deleted shortly after they complete (things like effector invocations are by default non-transient).
You can mark your task as non-transient using the code below in your use of the task builder:
.tag(BrooklynTaskTags.NON_TRANSIENT_TASK_TAG)
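Applied to the builder from the question, that might look like the sketch below (written in Kotlin; the package names are from Brooklyn 0.9.x and worth double-checking against your version):
import org.apache.brooklyn.core.mgmt.BrooklynTaskTags
import org.apache.brooklyn.util.core.task.Tasks

val activity = Tasks.builder<Void>()
    .displayName("clusterReportFeed")
    .description("Output for the Cluster Report Feed")
    .tag(BrooklynTaskTags.NON_TRANSIENT_TASK_TAG) // keep the task after it completes
    .body(Runnable { /* do nothing; the task only exists to expose its stdout stream */ })
    .parallel(true)
    .build()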
However, note that (as of Brooklyn version 0.9.0) tasks are kept in-memory using soft references. This means the stdout of the task will likely be lost at some point in the future, when that memory is needed for other in-memory objects.
For your use-case, would it make sense to have this as an effector result perhaps?
Or could you write to an object store such as S3 instead? The S3-approach would seem best to me.
For writing it to a file, care must be taken when used with Brooklyn high-availability. Would you write to a shared volume?
If you do write to a file, then you'd need to provide a web-extension so that people can access the contents of that file. As of Brooklyn 0.9.0, you can add your own WARs in code when calling BrooklynLauncher (which calls BrooklynWebServer).

Apache JENA TDB files locked after creation with web application

I am using JENA to create a triple store (TDB functionality) with the following code:
public void createTDBFromOWL() {
    Dataset dataset = TDBFactory.createDataset(newTripleStoreLocation);
    dataset.begin(ReadWrite.WRITE);
    try {
        // getting the model inside the transaction
        Model model = dataset.getDefaultModel();
        FileManager fileManager = FileManager.get();
        Model holder = fileManager.readModel(model, newOWLFileLocation);
        // committing the dataset
        dataset.commit();
        model.close();
        holder.close();
    } finally {
        dataset.end();
        dataset.close();
    }
}
After I create the triple store, the files created are locked by my application server (Glassfish), and I can't delete them until I manually stop Glassfish and it releases its lock. As shown in the above code, I think I am closing everything, so I don't get why a lock is maintained on the files.
When you call Dataset#close(), the implementation delegates that call to an underlying DatasetGraphBase#close(), which ultimately delegates to DatasetGraphTDB#_close().
This results in calls to TripleTable#close() and QuadTable#close(). Both of these call (several) NodeTupleTable#close(). Continuing with the indirection, this calls NodeTable#close() and TupleTable#close(). The former is an interface, so we'd have to make an educated guess as to which class is actually used in your setup. The latter iterates through a collection of TupleIndex objects and calls close() on each of them. TupleIndex is, also, an interface.
There is only one meaningful hierarchy of descendants from TupleIndex that results in something which can lock a file, which leads us to TupleIndexRecord#close(). We can then follow a particular implementation of RangeIndex called BPlusTree all the way down until we see actual ownership of the MappedByteBuffer.
Ultimately, while reading the implementation of BlockAccessMapped#close(), it seems like the entire hierarchy is closing things properly, down to the final classes, but that this longstanding bug may be the culprit. From the documentation:
once a file has been mapped a number of operations on that file will
fail until the mapping has been released (e.g. delete, truncating to a
size less than the mapped area). However the programmer can't control
accurately the time at which the unmapping takes place --- typically
it depends on the processing of finalization or a PhantomReference
queue.
So there you have it. Despite Jena's best efforts, one cannot yet control when that file will be unmapped in Java. This ends up being the tradeoff for memory-mapped file I/O in Java.
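For reference, here is a minimal, self-contained sketch (in Kotlin, no Jena involved) of the JDK behaviour described above; on Windows the delete fails while the mapping is still reachable, whereas POSIX systems typically allow deleting a mapped file:
import java.nio.ByteBuffer
import java.nio.channels.FileChannel
import java.nio.file.Files
import java.nio.file.StandardOpenOption

fun main() {
    val path = Files.createTempFile("mapped", ".dat")
    var mapped: ByteBuffer? = null
    FileChannel.open(path, StandardOpenOption.READ, StandardOpenOption.WRITE).use { channel ->
        channel.write(ByteBuffer.wrap(ByteArray(1024)))
        // Map the file; the mapping is NOT released by channel.close().
        mapped = channel.map(FileChannel.MapMode.READ_WRITE, 0, 1024)
    }
    try {
        // On Windows this fails until the MappedByteBuffer is garbage-collected,
        // which is the same situation Glassfish + TDB ends up in.
        Files.delete(path)
        println("delete succeeded (POSIX allows deleting mapped files)")
    } catch (e: java.io.IOException) {
        println("delete failed while the mapping is alive: $e")
    }
    println(mapped?.limit()) // keep the mapping reachable until here
}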

Deep level access control in DataMapper ORM

Introduction
I'm currently building an access control system in my DataMapper ORM installation (with CodeIgniter 2.*). I have the initial injection of the User's rights (Root/Anonymous layers too) working perfectly. When a User logs in, the DataMapper calls made in the system are automatically marked with the Userrights that User has.
So up to this point it works perfectly, but now I'm in a bit of a bind. The problem is that I need some way to catch and filter each method call on the Object that is instantiated.
I have two special calls so I can disable the Userrights checks too. This is particularly handy at the exact moment I want to log in a User and need to do the initial checks:
DataMapper::disable_userrights();
$this->_user = new User($this->session->userdata('_user_id'));
$this->_userrights = ($this->_user ? $this->_user->userrights(TRUE) : NULL);
DataMapper::enable_userrights();
The above makes sure I can do the initial injection of the User (and its Userrights). Inside the DataMapper library I use $CI =& get_instance(); to access the _ globals I use. The general rule in this installation I'm building is that $this->_ is reserved for a "globals" system that always gets loaded (or can sometimes be NULL/FALSE), so I can easily access information that's almost always required on each page/call.
Details
OK, so imagine that, given the above, my logged-in User has the Userrights Create/Read/Update on the User Entity. Now I call a simple:
$test = new User();
$test->get_where('name', 'Allendar');
The $_rights array inside the DataMapper instance will know that the currently logged-in User is allowed to perform certain tasks on "this" instance:
protected $_rights = array(
    'Create' => TRUE,
    'Read'   => TRUE,
    'Update' => TRUE,
    'Delete' => FALSE,
);
The issue
Now comes my problem. I want to control these Userrights by validating them on each action that is performed. I have the following ideas:
Super redundant: make a global validation method that is executed at the start of every other method in the DataMapper class.
Problem 1: I have to spam the whole DataMapper Class with the same calls
Problem 2: I have no control over DataMapper extension methods
Problem 3: How to detect relational includes? They should be validated too
Low level binding on certain Core DataMapper calls where I can clearly detect what kind of action is executed on the database (C/R/U/D).
So I'm aiming for option 2, as it will also solve problem 2 of option 1.
The problem is that DataMapper is so massive, and it's pretty complex to discern what actually happens, and when, at its deepest calling level. Furthermore, it looks like all the methods are very scattered and hardly ever use each other ($this->get() is often not used to make the eventual call that fetches a dataset).
So my goal is:
User (logged-in, Anonymous, Root) makes a DataMapper instance
$user_test = new User;
User wants to get $user_test (Read)
$user_test->get(1);
DataMapper will validate the actual call that is done at the database
IF it is only SELECT; OK
IF it is something other than SELECT (or it JOINs to other Models that the User doesn't have access to), it will fail with a clear error message
IF JOINed Models also validate; OK
Return the actual instance;
IF OK: continue DataMapper's normal workflow
IF not OK: inform the User and return the normal empty DataMapper instance of that Model
Furthermore, for this system I think I will need to add some customization for the raw_sql (etc.) SQL calls, so that the rights related to such a SQL statement have to be injected manually, or those calls are allowed only for the Root User.
Recap
I'm curious whether someone has ever attempted something like this in DataMapper, or has some hints on how I can use/intercept those lowest-level calls in DataMapper.
If I can get some clarity on the deepest level of DataMapper's actual final query call, I can probably get a long way myself too.
I would suggest not doing this in Datamapper itself (mainly due to the complexity of the code, as you have already noticed yourself).
Instead, use a base model, and have that extend Datamapper. Then add the code required for your ACL checks to the base model, and overload every Datamapper method that needs an ACL check. Have each overload call your ACL, deal with access being denied, and, if access is granted, simply return the result of parent::method();.
Instead of extending Datamapper, your application models should then extend this base model, so they will inherit the ACL features.
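The shape of that suggestion, sketched below in Kotlin purely to show the structure (all class and method names here are invented for illustration; in CodeIgniter the base model would extend Datamapper and delegate via parent::get() and friends):
// Hypothetical stand-ins for the ORM base class and the ACL service.
open class OrmModel {
    open fun get(id: Int): OrmModel {
        // ...run the actual query...
        return this
    }
}

interface Acl {
    fun allows(action: String, model: String): Boolean
}

class AccessDeniedException(msg: String) : RuntimeException(msg)

// The "base model": application models extend this instead of the ORM class.
abstract class AclModel(private val acl: Acl) : OrmModel() {
    override fun get(id: Int): OrmModel {
        val model = this::class.simpleName ?: "Unknown"
        if (!acl.allows("Read", model)) {
            throw AccessDeniedException("Read denied on $model")
        }
        return super.get(id) // access granted: defer to the normal ORM workflow
    }
    // ...repeat the same pattern for save(), update(), delete(), etc.
}

// Application models inherit the ACL behaviour automatically.
class User(acl: Acl) : AclModel(acl)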