Android 12 Device Owner Provisioning with QR code - kotlin

I have an application that can be successfully set up as Device Owner on devices up to Android 12 via QR code, and now I have added two activities for Android 12, as described in this link: Android 12 Device Owner Provisioning.
The device owner setup completes, but when I want to run my app I get this error (and I don't have any PendingIntent):
java.lang.IllegalArgumentException: io.phoenixdev.afw.emm: Targeting S+ (version 31 and above) requires that one of FLAG_IMMUTABLE or FLAG_MUTABLE be specified when creating a PendingIntent.
Strongly consider using FLAG_IMMUTABLE, only use FLAG_MUTABLE if some functionality depends on the PendingIntent being mutable, e.g. if it needs to be used with inline replies or bubbles.

This is because one or more of your PendingIntents are created without a mutability flag. Can you check whether you are passing the mutability flag (either FLAG_MUTABLE or FLAG_IMMUTABLE)? If you are not modifying the PendingIntent once created, I would recommend adding FLAG_IMMUTABLE.
Here is the method I use to get flags for a PendingIntent, so that you don't have to do Android 12 checks everywhere:
public static int getPendingIntentFlag(boolean mutable) {
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.S) {
        return mutable ? PendingIntent.FLAG_MUTABLE : PendingIntent.FLAG_IMMUTABLE;
    }
    return 0;
}
Simply call getPendingIntentFlag(false) for immutable intents, like this:
val pendingIntent = PendingIntent.getBroadcast(
    context, 0, intent,
    PendingIntent.FLAG_UPDATE_CURRENT or Utils.getPendingIntentFlag(false)
)
This way, whether the device runs Android 12 or not, you don't need to worry about this flag.
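Since the question is tagged Kotlin, here is roughly the same helper written in Kotlin (just a sketch; the function name and where you put it are up to you):
import android.app.PendingIntent
import android.os.Build

// Returns the mutability flag to OR into your PendingIntent flags.
// On Android 12+ (API 31) one of FLAG_MUTABLE / FLAG_IMMUTABLE is mandatory;
// on older versions no extra flag is needed.
fun getPendingIntentFlag(mutable: Boolean): Int =
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.S) {
        if (mutable) PendingIntent.FLAG_MUTABLE else PendingIntent.FLAG_IMMUTABLE
    } else {
        0
    }
It drops into the same call site: PendingIntent.FLAG_UPDATE_CURRENT or getPendingIntentFlag(false).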

Related

Neo4j 3.5's embedded database does not seem to persist data

I am trying to build a small command-line tool that will store data in a Neo4j graph. To do this I have started experimenting with Neo4j 3.5's embedded databases. After putting together the following example, I have found that either the nodes I am creating are not being saved to the database, or the method of database creation is overwriting my previous run.
The Example:
fun main() {
    // Spin up the database
    val graphDBFactory = GraphDatabaseFactory()
    val graphDB = graphDBFactory.newEmbeddedDatabase(File("src/main/resources/neo4j"))
    registerShutdownHook(graphDB)
    val tx = graphDB.beginTx()
    graphDB.createNode(Label.label("firstNode"))
    graphDB.createNode(Label.label("secondNode"))
    val result = graphDB.execute("MATCH (a) RETURN COUNT(a)")
    println(result.resultAsString())
    tx.success()
}
private fun registerShutdownHook(graphDb: GraphDatabaseService) {
    // Registers a shutdown hook for the Neo4j instance so that it
    // shuts down nicely when the VM exits (even if you "Ctrl-C" the
    // running application).
    Runtime.getRuntime().addShutdownHook(object : Thread() {
        override fun run() {
            graphDb.shutdown()
        }
    })
}
I would expect that every time I run main the resulting query count will increase by 2.
That is currently not the case and I can find nothing in the docs that references a different method of opening an already created embedded database. Am I trying to use the embedded database incorrectly or am I missing something? Any help or info would be appreciated.
Build info:
Kotlin JVM 1.4.21
Neo4j-community-3.5.35
Transactions in Neo4j 3.x have a three-stage model:
create
success / failure
close
You missed the third stage, which is what actually commits (or rolls back) the transaction.
You can use Kotlin's use, since Transaction is an AutoCloseable.
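A minimal sketch of the question's main with that fix applied, so the transaction is always closed (and therefore committed after success()):
import org.neo4j.graphdb.Label
import org.neo4j.graphdb.factory.GraphDatabaseFactory
import java.io.File

fun main() {
    val graphDB = GraphDatabaseFactory().newEmbeddedDatabase(File("src/main/resources/neo4j"))
    registerShutdownHook(graphDB) // same helper as in the question
    graphDB.beginTx().use { tx ->
        graphDB.createNode(Label.label("firstNode"))
        graphDB.createNode(Label.label("secondNode"))
        val result = graphDB.execute("MATCH (a) RETURN COUNT(a)")
        println(result.resultAsString())
        tx.success() // stage 2: mark the transaction as successful
    }                // stage 3: close() via `use`, which performs the commit
}
With the commit actually happening, the count should now grow by 2 on every run.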

How to mimic Optaplanner 7's MoveIteratorFactoryToMoveSelectorBridge in Optaplanner 8

We have implemented a custom construction heuristic with OptaPlanner 7. We didn't use a simple CustomPhaseCommand; instead, we extended MoveSelectorConfig and overrode buildBaseMoveSelector to return our own MoveFactory wrapped in a MoveIteratorFactoryToMoveSelectorBridge. We decided to do so because it gives us the following advantages:
global termination config is supported out of the box
type safe configuration from code (no raw Strings)
With OptaPlanner 8, the method buildBaseMoveSelector is gone from the MoveSelectorConfig API, and building a custom config class seems to be prevented in the new implementation of MoveSelectorFactory.
Is it still possible to inject a proper custom construction heuristic into the OptaPlanner 8 configuration, and if so, how? Or should we be using a CustomPhaseCommand with a custom, self-implemented termination?
EDIT:
For clarity, in OptaPlanner 7 we had the following snippet in our OptaPlanner config (defined in Kotlin code):
ConstructionHeuristicPhaseConfig().apply {
    foragerConfig = ConstructionHeuristicForagerConfig().apply {
        pickEarlyType = FIRST_FEASIBLE_SCORE
    }
    entityPlacerConfig = QueuedEntityPlacerConfig().apply {
        moveSelectorConfigList = listOf(
            CustomMoveSelectorConfig().apply {
                someProperty = 1
                otherProperty = 0
            }
        )
    }
},
CustomMoveSelectorConfig extends MoveSelectorConfig and overrides buildBaseMoveSelector:
class CustomMoveSelectorConfig(
    var someProperty: Int = 0,
    var otherProperty: Int = 0,
) : MoveSelectorConfig<CustomMoveSelectorConfig>() {
    override fun buildBaseMoveSelector(
        configPolicy: HeuristicConfigPolicy?,
        minimumCacheType: SelectionCacheType?,
        randomSelection: Boolean,
    ): MoveSelector {
        return MoveIteratorFactoryToMoveSelectorBridge(
            CustomMoveFactory(someProperty, otherProperty),
            randomSelection
        )
    }
}
To summarize: we really need to plug in our own MoveSelector with the custom factory. I think this is not possible with OptaPlanner 8 at the moment.
Interesting extension.
Motivation for the changes in 8:
buildBaseMoveSelector was not public API (the config package was not in the api package; in 7 we only guaranteed XML backwards compatibility for the config package). Now we also guarantee API backwards compatibility for the config package, including programmatic configuration, because we moved all build* methods out of it.
In 8.2 or later we want to internalize the configuration in the SolverFactory, so we can build thousands of Solver instances faster. For example, class loading would then be done only once, at SolverFactory build time, and no longer at every Solver build.
Anyway, let's first see if you can use the default way to override the moves of the CH, by explicitly configuring the queuedEntityPlacer and its MoveIteratorFactory: https://docs.optaplanner.org/latestFinal/optaplanner-docs/html_single/#allocateEntityFromQueueConfiguration (a rough sketch of that configuration is at the end of this answer).
I guess not, because you'd need mimic support... for the selected entity you need to generate n moves, but always for the same entity during one placement (hence the need for mimicking).
Clearly, the changes in 8 prevent users from plugging in their own MoveSelectors (which is an internal API, admittedly). We might be able to add an internal API to allow that again.
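For reference, the "default way" mentioned above (explicitly configuring the queuedEntityPlacer with a moveIteratorFactory instead of a custom MoveSelectorConfig) would look roughly like this in programmatic Kotlin configuration. This is only a sketch: MoveIteratorFactoryConfig and the import paths come from the standard config package and should be double-checked against your OptaPlanner 8 version; CustomMoveFactory is the questioner's own MoveIteratorFactory implementation, and passing someProperty/otherProperty to it is not shown here.
import org.optaplanner.core.config.constructionheuristic.ConstructionHeuristicPhaseConfig
import org.optaplanner.core.config.constructionheuristic.placer.QueuedEntityPlacerConfig
import org.optaplanner.core.config.heuristic.selector.move.factory.MoveIteratorFactoryConfig

// Sketch only: plugs a MoveIteratorFactory class into the CH placer via configuration,
// rather than bridging a hand-built MoveSelector as in OptaPlanner 7.
val constructionHeuristicConfig = ConstructionHeuristicPhaseConfig().apply {
    entityPlacerConfig = QueuedEntityPlacerConfig().apply {
        moveSelectorConfigList = listOf(
            MoveIteratorFactoryConfig().apply {
                moveIteratorFactoryClass = CustomMoveFactory::class.java
            }
        )
    }
}
As discussed above, this only helps if the generated moves always target the single entity currently being placed, which is where the missing mimic support comes in.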

Consumable channel

Use Case
Android fragment that consumes items of T from a ReceiveChannel<T>. Once consumed, the Ts should be removed from the ReceiveChannel<T>.
I need a ReceiveChannel<T> that supports consuming items from it. It should function as a FIFO queue.
I currently attach to the channel from my UI like this:
launch(uiJob) { channel.consumeEach { /** ... */ } }
I detach by calling uiJob.cancel().
Desired behavior:
val channel = Channel<Int>(UNLIMITED)
channel.send(1)
channel.send(2)
// ui attaches, receives `1` and `2`
channel.send(3) // ui immediately receives `3`
// ui detaches
channel.send(4)
channel.send(5)
// ui attaches, receiving `4` and `5`
Unfortunately, when I detach from the channel, the channel is closed. This causes .send(4) and .send(5) to throw exceptions because the channel is closed. I want to be able to detach from the channel and have it remain usable. How can I do this?
Channel<Int>(UNLIMITED) fits my use case perfectly, except that it closes the channel when it is unsubscribed from. I want the channel to remain open. Is this possible?
Channel.consumeEach method calls Channel.consume method which has this line in documentation:
Makes sure that the given block consumes all elements from the given channel by always invoking cancel after the execution of the block.
So the solution is to simply not use consume[Each]. For example you can do:
launch(uiJob) { for (it in channel) { /** ... */ } }
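A runnable sketch of that idea (using current kotlinx.coroutines syntax rather than the older launch(uiJob) style from the question), showing that cancelling the consuming job leaves the channel itself open:
import kotlinx.coroutines.*
import kotlinx.coroutines.channels.Channel

fun main() = runBlocking {
    val channel = Channel<Int>(Channel.UNLIMITED)
    channel.send(1)
    channel.send(2)

    // "UI attaches": iterate the channel directly instead of consumeEach
    var uiJob = launch { for (value in channel) println("received $value") }
    channel.send(3)
    delay(100)

    // "UI detaches": cancelling the job stops this consumer only; the channel stays open
    uiJob.cancel()
    channel.send(4) // no exception, the value is simply buffered
    channel.send(5)

    // "UI re-attaches": the buffered 4 and 5 are delivered
    uiJob = launch { for (value in channel) println("received $value") }
    delay(100)
    uiJob.cancelAndJoin()
}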
You can use BroadcastChannel. However, you need to specify a limited capacity (such as 1), as UNLIMITED and 0 (for rendezvous) are not supported by BroadcastChannel.
You can also use ConflatedBroadcastChannel, which always gives the latest value it had to new subscribers, like LiveData does.
By the way, is it a big deal if your new Fragment instance receives only the latest value? If not, then just go with ConflatedBroadcastChannel. Otherwise, neither of the BroadcastChannels may suit your use case (try it and see if you get the behavior you're looking for).
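If you go the ConflatedBroadcastChannel route, here is a rough sketch (this API is from older kotlinx.coroutines releases and has since been deprecated in favour of StateFlow/SharedFlow):
import kotlinx.coroutines.*
import kotlinx.coroutines.channels.ConflatedBroadcastChannel

fun main() = runBlocking {
    val broadcast = ConflatedBroadcastChannel<Int>()
    broadcast.offer(1)
    broadcast.offer(2) // conflated: only the latest value (2) is retained

    // "UI attaches": a new subscription immediately gets the latest value, then live updates
    val subscription = broadcast.openSubscription()
    val uiJob = launch { for (value in subscription) println("received $value") }
    delay(100)
    broadcast.offer(3)
    delay(100)

    // "UI detaches": cancel only this subscription; the broadcast channel itself stays open
    subscription.cancel()
    uiJob.cancelAndJoin()
    broadcast.offer(4) // still fine, a later subscriber would see 4
}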

App Folder files not visible after un-install / re-install

I noticed this in the debug environment, where I have to do many re-installs in order to test persistent data storage, initial settings, etc. It may not be relevant in production, but I mention it anyway to inform other developers.
Any files created by an app in its App Folder are not 'visible' to queries after a manual un-install / re-install (from the IDE, for instance). The same applies to the 'Encoded DriveID' - it is no longer valid.
It is probably 'by design', but it effectively creates 'orphans' in the app folder until they are manually cleaned up via 'drive.google.com > Manage Apps > [yourapp] > Options > Delete hidden app data'. It also creates problems if an app relies on finding files by metadata, title, ... since these seem to be gone. As I said, not a production problem, but it can create some frustration during development.
Can any friendly Googlers confirm this? Is there any other way to get to these files after a re-install?
Try this approach:
Use requestSync() in onConnected() as:
@Override
public void onConnected(Bundle connectionHint) {
    super.onConnected(connectionHint);
    Drive.DriveApi.requestSync(getGoogleApiClient()).setResultCallback(syncCallback);
}
Then, in its callback, query the contents of the drive using:
final private ResultCallback<Status> syncCallback = new ResultCallback<Status>() {
    @Override
    public void onResult(@NonNull Status status) {
        if (!status.isSuccess()) {
            showMessage("Problem while retrieving results");
            return;
        }
        query = new Query.Builder()
                .addFilter(Filters.and(Filters.eq(SearchableField.TITLE, "title"),
                        Filters.eq(SearchableField.TRASHED, false)))
                .build();
        Drive.DriveApi.query(getGoogleApiClient(), query)
                .setResultCallback(metadataCallback);
    }
};
Then, in its callback, if found, retrieve the file using:
final private ResultCallback<DriveApi.MetadataBufferResult> metadataCallback =
        new ResultCallback<DriveApi.MetadataBufferResult>() {
            @SuppressLint("SetTextI18n")
            @Override
            public void onResult(@NonNull DriveApi.MetadataBufferResult result) {
                if (!result.getStatus().isSuccess()) {
                    showMessage("Problem while retrieving results");
                    return;
                }
                MetadataBuffer mdb = result.getMetadataBuffer();
                DriveId driveId = null;
                for (Metadata md : mdb) {
                    Date createdDate = md.getCreatedDate();
                    driveId = md.getDriveId();
                }
                if (driveId != null) {
                    readFromDrive(driveId);
                }
            }
        };
Job done!
Hope that helps!
It looks like Google Play services has a problem. (https://stackoverflow.com/a/26541831/2228408)
For testing, you can do it by clearing Google Play services data (Settings > Apps > Google Play services > Manage Space > Clear all data).
Or, at this time, you need to implement it by using Drive SDK v2.
I think you are correct that it is by design.
By inspection I have concluded that until an app places data in the AppFolder, Drive does not sync down to the device, however much you try to hassle it. Therefore it is impossible to check for the existence of an AppFolder placed by another device or a prior installation. I'd assume that this was to try to create a consistent clean install.
I can see that there are a couple of strategies to work around this:
1) Place dummy data in the AppFolder, then sync and re-check.
2) Accept that in the first instance there is the possibility of duplicates: since you cannot access the existing file, by definition you will create a new copy, and use custom metadata to come up with a scheme to differentiate like-named files and choose which one you want to keep (essentially, implement your conflict-merge strategy across the two different files).
I've done the second: I have an update number to compare data from different devices and decide which version I want, and so decide whether to upload, download or leave alone. As my data is an SQLite DB, I also have some code to only sync once updates have settled down, and I deliberately consider updating two devices at once foolish; the results are consistent but undefined as to which device will win.

Detecting the platform of a Windows Store App

Is there a way to ask at runtime whether a Windows Store app (compiled for ARM and x86/64) is currently being executed on an ARM device, or more specifically on a Microsoft Surface tablet, from within C#? Or is it necessary to compile two versions of the same app to behave differently on different platforms?
This can be done via the following code (according to this SO post):
[DllImport("kernel32.dll")]
internal static extern void GetNativeSystemInfo(ref SystemInfo lpSystemInfo);
internal static bool IsArmBased()
{
var sysInfo = new SystemInfo();
GetNativeSystemInfo(ref sysInfo);
return sysInfo.wProcessorArchitecture == ProcessorArchitectureArm; //ushort 5
}
This does pass the WACK test, but I wouldn't count on it being around forever. Think very hard about why you need this information (is it just for stats, or are you changing the behaviour of your app? If so, why?).
using Windows.ApplicationModel;
Package package = Package.Current;
PackageId packageId = package.Id;
String arch = String.Format("{0}", packageId.Architecture);
This will return "X86" or "ARM", depending on the underlying hardware.