How to clear object store in Mule 3

In my application I use a timestamp to query data. The query uses the current time and the last successful run time, so that no data is lost in between. This approach also has drawbacks, of course: if the application has an issue for a long time, or crashes, a lot of data has to be retrieved at once, precisely because the application uses the timestamp of the last successful run.
Internally, the application works as follows: a query is executed based on the timestamp, and the received payload is processed in a foreach loop. During testing I ran into the issue that the retrieved data is always the same. Even when I change the value in the object store to a hard-coded one, I keep getting the same payload; it seems the value being used comes from a cache. I tried the "Dispose store" and "Remove" operations, but those don't work either. Does anyone know how I can delete the object store cache locally, but also on the private cloud edition?
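For reference, this is roughly the kind of Java component I would expect to wipe the store with (only a sketch: the store name "myObjectStore" is a placeholder for the one in my configuration, and I am assuming the Mule 3 ObjectStoreManager and ListableObjectStore APIs):

    // Mule 3 Java component that removes every key from a named object store.
    import java.io.Serializable;

    import org.mule.api.MuleEventContext;
    import org.mule.api.lifecycle.Callable;
    import org.mule.api.store.ListableObjectStore;
    import org.mule.api.store.ObjectStoreManager;

    public class ClearObjectStore implements Callable {

        @Override
        public Object onCall(MuleEventContext eventContext) throws Exception {
            ObjectStoreManager manager = eventContext.getMuleContext()
                    .getRegistry().lookupObject(ObjectStoreManager.class);

            // Returns the named store, creating it if it does not exist yet.
            ListableObjectStore<Serializable> store =
                    manager.getObjectStore("myObjectStore"); // placeholder name

            // Remove every key so the next poll starts from a clean state.
            for (Serializable key : store.allKeys()) {
                store.remove(key);
            }
            return "cleared";
        }
    }

This removes keys one by one; some Mule 3 versions also expose an ObjectStore.clear() method, but I am not sure whether that behaves the same on the private cloud edition.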
Thanks for helping

Related

Mule is taking a long time for a simple select on the first execution

I am just using an HTTP listener and a select in a Mule flow. It is a GET method, passing an ID as input, and the same ID is passed to the select as its input. There is a 3 to 4 minute delay when we execute it via Mule for the first time, but in the DB the query takes only milliseconds.
The delay only happens after adding the parameter to the select.
Can someone help me understand why there is a delay the first time, and how to resolve it?
A possible cause is how you created the metadata. For example, if you use a huge CSV file as the example for your data structure, Mule reads the whole file to get the headers, which takes time.
Solution: if you create metadata by example, use small examples with a couple of rows of data.
Usually the main points that cause performance issues in first executions are:
JVM and Mule Runtime warmup
Time to establish connections
The first one cannot be avoided. For the second one, a connection pool is usually used to mitigate it somewhat. Having said that, 4 minutes is a very excessive time for either of those. You need to do some performance analysis: add logs before and after each operation in the flow, enable debug logs for the database connector, and even connect a Java profiler to the Mule JVM to understand what could be happening.
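To make the analysis concrete, here is a minimal sketch of the kind of timing to add, separating connection establishment from query execution (HikariCP is used as an example pool; the JDBC URL, credentials, table, and column are placeholders):

    // Times connection establishment and query execution separately.
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    import com.zaxxer.hikari.HikariConfig;
    import com.zaxxer.hikari.HikariDataSource;

    public class FirstCallTiming {
        public static void main(String[] args) throws Exception {
            HikariConfig config = new HikariConfig();
            config.setJdbcUrl("jdbc:mysql://localhost:3306/mydb"); // placeholder
            config.setUsername("user");
            config.setPassword("secret");
            config.setMinimumIdle(5); // keep warm connections ready

            try (HikariDataSource pool = new HikariDataSource(config)) {
                long t0 = System.nanoTime();
                try (Connection conn = pool.getConnection()) {
                    long t1 = System.nanoTime();
                    System.out.printf("connect: %d ms%n", (t1 - t0) / 1_000_000);

                    try (PreparedStatement ps = conn.prepareStatement(
                            "SELECT name FROM items WHERE id = ?")) {
                        ps.setInt(1, 42);
                        try (ResultSet rs = ps.executeQuery()) {
                            while (rs.next()) { /* consume the rows */ }
                        }
                    }
                    long t2 = System.nanoTime();
                    System.out.printf("query:   %d ms%n", (t2 - t1) / 1_000_000);
                }
            }
        }
    }

If the "connect" number dominates the first run and shrinks afterwards, the pool and its warm-up settings are the place to look; if "query" dominates, look at the statement and the database instead.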
You also have to consider whether there is a high number of records to process: even if the database can answer quickly, it might take some time to format the results.

BigQuery Cache Not Working

I noticed that BigQuery no longer caches the same query, even though I have chosen to use the cache in the GUI (both Alpha and Classic). I didn't edit the query at all, just kept clicking the run query button, and every time the GUI executed the query without using cached results.
It happens with my PHP script as well. Before, it was able to use the cache and came back with results very quickly; now it executes the query every time, even if the same query was executed minutes ago. I can confirm the behaviour in the logs.
I am wondering if anything changed in the last few weeks? Or does some kind of account-level setting control this? Because it was working fine for me.
As per the official docs here, the cache is disabled when:
...any of the tables referenced by the query have recently received
streaming inserts...
Even if you are streaming to one partition and then querying another, this invalidates the caching functionality for the whole table. There is an open feature request asking to be able to hit the cache when doing streaming inserts into one partition but querying a different partition.
EDIT:
After some investigation I've found out that some months ago there was an issue that allowed queries to hit the cache even while streaming inserts were being made. This was not the expected behavior, and it was therefore fixed in May. I guess this is the change you have experienced and are asking about.
The docs have not changed in this regard, and they aren't/weren't incorrect; the previous behavior was the incorrect one.
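If you want to confirm from code whether a particular run was served from the cache, a sketch along these lines works (this uses the google-cloud-bigquery Java client; the project, dataset, and table names are placeholders):

    // Runs a query with the cache enabled and reports whether it was a cache hit.
    import com.google.cloud.bigquery.BigQuery;
    import com.google.cloud.bigquery.BigQueryOptions;
    import com.google.cloud.bigquery.Job;
    import com.google.cloud.bigquery.JobInfo;
    import com.google.cloud.bigquery.JobStatistics.QueryStatistics;
    import com.google.cloud.bigquery.QueryJobConfiguration;

    public class CacheHitCheck {
        public static void main(String[] args) throws Exception {
            BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

            QueryJobConfiguration queryConfig = QueryJobConfiguration.newBuilder(
                    "SELECT COUNT(*) FROM `my_project.my_dataset.my_table`") // placeholder
                    .setUseQueryCache(true) // explicitly ask for cached results
                    .build();

            Job job = bigquery.create(JobInfo.of(queryConfig));
            job = job.waitFor(); // block until the query finishes

            QueryStatistics stats = job.getStatistics();
            System.out.println("cache hit: " + stats.getCacheHit());
        }
    }

With ongoing streaming inserts into the referenced table, getCacheHit() will report false even though setUseQueryCache(true) is set, which matches the behavior described above.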

Azure functions output caching

I am creating Azure Functions to return data from a database (Azure AS). I will be returning the same data for all requests, so caching the output seems like a good idea, as the data changes only once a day.
What are my options here?
Options listed from most simple to most complex:
One option is to use static variables, but since the process can get recycled very quickly (assume every few minutes), that may not help much.
Cache via storage (Blob / Table). Your function can first try to read from the table; if the entry is missing, it can then read from the database and save the result back to the table. You could have a second timer-triggered function that deletes old cache entries every N hours.
I'd recommend starting here (see the sketch after this list).
Azure Functions can still run arbitrary code, so you could call out to any other caching service (e.g., Redis) and use the same patterns that you'd use in ASP.NET.
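Here is a minimal sketch of the read-through pattern from the storage option. BlobStore is a hypothetical wrapper over whatever storage API you use; only the pattern itself is the point:

    // Read-through cache: serve from storage when fresh, else hit the database
    // and write the result back.
    import java.time.Duration;
    import java.time.Instant;

    interface BlobStore {
        String read(String key);                       // null when missing
        void write(String key, String value, Instant writtenAt);
        Instant writtenAt(String key);                 // when the entry was stored
    }

    public class ReadThroughCache {
        private final BlobStore store;
        private final Duration maxAge;

        public ReadThroughCache(BlobStore store, Duration maxAge) {
            this.store = store;
            this.maxAge = maxAge;
        }

        public String get(String key) {
            String cached = store.read(key);
            if (cached != null && Duration.between(store.writtenAt(key), Instant.now())
                    .compareTo(maxAge) < 0) {
                return cached;                         // fresh enough: serve from cache
            }
            String fresh = queryDatabase(key);         // the expensive call
            store.write(key, fresh, Instant.now());
            return fresh;
        }

        private String queryDatabase(String key) {
            return "result-for-" + key;                // stand-in for the real query
        }
    }

Since the data changes only once a day, a maxAge of Duration.ofHours(24), or an explicit daily invalidation from the timer-triggered function, would match this scenario.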

Store Global Data Android App With Service

I have an Android app that consists of several activities / fragments and one service. In one of the activities I create a new variable which I need to access from most of the other activities and from the service. What is the best way to handle this, so that the value persists even if the app is closed and reopened? Currently I pass the variable to my service, and then each activity has to use a Messenger to query the service and get the value back. I am wondering if there is a more efficient way of doing this that doesn't require each activity to bind to the service just to get the one value.
Possible Solutions:
1. Singleton - will this survive the app being closed?
2. Extending Application and storing the value there - seems like this is discouraged, especially for such a simple use case.
3. Store the value in a local database and query it when needed - might be OK, but might also be overkill.
4. A combination of 1 and 3: a singleton that returns the value if it has it, and queries the DB if it doesn't. This way the DB only has to be queried once while the app is running, and the value is persisted across app closes.
Any thoughts?
Thanks,
Nathan
Singleton - will this survive the app being closed?
It will live as long as your process lives.
Extending Application and storing the value there - seems like this is discouraged, especially for such a simple use case.
It adds no value over the singleton option in this case.
Store the value in a local database and query it when needed - might be OK, but might also be overkill.
If you need the data to survive your process being terminated, you will want to persist it somehow (database, file, SharedPreferences). However, your option 4 (a singleton cache in front of the persistent store) will be more efficient.
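A minimal sketch of that option 4, using SharedPreferences rather than a full database since it is a single value (the preference file and key names are placeholders):

    // Singleton that caches the value in memory and falls back to
    // SharedPreferences after the process has been restarted.
    import android.content.Context;
    import android.content.SharedPreferences;

    public class ValueHolder {
        private static final String PREFS = "app_prefs"; // placeholder
        private static final String KEY = "my_value";    // placeholder

        private static ValueHolder instance;
        private String cached; // lives only as long as the process

        private ValueHolder() {}

        public static synchronized ValueHolder getInstance() {
            if (instance == null) {
                instance = new ValueHolder();
            }
            return instance;
        }

        public synchronized String get(Context context) {
            if (cached == null) {
                // Process was restarted: reload the persisted copy once.
                SharedPreferences prefs =
                        context.getSharedPreferences(PREFS, Context.MODE_PRIVATE);
                cached = prefs.getString(KEY, null);
            }
            return cached;
        }

        public synchronized void set(Context context, String value) {
            cached = value; // update the in-memory copy
            context.getSharedPreferences(PREFS, Context.MODE_PRIVATE)
                    .edit().putString(KEY, value).apply(); // persist asynchronously
        }
    }

Activities and the service can then call ValueHolder.getInstance().get(context) directly, with no Messenger round-trip to the service.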

Is there a way to query the Apple System Log incrementally?

I want to display some contents of the system log using the Apple System Log API. Is there a way to query the log database incrementally, i.e. only get the messages that are new since the last query? The aslresponse_next call only seems to iterate over messages that existed at the time of the query.
I think I could fake incremental querying by setting a minimum-time filter, but that approach seems prone to various timing issues, so I'd rather have something more official if possible.