Load external properties files into EJB 3 app running on WebLogic 11

I am researching the best way to load external properties files from an EJB 3 app whose EAR file is deployed to WebLogic.
I was thinking about using an init servlet, but I read somewhere that it would be too slow (e.g. my message handler might receive a message from my JMS queue before the init servlet runs).
Suppose I have multiple property files or one file here:
~/opt/conf/
So far, I feel that the best possible solution is to use a WebLogic application lifecycle event, with the code that reads the properties files running during pre-start:
import weblogic.application.ApplicationLifecycleListener;
import weblogic.application.ApplicationLifecycleEvent;

public class MyListener extends ApplicationLifecycleListener {
    public void preStart(ApplicationLifecycleEvent evt) {
        // Load properties files
    }
}
See: http://download.oracle.com/docs/cd/E13222_01/wls/docs90/programming/lifecycle.html
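For illustration, a minimal sketch of what preStart might do, assuming the files live under ~/opt/conf/ (the holder class PropertiesLoaderListener and the loading details are my own, not taken from the docs):

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;
import weblogic.application.ApplicationLifecycleEvent;
import weblogic.application.ApplicationLifecycleListener;

public class PropertiesLoaderListener extends ApplicationLifecycleListener {

    // Hypothetical holder the EJBs would read from after preStart has run.
    public static final Properties CONFIG = new Properties();

    @Override
    public void preStart(ApplicationLifecycleEvent evt) {
        File confDir = new File(System.getProperty("user.home"), "opt/conf");
        File[] files = confDir.listFiles();
        if (files == null) {
            return; // directory missing; leave CONFIG empty
        }
        for (File file : files) {
            if (!file.getName().endsWith(".properties")) {
                continue;
            }
            FileInputStream in = null;
            try {
                in = new FileInputStream(file);
                CONFIG.load(in);
            } catch (IOException e) {
                throw new RuntimeException("Could not load " + file, e);
            } finally {
                if (in != null) {
                    try { in.close(); } catch (IOException ignored) { }
                }
            }
        }
    }
}

The listener would be registered in META-INF/weblogic-application.xml (a listener element naming the class), as described in the linked docs.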
What would happen if the server is already running? Would postStart be a viable solution?
Can anyone think of any alternative ways that are better?

It really depends on how often you want the properties to be reloaded. One approach I have taken is to have a properties file wrapper (singleton) that has a configurable parameter defining how often the files should be reloaded. I would then always read properties through that wrapper, and it would reload the properties every 15 minutes (similar to Log4J's ConfigureAndWatch). That way, if I wanted to, I could change properties without changing the state of a deployed application.
This also allows you to load properties from a database, instead of a file. That way you can have a level of confidence that properties are consistent across the nodes in a cluster and it reduces complexity associated with managing a config file for each node.
I prefer that over tying it to a lifecycle event. If you weren't ever going to change them, then make them static constants somewhere :)
Here is an example implementation to give you an idea:
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.util.*;
/**
* User: jeffrey.a.west
* Date: Jul 1, 2011
* Time: 8:43:55 AM
*/
public class ReloadingProperties
{
    private final String lockObject = "LockMe";
    private long lastLoadTime = 0;
    private long reloadInterval;
    private String filePath;
    private Properties properties;

    private static final Map<String, ReloadingProperties> instanceMap;
    private static final long DEFAULT_RELOAD_INTERVAL = 1000 * 60 * 5;

    public static void main(String[] args)
    {
        ReloadingProperties props = ReloadingProperties.getInstance("myProperties.properties");
        System.out.println(props.getProperty("example"));
        try
        {
            Thread.sleep(6000);
        }
        catch (InterruptedException e)
        {
            e.printStackTrace();
        }
        System.out.println(props.getProperty("example"));
    }

    static
    {
        instanceMap = new HashMap<String, ReloadingProperties>(31);
    }

    public static ReloadingProperties getInstance(String filePath)
    {
        ReloadingProperties instance = instanceMap.get(filePath);
        if (instance == null)
        {
            instance = new ReloadingProperties(filePath, DEFAULT_RELOAD_INTERVAL);
            synchronized (instanceMap)
            {
                instanceMap.put(filePath, instance);
            }
        }
        return instance;
    }

    private ReloadingProperties(String filePath, long reloadInterval)
    {
        this.reloadInterval = reloadInterval;
        this.filePath = filePath;
    }

    private void checkRefresh()
    {
        long currentTime = System.currentTimeMillis();
        long sinceLastLoad = currentTime - lastLoadTime;
        if (properties == null || sinceLastLoad > reloadInterval)
        {
            System.out.println("Reloading!");
            lastLoadTime = System.currentTimeMillis();
            Properties newProperties = new Properties();
            FileInputStream fileIn = null;
            synchronized (lockObject)
            {
                try
                {
                    fileIn = new FileInputStream(filePath);
                    newProperties.load(fileIn);
                }
                catch (FileNotFoundException e)
                {
                    e.printStackTrace();
                }
                catch (IOException e)
                {
                    e.printStackTrace();
                }
                finally
                {
                    if (fileIn != null)
                    {
                        try
                        {
                            fileIn.close();
                        }
                        catch (IOException e)
                        {
                            e.printStackTrace();
                        }
                    }
                }
                properties = newProperties;
            }
        }
    }

    public String getProperty(String key, String defaultValue)
    {
        checkRefresh();
        return properties.getProperty(key, defaultValue);
    }

    public String getProperty(String key)
    {
        checkRefresh();
        return properties.getProperty(key);
    }
}
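For the database-backed variant mentioned above, only the load step changes; here is a hedged sketch (the table and column names config_properties, prop_key and prop_value are assumptions, not a real schema):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Properties;
import javax.sql.DataSource;

public class DatabaseProperties
{
    // Loads one property per row from an assumed config_properties table.
    public static Properties load(DataSource dataSource) throws SQLException
    {
        Properties props = new Properties();
        Connection con = dataSource.getConnection();
        try
        {
            PreparedStatement ps = con.prepareStatement(
                    "SELECT prop_key, prop_value FROM config_properties");
            ResultSet rs = ps.executeQuery();
            while (rs.next())
            {
                props.setProperty(rs.getString("prop_key"), rs.getString("prop_value"));
            }
            rs.close();
            ps.close();
        }
        finally
        {
            con.close();
        }
        return props;
    }
}

Such a loader could be dropped into checkRefresh() in place of the FileInputStream block, keeping the same reload-interval logic.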

Figured it out...
See the corresponding / related post on Stack Overflow.

Related

stop polling files when rabbitmq is down: spring integration

I'm working on a project where we poll files from an SFTP server and stream them out as objects on the RabbitMQ queue. When RabbitMQ is down, the adapter still polls and deletes the file from the server, so the file is lost when the send to the queue fails. I'm using ExpressionEvaluatingRequestHandlerAdvice to remove the file on successful transformation. My code looks like this:
@Bean
public SessionFactory<ChannelSftp.LsEntry> sftpSessionFactory() {
    DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory(true);
    factory.setHost(sftpProperties.getSftpHost());
    factory.setPort(sftpProperties.getSftpPort());
    factory.setUser(sftpProperties.getSftpPathUser());
    factory.setPassword(sftpProperties.getSftpPathPassword());
    factory.setAllowUnknownKeys(true);
    return new CachingSessionFactory<>(factory);
}

@Bean
public SftpRemoteFileTemplate sftpRemoteFileTemplate() {
    return new SftpRemoteFileTemplate(sftpSessionFactory());
}

@Bean
@InboundChannelAdapter(channel = TransformerChannel.TRANSFORMER_OUTPUT, autoStartup = "false",
        poller = @Poller(value = "customPoller"))
public MessageSource<InputStream> sftpMessageSource() {
    SftpStreamingMessageSource messageSource = new SftpStreamingMessageSource(sftpRemoteFileTemplate,
            null);
    messageSource.setRemoteDirectory(sftpProperties.getSftpDirPath());
    messageSource.setFilter(new SftpPersistentAcceptOnceFileListFilter(new SimpleMetadataStore(),
            "streaming"));
    messageSource.setFilter(new SftpSimplePatternFileListFilter("*.txt"));
    return messageSource;
}

@Bean
@Transformer(inputChannel = TransformerChannel.TRANSFORMER_OUTPUT,
        outputChannel = SFTPOutputChannel.SFTP_OUTPUT,
        adviceChain = "deleteAdvice")
public org.springframework.integration.transformer.Transformer transformer() {
    return new SFTPTransformerService("UTF-8");
}

@Bean
public ExpressionEvaluatingRequestHandlerAdvice deleteAdvice() {
    ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
    advice.setOnSuccessExpressionString(
            "#sftpRemoteFileTemplate.remove(headers['file_remoteDirectory'] + headers['file_remoteFile'])");
    advice.setPropagateEvaluationFailures(false);
    return advice;
}
I don't want the files to be polled and removed from the remote SFTP server when the RabbitMQ server is down. How can I achieve this?
UPDATE
Apologies for not mentioning that I'm using the Spring Cloud Stream Rabbit binder. Here is the transformer service:
public class SFTPTransformerService extends StreamTransformer {

    public SFTPTransformerService(String charset) {
        super(charset);
    }

    @Override
    protected Object doTransform(Message<?> message) throws Exception {
        String fileName = message.getHeaders().get("file_remoteFile", String.class);
        Object fileContents = super.doTransform(message);
        return new customFileDTO(fileName, (String) fileContents);
    }
}
UPDATE-2
I added a TransactionSynchronizationFactory on the customPoller as suggested. Now it doesn't poll the file when the Rabbit server is down, but when the server is up, it keeps polling the same file over and over again! I cannot figure out why. I guess I cannot use PollerSpec because I'm on version 4.3.2.
@Bean(name = "customPoller")
public PollerMetadata pollerMetadataDTX(StartStopTrigger startStopTrigger,
        CustomTriggerAdvice customTriggerAdvice) {
    PollerMetadata pollerMetadata = new PollerMetadata();
    pollerMetadata.setAdviceChain(Collections.singletonList(customTriggerAdvice));
    pollerMetadata.setTrigger(startStopTrigger);
    pollerMetadata.setMaxMessagesPerPoll(Long.valueOf(sftpProperties.getMaxMessagePoll()));
    ExpressionEvaluatingTransactionSynchronizationProcessor syncProcessor =
            new ExpressionEvaluatingTransactionSynchronizationProcessor();
    syncProcessor.setBeanFactory(applicationContext.getAutowireCapableBeanFactory());
    syncProcessor.setBeforeCommitChannel(
            applicationContext.getBean(TransformerChannel.TRANSFORMER_OUTPUT, MessageChannel.class));
    syncProcessor.setAfterCommitChannel(
            applicationContext.getBean(SFTPOutputChannel.SFTP_OUTPUT, MessageChannel.class));
    syncProcessor.setAfterCommitExpression(new SpelExpressionParser().parseExpression(
            "#sftpRemoteFileTemplate.remove(headers['file_remoteDirectory'] + headers['file_remoteFile'])"));
    DefaultTransactionSynchronizationFactory defaultTransactionSynchronizationFactory =
            new DefaultTransactionSynchronizationFactory(syncProcessor);
    pollerMetadata.setTransactionSynchronizationFactory(defaultTransactionSynchronizationFactory);
    return pollerMetadata;
}
I don't know if you need this info, but my CustomTriggerAdvice and StartStopTrigger look like this:
@Component
public class CustomTriggerAdvice extends AbstractMessageSourceAdvice {

    @Autowired
    private StartStopTrigger startStopTrigger;

    @Override
    public boolean beforeReceive(MessageSource<?> source) {
        return true;
    }

    @Override
    public Message<?> afterReceive(Message<?> result, MessageSource<?> source) {
        if (result == null) {
            if (startStopTrigger.getStart()) {
                startStopTrigger.stop();
            }
        } else {
            if (!startStopTrigger.getStart()) {
                startStopTrigger.stop();
            }
        }
        return result;
    }
}
public class StartStopTrigger implements Trigger {

    private PeriodicTrigger startTrigger;
    private boolean start;

    public StartStopTrigger(PeriodicTrigger startTrigger, boolean start) {
        this.startTrigger = startTrigger;
        this.start = start;
    }

    @Override
    public Date nextExecutionTime(TriggerContext triggerContext) {
        if (!start) {
            return null;
        }
        start = true;
        return startTrigger.nextExecutionTime(triggerContext);
    }

    public void stop() {
        start = false;
    }

    public void start() {
        start = true;
    }

    public boolean getStart() {
        return this.start;
    }
}
Well, it would be great to see your SFTPTransformerService, to determine how it is possible for the onSuccessExpression to be performed when there should be an exception in the case of a down broker.
You should also not only throw an exception and skip the delete, but consider adding a RequestHandlerRetryAdvice to re-send the file to RabbitMQ: https://docs.spring.io/spring-integration/docs/5.0.6.RELEASE/reference/html/messaging-endpoints-chapter.html#retry-advice
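For illustration, a hedged sketch of such an advice bean (the retry count and back-off period are arbitrary values of mine); it would be referenced from the transformer's adviceChain alongside deleteAdvice:

@Bean
public RequestHandlerRetryAdvice retryAdvice() {
    RequestHandlerRetryAdvice advice = new RequestHandlerRetryAdvice();
    RetryTemplate retryTemplate = new RetryTemplate();
    // Try the downstream send up to 3 times...
    SimpleRetryPolicy retryPolicy = new SimpleRetryPolicy();
    retryPolicy.setMaxAttempts(3);
    retryTemplate.setRetryPolicy(retryPolicy);
    // ...waiting 2 seconds between attempts.
    FixedBackOffPolicy backOffPolicy = new FixedBackOffPolicy();
    backOffPolicy.setBackOffPeriod(2000);
    retryTemplate.setBackOffPolicy(backOffPolicy);
    advice.setRetryTemplate(retryTemplate);
    return advice;
}

(RequestHandlerRetryAdvice comes from org.springframework.integration.handler.advice; RetryTemplate, SimpleRetryPolicy and FixedBackOffPolicy from the org.springframework.retry packages.)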
UPDATE
So, since Gary guessed that you use Spring Cloud Stream to send messages to the Rabbit Binder after your internal process (very sad that you didn't share that information originally), you need to take a look at the Binder error handling on the matter: https://docs.spring.io/spring-cloud-stream/docs/Elmhurst.RELEASE/reference/htmlsingle/#_retry_with_the_rabbitmq_binder
And it is true that the ExpressionEvaluatingRequestHandlerAdvice is applied only to the SFTPTransformerService and nothing more. The downstream error (in the Binder) is not included in this process.
UPDATE 2
Yeah... I think Gary is right: we have no choice but to configure a TransactionSynchronizationFactory on the customPoller level instead of that ExpressionEvaluatingRequestHandlerAdvice.
The DefaultTransactionSynchronizationFactory can be configured with an ExpressionEvaluatingTransactionSynchronizationProcessor, which has a goal similar to the mentioned ExpressionEvaluatingRequestHandlerAdvice, but on the transaction level, which will include your whole process, starting with the SFTP Channel Adapter and ending on the Rabbit Binder level with the send-to-AMQP attempts.
See Reference Manual for more information: https://docs.spring.io/spring-integration/reference/html/transactions.html#transaction-synchronization.
The point about the ExpressionEvaluatingRequestHandlerAdvice (and any AbstractRequestHandlerAdvice) is that it forms a boundary only around the handleRequestMessage() method, and therefore applies only to the component on which it is declared.

AbstractStringBuilder.ensureCapacityInternal get NullPointerException in storm bolt

In an online system, the Storm bolt gets a NullPointerException, though I think I check for it before line 61; it gets the NullPointerException once in a while.
import ***.KeyUtils;
import ***.redis.PipelineHelper;
import ***.redis.PipelinedCacheClusterClient;
import **.redis.R2mClusterClient;
import org.apache.commons.lang3.StringUtils;
import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.IRichBolt;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.tuple.Tuple;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;
import java.util.Map;
/**
* RedisBolt batch operate
*/
public class RedisBolt implements IRichBolt {

    static final long serialVersionUID = 737015318988609460L;
    private static ApplicationContext applicationContext;
    private static long logEmitNumber = 0;
    private static StringBuffer totalCmds = new StringBuffer();
    private Logger logger = LoggerFactory.getLogger(getClass());
    private OutputCollector _collector;
    private R2mClusterClient r2mClusterClient;

    @Override
    public void prepare(Map map, TopologyContext topologyContext, OutputCollector outputCollector) {
        _collector = outputCollector;
        if (applicationContext == null) {
            applicationContext = new ClassPathXmlApplicationContext("spring/spring-config-redisbolt.xml");
        }
        if (r2mClusterClient == null) {
            r2mClusterClient = (R2mClusterClient) applicationContext.getBean("r2mClusterClient");
        }
    }

    @Override
    public void execute(Tuple tuple) {
        String log = tuple.getString(0);
        String lastCommands = tuple.getString(1);
        try {
            // log count
            if (StringUtils.isNotEmpty(log)) {
                logEmitNumber++;
            }
            if (StringUtils.isNotEmpty(lastCommands)) {
                if (totalCmds == null) {
                    totalCmds = new StringBuffer();
                }
                totalCmds.append(lastCommands); //line 61
            }
            // log volume control
            int numberLimit = 1;
            String flow_log_limit = r2mClusterClient.get(KeyUtils.KEY_PIPELINE_LIMIT);
            if (StringUtils.isNotEmpty(flow_log_limit)) {
                try {
                    numberLimit = Integer.parseInt(flow_log_limit);
                } catch (Exception e) {
                    numberLimit = 1;
                    logger.error("error", e);
                }
            }
            if (logEmitNumber >= numberLimit) {
                StringBuffer _totalCmds = new StringBuffer(totalCmds);
                try {
                    // pipeline submit
                    PipelinedCacheClusterClient pip = r2mClusterClient.pipelined();
                    String[] commandArray = _totalCmds.toString().split(KeyUtils.REDIS_CMD_SPILT);
                    PipelineHelper.cmd(pip, commandArray);
                    pip.sync();
                    pip.close();
                    totalCmds = new StringBuffer();
                } catch (Exception e) {
                    logger.error("error", e);
                }
                logEmitNumber = 0;
            }
        } catch (Exception e) {
            logger.error(new StringBuffer("====RedisBolt error for log=[ ").append(log).append("] \n commands=[").append(lastCommands).append("]").toString(), e);
            _collector.reportError(e);
            _collector.fail(tuple);
        }
        _collector.ack(tuple);
    }

    @Override
    public void cleanup() {
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer) {
    }

    @Override
    public Map<String, Object> getComponentConfiguration() {
        return null;
    }
}
exception info:
java.lang.NullPointerException
    at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:113)
    at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:415)
    at java.lang.StringBuffer.append(StringBuffer.java:237)
    at com.jd.jr.dataeye.storm.bolt.RedisBolt.execute(RedisBolt.java:61)
    at org.apache.storm.daemon.executor$fn__5044$tuple_action_fn__5046.invoke(executor.clj:727)
    at org.apache.storm.daemon.executor$mk_task_receiver$fn__4965.invoke(executor.clj:459)
    at org.apache.storm.disruptor$clojure_handler$reify__4480.onEvent(disruptor.clj:40)
    at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:472)
    at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:451)
    at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73)
    at org.apache.storm.daemon.executor$fn__5044$fn__5057$fn__5110.invoke(executor.clj:846)
    at org.apache.storm.util$async_loop$fn__557.invoke(util.clj:484)
    at clojure.lang.AFn.run(AFn.java:22)
    at java.lang.Thread.run(Thread.java:745)
Can anyone give me some advice on finding the reason?
That is a really odd thing to happen. Please read the code of these two classes:
https://github.com/openjdk-mirror/jdk7u-jdk/blob/master/src/share/classes/java/lang/AbstractStringBuilder.java
https://github.com/openjdk-mirror/jdk7u-jdk/blob/master/src/share/classes/java/lang/StringBuffer.java
AbstractStringBuilder has a no-arg constructor which doesn't allocate the field 'value', so accessing 'value' results in an NPE. None of the constructors in StringBuffer use that no-arg constructor, so maybe something odd happens in serialization/deserialization, and unfortunately the 'value' field in AbstractStringBuilder ends up being null.
Maybe initializing totalCmds in prepare() would be better, and you also need to consider synchronization (thread-safety) between bolts. prepare() is called per bolt instance, so instance fields are thread-safe, but static class fields are not.
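A minimal sketch of that suggestion, with per-instance state initialized in prepare() (class and field names are illustrative, not the original code):

import java.util.Map;
import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Tuple;

public class SafeRedisBolt extends BaseRichBolt {

    // Instance fields, not static: each bolt instance gets its own copy.
    private transient OutputCollector collector;
    private transient StringBuilder totalCmds;
    private long logEmitNumber;

    @Override
    public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
        // Fully constructed before execute() can run, so the executor thread
        // never sees a half-initialized buffer.
        this.totalCmds = new StringBuilder();
        this.logEmitNumber = 0;
    }

    @Override
    public void execute(Tuple tuple) {
        String lastCommands = tuple.getString(1);
        if (lastCommands != null && !lastCommands.isEmpty()) {
            totalCmds.append(lastCommands);
            logEmitNumber++;
        }
        collector.ack(tuple);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // no output streams
    }
}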
I think I may have found the problem.
The key point is "StringBuffer _totalCmds = new StringBuffer(totalCmds);" together with "totalCmds.append(lastCommands); //line 61".
When creating a new object, the JVM takes several steps:
(1) allocate memory and publish the reference
(2) initialize the object
If another thread calls append after (1) but before (2), then, since StringBuffer extends AbstractStringBuilder, which declares:
/**
* The value is used for character storage.
*/
char[] value;
the value array is not yet initialized, so this method dereferences null:
@Override
public synchronized void ensureCapacity(int minimumCapacity) {
    if (minimumCapacity > value.length) {
        expandCapacity(minimumCapacity);
    }
}
This bolt has another problem as well: some data may be lost in a multithreaded environment.

Unpredictable result of DriveId.getResourceId() in Google Drive Android API

The issue is that the resourceId from DriveId.getResourceId() is not available (returns null) on newly created files (the product of DriveFolder.createFile(GAC, meta, cont)). If the file is retrieved by a regular list or query procedure, the resourceId is correct.
I suspect it is a timing/latency issue, but it is not clear if there is an application action that would force a refresh. Drive.DriveApi.requestSync(GAC) seems to have no effect.
UPDATE (07/22/2015)
Thanks to the prompt response from Steven Bazyl (see comments below), I finally have a satisfactory solution using Completion Events. Here are two minified code snippets that deliver the ResourceId to the app as soon as the newly created file is propagated to the Drive:
File creation, add change subscription:
public class CreateEmptyFileActivity extends BaseDemoActivity {
    private static final String TAG = "_X_";

    @Override
    public void onConnected(Bundle connectionHint) {
        super.onConnected(connectionHint);
        MetadataChangeSet meta = new MetadataChangeSet.Builder()
                .setTitle("EmptyFile.txt").setMimeType("text/plain")
                .build();
        Drive.DriveApi.getRootFolder(getGoogleApiClient())
                .createFile(getGoogleApiClient(), meta, null,
                        new ExecutionOptions.Builder()
                                .setNotifyOnCompletion(true)
                                .build()
                )
                .setResultCallback(new ResultCallback<DriveFileResult>() {
                    @Override
                    public void onResult(DriveFileResult result) {
                        if (result.getStatus().isSuccess()) {
                            DriveId driveId = result.getDriveFile().getDriveId();
                            Log.d(TAG, "Created a empty file: " + driveId);
                            DriveFile file = Drive.DriveApi.getFile(getGoogleApiClient(), driveId);
                            file.addChangeSubscription(getGoogleApiClient());
                        }
                    }
                });
    }
}
Event Service, catches the completion:
public class ChngeSvc extends DriveEventService {
    private static final String TAG = "_X_";

    @Override
    public void onCompletion(CompletionEvent event) {
        super.onCompletion(event);
        DriveId driveId = event.getDriveId();
        Log.d(TAG, "onComplete: " + driveId.getResourceId());
        switch (event.getStatus()) {
            case CompletionEvent.STATUS_CONFLICT:
                Log.d(TAG, "STATUS_CONFLICT");
                event.dismiss();
                break;
            case CompletionEvent.STATUS_FAILURE:
                Log.d(TAG, "STATUS_FAILURE");
                event.dismiss();
                break;
            case CompletionEvent.STATUS_SUCCESS:
                Log.d(TAG, "STATUS_SUCCESS ");
                event.dismiss();
                break;
        }
    }
}
Under normal circumstances (wifi), I get the ResourceId almost immediately.
20:40:53.247: Created a empty file: DriveId:CAESABiiAiDGsfO61VMoAA==
20:40:54.305: onComplete, ResourceId: 0BxOS7mTBMR_bMHZRUjJ5NU1ZOWs
... done for now.
ORIGINAL POST, deprecated, left here for reference.
I let this answer sit for a year, hoping that GDAA would develop a solution that works. The reason for my nagging is simple: if my app creates a file, it needs to broadcast this fact to its buddies (other devices, for instance) with an ID that is meaningful (that is, the ResourceId). It is a trivial task under the REST API, where the ResourceId comes back as soon as the file is successfully created.
Needless to say, I understand the GDAA philosophy of shielding the app from network primitives, caching, batching... But clearly, in this situation, the ResourceId is available long before it is delivered to the app.
Originally, I implemented Cheryl Simon's suggestion and added a ChangeListener on a newly created file, hoping to get the ResourceId when the file is propagated. Using the classic CreateEmptyFileActivity from android-demos, I smacked together the following test code:
public class CreateEmptyFileActivity extends BaseDemoActivity {
    private static final String TAG = "CreateEmptyFileActivity";

    final private ChangeListener mChgeLstnr = new ChangeListener() {
        @Override
        public void onChange(ChangeEvent event) {
            Log.d(TAG, "event: " + event + " resId: " + event.getDriveId().getResourceId());
        }
    };

    @Override
    public void onConnected(Bundle connectionHint) {
        super.onConnected(connectionHint);
        MetadataChangeSet meta = new MetadataChangeSet.Builder()
                .setTitle("EmptyFile.txt").setMimeType("text/plain")
                .build();
        Drive.DriveApi.getRootFolder(getGoogleApiClient())
                .createFile(getGoogleApiClient(), meta, null)
                .setResultCallback(new ResultCallback<DriveFileResult>() {
                    @Override
                    public void onResult(DriveFileResult result) {
                        if (result.getStatus().isSuccess()) {
                            DriveId driveId = result.getDriveFile().getDriveId();
                            Log.d(TAG, "Created a empty file: " + driveId);
                            Drive.DriveApi.getFile(getGoogleApiClient(), driveId).addChangeListener(getGoogleApiClient(), mChgeLstnr);
                        }
                    }
                });
    }
}
... and waited for something to happen. The file was happily uploaded to the Drive within seconds, but there was no onChange() event. 10 minutes, 20 minutes... I could not find any way to make the ChangeListener wake up.
So the only other solution I could come up with was to nudge the GDAA. I implemented a simple handler-poker that tickles the metadata until something happens:
public class CreateEmptyFileActivity extends BaseDemoActivity {
    private static final String TAG = "CreateEmptyFileActivity";

    final private ChangeListener mChgeLstnr = new ChangeListener() {
        @Override
        public void onChange(ChangeEvent event) {
            Log.d(TAG, "event: " + event + " resId: " + event.getDriveId().getResourceId());
        }
    };

    static DriveId driveId;
    private static final int ENOUGH = 4; // nudge 4x, 1+2+3+4 = 10 seconds
    private static int mWait = 1000;
    private int mCnt;
    private Handler mPoker;

    private final Runnable mPoke = new Runnable() {
        public void run() {
            if (mPoker != null && driveId != null && driveId.getResourceId() == null && (mCnt++ < ENOUGH)) {
                MetadataChangeSet meta = new MetadataChangeSet.Builder().build();
                Drive.DriveApi.getFile(getGoogleApiClient(), driveId).updateMetadata(getGoogleApiClient(), meta).setResultCallback(
                        new ResultCallback<DriveResource.MetadataResult>() {
                            @Override
                            public void onResult(DriveResource.MetadataResult result) {
                                if (result.getStatus().isSuccess() && result.getMetadata().getDriveId().getResourceId() != null)
                                    Log.d(TAG, "resId COOL " + result.getMetadata().getDriveId().getResourceId());
                                else
                                    mPoker.postDelayed(mPoke, mWait *= 2);
                            }
                        }
                );
            } else {
                mPoker = null;
            }
        }
    };

    @Override
    public void onConnected(Bundle connectionHint) {
        super.onConnected(connectionHint);
        MetadataChangeSet meta = new MetadataChangeSet.Builder()
                .setTitle("EmptyFile.txt").setMimeType("text/plain")
                .build();
        Drive.DriveApi.getRootFolder(getGoogleApiClient())
                .createFile(getGoogleApiClient(), meta, null)
                .setResultCallback(new ResultCallback<DriveFileResult>() {
                    @Override
                    public void onResult(DriveFileResult result) {
                        if (result.getStatus().isSuccess()) {
                            driveId = result.getDriveFile().getDriveId();
                            Log.d(TAG, "Created a empty file: " + driveId);
                            Drive.DriveApi.getFile(getGoogleApiClient(), driveId).addChangeListener(getGoogleApiClient(), mChgeLstnr);
                            mCnt = 0;
                            mPoker = new Handler();
                            mPoker.postDelayed(mPoke, mWait);
                        }
                    }
                });
    }
}
And voila, 4 seconds (give or take) later, the ChangeListener delivers a shiny new ResourceId. Of course, the ChangeListener thus becomes obsolete, since the poker routine gets the ResourceId as well.
So this is the answer for those who can't wait for the ResourceId. Which brings up the follow-up question:
Why do I have to tickle the metadata (or re-commit content), very likely creating unnecessary network traffic, to get the onChange() event, when I can see clearly that the file was propagated long ago and GDAA has the ResourceId available?
ResourceIds become available when the newly created resource is committed to the server. In the case of a device that is offline, this could be arbitrarily long after the initial file creation. It will happen as soon as possible after the creation request though, so you don't need to do anything to speed it along.
If you really need it right away, you could conceivably use the change notifications to listen for the resourceId to change.

Gradle : how to use BuildConfig in an android-library with a flag that gets set in an app

My Android project (Gradle 1.10 and Gradle plugin 0.8) consists of a big android-library that is a dependency for 3 different android-apps.
In my library, I would love to be able to use a structure like this:
if (BuildConfig.SOME_FLAG) {
    callToBigLibraries()
}
as ProGuard would then be able to reduce the size of the produced APK based on the final value of SOME_FLAG.
But I can't figure out how to do it with Gradle, as:
* the BuildConfig produced by the library doesn't have the same package name as the app
* I have to import the BuildConfig with the library package in the library
* the APK of an app includes the BuildConfig with the package of the app, but not the one with the package of the library.
I tried without success to play with BuildTypes and stuff like:
release {
    // packageNameSuffix "library"
    buildConfigField "boolean", "SOME_FLAG", "true"
}
debug {
    //packageNameSuffix "library"
    buildConfigField "boolean", "SOME_FLAG", "true"
}
What is the right way to build a shared BuildConfig for my library and my apps, whose flags will be overridden at build time in the apps?
As a workaround, you can use this method, which uses reflection to get the field value from the app (not the library):
/**
 * Gets a field from the project's BuildConfig. This is useful when, for example, flavors
 * are used at the project level to set custom fields.
 * @param context Used to find the correct file
 * @param fieldName The name of the field-to-access
 * @return The value of the field, or {@code null} if the field is not found.
 */
public static Object getBuildConfigValue(Context context, String fieldName) {
    try {
        Class<?> clazz = Class.forName(context.getPackageName() + ".BuildConfig");
        Field field = clazz.getField(fieldName);
        return field.get(null);
    } catch (ClassNotFoundException e) {
        e.printStackTrace();
    } catch (NoSuchFieldException e) {
        e.printStackTrace();
    } catch (IllegalAccessException e) {
        e.printStackTrace();
    }
    return null;
}
To get the DEBUG field, for example, just call this from your Activity:
boolean debug = (Boolean) getBuildConfigValue(this, "DEBUG");
I have also shared this solution on the AOSP Issue Tracker.
Update: With newer versions of the Android Gradle plugin, publishNonDefault is deprecated and has no effect anymore; all variants are now published.
The following solution/workaround works for me. It was posted by some guy on the Google issue tracker:
Try setting publishNonDefault to true in the library project:
android {
    ...
    publishNonDefault true
    ...
}
And add the following dependencies to the app project that is using the library:
dependencies {
    releaseCompile project(path: ':library', configuration: 'release')
    debugCompile project(path: ':library', configuration: 'debug')
}
This way, the project that uses the library includes the correct build type of the library.
You can't do what you want, because BuildConfig.SOME_FLAG isn't going to get propagated properly to your library; build types themselves aren't propagated to libraries -- they're always built as RELEASE. This is bug https://code.google.com/p/android/issues/detail?id=52962
To work around it: if you have control over all of the library modules, you could make sure that all the code touched by callToBigLibraries() is in classes and packages that you can cleave off cleanly with ProGuard, then use reflection so that you can access them if they exist and degrade gracefully if they don't. You're essentially doing the same thing, but you're making the check at runtime instead of compile time, and it's a little harder.
Let me know if you're having trouble figuring out how to do this; I could provide a sample if you need it.
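A minimal sketch of that runtime check, assuming the heavy code sits behind a single facade class (the name com.example.big.BigLibraryFacade and its run() method are hypothetical):

// In the library: invoke the heavy code via reflection and degrade
// gracefully when ProGuard has stripped it from the final APK.
public static void callToBigLibrariesIfPresent() {
    try {
        // Hypothetical facade class fronting the big libraries.
        Class<?> facade = Class.forName("com.example.big.BigLibraryFacade");
        facade.getMethod("run").invoke(null);
    } catch (ClassNotFoundException e) {
        // The big libraries were cleaved off this build; do nothing.
    } catch (Exception e) {
        // NoSuchMethodException, IllegalAccessException, InvocationTargetException
        throw new RuntimeException(e);
    }
}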
I use a static BuildConfigHelper class in both the app and the library, so that I can have the app's BuildConfig values set as final static variables in my library.
In the application, place a class like this:
package com.yourbase;

import com.your.application.BuildConfig;

public final class BuildConfigHelper {
    public static final boolean DEBUG = BuildConfig.DEBUG;
    public static final String APPLICATION_ID = BuildConfig.APPLICATION_ID;
    public static final String BUILD_TYPE = BuildConfig.BUILD_TYPE;
    public static final String FLAVOR = BuildConfig.FLAVOR;
    public static final int VERSION_CODE = BuildConfig.VERSION_CODE;
    public static final String VERSION_NAME = BuildConfig.VERSION_NAME;
}
And in the library:
package com.your.library;

import android.support.annotation.Nullable;
import java.lang.reflect.Field;

public class BuildConfigHelper {
    private static final String BUILD_CONFIG = "com.yourbase.BuildConfigHelper";

    public static final boolean DEBUG = getDebug();
    public static final String APPLICATION_ID = (String) getBuildConfigValue("APPLICATION_ID");
    public static final String BUILD_TYPE = (String) getBuildConfigValue("BUILD_TYPE");
    public static final String FLAVOR = (String) getBuildConfigValue("FLAVOR");
    public static final int VERSION_CODE = getVersionCode();
    public static final String VERSION_NAME = (String) getBuildConfigValue("VERSION_NAME");

    private static boolean getDebug() {
        Object o = getBuildConfigValue("DEBUG");
        if (o != null && o instanceof Boolean) {
            return (Boolean) o;
        } else {
            return false;
        }
    }

    private static int getVersionCode() {
        Object o = getBuildConfigValue("VERSION_CODE");
        if (o != null && o instanceof Integer) {
            return (Integer) o;
        } else {
            return Integer.MIN_VALUE;
        }
    }

    @Nullable
    private static Object getBuildConfigValue(String fieldName) {
        try {
            Class c = Class.forName(BUILD_CONFIG);
            Field f = c.getDeclaredField(fieldName);
            f.setAccessible(true);
            return f.get(null);
        } catch (Exception e) {
            e.printStackTrace();
            return null;
        }
    }
}
Then, anywhere in your library where you want to check BuildConfig.DEBUG, you can check BuildConfigHelper.DEBUG and access it from anywhere without a Context, and the same goes for the other properties. I did it this way so that the library works with all my applications without needing to pass a Context in or set the package name some other way; the application class only needs its import line changed to suit when adding it into a new application.
Edit: I'd just like to reiterate that this is the easiest way (and the only one listed here) to get the values assigned to final static variables in the library from all of your applications, without needing a Context or hard-coding the package name somewhere. It is almost as good as having the values in the default library BuildConfig, for the minimal upkeep of changing that import line in each application.
For the case where the applicationId is not the same as the package (i.e. multiple applicationIds per project) AND you want to access it from a library project:
Use Gradle to store the base package in resources.
In the app's build.gradle:
android {
    applicationId "com.company.myappbase"
    // note: using ${applicationId} here will be exactly as above
    // and so NOT necessarily the applicationId of the generated APK
    resValue "string", "build_config_package", "${applicationId}"
}
In Java:
public static boolean getDebug(Context context) {
    Object obj = getBuildConfigValue("DEBUG", context);
    if (obj instanceof Boolean) {
        return (Boolean) obj;
    } else {
        return false;
    }
}

private static Object getBuildConfigValue(String fieldName, Context context) {
    int resId = context.getResources().getIdentifier("build_config_package", "string", context.getPackageName());
    // try/catch blah blah
    Class<?> clazz = Class.forName(context.getString(resId) + ".BuildConfig");
    Field field = clazz.getField(fieldName);
    return field.get(null);
}
Use both. My build.gradle:
// ...
productFlavors {
    internal {
        // applicationId "com.elevensein.sein.internal"
        applicationIdSuffix ".internal"
        resValue "string", "build_config_package", "com.elevensein.sein"
    }
    production {
        applicationId "com.elevensein.sein"
    }
}
I want to call it like below:
Boolean isDebug = (Boolean) BuildConfigUtils.getBuildConfigValue(context, "DEBUG");
BuildConfigUtils.java
public class BuildConfigUtils
{
    public static Object getBuildConfigValue (Context context, String fieldName)
    {
        Class<?> buildConfigClass = resolveBuildConfigClass(context);
        return getStaticFieldValue(buildConfigClass, fieldName);
    }

    public static Class<?> resolveBuildConfigClass (Context context)
    {
        int resId = context.getResources().getIdentifier("build_config_package",
                "string",
                context.getPackageName());
        if (resId != 0)
        {
            // defined in build.gradle
            return loadClass(context.getString(resId) + ".BuildConfig");
        }
        // not defined in build.gradle
        // try packageName + ".BuildConfig"
        return loadClass(context.getPackageName() + ".BuildConfig");
    }

    private static Class<?> loadClass (String className)
    {
        Log.i("BuildConfigUtils", "try class load : " + className);
        try {
            return Class.forName(className);
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        }
        return null;
    }

    private static Object getStaticFieldValue (Class<?> clazz, String fieldName)
    {
        try { return clazz.getField(fieldName).get(null); }
        catch (NoSuchFieldException e) { e.printStackTrace(); }
        catch (IllegalAccessException e) { e.printStackTrace(); }
        return null;
    }
}
For me, this is the only acceptable* solution to determine the Android application's BuildConfig.class:
// base entry point
// abstract application which defines the method to obtain the desired class;
// the definition of the application is contained in the library
// that wants to access the method, or in a superior library package
public abstract class BaseApp extends android.app.Application {

    /*
     * GET BUILD CONFIG CLASS
     */
    protected abstract Class<?> getAppBuildConfigClass();

    // HELPER METHOD TO CAST CONTEXT TO BaseApp
    public static BaseApp getAs(android.content.Context context) {
        return getAs(context, BaseApp.class);
    }

    // HELPER METHOD TO CAST CONTEXT TO A SPECIFIC BaseApp-INHERITED CLASS TYPE
    public static <I extends BaseApp> I getAs(android.content.Context context, Class<I> forClass) {
        android.content.Context applicationContext = context != null ? context.getApplicationContext() : null;
        return applicationContext != null && forClass != null && forClass.isAssignableFrom(applicationContext.getClass())
                ? (I) applicationContext
                : null;
    }

    // STATIC HELPER TO GET BUILD CONFIG CLASS
    public static Class<?> getAppBuildConfigClass(android.content.Context context) {
        BaseApp as = getAs(context);
        return as != null ? as.getAppBuildConfigClass() : null;
    }
}
// FINAL APP WITH IMPLEMENTATION POINTING TO THE DESIRED CLASS
public class MyApp extends BaseApp {
    @Override
    protected Class<?> getAppBuildConfigClass() {
        return somefinal.app.package.BuildConfig.class;
    }
}
USAGE IN LIBRARY:
Class<?> buildConfigClass = BaseApp.getAppBuildConfigClass(context);
if (buildConfigClass != null) {
    // do your job
}
*There are a couple of things to watch out for:
getApplicationContext() could return a context which is not an Application ContextWrapper implementation; see what the Application class extends, and get to know the possibilities of context wrapping.
the class returned by the final app could be loaded by a different class loader than the one used by the consuming code; this depends on the loader implementation and on principles typical for loaders (hierarchy, visibility).
everything depends on the implementation, which in this case is simple DELEGATION! The solution could be more sophisticated; I only wanted to show the usage of the delegation pattern here :)
**Why I downvoted all of the reflection-based patterns: they all have weak points, and all of them will fail under certain conditions:
Class.forName(className); - because of the unspecified loader
context.getPackageName() + ".BuildConfig"
a) context.getPackageName() - "by default - else see b)" - returns not the package defined in the manifest but the application id (sometimes they are the same); see how the manifest package property is used and its flow - in the end, the aapt tool replaces it with the application id (see, for example, the ComponentName class and what pkg stands for there)
b) context.getPackageName() - will return whatever the implementation wants it to :P
***What to change in my solution to make it more flawless: replace the Class with its name. That drops the problems which can appear when classes loaded by different loaders are used to obtain a final result involving the class. Get to know what defines the equality of two classes at runtime: in short, a class is identified not by itself alone but by the pair of class loader and class. (Some homework: try to load an inner class with a different loader and access it via its outer class loaded with another loader; it turns out we get an illegal access error, even though the inner class is in the same package and has all the modifiers allowing access to its outer class. The compiler/linker/VM treats them as two unrelated classes...)

Why Policy.getPolicy() is considered to retain a static reference to the context class loader and can cause a memory leak

I just read some source code from org.apache.cxf.common.logging.JDKBugHacks and also from
http://svn.apache.org/viewvc/tomcat/trunk/java/org/apache/catalina/core/JreMemoryLeakPreventionListener.java. To keep my question clear and not too broad :) I will just ask about one piece of code from them.
// Calling getPolicy retains a static reference to the context
// class loader.
try {
    // Policy.getPolicy();
    Class<?> policyClass = Class
            .forName("javax.security.auth.Policy");
    Method method = policyClass.getMethod("getPolicy");
    method.invoke(null);
} catch (Throwable e) {
    // ignore
}
But I didn't understand this comment: "Calling getPolicy retains a static reference to the context class loader". And they are trying to use JDKBugHacks to work around it.
UPDATE
I overlooked the static block part. Here it is; this is the key. Actually, it already has the policy cached, so why cache the contextClassLoader as well? In a comment, the class claims "@deprecated as of JDK version 1.4 -- Replaced by java.security.Policy".
I have double-checked the code of java/security/Policy.java. It really did remove the cached classloader. So my doubt is valid! :)
@Deprecated
public abstract class Policy {
    private static Policy policy;
    private static ClassLoader contextClassLoader;

    static {
        contextClassLoader = java.security.AccessController.doPrivileged
            (new java.security.PrivilegedAction<ClassLoader>() {
                public ClassLoader run() {
                    return Thread.currentThread().getContextClassLoader();
                }
            });
    };
I'll also add the getPolicy source code:
public static Policy getPolicy() {
    java.lang.SecurityManager sm = System.getSecurityManager();
    if (sm != null) sm.checkPermission(new AuthPermission("getPolicy"));
    return getPolicyNoCheck();
}

static Policy getPolicyNoCheck() {
    if (policy == null) {
        synchronized (Policy.class) {
            if (policy == null) {
                String policy_class = null;
                policy_class = java.security.AccessController.doPrivileged
                    (new java.security.PrivilegedAction<String>() {
                        public String run() {
                            return java.security.Security.getProperty
                                ("auth.policy.provider");
                        }
                    });
                if (policy_class == null) {
                    policy_class = "com.sun.security.auth.PolicyFile";
                }
                try {
                    final String finalClass = policy_class;
                    policy = java.security.AccessController.doPrivileged
                        (new java.security.PrivilegedExceptionAction<Policy>() {
                            public Policy run() throws ClassNotFoundException,
                                    InstantiationException,
                                    IllegalAccessException {
                                return (Policy) Class.forName
                                    (finalClass,
                                     true,
                                     contextClassLoader).newInstance();
                            }
                        });
                } catch (Exception e) {
                    throw new SecurityException
                        (sun.security.util.ResourcesMgr.getString
                            ("unable to instantiate Subject-based policy"));
                }
            }
        }
    }
    return policy;
}
Actually, digging deeper, I found something interesting. Someone recently reported a bug to Apache CXF about this piece of code in org.apache.cxf.common.logging.JDKBugHacks.
In order to disable URL caching, JDKBugHacks runs:
URL url = new URL("jar:file://dummy.jar!/");
URLConnection uConn = url.openConnection();
uConn.setDefaultUseCaches(false);
When the java.protocol.handler.pkgs system property is set, that can lead to deadlocks between the system classloader and the file protocol Handler in particular situations (for instance, if the file protocol URLStreamHandler is a singleton).
Besides that, the code above is really only there for the sake of setting defaultUseCaches to false, so actually opening a connection can be avoided to speed up the execution.
So the fix is
URL url = new URL("jar:file://dummy.jar!/");
URLConnection uConn = new URLConnection(url) {
    @Override
    public void connect() throws IOException {
        // NOOP
    }
};
uConn.setDefaultUseCaches(false);
It's normal for the JDK or Apache CXF to have some minor bugs, and normally they will fix them.
javax.security.auth.login.Configuration has the same issue as Policy, but it is not deprecated.
The Policy class in Java 6 contains a static reference to a classloader that is initialized to the current thread's context classloader on the first access to the class:
private static ClassLoader contextClassLoader;
static {
    contextClassLoader =
        (ClassLoader) java.security.AccessController.doPrivileged
            (new java.security.PrivilegedAction() {
                public Object run() {
                    return Thread.currentThread().getContextClassLoader();
                }
            });
};
Tomcat's lifecycle listener makes sure to initialize this class from within a known environment where the context classloader is set to the system classloader. If this class were first accessed from within a webapp, it would retain a reference to the webapp's classloader. This would prevent the webapp's classes from getting garbage collected, creating a leak of perm gen space.
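To make the mechanism concrete, here is a hedged sketch of the idea behind such a listener (not the actual Tomcat source): touch the class once while the context classloader is the system classloader, so the static initializer captures that loader instead of a webapp loader.

// Run during server startup, before any webapp code executes.
ClassLoader original = Thread.currentThread().getContextClassLoader();
try {
    Thread.currentThread().setContextClassLoader(ClassLoader.getSystemClassLoader());
    // Initializing the class triggers its static block, which caches the
    // (now system) context classloader.
    Class.forName("javax.security.auth.Policy", true, ClassLoader.getSystemClassLoader());
} catch (Throwable t) {
    // ignore, matching the defensive style of JDKBugHacks
} finally {
    Thread.currentThread().setContextClassLoader(original);
}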