How to properly test a Flux from a sink (processor)?

I have a processor-like class which internally uses a sink. I have made an extremely simplified version to showcase my question:
import reactor.core.publisher.Sinks;
import reactor.test.StepVerifier;
import java.time.Duration;
public class TestBed {
public static void main(String[] args) {
class StringProcessor {
public final Sinks.Many<String> sink = Sinks.many().multicast().directBestEffort();
public void httpPostWebhookController(String inputData) {
sink.emitNext(
inputData.toLowerCase() + " " + inputData.toUpperCase(),
(signalType, emitResult) -> {
System.out.println("error, signalType=" + signalType + "; emitResult=" + emitResult);
return false;
}
);
}
}
final StringProcessor stringProcessor = new StringProcessor();
final StepVerifier stepVerifier = StepVerifier.create(stringProcessor.sink.asFlux())
.expectSubscription()
.expectNext("asdf ASDF")
.expectNext("qw QW")
.thenCancel();
stringProcessor.httpPostWebhookController("asdf");
stringProcessor.httpPostWebhookController("Qw");
stepVerifier.verify(Duration.ofSeconds(2));
}
}
My StepVerifier does not subscribe, and when it does subscribe (upon the verify(Duration) call), it misses the test signals. I cannot move the verify call before the httpPostWebhookController method calls, because verify is blocking and would fail since no signal arrives.
How to use StepVerifier in such scenario?

As I asked on a Udemy course (instructor Vinoth Selvaraj), the solution is to use the verifyLater call. It triggers the subscription and does not block. Fixed test code:
final StringProcessor stringProcessor = new StringProcessor();
final StepVerifier stepVerifier = StepVerifier.create(stringProcessor.sink.asFlux().log())
.expectSubscription()
.expectNext("asdf ASDF")
.expectNext("qw QW")
.thenCancel()
.verifyLater();
stringProcessor.httpPostWebhookController("asdf");
stringProcessor.httpPostWebhookController("Qw");
stepVerifier.verify(Duration.ofSeconds(2));

Related

Spring Integration testing a Files.inboundAdapter flow

I have this flow that I am trying to test but nothing works as expected. The flow itself works well but testing seems a bit tricky.
This is my flow:
@Configuration
@RequiredArgsConstructor
public class FileInboundFlow {
private final ThreadPoolTaskExecutor threadPoolTaskExecutor;
private String filePath;
@Bean
public IntegrationFlow fileReaderFlow() {
return IntegrationFlows.from(Files.inboundAdapter(new File(this.filePath))
.filterFunction(...)
.preventDuplicates(false),
endpointConfigurer -> endpointConfigurer.poller(
Pollers.fixedDelay(500)
.taskExecutor(this.threadPoolTaskExecutor)
.maxMessagesPerPoll(15)))
.transform(new UnZipTransformer())
.enrichHeaders(this::headersEnricher)
.transform(Message.class, this::modifyMessagePayload)
.route(Map.class, this::channelsRouter)
.get();
}
private String channelsRouter(Map<String, File> payload) {
boolean isZip = payload.values()
.stream()
.anyMatch(file -> isZipFile(file));
return isZip ? ZIP_CHANNEL : XML_CHANNEL; // ZIP_CHANNEL and XML_CHANNEL are PublishSubscribeChannel
}
@Bean
public SubscribableChannel xmlChannel() {
var channel = new PublishSubscribeChannel(this.threadPoolTaskExecutor);
channel.setBeanName(XML_CHANNEL);
return channel;
}
@Bean
public SubscribableChannel zipChannel() {
var channel = new PublishSubscribeChannel(this.threadPoolTaskExecutor);
channel.setBeanName(ZIP_CHANNEL);
return channel;
}
// There is a @ServiceActivator on each channel
@ServiceActivator(inputChannel = XML_CHANNEL)
public void handleXml(Message<Map<String, File>> message) {
...
}
@ServiceActivator(inputChannel = ZIP_CHANNEL)
public void handleZip(Message<Map<String, File>> message) {
...
}
// Plus a @Transformer on the XML_CHANNEL
@Transformer(inputChannel = XML_CHANNEL, outputChannel = BUS_CHANNEL)
private List<BusData> xmlFileToIngestionMessagePayload(Map<String, File> xmlFilesByName) {
return xmlFilesByName.values()
.stream()
.map(...)
.collect(Collectors.toList());
}
}
I would like to test multiple cases; the first one is checking the message payload published on each channel at the end of fileReaderFlow.
So I defined this test class:
@SpringBootTest
@SpringIntegrationTest
@ExtendWith(SpringExtension.class)
class FileInboundFlowTest {
@Autowired
private MockIntegrationContext mockIntegrationContext;
@TempDir
static Path localWorkDir;
@BeforeEach
void setUp() {
copyFileToTheFlowDir(); // here I copy a file to trigger the flow
}
@Test
void checkXmlChannelPayloadTest() throws InterruptedException {
Thread.sleep(1000); //waiting for the flow execution
PublishSubscribeChannel xmlChannel = this.getBean(XML_CHANNEL, PublishSubscribeChannel.class); // I extract the channel to listen to the message sent to it.
xmlChannel.subscribe(message -> {
assertThat(message.getPayload()).isInstanceOf(Map.class); // This is never executed
});
}
}
As expected, that test does not work, because the assertThat(message.getPayload()).isInstanceOf(Map.class); is never executed.
After reading the documentation I didn't find any hint to help me solve that issue. Any help would be appreciated! Thanks a lot.
First of all, that channel.setBeanName(XML_CHANNEL); does not affect the target bean. You do this during the bean creation phase, and the dependency injection container knows nothing about this setting: it simply does not consult it. If you really would like to dictate XML_CHANNEL as the bean name, you'd better look into the @Bean(name) attribute.
The problem in the test is that you are missing the asynchronous nature of the flow. That Files.inboundAdapter() works on a completely different thread and emits messages outside of your test method. So, even if you could subscribe to the channel in time, before any message is emitted to it, that wouldn't mean your test works correctly: the assertThat() would be performed on a different thread, and therefore there would be no real JUnit report for your test method context.
So, what I'd suggest to do is the following (a rough test sketch is shown after the list):
Have the Files.inboundAdapter() stopped at the beginning of the test, before any setup you'd like to do in the test. Or at least don't place files into that filePath, so the channel adapter doesn't emit messages.
Take the channel from the application context and, if you wish, subscribe to it or use a ChannelInterceptor.
Have an async barrier, e.g. a CountDownLatch, to pass to that subscriber.
Start the channel adapter or put a file into the dir for scanning.
Wait for the async barrier before verifying some value or state.
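A rough sketch of such a test (assuming JUnit 5 and AssertJ; the noAutoStartup pattern, the xmlChannel qualifier and copyFileToTheFlowDir() are placeholders you would adapt to your actual bean names and test setup):
import java.util.Map;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.integration.test.context.SpringIntegrationTest;
import org.springframework.messaging.Message;
import org.springframework.messaging.SubscribableChannel;
import static org.assertj.core.api.Assertions.assertThat;

@SpringBootTest
@SpringIntegrationTest(noAutoStartup = "fileReaderFlow*") // keep the inbound adapter stopped at startup (adjust the pattern)
class FileInboundFlowTest {

    @Autowired
    @Qualifier("xmlChannel")
    private SubscribableChannel xmlChannel;

    @Test
    void checkXmlChannelPayloadTest() throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(1);
        AtomicReference<Message<?>> received = new AtomicReference<>();

        // 1. Subscribe before any message can be emitted.
        xmlChannel.subscribe(message -> {
            received.set(message);
            latch.countDown();
        });

        // 2. Now trigger the flow: copy the file and/or start the channel adapter.
        copyFileToTheFlowDir();

        // 3. Wait on the async barrier, then assert on the test thread so JUnit reports failures.
        assertThat(latch.await(10, TimeUnit.SECONDS)).isTrue();
        assertThat(received.get().getPayload()).isInstanceOf(Map.class);
    }
}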

BinaryInvalidTypeException in Ignite Remote Filter

The following code is based on a combination of Ignite's CacheQueryExample and CacheContinuousQueryExample.
The code starts a fat Ignite client. Three organizations are created in the cache and we are listening to the updates to the cache. The remote filter is set to trigger the continuous query if the organization name is "Google". Peer class loading is enabled by the default examples xml config file (example-ignite.xml), so the expectation is that the remote node is aware of the Organization class.
However, the following exceptions are shown in the Ignite server's console (one for each cache entry), and all three records are returned to the client in the continuous query's event handler instead of just the "Google" record. If the filter is changed to check on the key instead of the value, the correct behavior is observed and a single record is returned to the local listener.
[08:28:43,302][SEVERE][sys-stripe-1-#2][query] CacheEntryEventFilter failed: class o.a.i.binary.BinaryInvalidTypeException: o.a.i.examples.model.Organization
[08:28:51,819][SEVERE][sys-stripe-2-#3][query] CacheEntryEventFilter failed: class o.a.i.binary.BinaryInvalidTypeException: o.a.i.examples.model.Organization
[08:28:52,692][SEVERE][sys-stripe-3-#4][query] CacheEntryEventFilter failed: class o.a.i.binary.BinaryInvalidTypeException: o.a.i.examples.model.Organization
To run the code
Start an ignite server using examples/config/example-ignite.xml as the configuration file.
Replace the content of Ignite's CacheContinuousQueryExample.java with the following code. You may have to change the path to the configuration file to an absolute path.
package org.apache.ignite.examples.datagrid;
import javax.cache.Cache;
import javax.cache.configuration.Factory;
import javax.cache.event.CacheEntryEvent;
import javax.cache.event.CacheEntryEventFilter;
import javax.cache.event.CacheEntryUpdatedListener;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.affinity.AffinityKey;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.examples.ExampleNodeStartup;
import org.apache.ignite.examples.model.Organization;
import org.apache.ignite.examples.model.Person;
import org.apache.ignite.lang.IgniteBiPredicate;
import java.util.Collection;
/**
* This example demonstrates the continuous query API.
* <p>
* Remote nodes should always be started with the special configuration file which
* enables P2P class loading: {@code 'ignite.{sh|bat} examples/config/example-ignite.xml'}.
* <p>
* Alternatively you can run {@link ExampleNodeStartup} in another JVM which will
* start a node with the {@code examples/config/example-ignite.xml} configuration.
*/
public class CacheContinuousQueryExample {
/** Organizations cache name. */
private static final String ORG_CACHE = CacheQueryExample.class.getSimpleName() + "Organizations";
/**
* Executes example.
*
* @param args Command line arguments, none required.
* @throws Exception If example execution failed.
*/
public static void main(String[] args) throws Exception {
Ignition.setClientMode(true);
try (Ignite ignite = Ignition.start("examples/config/example-ignite.xml")) {
System.out.println();
System.out.println(">>> Cache continuous query example started.");
CacheConfiguration<Long, Organization> orgCacheCfg = new CacheConfiguration<>(ORG_CACHE);
orgCacheCfg.setCacheMode(CacheMode.PARTITIONED); // Default.
orgCacheCfg.setIndexedTypes(Long.class, Organization.class);
// Auto-close cache at the end of the example.
try {
ignite.getOrCreateCache(orgCacheCfg);
// Create new continuous query.
ContinuousQuery<Long, Organization> qry = new ContinuousQuery<>();
// Callback that is called locally when update notifications are received.
qry.setLocalListener(new CacheEntryUpdatedListener<Long, Organization>() {
@Override public void onUpdated(Iterable<CacheEntryEvent<? extends Long, ? extends Organization>> evts) {
for (CacheEntryEvent<? extends Long, ? extends Organization> e : evts)
System.out.println("Updated entry [key=" + e.getKey() + ", val=" + e.getValue() + ']');
}
});
// This filter will be evaluated remotely on all nodes.
// Entries that pass this filter will be sent to the caller.
qry.setRemoteFilterFactory(new Factory<CacheEntryEventFilter<Long, Organization>>() {
@Override public CacheEntryEventFilter<Long, Organization> create() {
return new CacheEntryEventFilter<Long, Organization>() {
@Override public boolean evaluate(CacheEntryEvent<? extends Long, ? extends Organization> e) {
//return e.getKey() == 3;
return e.getValue().name().equals("Google");
}
};
}
});
ignite.getOrCreateCache(ORG_CACHE).query(qry);
// Populate caches.
initialize();
Thread.sleep(2000);
}
finally {
// Distributed cache could be removed from cluster only by #destroyCache() call.
ignite.destroyCache(ORG_CACHE);
}
}
}
/**
* Populate cache with test data.
*/
private static void initialize() {
IgniteCache<Long, Organization> orgCache = Ignition.ignite().cache(ORG_CACHE);
// Clear cache before running the example.
orgCache.clear();
// Organizations.
Organization org1 = new Organization("ApacheIgnite");
Organization org2 = new Organization("Apple");
Organization org3 = new Organization("Google");
orgCache.put(org1.id(), org1);
orgCache.put(org2.id(), org2);
orgCache.put(org3.id(), org3);
}
}
Here is an interim workaround that involves using and deserializing binary objects. Hopefully, someone can post a proper solution.
Here is the main() function modified to work with BinaryObjects instead of the Organization object:
public static void main(String[] args) throws Exception {
Ignition.setClientMode(true);
try (Ignite ignite = Ignition.start("examples/config/example-ignite.xml")) {
System.out.println();
System.out.println(">>> Cache continuous query example started.");
CacheConfiguration<Long, Organization> orgCacheCfg = new CacheConfiguration<>(ORG_CACHE);
orgCacheCfg.setCacheMode(CacheMode.PARTITIONED); // Default.
orgCacheCfg.setIndexedTypes(Long.class, Organization.class);
// Auto-close cache at the end of the example.
try {
ignite.getOrCreateCache(orgCacheCfg);
// Create new continuous query.
ContinuousQuery<Long, BinaryObject> qry = new ContinuousQuery<>();
// Callback that is called locally when update notifications are received.
qry.setLocalListener(new CacheEntryUpdatedListener<Long, BinaryObject>() {
@Override public void onUpdated(Iterable<CacheEntryEvent<? extends Long, ? extends BinaryObject>> evts) {
for (CacheEntryEvent<? extends Long, ? extends BinaryObject> e : evts) {
Organization org = e.getValue().deserialize();
System.out.println("Updated entry [key=" + e.getKey() + ", val=" + org + ']');
}
}
});
// This filter will be evaluated remotely on all nodes.
// Entries that pass this filter will be sent to the caller.
qry.setRemoteFilterFactory(new Factory<CacheEntryEventFilter<Long, BinaryObject>>() {
@Override public CacheEntryEventFilter<Long, BinaryObject> create() {
return new CacheEntryEventFilter<Long, BinaryObject>() {
@Override public boolean evaluate(CacheEntryEvent<? extends Long, ? extends BinaryObject> e) {
//return e.getKey() == 3;
//return e.getValue().name().equals("Google");
return e.getValue().field("name").equals("Google");
}
};
}
});
ignite.getOrCreateCache(ORG_CACHE).withKeepBinary().query(qry);
// Populate caches.
initialize();
Thread.sleep(2000);
}
finally {
// Distributed cache could be removed from cluster only by #destroyCache() call.
ignite.destroyCache(ORG_CACHE);
}
}
}
Peer class loading is enabled ... so the expectation is that the remote node is aware of the Organization class.
This is the problem. You can't peer class load "model" objects, i.e., objects used to create the table.
Two solutions:
Deploy the model class(es) to the server ahead of time. The rest of the code -- the filters -- can be peer class loaded.
As @rgb1380 demonstrates, you can use BinaryObjects, which is the underlying data format.
Another small point: to use "auto-close" you need to structure your code like this:
// Auto-close cache at the end of the example.
try (var cache = ignite.getOrCreateCache(orgCacheCfg)) {
// do stuff
}

Spring AOP Pointcut if condition

I am facing an issue with pointcuts. I am trying to enable the @Around advice when log.isDebugEnabled() is true; for this I am trying the following code:
#Pointcut("within(org.apache.commons.logging.impl.Log4JLogger..*)")
public boolean isDebugEnabled() {
return log.isDebugEnabled();
}
and for testing purposes, I have two aspects configured
#AfterThrowing(value = "!isDebugEnabled()", throwing = "exception")
and
#Around(value = "isDebugEnabled()")
But every time I execute the code it always goes to the @AfterThrowing advice, and it is not clear to me what I am doing wrong!
I am using aspectjweaver 1.8.9 with Spring MVC 4.3!
Here is a sample class emulating the issue:
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.AfterThrowing;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Pointcut;
import org.springframework.stereotype.Component;
import org.springframework.util.StopWatch;
@Component
@Aspect
public class SampleAspect {
private static final Log log = LogFactory.getLog(SampleAspect.class);
#Pointcut("within(org.apache.commons.logging.impl.Log4JLogger..*)")
public boolean isDebugEnabled() {
return log.isDebugEnabled();
}
#AfterThrowing(value = " !isDebugEnabled()", throwing = "exception")
public void getCalledOnException(JoinPoint joinPoint, Exception exception) {
log.error("Method " + joinPoint.getSignature() + " Throws the exception " + exception.getStackTrace());
}
// The around method is never executed, even when log.isDebugEnabled() == true
@Around(value = "isDebugEnabled()")
public Object aroundTest(ProceedingJoinPoint proceedingJoinPoint) throws Throwable {
StopWatch stopWatch = new StopWatch();
stopWatch.start();
final Object proceed;
try {
proceed = proceedingJoinPoint.proceed();
} catch (Exception e) {
throw e;
}
stopWatch.stop();
log.debug("It took " + stopWatch.getTotalTimeSeconds() + " seconds to be proceed");
return proceed;
}
}
Edit:
I tried to use if() from AspectJ, but it didn't work in my project either.
#Pointcut("call(* *.*(int)) && args(i) && if()")
public static boolean someCallWithIfTest(int i) {
return i > 0;
}
Not sure if I need to add a different import or so, but I didn't manage to make it work.
A couple of points from the documentation:
== Spring AOP Capabilities and Goals
Spring AOP currently supports only method execution join points
(advising the execution of methods on Spring beans)
=== Declaring a Pointcut
In the @AspectJ annotation-style of AOP, a pointcut signature is provided
by a regular method definition, and the pointcut expression is
indicated by using the @Pointcut annotation (the method serving as the
pointcut signature must have a void return type).
Apache Commons classes are not managed by the Spring container, so the following will not be honoured.
#Pointcut("within(org.apache.commons.logging.impl.Log4JLogger..*)")
The following pointcut method is not valid either, because a pointcut signature method must have a void return type:
public boolean isDebugEnabled() {
return log.isDebugEnabled();
}
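Given those constraints, a workaround sketch (my own illustration, not from the question; com.example.service is a placeholder package) is to keep the pointcut as a plain void method targeting your own Spring beans and to evaluate log.isDebugEnabled() inside the advice body instead:
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Pointcut;
import org.springframework.stereotype.Component;
import org.springframework.util.StopWatch;

@Component
@Aspect
public class TimingAspect {

    private static final Log log = LogFactory.getLog(TimingAspect.class);

    // Plain pointcut signature: void return type, no runtime logic, targeting your own beans.
    @Pointcut("within(com.example.service..*)")
    public void serviceMethods() { }

    @Around("serviceMethods()")
    public Object timeWhenDebugEnabled(ProceedingJoinPoint pjp) throws Throwable {
        if (!log.isDebugEnabled()) {
            return pjp.proceed(); // skip the timing overhead when DEBUG is off
        }
        StopWatch stopWatch = new StopWatch();
        stopWatch.start();
        Object result = pjp.proceed();
        stopWatch.stop();
        log.debug("It took " + stopWatch.getTotalTimeSeconds() + " seconds to proceed");
        return result;
    }
}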

Spring WebFlux (Flux): how to publish dynamically

I am new to reactive programming and Spring WebFlux. I want to make my App 1 publish Server-Sent Events through a Flux and my App 2 listen to it continuously.
I want the Flux to publish on demand (e.g. when something happens). All the examples I found use Flux.interval to publish events periodically, and there seems to be no way to append/modify the content of a Flux once it is created.
How can I achieve my goal? Or am I totally wrong conceptually?
Publish "dynamically" using FluxProcessor and FluxSink
One of the techniques to supply data manually to a Flux is using the FluxProcessor#sink method, as in the following example:
@SpringBootApplication
@RestController
public class DemoApplication {
final FluxProcessor processor;
final FluxSink sink;
final AtomicLong counter;
public static void main(String[] args) {
SpringApplication.run(DemoApplication.class, args);
}
public DemoApplication() {
this.processor = DirectProcessor.create().serialize();
this.sink = processor.sink();
this.counter = new AtomicLong();
}
#GetMapping("/send")
public void test() {
sink.next("Hello World #" + counter.getAndIncrement());
}
@RequestMapping(produces = MediaType.TEXT_EVENT_STREAM_VALUE)
public Flux<ServerSentEvent> sse() {
return processor.map(e -> ServerSentEvent.builder(e).build());
}
}
Here, I created a DirectProcessor in order to support multiple subscribers that will listen to the data stream. Also, I applied FluxProcessor#serialize, which provides safe support for multiple producers (invocation from different threads without violating the Reactive Streams spec rules, especially rule 1.3). Finally, by calling "http://localhost:8080/send" we will see the message Hello World #1 (of course, only if you connected to "http://localhost:8080" previously).
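As a quick way to observe that behaviour, a small client sketch (my own, not part of the original answer) can subscribe to the SSE stream before /send is called:
import org.springframework.http.MediaType;
import org.springframework.web.reactive.function.client.WebClient;

public class SseClientCheck {
    public static void main(String[] args) throws InterruptedException {
        WebClient.create("http://localhost:8080")
                .get()
                .accept(MediaType.TEXT_EVENT_STREAM)
                .retrieve()
                .bodyToFlux(String.class)
                .subscribe(System.out::println); // prints each "Hello World #n" as /send is hit
        Thread.sleep(60_000); // keep the JVM alive while listening
    }
}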
Update For Reactor 3.4
With Reactor 3.4 you have a new API called reactor.core.publisher.Sinks. The Sinks API offers a fluent builder for manual data sending, which lets you specify things like the number of elements in the stream, backpressure behavior, the number of supported subscribers, and replay capabilities:
@SpringBootApplication
@RestController
public class DemoApplication {
final Sinks.Many sink;
final AtomicLong counter;
public static void main(String[] args) {
SpringApplication.run(DemoApplication.class, args);
}
public DemoApplication() {
this.sink = Sinks.many().multicast().onBackpressureBuffer();
this.counter = new AtomicLong();
}
#GetMapping("/send")
public void test() {
EmitResult result = sink.tryEmitNext("Hello World #" + counter.getAndIncrement());
if (result.isFailure()) {
// do something here, since emission failed
}
}
@RequestMapping(produces = MediaType.TEXT_EVENT_STREAM_VALUE)
public Flux<ServerSentEvent> sse() {
return sink.asFlux().map(e -> ServerSentEvent.builder(e).build());
}
}
Note that message sending via the Sinks API introduces a new concept of emission and its result. The reason for such an API is that Reactor extends Reactive Streams and has to follow backpressure control. That said, if you emit more signals than were requested and the underlying implementation does not support buffering, your message will not be delivered. Therefore, tryEmitNext returns an EmitResult which indicates whether the message was sent or not.
Also, note that by default the Sinks API gives a serialized version of the Sink, which means you don't have to care about concurrency. However, if you know in advance that the emission of messages is serial, you may build a Sinks.unsafe() version which does not serialize the given messages.
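To illustrate that, a small sketch (my own, not from the answer above) of emitNext with an explicit EmitFailureHandler that retries only when a concurrent, non-serialized emission is detected:
Sinks.Many<String> sink = Sinks.many().multicast().onBackpressureBuffer();

// The handler returns true to retry the emission, or false to give up with the given EmitResult.
sink.emitNext("Hello World",
        (signalType, emitResult) -> emitResult == Sinks.EmitResult.FAIL_NON_SERIALIZED);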
Just another idea: using EmitterProcessor as a gateway to a Flux.
import reactor.core.publisher.EmitterProcessor;
import reactor.core.publisher.Flux;
public class MyEmitterProcessor {
EmitterProcessor<String> emitterProcessor;
public static void main(String args[]) {
MyEmitterProcessor myEmitterProcessor = new MyEmitterProcessor();
Flux<String> publisher = myEmitterProcessor.getPublisher();
myEmitterProcessor.onNext("A");
myEmitterProcessor.onNext("B");
myEmitterProcessor.onNext("C");
myEmitterProcessor.complete();
publisher.subscribe(x -> System.out.println(x));
}
public Flux<String> getPublisher() {
emitterProcessor = EmitterProcessor.create();
return emitterProcessor.map(x -> "consume: " + x);
}
public void onNext(String nextString) {
emitterProcessor.onNext(nextString);
}
public void complete() {
emitterProcessor.onComplete();
}
}
For more info, see the Reactor documentation. There is a recommendation in the documentation itself: "Most of the time, you should try to avoid using a Processor. They are harder to use correctly and prone to some corner cases." BUT I don't know which kind of corner cases.
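Since EmitterProcessor is deprecated as of Reactor 3.4, a roughly equivalent gateway can be sketched with the Sinks API shown earlier (my adaptation of the example above, not a drop-in guarantee):
import reactor.core.publisher.Flux;
import reactor.core.publisher.Sinks;

public class MySinkGateway {
    private final Sinks.Many<String> sink = Sinks.many().multicast().onBackpressureBuffer();

    public static void main(String[] args) {
        MySinkGateway gateway = new MySinkGateway();
        Flux<String> publisher = gateway.getPublisher();
        gateway.onNext("A");
        gateway.onNext("B");
        gateway.onNext("C");
        gateway.complete();
        // The first subscriber should receive the values buffered before subscription, then the completion.
        publisher.subscribe(System.out::println);
    }

    public Flux<String> getPublisher() {
        return sink.asFlux().map(x -> "consume: " + x);
    }

    public void onNext(String nextString) {
        sink.emitNext(nextString, Sinks.EmitFailureHandler.FAIL_FAST);
    }

    public void complete() {
        sink.emitComplete(Sinks.EmitFailureHandler.FAIL_FAST);
    }
}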

Unpredictable result of DriveId.getResourceId() in Google Drive Android API

The issue is that the 'resourceID' from 'DriveId.getResourceId()' is not available (returns NULL) on newly created files (product of 'DriveFolder.createFile(GAC, meta, cont)'). If the file is retrieved by a regular list or query procedure, the 'resourceID' is correct.
I suspect it is a timing/latency issue, but it is not clear if there is an application action that would force refresh. The 'Drive.DriveApi.requestSync(GAC)' seems to have no effect.
UPDATE (07/22/2015)
Thanks to the prompt response from Steven Bazyl (see comments below), I finally have a satisfactory solution using Completion Events. Here are two minified code snippets that deliver the ResourceId to the app as soon as the newly created file is propagated to the Drive:
File creation, add change subscription:
public class CreateEmptyFileActivity extends BaseDemoActivity {
private static final String TAG = "_X_";
@Override
public void onConnected(Bundle connectionHint) { super.onConnected(connectionHint);
MetadataChangeSet meta = new MetadataChangeSet.Builder()
.setTitle("EmptyFile.txt").setMimeType("text/plain")
.build();
Drive.DriveApi.getRootFolder(getGoogleApiClient())
.createFile(getGoogleApiClient(), meta, null,
new ExecutionOptions.Builder()
.setNotifyOnCompletion(true)
.build()
)
.setResultCallback(new ResultCallback<DriveFileResult>() {
@Override
public void onResult(DriveFileResult result) {
if (result.getStatus().isSuccess()) {
DriveId driveId = result.getDriveFile().getDriveId();
Log.d(TAG, "Created a empty file: " + driveId);
DriveFile file = Drive.DriveApi.getFile(getGoogleApiClient(), driveId);
file.addChangeSubscription(getGoogleApiClient());
}
}
});
}
}
Event Service, catches the completion:
public class ChngeSvc extends DriveEventService {
private static final String TAG = "_X_";
@Override
public void onCompletion(CompletionEvent event) { super.onCompletion(event);
DriveId driveId = event.getDriveId();
Log.d(TAG, "onComplete: " + driveId.getResourceId());
switch (event.getStatus()) {
case CompletionEvent.STATUS_CONFLICT: Log.d(TAG, "STATUS_CONFLICT"); event.dismiss(); break;
case CompletionEvent.STATUS_FAILURE: Log.d(TAG, "STATUS_FAILURE"); event.dismiss(); break;
case CompletionEvent.STATUS_SUCCESS: Log.d(TAG, "STATUS_SUCCESS "); event.dismiss(); break;
}
}
}
Under normal circumstances (wifi), I get the ResourceId almost immediately.
20:40:53.247﹕Created a empty file: DriveId:CAESABiiAiDGsfO61VMoAA==
20:40:54.305: onComplete, ResourceId: 0BxOS7mTBMR_bMHZRUjJ5NU1ZOWs
... done for now.
ORIGINAL POST, deprecated, left here for reference.
I let this answer sit for a year hoping that GDAA would develop a solution that works. The reason for my nagging is simple. If my app creates a file, it needs to broadcast this fact to its buddies (other devices, for instance) with an ID that is meaningful (that is, the ResourceId). It is a trivial task under the REST API, where the ResourceId comes back as soon as the file is successfully created.
Needless to say, I understand the GDAA philosophy of shielding the app from network primitives, caching, batching, ... But clearly, in this situation, the ResourceID is available long before it is delivered to the app.
Originally, I implemented Cheryl Simon's suggestion and added a ChangeListener on a newly created file, hoping to get the ResourceID when the file is propagated. Using classic CreateEmptyFileActivity from android-demos, I smacked together the following test code:
public class CreateEmptyFileActivity extends BaseDemoActivity {
private static final String TAG = "CreateEmptyFileActivity";
final private ChangeListener mChgeLstnr = new ChangeListener() {
@Override
public void onChange(ChangeEvent event) {
Log.d(TAG, "event: " + event + " resId: " + event.getDriveId().getResourceId());
}
};
@Override
public void onConnected(Bundle connectionHint) { super.onConnected(connectionHint);
MetadataChangeSet meta = new MetadataChangeSet.Builder()
.setTitle("EmptyFile.txt").setMimeType("text/plain")
.build();
Drive.DriveApi.getRootFolder(getGoogleApiClient())
.createFile(getGoogleApiClient(), meta, null)
.setResultCallback(new ResultCallback<DriveFileResult>() {
@Override
public void onResult(DriveFileResult result) {
if (result.getStatus().isSuccess()) {
DriveId driveId = result.getDriveFile().getDriveId();
Log.d(TAG, "Created a empty file: " + driveId);
Drive.DriveApi.getFile(getGoogleApiClient(), driveId).addChangeListener(getGoogleApiClient(), mChgeLstnr);
}
}
});
}
}
... and waited for something to happen. The file was happily uploaded to the Drive within seconds, but there was no onChange() event. 10 minutes, 20 minutes, ... I could not find any way to make the ChangeListener wake up.
So the only other solution I could come up with was to nudge the GDAA. I implemented a simple handler-poker that tickles the metadata until something happens:
public class CreateEmptyFileActivity extends BaseDemoActivity {
private static final String TAG = "CreateEmptyFileActivity";
final private ChangeListener mChgeLstnr = new ChangeListener() {
@Override
public void onChange(ChangeEvent event) {
Log.d(TAG, "event: " + event + " resId: " + event.getDriveId().getResourceId());
}
};
static DriveId driveId;
private static final int ENOUGH = 4; // nudge 4x, 1+2+3+4 = 10seconds
private static int mWait = 1000;
private int mCnt;
private Handler mPoker;
private final Runnable mPoke = new Runnable() { public void run() {
if (mPoker != null && driveId != null && driveId.getResourceId() == null && (mCnt++ < ENOUGH)) {
MetadataChangeSet meta = new MetadataChangeSet.Builder().build();
Drive.DriveApi.getFile(getGoogleApiClient(), driveId).updateMetadata(getGoogleApiClient(), meta).setResultCallback(
new ResultCallback<DriveResource.MetadataResult>() {
@Override
public void onResult(DriveResource.MetadataResult result) {
if (result.getStatus().isSuccess() && result.getMetadata().getDriveId().getResourceId() != null)
Log.d(TAG, "resId COOL " + result.getMetadata().getDriveId().getResourceId());
else
mPoker.postDelayed(mPoke, mWait *= 2);
}
}
);
} else {
mPoker = null;
}
}};
@Override
public void onConnected(Bundle connectionHint) { super.onConnected(connectionHint);
MetadataChangeSet meta = new MetadataChangeSet.Builder()
.setTitle("EmptyFile.txt").setMimeType("text/plain")
.build();
Drive.DriveApi.getRootFolder(getGoogleApiClient())
.createFile(getGoogleApiClient(), meta, null)
.setResultCallback(new ResultCallback<DriveFileResult>() {
@Override
public void onResult(DriveFileResult result) {
if (result.getStatus().isSuccess()) {
driveId = result.getDriveFile().getDriveId();
Log.d(TAG, "Created a empty file: " + driveId);
Drive.DriveApi.getFile(getGoogleApiClient(), driveId).addChangeListener(getGoogleApiClient(), mChgeLstnr);
mCnt = 0;
mPoker = new Handler();
mPoker.postDelayed(mPoke, mWait);
}
}
});
}
}
And voila, 4 seconds (give or take) later, the ChangeListener delivers a new shiny ResourceId. Of course, the ChangeListener thus becomes obsolete, since the poker routine gets the ResourceId as well.
So this is the answer for those who can't wait for the ResourceId. Which brings up the follow-up question:
Why do I have to tickle the metadata (or re-commit content), very likely creating unnecessary network traffic, to get the onChange() event, when I can clearly see that the file was propagated long ago and GDAA has the ResourceId available?
ResourceIds become available when the newly created resource is committed to the server. In the case of a device that is offline, this could be arbitrarily long after the initial file creation. It will happen as soon as possible after the creation request though, so you don't need to do anything to speed it along.
If you really need it right away, you could conceivably use the change notifications to listen for the resourceId to change.