I don't see any synchronous await API on com.google.android.gms.tasks.Task. Am I missing something? I am trying to migrate to the new *Client classes in Play services. I already designed my code to run on another thread and use PendingResult.await. My code was like this:
val pendingResult = Auth.GoogleSignInApi.silentSignIn(TwinkleApplication.instance.gapiClient)
val account = pendingResult.await(10, TimeUnit.SECONDS)
I wish to use this, but don't know how to continue.
val signin = GoogleSignIn.getClient(ctx, Global.getGSO())
val task = signin.silentSignIn()
The Tasks class includes a static await method - in Java I'm doing this:
GoogleSignInClient googleSignInClient = GoogleSignIn.getClient(context, gso);
Task<GoogleSignInAccount> task = googleSignInClient.silentSignIn();
try {
GoogleSignInAccount account = Tasks.await(task);
...
} catch (ExecutionException e) {
// task failed
} catch (InterruptedException e) {
// an interrupt occurred while waiting for the task to finish
}
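If you also want the 10-second limit from the original pendingResult.await(10, TimeUnit.SECONDS) call, Tasks.await has an overload that accepts a timeout. A sketch along those lines (same context/gso variables as above; the 10-second value is just carried over from the question):
GoogleSignInClient googleSignInClient = GoogleSignIn.getClient(context, gso);
Task<GoogleSignInAccount> task = googleSignInClient.silentSignIn();
try {
    // block for at most 10 seconds, mirroring pendingResult.await(10, TimeUnit.SECONDS)
    GoogleSignInAccount account = Tasks.await(task, 10, TimeUnit.SECONDS);
    // ... use account
} catch (ExecutionException e) {
    // task failed
} catch (InterruptedException e) {
    // an interrupt occurred while waiting for the task to finish
} catch (TimeoutException e) {
    // the task did not complete within the timeout
}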
I am currently using SqlDependency notifications to detect changes in a table and process them. The problem is that the notification gets called while the first request is still being processed, which causes duplicate processing.
private void ProcessData()
{
try
{
m_Guids = new List<Guid>();
using (SqlCommand command = new SqlCommand("SP_XXX_SELECT", m_sqlConn))
{
command.CommandType = CommandType.StoredProcedure;
command.Notification = null;
SqlDependency dependency = new SqlDependency(command);
dependency.OnChange += new OnChangeEventHandler(OnDependencyChange);
SqlDependency.Start(m_ConnectionString, m_QueueName);
if (m_sqlConn.State == ConnectionState.Closed)
{
m_sqlConn.Open();
}
using (SqlDataReader reader = command.ExecuteReader())
{
if (reader.HasRows)
{
while (reader.Read())
{
m_Guids.Add(reader.GetGuid(0));
}
}
}
Console.WriteLine(m_Guids.Count.ToString());
ProcessGuids();
}
}
catch (Exception ex)
{
//SendFailureEmail
}
}
private void OnDependencyChange(object sender, SqlNotificationEventArgs e)
{
SqlDependency dependency = sender as SqlDependency;
dependency.OnChange -= OnDependencyChange;
ProcessData();
}
public void OnStart()
{
SqlDependency.Stop(m_ConnectionString, m_QueueName);
SqlDependency.Start(m_ConnectionString, m_QueueName);
m_sqlConn = new SqlConnection(m_ConnectionString);
}
The ProcessData method gets called again while it is still in the middle of processing (ProcessGuids). Should I subscribe to the event only after processing all the data?
If I don't subscribe until processing is complete, what happens to data that changed during processing? I believe it doesn't get notified until the next change happens. What is the correct way of doing this, or am I doing something wrong?
Thanks
SqlDependency.OnChange is raised not only on data changes.
In OnDependencyChange you must check e.Type / e.Source / e.Info.
For example, the combination {Type = Subscribe, Source = Statement, Info = Invalid} means "Statement not ready for notification, no notification started".
See Creating a Query for Notification for the SQL statement requirements; the SELECT statements in your stored procedure must follow them.
Additional requirements for stored procedures are not well documented. Known restrictions for SP:
Use of SET NOCOUNT (ON and OFF) is prohibited.
Use of RETURN is prohibited.
I am new to developing plugins and was wondering what causes a test plugin to hang when started, i.e. Eclipse becomes unresponsive.
I know my code works: I developed a voice recognition plugin that writes what is said to the screen, and when I open Notepad everything I say is printed into it.
So I was wondering, am I missing something in the plugin life-cycle that causes the IDE to hang when my plugin is started?
package recognise.handlers;
public class SampleHandler extends AbstractHandler {
public SampleHandler() {
}
/**
* the command has been executed, so extract the needed information
* from the application context.
*/
public Object execute(ExecutionEvent event) throws ExecutionException {
boolean finish = false;
IWorkbenchWindow window = HandlerUtil.getActiveWorkbenchWindowChecked(event);
MessageDialog.openInformation(
window.getShell(),
"Recognise",
"Starting Recognition");
TakeInput start = new TakeInput();
//Stage a = new Stage();
//SceneManager scene = new SceneManager();
try {
start.startVoiceRecognition(finish);
//scene.start(a);
} catch (IOException | AWTException e) {
e.printStackTrace();
}
return null;
}
}
Does the start.startVoiceRecognition() need to be threaded?
Thanks in advance and let me know if you would like to see my manifest/activator etc.
Conclusion
Added a Job separate from the UI thread
/*
* Start a new Job separate from the main thread so the UI will not
* become unresponsive when the plugin starts
*/
public void runVoiceRecognitionJob() {
Job job = new Job("Voice Recognition Job") {
@Override
protected IStatus run(IProgressMonitor monitor) {
TakeInput start = new TakeInput();
try {
start.startVoiceRecognition(true);
} catch (IOException | AWTException e) {
// recognition could not be started or the input device failed
e.printStackTrace();
} catch (Exception e) {
// anything else thrown by the recognition engine
e.printStackTrace();
}
// use this to open a Shell in the UI thread
return Status.OK_STATUS;
}
};
job.setUser(true);
job.schedule();
}
As shown, start.startVoiceRecognition() runs on the UI thread, so it blocks the UI thread until it finishes and the app is unresponsive during that time. If it is doing a significant amount of work, either use a Thread or an Eclipse Job (which runs the work in a background thread managed by Eclipse).
To unblock your UI you have to use the Display thread.
/**
* the command has been executed, so extract the needed information
* from the application context.
*/
public Object execute(ExecutionEvent event) throws ExecutionException {
// resolve the workbench window here so the checked ExecutionException can propagate from execute()
final IWorkbenchWindow window = HandlerUtil.getActiveWorkbenchWindowChecked(event);
Display.getDefault().asyncExec(new Runnable() {
public void run() {
boolean finish = false;
MessageDialog.openInformation(
window.getShell(),
"Recognise",
"Starting Recognition");
TakeInput start = new TakeInput();
//Stage a = new Stage();
//SceneManager scene = new SceneManager();
try {
start.startVoiceRecognition(finish);
//scene.start(a);
} catch (IOException | AWTException e) {
e.printStackTrace();
}
MessageDialog.openInformation(window.getShell(), "Your Popup ",
"Your job has finished.");
}
});
return null;
}
You can use Display.getDefault().asyncExec() as shown above, so your UI will be unblocked while your non-UI code is executing.
Is there any way to stop the actor system from shutting down and starting up between tests?
I keep getting Akka exceptions complaining about the actor system being down.
I can mock/stub to get rid of the reliance on the fake app, but that needs a bit of work - I'm hoping to be able to start one static test application and run different things in it.
E.g. I have a (crappy) test like this - can I somehow reuse the running app between tests? It still seems to shut down somewhere along the line.
running(Fixtures.testSvr, HTMLUNIT, browser -> new JavaTestKit(system) {{
F.Promise<TestResponseObject> resultPromise = client.makeRequest("request", "parameterObject", system.dispatcher());
boolean gotUnmarshallingException = false;
try {
Await.result(resultPromise.wrapped(), TotesTestFixtures.timeout.duration());
} catch (Exception e) {
if ((e instanceof exceptions.UnmarshallingException)) {
gotUnmarshallingException = true;
}
}
if(gotUnmarshallingException == false) fail();
}});
You can try to get rid of the running method (it stops the test server at the end) and initialize a test server yourself, but I don't know if Akka will be available to you:
@BeforeClass
public static void start() {
testServer = testServer(PORT, fakeApplication(inMemoryDatabase()));
testServer.start();
// Maybe you don't need this...
try {
testbrowser = new TestBrowser(HTMLUNIT, "http://localhost:" + PORT);
} catch (Exception e) {
}
}
@Test
public void testOne() {
new JavaTestKit() {
// (...)
}
}
@AfterClass
public static void stop() {
testServer.stop();
}
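If the Akka exceptions are specifically about the actor system being shut down between tests, another option is to stop relying on the system tied to the fake application and create one ActorSystem for the whole test class, shutting it down only after all tests have run. A rough sketch, assuming plain Akka's JavaTestKit and JUnit (the method and system names here are just illustrative):
// needs akka.actor.ActorSystem and akka.testkit.JavaTestKit
private static ActorSystem system;

@BeforeClass
public static void setupSystem() {
    // one actor system shared by every test in this class
    system = ActorSystem.create("TestSystem");
}

@AfterClass
public static void teardownSystem() {
    // shut it down once, after the last test has run
    JavaTestKit.shutdownActorSystem(system);
    system = null;
}

@Test
public void testOne() {
    new JavaTestKit(system) {{
        // (...) run your requests against the shared system here
    }};
}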
I'm wondering if anyone has encountered this before:
I handle a command, and in the handler, I save an event to the eventstore (joliver).
Right after dispatching, the handler for the same command runs again.
I know it's the same command because the GUID on the command is the same.
After five tries, NServiceBus says the command failed due to reaching the maximum number of retries.
So obviously the command failed, but I don't get any indication of what failed.
I've put the contents of the dispatcher in a try/catch, but no error is caught. After the code exits the dispatcher, the handler fires again as if something had errored out.
Tracing through the code, the events are saved to the database (I see the row), the dispatcher runs, the Dispatched column is set to true, and then the handler handles the command again, the process repeats, and another row gets inserted into the commits table.
What could be failing? Am I not setting a success flag somewhere in the event store?
If I decouple the EventStore from NServiceBus, both run as expected with no retries or failures.
The dispatcher:
public void Dispatch(Commit commit)
{
for (var i = 0; i < commit.Events.Count; i++)
{
try
{
var eventMessage = commit.Events[i];
var busMessage = (T)eventMessage.Body;
//bus.Publish(busMessage);
}
catch (Exception ex)
{
throw; // rethrow without resetting the stack trace
}
}
}
The Wireup.Init():
private static IStoreEvents WireupEventStore()
{
return Wireup.Init()
.LogToOutputWindow()
.UsingSqlPersistence("EventStore")
.InitializeStorageEngine()
.UsingBinarySerialization()
//.UsingJsonSerialization()
// .Compress()
//.UsingAsynchronousDispatchScheduler()
// .DispatchTo(new NServiceBusCommitDispatcher<T>())
.UsingSynchronousDispatchScheduler()
.DispatchTo(new DelegateMessageDispatcher(DispatchCommit))
.Build();
}
It turned out I had a transaction scope opened in the Save method that I never completed.
public static void Save(AggregateRoot root)
{
// we can call CreateStream(StreamId) if we know there isn't going to be any data.
// or we can call OpenStream(StreamId, 0, int.MaxValue) to read all commits,
// if no commits exist then it creates a new stream for us.
using (var scope = new TransactionScope())
using (var eventStore = WireupEventStore())
using (var stream = eventStore.OpenStream(root.Id, 0, int.MaxValue))
{
var events = root.GetUncommittedChanges();
foreach (var e in events)
{
stream.Add(new EventMessage { Body = e });
}
var guid = Guid.NewGuid();
stream.CommitChanges(guid);
root.MarkChangesAsCommitted();
scope.Complete(); // <-- missing this
}
}
I have a self-hosted service that processes long-running jobs submitted by a client over a net.tcp binding. While a job is running (within a Task), the service pushes status updates to the client via a one-way callback. This works fine; however, when I attempt to invoke another callback to notify the client that the job has completed (also one-way), the callback is never received/invoked on the client. I do not get any exceptions in this process.
My Callback contract looks like this:
public interface IWorkflowCallback
{
[OperationContract(IsOneWay = true)]
[ApplySharedTypeResolverAttribute]
void UpdateStatus(WorkflowJobStatusUpdate StatusUpdate);
[OperationContract(IsOneWay = true)]
[ApplySharedTypeResolverAttribute]
void NotifyJobCompleted(WorkflowJobCompletionNotice Notice);
}
Code from the service that invokes the callbacks (not in the service implementation itself, but called directly from it):
public WorkflowJobTicket AddToQueue(WorkflowJobRequest Request)
{
if (this.workflowEngine.WorkerPoolFull)
{
throw new QueueFullException();
}
var user = ServiceUserManager.CurrentUser;
var context = OperationContext.Current;
var workerId = this.workflowEngine.RunWorkflowJob(user, Request, new Object[]{new DialogServiceExtension(context)});
var workerjob = this.workflowEngine.FindJob(workerId);
var ticket = new WorkflowJobTicket()
{
JobRequestId = Request.JobRequestId,
JobTicketId = workerId
};
user.RegisterTicket<IWorkflowCallback>(ticket);
workerjob.WorkflowJobCompleted += this.NotifyJobComplete;
workerjob.Status.PropertyChanged += this.NotifyJobStatusUpdate;
this.notifyQueueChanged();
return ticket;
}
protected void NotifyJobStatusUpdate(object sender, PropertyChangedEventArgs e)
{
var user = ServiceUserManager.GetInstance().GetUserWithTicket((sender as WorkflowJobStatus).JobId);
Action<IWorkflowCallback> action = (callback) =>
{
ICommunicationObject communicationCallback = (ICommunicationObject)callback;
if (communicationCallback.State == CommunicationState.Opened)
{
try
{
var updates = (sender as WorkflowJobStatus).GetUpdates();
callback.UpdateStatus(updates);
}
catch (Exception)
{
communicationCallback.Abort();
}
}
};
user.Invoke<IWorkflowCallback>(action);
}
protected void NotifyJobComplete(WorkflowJob job, EventArgs e)
{
var user = ServiceUserManager.GetInstance().GetUserWithTicket(job.JobId);
Action<IWorkflowCallback> action = (callback) =>
{
ICommunicationObject communicationCallback = (ICommunicationObject)callback;
if (communicationCallback.State == CommunicationState.Opened)
{
try
{
var notice = new WorkflowJobCompletionNotice()
{
Ticket = user.GetTicket(job.JobId),
RuntimeOptions = job.RuntimeOptions
};
callback.NotifyJobCompleted(notice);
}
catch (Exception)
{
communicationCallback.Abort();
}
}
};
user.Invoke<IWorkflowCallback>(action);
}
In the user.Invoke<IWorkflowCallback>(action) method, the Action is passed an instance of the callback channel via OperationContext.GetCallbackChannel<IWorkflowCallback>().
I can see that the task that invokes the job completion notice is executed by the service, yet I do not receive the call on the client end. Further, the update callback can still be invoked successfully after a completion notice is sent, so it does not appear that the channel is quietly faulting.
Any idea why, out of these two callbacks that are implemented almost identically, only one works?
Thanks in advance for any insight.