Hangfire - Recurring job can’t be scheduled, see inner exception for details

I have an app that is live on three different servers, with a load balancer distributing users.
The app uses its own queue, and I have added a filter so that jobs keep their original queue in case they fail at some point. Even so, Hangfire keeps acting as if the app is not running. The error is below:
System.InvalidOperationException: Recurring job can't be scheduled, see inner exception for details.
---> Hangfire.Common.JobLoadException: Could not load the job. See inner exception for the details.
---> System.IO.FileNotFoundException: Could not resolve assembly 'My.Api'.
at System.TypeNameParser.ResolveAssembly(String asmName, Func`2 assemblyResolver, Boolean throwOnError, StackCrawlMark& stackMark)
at System.TypeNameParser.ConstructType(Func`2 assemblyResolver, Func`4 typeResolver, Boolean throwOnError, Boolean ignoreCase, StackCrawlMark& stackMark)
at System.TypeNameParser.GetType(String typeName, Func`2 assemblyResolver, Func`4 typeResolver, Boolean throwOnError, Boolean ignoreCase, StackCrawlMark& stackMark)
at System.Type.GetType(String typeName, Func`2 assemblyResolver, Func`4 typeResolver, Boolean throwOnError)
at Hangfire.Common.TypeHelper.DefaultTypeResolver(String typeName)
at Hangfire.Storage.InvocationData.DeserializeJob()
--- End of inner exception stack trace ---
at Hangfire.Storage.InvocationData.DeserializeJob()
at Hangfire.RecurringJobEntity..ctor(String recurringJobId, IDictionary`2 recurringJob, ITimeZoneResolver timeZoneResolver, DateTime now)
--- End of inner exception stack trace ---
at Hangfire.Server.RecurringJobScheduler.ScheduleRecurringJob(BackgroundProcessContext context, IStorageConnection connection, String recurringJobId, RecurringJobEntity recurringJob, DateTime now)
What can be the issue here? The apps are running, and once I trigger the recurring jobs manually they run fine, until the error above shows up again.
This is my AppStart file:
private IEnumerable<IDisposable> GetHangfireServers()
{
Hangfire.GlobalConfiguration.Configuration
.SetDataCompatibilityLevel(CompatibilityLevel.Version_170)
.UseSimpleAssemblyNameTypeSerializer()
.UseRecommendedSerializerSettings()
.UseSqlServerStorage(HangfireServer, new SqlServerStorageOptions
{
CommandBatchMaxTimeout = TimeSpan.FromMinutes(5),
SlidingInvisibilityTimeout = TimeSpan.FromMinutes(5),
QueuePollInterval = TimeSpan.Zero,
UseRecommendedIsolationLevel = true,
DisableGlobalLocks = true
});
yield return new BackgroundJobServer(new BackgroundJobServerOptions {
Queues = new[] { "myapp" + GetEnvironmentName() },
ServerName = "MyApp" + ConfigurationHelper.GetAppSetting("Environment")
});
}
public void Configuration(IAppBuilder app)
{
var container = new Container();
container.Options.DefaultScopedLifestyle = new AsyncScopedLifestyle();
RegisterTaskDependencies(container);
container.RegisterWebApiControllers(System.Web.Http.GlobalConfiguration.Configuration);
container.Verify();
var configuration = new HttpConfiguration();
configuration.DependencyResolver = new SimpleInjectorWebApiDependencyResolver(container);
/* HANGFIRE CONFIGURATION */
if (Environment == "Production")
{
GlobalJobFilters.Filters.Add(new PreserveOriginalQueueAttribute());
Hangfire.GlobalConfiguration.Configuration.UseActivator(new SimpleInjectorJobActivator(container));
Hangfire.GlobalConfiguration.Configuration.UseLogProvider(new Api.HangfireArea.Helpers.CustomLogProvider(container.GetInstance<Core.Modules.LogModule>()));
app.UseHangfireAspNet(GetHangfireServers);
app.UseHangfireDashboard("/hangfire", new DashboardOptions
{
Authorization = new[] { new DashboardAuthorization() },
AppPath = GetBackToSiteURL(),
DisplayStorageConnectionString = false
});
AddOrUpdateJobs();
}
/* HANGFIRE CONFIGURATION */
app.UseWebApi(configuration);
WebApiConfig.Register(configuration);
}
public static void AddOrUpdateJobs()
{
var queueName = "myapp" + GetEnvironmentName();
RecurringJob.AddOrUpdate<HangfireArea.BackgroundJobs.AttachmentCreator>(
"MyApp_MyTask",
(service) => service.RunMyTask(),
"* * * * *", queue: queueName, timeZone: TimeZoneInfo.FindSystemTimeZoneById("Turkey Standard Time"));
}
What can be the problem here?

Turns out, Hangfire itself does not work well when multiple apps use the same SQL schema. To solve this problem I used Hangfire.MAMQSqlExtension. It is a third-party extension, but the repo says it is officially recognized by Hangfire.
If you are using the same schema for multiple apps, you have to use this extension in all of them; otherwise you'll face the error mentioned above.
If your apps have different versions running at the same time (e.g. production, test, development), the extension by itself does not fully handle failed jobs. If a job fails, regular Hangfire will not respect its original queue and will move it to the default queue, which eventually creates problems if your app only works with its own queue or if the default queue is shared. To force Hangfire to respect the original queue, I used this solution, which works great, and you get to name your app's queue based on your web.config or appsettings.json.
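For reference, here is a minimal sketch of such a queue-preserving filter (the attribute name and the "OriginalQueue" parameter key are illustrative, not the exact code from the linked solution):
public class PreserveOriginalQueueAttribute : JobFilterAttribute, IElectStateFilter
{
    public void OnStateElection(ElectStateContext context)
    {
        var enqueued = context.CandidateState as EnqueuedState;
        if (enqueued == null) return;

        // Remember the queue the job was first enqueued to...
        var original = context.GetJobParameter<string>("OriginalQueue");
        if (string.IsNullOrEmpty(original))
        {
            context.SetJobParameter("OriginalQueue", enqueued.Queue);
        }
        else
        {
            // ...and put any re-enqueued (e.g. retried) execution back on it.
            enqueued.Queue = original;
        }
    }
}
Register it globally with GlobalJobFilters.Filters.Add(new PreserveOriginalQueueAttribute()), as in the AppStart code above.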

Another option I found was to use Hangfire's background process https://www.hangfire.io/overview.html#background-process.
public class CleanTempDirectoryProcess : IBackgroundProcess
{
public void Execute(BackgroundProcessContext context)
{
// Directory.CleanUp / Directory.GetTempDirectory are pseudo-code placeholders from the Hangfire docs; put your real clean-up work here.
Directory.CleanUp(Directory.GetTempDirectory());
// Wait is cancellation-aware, so the server can still shut down cleanly between runs.
context.Wait(TimeSpan.FromHours(1));
}
}
And set the delay. This solved the issue for me, as I need the job to run repeatedly. I'm not sure of the implications this might have for the dashboard.
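For completeness, a minimal sketch of how such a process can be registered alongside the regular server; the constructor overload taking additional processes is Hangfire's, but the surrounding wiring is illustrative and assumes the OWIN setup from the question:
private IEnumerable<IDisposable> GetHangfireServers()
{
    // ...storage configuration as in the question...
    yield return new BackgroundJobServer(
        new BackgroundJobServerOptions { Queues = new[] { "myapp" } },
        JobStorage.Current,
        // The custom process runs in the same server instance as the regular workers.
        new IBackgroundProcess[] { new CleanTempDirectoryProcess() });
}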

You can create a job filter that does the same thing as the automatic retry filter while also setting the queue.
The difference is that the job does not wait before running again; it is re-enqueued immediately.
public class AutomaticRetryQueueAttribute : JobFilterAttribute, IApplyStateFilter, IElectStateFilter
{
private string queue;
private int attempts;
private readonly object _lockObject = new object();
private readonly ILog _logger = LogProvider.For<AutomaticRetryQueueAttribute>();
public AutomaticRetryQueueAttribute(int Attempts = 10, string Queue = "Default")
{
queue = Queue;
attempts = Attempts;
}
public int Attempts
{
get { lock (_lockObject) { return attempts; } }
set
{
if (value < 0)
{
throw new ArgumentOutOfRangeException(nameof(value), "Attempts value must be equal or greater than zero.");
}
lock (_lockObject)
{
attempts = value;
}
}
}
public string Queue
{
get { lock (_lockObject) { return queue; } }
set
{
lock (_lockObject)
{
queue = value;
}
}
}
public void OnStateApplied(ApplyStateContext context, IWriteOnlyTransaction transaction)
{
var newState = context.NewState as EnqueuedState;
if (!string.IsNullOrWhiteSpace(queue) && newState != null && newState.Queue != Queue)
{
newState.Queue = String.Format(Queue, context.BackgroundJob.Job.Args.ToArray());
}
if ((context.NewState is ScheduledState || context.NewState is EnqueuedState) &&
context.NewState.Reason != null &&
context.NewState.Reason.StartsWith("Retry attempt"))
{
transaction.AddToSet("retries", context.BackgroundJob.Id);
}
}
public void OnStateUnapplied(ApplyStateContext context, IWriteOnlyTransaction transaction)
{
if (context.OldStateName == ScheduledState.StateName)
{
transaction.RemoveFromSet("retries", context.BackgroundJob.Id);
}
}
public void OnStateElection(ElectStateContext context)
{
var failedState = context.CandidateState as FailedState;
if (failedState == null)
{
// This filter accepts only failed job state.
return;
}
var retryAttempt = context.GetJobParameter<int>("RetryCount") + 1;
if (retryAttempt <= Attempts)
{
ScheduleAgainLater(context, retryAttempt, failedState);
}
else
{
_logger.ErrorException($"Failed to process the job '{context.BackgroundJob.Id}': an exception occurred.", failedState.Exception);
}
}
private void ScheduleAgainLater(ElectStateContext context, int retryAttempt, FailedState failedState)
{
context.SetJobParameter("RetryCount", retryAttempt);
const int maxMessageLength = 50;
var exceptionMessage = failedState.Exception.Message.Length > maxMessageLength
? failedState.Exception.Message.Substring(0, maxMessageLength - 1) + "…"
: failedState.Exception.Message;
// If attempt number is less than max attempts, we should
// schedule the job to run again later.
var reason = $"Retry attempt {retryAttempt} of {Attempts}: {exceptionMessage}";
context.CandidateState = (IState)new EnqueuedState { Reason = reason };
if (context.CandidateState is EnqueuedState enqueuedState)
{
enqueuedState.Queue = String.Format(Queue, context.BackgroundJob.Job.Args.ToArray());
}
_logger.WarnException($"Failed to process the job '{context.BackgroundJob.Id}': an exception occurred. Retry attempt {retryAttempt} of {Attempts} will be performed.", failedState.Exception);
}
}
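To put the filter into effect, it can be registered globally (the attempt count and queue name below are illustrative):
// Every job will now be re-enqueued immediately on the given queue after a failure.
GlobalJobFilters.Filters.Add(new AutomaticRetryQueueAttribute(Attempts: 5, Queue: "myapp"));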

Delete the old Hangfire database and recreate the db with a new name.
Or use the in-memory storage method (UseInMemoryStorage).
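A minimal sketch of the in-memory option, assuming the Hangfire.InMemory package (in-memory storage only lives for the lifetime of the process, so it is mainly useful for development and tests):
// Requires the Hangfire.InMemory package; all jobs are lost on restart.
GlobalConfiguration.Configuration
    .UseSimpleAssemblyNameTypeSerializer()
    .UseRecommendedSerializerSettings()
    .UseInMemoryStorage();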

Related

Upgrade Solution to use FluentValidation Ver 10 Exception Issue

Please, I need your help to solve a FluentValidation issue. I have an old desktop application that I wrote a few years ago using FluentValidation v4. Now I'm trying to upgrade this application to .NET Framework 4.8 and FluentValidation v10, but unfortunately I couldn't continue because of an exception that I still cannot fix.
I have this customer class:
class Customer : MyClassBase
{
string _CustomerName = string.Empty;
public string CustomerName
{
get { return _CustomerName; }
set
{
if (_CustomerName == value)
return;
_CustomerName = value;
}
}
class CustomerValidator : AbstractValidator<Customer>
{
public CustomerValidator()
{
RuleFor(obj => obj.CustomerName).NotEmpty().WithMessage("{PropertyName} is Empty");
}
}
protected override IValidator GetValidator()
{
return new CustomerValidator();
}
}
This is my base class:
class MyClassBase
{
public MyClassBase()
{
_Validator = GetValidator();
Validate();
}
protected IValidator _Validator = null;
protected IEnumerable<ValidationFailure> _ValidationErrors = null;
protected virtual IValidator GetValidator()
{
return null;
}
public IEnumerable<ValidationFailure> ValidationErrors
{
get { return _ValidationErrors; }
set { }
}
public void Validate()
{
if (_Validator != null)
{
var context = new ValidationContext<Object>(_Validator);
var results = _Validator.Validate(context); // <======= Exception is here in this line
_ValidationErrors = results.Errors;
}
}
public virtual bool IsValid
{
get
{
if (_ValidationErrors != null && _ValidationErrors.Count() > 0)
return false;
else
return true;
}
}
}
When I run the application test I get the below exception:
System.InvalidOperationException
HResult=0x80131509
Message=Cannot validate instances of type 'CustomerValidator'. This validator can only validate instances of type 'Customer'.
Source=FluentValidation
StackTrace:
at FluentValidation.ValidationContext`1.GetFromNonGenericContext(IValidationContext context) in C:\Projects\FluentValidation\src\FluentValidation\IValidationContext.cs:line 211
at FluentValidation.AbstractValidator`1.FluentValidation.IValidator.Validate(IValidationContext context)
Please, what is the issue here and how can I fix it?
Thank you
Your overall implementation isn't what I'd consider normal usage; however, the problem is that you're asking FluentValidation to validate the validator instance rather than the customer instance:
var context = new ValidationContext<Object>(_Validator);
var results = _Validator.Validate(context);
It should start working if you change it to:
var context = new ValidationContext<object>(this);
var results = _Validator.Validate(context);
You're stuck with using the object argument for the validation context unless you introduce a generic argument to the base class, or create it using reflection.
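For illustration, here is a minimal sketch of the generic-base-class variant mentioned above; the type and member names mirror the question but are otherwise hypothetical:
using System.Collections.Generic;
using System.Linq;
using FluentValidation;
using FluentValidation.Results;

// Each subclass supplies its own concrete type, so the validation
// context can be strongly typed instead of falling back to object.
public abstract class MyClassBase<T> where T : MyClassBase<T>
{
    protected IValidator<T> _Validator;
    protected IEnumerable<ValidationFailure> _ValidationErrors = Enumerable.Empty<ValidationFailure>();

    protected virtual IValidator<T> GetValidator() => null;

    public void Validate()
    {
        _Validator = _Validator ?? GetValidator();
        if (_Validator == null) return;

        // Validate the entity instance itself, not the validator.
        var result = _Validator.Validate((T)this);
        _ValidationErrors = result.Errors;
    }

    public bool IsValid => !_ValidationErrors.Any();
}

class Customer : MyClassBase<Customer>
{
    public string CustomerName { get; set; } = string.Empty;

    protected override IValidator<Customer> GetValidator() => new CustomerValidator();

    class CustomerValidator : AbstractValidator<Customer>
    {
        public CustomerValidator()
        {
            RuleFor(c => c.CustomerName).NotEmpty().WithMessage("{PropertyName} is Empty");
        }
    }
}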

stop polling files when rabbitmq is down: spring integration

I'm working on a project where we poll files from an SFTP server and stream each one out as an object on the RabbitMQ queue. When RabbitMQ is down, the flow still polls and deletes the file from the server, so the file is lost when it is sent to the queue while RabbitMQ is down. I'm using ExpressionEvaluatingRequestHandlerAdvice to remove the file on successful transformation. My code looks like this:
@Bean
public SessionFactory<ChannelSftp.LsEntry> sftpSessionFactory() {
DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory(true);
factory.setHost(sftpProperties.getSftpHost());
factory.setPort(sftpProperties.getSftpPort());
factory.setUser(sftpProperties.getSftpPathUser());
factory.setPassword(sftpProperties.getSftpPathPassword());
factory.setAllowUnknownKeys(true);
return new CachingSessionFactory<>(factory);
}
@Bean
public SftpRemoteFileTemplate sftpRemoteFileTemplate() {
return new SftpRemoteFileTemplate(sftpSessionFactory());
}
@Bean
@InboundChannelAdapter(channel = TransformerChannel.TRANSFORMER_OUTPUT, autoStartup = "false",
poller = @Poller(value = "customPoller"))
public MessageSource<InputStream> sftpMessageSource() {
SftpStreamingMessageSource messageSource = new SftpStreamingMessageSource(sftpRemoteFileTemplate,
null);
messageSource.setRemoteDirectory(sftpProperties.getSftpDirPath());
messageSource.setFilter(new SftpPersistentAcceptOnceFileListFilter(new SimpleMetadataStore(),
"streaming"));
messageSource.setFilter(new SftpSimplePatternFileListFilter("*.txt"));
return messageSource;
}
@Bean
@Transformer(inputChannel = TransformerChannel.TRANSFORMER_OUTPUT,
outputChannel = SFTPOutputChannel.SFTP_OUTPUT,
adviceChain = "deleteAdvice")
public org.springframework.integration.transformer.Transformer transformer() {
return new SFTPTransformerService("UTF-8");
}
@Bean
public ExpressionEvaluatingRequestHandlerAdvice deleteAdvice() {
ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
advice.setOnSuccessExpressionString(
"#sftpRemoteFileTemplate.remove(headers['file_remoteDirectory'] + headers['file_remoteFile'])");
advice.setPropagateEvaluationFailures(false);
return advice;
}
I don't want the files to be removed/polled from the remote SFTP server when the RabbitMQ server is down. How can I achieve this?
UPDATE
Apologies for not mentioning that I'm using the Spring Cloud Stream Rabbit binder. And here is the transformer service:
public class SFTPTransformerService extends StreamTransformer {
public SFTPTransformerService(String charset) {
super(charset);
}
@Override
protected Object doTransform(Message<?> message) throws Exception {
String fileName = message.getHeaders().get("file_remoteFile", String.class);
Object fileContents = super.doTransform(message);
return new customFileDTO(fileName, (String) fileContents);
}
}
UPDATE-2
I added a TransactionSynchronizationFactory on the customPoller as suggested. Now it doesn't poll the file when the Rabbit server is down, but when the server is up it keeps polling the same file over and over again, and I cannot figure out why. I guess I cannot use PollerSpec because I'm on version 4.3.2.
@Bean(name = "customPoller")
public PollerMetadata pollerMetadataDTX(StartStopTrigger startStopTrigger,
CustomTriggerAdvice customTriggerAdvice) {
PollerMetadata pollerMetadata = new PollerMetadata();
pollerMetadata.setAdviceChain(Collections.singletonList(customTriggerAdvice));
pollerMetadata.setTrigger(startStopTrigger);
pollerMetadata.setMaxMessagesPerPoll(Long.valueOf(sftpProperties.getMaxMessagePoll()));
ExpressionEvaluatingTransactionSynchronizationProcessor syncProcessor =
new ExpressionEvaluatingTransactionSynchronizationProcessor();
syncProcessor.setBeanFactory(applicationContext.getAutowireCapableBeanFactory());
syncProcessor.setBeforeCommitChannel(
applicationContext.getBean(TransformerChannel.TRANSFORMER_OUTPUT, MessageChannel.class));
syncProcessor
.setAfterCommitChannel(
applicationContext.getBean(SFTPOutputChannel.SFTP_OUTPUT, MessageChannel.class));
syncProcessor.setAfterCommitExpression(new SpelExpressionParser().parseExpression(
"#sftpRemoteFileTemplate.remove(headers['file_remoteDirectory'] + headers['file_remoteFile'])"));
DefaultTransactionSynchronizationFactory defaultTransactionSynchronizationFactory =
new DefaultTransactionSynchronizationFactory(syncProcessor);
pollerMetadata.setTransactionSynchronizationFactory(defaultTransactionSynchronizationFactory);
return pollerMetadata;
}
I don't know if you need this info, but my CustomTriggerAdvice and StartStopTrigger look like this:
@Component
public class CustomTriggerAdvice extends AbstractMessageSourceAdvice {
@Autowired private StartStopTrigger startStopTrigger;
@Override
public boolean beforeReceive(MessageSource<?> source) {
return true;
}
@Override
public Message<?> afterReceive(Message<?> result, MessageSource<?> source) {
if (result == null) {
if (startStopTrigger.getStart()) {
startStopTrigger.stop();
}
} else {
if (!startStopTrigger.getStart()) {
startStopTrigger.stop();
}
}
return result;
}
}
public class StartStopTrigger implements Trigger {
private PeriodicTrigger startTrigger;
private boolean start;
public StartStopTrigger(PeriodicTrigger startTrigger, boolean start) {
this.startTrigger = startTrigger;
this.start = start;
}
@Override
public Date nextExecutionTime(TriggerContext triggerContext) {
if (!start) {
return null;
}
start = true;
return startTrigger.nextExecutionTime(triggerContext);
}
public void stop() {
start = false;
}
public void start() {
start = true;
}
public boolean getStart() {
return this.start;
}
}
Well, it would be great to see what your SFTPTransformerService does, to determine how it is possible for the onSuccessExpression to be performed when there should be an exception in the case of a down broker.
You also should not only throw an exception and skip the delete, but also consider adding a RequestHandlerRetryAdvice to re-send the file to RabbitMQ: https://docs.spring.io/spring-integration/docs/5.0.6.RELEASE/reference/html/messaging-endpoints-chapter.html#retry-advice
UPDATE
So, well, since Gary guessed that you use Spring Cloud Stream to send the message to the Rabbit binder after your internal process (it's a pity that you didn't share that information originally), you need to take a look at the binder's error handling on the matter: https://docs.spring.io/spring-cloud-stream/docs/Elmhurst.RELEASE/reference/htmlsingle/#_retry_with_the_rabbitmq_binder
And it is true that the ExpressionEvaluatingRequestHandlerAdvice is applied only to the SFTPTransformerService and nothing more. The downstream error (in the binder) is not included in that process.
UPDATE 2
Yeah... I think Gary is right, and we have no choice but to configure a TransactionSynchronizationFactory at the customPoller level instead of that ExpressionEvaluatingRequestHandlerAdvice.
The DefaultTransactionSynchronizationFactory can be configured with the ExpressionEvaluatingTransactionSynchronizationProcessor, which has a similar goal to the mentioned ExpressionEvaluatingRequestHandlerAdvice, but at the transaction level, which will include your whole process, starting with the SFTP channel adapter and ending at the Rabbit binder level with the send-to-AMQP attempts.
See the Reference Manual for more information: https://docs.spring.io/spring-integration/reference/html/transactions.html#transaction-synchronization
The point with the ExpressionEvaluatingRequestHandlerAdvice (and any AbstractRequestHandlerAdvice) is that it has a boundary only around the handleRequestMessage() method, and therefore applies only to the component on which it is declared.

Unpredictable result of DriveId.getResourceId() in Google Drive Android API

The issue is that the 'resourceID' from 'DriveId.getResourceId()' is not available (returns NULL) on newly created files (the product of 'DriveFolder.createFile(GAC, meta, cont)'). If the file is retrieved by a regular list or query procedure, the 'resourceID' is correct.
I suspect it is a timing/latency issue, but it is not clear whether there is an application action that would force a refresh. 'Drive.DriveApi.requestSync(GAC)' seems to have no effect.
UPDATE (07/22/2015)
Thanks to the prompt response from Steven Bazyl (see comments below), I finally have a satisfactory solution using Completion Events. Here are two minified code snippets that deliver the ResourceId to the app as soon as the newly created file is propagated to the Drive:
File creation, add change subscription:
public class CreateEmptyFileActivity extends BaseDemoActivity {
private static final String TAG = "_X_";
@Override
public void onConnected(Bundle connectionHint) { super.onConnected(connectionHint);
MetadataChangeSet meta = new MetadataChangeSet.Builder()
.setTitle("EmptyFile.txt").setMimeType("text/plain")
.build();
Drive.DriveApi.getRootFolder(getGoogleApiClient())
.createFile(getGoogleApiClient(), meta, null,
new ExecutionOptions.Builder()
.setNotifyOnCompletion(true)
.build()
)
.setResultCallback(new ResultCallback<DriveFileResult>() {
@Override
public void onResult(DriveFileResult result) {
if (result.getStatus().isSuccess()) {
DriveId driveId = result.getDriveFile().getDriveId();
Log.d(TAG, "Created a empty file: " + driveId);
DriveFile file = Drive.DriveApi.getFile(getGoogleApiClient(), driveId);
file.addChangeSubscription(getGoogleApiClient());
}
}
});
}
}
Event Service, catches the completion:
public class ChngeSvc extends DriveEventService {
private static final String TAG = "_X_";
@Override
public void onCompletion(CompletionEvent event) { super.onCompletion(event);
DriveId driveId = event.getDriveId();
Log.d(TAG, "onComplete: " + driveId.getResourceId());
switch (event.getStatus()) {
case CompletionEvent.STATUS_CONFLICT: Log.d(TAG, "STATUS_CONFLICT"); event.dismiss(); break;
case CompletionEvent.STATUS_FAILURE: Log.d(TAG, "STATUS_FAILURE"); event.dismiss(); break;
case CompletionEvent.STATUS_SUCCESS: Log.d(TAG, "STATUS_SUCCESS "); event.dismiss(); break;
}
}
}
Under normal circumstances (wifi), I get the ResourceId almost immediately.
20:40:53.247﹕Created a empty file: DriveId:CAESABiiAiDGsfO61VMoAA==
20:40:54.305: onComplete, ResourceId: 0BxOS7mTBMR_bMHZRUjJ5NU1ZOWs
... done for now.
ORIGINAL POST, deprecated, left here for reference.
I let this answer sit for a year hoping that GDAA would develop a solution that works. The reason for my nagging is simple. If my app creates a file, it needs to broadcast this fact to its buddies (other devices, for instance) with an ID that is meaningful (that is, the ResourceId). It is a trivial task under the REST API, where the ResourceId comes back as soon as the file is successfully created.
Needless to say, I understand the GDAA philosophy of shielding the app from network primitives, caching, batching, ... But clearly, in this situation, the ResourceId is available long before it is delivered to the app.
Originally, I implemented Cheryl Simon's suggestion and added a ChangeListener on a newly created file, hoping to get the ResourceID when the file is propagated. Using classic CreateEmptyFileActivity from android-demos, I smacked together the following test code:
public class CreateEmptyFileActivity extends BaseDemoActivity {
private static final String TAG = "CreateEmptyFileActivity";
final private ChangeListener mChgeLstnr = new ChangeListener() {
@Override
public void onChange(ChangeEvent event) {
Log.d(TAG, "event: " + event + " resId: " + event.getDriveId().getResourceId());
}
};
@Override
public void onConnected(Bundle connectionHint) { super.onConnected(connectionHint);
MetadataChangeSet meta = new MetadataChangeSet.Builder()
.setTitle("EmptyFile.txt").setMimeType("text/plain")
.build();
Drive.DriveApi.getRootFolder(getGoogleApiClient())
.createFile(getGoogleApiClient(), meta, null)
.setResultCallback(new ResultCallback<DriveFileResult>() {
@Override
public void onResult(DriveFileResult result) {
if (result.getStatus().isSuccess()) {
DriveId driveId = result.getDriveFile().getDriveId();
Log.d(TAG, "Created a empty file: " + driveId);
Drive.DriveApi.getFile(getGoogleApiClient(), driveId).addChangeListener(getGoogleApiClient(), mChgeLstnr);
}
}
});
}
}
... and was waiting for something to happen. The file was happily uploaded to the Drive within seconds, but there was no onChange() event. 10 minutes, 20 minutes, ... I could not find any way to make the ChangeListener wake up.
So the only other solution I could come up with was to nudge the GDAA. I implemented a simple handler-poker that tickles the metadata until something happens:
public class CreateEmptyFileActivity extends BaseDemoActivity {
private static final String TAG = "CreateEmptyFileActivity";
final private ChangeListener mChgeLstnr = new ChangeListener() {
@Override
public void onChange(ChangeEvent event) {
Log.d(TAG, "event: " + event + " resId: " + event.getDriveId().getResourceId());
}
};
static DriveId driveId;
private static final int ENOUGH = 4; // nudge 4x, 1+2+3+4 = 10seconds
private static int mWait = 1000;
private int mCnt;
private Handler mPoker;
private final Runnable mPoke = new Runnable() { public void run() {
if (mPoker != null && driveId != null && driveId.getResourceId() == null && (mCnt++ < ENOUGH)) {
MetadataChangeSet meta = new MetadataChangeSet.Builder().build();
Drive.DriveApi.getFile(getGoogleApiClient(), driveId).updateMetadata(getGoogleApiClient(), meta).setResultCallback(
new ResultCallback<DriveResource.MetadataResult>() {
@Override
public void onResult(DriveResource.MetadataResult result) {
if (result.getStatus().isSuccess() && result.getMetadata().getDriveId().getResourceId() != null)
Log.d(TAG, "resId COOL " + result.getMetadata().getDriveId().getResourceId());
else
mPoker.postDelayed(mPoke, mWait *= 2);
}
}
);
} else {
mPoker = null;
}
}};
@Override
public void onConnected(Bundle connectionHint) { super.onConnected(connectionHint);
MetadataChangeSet meta = new MetadataChangeSet.Builder()
.setTitle("EmptyFile.txt").setMimeType("text/plain")
.build();
Drive.DriveApi.getRootFolder(getGoogleApiClient())
.createFile(getGoogleApiClient(), meta, null)
.setResultCallback(new ResultCallback<DriveFileResult>() {
@Override
public void onResult(DriveFileResult result) {
if (result.getStatus().isSuccess()) {
driveId = result.getDriveFile().getDriveId();
Log.d(TAG, "Created a empty file: " + driveId);
Drive.DriveApi.getFile(getGoogleApiClient(), driveId).addChangeListener(getGoogleApiClient(), mChgeLstnr);
mCnt = 0;
mPoker = new Handler();
mPoker.postDelayed(mPoke, mWait);
}
}
});
}
}
And voila, 4 seconds (give or take) later, the ChangeListener delivers a new shiny ResourceId. Of course, the ChangeListener thus becomes obsolete, since the poker routine gets the ResourceId as well.
So this is the answer for those who can't wait for the ResourceId. Which brings up the follow-up question:
Why do I have to tickle the metadata (or re-commit the content), very likely creating unnecessary network traffic, to get an onChange() event, when I can see clearly that the file was propagated a long time ago and GDAA has the ResourceId available?
ResourceIds become available when the newly created resource is committed to the server. In the case of a device that is offline, this could be arbitrarily long after the initial file creation. It will happen as soon as possible after the creation request though, so you don't need to do anything to speed it along.
If you really need it right away, you could conceivably use the change notifications to listen for the resourceId to change.

Load external properties files into EJB 3 app running on WebLogic 11

I am researching the best way to load external properties files from an EJB 3 app whose EAR file is deployed to WebLogic.
I was thinking about using an init servlet, but I read somewhere that it would be too slow (e.g. my message handler might receive a message from my JMS queue before the init servlet runs).
Suppose I have multiple property files or one file here:
~/opt/conf/
So far, I feel that the best possible solution is to use a WebLogic application lifecycle event, where the code reads the properties files during pre-start:
import weblogic.application.ApplicationLifecycleListener;
import weblogic.application.ApplicationLifecycleEvent;
public class MyListener extends ApplicationLifecycleListener {
public void preStart(ApplicationLifecycleEvent evt) {
// Load properties files
}
}
See: http://download.oracle.com/docs/cd/E13222_01/wls/docs90/programming/lifecycle.html
What would happen if the server is already running, would post start be a viable solution?
Can anyone think of any alternative ways that are better?
It really depends on how often you want the properties to be reloaded. One approach I have taken is to have a properties-file wrapper (singleton) with a configurable parameter that defines how often the files should be reloaded. I would then always read properties through that wrapper, and it would reload the properties every 15 minutes (similar to Log4J's ConfigureAndWatch). That way, if I wanted to, I could change properties without changing the state of a deployed application.
This also allows you to load properties from a database, instead of a file. That way you can have a level of confidence that properties are consistent across the nodes in a cluster and it reduces complexity associated with managing a config file for each node.
I prefer that over tying it to a lifecycle event. If you weren't ever going to change them, then make them static constants somewhere :)
Here is an example implementation to give you an idea:
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.util.*;
/**
* User: jeffrey.a.west
* Date: Jul 1, 2011
* Time: 8:43:55 AM
*/
public class ReloadingProperties
{
private final String lockObject = "LockMe";
private long lastLoadTime = 0;
private long reloadInterval;
private String filePath;
private Properties properties;
private static final Map<String, ReloadingProperties> instanceMap;
private static final long DEFAULT_RELOAD_INTERVAL = 1000 * 60 * 5;
public static void main(String[] args)
{
ReloadingProperties props = ReloadingProperties.getInstance("myProperties.properties");
System.out.println(props.getProperty("example"));
try
{
Thread.sleep(6000);
}
catch (InterruptedException e)
{
e.printStackTrace();
}
System.out.println(props.getProperty("example"));
}
static
{
instanceMap = new HashMap(31);
}
public static ReloadingProperties getInstance(String filePath)
{
ReloadingProperties instance = instanceMap.get(filePath);
if (instance == null)
{
instance = new ReloadingProperties(filePath, DEFAULT_RELOAD_INTERVAL);
synchronized (instanceMap)
{
instanceMap.put(filePath, instance);
}
}
return instance;
}
private ReloadingProperties(String filePath, long reloadInterval)
{
this.reloadInterval = reloadInterval;
this.filePath = filePath;
}
private void checkRefresh()
{
long currentTime = System.currentTimeMillis();
long sinceLastLoad = currentTime - lastLoadTime;
if (properties == null || sinceLastLoad > reloadInterval)
{
System.out.println("Reloading!");
lastLoadTime = System.currentTimeMillis();
Properties newProperties = new Properties();
FileInputStream fileIn = null;
synchronized (lockObject)
{
try
{
fileIn = new FileInputStream(filePath);
newProperties.load(fileIn);
}
catch (FileNotFoundException e)
{
e.printStackTrace();
}
catch (IOException e)
{
e.printStackTrace();
}
finally
{
if (fileIn != null)
{
try
{
fileIn.close();
}
catch (IOException e)
{
e.printStackTrace();
}
}
}
properties = newProperties;
}
}
}
public String getProperty(String key, String defaultValue)
{
checkRefresh();
return properties.getProperty(key, defaultValue);
}
public String getProperty(String key)
{
checkRefresh();
return properties.getProperty(key);
}
}
Figured it out...
See the corresponding / related post on Stack Overflow.

does nhibernate create implicit transactions within a TransactionScope?

I've created an integration test to verify that a repository handles concurrency correctly. If I run the test without a TransactionScope, everything works as expected, but if I wrap the test in a TransactionScope, I get an error suggesting that there is a sudden need for distributed transactions (which leads me to believe that a second transaction is being created). Here is the test:
[Test]
public void Commit_ItemToCommitContainsStaleData_ThrowsStaleObjectStateException()
{
using (new TransactionScope())
{
// arrange
RootUnitOfWorkFactory factory = CreateUnitOfWorkFactory();
const int Id = 1;
WorkItemRepository firstRepository = new WorkItemRepository(factory);
WorkItem itemToChange = WorkItem.Create(Id);
firstRepository.Commit(itemToChange);
WorkItemRepository secondRepository = new WorkItemRepository(factory);
WorkItem copyOfItemToChange = secondRepository.Get(Id);
// act
copyOfItemToChange.ChangeDescription("A");
secondRepository.Commit(copyOfItemToChange);
itemToChange.ChangeDescription("B");
// assert
Assert.Throws<StaleObjectStateException>(() => firstRepository.Commit(itemToChange));
}
}
This is the bottom of the error stack:
failed: NHibernate.Exceptions.GenericADOException : could not load an entity: [TfsTimeMachine.Domain.WorkItem#1][SQL: SELECT workitem0_.Id as Id1_0_, workitem0_.LastChanged as LastChan2_1_0_, workitem0_.Description as Descript3_1_0_ FROM [WorkItem] workitem0_ WHERE workitem0_.Id=?]
----> System.Data.SqlClient.SqlException : MSDTC on server 'ADM4200\SQLEXPRESS' is unavailable.
at NHibernate.Loader.Loader.LoadEntity(ISessionImplementor session, Object id, IType identifierType, Object optionalObject, String optionalEntityName, Object optionalIdentifier, IEntityPersister persister).
I'm running NUnit 2.1, so can someone tell me whether NHibernate creates implicit transactions when there is no session.BeginTransaction() before querying data, regardless of the session running within a TransactionScope?
I got this to work. The problem was (as stated in my comment) that two concurrent sessions were started within the same TransactionScope and both opened a new DbConnection that enlisted in the same transaction, thus forcing DTC to kick in. The solution was to create a custom connection provider which ensures that the same connection is returned while inside a TransactionScope. I then put this into play in my test and, presto, I could test the stale object state and roll back the data when the test completes. Here's my implementation:
/// <summary>
/// A connection provider which returns the same db connection while
/// there exists a TransactionScope.
/// </summary>
public sealed class AmbientTransactionAwareDriverConnectionProvider : IConnectionProvider
{
private readonly bool disposeDecoratedProviderWhenDisposingThis;
private IConnectionProvider decoratedProvider;
private IDbConnection maintainedConnectionThroughAmbientSession;
public AmbientTransactionAwareDriverConnectionProvider()
: this(new DriverConnectionProvider(), true)
{}
public AmbientTransactionAwareDriverConnectionProvider(IConnectionProvider decoratedProvider,
bool disposeDecoratedProviderWhenDisposingThis)
{
Guard.AssertNotNull(decoratedProvider, "decoratedProvider");
this.decoratedProvider = decoratedProvider;
this.disposeDecoratedProviderWhenDisposingThis = disposeDecoratedProviderWhenDisposingThis;
}
~AmbientTransactionAwareDriverConnectionProvider()
{
Dispose(false);
}
public void Dispose()
{
Dispose(true);
GC.SuppressFinalize(this);
}
public void Configure(IDictionary<string, string> settings)
{
this.decoratedProvider.Configure(settings);
}
public void CloseConnection(IDbConnection conn)
{
if (Transaction.Current == null)
this.decoratedProvider.CloseConnection(conn);
}
public IDbConnection GetConnection()
{
if (Transaction.Current == null)
{
if (this.maintainedConnectionThroughAmbientSession != null)
this.maintainedConnectionThroughAmbientSession.Dispose();
return this.decoratedProvider.GetConnection();
}
if (this.maintainedConnectionThroughAmbientSession == null)
this.maintainedConnectionThroughAmbientSession = this.decoratedProvider.GetConnection();
return this.maintainedConnectionThroughAmbientSession;
}
private void Dispose(bool disposing)
{
if (this.maintainedConnectionThroughAmbientSession != null)
CloseConnection(this.maintainedConnectionThroughAmbientSession);
if (this.disposeDecoratedProviderWhenDisposingThis && this.decoratedProvider != null)
this.decoratedProvider.Dispose();
if (disposing)
{
this.decoratedProvider = null;
this.maintainedConnectionThroughAmbientSession = null;
}
}
public IDriver Driver
{
get { return this.decoratedProvider.Driver; }
}
}
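For reference, a minimal sketch of how such a provider can be plugged in; the connection.provider setting is NHibernate's standard property, while the configuration style around it is illustrative:
// Point NHibernate at the custom provider before building the session factory.
var cfg = new NHibernate.Cfg.Configuration().Configure();
cfg.SetProperty(
    NHibernate.Cfg.Environment.ConnectionProvider,
    typeof(AmbientTransactionAwareDriverConnectionProvider).AssemblyQualifiedName);
var sessionFactory = cfg.BuildSessionFactory();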
I'm not sure if Hibernate is using transactions internally, but I also don't think that is your problem here.
It appears that the problem is that you are using two different data sources in the same transaction. In order to coordinate the transaction between both data sources for a two-phase commit, you would need to have DTC enabled. The fact that both data sources are actually the same database is immaterial.