Is it possible to serialize and deserialize a PublisherInterstitialAd object from Google DFP ads, so as to save and retrieve the object in SharedPreferences?
When I tried doing this with the Gson library, I got a StackOverflowError. Please suggest how best this can be done, and where I am going wrong in my current approach.
Thank you.
These are the methods I am using for saving and retrieving the ad object in SharedPreferences, where publisherInterstitialAd is the object in question.
public void saveInterstitialAd(PublisherInterstitialAd publisherInterstitialAd) {
    Gson gson = new Gson();
    String json = gson.toJson(publisherInterstitialAd);
    mEditor.putString("InterstitialAd", json);
    mEditor.commit();
}

public PublisherInterstitialAd getInterstitialAd() {
    Gson gson = new Gson();
    String json = mSharedPrefs.getString("InterstitialAd", "");
    if (json.equals(""))
        return null;
    return gson.fromJson(json, PublisherInterstitialAd.class);
}
and this is the stack trace I am getting:
UncaughtException
java.lang.StackOverflowError
at java.lang.Class.isArray(Class.java:1118)
at com.google.gson.internal.$Gson$Types.canonicalize($Gson$Types.java:96)
at com.google.gson.internal.$Gson$Types$WildcardTypeImpl.<init>($Gson$Types.java:551)
at com.google.gson.internal.$Gson$Types.canonicalize($Gson$Types.java:109)
at com.google.gson.internal.$Gson$Types$WildcardTypeImpl.<init>($Gson$Types.java:544)
at com.google.gson.internal.$Gson$Types.canonicalize($Gson$Types.java:109)
... (the $Gson$Types$WildcardTypeImpl.<init> / canonicalize pair repeats until the stack overflows; trace truncated) ...
I think Google would argue that you shouldn't store ads long-term. They must be aware that their ad objects aren't Serializable or Parcelable, which leads me to believe it is intentional.
That having been said, if you really must, you can always store the objects on your Application object (I strongly recommend against this, though, as that is a serious anti-pattern full of pitfalls). Instead, I'd recommend requesting the interstitial in any activity that might ultimately show it. If, for example, your goal is to show an interstitial after an Activity is complete, load the interstitial when the Activity is first created, then show it before you finish the Activity, after the user has committed whatever action you wanted them to, as in the sketch below.
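A minimal sketch of that pattern, assuming the legacy com.google.android.gms.ads.doubleclick API (the activity name and ad unit ID are placeholders):

import android.app.Activity;
import android.os.Bundle;
import com.google.android.gms.ads.doubleclick.PublisherAdRequest;
import com.google.android.gms.ads.doubleclick.PublisherInterstitialAd;

public class CheckoutActivity extends Activity {
    private PublisherInterstitialAd interstitialAd;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Start loading as early as possible so the ad is ready by the time we finish.
        interstitialAd = new PublisherInterstitialAd(this);
        interstitialAd.setAdUnitId("/6499/example/interstitial"); // placeholder ad unit ID
        interstitialAd.loadAd(new PublisherAdRequest.Builder().build());
    }

    // Call this once the user has committed the action you care about.
    private void onUserActionComplete() {
        if (interstitialAd.isLoaded()) {
            interstitialAd.show(); // show just before leaving the Activity
        }
        finish();
    }
}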
Related
We have a DataFrame that we want to write to S3 in Parquet format, in overwrite mode.
Every time we write the DataFrame, it is always to a new folder. The code that writes to the S3 location is as follows:
df.write
.option("dateFormat", "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'")
.option("timestampFormat", "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'")
.option("maxRecordsPerFile", maxRecordsPerFile)
.mode("overwrite")
.format(format)
.save(output)
What we observe is that at times we get a FileNotFoundException (full trace below). Can somebody help me understand:
When I am writing to a new S3 location (meaning nobody is reading from the location), why does the writing program throw the exception below?
How do I fix it? I see a couple of Stack Overflow posts pointing to this exception, but they say it happens when you try to read while a write is happening. My case is not like that; I don't read while the write happens.
My Spark is 2.3.2 on EMR 5.18.1; the code is written in Scala.
I am using s3:// as the output folder path. Should I change it to s3n:// or s3a://? Will that help?
Caused by: java.io.FileNotFoundException: No such file or directory 's3://BUCKET/snapshots/FOLDER/_bid_9223370368440344985/part-00020-693dfbcb-74e9-45b0-b892-0b19fa92365c-c000.snappy.parquet'
It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved.
at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:131)
at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:182)
at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:109)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:461)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at org.apache.spark.sql.execution.aggregate.HashAggregateExec$$anonfun$doExecute$1$$anonfun$4.apply(HashAggregateExec.scala:104)
at org.apache.spark.sql.execution.aggregate.HashAggregateExec$$anonfun$doExecute$1$$anonfun$4.apply(HashAggregateExec.scala:101)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$26.apply(RDD.scala:853)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$26.apply(RDD.scala:853)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
I was finally able to solve the problem.
The df: DataFrame was formed from the same S3 folder to which it was being written in overwrite mode.
So during the overwrite, the source folder was being cleared, which was causing the error.
Hope this helps somebody.
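For anyone hitting the same thing, a minimal sketch of one workaround (Java API here; overwriteSafely, sourcePath, and tmpPath are hypothetical names): materialize the result to a temporary folder first, then overwrite the source from that copy, so the job never deletes files it is still reading.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

// Hypothetical paths; the point is never to read from the folder being overwritten.
static void overwriteSafely(SparkSession spark, String sourcePath, String tmpPath) {
    // The DataFrame is lazily defined over sourcePath, as in the original job
    // (plus whatever transformations built the original df).
    Dataset<Row> df = spark.read().parquet(sourcePath);

    // 1) Materialize to a temporary folder; sourcePath stays intact while it is read.
    df.write().mode("overwrite").format("parquet").save(tmpPath);

    // 2) Re-read the materialized copy and overwrite the source from it.
    spark.read().parquet(tmpPath)
         .write().mode("overwrite").format("parquet").save(sourcePath);
}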
A Snappy job written in Scala aborts with the exception:
java.lang.ClassCastException: com.....$Class1 cannot be cast to com.....$Class1.
Class1 is a custom class stored in the RDD. The interesting thing is that this error is thrown while casting to the same class. So far, no pattern has been found.
In the job, we fetch data from HBase, enrich it with analytical metadata using DataFrames, and push it to a table in SnappyData. We are using SnappyData 1.2.0.1.
Not sure why this is happening.
Below is the stack trace:
Job aborted due to stage failure: Task 76 in stage 42.0 failed 4 times, most recent failure: Lost task 76.3 in stage 42.0 (TID 3550, HostName, executor XX.XX.x.xxx(10360):7872): java.lang.ClassCastException: cannot be cast to
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(generated.java:86)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenRDD$$anon$2.hasNext(WholeStageCodegenExec.scala:571)
at org.apache.spark.sql.execution.WholeStageCodegenRDD$$anon$1.hasNext(WholeStageCodegenExec.scala:514)
at org.apache.spark.sql.execution.columnar.InMemoryRelation$$anonfun$1$$anon$1.hasNext(InMemoryRelation.scala:132)
at org.apache.spark.storage.memory.MemoryStore.putIteratorAsValues(MemoryStore.scala:233)
at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1006)
at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:997)
at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:936)
at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:997)
at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:700)
at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:335)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:286)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:41)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.sql.execution.WholeStageCodegenRDD.computeInternal(WholeStageCodegenExec.scala:557)
at org.apache.spark.sql.execution.WholeStageCodegenRDD$$anon$1.<init>(WholeStageCodegenExec.scala:504)
at org.apache.spark.sql.execution.WholeStageCodegenRDD.compute(WholeStageCodegenExec.scala:503)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:41)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:103)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:58)
at org.apache.spark.scheduler.Task.run(Task.scala:126)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:326)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at org.apache.spark.executor.SnappyExecutor$$anon$2$$anon$3.run(SnappyExecutor.scala:57)
at java.lang.Thread.run(Thread.java:748)
Classes are not unique by name; they're unique by name + classloader.
A ClassCastException of the kind you're seeing happens when you pass data between parts of the app where one or both parts are loaded in a separate classloader.
You might need to clean up your classpath, you might need to resolve the classes from the same classloader, or you might have to serialize the data (especially if you have features that rely on reloading code at runtime).
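To make the "name + classloader" point concrete, here is a minimal standalone sketch (the jar path and class name are hypothetical): the same class file loaded through two isolated classloaders produces two distinct Class objects, and casting between them fails with exactly this "Class1 cannot be cast to Class1" style of message.

import java.net.URL;
import java.net.URLClassLoader;

public class ClassLoaderDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical jar containing com.example.Class1.
        URL[] jar = { new URL("file:/tmp/app.jar") };

        // Parent = null, so neither loader delegates to the application classloader.
        ClassLoader cl1 = new URLClassLoader(jar, null);
        ClassLoader cl2 = new URLClassLoader(jar, null);

        Class<?> a = cl1.loadClass("com.example.Class1");
        Class<?> b = cl2.loadClass("com.example.Class1");

        System.out.println(a.getName().equals(b.getName())); // true: identical name
        System.out.println(a == b);                          // false: different classloaders

        Object instance = a.getDeclaredConstructor().newInstance();
        b.cast(instance); // throws ClassCastException: Class1 cannot be cast to Class1
    }
}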
I am trying to use EasyMock to test a few class/interface methods. For methods with parameters, I am trying to capture the parameter, but I keep getting one error or another. If I record only one expectation, it doesn't capture anything into the parameter capture (pipe); if I use the following approach, I get the error that follows the code.
@Test
public void testFireChannelInitializer() throws Exception
{
    expect(c.pipeline()).andReturn(pipeline).times(1);
    channelListener.fireChannelInitializer(EasyMock.capture(pipe), serverHandler);
    EasyMock.replay(c, pipeline, channelListener);
    initializer.initChannel(c);
    verifyAll();
    assertEquals(4, pipe.getValues().size());
    assertTrue(pipe.getValues().get(0) instanceof LoggingHandler);
    assertTrue(pipe.getValues().get(1) instanceof ObjectEncoder);
    assertTrue(pipe.getValues().get(2) instanceof ObjectDecoder);
    assertTrue(pipe.getValues().get(3) instanceof ServerHandler);
}
Results in Error
testFireChannelInitializer(com.obolus.generic.impl.DefaultChannelListenerTest)
Time elapsed: 3.812 sec <<< ERROR! java.lang.IllegalStateException: 2
matchers expected, 1 recorded. This exception usually occurs when
matchers are mixed with raw values when recording a method: foo(5,
eq(6)); // wrong You need to use no matcher at all or a matcher for
every single param: foo(eq(5), eq(6)); // right foo(5, 6); // also
right at
org.easymock.internal.ExpectedInvocation.createMissingMatchers(ExpectedInvocation.java:51)
at
org.easymock.internal.ExpectedInvocation.(ExpectedInvocation.java:40)
at org.easymock.internal.RecordState.invoke(RecordState.java:78) at
org.easymock.internal.MockInvocationHandler.invoke(MockInvocationHandler.java:40)
at
org.easymock.internal.ObjectMethodsFilter.invoke(ObjectMethodsFilter.java:94)
at
org.easymock.internal.ClassProxyFactory$MockMethodInterceptor.intercept(ClassProxyFactory.java:97)
at
com.obolus.generic.impl.DefaultChannelListener$$EnhancerByCGLIB$$2da02970.fireChannelInitializer()
at
com.obolus.generic.impl.DefaultChannelListenerTest.testFireChannelInitializer(DefaultChannelListenerTest.java:63)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483) at
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
at
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
at
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263) at
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
at
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231) at
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60) at
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229) at
org.junit.runners.ParentRunner.access$000(ParentRunner.java:50) at
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222) at
org.junit.runners.ParentRunner.run(ParentRunner.java:300) at
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:242)
at
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:137)
at
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483) at
org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
at
org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
at
org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
at
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
at
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
Any idea what's wrong, or how to use EasyMock? There is no good documentation or examples around.
The EasyMock website has a user guide, but they redid their website recently and the guide isn't as complete as it used to be.
I think your problem might be that you have to do a capture AND an argument matcher.
From the user guide:
Matches any value but captures it in the Capture parameter for later access. You can do and(someMatcher(...), capture(c)) to capture a parameter from a specific call to the method. You can also specify a CaptureType telling whether a given Capture should keep the first, the last, all, or no captured values.
So you might need to do an and(capture(..), paramMatcher).
Also, EasyMock has an annoying API "feature": if you use one argument matcher in a method call, then all the arguments must also be wrapped in matchers, even if it's just eq(). I think that's what your exception is complaining about. So those are your two problems.
I'm not sure what your method signature looks like, so I will assume it's
void fireChannelInitializer(Object, ServerHandler);
After using static imports to import EasyMock.*:
channelListener.fireChannelInitializer(
    and(capture(pipe), isA(Object.class)), // captures the argument into the `pipe` Capture object
    eq(serverHandler));
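Putting both fixes together, a minimal sketch of the corrected recording phase (assuming EasyMock 3.2+, that pipe is a Capture<Object>, and the signature guessed above):

Capture<Object> pipe = EasyMock.newCapture(CaptureType.ALL); // keep every captured value
expect(c.pipeline()).andReturn(pipeline).times(1);
channelListener.fireChannelInitializer(
        and(capture(pipe), isA(Object.class)), // matcher AND capture for the first argument
        eq(serverHandler));                    // once one matcher is used, all arguments need one
EasyMock.replay(c, pipeline, channelListener);
initializer.initChannel(c);
verifyAll();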
I have a setup with Spring + Hibernate + JPA + JUnit to test my code.
I have Oracle as the database. When I test my code, every test passes; this code is even in production. I wanted to use an in-memory HSQLDB for the JUnit tests instead of Oracle. Most of the tests still pass with HSQLDB, but one particular test always fails with HSQLDB (it works fine with Oracle).
Here is the line that causes trouble (I used log4jdbc to output the SQL statements):
creating batch for two statements:
1: delete from IMAGE_TAGS where IMAGE_TAGS_KEY=1
2: delete from IMAGE_TAGS where IMAGE_TAGS_KEY=2
In JPA, I loop over the entities to delete them. It works fine on the first iteration, but when I arrive at the second item, it fails with this exception:
org.hibernate.StaleStateException: Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1
at org.hibernate.jdbc.Expectations$BasicExpectation.checkBatched(Expectations.java:85)
at org.hibernate.jdbc.Expectations$BasicExpectation.verifyOutcome(Expectations.java:70)
at org.hibernate.jdbc.BatchingBatcher.checkRowCounts(BatchingBatcher.java:90)
at org.hibernate.jdbc.BatchingBatcher.doExecuteBatch(BatchingBatcher.java:70)
at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:268)
at org.hibernate.jdbc.AbstractBatcher.prepareStatement(AbstractBatcher.java:114)
at org.hibernate.jdbc.AbstractBatcher.prepareStatement(AbstractBatcher.java:109)
at org.hibernate.jdbc.AbstractBatcher.prepareBatchStatement(AbstractBatcher.java:244)
at org.hibernate.persister.entity.AbstractEntityPersister.delete(AbstractEntityPersister.java:2666)
at org.hibernate.persister.entity.AbstractEntityPersister.delete(AbstractEntityPersister.java:2911)
at org.hibernate.action.EntityDeleteAction.execute(EntityDeleteAction.java:97)
at org.hibernate.engine.ActionQueue.execute(ActionQueue.java:273)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:265)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:189)
at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321)
at org.hibernate.event.def.DefaultAutoFlushEventListener.onAutoFlush(DefaultAutoFlushEventListener.java:64)
at org.hibernate.impl.SessionImpl.autoFlushIfRequired(SessionImpl.java:1185)
at org.hibernate.impl.SessionImpl.executeUpdate(SessionImpl.java:1283)
at org.hibernate.impl.QueryImpl.executeUpdate(QueryImpl.java:117)
at com.videotron.imagemanager.dao.ImageContentGroupDAOImpl.removeImageContentGroup(ImageContentGroupDAOImpl.java:143)
at com.videotron.imagemanager.dao.ImageContentGroupDAOImpl$$FastClassByCGLIB$$8ef3e206.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:627)
at com.videotron.imagemanager.dao.ImageContentGroupDAOImpl$$EnhancerByCGLIB$$a153b522.removeImageContentGroup(<generated>)
at com.videotron.imagemanager.service.ImageManagerServiceImpl.cleanContent(ImageManagerServiceImpl.java:270)
at com.videotron.imagemanager.service.ImageManagerServiceImpl$$FastClassByCGLIB$$82a03251.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:627)
at com.videotron.imagemanager.service.ImageManagerServiceImpl$$EnhancerByCGLIB$$1a5a3f0d.cleanContent(<generated>)
at com.videotron.imagemanager.image.impl.v1.ImageManagerImpl.cleanContent(ImageManagerImpl.java:1454)
at com.videotron.imagemanager.image.impl.v1.ImageManagerImpl$$FastClassByCGLIB$$4ded3863.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:698)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
at org.springframework.transaction.interceptor.TransactionInterceptor$1.proceedWithInvocation(TransactionInterceptor.java:96)
at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:260)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:94)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:631)
at com.videotron.imagemanager.image.impl.v1.ImageManagerImpl$$EnhancerByCGLIB$$c6ce4df.cleanContent(<generated>)
at com.videotron.imagemanager.service.ImageManagerServiceFacade.cleanContent(ImageManagerServiceFacade.java:350)
at com.videotron.imagemanager.service.ImageManagerServiceFacade$$FastClassByCGLIB$$558f388b.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:627)
at com.videotron.imagemanager.service.ImageManagerServiceFacade$$EnhancerByCGLIB$$fb734807.cleanContent(<generated>)
at com.videotron.imagemanager.service.ImageManagerServiceFacadeTest.testCleanContent(ImageManagerServiceFacadeTest.java:671)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.springframework.test.context.junit4.statements.RunBeforeTestMethodCallbacks.evaluate(RunBeforeTestMethodCallbacks.java:74)
at org.springframework.test.context.junit4.statements.RunAfterTestMethodCallbacks.evaluate(RunAfterTestMethodCallbacks.java:83)
at org.springframework.test.context.junit4.statements.SpringRepeat.evaluate(SpringRepeat.java:72)
at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:231)
at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:88)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.springframework.test.context.junit4.statements.RunBeforeTestClassCallbacks.evaluate(RunBeforeTestClassCallbacks.java:61)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.springframework.test.context.junit4.statements.RunAfterTestClassCallbacks.evaluate(RunAfterTestClassCallbacks.java:71)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.run(SpringJUnit4ClassRunner.java:174)
at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
The exception org.hibernate.StaleStateException is mainly caused by:
"Hibernate caches objects from the session. If the object was modified, and Hibernate doesn't know about it, it will throw this exception."
Or:
Flushing the data before committing the object may clear all objects pending persistence.
I suggest setting the log level of Spring and log4jdbc to debug and enabling Hibernate's show_sql; this will give you a clearer idea.
Update
At first glance, I think the following is happening: on the first iteration, when you delete IMAGE_TAGS_KEY 1 and 2, Hibernate fires this query (which is why log4jdbc logs it), but the transaction is not yet committed, so the change is still not written to the DB; on the second iteration you are deleting the same keys again, whose deletion is still pending. A sketch of an explicit flush follows.
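A minimal sketch of that idea (em, tagsToDelete, and the ImageTag entity are hypothetical names): flush explicitly after marking the deletes, so the batched DELETEs execute inside the open transaction before the next iteration runs against stale state.

// Hypothetical entity/collection names; illustrates explicit flushing only.
for (ImageTag tag : tagsToDelete) {
    ImageTag managed = em.contains(tag) ? tag : em.merge(tag); // re-attach if detached
    em.remove(managed);
}
// Execute the pending batched DELETEs now, within the open transaction,
// so subsequent statements see consistent row counts instead of stale state.
em.flush();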
I'm getting started with MassTransit and need to use the RuntimeServices to manage subscriptions and timeouts. The environment I'm installing into is an externally facing network divided up into segments using firewalls.
Currently, the application server where RuntimeServices is to be installed and the SQL Server do not have the RPC ports open that would allow the Distributed Transaction Coordinator (DTC) to work correctly.
The complete exception I am getting is listed below, but the important part looks like System.Transactions.TransactionException: The operation is not valid for the state of the transaction. I don't believe the transaction is getting off the ground, since DTC is not configured.
Although I should be able to ask for the correct ports to be opened, I am reluctant to do so, as I'm not that concerned about transactions for this purpose. So ideally I'd like to tell MassTransit (or perhaps NHibernate) that I don't require distributed transactions.
BTW, my MS Message Queue is non-transactional.
Any help welcome, with thanks,
Rob
Full exception stack trace:
MassTransit.Context.ServiceBusReceiveContext-'System.Action'1[[MassTransit.IConsumeContext, MassTransit, Version=2.6.416.0, Culture=neutral, PublicKeyToken=null]]' threw an exception consuming message 'MassTransit.Context.ReceiveContext' NHibernate.Exceptions.GenericADOException: could not execute query
[ select subscripti0_.CorrelationId as Correlat1_1_, subscripti0_.CurrentState as CurrentS2_1_, subscripti0_.ControlUri as ControlUri1_, subscripti0_.DataUri as
DataUri1_ from dbo.SubscriptionClientSaga subscripti0_ where subscripti0_.CurrentState=? ]
Name:p1 - Value:Active (State)
[SQL: select subscripti0_.CorrelationId as Correlat1_1_, subscripti0_.CurrentState as CurrentS2_1_, subscripti0_.ControlUri as ControlUri1_, subscripti0_.DataUri as DataUri1_ from dbo.SubscriptionClientSaga subscripti0_ where subscripti0_.CurrentState=?] ---> System.Transactions.TransactionException: The operation is not valid for the state of the transaction.
at System.Transactions.TransactionState.EnlistVolatile(InternalTransaction tx, IEnlistmentNotification enlistmentNotification, EnlistmentOptions enlistmentOptions, Transaction atomicTransaction)
at System.Transactions.Transaction.EnlistVolatile(IEnlistmentNotification enlistmentNotification, EnlistmentOptions enlistmentOptions)
at NHibernate.Transaction.AdoNetWithDistributedTransactionFactory.EnlistInDistributedTransactionIfNeeded(ISessionImplementor session)
at NHibernate.Impl.AbstractSessionImpl.EnlistInAmbientTransactionIfNeeded()
at NHibernate.Impl.AbstractSessionImpl.CheckAndUpdateSessionStatus()
at NHibernate.Impl.SessionImpl.get_Batcher()
at NHibernate.Loader.Loader.GetResultSet(IDbCommand st, Boolean autoDiscoverTypes, Boolean callable, RowSelection selection, ISessionImplementor session)
at NHibernate.Loader.Loader.DoQuery(ISessionImplementor session, QueryParameters queryParameters, Boolean returnProxies)
at NHibernate.Loader.Loader.DoQueryAndInitializeNonLazyCollections(ISessionImplementor session, QueryParameters queryParameters, Boolean returnProxies)
at NHibernate.Loader.Loader.DoList(ISessionImplementor session, QueryParameters queryParameters)
--- End of inner exception stack trace ---
at NHibernate.Loader.Loader.DoList(ISessionImplementor session, QueryParameters queryParameters)
... (elided for brevity!) ...
Transactions are automatically promoted to DTC in certain cases: http://msdn.microsoft.com/en-us/library/ms131083.aspx
You definitely want to avoid that from happening, as it kills performance. Options:
- The subscription service is not resource-intensive; host its database locally.
- Evaluate the scope used for message consumption and see if you can reduce it.
- If using MSMQ, use multicast instead of the subscription service.
- Consider using RabbitMQ instead; no subscription service is required.
https://groups.google.com/forum/?fromgroups#!forum/masstransit-discuss is where you can get help with MassTransit quickly.
Cheers,
ET.