Titan storage.backend=inmemory Causes Incompatible Data Type Exception

I've been using titan-cassandra with storage.backend=cassandra in my testing environment and recently learned that I can simply plop in storage.backend=inmemory to test with an in-memory graph. Since this should drastically increase the speed of my hundreds of tests, it sounds great. But making this change to my otherwise simple test configuration causes many of my once-passing tests to fail. Here's my configuration:
storage.backend=inmemory
storage.hostname=localhost
storage.cassandra.keyspace=test
There are two errors that come up, the vast majority being:
java.lang.IllegalArgumentException: Incompatible data types for: variable
while much less commonly:
com.thinkaurelius.titan.core.TitanException: Could not commit transaction due to exception during persistence
Any idea on how I can resolve these issues?

I run my tests against an in-memory graph database too. I normally don't specify anything for the hostname, but I just tried it with localhost and it still works. I believe the problem was somewhere else and that you have fixed it by now.
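For reference, here is a minimal sketch of the kind of setup I use. It assumes the TitanFactory.open overload that takes an Apache commons-configuration object; the class and method names are just illustrative:

import org.apache.commons.configuration.BaseConfiguration;
import com.thinkaurelius.titan.core.TitanFactory;
import com.thinkaurelius.titan.core.TitanGraph;

public class InMemoryGraphSetup {
    // Opens a throwaway in-memory graph. No hostname or keyspace is
    // needed, because nothing is persisted outside the JVM.
    public static TitanGraph openGraph() {
        BaseConfiguration conf = new BaseConfiguration();
        conf.setProperty("storage.backend", "inmemory");
        return TitanFactory.open(conf);
    }
}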

Related

AnyLogic: truncated class file error during optimization

I should say up front that I am a beginner; here is my problem.
My model works perfectly when I run it as a normal simulation. Now I'm trying to optimize some parameters using the optimization experiment. I've followed all the steps of the official tutorial, but it doesn't work: I get "Exception during discrete event execution:
Truncated class file". The strange thing is that, looking at the console displaying the error, I see that some lines refer to an old version of my model, for example:
java.lang.ClassFormatError: Truncated class file
at coffe_maker.Main._m1_1_delayTime_xjal(Main.java:14070)
The current model's name is coffee_maker_v2_6, so I don't understand why I get this kind of error. Do you know if this is normal? What am I doing wrong?
The most likely cause is that you have Java code left in an 'unused' configuration of a Delay block's "Delay time" expression (e.g., it now has a static value but you had Java code in the now-switched-out dynamic value).
Unfortunately, AnyLogic sometimes still includes the switched-out code in the compiled class, and this can sometimes cause strange runtime errors such as that one.
If this does look to be the case, temporarily switch to the offending switched-out configuration and delete it before switching back to the correct one.
I have resolved it: the problem was that, in every delay block of my model, the delay time was linked to a database reference (of type code). I am now typing the probability distributions directly into the delay blocks instead, and the optimization works.

GHUnit gives allocate_pages() error after conversion to ARC in iOS Project

I've recently converted my iOS project to ARC. I have two targets in my project. One is the application itself, and the other is a set of GHUnit tests. I have approximately 200 tests that do quite a lot of work in terms of creating and modifying Core Data objects. The Core Data store used by the tests is an in memory store, and is thrown away once the tests are finished (i.e. it is not persisted anywhere).
When my tests have been running for a while (they never reach the exact same point before the error is thrown, but it is always around the same tests), the application crashes with an EXC_BAD_ACCESS (Code=2, address=...).
The console output is a stream of allocate_pages() errors indicating that memory could not be allocated.
I've followed the instructions in this answer, and set the main.m file of my GHUnit target to use the -fno-objc-arc compiler flag, but that doesn't seem to have helped.
I don't really understand what these errors mean, and searching for them doesn't seem to have helped. My only guess is that I am running out of memory, but I'm not sure why or how, considering ARC should be releasing objects for me.
I'd really appreciate any help anyone can give me to fix this! If you have any questions just leave me a comment and I will get back to you asap!
Thanks!
Chris,
First, as you have a memory exhaustion problem, you should look at your tests running under the Instruments Allocations tool. Remember to turn on automatic VM snapshots, and mark the heap multiple times as the tests execute.
Second, while this may be related to ARC, it quite possibly isn't. In general, ARC apps, because they can automatically release objects sooner, have a smaller footprint than MRR apps. The move to a new compiler with different options may just be uncovering a pre-existing problem.
Third, because you are using an in-memory database, my first test would be to change it to a SQLite DB, which can have a much smaller footprint. (While you may choose to return to in-memory DBs later, we're trying to find the cause of your memory exhaustion, and an in-memory DB can use a lot of RAM. Hence, let's take it out of the equation.)
Once you've done the 1st and 3rd tasks above, please report back your results.
Andrew

Grails integration tests failing in a (seemingly) random and non-repeatable way

We are writing integration tests for our Grails 2.0.0 application with the help of the Fixtures and Build-Test-Data plugins.
During testing, we discovered that the integration tests fail at certain times and pass at others. Running 'test-app' sometimes results in all tests passing, and sometimes in some of our tests failing.
When the tests fail, they are caused by a unique constraint being violated during the insert of an instance of a domain class. This would indicate that there are still records in the test DB. I am running the H2 db, and have definitely got 'dbCreate = "create-drop"' in my DataSource.groovy.
Grails 2.0 integration test pollution? seems to indicate there is a significant test-pollution problem in Grails. Are there any solutions to this? Have I hit Grails-8530?
[Edit] The test pollution seems to be caused by the unit tests. We have sort-of proved this by deleting the unit tests and successfully running 'test-app' repeatedly.
When I run into errors like this I like to try and find the unit test(s) causing the problem. This might be kinda tricky since yours seem to fail only on occasion.
1) I'd look at unit tests that were recently added. If this problem just started happening then that's a good place to look.
2) Metaclassing seems to be good at causing these types of errors, so I'd look for metaclassing that isn't set up/torn down properly. It's not as much of an issue with 2.0 as with <= 1.3.7, but it could be the problem.
3) I wrote a plugin that executes your tests in a random order, which might not help you solve your current problem directly. But what might help is that it prints out all of your tests, so you can take the list it gives you, run grails test-app <pasted list of unit tests> IntegrationTestThatIsFailing, and then start removing unit tests to find the culprit(s) (http://grails.org/plugin/random-test-order). I found a bug in this with 2.0 that I haven't had time to fix yet (integration tests fail when asserting on rendered view names), but it should still print out your test names for you (which is better than doing it yourself :)
The fact that the integration tests fail with a constraint violation due to existing records reminds me of a situation I once encountered with functional (Selenium) tests executing in unpredictable order, some of them not cleaning up the database properly. Granted, the situation with functional tests is different, since it is more difficult to restore the database state (the test case cannot roll back a transaction in another JVM).
Although integration tests usually roll back transactions, it is still possible to break this behavior if your code controls transactions (commits) explicitly.
First, I would try forcing execution order as mentioned by Jarred in 3). Assuming you can then reproduce the behavior, I would check transactional behaviour next. Setting the logging level of org.hibernate.transaction to debug should show you where the transaction boundaries are, for example:
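(A sketch: with plain log4j that is a single logger line; in Grails 2.x the equivalent setting normally lives in the log4j closure in grails-app/conf/Config.groovy.)

# log4j.properties
log4j.logger.org.hibernate.transaction=DEBUG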
Sorry, I don't yet have a good explanation for why wiping out the unit tests gets rid of the symptoms, besides a general "possibly metaclassing issues". :)

Should app crash or continue on normally while noting the problem?

Options:
1) When there is bad input, the app crashes and prints a message to the console saying what happened
2) When there is bad input, the app throws away the input and continues on as if nothing happened (though noting the problem in a separate log file).
While 2 may seem like the obvious solution, the app is an engine and framework for game development. So if a user is writing something and gets it wrong, it may be beneficial for that problem to be immediately obvious (the app crashing) rather than silently ignored, since the user may forget to check the log for problems (especially if the programmed behavior isn't very noticeable on screen, so he doesn't notice that it is missing).
There is no one-size-fits-all solution. It really depends on the situation and how bad the input is.
However, since you specifically mentioned this is for an engine or framework, then I would say it should never crash. It should raise exceptions or provide notable return codes or whatever is relevant for your environment, and then the application developer using your framework can decide how to handle. The framework itself should not make this decision for all apps that utilize the framework.
I would use exceptions if the language you are using allows them.
Since your framework will be used by other developers, you shouldn't constrain their approach; let them catch your exceptions (or errors) and decide what to do.
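A minimal sketch of that contract in Java (all of the names here are hypothetical, not from the question):

// The engine reports bad input instead of crashing; the game decides.
class BadInputException extends RuntimeException {
    BadInputException(String message) {
        super(message);
    }
}

class AssetLoader {
    byte[] loadAsset(String name, byte[] raw) {
        if (raw == null || raw.length == 0) {
            throw new BadInputException("empty asset: " + name);
        }
        return raw.clone();
    }
}

class Game {
    public static void main(String[] args) {
        try {
            new AssetLoader().loadAsset("tree.png", new byte[0]);
        } catch (BadInputException e) {
            // A strict dev build could rethrow and fail fast;
            // a lenient build logs the problem and skips the asset.
            System.err.println("bad input ignored: " + e.getMessage());
        }
    }
}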
Generally speaking nothing should crash on user input. Whether the app can continue with the error logged or stop right there is something that is useful to be able to configure.
If it's too easy to ignore errors, people will just do so, instead of fixing them. On the other hand, sometimes an error is not something you can fix, or it's totally unrelated to what you're working on, and it's holding up your current task. So it depends a bit on who the user is.
Logging libraries often let you switch logs on and off by module and severity. It might be that you want something similar, to let users configure the "stop on error" behaviour for certain modules or only when above a certain level of severity.
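For instance, a configurable "stop on error" threshold might look like this (Java, with hypothetical names):

enum Severity { INFO, WARNING, ERROR, FATAL }

class ErrorPolicy {
    private final Severity stopAt;

    ErrorPolicy(Severity stopAt) {
        this.stopAt = stopAt;
    }

    // Always log; fail fast only at or above the configured severity.
    void report(Severity severity, String message) {
        System.err.println(severity + ": " + message);
        if (severity.compareTo(stopAt) >= 0) {
            throw new IllegalStateException(severity + ": " + message);
        }
    }
}

A strict development build might use new ErrorPolicy(Severity.WARNING), while a lenient release build could use new ErrorPolicy(Severity.FATAL).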
Personally, I would avoid the crash approach and opt for (2). That said, make sure the error is detected and logged, and above all avoid any swallowing of errors (e.g. an empty catch block).
It is always helpful to have some kind of tracing/logging module, for instance later when you are doing performance tuning or general troubleshooting.
It depends on what the problem is. When I'm programming and writing error handling I use this as my mantra:
Is this exception really exceptional?
Meaning, is the error in the input, or whatever condition is "not normal", recoverable? In the case of a game, a File Not Found exception on a texture could be recoverable: you could show a default texture so you know something broke.
However, if you have textures in a compressed file and you keep getting checksum errors, that would be an exceptional exception, and I would crash the game with the details.
It really boils down to: can the application keep running without issue?
The one exception to this rule though (ha ha) is, if something is corrupted you can no longer trust your validation methods and you should crash as quickly as you can to prevent the corruption from spreading.
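A sketch of that split in Java (purely illustrative; it assumes Java 9+ for InputStream.readAllBytes, and checksumOk is a made-up stand-in for a real integrity check):

import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;

class TextureLoader {
    // A missing file is recoverable: fall back to a visible default.
    // Corrupt data is not: fail fast before the corruption spreads.
    byte[] loadTexture(String path, byte[] defaultTexture) {
        try (InputStream in = new FileInputStream(path)) {
            byte[] data = in.readAllBytes();
            if (!checksumOk(data)) {
                throw new IllegalStateException("corrupt texture: " + path);
            }
            return data;
        } catch (FileNotFoundException e) {
            return defaultTexture; // show the placeholder, keep running
        } catch (IOException e) {
            throw new IllegalStateException("unreadable texture: " + path, e);
        }
    }

    private boolean checksumOk(byte[] data) {
        return data.length > 0; // stand-in for a real checksum
    }
}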

VB.NET SLOW Compile Time - No disk or CPU activity

We have a project for a client that is written in VB.NET. In one of the projects, we have about 100 modules, which are all VERY simple. They're extension methods that convert between object types. Here is a small snippet:
Public Module ScheduleExtensions
    <System.Runtime.CompilerServices.Extension()> _
    Public Function ToServicesData(ByVal source As Schedule) As ScheduleServicesData
        If (source IsNot Nothing) Then
            Dim target As New ScheduleServicesData
            With target
                .CenterId = source.CenterId
                .EmployeeGuid = source.EmployeeGuid
                .EndDateTime = source.EndDateTime
                ' ... more one-to-one property assignments
            End With
            Return target
        End If
        Return Nothing
    End Function
End Module
The problem is, this project alone takes 2+ minutes to build. I ran DiskMon and FileMon, and the build doesn't access the file system while it appears to hang. CPU usage is also low during the majority of that time. After about 2 minutes, the build finishes and there is disk and CPU activity. The problem can be reproduced on any machine (4 tried so far).
I went so far as to compile the project using the vbc command line, and the problem is there as well.
Is there something about VB.NET extension methods that lead to poor compile time? That is the only feature we're using that is more complex than looping/getting/setting, etc.
Performance problems that show no significant CPU or disk activity are invariably related to network waits, either network performance itself or, more likely, waiting for responses from services on other systems. Now, I do not see anything in the sample that should have that problem, so I must assume the problem is coming from something else: your project, your project settings, your VS environment, or your system's environment.
You might try to get a tool that can monitor all network calls from your system and see what's going on.
It's hard to know what the problem would be based on a small sample like this. There is nothing inherently slow about extension method support in the compiler and we have numerous regression tests in this area. If there is a bug, it's likely a combination of several factors causing the problem.
If you have the time, please file a bug on this issue at
http://connect.microsoft.com
It makes the bug much easier to investigate if you can provide a small sample which reproduces the problem.