GraphDB Free fails to respond

I'm using GraphDB Free 8.4.1 in a research project, and sometimes it fails to respond to requests. There are no errors or exceptions in the GraphDB logs; GraphDB is running as server+workbench with the default configuration.
However, I do get an exception in the component that queries the GraphDB server.
The exception is:
org.eclipse.rdf4j.repository.RepositoryException: org.apache.http.NoHttpResponseException: The target server failed to respond
at org.eclipse.rdf4j.repository.http.HTTPRepositoryConnection.exportStatements(HTTPRepositoryConnection.java:287)
at org.eclipse.rdf4j.repository.http.HTTPRepositoryConnection.getStatements(HTTPRepositoryConnection.java:269)
at x.y.semantic.Repository.loadGraph(Repository.java:90)
The piece of code where the exception occurs:
ValueFactory factory = SimpleValueFactory.getInstance();
RepositoryConnection connection = repository.getConnection();
// exception happens in getStatements here:
RepositoryResult<Statement> result = connection.getStatements(null, null, null, true, factory.createIRI(contextURI));
Can you please help me identify where the problem is?
Thank you very much!

GraphDB Free has a limitation of two concurrent connections. I suspect the issue is caused by GDB-1975 "GraphDB Free freezes when multiple concurrent queries are executed". You can easily reproduce the issue with:
for (int i = 0; i < 10; i++) {
    RepositoryConnection repCon = repository.getConnection();
    String queryString = "select * where {?s ?p ?o}";
    try {
        TupleQuery tupleQuery = repCon.prepareTupleQuery(QueryLanguage.SPARQL, queryString);
        // stops working after the 5th iteration
        TupleQueryResult queryResult = tupleQuery.evaluate();
    } finally {
        // this does not close the connection because the queryResult was not closed
        repCon.close();
    }
    System.out.println(i);
}
The code does not close the queryResult iterator, so RDF4J leaves the underlying HTTP connection open. I recommend using Java's try-with-resources construct and always closing all iterators, or upgrading to version 8.6 or a later version, where the bug is fixed.
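For reference, here is a minimal sketch of the loop above with everything closed via try-with-resources (assuming an RDF4J version in which RepositoryConnection and TupleQueryResult are AutoCloseable, which holds for recent releases):

for (int i = 0; i < 10; i++) {
    try (RepositoryConnection repCon = repository.getConnection()) {
        TupleQuery tupleQuery = repCon.prepareTupleQuery(QueryLanguage.SPARQL, "select * where {?s ?p ?o}");
        // Closing the result releases the underlying HTTP connection,
        // so the two-connection limit of GraphDB Free is never exhausted.
        try (TupleQueryResult queryResult = tupleQuery.evaluate()) {
            while (queryResult.hasNext()) {
                queryResult.next();
            }
        }
    }
    System.out.println(i);
}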


Apache HTTPClient5 - How to Prevent Connection/Stream Refused

Problem Statement
Context
I'm a Software Engineer in Test running order permutations of restaurant menu items to confirm that they succeed at order placement w/ the POS
In short, this POSTs a JSON payload to an endpoint, which then validates the order w/ a POS to determine success/fail/other
Where the POS, and therefore transactions per second (TPS), may vary, but each back end uses the same core handling
This can be as many as ~22,000 permutations per item, each of easily manageable JSON size, that need to be handled as quickly as possible
The network can vary wildly depending upon the restaurant and/or region one is testing
E.g. some have a much higher latency than others
Therefore, the HTTP client should be able to intelligently negotiate the same content & endpoint regardless of this
Direct Problem
I'm using Apache's HTTP Client 5 w/ PoolingAsyncClientConnectionManager to execute both the GET for the Menu contents, and the POST to check if the order succeeds
This works out of the box, but sometimes loses connections w/ Stream Refused, specifically:
org.apache.hc.core5.http2.H2StreamResetException: Stream refused
No individual tuning I can find seems to work across all network contexts w/ variable latency
Following the stacktrace seems to indicate that the stream had already closed, so the client needs a way to keep it open, or to avoid executing on an already-closed connection
if (connState == ConnectionHandshake.GRACEFUL_SHUTDOWN) {
throw new H2StreamResetException(H2Error.PROTOCOL_ERROR, "Stream refused");
}
Some Attempts to Fix Problem
Tried to use search engines to find answers, but there are few hits for HttpClient 5
Tried to use the official documentation, but it is sparse
Changing max connections per route to a reduced number, shifting inactivity validations, or connection time-to-live
Where the inactivity checks may fix the POSTs, but stall the GETs for some transactions
And tuning for one region/restaurant may work for one then break for another, w/ only the network as a variable
PoolingAsyncClientConnectionManagerBuilder builder = PoolingAsyncClientConnectionManagerBuilder
        .create()
        .setTlsStrategy(getTlsStrategy())
        .setMaxConnPerRoute(12)
        .setMaxConnTotal(12)
        .setValidateAfterInactivity(TimeValue.ofMilliseconds(1000))
        .setConnectionTimeToLive(TimeValue.ofMinutes(2))
        .build();
Shifting to a custom RequestConfig w/ different timeouts
private HttpClientContext getHttpClientContext() {
    RequestConfig requestConfig = RequestConfig.custom()
            .setConnectTimeout(Timeout.of(10, TimeUnit.SECONDS))
            .setResponseTimeout(Timeout.of(10, TimeUnit.SECONDS))
            .build();
    HttpClientContext httpContext = HttpClientContext.create();
    httpContext.setRequestConfig(requestConfig);
    return httpContext;
}
Initial Code Segments for Analysis
(In addition to the above segments w/ change attempts)
Wrapper handling to init and get response
public SimpleHttpResponse getFullResponse(String url, PoolingAsyncClientConnectionManager manager, SimpleHttpRequest req) {
    try (CloseableHttpAsyncClient httpclient = getHTTPClientInstance(manager)) {
        httpclient.start();
        CountDownLatch latch = new CountDownLatch(1);
        long startTime = System.currentTimeMillis();
        Future<SimpleHttpResponse> future = getHTTPResponse(url, httpclient, latch, startTime, req);
        latch.await();
        return future.get();
    } catch (IOException | InterruptedException | ExecutionException e) {
        e.printStackTrace();
        return new SimpleHttpResponse(999, CommonUtils.getExceptionAsMap(e).toString());
    }
}
With actual handler and probing code
private Future<SimpleHttpResponse> getHTTPResponse(String url, CloseableHttpAsyncClient httpclient, CountDownLatch latch, long startTime, SimpleHttpRequest req) {
    return httpclient.execute(req, getHttpClientContext(), new FutureCallback<SimpleHttpResponse>() {
        @Override
        public void completed(SimpleHttpResponse response) {
            latch.countDown();
            logger.info("[{}][{}ms] - {}", response.getCode(), getTotalTime(startTime), url);
        }
        @Override
        public void failed(Exception e) {
            latch.countDown();
            logger.error("[{}ms] - {} - {}", getTotalTime(startTime), url, e);
        }
        @Override
        public void cancelled() {
            latch.countDown();
            logger.error("[{}ms] - request cancelled for {}", getTotalTime(startTime), url);
        }
    });
}
Direct Question
Is there a way to configure the client such that it can handle for these variances on its own without explicitly modifying the configuration for each endpoint context?
Fixed w/ a Combination of the Below to Assure the Connection Is Live/Ready
(Or at least it is stable)
Forcing HTTP 1
HttpAsyncClients.custom()
        .setConnectionManager(manager)
        .setRetryStrategy(getRetryStrategy())
        .setVersionPolicy(HttpVersionPolicy.FORCE_HTTP_1)
        .setConnectionManagerShared(true);
Setting Effective Headers for POST
Specifically the close header
req.setHeader("Connection", "close, TE");
Note: Inactivity check helps, but still sometimes gets refusals w/o this
Setting Inactivity Checks by Type
Set POSTs to validate immediately after inactivity
Note: Using 1000 for both caused a high drop rate for some systems
PoolingAsyncClientConnectionManagerBuilder
.create()
.setValidateAfterInactivity(TimeValue.ofMilliseconds(0))
Set GET to validate after 1s
PoolingAsyncClientConnectionManagerBuilder
.create()
.setValidateAfterInactivity(TimeValue.ofMilliseconds(1000))
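Putting the pieces together, a minimal sketch of the stable setup (getTlsStrategy() and getRetryStrategy() are the same assumed helpers as in the snippets above; one manager per request type keeps the different validation intervals for POST and GET):

// POST manager: validate on every lease, since POST connections are force-closed.
PoolingAsyncClientConnectionManager postManager = PoolingAsyncClientConnectionManagerBuilder
        .create()
        .setTlsStrategy(getTlsStrategy())
        .setValidateAfterInactivity(TimeValue.ofMilliseconds(0))
        .build();

// GET manager: validate after 1s of inactivity.
PoolingAsyncClientConnectionManager getManager = PoolingAsyncClientConnectionManagerBuilder
        .create()
        .setTlsStrategy(getTlsStrategy())
        .setValidateAfterInactivity(TimeValue.ofMilliseconds(1000))
        .build();

// Clients are forced to HTTP/1.1 so the "Connection: close" header is honored.
CloseableHttpAsyncClient postClient = HttpAsyncClients.custom()
        .setConnectionManager(postManager)
        .setRetryStrategy(getRetryStrategy())
        .setVersionPolicy(HttpVersionPolicy.FORCE_HTTP_1)
        .setConnectionManagerShared(true)
        .build();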
Given the Error Context
Tracing the connection problem in the stacktrace to AbstractH2StreamMultiplexer
shows ConnectionHandshake.GRACEFUL_SHUTDOWN as triggering the stream refusal:
if (connState == ConnectionHandshake.GRACEFUL_SHUTDOWN) {
throw new H2StreamResetException(H2Error.PROTOCOL_ERROR, "Stream refused");
}
Which corresponds to
connState = streamMap.isEmpty() ? ConnectionHandshake.SHUTDOWN : ConnectionHandshake.GRACEFUL_SHUTDOWN;
Reasoning
If I'm understanding correctly:
The connections were being closed, intentionally or not
However, they were not being confirmed ready before executing again
Which caused the failures, because the stream was no longer viable
Therefore the fix works because (it seems):
Forcing HTTP 1 allows a single context to manage the connections
Where HttpVersionPolicy NEGOTIATE/FORCE_HTTP_2 had greater or equivalent failures across the spectrum of regions/menus
And it assures that all connections are valid before use
And POSTs are always closed due to the close header, which is unavailable in HTTP/2
Therefore:
GETs are checked for validity w/ reasonable periodicity
POSTs are checked every time, and since the connection is forcibly closed, it is re-acquired before execution
Which leaves no room for unexpected closures
And otherwise removes the potential that it was incorrectly switching to HTTP/2
Will accept this until a better answer comes along, as this is stable but sub-optimal.

Get broken constraints in OptaPlanner with non-reversible accumulator

I am trying to obtain a list of broken constraints from a problem instance in OptaPlanner. I am using OptaPlanner version 7.0.0.Final with Drools as the rules engine (also 7.0.0.Final). The problem is solved correctly and without any error, but when I try to obtain the broken constraints I get a NullPointerException.
As far as I have researched, I found out that this only happens when I use a Drools accumulator without a reverse operation (like max or min). Further, I made a custom accumulator which is an exact copy of org.drools.core.base.accumulators.LongSumAccumulateFunction, and everything works as expected, but as soon as I change the supportsReverse() function to return false, the NullPointerException is raised.
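For illustration, a minimal sketch of such a non-reversible accumulator; here I extend the built-in class instead of copying it (the class name is hypothetical, and registering the function with Drools is left out):

import org.drools.core.base.accumulators.LongSumAccumulateFunction;

// Identical to the built-in long-sum accumulator, except that it reports
// itself as non-reversible, which is enough to trigger the NPE described above.
public class NonReversibleLongSumAccumulateFunction extends LongSumAccumulateFunction {
    @Override
    public boolean supportsReverse() {
        return false;
    }
}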
I have managed to reconstruct this problem in one of the provided examples - CloudBalancing. This is the change to CloudBalancingHelloWorld; its only purpose is to obtain the list of broken constraints as mentioned in this post.
public class CloudBalancingHelloWorld {
    public static void main(String[] args) {
        // Build the Solver
        SolverFactory<CloudBalance> solverFactory = SolverFactory.createFromXmlResource(
                "org/optaplanner/examples/cloudbalancing/solver/cloudBalancingSolverConfig.xml");
        Solver<CloudBalance> solver = solverFactory.buildSolver();
        // Load a problem with 400 computers and 1200 processes
        CloudBalance unsolvedCloudBalance = new CloudBalancingGenerator().createCloudBalance(400, 1200);
        // Solve the problem
        CloudBalance solvedCloudBalance = solver.solve(unsolvedCloudBalance);
        // Display the result
        System.out.println("\nSolved cloudBalance with 400 computers and 1200 processes:\n"
                + toDisplayString(solvedCloudBalance));
        //
        // A piece of code added - start
        //
        ScoreDirector<CloudBalance> scoreDirector = solver.getScoreDirectorFactory().buildScoreDirector();
        scoreDirector.setWorkingSolution(solvedCloudBalance);
        Collection<ConstraintMatchTotal> constraints = scoreDirector.getConstraintMatchTotals();
        System.out.println(constraints.size());
        //
        // A piece of code added - end
        //
    }

    public static String toDisplayString(CloudBalance cloudBalance) {
        StringBuilder displayString = new StringBuilder();
        for (CloudProcess process : cloudBalance.getProcessList()) {
            CloudComputer computer = process.getComputer();
            displayString.append("  ").append(process.getLabel()).append(" -> ")
                    .append(computer == null ? null : computer.getLabel()).append("\n");
        }
        return displayString.toString();
    }
}
And this is the change to the requiredCpuPowerTotal rule. Please note that I have done this only to demonstrate the problem. Basically, I have changed sum to max.
rule "requiredCpuPowerTotal"
when
$computer : CloudComputer($cpuPower : cpuPower)
accumulate(
CloudProcess(
computer == $computer,
$requiredCpuPower : requiredCpuPower);
$requiredCpuPowerTotal : max($requiredCpuPower);
(Integer) $requiredCpuPowerTotal > $cpuPower
)
then
scoreHolder.addHardConstraintMatch(kcontext, $cpuPower - (Integer) $requiredCpuPowerTotal);
end
I am really confused, because the error does not happen during the planning phase, but it does when the scoreDirector recomputes the score to obtain the broken constraints. I mean, the same calculations must have happened during the planning phase, right?
Anyway, here is the stack trace:
Exception in thread "main" Exception executing consequence for rule "requiredCpuPowerTotal" in org.optaplanner.examples.cloudbalancing.solver: java.lang.NullPointerException
at org.drools.core.runtime.rule.impl.DefaultConsequenceExceptionHandler.handleException(DefaultConsequenceExceptionHandler.java:39)
at org.drools.core.common.DefaultAgenda.handleException(DefaultAgenda.java:1256)
at org.drools.core.phreak.RuleExecutor.innerFireActivation(RuleExecutor.java:438)
at org.drools.core.phreak.RuleExecutor.fireActivation(RuleExecutor.java:379)
at org.drools.core.phreak.RuleExecutor.fire(RuleExecutor.java:135)
at org.drools.core.phreak.RuleExecutor.evaluateNetworkAndFire(RuleExecutor.java:88)
at org.drools.core.concurrent.AbstractRuleEvaluator.internalEvaluateAndFire(AbstractRuleEvaluator.java:34)
at org.drools.core.concurrent.SequentialRuleEvaluator.evaluateAndFire(SequentialRuleEvaluator.java:43)
at org.drools.core.common.DefaultAgenda.fireLoop(DefaultAgenda.java:1072)
at org.drools.core.common.DefaultAgenda.internalFireAllRules(DefaultAgenda.java:1019)
at org.drools.core.common.DefaultAgenda.fireAllRules(DefaultAgenda.java:1011)
at org.drools.core.impl.StatefulKnowledgeSessionImpl.internalFireAllRules(StatefulKnowledgeSessionImpl.java:1321)
at org.drools.core.impl.StatefulKnowledgeSessionImpl.fireAllRules(StatefulKnowledgeSessionImpl.java:1312)
at org.drools.core.impl.StatefulKnowledgeSessionImpl.fireAllRules(StatefulKnowledgeSessionImpl.java:1296)
at org.optaplanner.core.impl.score.director.drools.DroolsScoreDirector.getConstraintMatchTotals(DroolsScoreDirector.java:134)
at org.optaplanner.examples.cloudbalancing.app.CloudBalancingHelloWorld.main(CloudBalancingHelloWorld.java:52)
Caused by: java.lang.NullPointerException
at org.drools.core.base.accumulators.JavaAccumulatorFunctionExecutor$JavaAccumulatorFunctionContext.getAccumulatedObjects(JavaAccumulatorFunctionExecutor.java:208)
at org.drools.core.reteoo.FromNodeLeftTuple.getAccumulatedObjects(FromNodeLeftTuple.java:94)
at org.drools.core.common.AgendaItem.getObjectsDeep(AgendaItem.java:78)
at org.drools.core.reteoo.RuleTerminalNodeLeftTuple.getObjectsDeep(RuleTerminalNodeLeftTuple.java:359)
at org.optaplanner.core.api.score.holder.AbstractScoreHolder.extractJustificationList(AbstractScoreHolder.java:118)
at org.optaplanner.core.api.score.holder.AbstractScoreHolder.registerConstraintMatch(AbstractScoreHolder.java:88)
at org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScoreHolder.addHardConstraintMatch(HardSoftScoreHolder.java:53)
at org.optaplanner.examples.cloudbalancing.solver.Rule_requiredCpuPowerTotal1284553313.defaultConsequence(Rule_requiredCpuPowerTotal1284553313.java:14)
at org.optaplanner.examples.cloudbalancing.solver.Rule_requiredCpuPowerTotal1284553313DefaultConsequenceInvokerGenerated.evaluate(Unknown Source)
at org.optaplanner.examples.cloudbalancing.solver.Rule_requiredCpuPowerTotal1284553313DefaultConsequenceInvoker.evaluate(Unknown Source)
at org.drools.core.phreak.RuleExecutor.innerFireActivation(RuleExecutor.java:431)
... 13 more
Thank you for any help in advance.
That NPE sounds like a bug in Drools. The ConstraintMatch API should always just work. Verify that you still get it against the latest master version. If so, please create a JIRA issue for it with a minimal reproducer and we'll look into it.

How to do batch processing with Apex?

How can I create a batch processing application with Apache Apex?
All the examples I've found were streaming applications, which means they never end, and I would like my app to close once it has processed all the data.
Thanks
What is your use-case? Supporting batch natively is on the roadmap and is being worked on right now.
Alternatively, until then, once you are sure that your processing is done, the input operator can throw a ShutdownException, and that will propagate through the DAG and shut it down.
Let us know if you need further details.
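For example, here is a minimal sketch of a finite input operator (class and field names are hypothetical; it assumes Apex's Operator.ShutdownException and the common BaseOperator):

import com.datatorrent.api.DefaultOutputPort;
import com.datatorrent.api.InputOperator;
import com.datatorrent.api.Operator.ShutdownException;
import com.datatorrent.common.util.BaseOperator;

public class FiniteNumberGenerator extends BaseOperator implements InputOperator
{
  public final transient DefaultOutputPort<Integer> output = new DefaultOutputPort<>();
  private int next = 0;

  @Override
  public void emitTuples()
  {
    if (next >= 100) {
      throw new ShutdownException(); // propagates through the DAG and shuts the application down
    }
    output.emit(next++);
  }
}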
You can add an exit condition before running the app, for example:
public void testMapOperator() throws Exception
{
  LocalMode lma = LocalMode.newInstance();
  DAG dag = lma.getDAG();
  NumberGenerator numGen = dag.addOperator("numGen", new NumberGenerator());
  FunctionOperator.MapFunctionOperator<Integer, Integer> mapper
      = dag.addOperator("mapper", new FunctionOperator.MapFunctionOperator<Integer, Integer>(new Square()));
  ResultCollector collector = dag.addOperator("collector", new ResultCollector());
  dag.addStream("raw numbers", numGen.output, mapper.input);
  dag.addStream("mapped results", mapper.output, collector.input);
  // Create local cluster
  LocalMode.Controller lc = lma.getController();
  lc.setHeartbeatMonitoringEnabled(false);
  // Condition to exit the application
  ((StramLocalCluster) lc).setExitCondition(new Callable<Boolean>()
  {
    @Override
    public Boolean call() throws Exception
    {
      return TupleCount == NumTuples;
    }
  });
  lc.run();
  Assert.assertEquals(sum, 285);
}
For the complete code, refer to https://github.com/apache/apex-malhar/blob/master/stream/src/test/java/org/apache/apex/malhar/stream/FunctionOperator/FunctionOperatorTest.java

Neo4j Java API Concurrency v2.0M3: Exception when iterating over relationships while other threads are creating new relationships concurrently

What I am trying to achieve here is to get the number of relationships of a particular node while other threads are adding new relationships to it concurrently. I run my code in a unit test with a TestGraphDatabaseFactory().newImpermanentDatabase() graph service.
My code is executed by ~50 threads, and it looks something like this:
int numOfRels = 0;
try {
    Iterable<Relationship> rels = parentNode.getRelationships(RelTypes.RUNS, Direction.OUTGOING);
    // use a single iterator; calling rels.iterator() on every pass would restart the iteration
    Iterator<Relationship> it = rels.iterator();
    while (it.hasNext()) {
        numOfRels++;
        it.next();
    }
}
catch (Exception e) {
    throw e;
}
// Enforce relationship limit
if (numOfRels > 10) {
    // do something
}
Transaction tx = graph.beginTx();
try {
    Node node = createMyNodeAndConnectToParentNode(...);
    tx.success();
    return node;
}
catch (Exception e) {
    tx.failure();
}
finally {
    tx.finish();
}
The problem is that once in a while I get an "ArrayIndexOutOfBoundsException: 1" in the try-catch block above (the one surrounding getRelationships()). If I understand correctly, the Iterable is not thread-safe, and that is causing this problem.
My question is: what is the best way to iterate over constantly changing relationships and nodes using Neo4j's Java API?
I am getting the following errors:
Exception in thread "Thread-14" org.neo4j.helpers.ThisShouldNotHappenError: Developer: Stefan/Jake claims that: A property key id disappeared under our feet
at org.neo4j.kernel.impl.core.NodeProxy.setProperty(NodeProxy.java:188)
at com.inbiza.connio.neo4j.server.extensions.graph.AppEntity.createMyNodeAndConnectToParentNode(AppEntity.java:546)
at com.inbiza.connio.neo4j.server.extensions.graph.AppEntity.create(AppEntity.java:305)
at com.inbiza.connio.neo4j.server.extensions.TestEmbeddedConnioGraph$appCreatorThread.run(TestEmbeddedConnioGraph.java:61)
at java.lang.Thread.run(Thread.java:722)
Exception in thread "Thread-92" java.lang.ArrayIndexOutOfBoundsException: 1
at org.neo4j.kernel.impl.core.RelationshipIterator.fetchNextOrNull(RelationshipIterator.java:72)
at org.neo4j.kernel.impl.core.RelationshipIterator.fetchNextOrNull(RelationshipIterator.java:36)
at org.neo4j.helpers.collection.PrefetchingIterator.hasNext(PrefetchingIterator.java:55)
at com.inbiza.connio.neo4j.server.extensions.graph.AppEntity.create(AppEntity.java:243)
at com.inbiza.connio.neo4j.server.extensions.TestEmbeddedConnioGraph$appCreatorThread.run(TestEmbeddedConnioGraph.java:61)
at java.lang.Thread.run(Thread.java:722)
Exception in thread "Thread-12" java.lang.ArrayIndexOutOfBoundsException: 1
at org.neo4j.kernel.impl.core.RelationshipIterator.fetchNextOrNull(RelationshipIterator.java:72)
at org.neo4j.kernel.impl.core.RelationshipIterator.fetchNextOrNull(RelationshipIterator.java:36)
at org.neo4j.helpers.collection.PrefetchingIterator.hasNext(PrefetchingIterator.java:55)
at com.inbiza.connio.neo4j.server.extensions.graph.AppEntity.create(AppEntity.java:243)
at com.inbiza.connio.neo4j.server.extensions.TestEmbeddedConnioGraph$appCreatorThread.run(TestEmbeddedConnioGraph.java:61)
at java.lang.Thread.run(Thread.java:722)
Exception in thread "Thread-93" java.lang.ArrayIndexOutOfBoundsException
Exception in thread "Thread-90" java.lang.ArrayIndexOutOfBoundsException
Below is the method responsible for node creation:
static Node createMyNodeAndConnectToParentNode(GraphDatabaseService graph, final Node ownerAccountNode, final String suggestedName, Map properties) {
    final String accountId = checkNotNull((String) ownerAccountNode.getProperty("account_id"));
    Node appNode = graph.createNode();
    appNode.setProperty("urn_name", App.composeUrnName(accountId, suggestedName.toLowerCase().trim()));
    int nextId = nodeId.addAndGet(1); // I normally use getOrCreate idiom but to simplify I replaced it with an atomic int - that would do for testing
    String urn = App.composeUrnUid(accountId, nextId);
    appNode.setProperty("urn_uid", urn);
    appNode.setProperty("id", nextId);
    appNode.setProperty("name", suggestedName);
    Index<Node> indexUid = graph.index().forNodes("EntityUrnUid");
    indexUid.add(appNode, "urn_uid", urn);
    appNode.addLabel(LabelTypes.App);
    appNode.setProperty("version", properties.get("version"));
    appNode.setProperty("description", properties.get("description"));
    Relationship rel = ownerAccountNode.createRelationshipTo(appNode, RelTypes.RUNS);
    rel.setProperty("date_created", fmt.print(new DateTime()));
    return appNode;
}
I am looking at org.neo4j.kernel.impl.core.RelationshipIterator.fetchNextOrNull().
It looks like my test generates a condition where the else if ( (status = fromNode.getMoreRelationships( nodeManager )).loaded() || lastTimeILookedThereWasMoreToLoad ) branch is not executed, and where the currentTypeIterator state is changed in between.
RelIdIterator currentTypeIterator = rels[currentTypeIndex]; // <-- this is where it crashes
do
{
if ( currentTypeIterator.hasNext() )
...
...
while ( !currentTypeIterator.hasNext() )
{
if ( ++currentTypeIndex < rels.length )
{
currentTypeIterator = rels[currentTypeIndex];
}
else if ( (status = fromNode.getMoreRelationships( nodeManager )).loaded()
// This is here to guard for that someone else might have loaded
// stuff in this relationship chain (and exhausted it) while I
// iterated over my batch of relationships. It will only happen
// for nodes which have more than <grab size> relationships and
// isn't fully loaded when starting iterating.
|| lastTimeILookedThereWasMoreToLoad )
{
....
}
}
} while ( currentTypeIterator.hasNext() );
I also tested a couple of locking scenarios. The one below solves the issue. I am not sure if, based on this, I should use a lock every time I iterate over relationships.
Transaction txRead = graph.beginTx();
try {
    txRead.acquireReadLock(parentNode);
    long numOfRels = 0L;
    Iterable<Relationship> rels = parentNode.getRelationships(RelTypes.RUNS, Direction.OUTGOING);
    // again, a single iterator instead of a fresh one per loop pass
    Iterator<Relationship> it = rels.iterator();
    while (it.hasNext()) {
        numOfRels++;
        it.next();
    }
    txRead.success();
}
finally {
    txRead.finish();
}
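As an aside, the counting itself could be shortened with a helper (a sketch assuming org.neo4j.helpers.collection.IteratorUtil, which ships with Neo4j in this era):

int numOfRels = IteratorUtil.count(parentNode.getRelationships(RelTypes.RUNS, Direction.OUTGOING));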
I am very new to Neo4j and its source base; I am just testing it as a potential data store for our product. I will appreciate it if someone who knows Neo4j inside & out explains what is going on here.
This is a bug. The fix is captured in this pull request: https://github.com/neo4j/neo4j/pull/1011
Well, I think this is a bug. The Iterable returned by getRelationships() is meant to be immutable. When this method is called, all the relationships available up to that moment will be available in the iterator. (You can verify this from org.neo4j.kernel.IntArrayIterator.)
I tried replicating it by having 250 threads trying to insert a relationship from a node to some other node, with a main thread looping over the iterator for the first node. On careful analysis, the iterator only contains the relationships added when getRelationships() was last called. The issue never came up for me.
Can you please post your complete code? IMO there might be some silly error. The reason it cannot happen is that write locks are in place when adding a relationship, and reads are hence synchronized.

"Collection was modified; enumeration operation may not execute" error on deployment

I'm getting this error when I deploy a VB.NET application and for the life of me I cannot figure out why.
I do not get this error when I run the app from the IDE, and the test machine I am deploying it to has a similar configuration to the dev machine... Windows 7 & .NET 3.5 SP1 and 4.0.
The app bombs out when the main form is loaded after logging in. I've narrowed it down to the main form, because if I load another form from login and then open the main form, this happens.
Linked below is a screenshot of the stack trace.
Any ideas? I'm really lost here.
Thanks.
I do not see a way for ShapeCollection.Dispose() to throw that exception. Although it is manipulating a List<> that can indeed throw that exception, the code should not trigger it:
private void Dispose(bool disposing)
{
    if (!this.m_Disposed && disposing)
    {
        // iterates backwards by index, so no enumerator exists that could be invalidated
        for (int i = this.m_Shapes.Count - 1; i >= 0; i--)
        {
            this.m_Shapes[i].Dispose();
        }
        this.m_Shapes.Clear();
        this.m_Shapes = null;
    }
    this.m_Disposed = true;
}
Well, this is from the PowerPacks version that I have. There have been a couple of versions of it floating around; it used to be distributed separately. Make sure you didn't accidentally deploy an old version.