How to handle a failed write behind from the GridCacheWriteBehindStore? - ignite

GridCacheWriteBehindStore does not log the object type or its fields when a write to the underlying store fails. We accept that the cache and the underlying store may be out of sync when using write behind, but we NEED to know what failed.
A simple and likely example is a NOT NULL constraint on a database column with no corresponding check in the Java layer.
Here is all you see. Note also that it is logged only at WARN level, which also seems wrong.
[WARN ] 2016-11-30 14:23:17.178 [flusher-0-#57%null%]
GridCacheWriteBehindStore - Unable to update underlying store:
o.a.i.cache.store.jdbc.CacheJdbcPojoStore#3a60c416

You're right, the exception is ignored there. This will be fixed in the upcoming 1.8 release: https://issues.apache.org/jira/browse/IGNITE-3770
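Until that fix is available, one hedged workaround is to subclass the store and log the failing entry yourself before rethrowing. This is only a minimal sketch: the class name is made up, it assumes IgniteLogger resource injection works for your store, and that the key/value toString() output is safe to log.
import javax.cache.Cache;
import javax.cache.integration.CacheWriterException;
import org.apache.ignite.IgniteLogger;
import org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStore;
import org.apache.ignite.resources.LoggerResource;

// Sketch: log the failing entry before rethrowing, so the flusher's warning
// is no longer the only trace of what could not be stored.
public class LoggingJdbcPojoStore<K, V> extends CacheJdbcPojoStore<K, V> {
    @LoggerResource
    private IgniteLogger log;

    @Override
    public void write(Cache.Entry<? extends K, ? extends V> entry) throws CacheWriterException {
        try {
            super.write(entry);
        }
        catch (CacheWriterException e) {
            // Assumes key/value toString() is safe and useful to log.
            log.error("Write-behind failed for key=" + entry.getKey()
                + ", value=" + entry.getValue(), e);
            throw e;
        }
    }
}
Note that wiring this in means supplying your own javax.cache.configuration.Factory for the cache store instead of the stock CacheJdbcPojoStoreFactory, and the JDBC type mappings still have to be configured on the subclass.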

Related

CrateIO 2.0.2 - Problems with Insert and Update

I upgraded from Crate 1.1.4 to 2.0.2. After this I also optimized all tables.
Crate runs on a single server with one instance. I have not changed any default settings, except the node name and the cluster name.
But now I can't write anything to the database. Selects work fine, but every write operation fails with:
SQLActionException: INTERNAL_SERVER_ERROR 5000 UnavailableShardsException: [mytable][3] Not enough active copies to meet shard count of [ALL] (have 1, needed 2). Timeout: [1m], request: [ShardUpsertRequest{items=[Item{id='10'}], shardId=[my_table][3]}]
at org.elasticsearch.action.support.replication.ReplicationOperation.execute(ReplicationOperation.java:107)
at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.onResponse(TransportReplicationAction.java:319)
at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.onResponse(TransportReplicationAction.java:254)
at org.elasticsearch.action.support.replication.TransportReplicationAction$1.onResponse(TransportReplicationAction.java:839)
at org.elasticsearch.action.support.replication.TransportReplicationAction$1.onResponse(TransportReplicationAction.java:836)
at org.elasticsearch.index.shard.IndexShardOperationsLock.acquire(IndexShardOperationsLock.java:142)
at org.elasticsearch.index.shard.IndexShard.acquirePrimaryOperationLock(IndexShard.java:1656)
at org.elasticsearch.action.support.replication.TransportReplicationAction.acquirePrimaryShardReference(TransportReplicationAction.java:848)
at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.doRun(TransportReplicationAction.java:271)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:250)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:242)
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69)
at org.elasticsearch.transport.TransportService$6.doRun(TransportService.java:550)
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:527)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
It does not matter whether I run the INSERT/UPDATE via JDBC or directly in the Crate console.
Does anyone have a good idea how I can solve this problem?
Thanks!
CrateDB changed the default write consistency checks and also the default number_of_shards (to allow usage on a 1-node cluster with default table settings).
See https://crate.io/docs/reference/release_notes/2.0.1.html#breaking-changes
In your case, I guess your number_of_replicas is set to 1.
Setting it to 0 or 0-1 (auto-expand) will solve this.
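For reference, the replica setting can be changed at runtime without recreating the table. A minimal sketch using plain JDBC, assuming the table is named mytable and a CrateDB JDBC driver is on the classpath; the connection URL and driver class depend on your setup (CrateDB also speaks the PostgreSQL wire protocol).
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class FixReplicas {
    public static void main(String[] args) throws Exception {
        // Placeholder URL; adjust host, port and driver for your installation.
        try (Connection conn = DriverManager.getConnection("jdbc:crate://localhost:5432/");
             Statement stmt = conn.createStatement()) {
            // '0-1' auto-expands: 0 replicas on a single node, 1 once a second node joins.
            stmt.execute("ALTER TABLE mytable SET (number_of_replicas = '0-1')");
        }
    }
}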

Importing data from multi-value D3 database into SQL issues

I'm trying to use mv.NET by Bluefinity. I made some integration packages with it for importing data from a D3 multi-value database into MS SQL 2012, but I seem to be having some trouble with the mapping.
The VOYAGES table has some commentX fields in the D3 application that are acting quite unwieldy, and the INSERT fails after a certain number of rows with the following message:
>Error: 0xC0047062 at INSERT, mvNET Source[354]: System.Exception: Error #8: dataReader[0] = LTPAC002 ci.BufferColumnIndex = 52, ci.ColumnName = COMMGROUP(Error #8: dataReader[0] = LTPAC002 ci.BufferColumnIndex = 52, ci.ColumnName = COMMGROUP(The value is too large to fit in the column data area of the buffer.))
at mvNETDataSource.mvNETSource.PrimeOutput(Int32 outputs, Int32[] outputIDs, PipelineBuffer[] buffers)
at Microsoft.SqlServer.Dts.Pipeline.ManagedComponentHost.HostPrimeOutput(IDTSManagedComponentWrapper100 wrapper, Int32 outputs, Int32[] outputIDs, IDTSBuffer100[] buffers, IntPtr ppBufferWirePacket)
Error: 0xC0047038 at INSERT, SSIS.Pipeline: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED.The PrimeOutput method on mvNET Source returned error code 0x80131500.The component returned a failure code when the pipeline engine called PrimeOutput().The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing.There may be error messages posted before this with more information about the failure.
"The value is too large to fit in the column data area of the buffer." I tried changing the input/output types but can't seem to get it right.
In the SQL table the columns are of type ntext.
In the .dtsx job the data type for the columns is Unicode String [DT_WSTR] with length 4000; I guess these are auto-detected.
The import worked for other D3 files like this, so I'm not sure why it fails for these comment fields.
Running the query in the mv.NET Data Manager (on the D3 server) times out after 240 seconds, so maybe this is the underlying issue?
Any ideas how to proceed? Thank you ~
The most likely reason is that the COMMGROUP column does not have the correct data type, or some record in the source does not fit in the output type.
To find the offending record, set the failing component's error output to Redirect Row and write the redirected rows to a txt/csv/tsv file,
then check the data.
The exception is being thrown from mv.NET, so I suggest you call Bluefinity support (or ask your reseller to call them) and ask about this. You're paying for support, might as well use it. Those programs shouldn't be allowed to throw exceptions like that.
D3 doesn't export Unicode, so that might be one issue. But if the Data Manager times out, then I suspect something is wrong in the connectivity into D3. Open a Connection Monitor from the Session Monitor and watch the connection when you make the request. I'm guessing it's either hanging or, more probably, it's falling into BASIC Debug.
Make sure all D3-side programs related to this are either all Flash-compiled, or all Not Flashed. Your app code will fall into Debug if it's not Flashed but MVNET.BP is.
If it's your program that's in Debug, fix it. If you're not sure which program it is, LIST-RUNTIME-ERRORS in DM.
If it's a MVNET.BP program, again work with Bluefinity. If you are using MVSP for connectivity then the Connection Monitor may be useless, you'll need to change that to an IP (Telnet) connection to see the raw data exchange.

Grid is in invalid state to perform this operation

How can I solve this problem when using Ignite?
java.lang.IllegalStateException: Grid is in invalid state to perform this operation. It either not started yet or has already being or have stopped [gridName=grid.cfg, state=STOPPED]
at org.apache.ignite.internal.GridKernalGatewayImpl.illegalState(GridKernalGatewayImpl.java:190)
at org.apache.ignite.internal.GridKernalGatewayImpl.readLock(GridKernalGatewayImpl.java:90)
at org.apache.ignite.internal.cluster.ClusterGroupAdapter.guard(ClusterGroupAdapter.java:170)
at org.apache.ignite.internal.cluster.ClusterGroupAdapter.forPredicate(ClusterGroupAdapter.java:367)
at org.apache.ignite.internal.cluster.ClusterGroupAdapter.forServers(ClusterGroupAdapter.java:392)
at org.apache.ignite.internal.IgniteKernal.services(IgniteKernal.java:363)
at org.apache.ignite.IgniteSpringBean.services(IgniteSpringBean.java:156)
This exception means that you're trying to use Ignite after it was already stopped. You should check the logs for any exceptions and also review your code - there could be a mistake or a race condition.
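For illustration, here is a minimal sketch of how the STOPPED state is typically reached and one way to guard against it. The config file name is an assumption; with IgniteSpringBean, as in your stack trace, the usual cause is calling the bean before the Spring context is fully initialized or after it has been destroyed.
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteState;
import org.apache.ignite.Ignition;

public class StoppedGridExample {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start("grid-config.xml"); // assumed config file
        String name = ignite.name();

        // Something stops the grid elsewhere: a shutdown hook, Ignition.stopAll(),
        // or the Spring context destroying an IgniteSpringBean.
        Ignition.stop(name, true);

        // Any further API call on the old reference now throws
        // "Grid is in invalid state to perform this operation ... state=STOPPED".
        // Guard long-lived references by checking the state first:
        if (Ignition.state(name) == IgniteState.STARTED)
            ignite.services();
        else
            System.out.println("Ignite is stopped; start or obtain a new instance.");
    }
}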

Invalid operation result set is closed errorcode 4470 sqlstate null - DB2 data extract

I am running a very simple query and trying to extract the results to a text file. The entire query is essentially what is below: I am selecting everything from one table with a single WHERE clause limiting the data to one month's worth. After it has extracted around 1.2 GB, this error shows up. Is there any way I can work around this other than extracting smaller date ranges? I am trying to pull a couple of years' worth of data, so if I can only get a few days at a time it will take a lot of manual work.
I am currently using the free trial of a DB2 query tool, RazorSQL, if that makes a difference; I can probably purchase different software if it would help. I am trying to get IBM's tool, but for some reason it freezes during the download, so I am still working on that. I have searched about this error, but everything I see seems much more complex than what I am doing and I can't tell if it applies or not. Thanks in advance.
select *
from MyTable
where date_col between date '2014-01-01' and date '2014-01-31'
I stumbled on this error too and found out it is related to the db2jcc.jar (type 4) driver.
Excerpt: If there are no items left in the result set (or none to begin with), the result set is closed automatically, hence the exception. The suggestion is to handle it in the application; in my case I started checking if (rs.next()), but otherwise there is a workaround. Check out the source link below for how you can set some properties on the data source and avoid the exception.
Source :
"Invalid operation: result set is closed" error with Data Server Driver for JDBC
In my case, I was missing some properties in WAS; after adding allowNextOnExhaustedResultSet the issue was fixed.
1. Log in to the WebSphere Application Server administration console.
2. Select Resources > JDBC > Data sources > Application Center DataSource name > Custom properties and click New.
3. In the Name field, enter allowNextOnExhaustedResultSet.
4. In the Value field, type 1.
5. Change the type to java.lang.Integer.
6. Click OK.
Sometimes you also need to check whether the resultSetHoldability property exists. See here for details.
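Outside WAS, the same property can be passed straight to the JCC type 4 driver as a connection property. A minimal sketch with placeholder URL and credentials; the value 1 corresponds to "yes" in the driver's constants, and you should verify the property is supported at your driver level.
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class Db2ExhaustedResultSet {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "dbuser");          // placeholder credentials
        props.setProperty("password", "secret");
        // With this set, rs.next() on an exhausted (auto-closed) result set
        // returns false instead of throwing "result set is closed" (ERRORCODE -4470).
        props.setProperty("allowNextOnExhaustedResultSet", "1");

        String url = "jdbc:db2://dbhost:50000/MYDB";  // placeholder host/port/database
        try (Connection conn = DriverManager.getConnection(url, props)) {
            // run the extract query here
        }
    }
}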
I also encountered this failure when upgrading from the JDBC type 2 driver (db2java.zip) to the JDBC type 4 driver (db2jcc4.jar).
Statement statement = results.getStatement();
if (statement != null)
{
    connection = statement.getConnection(); // ** failed here
    statement.close();
}
The solution was to check whether the statement had been closed, as follows.
Changed to:
Statement statement = results.getStatement();
if (statement != null && !statement.isClosed())
{
    connection = statement.getConnection();
    statement.close();
}
Creating the property below with type Integer worked for me:
allowNextOnExhaustedResultSet
I had the same issue on WAS 7, so I had to add and change a few things in the Admin Console.
This TeamWorksRuntimeException should be fixed by applying APAR JR50863, which is available on top of BPM V8.5.5 or included in BPM V8.5 refresh pack 6.
If the APAR does not solve the problem, try the following workaround:
Log in to the WebSphere Application Server admin console
Select Resources > JDBC > Data sources > DataSource name (TeamWorksDB) > Custom properties and click New
In the Name field, enter downgradeHoldCursorsUnderXa
In the Value field, type true
Change the type to java.lang.Boolean
Click OK to save your changes
Select custom property resultSetHoldability
In the Value field, type 1
Click OK to save your changes
Source of the answer: https://developer.ibm.com/answers/questions/194821/invalid-operation-result-set-is-closed-errorcode-4/
Restarting the app may fix the problem if the connection pool has lost its session to Db2. If you are using Tomcat, the connection pool property 'testOnBorrow' may re-establish the connection to Db2.
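If the pool is Tomcat's JDBC pool, validate-on-borrow can also be configured programmatically rather than in server.xml. A hedged sketch using the tomcat-jdbc API with a DB2 validation query; URL and credentials are placeholders.
import org.apache.tomcat.jdbc.pool.DataSource;
import org.apache.tomcat.jdbc.pool.PoolProperties;

public class Db2PoolConfig {
    public static DataSource dataSource() {
        PoolProperties p = new PoolProperties();
        p.setUrl("jdbc:db2://dbhost:50000/MYDB");           // placeholder URL
        p.setDriverClassName("com.ibm.db2.jcc.DB2Driver");
        p.setUsername("dbuser");
        p.setPassword("secret");
        // Validate each connection as it is borrowed so stale sessions are replaced.
        p.setTestOnBorrow(true);
        p.setValidationQuery("SELECT 1 FROM SYSIBM.SYSDUMMY1");
        p.setValidationInterval(30000);                     // re-validate at most every 30s

        DataSource ds = new DataSource();
        ds.setPoolProperties(p);
        return ds;
    }
}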

Restart my delta loading after deleting the InfoPackage in the PSA by mistake

I have run into an issue here; can someone please help me resolve it?
I was trying to extract some data to DS 0FI_AP_6...
Then in the InfoPackage Monitor I can see:
-->Requests (messages): Everything OK
-->Extraction (messages): Everything OK
-->Transfer (IDocs and TRFC): Missing messages or warnings
-->Info IDoc 2 : sent, not arrived ; IDoc ready for dispatch (ALE service)
Data Package 1 : 23752 Records arrived in BW
Data Package 2 : 15216 Records arrived in BW
Request IDoc : Application document posted
Info IDoc 1 : Application document posted
Info IDoc 3 : Application document posted
Info IDoc 4 : Application document posted
-->Processing (data packet): Everything OK
Data Package 1 ( 38672 Records ) : Everything OK
In the Status menu I have a message like:
Missing data packages for PSA Table
Diagnosis
Data packets are missing from PSA Table . BI processing does not
return any errors. The data transport from the source system to BI was
probably incorrect.
Procedure
Check the tRFC overview in the source system.
You access this log using the wizard or following the menu path
"Environment -> Transact. RFC -> Source System".
Error handling:
If the tRFC is incorrect, resolve the errors listed there.
Check that the source system is connected properly to BI. In
particular, check the remote user authorizations in BI.
Please suggest how to resolve this issue.
Thanks in advance for your help; a quick reply is much appreciated.
The worst thing is that I deleted the InfoPackage in the PSA by mistake.
Normally, if I repeated the process, the delta load would be fine, but now the delta load keeps failing.
So, gurus:
1. How can I restart my delta loading correctly?
2. I want to modify the timestamp in the delta table, but how do I do it?
Go to T-Code RSA7 in the source system. This will tell you the date/timestamp that the delta is set to. If the date was changed to a range that no longer works, then you will need to re-initialize the datasource on the BW system side. However, the delta date may still be fine, because it may never have been changed when you first tried your load due to the connection issues.
You can create a new infopackage and set the update to Initialize Datasource with Data Transfer. This will essentially run a full load from the datasource and then reset the delta pointer date/timestamp to when you ran it. This way you will capture all the data that you needed and anything that was already in the PSA should be overwritten.
Also note that you should delete or set the request status to red on the previous request that may contain bad data in the PSA.
From the original error, it seems like you are having an RFC connection issue between the datasource and BW. Contact your BASIS support and have them check the connection to make sure it is good. To ensure that your datasource is extracting properly, you can run T-Code RSA3 on it in the source system. This will confirm that the extraction of data is working properly.