Failed to invoke chaincode name:"scbcch", error: timeout expired while executing transaction

I am trying to fetch data from the blockchain using a rich query in chaincode. I have inserted around 250,000 records via invoke transactions and am trying to fetch the data with a query. When I run the chaincode and check the peer logs, I see the error below.
failed to invoke chaincode name:"scbcch" , error: timeout expired while executing transaction
When I query a smaller data set, the code works fine without these errors.
I am using Hyperledger Fabric 1.4.
Here is my query code:
queryString := fmt.Sprintf(
	`{"selector":{"_id":{"$gt":null},"$and":[`+
		`{"terminationReportID":{"$ne":"%s"}},`+
		`{"terminationReportFlag":{"$eq":"%s"}},`+
		`{"effectiveDateOfAction":{"$gt":"%s"}},`+
		`{"importDate":{"$eq":"%s"}}]},`+
		`"fields":["bankID","effectiveDateOfAction","costCentre"],`+
		`"use_index":["_design/indexTerminationReportDoc","indexTerminationReportName"]}`,
	"null", "Yes", "2018-10-31", lastImportDatekey)
queryResultss11, errtr := getQueryResultForQueryString(stub, queryString)
And my index definition is:
{"index":{"fields":["terminationReportID","terminationReportFlag","effectiveDateOfAction","importDate"]},"ddoc":"indexTerminationReportDoc", "name":"indexTerminationReportName","type":"json"}
Can anyone help me figure out and resolve the issue? Is there anything I have to change in my index? I have been stuck on this for more than three days.

The issue is the chaincode execution timeout. You can customize it in the docker-compose file of your peers:
CORE_CHAINCODE_EXECUTETIMEOUT=80s
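For example, in a docker-compose based peer definition, the variable goes under the peer service's environment section (the service name below is a placeholder for your own peer):

peer0.org1.example.com:
  environment:
    - CORE_CHAINCODE_EXECUTETIMEOUT=80s

Raising the timeout only buys the query more time, though. With around 250,000 records it is also worth checking whether CouchDB actually uses your index: you can POST the same selector to CouchDB's _explain endpoint and inspect the "index" field of the response. A sketch, assuming Fabric's default <channel>_<chaincode> naming for the state database:

curl -X POST http://localhost:5984/mychannel_scbcch/_explain \
  -H "Content-Type: application/json" \
  -d @query.json

Here query.json holds the selector from the chaincode. Also note that Mango's $ne operator generally cannot be satisfied from an index, so that clause is applied as an in-memory post-filter regardless of the index definition.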

Datastream Troubleshoot: "An unknown error occurred. Please try again. If the error persists, contact Google support"

We are trying to replicate data from AlloyDB to BigQuery using Datastream.
We get "An unknown error occurred. Please try again. If the error persists, contact Google support."
In the Datastream console's objects list, we see all source tables with Object Status "Failed" and Backfill status "Completed".
In BigQuery we see only a subset of the tables (not all of the "Completed" objects were synced).
In the Logs Explorer I can see this error on the BigQuery side:
error: {
    code: 11
    message: "Unsupported primary key column either does not exist or is a pseudocolumn at [1:401]"
}
The column referred to in the error is of type enum.
The desired outcome is to have all the AlloyDB tables replicated into BigQuery.
The error message is not very informative...
What does it mean?
What would be the best way to go about troubleshooting this?
We're actively working on making these error messages more informative, and improvements are continuously being rolled out as we identify more edge cases. Assuming you followed all the steps in the documentation, you may need to open a ticket with support for further investigation. If a support ticket isn't an option, you can still report the issue using the public issue tracker.
I just had this same issue, but connecting to PostgreSQL in AWS RDS:
PostgreSQL 10 introduced SCRAM-SHA-256 password encryption. Google Datastream still expects MD5 password encryption, or it will generate an "unknown error" in the logs and fail the backfills.
You'll need to update your postgresql.conf (or RDS Cluster Parameter Group if you're using AWS like me):
password_encryption = 'MD5'
Restart the database and make sure the parameter has changed with:
SHOW password_encryption;
Reset the password of your users:
ALTER USER "{username}" WITH PASSWORD '{password}';
More info from the PostgreSQL docs: https://www.postgresql.org/docs/current/auth-password.html
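On AWS RDS you cannot edit postgresql.conf directly, so the change goes through the parameter group instead. A sketch with the AWS CLI, assuming a cluster parameter group named my-cluster-params:

aws rds modify-db-cluster-parameter-group \
    --db-cluster-parameter-group-name my-cluster-params \
    --parameters "ParameterName=password_encryption,ParameterValue=md5,ApplyMethod=pending-reboot"

After the reboot, confirm the value with SHOW password_encryption; and reset the passwords as above so they are re-hashed with MD5.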

Getting 'ERR EXEC without MULTI' when using RedisIO from the Apache Beam API.

I'm reading data from BigQuery and writing to Redis using RedisIO from the Apache Beam API. Below is the code snippet.
pipeline.apply("Read Data From BigQuery",
        BigQueryIO.readTableRows()
            .withoutValidation()
            .fromQuery(""))
    .apply("Convert Table rows into Redis Entity",
        ParDo.of(new RedisEntity()))
    .apply("Write to Redis",
        RedisIO.write().withEndpoint("localhost", 6379));
When I execute the code, about 2,000 records are written to Redis, and after that I get the error below.
redis.clients.jedis.exceptions.JedisDataException: EXEC without MULTI
at redis.clients.jedis.Pipeline.exec(Pipeline.java:139)
at org.apache.beam.sdk.io.redis.RedisIO$Write$WriteFn.processElement(RedisIO.java:419)
Kindly advise if I'm missing something or if there is a better way to do this.
This seems like a bug in RedisIO. I have submitted an issue to Beam and opened a pull request to fix it; see if I guessed it correctly: https://issues.apache.org/jira/browse/BEAM-5714
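If you need something working before the fix is released, one option is to skip RedisIO and write from a plain DoFn with a Jedis client. A minimal sketch, assuming the elements arrive as string key/value pairs (the class name WriteToRedisFn is illustrative, not part of Beam or the original code):

import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.values.KV;
import redis.clients.jedis.Jedis;

public class WriteToRedisFn extends DoFn<KV<String, String>, Void> {
    // Jedis is not serializable, so the client is created per worker in @Setup.
    private transient Jedis jedis;

    @Setup
    public void setup() {
        jedis = new Jedis("localhost", 6379);
    }

    @ProcessElement
    public void processElement(ProcessContext c) {
        // A plain SET per element; no MULTI/EXEC pipeline is involved.
        jedis.set(c.element().getKey(), c.element().getValue());
    }

    @Teardown
    public void teardown() {
        if (jedis != null) {
            jedis.close();
        }
    }
}

The final step of the pipeline would then be .apply("Write to Redis", ParDo.of(new WriteToRedisFn())) in place of the RedisIO.write() transform.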

Error when connecting Hive with Kibi?

I am using the kibi-community-demo-full-4.6.4-linux-x64 version.
In the datasource:
"connection_string": "jdbc:hive://localhost:10000/root",
"libpath": "/home/pare/Downloads/jar/",
"drivername": "org.apache.hadoop.hive.jdbc.HiveDriver",
"libs": "hive-jdbc-0.11.0.jar,hive-metastore-0.11.0.jar,libthrift-0.9.1.jar,hive-service-0.13.1.jar,hive-jdbc-1.2.1.2.3.2.0-2950-standalone.jar,hadoop-common-2.7.1.2.3.2.0-2950.jar",
After that, when I write a query in the queries editor, it shows an error like:
Queries Editor: Error 400 Bad Request: Error running static method java.lang.IllegalArgumentException: Bad URL format at org.apache.hive.jdbc.Utils.parseURL(Utils.java:185) at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:84)
What is this error? Can anyone explain how to solve it?
I was able to connect after changing the jar versions and also changing the driver name to "org.apache.hive.jdbc.HiveDriver".
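For reference, the org.apache.hive.jdbc.HiveDriver driver expects the jdbc:hive2:// URL prefix; the parseURL failure above comes from the old jdbc:hive:// form, which belongs to the deprecated org.apache.hadoop.hive.jdbc.HiveDriver. A sketch of the corrected datasource, keeping the original host, port, and libpath:

"connection_string": "jdbc:hive2://localhost:10000/root",
"libpath": "/home/pare/Downloads/jar/",
"drivername": "org.apache.hive.jdbc.HiveDriver",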

Getting exception while reading data from blob in Azure

While I am trying to read the list of blob data on Azure, I am getting the following error:
Function evaluation disabled because a previous function evaluation timed out. You must continue execution to reenable function evaluation.
How to resolve this?
Please see the following link; your code likely has an endless loop: https://msdn.microsoft.com/en-us/library/ms234762.aspx
Note that this message comes from the Visual Studio debugger rather than from Azure itself.

DB2 error code -991

I'm getting a return code of -991 upon running a DB2 batch COBOL program.
The program is attempting to fetch 65 rows within a cursor structure.
I cannot find anything on this particular error. Does anyone know what it means?
You probably want to look at the DB2 documentation for error code -991, which says:
-991
CALL ATTACH WAS UNABLE TO ESTABLISH AN IMPLICIT CONNECT OR OPEN TO DB2. RC1= rc1 RC2= rc2
Explanation
Call attach attempted to perform an implicit connect and open as the result of an SQL statement. The connect or open failed with the returned values.
rc1
The value returned in FRBRC1 for the failed CONNECT or OPEN request.
rc2
The value returned in FRBRC2 for the failed CONNECT or OPEN request.
System action
The statement cannot be processed.
Programmer response
Verify that the application intended to use the call attachment facility (CAF) as the mechanism to connect to DB2®. For functions or stored procedures running in the WLM-established stored procedure address space the application must be link-edited with or dynamically allocate the RRS attachment language interface module (DSNRLI), not CAF.
SQLSTATE
57015
Hopefully that means something to you :)
In case you're still stuck, my google-fu is strong.
SQLCODE -991, Error: CALL ATTACH WAS UNABLE TO ESTABLISH AN IMPLICIT CONNECT OR OPEN TO DB2. RC1= RC2=
From http://theamericanprogrammer.com/programming/sqlcodes.shtml