Does Ignite support read-through via the REST API?

Ignite can be used to cache data from other databases.
When we request a value through the client and it is not in the cache, Ignite looks it up in the database, returns it, and stores it in the cache.
But when we request it through the REST API and the value is not in the cache, Ignite simply returns null and does not look it up in the database.
Is there a setting that enables read-through when the cache is accessed via the REST API, or is it supported only when the cache is accessed through clients?
The REST API does not find the value:
http://127.0.0.1:8080/ignite?cmd=get&key=33&cacheName=PersonCache&keyType=long&valueType=long
{"successStatus":0,"affinityNodeId":"33fa60c6-6dfe-4d3a-ae95-2c08c9e56f3f","sessionToken":null,"error":null,"response":null}
When accessed through the client, the value is likewise not in the cache, but it is pulled from the database:
java -jar ignite-loader.jar 127.0.0.1 PersonCache 33
Connected to Ignite on: 127.0.0.1
Connected to table: PersonCache
Cache size before operation: 2
Result query key 33 is a 3
Time elapsed query: 812
Cache size after operation: 3
Only now does the REST API find it:
http://127.0.0.1:8080/ignite?cmd=get&key=33&cacheName=PersonCache&keyType=long&valueType=long
{"successStatus":0,"affinityNodeId":"33fa60c6-6dfe-4d3a-ae95-2c08c9e56f3f","sessionToken":null,"error":null,"response":"3"}

The issue is caused by a bug in Ignite: skipStore=true is always applied to GET requests made via the REST API, regardless of the configured options. I've filed a ticket for that.
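For reference, read-through that works through the thick/thin clients is enabled on the cache configuration together with a cache store, roughly like this (a minimal Java sketch; PersonCacheStore is a placeholder for whatever CacheStore implementation backs PersonCache):

import javax.cache.configuration.FactoryBuilder;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;

public class ReadThroughConfig {
    public static void main(String[] args) {
        // PersonCacheStore stands in for the CacheStore implementation that loads rows from the database.
        CacheConfiguration<Long, Long> ccfg = new CacheConfiguration<>("PersonCache");
        ccfg.setReadThrough(true); // load missing keys from the store on a cache miss
        ccfg.setCacheStoreFactory(FactoryBuilder.factoryOf(PersonCacheStore.class));

        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Long, Long> cache = ignite.getOrCreateCache(ccfg);
            System.out.println(cache.get(33L)); // a miss here triggers PersonCacheStore.load(33L)
        }
    }
}

With this configuration, client-side gets go through the store on a miss; the REST path does not, because of the skipStore bug mentioned above.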

Related

Ignite with backup count zero

I have set the backup count of an Ignite cache to zero. I created two server nodes (say s1 and s2) and one client node (c1), set the cache mode to Partitioned, and inserted data into the cache. After stopping server s2 and trying to access the data, part of it is no longer available. If the backup count is 0, how can data be copied from one server node to another server node? Does Ignite do this automatically when a node is stopped?
The way Ignite manages this is with backups. If you set the backup count to zero, you have no resilience, and removing a node will result in data loss (unless you enable persistence). You can configure how Ignite responds to this situation with the partition loss policy.
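A minimal sketch of the cache configuration this implies (cache name and types are illustrative): with at least one backup, every partition has a copy on another node, and the partition loss policy controls how Ignite reacts once all copies of a partition are gone.

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.PartitionLossPolicy;
import org.apache.ignite.configuration.CacheConfiguration;

public class BackupConfig {
    public static void main(String[] args) {
        CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("myCache");
        ccfg.setCacheMode(CacheMode.PARTITIONED);
        ccfg.setBackups(1); // keep one backup copy of every partition on another node
        ccfg.setPartitionLossPolicy(PartitionLossPolicy.READ_WRITE_SAFE); // fail operations on lost partitions instead of serving empty results

        try (Ignite ignite = Ignition.start()) {
            ignite.getOrCreateCache(ccfg).put(1, "value"); // survives the loss of any single server node
        }
    }
}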

ERROR : FAILED: Error in acquiring locks: Error communicating with the metastore org.apache.hadoop.hive.ql.lockmgr.LockException

Getting "Error in acquiring locks" when trying to run count(*) on partitioned tables.
The table has 365 partitions; when the query is filtered to <= 350 partitions, it works fine.
When more partitions are included in the query, it fails with the error above.
We are working with Hive-managed ACID tables, with the following default values:
hive.support.concurrency=true // cannot be set to false; that throws "<table> is missing from the ValidWriteIdList config: null"; it should be true for ACID reads and writes
hive.lock.manager=org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager
hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager
hive.txn.strict.locking.mode=false
hive.exec.dynamic.partition.mode=nonstrict
Tried increasing/decreasing the values of the following in a beeline session:
hive.lock.numretries
hive.unlock.numretries
hive.lock.sleep.between.retries
hive.metastore.batch.retrieve.max={default 300} //changed to 10000
hive.metastore.server.max.message.size={default 104857600} // changed to 10485760000
hive.metastore.limit.partition.request={default -1} //did not change as -1 is unlimited
hive.lock.query.string.max.length={default 10000} //changed to higher value
We are using an HDI 4.0 Interactive Query (LLAP) cluster; the metastore is backed by the default SQL Server database provided with it.
The problem is NOT due to the service tier of the Hive metastore database.
Based on the symptom, it is most probably caused by too many partitions in one query.
I have met the same issue several times.
In hivemetastore.log, you should be able to see an error like this:
metastore.RetryingHMSHandler: MetaException(message:Unable to update transaction database com.microsoft.sqlserver.jdbc.SQLServerException: The incoming request has too many parameters. The server supports a maximum of 2100 parameters. Reduce the number of parameters and resend the request.
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:254)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1608)
at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.doExecutePreparedStatement(SQLServerPreparedStatement.java:578)
This is because, in the Hive metastore, each partition involved in the query requires up to 8 parameters to acquire a lock, so a query touching more than roughly 2100 / 8 ≈ 262 partitions can exceed SQL Server's parameter limit.
Some possible workarounds:
Decompose the query into multiple sub-queries that read from fewer partitions.
Reduce the number of partitions by setting different partition keys.
Remove partitioning if partition keys don't have any filters.
The following parameters manage the batch size for the INSERT queries generated by direct SQL. Their default value is 1000. Set both of them to 100 (as a good starting point) in the Custom hive-site section of the Hive configs via Ambari and restart ALL Hive-related components (including the Hive metastore).
hive.direct.sql.max.elements.values.clause=100
hive.direct.sql.max.elements.in.clause=100
We also faced the same error in HDInsight, and after making many configuration changes similar to yours, the only thing that worked was scaling our Hive metastore SQL DB server.
We had to scale it all the way up to a P2 tier with 250 DTUs for our workloads to run without these lock exceptions. As you may know, a higher tier and DTU count improve the SQL server's IOPS and response time, so we suspected that metastore performance was the root cause of these lock exceptions as our workloads grew.
The following link provides information about DTU-based performance tiers for SQL databases in Azure:
https://learn.microsoft.com/en-us/azure/sql-database/sql-database-service-tiers-dtu
Additionally, as far as I know, the default Hive metastore that gets provisioned when you choose not to provide an external DB at cluster creation is just an S1-tier DB, which is not suitable for high-capacity workloads. As a best practice, always provision your metastore external to the cluster and attach it at cluster provisioning time. This gives you the flexibility to connect the same metastore to multiple clusters (so that your Hive schema can be shared across them, e.g. Hadoop for ETL and Spark for processing / machine learning) and full control to scale the metastore up or down as needed.
The only way to scale the default metastore is by engaging Microsoft support.
We faced the same issue in HDInsight and solved it by upgrading the metastore.
The default metastore had only 5 DTUs, which is not recommended for production environments. So we migrated to a custom metastore, spun up an Azure SQL Server (P2 with 250 DTUs), and set the properties below:
hive.direct.sql.max.elements.values.clause=200
hive.direct.sql.max.elements.in.clause=200
The above values are set because SQL Server cannot process more than 2100 parameters. With more than 348 partitions you hit this issue, since each partition creates up to 8 parameters for the metastore (8 × 348 = 2784, which exceeds the 2100-parameter limit).

HSQLDB + SQuirreL: reading data by block

I'm running an instance of HSQLDB from inside a Java class: an instance of org.hsqldb.Server is initialized and configured to be in-memory only, with no other configuration; it is then filled with data that is accessible from outside the running JVM.
Using SQuirreL set to "Read on, Block size", I connect to the HSQLDB server and query for data: it seems that all rows returned by the query are loaded into client memory and then displayed block by block. With Oracle (for example), I see the client downloading only the displayed rows; the others are downloaded only when the list is scrolled down. Is it possible to force the HSQLDB client to act in the same way?
The query is performed using a java.sql.Statement object, which has a setFetchSize(n) method that indicates the number of rows to fetch at a time. HSQLDB supports this when it is used in Server mode: it returns the rows in chunks of the indicated fetch size.
The application program, in this case SQuirreL, should explicitly call setFetchSize(n) on the Statement object.
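For illustration, a minimal JDBC sketch of what the client needs to do (URL, credentials, and table name are placeholders for the in-memory server described above):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class FetchSizeExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:hsqldb:hsql://localhost/", "SA", "");
             Statement stmt = conn.createStatement()) {
            stmt.setFetchSize(100); // ask the driver to fetch 100 rows per round trip in Server mode
            try (ResultSet rs = stmt.executeQuery("SELECT * FROM big_table")) {
                while (rs.next()) {
                    // process the current row; the driver requests the next chunk as needed
                }
            }
        }
    }
}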

Using redis as an LRU cache for postgres

I have a Postgres 9.3 DB and I want to use Redis to cache calls to the DB (basically like memcached). I followed these docs, which means I have basically configured Redis to work as an LRU cache. But I am unsure what to do next. How do I tell Redis to track calls to the DB and cache their output? How can I tell that it's working?
In pseudocode:
see if redis has the record by 'record_type:record_id'
if so return the result
if not then query postgres for the record_id in the record_type table
store the result in redis by 'record_type:record_id'
return the result
This might have to be a custom adapter for the query engine that you are using.
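A minimal Java sketch of that cache-aside pattern, assuming the Jedis client and the PostgreSQL JDBC driver are on the classpath; the person table, its name column, and the connection details are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import redis.clients.jedis.Jedis;

public class RecordCache {
    private final Jedis redis;
    private final Connection db;

    public RecordCache(Jedis redis, Connection db) {
        this.redis = redis;
        this.db = db;
    }

    // Looks up a person's name by id, going to Postgres only on a cache miss.
    public String getPersonName(long id) throws SQLException {
        String key = "person:" + id; // 'record_type:record_id'
        String cached = redis.get(key);
        if (cached != null) {
            return cached; // cache hit
        }
        // Cache miss: query Postgres, then populate Redis.
        try (PreparedStatement ps = db.prepareStatement("SELECT name FROM person WHERE id = ?")) {
            ps.setLong(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                if (!rs.next()) {
                    return null; // not in the database either
                }
                String value = rs.getString(1);
                // With a maxmemory-policy such as allkeys-lru, Redis evicts old keys on its own,
                // so no explicit TTL is strictly required here.
                redis.set(key, value);
                return value;
            }
        }
    }

    public static void main(String[] args) throws SQLException {
        // Connection details are placeholders.
        try (Jedis redis = new Jedis("localhost", 6379);
             Connection db = DriverManager.getConnection("jdbc:postgresql://localhost:5432/mydb", "user", "password")) {
            RecordCache cache = new RecordCache(redis, db);
            System.out.println(cache.getPersonName(42)); // a second call for the same id is served from Redis
        }
    }
}

To verify it is working, you can watch cache hits and misses with redis-cli (e.g. MONITOR, or INFO stats for keyspace_hits/keyspace_misses).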

Connecting to an Oracle 11g database from WebSphere Message Broker 6

I am trying a simple INSERT from a Compute node in WebSphere Message Broker 6.
The data source name provided in the broker's odbc.ini file is specified in the data source property of the Compute node, and I have written the following ESQL code:
SET TABLE = 'MYTABLE';
SET MYVALUE = 'TESTVALUE';
INSERT INTO Database.TABLE VALUES(MYVALUE);
The connection URL is provided in tnsnames.ora. The URL is a cluster URL that points to 3 database instances.
When I run the query, I get an exception in the trace saying that the table or view does not exist.
But when I connect to the DB using any of the 3 direct URLs, I am able to see the table.
Note: the database is Oracle 11g.
Can anyone explain what is happening?
The problem was that my application was using the same DSN as my broker, and the username and password provided when the broker was created pointed to a different schema, which did not have the tables for my application.
The solution was to create a new DSN and use mqsisetdbparams to point it to the correct schema.
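For reference, the credentials for a DSN are associated with the broker via the mqsisetdbparams command, roughly like this (broker name, DSN, and credentials are placeholders):
mqsisetdbparams MYBROKER -n MYAPPDSN -u app_schema_user -p app_schema_pwd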