I'm using Apache Ignite 2.8.1 for .NET on a Windows 10 machine.
I am trying to use affinity collocation on a query-enabled cache. The entities I store in the cache have a primary key named "Id" and an affinity key named "PartnerId". Both keys are of type Int32. I'm defining the cache as follows:
new CacheConfiguration("BespokeCharge")
{
    KeyConfiguration = new[]
    {
        new CacheKeyConfiguration()
        {
            AffinityKeyFieldName = "PartnerId",
            TypeName = typeof(BespokeCharge).Name
        }
    }
};
Next I use the following code to add the data:
var cache = Ignite.GetCache<AffinityKey, BespokeCharge>("BespokeCharge");
cache.Put(new AffinityKey(entity.Id, entity.PartnerId), entity);
So far so good. Since I want to be able to use SQL to search for bespoke charges, I also add a QueryEntity configuration:
new CacheConfiguration("BespokeCharge",
    new QueryEntity(typeof(AffinityKey), typeof(BespokeCharge))
    {
        KeyFieldName = "Id",
        TableName = "BespokeCharge"
    })
{
    KeyConfiguration = new[]
    {
        new CacheKeyConfiguration()
        {
            AffinityKeyFieldName = "PartnerId",
            TypeName = typeof(BespokeCharge).Name
        }
    }
};
When I run the code, both Ignite and my app crash and the following error is logged:
JVM will be halted immediately due to the failure: [failureCtx=FailureContext [type=CRITICAL_ERROR, err=class o.a.i.i.processors.cache.persistence.tree.CorruptedTreeException: B+Tree is corrupted [pages(groupId, pageId)=[IgniteBiTuple [val1=1565129718, val2=844420635164729]], cacheId=-1278247946, cacheName=BESPOKECHARGES, indexName=BESPOKECHARGES_ID_ASC_IDX, msg=Runtime failure on row: Row#29db2fbe[ key: AffinityKey [idHash=8137191, hash=783909474, key=3, affKey=2], val: UtilityClick.BillValidation.Shared.InMemory.Model.BespokeCharge [idHash=889383506, hash=-399638125, BespokeChargeTypeId=4, ChargeValue=100.0000, ChargeValueIncCommission=100.0000, Id=3, PartnerId=2, QuoteRecordId=5] ][ 4, 100.0000, 100.0000, , 2, 5 ]]]]
When I tried to define the QueryEntity with a key type of int instead of AffinityKey, I got a different error with the same outcome, a crash:
JVM will be halted immediately due to the failure: [failureCtx=FailureContext [type=CRITICAL_ERROR, err=java.lang.ClassCastException: class o.a.i.cache.affinity.AffinityKey cannot be cast to class java.lang.Integer (o.a.i.cache.affinity.AffinityKey is in unnamed module of loader 'app'; java.lang.Integer is in module java.base of loader 'bootstrap')]]
What am I doing wrong? Thank you for your help!
KeyFieldName = "Id" setting is the problem. It sets Id field to be used as a cache key, which is int, but then we use AffinityKey as a cache key, which causes a type mismatch.
In this case I don't think we need KeyFieldName at all, removing it fixes the problem, and SELECT queries should not be affected by this change.
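For reference, a minimal sketch of the corrected configuration (same types as in the question, otherwise unchanged); it only drops KeyFieldName so the cache key stays AffinityKey:

var cacheCfg = new CacheConfiguration("BespokeCharge",
    new QueryEntity(typeof(AffinityKey), typeof(BespokeCharge))
    {
        // KeyFieldName is intentionally omitted: the cache key is AffinityKey, not Id
        TableName = "BespokeCharge"
    })
{
    KeyConfiguration = new[]
    {
        new CacheKeyConfiguration()
        {
            AffinityKeyFieldName = "PartnerId",
            TypeName = typeof(BespokeCharge).Name
        }
    }
};

The existing cache.Put(new AffinityKey(entity.Id, entity.PartnerId), entity) call then matches the configured key type.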
I work at a place where Scalding writes are augmented with a specific API to track dataset metadata. When converting from normal writes to these special writes, there are some intricacies with respect to Key/Value, TSV/CSV, Thrift ... datasets. I would like to verify that the binary file is the same before the conversion and after the conversion to the special API.
Since I cannot share the specific API for the metadata-inclusive writes, I'll only ask: how can I write a unit test for the .write method on a TypedPipe?
implicit val timeZone: TimeZone = DateOps.UTC
implicit val dateParser: DateParser = DateParser.default
implicit def flowDef: FlowDef = new FlowDef()
implicit def mode: Mode = Local(true)

val fileStrPath = root + "/test"
println("writing data to " + fileStrPath)

TypedPipe
  .from(Seq[Long](1, 2, 3, 4, 5))
  // .map((x: Long) => { println(x.toString); System.out.flush(); x })
  .write(TypedTsv[Long](fileStrPath))
  .forceToDisk
The above doesn't seem to write anything to local (OSX) disk.
So I wonder if I need to use a MiniDFSCluster, something like this:
def setUpTempFolder: String = {
  val tempFolder = new TemporaryFolder
  tempFolder.create()
  tempFolder.getRoot.getAbsolutePath
}

val root: String = setUpTempFolder
println(s"root = $root")

val tempDir = Files.createTempDirectory(setUpTempFolder).toFile

val hdfsCluster: MiniDFSCluster = {
  val configuration = new Configuration()
  configuration.set(MiniDFSCluster.HDFS_MINIDFS_BASEDIR, tempDir.getAbsolutePath)
  configuration.set("io.compression.codecs", classOf[LzopCodec].getName)
  new MiniDFSCluster.Builder(configuration)
    .manageNameDfsDirs(true)
    .manageDataDfsDirs(true)
    .format(true)
    .build()
}
hdfsCluster.waitClusterUp()

val fs: DistributedFileSystem = hdfsCluster.getFileSystem
val rootPath = new Path(root)
fs.mkdirs(rootPath)
However, my attempts to get this MiniCluster to work haven't panned out either; somehow I need to link the MiniCluster with the Scalding write.
Note: The Scalding JobTest framework for unit testing isn't going to work, because the actual data written is sometimes wrapped in a bijection codec or set up with case class wrappers before the writes made by the metadata-inclusive write APIs.
Any ideas how I can write a local file (without using the Scalding REPL) with either Scalding alone or a MiniCluster? (If using the latter, I need a hint on how to read the file back.)
Answering my own question: there is an example of how to use a MiniCluster for exactly this kind of reading from and writing to HDFS, which will let me cross-read my different writes and examine them. It is in the tests for Scalding's TypedParquet type.
HadoopPlatformJobTest is an extension of JobTest that uses a MiniCluster.
With some hand-waving over the details in the link, the bulk of the code is this:
"TypedParquetTuple" should {
"read and write correctly" in {
import com.twitter.scalding.parquet.tuple.TestValues._
def toMap[T](i: Iterable[T]): Map[T, Int] = i.groupBy(identity).mapValues(_.size)
HadoopPlatformJobTest(new WriteToTypedParquetTupleJob(_), cluster)
.arg("output", "output1")
.sink[SampleClassB](TypedParquet[SampleClassB](Seq("output1"))) {
toMap(_) shouldBe toMap(values)
}
.run()
HadoopPlatformJobTest(new ReadWithFilterPredicateJob(_), cluster)
.arg("input", "output1")
.arg("output", "output2")
.sink[Boolean]("output2")(toMap(_) shouldBe toMap(values.filter(_.string == "B1").map(_.a.bool)))
.run()
}
}
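To adapt that pattern to the TypedTsv write from the question, a minimal sketch could look like the following. WriteLongsJob is a hypothetical job name, and cluster is assumed to come from the platform test trait (HadoopPlatformTest) that Scalding's platform tests mix in:

import com.twitter.scalding.{Args, Job, TypedPipe, TypedTsv}
import com.twitter.scalding.platform.HadoopPlatformJobTest

// Hypothetical job that performs the write under test.
class WriteLongsJob(args: Args) extends Job(args) {
  TypedPipe
    .from(Seq[Long](1, 2, 3, 4, 5))
    .write(TypedTsv[Long](args("output")))
}

// Run the job against the MiniCluster, then read the sink back and assert on it.
HadoopPlatformJobTest(new WriteLongsJob(_), cluster)
  .arg("output", "longs-output")
  .sink[Long](TypedTsv[Long]("longs-output")) { written =>
    written.toSet shouldBe Set[Long](1, 2, 3, 4, 5)
  }
  .run()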
I am trying to read a file with a list of IP addresses and another one with domains, as a proof of concept of the Input Framework defined in https://docs.zeek.org/en/stable/frameworks/input.html
I've prepared the following bro scripts:
reading.bro:
type Idx: record {
    ip: addr;
};

type Idx: record {
    domain: string;
};

global ips: table[addr] of Idx = table();
global domains: table[string] of Idx = table();

event bro_init() {
    Input::add_table([$source="read_ip_bro", $name="ips",
                      $idx=Idx, $destination=ips, $mode=Input::REREAD]);
    Input::add_table([$source="read_domain_bro", $name="domains",
                      $idx=Idx, $destination=domains, $mode=Input::REREAD]);
    Input::remove("ips");
    Input::remove("domains");
}
And the bad_ip.bro script, which loads the previous one and checks whether an IP is in the blacklist:
bad_ip.bro
@load reading.bro
module HTTP;
event http_reply(c: connection, version: string, code: count, reason: string)
{
    if ( c$id$orig_h in ips )
        print fmt("A malicious IP is connecting: %s", c$id$orig_h);
}
However, when I run bro, I get the error:
error: Input stream ips: Table type does not match index type. Need type 'string':string, got 'addr':addr
Segmentation fault (core dumped)
You cannot assign a string type to an addr type. To do the conversion, you must use the utility function to_addr(). Of course, it would be wise to verify that the string contains a valid address first. For example:
if ( is_valid_ip(inputString) )
    inputAddr = to_addr(inputString);
else
    print "addr expected, got a string";
A follow-up to this:
one SCDF source, 2 processors but only 1 processes each item
The 2 processors (del-1 and del-2) in the picture are receiving the same data within milliseconds of each other. I'm trying to rig this so del-2 never receives the same thing as del-1 and vice versa. So obviously I've got something configured incorrectly but I'm not sure where.
My processor has the following application.properties
spring.application.name=${vcap.application.name:sample-processor}
info.app.name=#project.artifactId#
info.app.description=#project.description#
info.app.version=#project.version#
management.endpoints.web.exposure.include=health,info,bindings
spring.autoconfigure.exclude=org.springframework.boot.autoconfigure.security.servlet.SecurityAutoConfiguration
spring.cloud.stream.bindings.input.group=input
Is "spring.cloud.stream.bindings.input.group" specified correctly?
Here's the processor code:
@Transformer(inputChannel = Processor.INPUT, outputChannel = Processor.OUTPUT)
public Object transform(String inputStr) throws InterruptedException {
    ApplicationLog log = new ApplicationLog(this, "timerMessageSource");
    String message = " I AM [" + inputStr + "] AND I HAVE BEEN PROCESSED!!!!!!!";
    log.info("SampleProcessor.transform() incoming inputStr=" + inputStr);
    return message;
}
Is the @Transformer annotation the proper way to link this bit of code with "spring.cloud.stream.bindings.input.group" from application.properties? Are there any other annotations necessary?
Here's my source:
private String format = "EEEEE dd MMMMM yyyy HH:mm:ss.SSSZ";

@Bean
@InboundChannelAdapter(value = Source.OUTPUT, poller = @Poller(fixedDelay = "1000", maxMessagesPerPoll = "1"))
public MessageSource<String> timerMessageSource() {
    ApplicationLog log = new ApplicationLog(this, "timerMessageSource");
    String message = new SimpleDateFormat(format).format(new Date());
    log.info("SampleSource.timeMessageSource() message=[" + message + "]");
    return () -> new GenericMessage<>(new SimpleDateFormat(format).format(new Date()));
}
I'm confused about value = Source.OUTPUT. Does this mean my processor needs to be named differently?
Is the inclusion of @Poller causing me a problem somehow?
This is how I define the 2 processor streams (del-1 and del-2) in SCDF shell:
stream create del-1 --definition ":split > processor-that-does-everything-sleeps5 --spring.cloud.stream.bindings.applicationMetrics.destination=metrics > :merge"
stream create del-2 --definition ":split > processor-that-does-everything-sleeps5 --spring.cloud.stream.bindings.applicationMetrics.destination=metrics > :merge"
Do I need to do anything differently there?
All of this is running in Docker/K8s.
RabbitMQ is given by bitnami/rabbitmq:3.7.2-r1 and is configured with the following props:
RABBITMQ_USERNAME: user
RABBITMQ_PASSWORD: <redacted>
RABBITMQ_ERL_COOKIE: <redacted>
RABBITMQ_NODE_PORT_NUMBER: 5672
RABBITMQ_NODE_TYPE: stats
RABBITMQ_NODE_NAME: rabbit#localhost
RABBITMQ_CLUSTER_NODE_NAME:
RABBITMQ_DEFAULT_VHOST: /
RABBITMQ_MANAGER_PORT_NUMBER: 15672
RABBITMQ_DISK_FREE_LIMIT: "6GiB"
Are any other environment variables necessary?
I have a cache which stores BinaryObject values in a cluster (2 nodes). The Ignite version is 2.1.0.
If I don't use any StreamReceiver (including StreamTransformer), there are no problems when adding lots of BinaryObject data with the following code:
IgniteDataStreamer<Long, BinaryObject> ds = ignite.dataStreamer(CACHE_NAME);
SecureRandom random = new SecureRandom();
long i = 0;
long count = 1000000;
while (i++ < count) {
    builder.setField("id", i);
    builder.setField("name", "Test" + i);
    builder.setField("age", random.nextInt(30));
    builder.setField("score", random.nextDouble() * 100d);
    builder.setField("birthday", new Date());
    ds.addData(i, builder.build());
    if (i % 10000 == 0) {
        System.out.println(i + " added...");
    }
}
But now I want to modify my BinaryObject data value before adding it, so I tried a StreamTransformer like this:
ds.receiver(new StreamTransformer<Long, BinaryObject>() {
    @Override
    public Object process(MutableEntry<Long, BinaryObject> entry, Object... arguments)
            throws EntryProcessorException {
        Long key = entry.getKey();
        BinaryObject value = entry.getValue();
        BinaryObjectBuilder builder = value.toBuilder();
        // want to change the value of the "name" field
        builder.setField("name", "Modify" + builder.getField("name"));
        entry.setValue(builder.build());
        return null;
    }
});

while (...) {
    // ... original code to build BinaryObject data and call ds.addData
}
Unluckily, the following exception occurred:
[09:52:36] Topology snapshot [ver=61, servers=2, clients=0, CPUs=8, heap=2.7GB]
10000 added...
20000 added...
30000 added...
40000 added...
[09:52:39,174][SEVERE][data-streamer-#54%null%][DataStreamerImpl] DataStreamer operation failed.
class org.apache.ignite.IgniteCheckedException: Failed to finish operation (too many remaps): 32
at org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$5.apply(DataStreamerImpl.java:869)
at org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$5.apply(DataStreamerImpl.java:834)
at org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:382)
at org.apache.ignite.internal.util.future.GridFutureAdapter.unblock(GridFutureAdapter.java:346)
at org.apache.ignite.internal.util.future.GridFutureAdapter.unblockAll(GridFutureAdapter.java:334)
at org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:494)
at org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:473)
at org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:461)
at org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$Buffer$2.apply(DataStreamerImpl.java:1572)
at org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$Buffer$2.apply(DataStreamerImpl.java:1562)
at org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:382)
at org.apache.ignite.internal.util.future.GridFutureAdapter.unblock(GridFutureAdapter.java:346)
at org.apache.ignite.internal.util.future.GridFutureAdapter.unblockAll(GridFutureAdapter.java:334)
at org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:494)
at org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:473)
at org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:461)
at org.apache.ignite.internal.processors.closure.GridClosureProcessor$2.body(GridClosureProcessor.java:967)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: class org.apache.ignite.IgniteCheckedException: Unknown pair [platformId=0, typeId=-1496463502]
at org.apache.ignite.internal.util.IgniteUtils.cast(IgniteUtils.java:7229)
... 5 more
Caused by: java.lang.ClassNotFoundException: Unknown pair [platformId=0, typeId=-1496463502]
at org.apache.ignite.internal.MarshallerContextImpl.getClassName(MarshallerContextImpl.java:392)
at org.apache.ignite.internal.MarshallerContextImpl.getClass(MarshallerContextImpl.java:342)
at org.apache.ignite.internal.binary.BinaryContext.descriptorForTypeId(BinaryContext.java:686)
at org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1755)
at org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1714)
at org.apache.ignite.internal.binary.BinaryObjectImpl.deserializeValue(BinaryObjectImpl.java:797)
at org.apache.ignite.internal.binary.BinaryObjectImpl.value(BinaryObjectImpl.java:143)
at org.apache.ignite.internal.processors.cache.CacheObjectUtils.unwrapBinary(CacheObjectUtils.java:161)
at org.apache.ignite.internal.processors.cache.CacheObjectUtils.unwrapBinaryIfNeeded(CacheObjectUtils.java:41)
at org.apache.ignite.internal.processors.cache.CacheObjectContext.unwrapBinaryIfNeeded(CacheObjectContext.java:125)
at org.apache.ignite.internal.processors.datastreamer.DataStreamerEntry$1.getValue(DataStreamerEntry.java:96)
at org.apache.ignite.stream.StreamTransformer.receive(StreamTransformer.java:45)
at org.apache.ignite.internal.processors.datastreamer.DataStreamerUpdateJob.call(DataStreamerUpdateJob.java:137)
at org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6608)
at org.apache.ignite.internal.processors.closure.GridClosureProcessor$2.body(GridClosureProcessor.java:959)
... 4 more
What should I do to fix it?
You get a ClassNotFoundException because the DataStreamer internally tries to deserialize the stored BinaryObject. To make it work with BinaryObjects directly, invoke ds.keepBinary(true) before using it.
Another problem in your code is the way you use the result of entry.getValue(). The entry passed to the process method represents the record previously stored in the cache, so you'll most likely get a null value there. If you want the newly streamed value, use arguments[0].
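Putting both fixes together, a minimal sketch (assuming the same cache and builder setup as in the question) could look like this:

IgniteDataStreamer<Long, BinaryObject> ds = ignite.dataStreamer(CACHE_NAME);
ds.keepBinary(true); // keep values in binary form so the streamer doesn't try to deserialize them

ds.receiver(new StreamTransformer<Long, BinaryObject>() {
    @Override
    public Object process(MutableEntry<Long, BinaryObject> entry, Object... arguments)
            throws EntryProcessorException {
        // arguments[0] holds the newly streamed value; entry.getValue() is what is
        // already stored in the cache (most likely null here).
        BinaryObject streamed = (BinaryObject) arguments[0];

        BinaryObjectBuilder builder = streamed.toBuilder();
        builder.setField("name", "Modify" + streamed.<String>field("name"));

        entry.setValue(builder.build());
        return null;
    }
});

// ... then build BinaryObject values and call ds.addData(key, value) as before.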
The code speaks for itself:
model = new QSqlRelationalTableModel();
model->setEditStrategy(QSqlRelationalTableModel::OnRowChange);
model->setTable("members");
model->setRelation(Member_TeamID, QSqlRelation("teams", "ID", "Name"));
model->setSort(Member_Name, Qt::AscendingOrder);
model->select();
mapper = new QDataWidgetMapper();
mapper->setSubmitPolicy(QDataWidgetMapper::ManualSubmit);
mapper->setModel(model);
mapper->setItemDelegate(new QSqlRelationalDelegate());
void member_detail::deleteMember()
{
    int row = mapper->currentIndex();
    bool x = model->removeRow(row);
    mapper->submit();
    mapper->setCurrentIndex(qMin(row, model->rowCount() - 1));
    QMessageBox::critical(0, "W", QString::number(x)); // this echoes false
}
Simply put: when I call deleteMember, the record is not removed from the model, but it is removed from the database (I checked using Navicat).
Specs: Qt 5.0.2, Linux 64-bit, g++ as compiler.
From QSqlTableModel::removeRows docs:
Emits the beforeDelete() signal before a row is deleted. When the edit strategy is OnManualSubmit signal emission is delayed until submitAll() is called.
So check the edit strategy you're using and try to call model->submitAll() after changes.
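For example, a minimal sketch of deleteMember along those lines (same member names as in the question) could be:

void member_detail::deleteMember()
{
    int row = mapper->currentIndex();

    if (model->removeRow(row))   // marks the row for removal in the model
        model->submitAll();      // pushes the deletion and refreshes the model's data
    else
        QMessageBox::critical(0, "Delete failed", model->lastError().text());

    mapper->setCurrentIndex(qMin(row, model->rowCount() - 1));
}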