We have successfully executed the DatabaseTablesPreparer and initialized the tables in the DB, but when we try to initialize the indexes on the tables with SQLScriptPreparer, we get the following exception:
ES1 dbinit [] [] com.intershop.platform.cartridge.internal.CartridgeImpl [] [] [] [] "main" Neither Ivy descriptor nor cartridge properties found for cartridge 'app_core_a1'!
ES1 dbinit [] [app_core_a1:Class1 DatabaseIndexesPreparer [hr/a1/core/dbinit/scripts/dbindex.ddl] Version:null] com.intershop.beehive.core.dbinit.preparer.database.DatabaseIndexesPreparer [] [] [] [] "main" [core] Exception java.lang.NullPointerException: null
at com.intershop.beehive.core.dbinit.preparer.database.SQLScriptPreparer.getCommand(SQLScriptPreparer.java:158)
at com.intershop.beehive.core.dbinit.preparer.database.SQLScriptPreparer.process(SQLScriptPreparer.java:353)
We had a similar problem with DatabaseTablesPreparer (the cartridge was null), and we solved it by adding a cartridge.properties file, but now we are getting the same error ("Neither Ivy descriptor nor cartridge properties found for cartridge 'app_core_a1'") even though the cartridge properties file is defined.
These are the lines in the decompiled preparer code where the NullPointerException occurs:
getCartridge().getVersion() + (getCartridge().getBuild().isEmpty() ? "" : new StringBuilder().append(".").append(getCartridge().getBuild()).toString()) };
This is the preparer from dbinit.properties:
Class1 = com.intershop.beehive.core.dbinit.preparer.database.DatabaseIndexesPreparer \
hr/a1/core/dbinit/scripts/dbindex.ddl
And this is the dbinit command we are executing:
dbinit.bat --exec-id=app_core_a1:Class1
The DatabaseTablesPreparer from the same cartridge, defined in the same dbinit.properties, executes successfully.
The problem was fixed by publishing the cartridge. It seems the Ivy descriptor had been deleted and the cartridge had to be republished.
I have just started a new project to API-test our service using Gatling. At this point, I want to run a search query; below is the code:
def chnSendToRender(testData: FeederBuilderBase[String]): ChainBuilder = {
  feed(testData)
  exec(api.AdvanceSearch.searchAsset(s"{\"all\":[{\"all:aggregate:text\":{\"contains\":\"#{edlAssetId}_Rendered\"}}]}", "#{authToken}")
    .check(status.is(200).saveAs("searchStatus"))
    .check(jsonPath("$..asset:id").findAll.optional.saveAs("renderedAssetList"))
  )
    .doIf(session => session("searchStatus").as[Int] == 200) {
      exec { session =>
        printConsoleLog("Rendered Asset ID List: " + session("renderedAssetList").as[String], "INFO")
        session
      }
    }
}
I already declared the feeder in the simulation Scala file:
class GVRERenderEditor_new extends Simulation {
  private val edlToRender = csv("data/render/edl_asset_ids.csv").queue
  private val chnPostRender = components.notifications.notice.JobsPolling_new.chnSendToRender(edlToRender)
  private val scnSendEDLForRender = scenario("Search Post Render")
    .exitBlockOnFail(exec(preSimAuth))
    .exec(chnPostRender)

  setUp(
    scnSendEDLForRender.inject(atOnceUsers(1)).protocols(httpProtocol)
  )
    .maxDuration(sessionDuration.seconds)
    .assertions(global.successfulRequests.percent.is(100))
}
But the Gatling test failed to run, showing this error: Exception in thread "main" java.lang.UnsupportedOperationException: There were no requests sent during the simulation, reports won't be generated
If I hardcode the #{edlAssetId} (put a real edlAssetId in that query), I get results. I think I am passing the parameter incorrectly in this case. I've tried to print the output in the console log, but with no luck. What's wrong with this code? I would appreciate your help. Thanks!
feed(testData)
exec(api.AdvanceSearch.searchAsset(s"{\"all\":[{\"all:aggregate:text\":{\"contains\":\"#{edlAssetId}_Rendered\"}}]}", "#{authToken}")
  .check(status.is(200).saveAs("searchStatus"))
  .check(jsonPath("$..asset:id").findAll.optional.saveAs("renderedAssetList"))
)
You're missing a . (dot) before the exec to attach it to the feed.
As a result, your method returns only the last instruction, i.e. the exec.
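For example, a minimal sketch of the corrected chain, reusing the same calls from the question (api.AdvanceSearch.searchAsset and printConsoleLog are the question's own helpers):

def chnSendToRender(testData: FeederBuilderBase[String]): ChainBuilder = {
  // feed and exec are now chained with a dot, so the returned ChainBuilder contains both steps
  feed(testData)
    .exec(api.AdvanceSearch.searchAsset(s"{\"all\":[{\"all:aggregate:text\":{\"contains\":\"#{edlAssetId}_Rendered\"}}]}", "#{authToken}")
      .check(status.is(200).saveAs("searchStatus"))
      .check(jsonPath("$..asset:id").findAll.optional.saveAs("renderedAssetList"))
    )
    .doIf(session => session("searchStatus").as[Int] == 200) {
      exec { session =>
        printConsoleLog("Rendered Asset ID List: " + session("renderedAssetList").as[String], "INFO")
        session
      }
    }
}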
I'm using Apache Ignite v2.8.1 for .NET on a Windows 10 machine.
I am trying to use affinity collocation on a query-enabled cache. The entities I store in the cache have a primary key named "Id" and an affinity key named "PartnerId". Both keys are of type Int32. I'm defining the cache as follows:
new CacheConfiguration("BespokeCharge")
{
KeyConfiguration = new[]
{
new CacheKeyConfiguration()
{
AffinityKeyFieldName = "PartnerId",
TypeName = typeof(BespokeCharge).Name
}
}
};
Next I use the following code to add the data:
var cache = Ignite.GetCache<AffinityKey, BespokeCharge>("BespokeCharge");
cache.Put(new AffinityKey(entity.Id, entity.PartnerId), entity);
So far so good. Since I want to be able to use SQL to search for bespoke charges, I also add a QueryEntity configuration:
new CacheConfiguration("BespokeCharge",
new QueryEntity(typeof(AffinityKey), typeof(BespokeCharge))
{
KeyFieldName = "Id",
TableName = "BespokeCharge"
})
{
KeyConfiguration = new[]
{
new CacheKeyConfiguration()
{
AffinityKeyFieldName = "PartnerId",
TypeName = typeof(BespokeCharge).Name
}
}
};
When I run the code, both Ignite and my app crash and the following error is logged:
JVM will be halted immediately due to the failure: [failureCtx=FailureContext [type=CRITICAL_ERROR, err=class o.a.i.i.processors.cache.persistence.tree.CorruptedTreeException: B+Tree is corrupted [pages(groupId, pageId)=[IgniteBiTuple [val1=1565129718, val2=844420635164729]], cacheId=-1278247946, cacheName=BESPOKECHARGES, indexName=BESPOKECHARGES_ID_ASC_IDX, msg=Runtime failure on row: Row#29db2fbe[ key: AffinityKey [idHash=8137191, hash=783909474, key=3, affKey=2], val: UtilityClick.BillValidation.Shared.InMemory.Model.BespokeCharge [idHash=889383506, hash=-399638125, BespokeChargeTypeId=4, ChargeValue=100.0000, ChargeValueIncCommission=100.0000, Id=3, PartnerId=2, QuoteRecordId=5] ][ 4, 100.0000, 100.0000, , 2, 5 ]]]]
When I tried to define QueryEntity with the key type of int instead of AffinityKey, I got a different error with the same outcome -- a crash.
JVM will be halted immediately due to the failure: [failureCtx=FailureContext [type=CRITICAL_ERROR, err=java.lang.ClassCastException: class o.a.i.cache.affinity.AffinityKey cannot be cast to class java.lang.Integer (o.a.i.cache.affinity.AffinityKey is in unnamed module of loader 'app'; java.lang.Integer is in module java.base of loader 'bootstrap')]]
What am I doing wrong? Thank you for your help!
KeyFieldName = "Id" setting is the problem. It sets Id field to be used as a cache key, which is int, but then we use AffinityKey as a cache key, which causes a type mismatch.
In this case I don't think we need KeyFieldName at all, removing it fixes the problem, and SELECT queries should not be affected by this change.
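For reference, a sketch of the corrected configuration: it is the question's code with the KeyFieldName line removed and everything else unchanged.

new CacheConfiguration("BespokeCharge",
    new QueryEntity(typeof(AffinityKey), typeof(BespokeCharge))
    {
        // KeyFieldName removed: the cache key is AffinityKey, not the int Id field
        TableName = "BespokeCharge"
    })
{
    KeyConfiguration = new[]
    {
        new CacheKeyConfiguration()
        {
            AffinityKeyFieldName = "PartnerId",
            TypeName = typeof(BespokeCharge).Name
        }
    }
};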
I'm writing a stream UDF for Aerospike 3.6.2, and I'd like to put some code in a separate Lua module. I followed the example exactly and created a file mymodule.lua with the following contents:
local exports = {}

function exports.one()
  return 1
end

function exports.two()
  return 2
end

return exports
and put my UDF in a file testUdf.lua:
local MM = require('mymodule')

local function three()
  return MM.one() + MM.two()
end

function testUdf(stream)
  local type = three()
  local testFilter = function(record)
    return record.campaignType == type
  end
  return stream : filter(testFilter)
end
I register both modules and execute a query from the Java client:
LuaConfig.SourceDirectory = this.udfPath;
List<RegisterTask> tasks = new ArrayList<>();
for (String udfName : new String[] { "mymodule.lua", "testUdf.lua" }) {
    File udf = new File (this.udfPath, udfName);
    tasks.add (this.aerospike.register (null, udf.getPath(), udfName, Language.LUA));
}
tasks.stream().forEach (RegisterTask::waitTillComplete);

Statement stmt = new Statement();
stmt.setNamespace (getRawFactNamespace());
stmt.setSetName (SET_FACT);
stmt.setFilters (Filter.equal (BIN_PRODUCT_CODE, "UX"));
stmt.setBinNames (FACT_BINS);

int count = 0;
try (ResultSet rs = this.aerospike.queryAggregate (null, stmt, "testUdf", "testUdf")) {
    while (rs.next()) {
        count++;
    }
}
I see the lua files, with the correct contents, on both of my test Aerospike servers in aerospike/var/udf/lua, which is where mod-lua.user-path points to in aerospike.conf. I logged package.path and it includes the aerospike/var/udf/lua directory as well.
But when I invoke my UDF, I get the following error:
Exception in thread "main" com.aerospike.client.AerospikeException: org.luaj.vm2.LuaError: testUdf:1 module 'mymodule' not found: mymodule
no field package.preload['mymodule']
mymodule.lua
no class 'mymodule'
stack traceback:
testUdf:1: in main chunk
[Java]: in ?
at com.aerospike.client.query.QueryExecutor.checkForException(QueryExecutor.java:122)
at com.aerospike.client.query.ResultSet.next(ResultSet.java:78)
...
Caused by: org.luaj.vm2.LuaError: testUdf:1 module 'mymodule' not found: mymodule
no field package.preload['mymodule']
mymodule.lua
no class 'mymodule'
stack traceback:
testUdf:1: in main chunk
[Java]: in ?
at org.luaj.vm2.LuaValue.error(Unknown Source)
at org.luaj.vm2.lib.PackageLib$require.call(Unknown Source)
at org.luaj.vm2.LuaClosure.execute(Unknown Source)
at org.luaj.vm2.LuaClosure.onInvoke(Unknown Source)
at org.luaj.vm2.LuaClosure.invoke(Unknown Source)
at org.luaj.vm2.LuaValue.invoke(Unknown Source)
at com.aerospike.client.lua.LuaInstance.loadPackage(LuaInstance.java:113)
at com.aerospike.client.query.QueryAggregateExecutor.runThreads(QueryAggregateExecutor.java:92)
at com.aerospike.client.query.QueryAggregateExecutor.run(QueryAggregateExecutor.java:77)
What am I doing wrong?
I have a simple Job (named A) which starts a simple transformation (also named A). The transformation contains only a dummy component.
They are both stored in a DB repository.
If I start the job from Kitchen, everything runs fine:
./kitchen.sh -rep=spoon -user=<user> -pass=<pwd> -job A
Then I wrote some simple Java code:
JobMeta jobMeta = repository.loadJob(jobName, directory, null, null);
org.pentaho.di.job.Job job = new org.pentaho.di.job.Job(null, jobMeta);
job.getJobMeta().setInternalKettleVariables(job);
job.setLogLevel(LogLevel.ERROR);
job.setName(Thread.currentThread().getName());
job.start();
job.waitUntilFinished();
if (job.getResult() != null && job.getResult().getNrErrors() != 0) {
    ...
}
else {
    ...
}
The problem is that when running the Java program I always get the following error:
A - Unable to open transformation: null
A - java.lang.NullPointerException
at org.pentaho.di.job.entries.trans.JobEntryTrans.execute(JobEntryTrans.java:698)
at org.pentaho.di.job.Job.execute(Job.java:589)
at org.pentaho.di.job.Job.execute(Job.java:728)
at org.pentaho.di.job.Job.execute(Job.java:443)
at org.pentaho.di.job.Job.run(Job.java:363)
I have googled for this error without success and I am stuck.
Any suggestions?
The solution seems to be to replace the line (mirroring what Kitchen does internally)
org.pentaho.di.job.Job job = new org.pentaho.di.job.Job(null, jobMeta);
with
org.pentaho.di.job.Job job = new org.pentaho.di.job.Job(repository, jobMeta);
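For context, a minimal sketch of the loading code from the question with that one change applied (assuming the same repository, jobName and directory variables as in the question):

JobMeta jobMeta = repository.loadJob(jobName, directory, null, null);
// Pass the repository so that JobEntryTrans can load the transformation from it
org.pentaho.di.job.Job job = new org.pentaho.di.job.Job(repository, jobMeta);
job.getJobMeta().setInternalKettleVariables(job);
job.setLogLevel(LogLevel.ERROR);
job.setName(Thread.currentThread().getName());
job.start();
job.waitUntilFinished();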
Hoping that this helps someone else.
I have recently started working with JSON files and processing data using Pig scripts. I am using Pig version 0.9.3. I came across PiggyBank, which I thought would be useful for loading and processing JSON files in Pig scripts.
I have built piggybank.jar through ANT.
Later, I compiled the Java file and updated piggybank.jar, and was trying to run the given example JSON file.
I have written a simple Pig script and the respective JSON as follows.
REGISTER piggybank.jar;
a = LOAD 'file3.json' using org.apache.pig.piggybank.storage.JsonLoader() AS (json:map[]);
b = foreach a GENERATE flatten(json#'menu') AS menu;
c = foreach b generate flatten(menu#'popup') as popup;
d = foreach c generate flatten(popup#'menuitem') as menu;
e = foreach d generate flatten(menu#'value') as val;
DUMP e;
file3.json
{ "menu" : {
"id" : "file",
"value" : "File",
"popup": {
"menuitem" : [
{"value" : "New", "onclick": "CreateNewDoc()"},
{"value" : "Open", "onclick": "OpenDoc()"},
{"value" : "Close", "onclick": "CloseDoc()"}
]
}
}}
I get the following exception during runtime:
org.apache.pig.backend.executionengine.ExecException: ERROR 6018: Error while reading input - Could not json-decode string: { "menu" : {
at org.apache.pig.piggybank.storage.JsonLoader.parseStringToTuple(JsonLoader.java:127)
Pig log file:
Pig Stack Trace
---------------
ERROR 1066: Unable to open iterator for alias e
org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1066: Unable to open iterator for alias e
at org.apache.pig.PigServer.openIterator(PigServer.java:901)
at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:655)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:303)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:188)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:164)
at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:84)
at org.apache.pig.Main.run(Main.java:561)
at org.apache.pig.Main.main(Main.java:111)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: java.io.IOException: Job terminated with anomalous status FAILED
at org.apache.pig.PigServer.openIterator(PigServer.java:893)
... 12 more
================================================================================
Please correct me if I am wrong. Thanks
You can handle nested JSON loading with Twitter's Elephant Bird: https://github.com/kevinweil/elephant-bird
a = LOAD 'file3.json' USING com.twitter.elephantbird.pig.load.JsonLoader('-nestedLoad')
This will parse the JSON into a map (http://pig.apache.org/docs/r0.11.1/basic.html#map-schema); a JSON array gets parsed into a DataBag of maps.
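For example, a sketch of the script above rewritten for the Elephant Bird loader. The REGISTER jar name is an assumption; the actual jar names and dependencies depend on the elephant-bird version you build.

REGISTER elephant-bird-pig.jar;  -- assumption: actual jar name(s) and dependencies depend on your elephant-bird build
a = LOAD 'file3.json' USING com.twitter.elephantbird.pig.load.JsonLoader('-nestedLoad') AS (json:map[]);
b = foreach a GENERATE json#'menu' AS menu;
c = foreach b GENERATE menu#'popup' AS popup;
d = foreach c GENERATE flatten(popup#'menuitem') AS menuitem;  -- the JSON array becomes a DataBag of maps
e = foreach d GENERATE menuitem#'value' AS val;
DUMP e;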