de.hybris.platform.jalo.JaloSystemException: no attribute ImpExImportCronJob.sendEmailOnFailure found[HY--1] - insert-update

Hybris (ImpEx script import):
I am getting the exception below while importing an ImpEx script:
de.hybris.platform.jalo.JaloSystemException: no attribute ImpExImportCronJob.sendEmailOnFailure found[HY--1]

This issue occurs when using a DB dump for the first time.
I have resolved it: run a system update with all extensions selected in the HAC.
That worked.

BigQueryCheckAsyncOperator in airflow does not exist

I am trying to use async operators for BigQuery; however,
from airflow.providers.google.cloud.operators.bigquery import BigQueryCheckAsyncOperator
gives the error:
ImportError: cannot import name 'BigQueryCheckOperatorAsync' from 'airflow.providers.google.cloud.operators.bigquery'
The documentation at https://airflow.apache.org/docs/apache-airflow-providers-google/stable/operators/cloud/bigquery.html mentions that BigQueryCheckAsyncOperator exists.
I am using Airflow 2.4.
How can I import it?
The operator you are trying to import was never released.
It was added in one PR and removed in another; both were part of the Google provider 8.4.0 release, so the BigQueryCheckAsyncOperator class was never part of any release.
You can use deferrable mode in the existing BigQueryCheckOperator class by setting the deferrable parameter to True.
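As a minimal sketch (the DAG id, project, dataset, table and SQL below are placeholders, not from the original post), the deferrable check could look like this:

from airflow import DAG
import pendulum
from airflow.providers.google.cloud.operators.bigquery import BigQueryCheckOperator

with DAG(
    dag_id="bq_check_deferrable",
    start_date=pendulum.datetime(2022, 1, 1, tz="UTC"),
    schedule=None,
    catchup=False,
) as dag:
    # deferrable=True runs the check asynchronously via the triggerer
    # instead of blocking a worker slot while waiting for BigQuery.
    check = BigQueryCheckOperator(
        task_id="check_row_count",
        sql="SELECT COUNT(*) FROM `my_project.my_dataset.my_table`",
        use_legacy_sql=False,
        deferrable=True,
    )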

I created an updatesite.nsf from the updatesite.ntf template and am trying to import a feature

I created an updatesite.nsf from the updatesite.ntf template, and when I try to import a feature I get an error that says "LS2J Error: Java constructor failed to execute in (#304)..." followed by a long message. Notes and Domino are both 12.0.02, 64-bit. I need help with this. Why am I getting this error? There is nothing in the activity logs in updatesite.nsf either.
The developer created a JAR and is trying to import the feature into the updatesite.nsf database, and it errors out as above. I expect it to import.

Start-AzureSqlDatabaseImport completed

Is there any way to know when a Start-AzureSqlDatabaseImport command has finished importing? The command returns pretty much immediately, but I can't find a way to check when the import has actually completed into my existing database.
Start-AzureSqlDatabaseImport starts an import, which is an asynchronous operation. If you want to check the status of the import, you will need to store the import request in a variable. See below:
$importRequest = Start-AzureSqlDatabaseImport -SqlConnectionContext $SqlCtx -StorageContainer $Container -DatabaseName $DatabaseName -BlobName $BlobName
After you store the request in a variable you can then use the Get-AzureSqlDatabaseImportExportStatus cmdlet to track the status of the import. See below:
Get-AzureSqlDatabaseImportExportStatus -Request $importRequest
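If you want to block until the import finishes, a rough polling sketch follows; the Status property name and the "Completed" value it is compared against are assumptions about this cmdlet's output, so verify them against what it returns in your environment:

do {
    Start-Sleep -Seconds 30
    # Re-query the service for the current state of the import request.
    $status = Get-AzureSqlDatabaseImportExportStatus -Request $importRequest
    Write-Output ("Import status: " + $status.Status)    # assumed property name
} while ($status.Status -notlike "Completed*")           # assumed completion value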
Hope this helps!

PigUnit not working for pig scripts that use HCatLoader

I have a Pig script where I am loading data like this:
LOAD_A = LOAD '$DB_AND_TABLE' USING org.apache.hcatalog.pig.HCatLoader();
I'm overriding the alias in my PigUnit test as:
overrideInputAlias("LOAD_A", load_a);
Ideally, I think that if I override the alias, PigUnit should not try loading via HCatLoader, but it is complaining:
ERROR 1000: Error during parsing. Could not resolve org.apache.hcatalog.pig.HCatLoader using imports: [, java.lang., org.apache.pig.builtin., org.apache.pig.impl.builtin.]
Could somebody please point out whether I need to do something different to use HCatLoader with PigUnit?
Please try using override():
test.override("LOAD_A", "LOAD_A = LOAD 'abc' USING PigStorage(',');");
If you still get the same error, I would suggest adding hcatalog-pig-adapter to your Maven dependencies.
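For reference, a minimal sketch of how that override might sit in a full PigUnit test; the script path, expected rows, and class name here are placeholders, not from the original post:

import org.apache.pig.pigunit.PigTest;
import org.junit.Test;

public class LoadATest {
    @Test
    public void testLoadA() throws Exception {
        // Hypothetical script path, for illustration only.
        PigTest test = new PigTest("src/main/pig/myscript.pig");
        // Replace the HCatLoader-backed LOAD with a plain PigStorage LOAD,
        // so the test never touches HCatalog.
        test.override("LOAD_A", "LOAD_A = LOAD 'abc' USING PigStorage(',');");
        // Placeholder expected tuples.
        String[] expected = { "(row1,1)", "(row2,2)" };
        test.assertOutput("LOAD_A", expected);
    }
}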

Hadoop: wrong classpath in map reduce job

I'm running a Cloudera cluster on 3 virtual machines and am trying to execute an HBase bulk load via a MapReduce job, but I always get the error:
error: Class org.apache.hadoop.hbase.mapreduce.HFileOutputFormat not found
So it seems that the map process doesn't find the class. I tried the following:
1) adding the hbase.jar to the HADOOP_CLASSPATH on every node
2) adding TableMapReduceUtil.addDependencyJars(job) / TableMapReduceUtil.addDependencyJars(myConf, HFileOutputFormat.class) to my source code
Nothing worked. I have absolutely no idea why the class is not found, because the jar/class is definitely available on the classpath.
If I take a look into the job.xml I see the following entry:
name=tmpjars value=file:/C:/Users/Thomas/.m2/repository/org/apache/zookeeper/zookeeper/3.4.5-cdh4.3.0/zookeeper-3.4.5-cdh4.3.0.jar,file:/C:/Users/Thomas/.m2/repository/org/apache/hbase/hbase/0.94.6-cdh4.3.0/hbase-0.94.6-cdh4.3.0.jar,file:/C:/Users/Thomas/.m2/repository/org/apache/hadoop/hadoop-core/2.0.0-mr1-cdh4.3.0/hadoop-core-2.0.0-mr1-cdh4.3.0.jar,file:/C:/Users/Thomas/.m2/repository/com/google/guava/guava/11.0.2/guava-11.0.2.jar,file:/C:/Users/Thomas/.m2/repository/com/google/protobuf/protobuf-java/2.4.0a/protobuf-java-2.4.0a.jar
This seems a little odd to me: these are my local jars on the Windows system. Maybe these should be the HDFS jars? If so, how can I change the values for "tmpjars"?
Here is the Java code I am trying to execute:
configuration = new Configuration(false);
configuration.set("mapred.job.tracker", "192.168.2.41:8021");
configuration.set("fs.defaultFS", "hdfs://192.168.2.41:8020/");
configuration.set("hbase.zookeeper.quorum", "192.168.2.41");
configuration.set("hbase.zookeeper.property.clientPort", "2181");

Job job = new Job(configuration, "HBase Bulk Import for " + tablename);
job.setJarByClass(HBaseKVMapper.class);

job.setMapperClass(HBaseKVMapper.class);
job.setMapOutputKeyClass(ImmutableBytesWritable.class);
job.setMapOutputValueClass(KeyValue.class);

job.setOutputFormatClass(HFileOutputFormat.class);
job.setPartitionerClass(TotalOrderPartitioner.class);
job.setInputFormatClass(TextInputFormat.class);

HFileOutputFormat.configureIncrementalLoad(job, hTable);

FileInputFormat.addInputPath(job, new Path("myfile1"));
FileOutputFormat.setOutputPath(job, new Path("myfile2"));

job.waitForCompletion(true);

LoadIncrementalHFiles loader = new LoadIncrementalHFiles(configuration);
loader.doBulkLoad(new Path("myFile3"), hTable);
EDIT:
I experimented a little more and it's totally strange. I added the following line to the Java code:
job.setJarByClass(HFileOutputFormat.class);
After I executed this, the error was gone, but another ClassNotFoundException appeared:
java.lang.RuntimeException: java.lang.ClassNotFoundException: Class mypackage.bulkLoad.HBaseKVMapper not found
HBaseKVMapper is the custom Mapper class I want to execute. I tried to add it with job.setJarByClass(HBaseKVMapper.class), but that doesn't work since it's only a class file and not a jar. So I generated a jar file including HBaseKVMapper.class. After that, I executed the job again and got the HFileOutputFormat.class not found exception again.
After debugging a little, I found out that the setJarByClass() method only copies the local jar file to .staging/job_#number/job.jar on HDFS. So setJarByClass() only works for one jar file, because calling it again with another jar overwrites job.jar.
While searching for the error I looked at the job staging directory, and inside its libjars directory I saw the relevant jar files.
So the HBase jar is inside the libjars directory, but the JobTracker doesn't use it for executing the job. Why?
I would try using Cloudera Manager (free version) as it takes care of these issues for you. Otherwise note the following:
Both your own classes and the HBase class HFileOutputFormat need to be available on the classpath, both locally and remotely.
Submitting the job
This means getting the classpath right locally for when your driver runs:
$ env HADOOP_CLASSPATH=$(hbase classpath) hadoop jar path/to/jar class....
On the server
In your hadoop-env.sh
export HADOOP_CLASSPATH=$(hbase classpath)
or use TableMapReduceUtil.addDependencyJars in your driver code, as sketched below.
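A minimal sketch of what that could look like, reusing the Job and HFileOutputFormat types from the question; the wrapper class and method name are just for illustration:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.mapreduce.Job;

public class DependencyJarSetup {
    // Call this right after creating the Job in the driver.
    public static void shipDependencyJars(Job job) throws IOException {
        // Adds the jars containing the job's mapper, input/output formats and
        // key/value classes to "tmpjars", so they travel with the job.
        TableMapReduceUtil.addDependencyJars(job);
        // Explicitly add the jar that contains HFileOutputFormat as well.
        Configuration conf = job.getConfiguration();
        TableMapReduceUtil.addDependencyJars(conf, HFileOutputFormat.class);
    }
}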
I found a "hacked" solution which worked for me, but I'm not happy with it because it's not really practicable.
My "hacked" solution:
create one big jar with all necessary class files; I called it "big.jar" and added it to the local (Eclipse) classpath
add the line job.setJarByClass(MyMapperClass.class) ... MyMapperClass has to be in big.jar
When I execute this, big.jar is copied to the file system for every job. No more errors. The problem is that the jar is 80 MB in size and has to be copied every time.
If anyone knows a better way, I would be thankful if they could tell me how.
EDIT:
Now I am trying to execute jobs with Apache Pig and have exactly the same problem. My hacked solution doesn't work in this case because Pig creates the jobs automatically. Here is the Pig error:
java.lang.ClassNotFoundException: Class org.apache.hadoop.hbase.mapreduce.TableSplit not found