PigUnit not working for pig scripts that use HCatLoader - apache-pig

I have a Pig script where I load data like this:
LOAD_A = LOAD '$DB_AND_TABLE' USING org.apache.hcatalog.pig.HCatLoader();
I'm overriding the alias in my PigUnit test as:
overrideInputAlias("LOAD_A", load_a);
Ideally, I would think that if I override the alias, PigUnit should not try loading with HCatLoader, but it complains:
ERROR 1000: Error during
parsing. Could not resolve org.apache.hcatalog.pig.HCatLoader using imports: [,
java.lang., org.apache.pig.builtin., org.apache.pig.impl.builtin.]
Could somebody please point out whether I need to do something different to use HCatLoader with PigUnit?

Please try using override():
test.override("LOAD_A", "LOAD_A = LOAD 'abc' USING PigStorage(',');");
If you still get the same error, I would suggest adding hcatalog-pig-adapter to your Maven dependencies.

Related

Property expression of ExpressionStatement expected node to be of a type ["Expression"] but instead got "TSModuleBlock"

I am upgrading the dependencies of my react-native application from 0.53 to 0.59, but I am facing the error below while trying to build it using ./gradlew assembleRelease.
@babel/template placeholder "$1": Property expression of ExpressionStatement expected node to be of a type ["Expression"] but instead got "TSModuleBlock"
at Object.validate (C:\vs-code-upgraded\node_modules\@babel\types\lib\definitions\utils.js:132:11)
at validateField (C:\vs-code-upgraded\node_modules\@babel\types\lib\validators\validate.js:24:9)
at validate (C:\vs-code-upgraded\node_modules\@babel\types\lib\validators\validate.js:17:3)
at builder (C:\vs-code-upgraded\node_modules\@babel\types\lib\builders\builder.js:38:27)
at Object.expressionStatement (C:\vs-code-upgraded\node_modules\@babel\types\lib\builders\generated\index.js:316:31)
at applyReplacement (C:\vs-code-upgraded\node_modules\@babel\template\lib\populate.js:86:27)
I would like to know if there is a possibility to solve this build error.
Thank you in advance.
This is most likely due to a namespace being exported that contains only interfaces/types (type declarations rather than actual classes/functions/objects).
A quick fix is to add declare to the exported namespace:
export declare namespace SomeNameSpace
instead of
export namespace SomeNameSpace

ImageFileError: Cannot work out file type of ".nii"

When I try to load my .nii file as a 4D Niimg-like object (I've tried both nilearn and nibabel), I get the error below:
Error: ImageFileError: Cannot work out file type of
"/Users/audreyphan/Documents/Spring2020/DESPO/res4d/1/res4d_anat.nii"
Here is my code:
ds_name = '/Users/audreyphan/Documents/Spring2020/DESPO/res4d/1/res4d_anat.nii'
block = nib.load(ds_name) #Nibabel
block = image.load_img(ds_name) #Nilearn
Both attempts result in the same error.
I'm not sure what's causing this error to occur.
Thanks!
It looks like the libraries are unable to determine the file type of your file.
So first of all, make sure the file is not corrupt: can you load the data correctly with a tool such as ITK-SNAP (http://www.itksnap.org)?
If yes, you can try specifying the file type yourself in the nibabel package by using a specific loader function. E.g. you can try each of the following loaders:
img_nifti1 = nib.Nifti1Image.from_filename(file)
img_nifti2 = nib.Nifti2Image.from_filename(file)
Oddly enough, this error also occurs when the access permissions are not set appropriately on the file you are trying to load. Try using chmod to fix the permissions and then load the *.nii file again.
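To illustrate the permissions point with a self-contained sketch (using a hypothetical temporary stand-in file, not an actual NIfTI image, and assuming a POSIX system):

```python
import os
import stat
import tempfile

# Stand-in for the .nii file from the question (a hypothetical temp file)
fd, path = tempfile.mkstemp(suffix=".nii")
os.close(fd)

os.chmod(path, 0o000)   # simulate broken permissions: no read access
os.chmod(path, 0o644)   # the fix: equivalent of `chmod 644 file.nii`

mode = stat.S_IMODE(os.stat(path).st_mode)
readable = bool(mode & stat.S_IRUSR)   # owner read bit is set again

os.remove(path)
```

Once the owner read bit is restored, nib.load or image.load_img should be able to open the file (assuming it is a valid NIfTI image).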

OrientDB 2.2.4 Load Balancing

I am trying to setup a cluster for load balancing. I am using the Java Graph API. In the documentation there is this code:
final OrientGraphFactory factory = new OrientGraphFactory("remote:localhost/demo");
factory.setConnectionStrategy(OStorageRemote.CONNECTION_STRATEGY.ROUND_ROBIN_CONNECT);
OrientGraphNoTx graph = factory.getNoTx();
I copied and pasted the code exactly as shown, and I get this compilation error:
"incompatible types: CONNECTION_STRATEGY cannot be converted to
String"
The only relevant import I have is:
import com.orientechnologies.orient.client.remote.OStorageRemote;
Can you please help?
Has anyone tried this?
Thanks.
You could convert the constant to a String:
factory.setConnectionStrategy(OStorageRemote.CONNECTION_STRATEGY.ROUND_ROBIN_CONNECT.toString());
Hope it helps.
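The underlying issue is just a type mismatch: the setter expects a String, while the constant is an enum member. A rough Python analog (with hypothetical names, purely to illustrate the explicit conversion) is:

```python
from enum import Enum

# Hypothetical analog of OStorageRemote.CONNECTION_STRATEGY
class ConnectionStrategy(Enum):
    ROUND_ROBIN_CONNECT = "ROUND_ROBIN_CONNECT"

def set_connection_strategy(strategy):
    # Stands in for a setter that requires a plain string argument
    if not isinstance(strategy, str):
        raise TypeError("incompatible types: enum cannot be converted to str")
    return strategy

# Passing the enum member itself would raise; convert it explicitly,
# as with .toString() in Java
configured = set_connection_strategy(ConnectionStrategy.ROUND_ROBIN_CONNECT.value)
```

In both languages the conversion is explicit because the API was declared to take a string, not the enum type.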

Doctrine2: Type x already exists

I have a problem with the Doctrine API.
I want to add a new Doctrine Type. I followed this documentation to create the class, and I have added the type in my custom driver.
Type::addType("custom", "Namespace\NameBundle\Types\CustomType");
$this->registerDoctrineTypeMapping("CustomType", "custom");
My problem appears when I execute php app/console cache:clear.
[Doctrine\DBAL\DBALException]
Type custom already exists.
After a few searches, I found that Doctrine\DBAL\Types\Type::addType(…) throws an exception if the type is already registered… I don't understand why this error is being thrown.
I have found my problem!
I don't know why, but my custom type was being registered again and again.
To resolve the problem, I added this check around the registration:
if (!Type::hasType("custom")) {
Type::addType("custom", "Namespace\NameBundle\Types\CustomType");
$this->registerDoctrineTypeMapping("CustomType", "custom");
}
It works!

Pig latin load from S3 (folder expansion)

I am trying to use LOAD with an S3 bucket as the data source.
load s3n://hourly-logprocessing/{2013090100,2013100501}/??????_0.gz' using some loader()
does not work.
load s3n://hourly-logprocessing/{201309????}/??????_0.gz using some loader()
does not work.
I get this exception.
Caused by: java.lang.IllegalArgumentException: Can not create a Path
from an empty string
at org.apache.hadoop.fs.Path.checkPathArg(Path.java:91)
at org.apache.hadoop.fs.Path.(Path.java:99)
at org.apache.hadoop.fs.Path.(Path.java:58)
at org.apache.hadoop.fs.s3native.NativeS3FileSystem.listStatus(NativeS3FileSystem.java:498)
at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1341)
at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1418)
at org.apache.hadoop.fs.FileSystem.globPathsLevel(FileSystem.java:1602)
at org.apache.hadoop.fs.FileSystem.globStatusInternal(FileSystem.java:1539)
It only works when I use a single folder.
load s3n://some-folder/2013090100/??????_0.gz
How does Pig expand these paths? Any help would be appreciated.
First of all, I didn't try your examples (lazy me), but this works for my LOAD statements:
's3n://SOME_BUCKET/20[0-9][0-9]-[0-9][0-9]-[0-9][0-9]-23-*.mystuff_v14*'
Don't forget the single quotes around the path after the LOAD keyword (which are missing in your examples).
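Hadoop-style path globbing supports ?, *, and [...] character classes much like shell globs. As an illustration of which names the pattern above selects, here is a sketch using Python's fnmatch (which handles the same ?/*/[...] syntax) against made-up object names:

```python
import fnmatch

# Hypothetical S3 object names, made up for illustration
names = [
    "2013-09-01-23-00.mystuff_v14.gz",
    "2013-09-02-23-15.mystuff_v14.gz",
    "2013-09-01-12-00.mystuff_v14.gz",   # hour 12, not 23: should not match
    "readme.txt",
]

pattern = "20[0-9][0-9]-[0-9][0-9]-[0-9][0-9]-23-*.mystuff_v14*"
matches = [n for n in names if fnmatch.fnmatch(n, pattern)]
```

Only the two hour-23 files match, since each [0-9] class consumes exactly one digit and the literal -23- pins the hour field.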