Spark standalone v2.3.2 failing tests - apache-spark-sql

I have built Spark v2.3.2 on a big-endian platform using AdoptOpenJDK 1.8.
The build is successful, but we encounter test case failures in the following modules.
I wanted some information about these failing tests, in particular how severely the failures would affect Spark functionality.
Test modules with failures (please assign each a severity: 1 = Must Fix, 2 = Not Must Fix):
- core
- unsafe
- Spark Project SQL
- Spark Project Hive
- Spark Project Repl
We would be glad if you could address our question by assigning one of the severity numbers shown above to each failing test module.
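To judge severity it usually helps to narrow each failure down to a concrete suite first. With Spark's Maven build, something along these lines re-runs a single module or suite (a rough sketch; the module path and suite name are only examples, adjust them to your checkout):

    # Build once without tests, then run the tests of a single module (e.g. Spark SQL).
    ./build/mvn -DskipTests package
    ./build/mvn -pl sql/core test

    # Re-run one Scala suite to isolate a failure (scalatest-maven-plugin options).
    ./build/mvn -pl sql/core test -Dtest=none -DwildcardSuites=org.apache.spark.sql.DataFrameSuite

Knowing which suites fail, and whether the failures sit in endianness-sensitive code such as the unsafe module, makes it much easier for anyone to assign a severity.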

Related

Is there a way to run unit tests with incompatible dependencies

I just started to learn Rust and wanted a real project to work on. Since my Raspberry Pi only came with a non-PWM fan, I wanted to write a small CPU temperature monitor/fan control program for the Pi.
Now I want to run some unit tests and am having problems with that. First, some information:
- I am working on macOS
- I installed the cross-compilation tools and am using the target armv7-unknown-linux-musleabihf (the gnueabihf target didn't work for me, even though it is mentioned in multiple tutorials)
- I use a Raspberry Pi 4 B
- I only use one dependency: rppal = "0.13.1"
Compiling for the Raspberry Pi target works like a charm and I can execute the program on the Pi.
But when I want to run the tests on macOS, the rppal dependency fails to compile.
Now I was wondering if there is a way to run the tests while only compiling what is actually needed.
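One approach (a sketch, untested on your exact setup) is to make rppal a target-specific dependency in Cargo.toml, so host builds, including cargo test on macOS, never try to compile it:

    [dependencies]
    # cross-platform dependencies go here

    [target.'cfg(target_os = "linux")'.dependencies]
    # only compiled when targeting the Pi (e.g. armv7-unknown-linux-musleabihf),
    # so `cargo test` on the macOS host skips it entirely
    rppal = "0.13.1"

The code that talks to the GPIO/PWM then has to be gated the same way (e.g. with #[cfg(target_os = "linux")] or a Cargo feature), while the pure logic such as temperature thresholds and the fan curve stays unconditional and unit-testable on the host.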

Use Java 8 features (newer Janino version) in Pentaho Data Integration

Pentaho Data Integration 8.0.x uses Janino 2.5.16, released in 2010, to compile the User Defined Java Class (UDJC) step. There is a JIRA issue in Pentaho for updating it to a newer Janino version, which would bring Java 8-related features in Pentaho v8.2.0 GA, but there is no information on when that will be released.
Is there any other way I can use a newer Janino version (janino-3.0.8.jar) with the existing Pentaho for UDJC? I tried copying the updated jar into lib and also added commons-compiler-3.0.8.jar to satisfy its dependency. Now when I open Spoon, I get the following error:
Please advise on how this can be achieved. I understand that just replacing the jar may not be enough, but I would like to know whether something else can be done.
This is not easy. As your ClassNotFound error already shows, the public API of Janino has changed: some classes were removed and some were changed. What is the actual need to update it?
If you really need complicated business logic, then create a custom plugin. Documentation and tutorials are available, and you can look at the sources of the current built-in plugins (the sources are available on GitHub).
What important thing does the new version of Janino have that the old one doesn't (besides Java 8 support)? If you still want it: check out the Kettle engine, look into the sources of the UserDefinedClass step, change the code to support the new Janino version, test it, make your own build of PDI/Kettle, and try to send a pull request to the maintainers of the repository.
All of this is quite complicated. The plugin is built into the engine, so you have to make your own build, and your own build means you have to support it yourself. This is non-trivial: the project is huge, has only grown since, and keeps evolving. I spent several days making my first custom build (of version 4, back when it used Ivy) just to understand the code better and debug complicated cases, and it was never used in production.
The maintainers of the repository must have a good reason to include your changes upstream; the changes must be well tested, it is a long procedure, and it is most probably not worth it. A lot has changed since 2010; as far as I remember from the release notes, newer versions of Java already have the ability to compile code at runtime.
My advice is to make your own plugin.
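For reference, the Janino 3.x entry points that the UDJC step code would have to be ported to look roughly like this (a minimal sketch, assuming janino-3.0.8.jar plus commons-compiler-3.0.8.jar on the classpath; it is not how the UDJC step itself is wired internally):

    import java.lang.reflect.InvocationTargetException;

    import org.codehaus.commons.compiler.CompileException;  // shipped in commons-compiler-3.0.8.jar
    import org.codehaus.janino.ExpressionEvaluator;         // shipped in janino-3.0.8.jar

    public class Janino3Sketch {
        public static void main(String[] args) throws CompileException, InvocationTargetException {
            ExpressionEvaluator ee = new ExpressionEvaluator();
            ee.setParameters(new String[] { "a", "b" }, new Class[] { int.class, int.class });
            ee.setExpressionType(int.class);
            ee.cook("a + b");                                          // compile the expression at runtime
            System.out.println(ee.evaluate(new Object[] { 19, 23 }));  // prints 42
        }
    }

Part of the compiler API now lives in the separate commons-compiler artifact and some 2.5-era classes were moved or removed, which is why dropping a newer jar into lib ends in ClassNotFound: the UserDefinedClass step would have to be recompiled against classes like these.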

Are SDKs mutually exclusive in an IntelliJ module?

I have a mixed Scala/Python Maven module. I primarily do Scala development, but in order to do some Python work I changed the SDK to Python.
Now I cannot do Scala. Is there no way to have both available? Do I need to manually switch back and forth?
Here is what happened after selecting the Python SDK and attempting to run a Scala test class:
UPDATE: It appears the situation is worse than anticipated: I cannot even switch back to Scala at all. Did the project get corrupted?
Here is the error shown in the screenshot:
Error:scalac: error while loading package, Missing dependency 'object java.lang.Object in compiler mirror', required by /Users/steve/.m2/repository/org/scala-lang/scala-library/2.10.4/scala-library-2.10.4.jar(scala/package.class)
If you have a Python facet, you shouldn't select a Python SDK as the SDK to use for your module or project. It only needs to be selected in the facet settings, as your screenshot shows. The project/module SDK needs to be a regular Java/Scala SDK.

Pig versions and UDF

I am using Pig version 0.12, but for creating UDFs I am using the jar file of Pig version 0.9.
I simply downloaded the jar file for Pig 0.9 and added it to my Eclipse classpath.
All the UDFs that I created using the Pig 0.9 API work fine.
But I would like to know the impact of doing that.
Is there any problem that I will face in the future?
The issue that you will face is API inconsistencies as time goes by. Some of the core APIs are relatively stable (heck, most are), but the longer you use an old Pig API, the higher the chance you'll hit an issue running in the cluster.
Something else to think about is whether you are overriding your Pig version in the cluster. For example, say you have an uber-jar with the Pig scripts in it. If that jar contains Pig 0.9, you'll actually use that version rather than 0.12. By not migrating, you might be pulling in the wrong version of Pig.
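For what it's worth, the contract most simple UDFs rely on is exactly the part that has stayed stable between 0.9 and 0.12; a typical EvalFunc looks the same against either jar (a minimal sketch, the class is just an example):

    import java.io.IOException;

    import org.apache.pig.EvalFunc;
    import org.apache.pig.data.Tuple;

    // Classic upper-casing UDF; the EvalFunc/Tuple API is the same in 0.9 and 0.12,
    // but compiling against the jar that matches the cluster avoids surprises.
    public class UpperCase extends EvalFunc<String> {
        @Override
        public String exec(Tuple input) throws IOException {
            if (input == null || input.size() == 0 || input.get(0) == null) {
                return null;
            }
            return ((String) input.get(0)).toUpperCase();
        }
    }

If you do build an uber-jar, marking the Pig dependency as provided (so the cluster's 0.12 jars are the ones actually used at run time) avoids the version-override problem described above.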

Sonar Eclipse plugin: local analysis is still tagging fixed issues

I'm using the Sonar Eclipse plugin v3.3.
After I've fixed a rule violation (not a new issue, but one that exists on the Sonar server), I re-run the analysis on my project in Eclipse. I expected that the fixed issues would no longer be flagged by the analysis, but they still appear flagged even though they have been fixed.
In my Eclipse SonarQube preferences I have the severity marked as warning and Force full preview... unchecked.
In the view options I have Show->All Issues on Selection checked.
How do I set up the plugin so that once I've fixed the issue locally, the issue is no longer flagged when I re-run the analysis on my project?
Edit:
Full analysis is run nightly by a conditional build step in Jenkins using SonarQube Runner.
When I run the analysis via Eclipse, the first thing it does is wipe out the existing issue annotations, but then as soon as it contacts the server it immediately adds them back in. The issues stay flagged regardless of whether they were fixed locally or not.
If I intentionally put the wrong projectKey in the org.sonar.ide.eclipse.core.prefs file, then the local analysis runs much as I would expect: it flags all existing issues as new, which makes sense since it can't reach the server to ask whether they are pre-existing, and it doesn't flag any fixed issues.
The problem was that I was running SonarQube 3.7.3 on the server and not 4.0 or above. I upgraded the server install to 4.1 and all is well now.
By the way, the Sonar console output clearly stated:
SonarQube version 4.0 is required to perform local analysis.