Getting error in Karate v1.1.0 - TypeError: invokeMember (contains) on ["ABC","XYZ","OTHR","NEW"] failed due to: Message not supported

I had been using Karate v0.9.6 all this while. Recently I thought of upgrading, first to 1.1.0 and then to 1.2.0.
One thing that is troubling me a lot is as follows.
Earlier I used 'contains' to verify in the schema, like this:
# an array of expected values
* def dept_type_code = ["ABC","XYZ","OTHR","NEW"]
# then verify in the schema that the type_code has any one of those values in the array
* def index_department_type_schema = {"code": '#? dept_type_code.contains(_)'}
It was working in 0.9.6, but with 1.1.0 it fails with this error:
TypeError: invokeMember (contains) on ["ABC","XYZ","OTHR","NEW"] failed due to: Message not supported.
I'm sure I'm missing an important part of the release notes. I would really appreciate any solution to this problem.
Thanks!

Replacing .contains with .includes resolved the issue; see the 1.0 upgrade guide (Java APIs for maps and lists are no longer visible within JS blocks):
https://github.com/karatelabs/karate/wiki/1.0-upgrade-guide#java-api-s-for-maps-and-lists-are-no-longer-visible-within-js-blocks
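For example, the schema check from the question becomes the following; only the method name changes, since in Karate 1.x the array is a native JS array rather than a Java List:

* def dept_type_code = ["ABC","XYZ","OTHR","NEW"]
* def index_department_type_schema = {"code": '#? dept_type_code.includes(_)'}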


Pylint: same pylint and pandas version on 2 machines, 1 fails

I have 2 places running the same linting job:
Machine 1: Ubuntu over SSH
  pandas==1.2.3
  pylint==2.7.4
  Python 3.8.10
Machine 2: GitLab CI Docker image, python:3.8.12-buster
  pandas==1.2.3
  pylint==2.7.4
  Python 3.8.12
The Ubuntu machine is able to lint all the code fine, and it has for many months. Same for the CI job, except it had been running Python 3.7.8. Now that I upgraded the Docker image to Python 3.8.12, it throws several no-member linting errors on some Pandas objects. I've tried clearing CI caches etc.
I wish I could provide something more reproducible. But, to check my understanding of what a linter is doing, is it theoretically possible that a small version difference in python messes up pylint like this? For something like a no-member error on Pandas objects, I would think the dominant factor is the pandas version, but those are equal, so I'm confused!
Update:
I've looked at the Pandas code for pd.read_sql_query, which is what's causing the no-member error. It says:
def read_sql_query(
    sql,
    con,
    index_col=None,
    coerce_float=True,
    params=None,
    parse_dates=None,
    chunksize: Optional[int] = None,
) -> Union[DataFrame, Iterator[DataFrame]]:
In Docker, I get E1101: Generator 'generator' has no 'query' member (no-member) (because I'm running .query on the returned dataframe). So it seems Pylint thinks that this function returns a generator. But it does not make this assumption in my other setup. (I've also verified the SHA sum of pandas/io/sql.py matches). This seems similar to this issue, but I am still baffled by the discrepancy in environments.
A fix that worked was to bump a limit like:
init-hook = "import astroid; astroid.context.InferenceContext.max_inferred = 500"
in my .pylintrc file, as explained here.
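For context, that option sits in the main section of the config file, roughly like this (assuming the classic INI-style .pylintrc that pylint 2.7 reads):

[MASTER]
init-hook = "import astroid; astroid.context.InferenceContext.max_inferred = 500"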
I'm unsure why/if this is connected to my change in Python version, but I'm happy to use this and move on for now. It's probably complex.
(Another hack was to write a function that returns the passed arg if it is a DataFrame, and returns one DataFrame if the passed arg is an iterable of DataFrames. The ambiguous-type object could then be passed through this wrapper to clarify things for Pylint; see the sketch below. While this was more intrusive on our codebase, we had dozens of calls to pd.read_csv and pd.read_sql_query, and only about 3 calls confused Pylint, so we almost used this solution.)
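A minimal sketch of that wrapper idea (my own illustration based on the description above; the function name is made up):

import sqlite3
from typing import Iterable, Union

import pandas as pd

def as_single_dataframe(result: Union[pd.DataFrame, Iterable[pd.DataFrame]]) -> pd.DataFrame:
    # pd.read_sql_query returns a DataFrame, or an iterator of DataFrames when
    # chunksize is set; narrowing the union here lets pylint infer a plain
    # DataFrame at every call site
    if isinstance(result, pd.DataFrame):
        return result
    return next(iter(result))

con = sqlite3.connect(":memory:")
df = as_single_dataframe(pd.read_sql_query("SELECT 1 AS x", con))
print(df.query("x == 1"))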

NettyApplicationEngine is given but ApplicationEngine is expected

Hey, I created my project with the Ktor Project Creator, but I get errors that in embeddedServer() an ApplicationEngine is expected but a NettyApplicationEngine is given:
Type mismatch.
Required:
io.ktor.server.engine.ApplicationEngine.Configuration
Found:
io.ktor.server.netty.NettyApplicationEngine.Configuration
But I also get unresolved-reference errors for install {} inside embeddedServer {} (and because of that, I think jwt is an unresolved reference too).
Does anyone know how to fix this error (or these errors)?
Here you can see the whole thing in one place:
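For comparison, a minimal Netty setup normally looks like the sketch below (my own illustration, not the generated project). One common cause of this kind of mismatch is mixing ktor-server-core and ktor-server-netty artifacts from different Ktor versions, so checking that all Ktor dependencies share one version is worth a try:

import io.ktor.server.engine.embeddedServer
import io.ktor.server.netty.Netty

fun main() {
    // Netty is the engine factory; embeddedServer() infers the engine type
    // from it, so no explicit type arguments are needed
    embeddedServer(Netty, port = 8080) {
        // install plugins and define routing here
    }.start(wait = true)
}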

I need help upgrading OroCommerce to 4.1.1

I just upgraded from 3.1.17 to 4.1.1 and I'm finding a problem with my shopping lists.
When I get to /customer/shoppinglist/5064 I see this:
Looking at my log files from production I see:
[2020-06-23 17:42:56] request.CRITICAL: Uncaught PHP Exception Symfony\Component\ErrorHandler\Error\UndefinedMethodError: "Attempted to call an undefined method named "getDigitalAsset" of class "Proxies\__CG__\Oro\Bundle\AttachmentBundle\Entity\File"." at /usr/share/nginx/html/oroapp/vendor/oro/platform/src/Oro/Bundle/DigitalAssetBundle/Provider/FileTitleProvider.php line 47 {"exception":"[object] (Symfony\\Component\\Debug\\Exception\\FatalThrowableError(code: 0): Attempted to call an undefined method named \"getDigitalAsset\" of class \"Proxies\\__CG__\\Oro\\Bundle\\AttachmentBundle\\Entity\\File\". at /usr/share/nginx/html/oroapp/vendor/oro/platform/src/Oro/Bundle/DigitalAssetBundle/Provider/FileTitleProvider.php:47)"} []
I went to look at the code and I see that in fact there is no method getDigitalAsset in oro/platform/src/Oro/Bundle/DigitalAssetBundle/Provider/FileTitleProvider.php, nor in the proxy... how can this be?
I checked this on my VM (where the problem is not happening) and I see that there's this definition in the proxy class:
/**
 * {@inheritDoc}
 */
public function getDigitalAsset()
{
    $this->__initializer__ && $this->__initializer__->__invoke($this, 'getDigitalAsset', []);
    return parent::getDigitalAsset();
}
But again, I don't see a method called getDigitalAsset in the parent class.
I had some issues when doing the upgrade (I realized my nodejs wasn't upgraded as I thought it was), could that have anything to do with the issue?
Thanks
Edit:
I went through my platform upgrade again and found that there were some problems that prevented it from finishing completely.
This is what I found:
> loading Oro\Bundle\CMSBundle\Migrations\Data\ORM\LoadImageSlider
In LoadImageSlider.php line 117:
Attempted to call an undefined method named "setMainImage" of class "Oro\Bundle\CMSBundle\Entity\ImageSlide".
I commented out the loop inside the load method and re-ran the upgrade. Then I got:
> loading Oro\Bundle\CMSBundle\Migrations\Data\ORM\LoadImageSlider
In QueryException.php line 65:
[Semantical Error] line 0, col 117 near 'digitalAsset': Error: Class Oro\Bundle\AttachmentBundle\Entity\File has no association named digitalAsset
In QueryException.php line 43:
SELECT file, digitalAsset, sourceFile FROM Oro\Bundle\AttachmentBundle\Entity\File file INNER JOIN file.digitalAsset digitalAsset INNER JOIN digitalAsset.sourceFile sourceFile WHERE file.parentEntityClass = :parentEntityClass
AND file.parentEntityId = :parentEntityId AND file.parentEntityFieldName =
:parentEntityFieldName
Finally, I was able to complete the upgrade by commenting out the whole body of the load method.
> I had some issues when doing the upgrade (I realized my nodejs wasn't upgraded as I thought it was), could that have anything to do with the issue?
It looks like you have multiple versions of nodejs installed. To make an application use the right one, you can provide the absolute path to the executable with the AssetBundle configuration, like:
# config/config.yml
oro_asset:
    nodejs_path: /usr/local/node
    npm_path: /usr/local/npm
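If you are unsure which paths to use, standard shell commands will show which binaries the shell currently resolves (the output is machine-specific):

which node && node --version
which npm && npm --version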

Kafka 1.0.0 - Serialized.with() uses default serde instead of the ones provided

We recently updated our Kafka version from 0.10 to 1.0, and I am updating the deprecated code
KTable<Long, myClass> myKTable = this.streamBuilder
    .stream(Serdes.Long(), mySerde, sub_topic)
    .groupByKey(Serdes.Long(), mySerde)
    .reduce(myReducer, my_store);
to this
KTable<Long, myClass> myKTable = this.streamBuilder
    .stream(sub_topic, Consumed.with(Serdes.Long(), mySerde))
    .groupByKey(Serialized.with(Serdes.Long(), mySerde))
    .reduce(myReducer, Materialized.as(my_store));
My stream throws an error while serializing in groupByKey: Serialized.with() does not use the key serde provided and falls back to the default byte-array serde, which then hits my key, a Long, and throws a cast error.
Has anyone else encountered this in Kafka 1.0.0? The first snippet, against the deprecated API, works fine; the updated code using Serialized.with() does not. Any help is greatly appreciated.
Can you share the stack trace? I actually think the issue is with reduce() -- you need to specify the Serdes via Materialized again.
This is kind of a regression in the new API and was fixed recently in trunk: https://github.com/apache/kafka/pull/4919 Thus, the upcoming 2.0 release will contain the fix.
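Until then, a sketch of the suggested workaround, continuing the question's snippet (it assumes mySerde is a Serde for myClass; Bytes and KeyValueStore come from org.apache.kafka.common.utils and org.apache.kafka.streams.state):

KTable<Long, myClass> myKTable = this.streamBuilder
    .stream(sub_topic, Consumed.with(Serdes.Long(), mySerde))
    .groupByKey(Serialized.with(Serdes.Long(), mySerde))
    // pass the serdes again here so the state store does not fall back
    // to the default byte-array serdes
    .reduce(myReducer, Materialized.<Long, myClass, KeyValueStore<Bytes, byte[]>>as(my_store)
        .withKeySerde(Serdes.Long())
        .withValueSerde(mySerde));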

Blazegraph INSERT DATA crashes with NoSuchMethodError

In Blazegraph I attempted the following query:
INSERT DATA {
  <http://my.site/User/instances/1>
    :comment
    <http://my.site/Comment/instances/16> .
}
and it crashes with the following exception trace:
java.lang.NoSuchMethodError: com.bigdata.rdf.sail.sparql.SPARQLStarUpdateDataBlockParser.read()I
at com.bigdata.rdf.sail.sparql.SPARQLStarUpdateDataBlockParser.checkSparqlStarSyntax(SPARQLStarUpdateDataBlockParser.java:107)
at com.bigdata.rdf.sail.sparql.SPARQLStarUpdateDataBlockParser.parseValue(SPARQLStarUpdateDataBlockParser.java:100)
at org.openrdf.rio.trig.TriGParser.parseGraph(TriGParser.java:158)
at org.openrdf.repository.sail.helpers.SPARQLUpdateDataBlockParser.parseGraph(SPARQLUpdateDataBlockParser.java:87)
at org.openrdf.rio.trig.TriGParser.parseStatement(TriGParser.java:128)
at org.openrdf.rio.turtle.TurtleParser.parse(TurtleParser.java:214)
at com.bigdata.rdf.sail.sparql.UpdateExprBuilder.doUnparsedQuadsDataBlock(UpdateExprBuilder.java:746)
at com.bigdata.rdf.sail.sparql.UpdateExprBuilder.visit(UpdateExprBuilder.java:161)
at com.bigdata.rdf.sail.sparql.UpdateExprBuilder.visit(UpdateExprBuilder.java:119)
at com.bigdata.rdf.sail.sparql.ast.ASTInsertData.jjtAccept(ASTInsertData.java:23)
at com.bigdata.rdf.sail.sparql.Bigdata2ASTSPARQLParser.parseUpdate2(Bigdata2ASTSPARQLParser.java:289)
at com.bigdata.rdf.sail.BigdataSailRepositoryConnection.prepareNativeSPARQLUpdate(BigdataSailRepositoryConnection.java:278)
at com.bigdata.rdf.sail.BigdataSailRepositoryConnection.prepareUpdate(BigdataSailRepositoryConnection.java:182)
at org.openrdf.repository.base.RepositoryConnectionBase.prepareUpdate(RepositoryConnectionBase.java:180)
However, normal DELETE/INSERT WHERE queries work fine.
Any ideas how to solve this?
I simply removed the following line from my build.gradle:
compile('org.openrdf.sesame:sesame-runtime:2.8.6')
and refreshed the dependencies. You don't need to declare a Sesame dependency at all if you're using bigdata-core, since Sesame is already a transitive dependency of it.
The crash happened because bigdata-core relies on Sesame 2.7.12, which my 2.8.6 declaration was overriding; 2.8.6 has no implementation of the method being called, hence the NoSuchMethodError.
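To confirm which Sesame version actually wins on the classpath, Gradle's built-in dependency report can be used (a standard Gradle command; the configuration name matches the old-style compile configuration used above):

gradle dependencies --configuration compile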
Thanks to @AKSW for the guidance behind this solution.