org.jenkinsci.plugins.scriptsecurity.sandbox.RejectedAccessException: Scripts not permitted to use method hudson.model.Item getName

I was trying to delete old build history using a Groovy script. It was working fine earlier, and now, without any changes on my side, I am facing the issue below:
ERROR: Build step failed with exception
org.jenkinsci.plugins.scriptsecurity.sandbox.RejectedAccessException: Scripts not permitted to use method hudson.model.Item getName
at org.jenkinsci.plugins.scriptsecurity.sandbox.whitelists.StaticWhitelist.rejectMethod(StaticWhitelist.java:175)
at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:137)
at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:155)
at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:159)
at org.kohsuke.groovy.sandbox.impl.Checker$checkedCall.callStatic(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallStatic(CallSiteArray.java:56)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callStatic(AbstractCallSite.java:194)
at Script1.deleteBuildHistory(Script1.groovy:71)
at Script1$deleteBuildHistory.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:157)
at org.kohsuke.groovy.sandbox.GroovyInterceptor.onMethodCall(GroovyInterceptor.java:23)
at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:133)
at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:155)
at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:159)
at org.kohsuke.groovy.sandbox.impl.Checker$checkedCall.callStatic(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallStatic(CallSiteArray.java:56)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callStatic(AbstractCallSite.java:194)
at Script1.run(Script1.groovy:58)
at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.GroovySandbox.run(GroovySandbox.java:141)
at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SecureGroovyScript.evaluate(SecureGroovyScript.java:333)
at hudson.plugins.groovy.SystemGroovy.run(SystemGroovy.java:95)
at hudson.plugins.groovy.SystemGroovy.perform(SystemGroovy.java:59)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:744)
at hudson.model.Build$BuildExecution.build(Build.java:206)
at hudson.model.Build$BuildExecution.doRun(Build.java:163)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:504)
at hudson.model.Run.execute(Run.java:1798)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Build step 'Execute system Groovy script' marked build as failure
Finished: FAILURE
In my Groovy script I am using the API hudson.model.Hudson.instance.getItem(envVar.get("JOB_NAME")) to get the Jenkins job. Since it was working earlier, I am not sure what changed or how to resolve this. Kindly provide inputs.

You are using a rather generic way to access data from an object. Because such generic access could be exploited, it is blacklisted, or rather not whitelisted, in the Jenkins Groovy sandbox.
You have several options here:
Just add an exception using in-process script approval.
Use a less generic and therefore safer syntax like env.JOB_NAME, as sketched below.
I would definitely go for the second option in your case, as it has no disadvantages and is simpler than your current code.
As for why it worked before: there might have been a script approval which somehow got lost (happened to me once), or the call you are using got un-whitelisted in an update of the Script Security plugin.
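A minimal sketch of the second option, assuming the script runs as an "Execute system Groovy script" build step where build and listener are pre-bound variables (whether getEnvironment is already whitelisted in your sandbox configuration may still vary):

// Read JOB_NAME from the build's injected environment instead of calling
// the blacklisted hudson.model.Item.getName().
String jobName = build.getEnvironment(listener).get("JOB_NAME");
println("Cleaning up build history for: " + jobName);

Because the job name comes straight from the environment, the sandbox never has to approve a reflective call on hudson.model.Item.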

Related

How can I run various tests for Quarkus Kafka Stream with Testcontainer?

Following the steps described here https://quarkus.io/guides/kafka#testing-using-a-kafka-broker it's possible to define Quarkus tests using a "real" Kafka broker.
@QuarkusTest instantiates all the resources needed, including KafkaStreams, and during the individual tests (@Test) we can limit ourselves to producing records for input topics and consuming results from output topics.
The current stream Topology includes steps of groupBy, aggregation, join, ...
The problem is that, after the first test, all the other tests see "dirty aggregates". A kafkaStreams.cleanUp() might solve the problem, but it produces an error:
Caused by: java.lang.IllegalStateException: Cannot clean up while running.
at org.apache.kafka.streams.KafkaStreams.cleanUp(KafkaStreams.java:1486)
at eu.reply.lea.visibility.unieuro.stream.TopologyProducerIT.setup(TopologyProducerIT.java:70)
at eu.reply.lea.visibility.unieuro.stream.TopologyProducerIT_Bean.create(Unknown Source)
at eu.reply.lea.visibility.unieuro.stream.TopologyProducerIT_Bean.get(Unknown Source)
at eu.reply.lea.visibility.unieuro.stream.TopologyProducerIT_Bean.get(Unknown Source)
at io.quarkus.arc.impl.InstanceImpl.getBeanInstance(InstanceImpl.java:225)
at io.quarkus.arc.impl.InstanceImpl.getInternal(InstanceImpl.java:211)
at io.quarkus.arc.impl.InstanceImpl.get(InstanceImpl.java:97)
... 73 more
The question is: what is the correct approach to run Kafka Streams tests in Quarkus? The "traditional" approach of performing a test, rolling back, and continuing with the next one seems not applicable.
Also the following approach fails:
// test 1
kafkaStreams.close();
kafkaStreams.cleanUp();
kafkaStreams.start();
// test 2
kafkaStreams.close();
kafkaStreams.cleanUp();
kafkaStreams.start();
// ...
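One detail worth noting about the snippet above: a KafkaStreams instance cannot be restarted after close(), and cleanUp() is only legal while the instance is not running, which matches the IllegalStateException shown earlier. A minimal sketch under that assumption, creating a fresh instance per test (the application id and bootstrap address are placeholders; 'topology' stands for your existing Topology under test):

import java.util.Properties;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "topology-it");        // assumed id
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // broker address from the Testcontainer in practice

// Per test: build a fresh KafkaStreams, clean local state before starting,
// and close it when the test is done. A closed instance cannot be restarted.
KafkaStreams streams = new KafkaStreams(topology, props);
streams.cleanUp();   // allowed here: the instance has not been started yet
streams.start();
// ... produce records to input topics, consume and assert output topics ...
streams.close();     // the next test must construct a new KafkaStreams instance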

Using Optaplanner for VRPPD

I am trying to run the example "optaplanner-mixedvrp-experiment" developed by Geoffrey De Smet, and when I run it, it throws the following error:
Caused by: java.lang.IllegalStateException: The entity (MY) has a
variable (previousStandstill) with value (MUNO) which has a
sourceVariableName variable (nextVisit) with a value (WERBOMONT) which
is not null. Verify the consistency of your input problem for that
sourceVariableName variable.
I have not made any changes; I have only cloned and executed it. I import and solve, and it throws this error.
Do you know what could be happening?
I am applying it in the development of a VRP variant with multiple deliveries and collections, but it throws the same error. I have activated FULL_ASSERT mode, and nextVisit, previousStandstill, and visitIndex are always null.
It's been a long time since I looked at that code, so it's using an old version of OptaPlanner. Our goal is still to clean it up and offer an out-of-the-box example for VRPPD (and probably remove some boilerplate along the way, using the upcoming @CollectionPlanningVariable etc.). That being said, we have multiple users and customers who used that optaplanner-mixedvrp-experiment to successfully build VRPPD implementations.
Which dataset did you try?
FWIW, that IllegalStateException says that when A.previous = B, B.next is not A. So either the dataset importer didn't import it correctly before calling solve() (especially likely if it fails before the first CH step in FULL_ASSERT), or one of the custom moves corrupted the model.
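A hypothetical pre-solve consistency check along those lines (the Visit and Standstill types and their accessors are assumptions based on the variable names in the error message):

// Before calling solve(), verify the chain invariant the exception describes:
// if visit.previousStandstill == standstill, then standstill.nextVisit == visit.
for (Visit visit : solution.getVisitList()) {
    Standstill previous = visit.getPreviousStandstill();
    if (previous != null && previous.getNextVisit() != visit) {
        throw new IllegalStateException("Chain corrupted at " + visit
                + ": " + previous + " points back to " + previous.getNextVisit());
    }
}

If this already fails right after import, the importer is the culprit; if it only fails mid-solving, suspect a custom move.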

karate-gatling report aggregation

One question, as I am just starting to use karate-gatling: is it possible to aggregate the generated reports, so that after multiple runs you get one single report? It would be nice to be able to compare performance somehow, to automatically learn whether there is a performance degradation or not. What I tried, without success, was to copy the simulation logs and afterwards only generate the reports ("gatling.bat -ro simulations"). The error that I got was:
gatling.bat -ro simulations/catskaratesimulation-1544015145031
GATLING_HOME is set to "D:\AutomationTeam\gatling-charts-highcharts-bundle-3.0.1.1"
JAVA = ""C:\Program Files\Java\jdk1.8.0_131\bin\java.exe""
Parsing log file(s)...
Exception in thread "main" java.lang.NumberFormatException: For input string: "catskaratesimulation"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Long.parseLong(Long.java:589)
at java.lang.Long.parseLong(Long.java:631)
at scala.collection.immutable.StringLike.toLong(StringLike.scala:305)
at scala.collection.immutable.StringLike.toLong$(StringLike.scala:305)
at scala.collection.immutable.StringOps.toLong(StringOps.scala:29)
at io.gatling.charts.stats.LogFileReader.$anonfun$firstPass$1(LogFileReader.scala:102)
at scala.collection.Iterator.foreach(Iterator.scala:937)
at scala.collection.Iterator.foreach$(Iterator.scala:937)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
at io.gatling.charts.stats.LogFileReader.firstPass(LogFileReader.scala:86)
at io.gatling.charts.stats.LogFileReader.$anonfun$x$4$1(LogFileReader.scala:125)
at io.gatling.charts.stats.LogFileReader.parseInputFiles(LogFileReader.scala:63)
at io.gatling.charts.stats.LogFileReader.<init>(LogFileReader.scala:125)
at io.gatling.app.RunResultProcessor.initLogFileReader(RunResultProcessor.scala:67)
at io.gatling.app.RunResultProcessor.processRunResult(RunResultProcessor.scala:49)
at io.gatling.app.Gatling$.start(Gatling.scala:81)
at io.gatling.app.Gatling$.fromArgs(Gatling.scala:46)
at io.gatling.app.Gatling$.main(Gatling.scala:38)
at io.gatling.app.Gatling.main(Gatling.scala)
Is there another way to do it? Should I somehow reconfigure gatling? Thanks!
It worked when using the same Gatling version (2.2.4) that produced the simulation logs, via gatling.bat -ro folder_with_simulations.

Array in output schema caused exception

I am following this WordCount example using the Google BigQuery-Hadoop connector:
https://developers.google.com/hadoop/writing-with-bigquery-connector#completecode
The example works fine as it is.
To test an array in the output schema, I have altered just one line in the code by adding an array object definition to the output schema:
String outputTableSchema = "[{'name': 'Word','type': 'STRING'},{'name': 'Number','type': 'INTEGER'},{'name':'Persons','mode':'REPEATED','type':'RECORD','fields':[{'name': 'name','type': 'STRING'},{'name': 'age','type': 'INTEGER'}]}]";
Now when I run the WordCount example, it gives this exception:
java.lang.IllegalStateException
at com.google.gson.JsonArray.getAsString(JsonArray.java:133)
at com.google.cloud.hadoop.io.bigquery.BigQueryUtils.getSchemaFromString(BigQueryUtils.java:97)
at com.google.cloud.hadoop.io.bigquery.BigQueryOutputFormat.getRecordWriter(BigQueryOutputFormat.java:121)
at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.<init>(ReduceTask.java:568)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:637)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:418)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
Does anyone know what the issue is?
Thank you
This is actually a bug in the current version of the BigQuery connector which prevents it from supporting inner records with more than 1 field.
We have a fix internally and it's slated to go out with the next release (0.4.3), which may still be a couple of weeks out; if you'd like to help try out a staging build, feel free to reach out to gcp-hadoop-contact@google.com and we can provide instructions.
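Until that release, a hypothetical workaround consistent with the bug description is to keep any inner RECORD down to a single field, for example:

// Illustrative only: the nested 'Persons' record is reduced to one field
// ('name'), sidestepping the single-field limitation described above.
String outputTableSchema =
    "[{'name': 'Word','type': 'STRING'},"
    + "{'name': 'Number','type': 'INTEGER'},"
    + "{'name':'Persons','mode':'REPEATED','type':'RECORD',"
    + "'fields':[{'name': 'name','type': 'STRING'}]}]";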

Lucene Search Error Stack

I am seeing the following error when trying to search using Lucene (version 1.4.3). Any ideas why I could be seeing this and how to fix it?
Caused by: java.io.IOException: read past EOF
at org.apache.lucene.store.InputStream.refill(InputStream.java:154)
at org.apache.lucene.store.InputStream.readByte(InputStream.java:43)
at org.apache.lucene.store.InputStream.readVInt(InputStream.java:83)
at org.apache.lucene.index.FieldInfos.read(FieldInfos.java:195)
at org.apache.lucene.index.FieldInfos.<init>(FieldInfos.java:55)
at org.apache.lucene.index.SegmentReader.initialize(SegmentReader.java:109)
at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:89)
at org.apache.lucene.index.IndexReader$1.doBody(IndexReader.java:118)
at org.apache.lucene.store.Lock$With.run(Lock.java:109)
at org.apache.lucene.index.IndexReader.open(IndexReader.java:111)
at org.apache.lucene.index.IndexReader.open(IndexReader.java:106)
at org.apache.lucene.search.IndexSearcher.<init>(IndexSearcher.java:43)
In this same environment I also see the following error:
Caused by: java.io.IOException: Lock obtain timed out:
Lock@/tmp/lucene-3ec31395c8e06a56e2939f1fdda16c67-write.lock
at org.apache.lucene.store.Lock.obtain(Lock.java:58)
at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:223)
at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:213)
The same code works in a test environment, but not in production. I cannot identify any obvious differences between the two environments.
Either the file permissions are wrong (the process needs write permission on the index directory), or you are not able to access a locked file that the current process needs.
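A hypothetical way to check both conditions from code, using the Lucene 1.4.x API (the index path is a placeholder):

import java.io.File;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

// Verify the process can read and write the index directory, then clear a
// stale write lock left behind by a crashed process. Only unlock when you are
// sure no live IndexWriter still holds the lock.
File indexDir = new File("/path/to/index");
System.out.println("canRead=" + indexDir.canRead() + ", canWrite=" + indexDir.canWrite());
Directory dir = FSDirectory.getDirectory(indexDir, false);
if (IndexReader.isLocked(dir)) {
    IndexReader.unlock(dir);
}

Comparing the output of these checks between the test and production environments should surface the difference.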