Best way to use groovy scripts in an application

I'm trying to use Groovy scripts in my application. The problem is that GroovyScriptEngine#run always compiles the script, even if it was compiled in previous runs and hasn't changed since, and even if I set a physical output folder in the configuration to save the compilation results.
What is the best way of working around this? The optimum for me is to be able to ship the script together with a folder containing the precompiled results, so that no compilation is done (unless the script is modified, of course).

Grails 1.3.5 is using Groovy 1.7.5. In that Groovy version, GroovyScriptEngine.run(..) calls the following methods: createScript(String, Binding) --> loadScriptByName(String) --> isSourceNewer(ScriptCacheEntry).
isSourceNewer(ScriptCacheEntry) is defined as (unfortunately, I didn't find a matching source file on the web):
protected boolean isSourceNewer(ScriptCacheEntry entry)
        throws ResourceException {
    // ...
    for (String scriptName : entry.dependencies) {
        // ...
        return true; // without any further condition!
    }
    return false;
}
This implements the (odd) logic "if a script has dependencies, it is newer than the cached script (and needs to be re-compiled)". That's not what the code is supposed to do; it's supposed to decide by modification time.
In newer versions of GroovyScriptEngine this has been corrected (there have been massive changes to the logic), but for now you'd need to subclass GroovyScriptEngine and override isSourceNewer(ScriptCacheEntry) to fix the logic yourself.
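A minimal sketch of such a subclass (assumptions: that ScriptCacheEntry exposes its dependencies and a lastModified timestamp to subclasses; check the 1.7.5 source before relying on this):

import groovy.util.GroovyScriptEngine;
import groovy.util.ResourceException;

import java.io.IOException;
import java.net.URLConnection;

public class FixedGroovyScriptEngine extends GroovyScriptEngine {

    public FixedGroovyScriptEngine(String[] roots) throws IOException {
        super(roots);
    }

    @Override
    protected boolean isSourceNewer(ScriptCacheEntry entry) throws ResourceException {
        if (entry == null) return true;
        for (String scriptName : entry.dependencies) {
            // Compare the dependency's current timestamp against the cached
            // entry instead of returning true unconditionally.
            URLConnection conn = getResourceConnection(scriptName);
            if (conn.getLastModified() > entry.lastModified) {
                return true;
            }
        }
        return false;
    }
}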
Edit: The bug has been reported and fixed in Groovy 1.7.6, so try using Groovy 1.7.6 in your Grails lib folder.

The solution (hack) I eventually used was to stream out the scriptCache variable using XStream, read it back in, and set it on the engine object.
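For illustration, a hedged sketch of that hack (it assumes the private field is really named scriptCache in your Groovy version, as it is in the 1.7.x source; XStream and reflection do the rest):

import com.thoughtworks.xstream.XStream;
import groovy.util.GroovyScriptEngine;

import java.io.*;
import java.lang.reflect.Field;

public class ScriptCachePersistence {

    public static void save(GroovyScriptEngine engine, File target) throws Exception {
        Field f = GroovyScriptEngine.class.getDeclaredField("scriptCache");
        f.setAccessible(true);
        Writer w = new FileWriter(target);
        try {
            new XStream().toXML(f.get(engine), w); // serialize the cache to XML
        } finally {
            w.close();
        }
    }

    public static void restore(GroovyScriptEngine engine, File source) throws Exception {
        Field f = GroovyScriptEngine.class.getDeclaredField("scriptCache");
        f.setAccessible(true);
        Reader r = new FileReader(source);
        try {
            f.set(engine, new XStream().fromXML(r)); // put the deserialized cache back
        } finally {
            r.close();
        }
    }
}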

Not sure if this helps you, but you can alter GroovyScriptEngine's behaviour using CompilerConfiguration (see GroovyScriptEngine.setConfig). There's an option CompilerConfiguration.setRecompileGroovySource, which can be used to set whether the sources will be reloaded and recompiled if they change. You can read more about CompilerConfiguration here (page 282).
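For example, a minimal sketch (the "scripts" root directory and "hello.groovy" script name are hypothetical):

import groovy.util.GroovyScriptEngine;
import org.codehaus.groovy.control.CompilerConfiguration;

public class EngineSetup {
    public static void main(String[] args) throws Exception {
        GroovyScriptEngine engine = new GroovyScriptEngine(new String[] { "scripts" });
        CompilerConfiguration config = new CompilerConfiguration();
        config.setRecompileGroovySource(false); // don't reload/recompile sources on change
        engine.setConfig(config);
        engine.run("hello.groovy", "some argument");
    }
}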

Related

How to differentiate if a TBO is called when importing new Document vs for any other operations

We are trying to add one additional feature to our method for a TBO. The feature needs to be executed only when a new document of that object type is imported, and it should not be executed in any other case, such as checkin, checkout, or any changes to attributes.
However, the new code is getting called every time we make any change to an attribute of that document.
We have put that code in the doSave() method.
I tried the isNew() method to distinguish between a newly imported document and the other scenarios, but couldn't get it to work; maybe I'm missing the usage details of the method.
Can anyone suggest anything?
We are on Documentum version 7.2.
I always use the isNew() method to check whether an object is new or versioned; I don't remember having problems with it in any DFC version.
The only thing that comes to mind is to make sure you don't rely on isNew() after calling super.doSave(), since right after that call the method returns false.
But this is expected behaviour.
If you really need to do this - some calculations based on programmatically preset data - make sure you save the value in a local variable and use that throughout your code, as in the sketch below.
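A minimal sketch of that pattern, assuming the usual TBO doSave() override (the exact doSave signature may differ between DFC versions):

import com.documentum.fc.client.DfDocument;
import com.documentum.fc.common.DfException;

public class MyDocumentTBO extends DfDocument {

    @Override
    protected synchronized void doSave(boolean saveLock, String versionLabels,
                                       Object[] extendedArgs) throws DfException {
        // Capture the flag BEFORE super.doSave(): right after that call,
        // isNew() returns false.
        boolean isNewImport = isNew();

        super.doSave(saveLock, versionLabels, extendedArgs);

        if (isNewImport) {
            // Runs only when a brand-new document was imported/created.
        }
    }
}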
If you think you are experiencing a bug with the method, try another DFC version or report a bug to Support.

ClassLoader::getSystemResource returning null

It used to work in the past, but I don't know what happened in the meantime; now it always returns null.
The file to read is in the project root directory, which corresponds to the output of Paths.get(".").
Note: function is top-level
I'm reading the imgui.ini file here
fileLoadToLines(iniFilename)
where it's defined as
fun fileLoadToLines(filename: String) = ClassLoader.getSystemResourceAsStream(filename)?.use { it.bufferedReader().readLines() }
I also tried Thread.currentThread().contextClassLoader, with no success.
What's the problem?
The project root directory is typically the default current working directory, but it is not necessarily on the classpath. That's why Paths.get(".") returns it while the classloader doesn't find the file under it: the classloader only goes by what's on the classpath.
It used to work probably because you previously had the project root added to the runtime classpath. The solution I would recommend is, instead of using a classloader, just use the file system API in java.io to load it.
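For example, a Java sketch of the same helper using java.io against the working directory (the original helper is Kotlin; this version is just illustrative):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class IniLoader {

    // Reads the file relative to the current working directory,
    // not the classpath.
    static List<String> fileLoadToLines(String filename) throws IOException {
        List<String> lines = new ArrayList<String>();
        BufferedReader reader = new BufferedReader(new FileReader(filename));
        try {
            String line;
            while ((line = reader.readLine()) != null) {
                lines.add(line);
            }
        } finally {
            reader.close();
        }
        return lines;
    }
}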

Ballerinalang configuration

I was trying to run a Ballerina program in IntelliJ IDEA. Then the Edit Configuration dialog appears, and it says:
Error: Main run kind is selected, but the file does not contain a main function.
What should I do? And what should I select in Program Arguments?
source code:
import ballerina.net.http;
import ballerina.lang.messages;

@http:BasePath {value:"/helloservice"}
service helloService {

    @http:GET {}
    @http:PATH {value:"/hello?name={name}"}
    resource hello (message m, @http:QueryParam {value:"name"} string name) {
        string respStr = "Hello, World " + name + "!\n";
        message response = {};
        messages:setStringPayload(response, respStr);
        reply response;
    }
}
It seems the issue here is that you have manually created a main run configuration and are trying to run a service with it. Please select the service run kind in the configuration instead.
Also, you don't have to manually create run configurations. The IntelliJ IDEA plugin can automatically detect the run kind when you run a main function or a service using the gutter run icon; the run configuration is created automatically.
If you first run a main and then run a service, the run configuration is automatically changed to match as well, so no manual intervention is needed.
On a side note, the code seems to use much older Ballerina syntax, and I would advise using the latest Ballerina syntax to avoid any issues with the latest IntelliJ IDEA plugin. Please refer to the Ballerina examples for the latest syntax.

UseLegacyPathHandling is not loaded properly from app.config runtime element

I'm trying to use the new long path support in my app. In order to use it without forcing clients to have the newest .NET 4.6.2 installed on their machines, one only needs to add these elements to the app.config (see this link for more info: https://blogs.msdn.microsoft.com/dotnet/2016/08/02/announcing-net-framework-4-6-2/):
<startup>
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5.1"/>
</startup>
<runtime>
    <AppContextSwitchOverrides value="Switch.System.IO.UseLegacyPathHandling=false" />
</runtime>
When I use it in my execution project, it works perfectly. The problem is in my test projects (which use NUnit). I've added an app.config to my test project in the same way I added it to the execution project.
Using the ConfigurationManager class, I've verified that the app.config is indeed loaded (in short: via an app setting which I was able to retrieve in a unit test).
Using ConfigurationManager.GetSection("runtime"), I even verified that the runtime element has been loaded properly (the _rawXml value is the same as in app.config).
But (!) for some reason the app.config runtime element is not influencing the UseLegacyPathHandling switch, and therefore all of my calls with long paths fail.
I guess the problem somehow relates to the fact that test projects are built as DLLs that are loaded by the NUnit engine, which is the execution entry point.
I'm facing the exact same problem in another project I have, which is a DLL loaded by the Office Word application. I believe the problem is the same in both cases and derives from the fact that these projects are not meant to be an execution entry point.
It's important to understand that I have no access to the executables themselves (Word or NUnit) and therefore I can't configure them myself.
Is there an option to somehow make the AppContextSwitchOverrides get loaded from scratch dynamically? Other ideas will be most welcome.
I've been having the same issue, and have noticed the same failure to load that particular setting.
So far, what I've got is that the caching of the settings is at least partly to blame.
If you check out how it's implemented, disabling the cache has no effect on values that have already been read (i.e. if caching is enabled and a switch is accessed during that time, then it will stay cached):
https://referencesource.microsoft.com/#mscorlib/system/AppContext/AppContext.cs
This doesn't seem to be an issue for most of the settings, but for some reason the UseLegacyPathHandling and BlockLongPaths settings are already cached by the time I can first step into the code.
At this time, I don't have a good answer, but if you need something in the interim, I've got a highly suspect temporary fix for you. Using reflection, you can fix the setting in the assembly init. It writes to private variables by name and uses the specific value of 0 to invalidate the cache, so it's a very delicate fix and not appropriate as a long-term solution.
That being said, if you need something that 'just works' for the time being, you can check the settings and apply the hack as needed.
Here's a simple code example. This would be a method you'd need in your test class.
[AssemblyInitialize]
public static void AssemblyInit(TestContext context)
{
    // Check to see if we're using legacy paths.
    bool stillUsingLegacyPaths;
    if (AppContext.TryGetSwitch("Switch.System.IO.UseLegacyPathHandling", out stillUsingLegacyPaths) && stillUsingLegacyPaths)
    {
        // Here's where we trash the private cached field to get this to ACTUALLY work.
        var switchType = Type.GetType("System.AppContextSwitches"); // <- internal class, bad idea.
        if (switchType != null)
        {
            AppContext.SetSwitch("Switch.System.IO.UseLegacyPathHandling", false); // <- should disable legacy path handling

            // Get the private field that caches the path handling value (bad idea).
            var legacyField = switchType.GetField("_useLegacyPathHandling", System.Reflection.BindingFlags.Static | System.Reflection.BindingFlags.NonPublic);
            legacyField?.SetValue(null, (Int32)0); // <- the cache uses 0 to indicate no value, -1 for false, 1 for true.

            // Re-read the switch; with the cache invalidated, it now reflects the SetSwitch value.
            AppContext.TryGetSwitch("Switch.System.IO.UseLegacyPathHandling", out stillUsingLegacyPaths);
            Assert.IsFalse(stillUsingLegacyPaths, "Testing will fail if we are using legacy path handling.");
        }
    }
}

atlassian-plugin.xml contains a definition of component-import. This is not allowed when Atlassian-Plugin-Key is set

This is what I get when I run atlas-create-jira-plugin followed by atlas-create-jira-plugin-module, selecting option 1: Component Import.
The problem is that all the tutorial examples appear to have plugin descriptors generated by an old SDK version (which won't deploy with newer versions of the SDK/Jira at all) and which do not feature Atlassian-Plugin-Key, so I can't find my way to import a component.
I'm using SDK 6.2.3 and Jira 7.1.1.
Any hint - how to get this sorted out?
anonymous is correct. The old way of doing things was to put the <component-import> tag in your atlassian-plugin.xml. The new and also recommended way is to use Atlassian Spring Scanner. When you create your add-on using atlas-create-jira-plugin and your pom.xml has the <Atlassian-Plugin-Key> tag plus the dependencies atlassian-spring-scanner-annotation and atlassian-spring-scanner-runtime, then you are using the new way.
If you have both dependencies, you are using Atlassian Spring Scanner version 1.x. If you only have atlassian-spring-scanner-annotation, then you are using version 2.x.
You don't have to omit/comment out Atlassian-Plugin-Key in your pom.xml, and you don't have to put component-import in your atlassian-plugin.xml.
For example, say you want to add licensing for your add-on and need to import the component PluginLicenseManager. You just go straight to the code, and your constructor might look like this:
@Autowired
public MyMacro(@ComponentImport PluginLicenseManager licenseManager) {
    this.licenseManager = licenseManager;
}
And your class like this:
@Scanned
public class MyMacro implements Macro {
If memory serves me right, be sure to check for null, because sometimes Atlassian Spring Scanner can't inject a component. I think in version 1, when writing an @EventListener, it could not inject a ConversionContext, but when writing a Macro it was able to inject one.
According to
https://bitbucket.org/atlassian/atlassian-spring-scanner
component-import is not needed. You can replace it with the @ComponentImport annotation in your Java code.
Found the answer here: https://developer.atlassian.com/docs/advanced-topics/configuration-of-instructions-in-atlassian-plugins
It looks like I had somehow missed that Atlassian-Plugin-Key can be omitted, and that it must be omitted when you need to import components.
This key just tells Spring not to 'transform' the plugin's Spring configuration, which must happen as part of the component import process.