How do I get a list of the Velocity objects and APIs usable in user macros in Atlassian Confluence? [closed]

I need to write some user macros in Atlassian Confluence. It uses the Apache Velocity templating engine. How can I find which APIs are available in that context?
For example, one community-provided macro uses objects like $spaceManager. How can I enumerate all of the objects that are available, and also get documentation for their methods?
The page on the Atlassian site listing the objects is incomplete: not only does it only list a small fraction of the true objects available, it's also not specific to the user macro context, or even to my specific version of Confluence. (For example, user macros are given a different Velocity context than Velocity used in other plugin points and have different objects available, and the objects available in Confluence 5.1 are different from those available in Confluence 5.6.)
There are similar questions on the Atlassian Answers site, but none point to complete API and type references.

The official list of Velocity objects visible in Confluence is listed on Atlassian's site, but as you may have noticed, it is far from complete.
With JIRA, there is a trick that one can use to list all objects in the current Velocity context:
#foreach($p in $ctx.keySet().toArray())
$p.toString() - $ctx.get($p).getClass().getName().toString()
#end
Unfortunately, the JIRA trick above does not work for Confluence because Confluence does not expose the ctx map.
Luckily, you mentioned that you are writing user macros, so not all is lost! Using another trick to access objects that are not normally available in the default context, you can punch a hole through to the class that provides the Velocity context for user macros and give yourself a copy. Combining the two approaches, we get the following for Confluence:
#set($macroUtilClass=$action.class.forName('com.atlassian.confluence.renderer.radeox.macros.MacroUtils'))
#set($getContextMethod=$macroUtilClass.getDeclaredMethod('defaultVelocityContext',null))
#set($ctx=$getContextMethod.invoke(null))
#foreach($p in $ctx.keySet().toArray())
$p.toString() - $ctx.get($p).getClass().getName().toString()<br/>
#end
This gives you a list of all objects available and their corresponding types, like this:
res - com.atlassian.confluence.web.filter.DebugFilter$LoggingResponseWrapper
bootstrap - com.atlassian.confluence.setup.DefaultBootstrapManager
settingsManager - com.atlassian.confluence.setup.settings.DefaultSettingsManager
userAccessor - com.sun.proxy.$Proxy54
seraph - com.atlassian.confluence.util.SeraphUtils
xsrfTokenGenerator - com.atlassian.xwork.SimpleXsrfTokenGenerator
...
To find the methods available through each of those objects, you will need to look up the corresponding JavaDocs by the class name. I find that Google is often the fastest path to done, but you can also go directly to the JavaDocs (for example, for the SpaceManager) and then root around in the other packages or adjust the URL to match the class you want. You will also want to adjust the "latest" in the URL to correspond to the actual version of Confluence you have installed. Lastly, a few of the classes are hidden behind proxies (like userAccessor above), but the variable name often matches the class name, so these are usually easy to figure out.
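As a quick illustration of putting one of the enumerated objects to work, here is a minimal user macro sketch (the space key 'DS' is made up; substitute one that exists in your instance):

## Hypothetical user macro body: look up a space via $spaceManager
#set($space=$spaceManager.getSpace('DS'))
Space name: $space.name<br/>
Space key: $space.key<br/>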
Note that this example is specifically tailored for user macros. The objects available in other Velocity contexts will certainly be different.

Related

What are the General Development Best Practices in MuleSoft [closed]

What are some common best practices one needs to consider while developing apps in MuleSoft for clients, focusing on Anypoint Studio 7.x.x and Mule 4?
List your recommendations that you have followed with clients.
Please note: I asked this general question to create a dedicated place for MuleSoft development best practices on SO, rather than having similar titles where users push their personal agendas or user-driven case scenarios.
Mule developers should consider this a critical topic.
Given below is a wide range of MuleSoft best practices that come into play during the app development phase.
Development best practices are commonly divided into three categories:
Mule General Best Practices
Mule Munits Best Practices
Mule Error Handling and Exception Scenario Best Practices
Mule General Best Practices
Note: Suggestions are placed in <>. These are just best practices, not requirements.
Naming Conventions
Flow and SubFlow Names. <Must use camelCase>
XML files and properties files <Must be all lower case with '-' in between words>
Other common files (Examples, JSON files, Scripts) <Must use camelCase>
All the rest (Components, Transformers, Scopes, Flow Controls). <First letters of each word must be capitalized. Spaces must be used between words>
Property Parameterization
Configuration Properties. <All configurations must be declared as *key=value* in the property files>
Environment-Based Properties. <Configuration properties must be segregated into files based on the *environment* the app is deployed to. Example: "config-dev.properties", "config-qa.properties", "config-prod.properties">
Runtime Property Variables. <Should be used to fetch the appropriate ".properties" file for the environment you deploy to. For example, name your environment files "/config-${env}.properties" using configuration properties in global elements, and pass 'env=dev' or 'env=qa' as a runtime variable in Run Configurations. You can also pass global arguments like 'crypto.key=w4ref$%wrfw3', used to decrypt encrypted values. A sketch follows below.>
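To make the property parameterization concrete, here is a hypothetical 'config-dev.properties' (all keys and values are invented for illustration; the encrypted value uses Mule's ![...] secure-property notation):

# src/main/resources/properties/config-dev.properties (hypothetical)
http.host=0.0.0.0
http.port=8081
db.username=dev_user
db.password=![bXVsZTppc2F3ZXNvbWU=]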
Externalizing Transform Message code to DataWeave script files. <A common rule of thumb is to use script files when a transformation is longer than 10 lines>
RAML and Project Files Folder Structuring
Place all the .properties files in src/main/resources/properties
Place all dw script files in src/main/resources/scripts.
Place the RAML examples, libraries, dataTypes, securitySchemes and Traits in dedicated folders while designing in Design Center.
Keep a separate file for the APIkit router and all its generated flows.
Error handling must have its own separate file. Error flows must not be seen anywhere else apart from this file.
Move all connector configurations/global elements into a separate 'global-config.xml' file. <This keeps the rest of the Mule XML files clean and tidy>
Hard coded values
Be aware of which values you should hard-code and which you should not.
Most global element configurations should be property-parameterized rather than hard-coded, e.g. reconnection strategy, connection idle timeouts, max idle timeouts, hosts, ports, usernames, passwords and more.
Property value encryption: critical information must be encrypted. <Using secure-properties-tools.jar or the Mule property editor>
Auto-hide sensitive property values passed in the Runtime Manager tab of CloudHub. <Achieved using 'mule-artifact.json'; see the sketch below>
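For the auto-hide point, a minimal 'mule-artifact.json' sketch might look like the following (the property names are illustrative; values listed under secureProperties are masked in Runtime Manager):

{
  "minMuleVersion": "4.3.0",
  "secureProperties": ["crypto.key", "db.password"]
}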
Using functions, local/global variables in Transform messages to enforce DRY
Add detailed inline XML comments for flows, choice, etc.
Add descriptive multiline code comments for any complex transformations in Transform messages.
Replace long repeating 'if/else' statements with 'match/case' in DataWeave (see the sketch after this list).
If flows are getting big through the use of many choice routers, refactor each choice scope into its own subflow.
A good rule of thumb is that if you have to scroll the Mule canvas back and forth to see the whole flow, it's too complex and should be rewritten.
Avoid Mule Async scope calls as much as you can; several developers have reported data integrity issues with them.
Do not use Mule object stores for fast, long-repeated operations. Know your TPS. Always clear your object stores from time to time during cyclic executions, as the requirements allow.
Keep track of each 'variable' initialized during flow execution. Always make sure to clear or remove variables once you have finished using them. <This keeps the process clean, removing unnecessary code manipulation and heap pressure>
Change your Mule loggers from 'INFO' to 'DEBUG' once development is done. <This avoids over-burdening the Mule app when deployed in CloudHub, and keeps the Mule health monitor in check so that the app does not auto-restart>
Make sure an app never crosses the 70 percent CPU usage shown in the dashboards; size your apps accordingly.
Always be aware of data loss caused by fatal errors/application restarts. <Always use backup stores such as AWS, a database, object stores, Mule load balancers, etc.>
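As promised above, a rough DataWeave 2.0 sketch of swapping an if/else chain for match/case (the field names and labels are invented):

%dw 2.0
output application/json
---
{
    // instead of: if (payload.status == "A") "Active" else if (payload.status == "B") "Blocked" else "Unknown"
    statusLabel: payload.status match {
        case "A" -> "Active"
        case "B" -> "Blocked"
        else -> "Unknown"
    }
}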
Mule Munits Best Practices
Never forget to use Spy and Asserts.
Scenario-based test cases:
Success Scenarios. <Have one major test case to run through the entire API once>
Failure Scenarios. <Have multiple test cases for each flow or subflows, testing for all possible failure situations, like testing mapping, choice routing etc>
Always mock all external service calls, such as HTTP, DB and SQS connectors. <Never call your actual endpoints in MUnits; see the sketch below>
Consider putting your test payloads in 'src/test/resources/testExample.json' rather than directly in the mocks or events. <Use #[MunitTools::getResourceAsString('testExample.json')]>
Include the files needed under 'src/test/resources' for MUnit test runs, similar to having them under 'src/main/resources'.
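Putting the mocking and payload points together, a rough MUnit 2 sketch might look like this (the flow name, assertion and test values are invented):

<munit:test name="get-customer-happy-path-test" description="Run through the API with a mocked HTTP call">
    <munit:behavior>
        <!-- Mock the outbound HTTP call instead of hitting the real endpoint -->
        <munit-tools:mock-when processor="http:request">
            <munit-tools:then-return>
                <munit-tools:payload value="#[MunitTools::getResourceAsString('testExample.json')]" mediaType="application/json" />
            </munit-tools:then-return>
        </munit-tools:mock-when>
    </munit:behavior>
    <munit:execution>
        <flow-ref name="getCustomerFlow" />
    </munit:execution>
    <munit:validation>
        <!-- Assert against the mocked payload -->
        <munit-tools:assert-that expression="#[payload.status]" is="#[MunitTools::equalTo('ACTIVE')]" />
    </munit:validation>
</munit:test>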
Mule Error Handling and Exception Scenario Best Practices
All error status codes must be handled appropriately, as the requirements dictate.
Errors must be separately specified in a 'global-error-handling.xml' file.
All exceptions/errors must be properly branched, as given below:
System Exceptions <Source related data exceptions>
Business Exceptions <Target\End System exceptions (Not to be bothered by the Mule APP, but must be handled)>
System\Application Errors
Favor the use of object stores and data queues for failed messages and record reprocessing.
Have a retry mechanism for all HTTP-based errors.
Imagine all the hours of pain we can avoid by following these simple recommendations.
Hope this helps!

Why isn't Cucumber considered as a testing tool? [closed]

I'm new to Cucumber, and I'm trying to understand the tool. While reading the documentation, I found that it is defined shortly as "a tool that supports BDD":
Cucumber is a tool that supports Behaviour-Driven Development (BDD).
Also it is described as a "validation tool":
Cucumber reads executable specifications written in plain text and validates that the software does what those specifications say.
On the other side, I noticed the excessive use of the word "test" in the 10-minute tutorial.
AFAIK, what this tool does is agile testing, since it is used massively in e2e testing (test basis = Gherkin feature specs + step definitions). However, the blog says something different:
Finally, remember that Cucumber is not a testing tool. It is a tool for capturing common understanding on how a system should work. A tool that allows you, but doesn't require you, to automate the verification of the behaviour of your system if you find it useful.
Now if this tool is not really about testing, then what use is it intended for?
TL;DR
Cucumber is a BDD framework. Its main components are:
Gherkin: a ubiquitous language, used as a communication tool more than anything, that can serve as a springboard for collaboration. It helps manage the expectations of the business; if everyone can see the changes you are making in a digestible format, they'll hopefully get less frustrated with the development teams. You can also use it to speed up your team's reaction time when there are bugs, by writing the tests that Cucumber wraps with the mindset that someone will have to come back and debug them at some point.
CLI implementation (or the CLI): a test runner based on Gherkin, developed by volunteers who all donate part of their spare time. Each implementation is specific to a programming language, supporting production code that is ready for test. This is the concrete tool/utility.
The Long Version
Gherkin is intended as a communication tool, describing interactions with a system from multiple perspectives (or actors); it just so happens that you can integrate it with test frameworks, which helps ensure that the system in place correctly handles those interactions.
Most commonly, this is from a user's perspective:
Given John has logged in
When he receives a message from Brenda
Then he should be able to click through to his message thread with Brenda via the notification
But it can also be from a component/pages perspective too:
Given the customer history table is displayed
And there have been changes to the customer table for that user since the page was first loaded
When it receives a click to refresh the data
Then the new changes should be displayed
It's all about describing the behaviours and allowing the business and developers to collaborate freely, breaking down the language barriers that usually end up plaguing communication and making both sides frustrated with each other through a lack of mutual understanding of an issue.
This is where the "fun" begins - Anakin, Ep III
You could use these files to create an environment of "living documentation" throughout your development team (and, if successful, the wider business). In theory, worded and displayed correctly, it would be an incredible boon for customer service workers, who would be able to keep up with changes more easily and would have extremely well-described help documentation, without any real additional effort. This isn't something I've seen much in the wild, though. I've written a script at work that does this by converting the features into markdown. Alongside various other markdown tools (mermaid for graphs, tsdoc-plugin-markdown to generate the API docs, and various extensions for my chosen HTML converter, docsify), I've managed to generate something that isn't hard to navigate and that opens up communication between teams that previously found it harder to raise their issues with the dev team. Most people know a little markdown these days, even if it has to be described as "the characters you type in reddit threads and youtube comments to make text bold and italics, etc" for people to get what it is, but it means everyone can contribute to it.
It is an extremely useful tool when it comes to debugging tests, especially when used with the screenplay pattern (less so with the standard page object model, which provides less additional context, but it's still useful), as everything is described in a way that makes it easy to replicate the issue from a user's or component's perspective if it fails.
I've paired it with flow charts, where I draw out the user interactions, pinning the features to it and being able to see in a more visual way where users will be able to do something that we might not have planned for, or even figure out some garish scenario that we somehow missed.
The Long Version Longer
My examples here will mostly be in JavaScript, as we've been developing in a Node environment, but if you wanted to create your own versions, it shouldn't be too different.
The Docs
Essentially, this bit is just for displaying the feature files in a way that is easily digestible by the business. (I have plans to integrate test reports into this too, and to add the ability to switch branches and such.)
First, you want to get a simple array of all of the files in your features folder, and pick out the ones with ".feature" on the end.
Essentially, you just need to flatten an ls here (this can be improved, but we have a requirement to use the LTS version of Node rather than the latest version in general):
const fs = require('fs');
const path = require('path');
// Recursively collect file paths; directories become nested arrays of paths
const walkSync = (d) => fs.statSync(d).isDirectory() ? fs.readdirSync(d).map(f => walkSync(path.join(d, f))) : d;
const flatten = (arr, result = []) => {
if (!Array.isArray(arr)){
return [...result, arr];
}
arr.forEach((a) => {
result = flatten(a, result)
})
return result
}
// Flatten the directory tree and keep only the .feature files
function features (folder) {
const allFiles = flatten(walkSync(path.relative(process.cwd(), folder)))
let convertible = []
for (let file of allFiles) {
if (file.match(/\.feature$/)) {
convertible.push(file)
}
}
return convertible
}
...
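Calling the helper is then just (the folder path is hypothetical):

// prints an array like ['features/login.feature', 'features/billing.feature']
console.log(features('./features'))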
Going through all of those files with a Gherkin parser to pull out your scenarios requires some set up, although it's pretty simple to do, as Gherkin has an extremely well defined structure and known keywords.
There can be a lot of self-referencing, as when you boil it down to the basics, a lot of Cucumber is built on well-defined components. For example, you could describe a scenario as a background that can also have a description, tags and a name:
class Convert {
...
static background (background) {
return {
cuke: `${background.keyword.trim()}:`,
steps: this.steps(background.steps),
location: this.location(background.location),
keyword: background.keyword
}
}
static scenario (scenario) {
return {
...this.background(scenario),
tags: this.tags(scenario.tags),
cuke: `${scenario.keyword.trim()}: ${scenario.name}\n`,
description: `${scenario.description.replace(/(?!^\s+[>].*$)(^.*$)/gm, "$1<br>").trim()}`,
examples: this.examples(scenario.examples)
}
}
...
}
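The small helpers referenced above might look something like this; the shapes follow the Gherkin AST, and steps/examples are left out for brevity:

class Convert {
    // ...
    // Hypothetical helpers; tags and locations follow the Gherkin AST shape
    static tags (tags = []) {
        return tags.map(tag => tag.name).join(' ')
    }
    static location (location) {
        return `line ${location.line}, column ${location.column}`
    }
    // ...
}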
You can flesh it out fully to write either to a single file, or to output a few markdown files (making sure to reference them in a menu file).
Flowcharts
Flow charts make it easier to visualise an issue, and there are a few tools that use markdown to help generate them, like this:
In the back, it'll end up looking like this:
### Login
Customers should be able to log into their account, as long as they have registered.
...
```mermaid
graph TD
navigateToLogin["Navigate to Login"] -->logIn{"Login"}
logIn -->validCredentials["Valid<br>Credentials"]
logIn -->invalidCredentials{"Invalid<br>Credentials"}
invalidCredentials -->blankPass["Blank Password"]
invalidCredentials -->wrongPass["Wrong Password"]
invalidCredentials -->blankEmail["Blank Email"]
invalidCredentials -->wrongEmail["Wrong Email"]
...
click blankPass "/#/areas/login/scenario-blank-password" "View Scenario"
...
```
It's essentially just a really quick way to visualise issues, and it links us to the correct places in the documentation to find an answer. The tool draws out the flowchart; you just have to make the connections between key concepts or ideas on the page (e.g. a new customer gets a different start screen).
Screenplay Pattern, Serenity and Debugging
I think all that really needs to be said here is that when you run a test, this is your output:
✓ Correct information on the billing page
✓ Given Valerie has logged into her account
✓ Valerie attempts to log in
✓ Valerie visits the login page
✓ Valerie navigates to '/login'
✓ Valerie waits up to 5s until the email field does become visible
✓ Valerie enters 'thisisclearlyafakeemail@somemailprovider.com' into the email field
✓ Valerie enters 'P#ssword!231' into the password field
✓ Valerie clicks on the login button
✓ Valerie waits for 1s
It will break down any part of the test into descriptions, which means if the CSS changes, we won't be searching for something that no longer exists, and even someone new to debugging that area of the site will be able to pick up from a test failure.
Communication
I think all of that should show how communication can be improved in a more general sense. It's all about making sure that the projects are accessible to as many people as possible who could contribute something valuable (which should be everyone in your business).

Can Intellij IDEA (14 Ultimate) generate regex based TODO-comments?

A few years back I worked at a company where I could press CTRL+T and a TODO comment was generated. Say my ID, by which other developers identified me, was xy45; then the generated comment was:
//TODO (xy45):
Is something like this available within IntelliJ 14 Ultimate, or did they write their own plugin for it?
What I tried: web research and the JetBrains documentation. It looks like it's not possible out of the box (I'm asking before I write a plugin for it), or it's buried in the various search results regarding the TODO view (due to my bad research skills).
There is no built-in feature in IntelliJ IDEA to generate such comments, so it looks like they did write their own plugin.
Found something that works quite similarly but cannot be bound to a shortcut:
File -> Settings -> Live Templates
I guess the picture says enough to allow customization (consult the JetBrains documentation for more possibilities). E.g. browse to the Live Templates section within the settings, add a new Live Template (small green cross, upper right corner in the above picture) and set the context where this Live Template is applicable.
Note: Once you have defined the Live Template to be applicable within the Java context ('...Change' in the above image, where the red exclamation marks are shown), you can just type "t" or "todo" and hit CTRL+Space (or the shortcut you defined for code completion).
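For reference, the template text for such a Live Template can be as simple as the following, where $END$ is IntelliJ's built-in marker for the final caret position (the user ID is hard-coded here; you could instead create a template variable and bind it to the built-in user() function):

//TODO (xy45): $END$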
I suggest reconsidering that practice altogether. Generally you should not include redundant information which is more easily and reliably accessible through your version control system (available in IDEA directly in the editor using the Annotate feature). It is similar to not using the Javadoc tag @author, as the information provided with it is often outdated, inaccurate and redundant. Additionally, I don't think the author of a TODO is that valuable a piece of information. The person who solves the issue will often be a completely different person, and the TODO should be well documented and descriptive anyway. When you find one of your own old, poorly documented TODOs, you often don't remember all the required information even if you were the author.
However, instead of adding the author's name, a good practice is to create a task in your issue management system and add that task's identifier to the TODO's description. This way you have all your TODOs in evidence in one place; you can add additional information to the task, track progress, assign it, etc. My experience is that if you don't do this, TODOs tend to stay in the code forever, and after some time no one clearly remembers the details of the problem. Additionally, the author mentioned in the TODO has often already left to work for a different company.
Annotated TODO with issue ID

vba code refactoring - are there any tools to assist? [closed]

I am trying to refactor my VBA code. I have been used to refactoring in Java-based IDEs for a number of years. Does the VBA editor support any refactoring, or are there any add-ins? MZ-Tools did not have any such functionality.
I want to be able to do at least the following:
1. Rename variables
2. Split Procedures into sub-procedures to make the code more readable
3. Change the scope of the variable from global to procedure and vice-versa
Disclaimer: I'm heavily involved with this project.
Rubberduck is an open-source add-in for the VBA/VB6 IDE under [very] active development, that includes this functionality.
Version 1.3 includes a Rename refactoring:
Version 2.0 (beta available, still stabilizing) includes a dozen refactorings:
Introduce Parameter promotes a local variable to a parameter
Introduce Field promotes a local variable to module scope
Encapsulate Field turns a public field into a property
Move Closer to Usage moves a field that's only used in 1 procedure, into that procedure. Or moves a local variable immediately above its first use.
Extract Interface lets you pick which class members to extract into an interface, creates a new class module with stubs for them, and makes the original class implement the extracted interface.
Implement Interface creates stubs for all members of an unimplemented interface, so you don't need to create them manually by selecting them one by one in the code pane dropdown:
Implements IClass1
Public Sub IClass1_DoSomething()
Err.Raise 5 'TODO implement interface member
End Sub
Public Function IClass1_GetFoo() As Integer
Err.Raise 5 'TODO implement interface member
End Function
Sub DoSomething()
End Sub
Function GetFoo() As Integer
End Function
More refactoring tools are on the project's roadmap (including Extract Method), which you can follow on GitHub.
The only 'refactoring' tool I know of in VBA is Ctrl+F and Ctrl+R.
I use V-Tools for refactoring-like work as it will do find / replace in objects, not just VBA code.
http://www.skrol29.com/us/vtools.php
Yes there is... almost.
In the good old days I used this one:
http://www.moshannon.com/speedferret.html
It helped me a lot, and I think I still have the 3.5" disks somewhere ;)
The trick is to copy your Excel code to Access or VB6 and do your refactoring there.
Replacing scope: the solution is creative naming and using replace.
Splitting procedures... well, that's manual, I'm sorry.
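To illustrate the manual "split procedures" step, a tiny VBA sketch (all names invented):

'Before: one long procedure doing two unrelated things.
'After: each step extracted into its own procedure, called in order.
Public Sub ProcessReport()
    Dim rowCount As Long
    rowCount = LoadData          'was: 30 lines of loading code
    FormatSheet rowCount         'was: 20 lines of formatting code
End Sub

Private Function LoadData() As Long
    '... the loading code moves here; return the row count
End Function

Private Sub FormatSheet(ByVal rowCount As Long)
    '... the formatting code moves here
End Sub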
It's usually not worth it unless you have some serious Excel VBA code. I would recommend converting most of it into a C# or VB.NET DLL, where you can do refactoring, testing and some modern magic, and doing only as little as you can in VBA.

LGPL grammar file licensing [closed]

Given an LGPL'ed grammar file, is the source generated by a compiler-compiler for the grammar a derivative work? What about if the grammar file was modified before it was given as input to the compiler-compiler? There isn't any linking, at least not in the conventional sense.
If the output is a derivative work, must I then simply provide the (modified) grammar sources, making best efforts to ensure the grammar will function without dependencies imposed by the program/library using it? Or are there more restrictions which must be resolved?
1) Since the grammar contains the essence of the resulting code, it definitely belongs to "all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities" and is not a part of "the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work". In brief, LGPLv3 applies.
So, you need to convey the "Minimal Corresponding Source" (the one used to build the version in the Combined Work) according to sec.4 d) 0) or GPLv3 sec.6, mark it as modified if it is and possibly include custom tools if required by GPL's definition of "Corresponding Source". (In general, as sec.0 says, LGPLv3 is effectively GPLv3 with a few additional provisions.)
2) It might be a derivative work of the generator used as well if the latter inserts parts of itself into the code (see FSF FAQ#Can I use GPL-covered tools... to compile...?) - check the generator's workings and licensing terms if necessary. If it is, you'll have to satisfy both LGPLv3 and the generator's terms that apply to the results of its work.
The best answer, and the one everyone should be giving you, is as follows:
Contact a lawyer
Disclaimer: IANAL and if you want something "official" you should talk to one. That said...
A common-sense approach says that yes, the result of compilation of something that is compilable is a derivative work. For instance, the compiled version of an LGPL library is still LGPL - you can't say that you obtained a compiled version of the library and never compiled it yourself and somehow dodge providing the source code that way.
Thus, the LGPL would require you to distribute the (potentially modified) source of the original LGPL work, such that if an individual wanted to further modify the work, they could.