I am using Apache Tomcat with Velocity and the VelocityViewServlet. I have created a custom tool with a reference to the ViewContext. It all works well.
The question is: what is the best way to locate/load a template and process it with supplied parameters?
I already have the absolute path to the file, obtained via
((ViewContext)context).getRequest().getSession().getServletContext().getRealPath("/")
Do I have to instantiate a VelocityEngine? I suppose there is no global one maintained by Velocity (VelocityViewServlet).
Which of Velocity's resource loaders is best to use, and how?
Several points here:
The VelocityViewServlet instantiates a VelocityEngine itself. It's not global; there is one engine per ServletContext.
The VelocityViewServlet also locates the template corresponding to the request URI itself, using its default loader (the WebappLoader), so you don't have to do that yourself either.
The Velocity context your template is evaluated with will already be populated with all the standard tools (for Tools 2.0), among them $params, which lets you inspect HTTP parameters.
I don't understand "a custom tool with a reference to the ViewContext": instead of using the ViewContext, add to your custom tool the appropriate setters for whichever of the properties listed here you need (for instance, if you need access to the request, declare a "public void setRequest(HttpServletRequest request)" method). Remember that, from a bottom-up perspective, your tool should only be reachable via the key you choose for it in your tools configuration file, and should not be aware of Velocity.
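For illustration, a minimal tool along these lines might look as follows (the class name, tool key, and greet() method are made up):

import javax.servlet.http.HttpServletRequest;

public class MyTool {

    private HttpServletRequest request;

    // VelocityTools calls this setter before handing the tool to the
    // template; the class itself never touches the Velocity API.
    public void setRequest(HttpServletRequest request) {
        this.request = request;
    }

    // Example method a template could call as $mytool.greet()
    public String greet() {
        String name = request.getParameter("name");
        return (name == null) ? "Hello, stranger" : "Hello, " + name;
    }
}

Registered under a key such as "mytool" in your tools configuration file, it is then directly usable from any template rendered by the VelocityViewServlet.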
I advise you to use VelocityTools 2.0, which is a more mature library than the 1.x versions.
I am trying to write a Vulkan program but am somewhat fuzzy on how the extension mechanism works.
Concretely, I want to access VK_COLOR_SPACE_EXTENDED_SRGB_NONLINEAR_EXT (it is not found at compile time), but I am not sure how to include the swapchain_colorspace extension.
VK_EXT_swapchain_colorspace is an instance extension.
You can enable the extension by passing its name to vkCreateInstance via the pCreateInfo->ppEnabledExtensionNames member.
You can use either "VK_EXT_swapchain_colorspace" directly or use the VK_EXT_SWAPCHAIN_COLOR_SPACE_EXTENSION_NAME macro to avoid typos.
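As a sketch, using LWJGL's Java bindings for Vulkan (an assumption here, LWJGL 3; in plain C you would likewise assign an array of extension-name strings to ppEnabledExtensionNames and set enabledExtensionCount):

import org.lwjgl.PointerBuffer;
import org.lwjgl.system.MemoryStack;
import org.lwjgl.vulkan.*;

import static org.lwjgl.vulkan.VK10.*;

public final class InstanceWithColorSpaces {

    // Creates a VkInstance with VK_EXT_swapchain_colorspace enabled.
    public static VkInstance create() {
        try (MemoryStack stack = MemoryStack.stackPush()) {
            // VK_EXT_swapchain_colorspace depends on VK_KHR_surface,
            // so enable both.
            PointerBuffer extensions = stack.pointers(
                    stack.UTF8(KHRSurface.VK_KHR_SURFACE_EXTENSION_NAME),
                    stack.UTF8(EXTSwapchainColorspace.VK_EXT_SWAPCHAIN_COLOR_SPACE_EXTENSION_NAME));

            VkInstanceCreateInfo createInfo = VkInstanceCreateInfo.calloc(stack)
                    .sType(VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO)
                    .ppEnabledExtensionNames(extensions); // count is derived from the buffer

            PointerBuffer pInstance = stack.mallocPointer(1);
            if (vkCreateInstance(createInfo, null, pInstance) != VK_SUCCESS) {
                throw new IllegalStateException("vkCreateInstance failed");
            }
            return new VkInstance(pInstance.get(0), createInfo);
        }
    }
}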
Then, generally speaking, you have to load the extension's commands (functions), unless it is a WSI extension and you are using the official Vulkan loader.
VK_EXT_swapchain_colorspace defines no new commands, so that step can be skipped.
Enumerants such as VK_COLOR_SPACE_EXTENDED_SRGB_NONLINEAR_EXT are always present/defined (assuming you have an up-to-date vulkan.h header; if not, just download the newest LunarG Vulkan SDK). Enabling the extension only grants formal permission to use them.
I'm trying to understand what is safe vs. not safe with respect to the Eclipse plugin lifecycle.
Background
Something in the Eclipse/RCP/OSGi framework allows for circular dependencies between bundles by letting bundles provide extension points. If bundle X provides an extension point, bundle Y may both depend on bundle X and provide an extension that implements an interface or extends a class known to X, making that extension available to bundle X.
Then there's the promise of activators: as far as I understand, it is promised that your activator's start(BundleContext) method will be called before any class in your bundle is made available to any other bundle, and that your dependencies' start(...) methods will have been called before yours.
Limitations/Possible Contradictions
Now, I'm ready to describe my conundrum: I would like to retrieve all the providers of a specific extension point as soon as possible; the easy way to do this would appear to be in the activator of my bundle.
However, if what I've described about the promises the Eclipse/RCP/OSGi framework makes is true, then I'm pretty sure it shouldn't be possible for me to do that during activation. Either:
(1) I'll have a reference to classes provided by one of my dependencies before their start(...) method has been called, or
(2) my dependency's start(...) method will have to be called before mine, or
(3) no violations will occur, but I'll retrieve zero extensions, because the plugins that depend on me couldn't be started before me, so their implementations of my extension point are not yet available.
Why I Need Extensions at Startup
My challenge is that I need to load some data ASAP after the startup of my plugin, but I need to ensure that my extensions are loaded first, because the extensions in question are extensions to the data format of the data that I need to load; if I load the data first, it fails or becomes corrupted.
I'm also wondering whether my picture of the Eclipse plugin lifecycle is correct: despite searching for discussions of the plugin lifecycle, I haven't come across any warnings about its limitations. I'm fairly certain it must be possible to do things wrong and create serious problems, and I'd like to understand under what circumstances things go wrong so I can avoid them.
The extension registry, accessed via the IExtensionRegistry interface, will tell you about extension points without starting any of the plugins involved.
IExtensionRegistry extReg = Platform.getExtensionRegistry();
In the registry, for an extension point you will have a number of IConfigurationElement entries describing the individual extensions declared by plugins. It is only when you call the createExecutableExtension method of this interface that the contributing plugin is started.
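A sketch of how that typically looks (the extension point id and attribute names here are hypothetical):

import org.eclipse.core.runtime.CoreException;
import org.eclipse.core.runtime.IConfigurationElement;
import org.eclipse.core.runtime.IExtensionRegistry;
import org.eclipse.core.runtime.Platform;

public final class DataFormatLoader {

    public static void loadExtensions() throws CoreException {
        IExtensionRegistry extReg = Platform.getExtensionRegistry();
        // Hypothetical extension point id.
        IConfigurationElement[] elements =
                extReg.getConfigurationElementsFor("com.example.dataFormats");
        for (IConfigurationElement element : elements) {
            // Reading declared attributes does NOT start the contributing plugin.
            String formatName = element.getAttribute("name");
            // Only this call activates the contributing plugin, lazily.
            Object handler = element.createExecutableExtension("class");
            // ... cast handler to the interface declared by the extension
            // point and register formatName/handler with your data loader ...
        }
    }
}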
Note: a plugin's activator start method is not normally run until Eclipse needs to run some other code in the plugin; it does not run at Eclipse startup unless you force it to.
POST 1: theoretical question
We use some software that is actually a web module with its own Tomcat and shell scripts for controlling it. It also has a plugin system, which allows you to upload a .jar file with a certain structure to add new functionality to the application.
Question:
I would like to control and actually change the responses to different calls in the main system/application (not in my jar). Could I use AspectJ to do that? Why or why not? What would the other general possibilities be, apart from changing the code of the main application?
POST 2: the attempt
I tried to do it this way (in Eclipse):
1) In the AspectJ project I added the jar file containing the classes to be woven (actually I added it to the inpath).
2) Exported the project as "JAR with AspectJ support".
3) Deployed the jar file exported in step 2: no result.
Questions:
In the exported aspect jar there are only the .class files of the AspectJ project, no .class files from the inpath jar.
Should the classes from the imported inpath jar be there as well?
In the exported aspect jar there is also no AspectJ runtime (aspectjrt.jar). Should it be there, or how do I configure the virtual machine to provide it?
Yes, why not? If you could extend your question and explain (maybe with an example) which actors and actions there are in the system, we might be able to help you in a more concrete fashion. But basically I see no problem. The JAR modules might be loaded dynamically, but if you know which calls in the Tomcat app you want to intercept, you can easily instrument them, either statically by reweaving the existing classes or dynamically via load-time weaving (LTW) during JVM start-up. There is no need to touch your uploaded JAR modules, which, as I understand you, is what you want to avoid.
You probably want to weave your main application's target classes via an
execution(<methodsToBeChecked>) pointcut in combination with an
around() advice.
The other details depend on your specific use case, the package, class and method names, parameters etc. The around advice can do one or several of the following things:
determine caller,
check call parameters,
manipulate call parameters,
call original target with original or changed parameters,
alternatively not perform the original call at all,
pass back the result of the original call to the caller,
pass back a manipulated version of the result to the caller,
pass any synthetic value with the correct return type to the caller,
catch exceptions raised by the original call,
throw your own exceptions
etc.
Your imagination (and AspectJ's few limitations) is the limit. :-)
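To make that concrete, a minimal code-style aspect might look like this (the target package, class, and method signature are invented; adjust the pointcut to your application):

public aspect ResponseChanger {

    // Matches executions of a hypothetical target method in the main app.
    pointcut handledCall(String input):
        execution(String com.example.app.ResponseService.handle(String)) && args(input);

    String around(String input): handledCall(input) {
        // Manipulate the parameter before calling the original target...
        String result = proceed(input.trim());
        // ...and/or manipulate the result before passing it back. You could
        // also skip proceed() entirely and return a synthetic value.
        return result.toUpperCase();
    }
}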
So far I have two short questions:
1) What precisely are the benefits of creating a custom nature?
2) Is it possible to somehow programmatically read files in [project]/.settings or [workspace]/.metadata/.plugins?
I'm using Eclipse Helios (3.6).
Ad 1. I've read that you can't have two natures of the same set, and that you can use a nature to associate certain perspectives/tools (e.g. a builder) with a project. But is there anything else I can't do easily without a nature? For example, I can easily add a builder by modifying an IProject variable.
Ad 2. I tried to find a way to read project-specific settings or plugin settings, but failed. No specs, different file types, inconsistent XML tags... Is it at all possible without parsing them manually?
Thanks for your help!
Paweł
Think of a nature as a flag. All project-related functionality in Eclipse is triggered by natures. Project properties pages, context menu items, etc. appear based on the presence of natures. Third parties can check for the presence of a nature to tell whether the project is of a certain "type". A nature also has install/uninstall methods, which gives you a convenient place to implement all the actions that need to happen on the project when your technology is enabled. Why is that convenient? Because a third party can simply add the nature without knowing what else needs to be configured, and your code takes care of the rest.
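As a sketch, enabling your technology then boils down to adding the nature id to the project description (the nature id here is hypothetical; the heavy lifting happens in the nature's configure() method):

import org.eclipse.core.resources.IProject;
import org.eclipse.core.resources.IProjectDescription;
import org.eclipse.core.runtime.CoreException;

public final class NatureInstaller {

    public static void addNature(IProject project) throws CoreException {
        IProjectDescription desc = project.getDescription();
        String[] natures = desc.getNatureIds();
        String[] newNatures = new String[natures.length + 1];
        System.arraycopy(natures, 0, newNatures, 0, natures.length);
        newNatures[natures.length] = "com.example.myNature"; // hypothetical id
        desc.setNatureIds(newNatures);
        // Applying the description triggers the nature's configure() method,
        // which performs the rest of the project setup.
        project.setDescription(desc, null);
    }
}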
Plugins write to the [project]/.settings and [workspace]/.metadata/.plugins locations in different ways. The file formats are never documented, as they aren't meant to be manipulated directly. Some plugins reuse the common ProjectScope and InstanceScope classes to read/write the data; some read and write on their own. I would start with the information you are trying to read, figure out which plugin it belongs to, and then see if there is a public API in that plugin for accessing it. Reading these settings directly is almost never the correct approach.
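Where a plugin does use the common mechanism, project-scoped settings can be read through the preferences API instead of parsing the files, roughly like this (the qualifier and key are JDT's, chosen as an example):

import org.eclipse.core.resources.IProject;
import org.eclipse.core.resources.ProjectScope;
import org.eclipse.core.runtime.preferences.IEclipsePreferences;

public final class SettingsReader {

    // Reads a value backed by [project]/.settings/org.eclipse.jdt.core.prefs
    public static String compilerCompliance(IProject project) {
        IEclipsePreferences prefs =
                new ProjectScope(project).getNode("org.eclipse.jdt.core");
        return prefs.get("org.eclipse.jdt.core.compiler.compliance", "1.6");
    }
}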
Apologies if this is a duplicate, but I've not managed to find this question being asked directly.
The question is whether DLLs should have their own configuration files. The general opinion here (that's me and the colleague across from me) is that they shouldn't, the reason being that DLLs can be shared; the idea of having application-specific information in a DLL is therefore nonsense. If the information is not application-specific, then constants can be used.
A further question, assuming that DLLs do not have their own config files, is whether DLLs should use the configuration files of the executable that loaded them, or instead be passed the relevant data as part of some kind of constructor. Our opinion here is the latter, as it makes things more testable; the downside is that it will sometimes be necessary to pass a significant amount of data to the DLL.
Opinions?
There's no reason why you can't have the best of both worlds in terms of "simple to configure with config files" and "testable". Have a static method which can create instances from the configuration file, but also provide a constructor for more control and testability. The static method would just grab the settings and call the constructor.
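A sketch of that shape (shown in Java here, but the pattern carries over directly to .NET; the class and settings keys are invented):

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class MailSender {

    private final String host;
    private final int port;

    // Plain constructor: full control, easy to use in tests.
    public MailSender(String host, int port) {
        this.host = host;
        this.port = port;
    }

    // Convenience factory: grabs the settings and calls the constructor.
    public static MailSender fromConfig(String path) throws IOException {
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream(path)) {
            props.load(in);
        }
        return new MailSender(props.getProperty("mail.host"),
                Integer.parseInt(props.getProperty("mail.port", "25")));
    }
}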
I believe it's possible to create settings classes for DLLs just like for any other project; you then just need to put the actual text into the application's config file instead of into one for the DLL. Basically, ignore the app.config generated for the library project, except to use it as a template for the application's central one.
Alternatively, use something like Spring.NET to manage this sort of thing :)
Usually, I guess, you should pass relevant information to the functions you're calling, or set relevant properties on objects you're creating that are defined within the DLL. I guess that's why .NET does not really support config files for DLLs (you can create them, but they won't be used at run time).
I have one scenario where DLLs do read a config file, but it is very special: the .NET DLL exports objects as COM objects for use by Microsoft Navision, and it communicates with a factoring bank via an XML-RPC interface.
While the DLL is installed on every user's machine, the configuration for the interface is common to all users, so I placed a configuration file on a network drive that's mapped on every PC, and the configuration (URL, credentials, etc.) is read from that common file.
Whether that's good practice is up to the reader, but in that scenario having a common config file just made sense...