How to load extensions - Vulkan

I am trying to write a Vulkan program but am somewhat fuzzy on how the extension mechanism works.
Concretely, I want to access VK_COLOR_SPACE_EXTENDED_SRGB_NONLINEAR_EXT (it is not found at compile time), but am not sure how to enable the swapchain_colorspace extension.

VK_EXT_swapchain_colorspace is an instance extension.
You can enable the extension by passing its name to vkCreateInstance via pCreateInfo->ppEnabledExtensionNames (with enabledExtensionCount set to match).
You can either use the string "VK_EXT_swapchain_colorspace" directly or use the VK_EXT_SWAPCHAIN_COLOR_SPACE_EXTENSION_NAME macro to avoid typos.
Then, generally speaking, you have to load extension commands (functions) yourself, unless they are WSI commands exported by the official Vulkan loader.
VK_EXT_swapchain_colorspace defines no new commands, so that step can be skipped.
Enumerants such as VK_COLOR_SPACE_EXTENDED_SRGB_NONLINEAR_EXT are always present/defined (assuming you have an up-to-date vulkan.h header; if not, just download the newest LunarG Vulkan SDK). Enabling the extension only grants formal permission to use them.
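Putting that together, enabling the extension at instance creation might look like the following minimal sketch (error handling and the platform-specific surface extension are omitted; whether creation succeeds depends on your driver and loader):

```c
#include <vulkan/vulkan.h>
#include <stddef.h>

VkInstance create_instance_with_colorspace_ext(void)
{
    /* VK_EXT_swapchain_colorspace depends on VK_KHR_surface. */
    const char *extensions[] = {
        VK_KHR_SURFACE_EXTENSION_NAME,
        VK_EXT_SWAPCHAIN_COLOR_SPACE_EXTENSION_NAME, /* == "VK_EXT_swapchain_colorspace" */
    };

    VkInstanceCreateInfo createInfo = {0};
    createInfo.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    createInfo.enabledExtensionCount = 2;
    createInfo.ppEnabledExtensionNames = extensions;

    VkInstance instance = VK_NULL_HANDLE;
    if (vkCreateInstance(&createInfo, NULL, &instance) != VK_SUCCESS)
        return VK_NULL_HANDLE; /* e.g. the loader/driver doesn't support the extension */
    return instance;
}
```

It's good practice to first check for the extension with vkEnumerateInstanceExtensionProperties before requesting it, since vkCreateInstance fails with VK_ERROR_EXTENSION_NOT_PRESENT if it's unavailable.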

Related

Make Modelica class read-only in Dymola

Is there a way to make a user-written class read-only in Dymola? I want to avoid modifying it by mistake when working on models that use it.
There are two ways I know of. The first is to make the files read-only on the file system; I'm pretty sure Dymola recognizes that and prevents modification.
There is also a way to add an annotation that is essentially a checksum or hash or something. But this is typically done by DS as a way of "signing" libraries. I don't think there is a way for ordinary users to perform this signing.
Have you checked the documentation? It might be covered there. I don't have access to a machine with Dymola on it right now to check.
Since Dymola 2017 FD01, classes can be locked.
Right-click a class in the package browser and select Lock...
This will create the annotation
__Dymola_LockedEditing="<reason-for-locking>"
and the class and its nested classes (e.g. classes within a package) are no longer editable.
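For illustration, a class locked this way ends up carrying the annotation roughly like so (the class name here is made up, and the reason string is whatever you enter in the dialog):

```modelica
model MyComponent "A locked user-written class"
  // ... original class contents unchanged ...
  annotation (__Dymola_LockedEditing="Locked to prevent accidental edits");
end MyComponent;
```

Since it is just an annotation, you can also add or remove it by editing the .mo file in a text editor outside Dymola.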

Apache Velocity + Tomcat: Manually process templates within web-app

I am using Apache Tomcat with Velocity and VelocityViewServlet. I have created a custom tool with a reference to ViewContext. It all works well.
The question is: what is the best way to locate/load a template and process it with supplied parameters?
I already have the absolute path to the file, obtained via
((ViewContext)context).getRequest().getSession().getServletContext().getRealPath("/")
Do I have to instantiate a VelocityEngine myself? I suppose there is no global one maintained by Velocity (VelocityViewServlet).
Which of Velocity's resource loaders is best to use, and how?
Several points here:
The VelocityViewServlet will instantiate a VelocityEngine itself. It's not global; there is one engine per ServletContext.
The VelocityViewServlet will also locate the template that corresponds to the request URI using its default loader (WebappLoader), so you don't have to do that yourself either.
The Velocity context your template is evaluated with will already be populated with all the standard tools (for Tools 2.0), among them $params, which lets you inspect HTTP parameters.
I don't understand "a custom tool with a reference to ViewContext": instead of using the ViewContext, you should add to your custom tool the appropriate setters for the properties listed here (for instance, if you need access to the request, declare a "public void setRequest(HttpServletRequest request)" method). Remember that, from a bottom-up perspective, your tool should only be reachable via the key you choose for it in your tools configuration file, and should not be aware of Velocity.
I advise you to use VelocityTools 2.0, which is a more mature library than the 1.x line.
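As a sketch of the setter convention described above (the class name, key, and greeting method are made up for illustration; assumes VelocityTools 2.0 and the servlet API on the classpath):

```java
import javax.servlet.http.HttpServletRequest;
import org.apache.velocity.tools.config.DefaultKey;

/* Hypothetical tool: it knows nothing about Velocity or ViewContext.
   VelocityTools injects the current request via the public setter. */
@DefaultKey("mytool")
public class MyTool {
    private HttpServletRequest request;

    public void setRequest(HttpServletRequest request) {
        this.request = request;
    }

    public String greeting() {
        return "Hello, " + request.getParameter("name");
    }
}
```

declared in your tools.xml roughly like:

```xml
<tools>
  <toolbox scope="request">
    <tool class="com.example.MyTool"/>
  </toolbox>
</tools>
```

In a template you would then simply write $mytool.greeting().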

AspectJ & controlling calls in other jars

POST 1: theoretical question
We use some software that is actually a web module with its own Tomcat and shell scripts for controlling it. It also has a plugin system, which allows you to upload a .jar file with a certain structure to add new functionality to the application.
Question:
I would like to control and actually change the responses to different calls in the main system/application (not in my jar). Could I use AspectJ to do that? Why or why not? What other general possibilities are there, apart from changing the code of the main application?
POST 2: the try
I tried to do it this way (in Eclipse):
In the AspectJ project I added the jar file containing the classes to be woven (actually I added it to the INPATH).
Exported the project as "Jar with AspectJ support".
Deployed the jar file exported in step 2: no result.
Questions:
In the exported aspect-jar, there are only the .class files of the AspectJ project, no .class files from the INPATH jar.
Should there be other classes, from the imported INPATH jar?
In the exported aspect-jar there is also no AspectJ runtime (aspectjrt.jar). Should it be there, or how do I configure the virtual machine to provide it?
Yes, why not? If you could extend your question and explain (maybe with an example) which actors and actions there are in the system, we might be able to help you in a more concrete fashion. But basically I see no problem. The JAR modules might be loaded dynamically, but if you know which calls in the Tomcat app you want to intercept, you can easily instrument them, either statically by reweaving the existing classes or dynamically via LTW (load-time weaving) during JVM start-up. There is no need to touch your uploaded JAR modules, which, as I understand you, is what you want to avoid.
You probably want to weave your main application's target classes via an
execution(<methodsToBeChecked>) pointcut in combination with
an around() advice.
The other details depend on your specific use case, the package, class and method names, parameters etc. The around advice can do one or several of the following things:
determine caller,
check call parameters,
manipulate call parameters,
call original target with original or changed parameters,
alternatively not perform the original call at all,
pass back the result of the original call to the caller,
pass back a manipulated version of the result to the caller,
pass any synthetic value with the correct return type to the caller,
catch exceptions raised by the original call,
throw your own exceptions
etc.
Your imagination (and AspectJ's few limitations) is the limit. :-)
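For illustration, an annotation-style around advice doing a few of the things from the list above might look like this (all package, class, and method names are placeholders; assumes aspectjrt on the classpath and the aspect applied via LTW or reweaving):

```java
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

@Aspect
public class ResponseInterceptor {

    /* Hypothetical target: String com.vendor.app.SomeService.handle(String) */
    @Around("execution(String com.vendor.app.SomeService.handle(String))")
    public Object interceptHandle(ProceedingJoinPoint pjp) throws Throwable {
        Object[] args = pjp.getArgs();
        // manipulate the call parameters
        args[0] = ((String) args[0]).trim();
        // call the original target with the changed parameters
        Object result = pjp.proceed(args);
        // pass back a manipulated version of the result to the caller
        return "[intercepted] " + result;
    }
}
```

For LTW you would additionally register the aspect in a META-INF/aop.xml and start Tomcat's JVM with -javaagent:aspectjweaver.jar.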

Can I write to the resource fork using NSDocument?

I'd like to store some additional information along with a document, but I can't use bundles or packages, and I cannot store it inside the document itself.
The application is a text editor, and I'd like it to store code folding and bookmark locations with the document, but obviously this cannot be embedded into the code directly, and I don't want to alter the code with ugly comments.
Can I use NSDocument to store information in the resource fork of a document? If so, how can I do this? Should I directly write to <filename>/..namedfork/rsrc or is there an API available?
First, don't use the resource fork. It's virtually deprecated. Instead, use extended attributes. They can be set programmatically at the BSD level via setxattr and getxattr. Extended attributes are used in many places... for example, in the latest OS X, the resource fork itself is implemented as a special type of extended attribute.
For example, the Cocoa text system automatically adds an extended attribute to a file to specify the encoding.
I thought NSFileManager and NSFileWrapper supported extended attributes since Snow Leopard, but I can't find any documentation :p You can always use the BSD level functions, though.
Does the state need to move with the file if it's copied to another computer? If not, you could do a lot worse than emulating the way Bare Bones handles document state with BBEdit. They store state for all documents in ~/Library/Preferences/com.barebones.bbedit.PreferenceData/Document State.plist.
The resource fork documentation is here. But it contains plenty of suggestions to not use the resource fork.
I have a class on my web site for reading and writing resource forks, which I have never got around to moving to my GitHub repository because, as Yuji points out, they are not really used any more.
I was going to say that alias files and web location files are the only places they are still used, but I tested on Mac OS X v10.7 (Lion), and they are not even used there any more; they may still be used for custom icons, but I didn't test for that specifically. I will have to see how that affects my NDAlias class on 10.7.
NDResourceFork

Cocoa/Objective-C Plugins Collisions

My application has a plugin system that allows my users to write their own plugins that get loaded at runtime. Usually this is fine, but in some cases two plugins use the same libraries, which causes a collision between the two.
Example:
Plugin A wants to use TouchJSON for working with JSON, so its creator adds the TouchJSON code to the plugin source, where it gets compiled and linked into the plugin binary. Later, plugin B also wants to use the same library and does exactly the same. Now when my app loads these two different plugins, it detects this and spits out a warning like this:
Class CJSONScanner is implemented in
both [path_to_plugin_a] and
[path_to_plugin_b]. One of the two
will be used. Which one is undefined.
Since my app just loads plugins and makes sure they conform to a certain protocol I have no control over which plugins are loaded and if two or more use the same library.
As long as both plugins use the exact same version of the library, this will probably work, but as soon as the API changes in one plugin, a bunch of problems will arise.
Is there anything I can do about this?
The bundle loading system provides no means to peacefully resolve name conflicts. In fact, we're told to ensure that the problem doesn't happen in the first place, rather than what to do if it does. (Obviously, in your case, that's not possible.)
You could file a bug report with this issue.
If this is absolutely critical to your application, you may want to have bundles live in separate processes and use some kind of IPC, possibly NSDistantObject, to pass the data from your program to the plugin hosts. However, I'm fairly sure this is a bag of hurt, so if you don't have very clearly-defined interfaces that allow for distribution into different processes, it might be quite an undertaking.
In a single-process model, the only way to deal with this is to ensure that the shared code (more precisely, the shared Objective-C classes) is loaded once. There are two ways to do this:
Put the shared code in a framework.
Put the shared code in a loadable bundle, and load the bundle when the plug-in is loaded if the relevant classes aren’t already available (check using NSClassFromString()). The client code would also have to use NSClassFromString() rather than referring to classes directly.
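The second scheme might be sketched like this (the class name CJSONScanner comes from the question; the embedded bundle name and helper are hypothetical):

```objc
#import <Foundation/Foundation.h>

// Load shared code from a bundle embedded in the plug-in, but only if the
// relevant class isn't already available from an earlier plug-in load.
static Class sharedScannerClass(NSBundle *pluginBundle)
{
    Class cls = NSClassFromString(@"CJSONScanner");
    if (cls == Nil) {
        NSString *path = [pluginBundle pathForResource:@"TouchJSON"
                                                ofType:@"bundle"];
        [[NSBundle bundleWithPath:path] load];
        cls = NSClassFromString(@"CJSONScanner");
    }
    return cls; // callers use this Class object instead of naming the class directly
}
```

Because client code only ever goes through the returned Class object, no plugin links the shared classes into its own binary, and the duplicate-class warning never arises.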
Of course, if you aren’t in control of the plug-ins you can’t enforce either of these schemes. The best you can do is provide appropriate guidelines and possibly infrastructure; for instance, in the second case the loading could be handled by the application, perhaps by specifying a class to check for and the name of an embedded bundle to load if it isn’t available in the plug-in’s Info.plist.