Implementing Custom Cache Store - infinispan

Just implemented a custom cache store for a different backend store, based on the official RocksDB one.
That leads me to a number of concerns/questions:
Found out the hard way that PersistenceContextInitializerImpl is auto-generated, and had added an import in Eclipse to resolve the issue. Now I have to leave it non-imported and showing as an error in Eclipse; is there a best-practice way to handle this?
Why is RocksDBDbStoreTest#testSegmentsRemovedAndAdded called when segmented is false, since it calls removeSegments, which contractually should not be called if the store is not segmented?
Same class: why is buildConfig numSegments set to a value larger than 1 for non-segmented test cases?
Any example of a store implementing the NonBlockingStore transactional methods? Mostly wondering whether all calls are guaranteed to come from the same thread.
Wanted to disable the compatibility test, since the store is not supported in prior versions. Changed the group to unstable or manual, but it would still always get called, which doesn't seem to match the documentation. What is the right way to exclude it from the build-time run?
Are there any kind of performance/stress tests for persistence stores that can be executed or adapted?

Found out the hard way that PersistenceContextInitializerImpl is auto-generated, and had added an import in Eclipse to resolve the issue. Now I have to leave it non-imported and showing as an error in Eclipse; is there a best-practice way to handle this?
There should be a way to have Eclipse run the annotation processor. We use IntelliJ and it works fine out of the box. In Eclipse that usually means enabling annotation processing for the project (e.g. via the m2e-apt connector when building with Maven) so the generated sources end up on the build path.
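If it helps to see where the generated class comes from: the Impl is produced at compile time by the ProtoStream annotation processor from an annotated initializer interface roughly like the following (a sketch; MyStoreEntry and the schema names are hypothetical placeholders for your own marshalled types):
import org.infinispan.protostream.SerializationContextInitializer;
import org.infinispan.protostream.annotations.AutoProtoSchemaBuilder;

// The annotation processor generates PersistenceContextInitializerImpl from this
// interface, which is why the IDE only resolves it once annotation processing runs.
@AutoProtoSchemaBuilder(
        includeClasses = MyStoreEntry.class,   // hypothetical marshalled entity
        schemaFileName = "my.store.proto",     // illustrative schema file name
        schemaPackageName = "my.store")
public interface PersistenceContextInitializer extends SerializationContextInitializer {
}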
Why is RocksDBDbStoreTest#testSegmentsRemovedAndAdded called when segmented is false, since it calls removeSegments, which contractually should not be called if the store is not segmented?
This is just a side effect of having a parameterized test like that. In an actual runtime it won't be invoked. You can just have the test skip it when the store is not segmented:
if (!segmented) return;
Same class: why is buildConfig numSegments set to a value larger than 1 for non-segmented test cases?
The Infinispan data container is always segmented when running in any of the cluster modes. The store, however, is not required to be segmented in those cases. If the store is not segmented, you can ignore any segment parameters, as documented.
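As a minimal sketch (assuming the NonBlockingStore-style methods that receive a segment hint; the in-memory map here is just a stand-in for your real backend), a non-segmented store can simply not use the segment argument:
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative only: a non-segmented store addresses entries by key alone and
// ignores the segment hint passed in by Infinispan.
public class NonSegmentedStoreSketch {
    private final Map<Object, Object> backend = new ConcurrentHashMap<>();

    public CompletionStage<Object> load(int segment, Object key) {
        // segment is ignored; numSegments in the configuration has no effect here
        return CompletableFuture.completedFuture(backend.get(key));
    }

    public CompletionStage<Void> write(int segment, Object key, Object value) {
        backend.put(key, value);
        return CompletableFuture.completedFuture(null);
    }
}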
Any example of a store implementing the NonBlockingStore transactional methods? Mostly wondering whether all calls are guaranteed to come from the same thread.
You can see some at https://github.com/infinispan/infinispan/blob/main/persistence/jdbc/src/main/java/org/infinispan/persistence/jdbc/stringbased/JdbcStringBasedStore.java#L724. The methods can and will be invoked from different threads, which is why the store keeps a Map keyed by the Transaction.
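To illustrate why the per-Transaction map matters, here is a simplified sketch (not the real NonBlockingStore signatures, which take modification publishers; writeBatch is a hypothetical backend call, and javax.transaction.Transaction from JTA is assumed to be on the classpath):
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.ConcurrentHashMap;
import javax.transaction.Transaction;

// prepare, commit and rollback may each arrive on a different thread, so the
// pending modifications are kept in a concurrent map keyed by the Transaction.
public class TransactionalStoreSketch<E> {
    private final Map<Transaction, List<E>> pendingWrites = new ConcurrentHashMap<>();

    public CompletionStage<Void> prepare(Transaction tx, List<E> modifications) {
        pendingWrites.put(tx, modifications);
        return CompletableFuture.completedFuture(null);
    }

    public CompletionStage<Void> commit(Transaction tx) {
        // look the state back up by Transaction, whatever thread invokes commit
        return writeBatch(pendingWrites.remove(tx));
    }

    public CompletionStage<Void> rollback(Transaction tx) {
        pendingWrites.remove(tx);
        return CompletableFuture.completedFuture(null);
    }

    private CompletionStage<Void> writeBatch(List<E> modifications) {
        // hypothetical asynchronous write of the prepared batch to the backend
        return CompletableFuture.completedFuture(null);
    }
}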
Wanted to disable the compatibility test, since the store is not supported in prior versions. Changed the group to unstable or manual, but it would still always get called, which doesn't seem to match the documentation. What is the right way to exclude it from the build-time run?
Not sure I understand the question. For your store you shouldn't need any compatibility test, so just don't copy that test file.
Are there any kind of performance/stress tests for persistence stores that can be executed or adapted?
We have https://github.com/infinispan/infinispan-benchmarks/blob/main/cachestores/src/main/java/org/infinispan/jmhbenchmarks/InfinispanHolder.java that should work.
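If you'd rather measure your own store directly, a minimal JMH sketch along these lines should also work (the configuration file and cache name are hypothetical; point them at a cache wired to your custom store):
import java.util.concurrent.ThreadLocalRandom;
import org.infinispan.Cache;
import org.infinispan.manager.DefaultCacheManager;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.TearDown;

@State(Scope.Benchmark)
public class CustomStoreBenchmark {
    private DefaultCacheManager cacheManager;
    private Cache<String, String> cache;

    @Setup
    public void setup() throws Exception {
        // "store-bench.xml" is a placeholder configuration that enables the custom store
        cacheManager = new DefaultCacheManager("store-bench.xml");
        cache = cacheManager.getCache("store-bench");
    }

    @TearDown
    public void tearDown() {
        cacheManager.stop();
    }

    @Benchmark
    public String writeThenRead() {
        String key = "key-" + ThreadLocalRandom.current().nextInt(10_000);
        cache.put(key, "value");
        return cache.get(key);
    }
}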

Related

How to detect and measure different paths to a specific method with JProfiler?

I'm using a marker method in unwanted if-else branches: these branches are not slow, but there is a more efficient implementation in the opposite branch. Now I want to use JProfiler to figure out all paths (including their importance) to these unwanted branches, so I can change the code to take the preferred branch instead. Also, I want to do this detection/measurement with the least possible profiling overhead.
I found Sampling not to work, because the marker method executes too fast to show up in the Hot Spots. It also might not be executed frequently enough.
I couldn't figure out how to do it with Instrumentation either. Again, the method does not even show up in the Hot Spots.
In the ideal case, I would tell JProfiler just to monitor my marker method with instrumentation and then restrict the call graph only to calls to this marker method.
Is this possible? Are there other efficient ways to do what I want?
You have to use instrumentation for that purpose. Locate the marker method in the call tree, then invoke the Analyze->Calculate Backtraces To Selected Method action in the context menu or the tool bar.
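For context, the marker method from the question is just a cheap, empty method placed on the unwanted branch so the profiler has something to anchor the backtraces on (names here are illustrative):
public class BranchMarkers {
    // Intentionally empty: exists only so a profiler can calculate backtraces to it.
    static void unwantedBranchTaken() {
    }

    static int compute(int value, boolean cached) {
        if (cached) {
            return value;                // preferred, efficient branch
        }
        unwantedBranchTaken();           // marker for the branch we want to eliminate
        return recompute(value);         // stand-in for the less efficient path
    }

    private static int recompute(int value) {
        return value * value;
    }
}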

Add warning messages to ILineBreakpoint in Eclipse

I wrote a custom remote debugger for a specific environment. However, the remote environment performs several optimizations that move or delete pieces of the original code, and therefore it can't accept all breakpoints. Before the debugger session starts and connects to the remote runtime, we can't predict which of the breakpoints can't be set. I would like to keep these breakpoints as they are set in the editor, but when the debugger starts, it must somehow tell the user that certain breakpoints are invalid. I think these breakpoints should be displayed differently, but I haven't found API methods for this purpose. I tried setting IMarker attributes such as IMarker.PROBLEM and IMarker.SEVERITY, but it didn't help. What is the best way to do this?
This code snippet from the Eclipse Debugger guide is probably what you are looking for:
public PDALineBreakpoint(IResource resource, int lineNumber) throws CoreException {
    IMarker marker = resource.createMarker("org.eclipse.debug.examples.core.pda.lineBreakpoint.marker");
    setMarker(marker);
    setEnabled(true);
    ensureMarker().setAttribute(IMarker.LINE_NUMBER, lineNumber);
    ensureMarker().setAttribute(IBreakpoint.ID, IPDAConstants.ID_PDA_DEBUG_MODEL);
}
https://www.eclipse.org/articles/Article-Debugger/how-to.html
I've found a solution by myself, but it looks like a dirty hack. It works only with IJavaLineBreakpoint; for other languages another solution is required, but for now Java support is enough. IJavaLineBreakpoint has an isInstalled method that indicates whether the breakpoint is installed in some JVM. Unfortunately, there is no direct way to modify this flag. The internal implementation just exposes the value of the org.eclipse.jdt.debug.core.installCount attribute. So, to set the installed property of a breakpoint, you should do the following:
breakpoint.getMarker().setAttribute("org.eclipse.jdt.debug.core.installCount", 1);
Also, you can just increase/decrease this attribute the same way. However, I am not sure that this approach is compatible across versions of JDT.
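A slightly more careful variant of the same hack (still the undocumented JDT attribute, so treat this as a sketch) reads the current counter before bumping it:
// Bump the JDT-internal install counter so isInstalled() reports true.
// The attribute name is internal to JDT and may change between versions.
IMarker marker = breakpoint.getMarker();
int installCount = marker.getAttribute("org.eclipse.jdt.debug.core.installCount", 0);
marker.setAttribute("org.eclipse.jdt.debug.core.installCount", installCount + 1);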

Is there an easy way to determine which parts of a vb.net project are still used?

I maintain an old VB.NET project that I didn't write, and I was wondering if there's an easy way to determine which parts of the software are still used today by the staff where I work.
I would like to log all function calls without having to edit each one of them if possible.
The project has 27 forms and 6 modules.
Any ideas?
Thanks!
There is no way to determine with 100% certainty everything that is used by the system. VB.NET supports dynamic invocation of methods and properties, so you can't even do tricks like deleting some code and seeing if it still compiles; even if it compiles, the code could be invoked dynamically.
One way to get a sense of what code is used is to profile the application. Start up the profiler, run the app, and go through all of the ways in which the app is used. The resulting profile should give you a good sense of which parts are used. It's very possible, though, that this approach will miss some code.

How to get rid of unmanaged code in VW 3.1d and ENVY

I have an old VW3/ENVY image with a parcel loaded as unmanaged code (exactly the situation Mastering ENVY/DEVELOPER warns against). Unfortunately, this problem happened a long time ago and it's too late to just "go back" to an image without the parcel loaded.
Apparently, there is a way to solve this problem (we have one development image where this has been solved, and there are normal configuration maps that contain the exact same code as the unmanaged parcel, but they can't be loaded), but the exact way has long since been forgotten (and there are some problems with taking that particular dev image as the base for a new runtime image, so I need to find out how to do it again).
In theory, it should be possible to remove the parcel and reload the code from a configuration map. In practice, all normal ways (using the ParcelBrowser or directly calling UnmanagedCode>>remove) fail. I even tried manually removing the offending selectors from the method dictionary, but past a certain point (involving a call to #primBecome:) the whole image hangs completely (I can't even drop into the debugger). I started hacking the instances of the classes and methods, hoping I'd trick ENVY into thinking that these particular methods are normal versioned code, but without any success yet.
Are there any smalltalk/envy gurus around that still remember enough of VW 3 to provide me with any pointers?
Status update
After a week of trying to solve the problem I finally made it, at least partially, so in case anyone's interested...
First, I had to fix the file pointers for the unmanaged code (otherwise, everything that touched the methods would throw an exception). It looks like ENVY extends Parcel so that, in theory, all integer file pointers are changed to ENVY's void file pointer when loaded, but in my case I had to do it manually (a Parcel provides enumeration of all the selectors it defines). Another way would be to tweak the filePointer code, but that can't easily be done automatically on every image where it's needed.
Then the parcel can be discarded, which drops the parcel information but keeps the code. The official "Discard" mechanism needs a valid changes file (which ENVY doesn't use, so it has to be set manually and reset afterwards) and the parcel source (which we fortunately had).
To be able to make any changes to the methods (either manually, or by loading an application or class from ENVY), they need to lose their unmanaged status. This can be done by manually tweaking TheClass>>applicationAssocs (I also got rid of all references to the classes in UnmanagedCode, such as timestamps, and removed the reference to the discarded parcel). I actually had some info on how to get to this point from my boss, but I wasn't able to understand the instructions until I had almost figured it out by myself.
This finally allowed me to load and reload all the Applications that contained the classes. In theory. In practice, the image still hung completely whenever I tried to load a newer version of the Application (that contained the code formerly in the parcel).
It turned out that the crashes had absolutely nothing to do with the code being unmanaged, but with the fact that the parcel in question modified InputState>>process:, where it caused an exception due to a missing and/or uninitialized class variable (the InputState>>initialize method wasn't called until after the new process: method was in place). I had to modify the Notifier class to dump all exceptions to a file to find out what was going on. Adding the class variable to the source of the class (instead of adding it via reflection), suspending the input processing thread via toBeLoadedCode, restarting it in the loaded method, and creating a new version of the application solved even this problem.
Now everything works, in theory. In practice it's still unusable, because reloading the WindowSystem or VisualworksBase applications causes their initialization blocks to run, and a whole lot of settings are reset to their defaults - fonts and font sizes, window colors, UI settings... And there doesn't seem to be any way to just save the settings to a file and load them later on, or just to see what all the settings are (either the official Settings menu doesn't show everything, or we have a heavily tweaked image... so much for reconstructing it from scratch). But that's a completely different question.
Well, normally the recommendation would be that you should be able to rebuild your development image from scratch by loading your code from the repository. But if you had that, then the answer would be simple: just discard that image and reload. I think it's been long enough that I've lost whatever knowledge I had about how to mess with the internal structures to get it back, and it sounds like you've tried a lot of things. So, although it might be painful, figuring out the recipe to rebuild your development image by loading from the repository sounds like your best bet. It probably isn't all that horrible; there just might be a few dependencies on image state, or special doits that need to be executed.
You also probably need to validate what's in the repository against what's in the image you're working from. If there was unmanaged code loaded and then someone modified it and saved it, it's not clear to me that it would have been saved to ENVY. So you probably want to audit everything that was unmanaged code and if it's been changed, save that to a repository edition.
Sorry I don't have any better answers.

How to locally test cross-domain builds?

Using the dojo toolkit, what is the proper way of locally testing code that will be executed as cross-domain, without making the actual build?
As it appears, there are three possible options (each, with their own drawbacks):
Using local (non xd) XMLHttpRequest dojo.require
This option does not really test the xd behavior, since it dojo.require[s] the js synchronously via XHR.
djConfig.debugAtAllCosts = true;
Although this option does load the required code asynchronously (via the 'script' tag), it also pulls the code in via XHR, parses the dojo.require[s] inside it, and pulls them in. This (using the loader_debug) is, again, not what the loader_xd does. More info on this topic in a different question.
Creating a cross-domain build
This approach requires a build, which is not possible in the environment which I'm running the code in (We're using our own on-the-fly build process, which includes only the js that is necessary for a particular page. This process is not suitable for development).
Thus, my question: is there a way to use the loader_xd, which does not require an xd build (which adds the xd prefix / suffix to every file)?
The 2nd way (using debugAtAllCosts) also makes me question the motivation for pre-parsing the dojo.require[s]. If the loader_xd will not (or rather cannot) pre-parse, why does the method that was created for testing/debugging do so?
peller has described the situation. If you want to just generate the .xd.js files for your modules, you could look at util/buildscripts/jslib/buildUtilXd.js and its buildUtilXd.xdgen() function.
It would take a bit of work to make your own script, but you could look at util/buildscripts/build.js for pointers.
I am hoping in the future for Dojo (maybe Dojo 2.x timeframe) we can switch to a loader that just uses script tags with a module format that has a function wrapper around the module, something that is coded by the developer. This would allow the same module format to work in the local and xd cases.
I don't think there's any way to do XD loading without building and deploying it. Your analysis of the various options seems about right.
debugAtAllCosts is there specifically to solve a debugging problem: until recently, most browsers could not do anything intelligent with code brought in through eval. Still today, Firefox will report an exception in the console as appearing at the eval site (bootstrap.js), with a line number offset from the eval rather than from the actual eval buffer, and normally that eval buffer is anonymous. Firebug was the first debugger to jump through some hoops to enhance the debugging experience, permitting special metadata that Dojo's loader injects between the XHR and the eval to determine a file path for the source. WebKit/Safari have recently implemented this as well. I believe debugAtAllCosts pre-dates the XD loader.