Make Modelica class read-only in Dymola

Is there a way to make a user-written class read-only in Dymola? I want to avoid modifying it by mistake when working on models that use it.

There are two ways I know of. The first is to make the files read-only on the file system; I'm pretty sure Dymola will recognize that and prevent modification.
There is also a way to add an annotation that is essentially a checksum or hash. But this is typically done by DS (Dassault Systèmes) as a way of "signing" libraries; I don't think ordinary users can perform this signing.
Have you checked the documentation? It might be covered there. I don't have access to a machine with Dymola on it right now to check.

Since Dymola 2017 FD01, classes can be locked.
Right-click a class in the package browser and select Lock...
This will create the annotation
__Dymola_LockedEditing="<reason-for-locking>"
and the class and its nested classes (e.g. the classes inside a package) are no longer editable.
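For reference, the annotation ends up inside the class like this (a minimal sketch; the package name and lock message are made up):

package MyLibrary "A hypothetical user-written library"
  // ... nested classes ...
  annotation(__Dymola_LockedEditing="Locked to prevent accidental edits");
end MyLibrary;

Deleting the annotation again should make the class editable.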

Related

Can I use Sorbet without sig in source code?

I would like to use Sorbet in my Ruby projects at work. As I want to make the process as smooth as possible, I'd like to know whether it is possible to add static type checking using only RBI files in the sorbet/ folder.
The idea is to avoid adding signatures to the source code, so my colleagues do not complain, and instead to add the signatures in RBI files on the side. This way I can start typing and benefiting in my local environment until it is advanced enough.
Thanks
Yes, it's possible for static checking, but not for runtime checking. I think that's enough for your use case, though.
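For what it's worth, a sketch of how that looks: the Ruby source stays untouched, and a hand-written RBI shim on the side carries the signature (the path, class, and method here are all made up):

# sorbet/rbi/shims/greeter.rbi -- a hypothetical hand-written shim
class Greeter
  sig { params(name: String).returns(String) }
  def greet(name); end
end

srb tc will then check call sites of Greeter#greet statically, while runtime behavior is unchanged because nothing in the source references sorbet-runtime.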

Cocoa/Objective-C Plugins Collisions

My application has a plugin system that allows my users to write their own plugins that get loaded at runtime. Usually this is fine, but in some cases two plugins use the same libraries, which causes a collision between the two.
Example:
Plugin A wants to use TouchJSON for working with JSON, so its creator adds the TouchJSON code to the plugin source and it gets compiled and linked into the plugin binary. Later, Plugin B also wants to use that same library and does exactly the same. Now when my app loads these two plugins it detects this and spits out a warning like this:
Class CJSONScanner is implemented in both [path_to_plugin_a] and [path_to_plugin_b]. One of the two will be used. Which one is undefined.
Since my app just loads plugins and makes sure they conform to a certain protocol, I have no control over which plugins are loaded or whether two or more use the same library.
As long as both plugins use the exact same version of the library this will probably work, but as soon as the library's API changes in one plugin, a bunch of problems will arise.
Is there anything I can do about this?
The bundle loading system provides no means to peacefully resolve name conflicts. In fact, we're told to make sure ourselves that the problem doesn't happen, rather than being told what to do when it does. (Obviously, in your case, preventing it isn't possible.)
You could file a bug report about this issue.
If this is absolutely critical to your application, you may want to have bundles live in separate processes and use some kind of IPC, possibly NSDistantObject, to pass the data from your program to the plugin hosts. However, I'm fairly sure this is a bag of hurt, so if you don't have very clearly-defined interfaces that allow for distribution into different processes, it might be quite an undertaking.
In a single-process model, the only way to deal with this is to ensure that the shared code (more precisely, the shared Objective-C classes) is loaded once. There are two ways to do this:
Put the shared code in a framework.
Put the shared code in a loadable bundle, and load the bundle when the plug-in is loaded if the relevant classes aren’t already available (check using NSClassFromString()). The client code would also have to use NSClassFromString() rather than referring to classes directly.
Of course, if you aren't in control of the plug-ins you can't enforce either of these schemes. The best you can do is provide appropriate guidelines and possibly infrastructure; for instance, in the second case the loading could be handled by the application, perhaps by having each plug-in declare, in its Info.plist, a class to check for and the name of an embedded bundle to load if that class isn't available.
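A minimal sketch of that second scheme, assuming each plug-in advertises the shared code under two hypothetical Info.plist keys (SharedClassName and SharedBundleName) and ships it as an embedded bundle:

// Host-side loading logic (a sketch, not a drop-in implementation).
NSBundle *plugin = [NSBundle bundleWithPath:pluginPath];
NSString *className = [plugin objectForInfoDictionaryKey:@"SharedClassName"];   // hypothetical key
NSString *bundleName = [plugin objectForInfoDictionaryKey:@"SharedBundleName"]; // hypothetical key
if (className != nil && NSClassFromString(className) == nil) {
    // The shared classes aren't loaded yet; load this plug-in's embedded copy once.
    NSString *sharedPath = [plugin pathForResource:bundleName ofType:@"bundle"];
    [[NSBundle bundleWithPath:sharedPath] load];
}
// Plug-in code must then look the class up dynamically instead of linking against it:
Class scannerClass = NSClassFromString(className);

Whichever plug-in gets loaded first wins, so this only stays safe as long as the shared library keeps its API backward compatible.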

Eclipse: Project nature benefits, reading project & plugins settings

So far I have two short questions:
1) What precisely are the benefits of creating a custom nature?
2) Is it possible to somehow programmatically read the files in [project]/.settings or [workspace]/.metadata/.plugins?
I'm using Eclipse Helios (3.6).
Ad 1. I've read that you can't have two natures of the same set, and that you can use a nature to associate certain perspectives/tools (e.g. a builder) with a project, but well... is there anything else I can't do easily without a nature? For example, I can easily add a builder by modifying an IProject variable.
Ad 2. I tried to find a way to read project-specific settings or plugin settings but failed. No specs, different file types, inconsistent XML tags... Is it at all possible without parsing them manually?
Thanks for your help!
Paweł
Think of a nature as a flag. All project-related functionality in Eclipse is triggered by natures. Project property pages, context menu items, etc. appear based on the presence of natures. Third parties can check for the presence of a nature to tell whether the project is of a certain "type". A nature also has install/uninstall methods (configure()/deconfigure() on IProjectNature). This gives you a convenient place to implement all actions that need to happen on the project when your technology is enabled. Why is that convenient? Because a third party can simply add the nature without knowing what else is necessary to configure, and your code takes care of the rest.
Plugins write to the [project]/.settings and [workspace]/.metadata/.plugins locations in different ways. The file formats are not documented, as they aren't meant to be manipulated directly. Some plugins reuse the common ProjectScope and InstanceScope classes to read/write the data; some read/write on their own. I would start with what information you are trying to read, figure out which plugin it belongs to, and then see if that plugin has a public API for accessing that information. Reading these settings directly is almost never the correct approach.
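For the plugins that do go through the common preferences mechanism, reading a value looks roughly like this (a sketch; the JDT node and key are just a well-known example):

import org.eclipse.core.resources.IProject;
import org.eclipse.core.resources.ProjectScope;
import org.eclipse.core.runtime.preferences.IEclipsePreferences;

// Reads a project-scoped preference, i.e. a value stored in
// [project]/.settings/org.eclipse.jdt.core.prefs.
static String readCompliance(IProject project) {
    IEclipsePreferences prefs = new ProjectScope(project).getNode("org.eclipse.jdt.core");
    return prefs.get("org.eclipse.jdt.core.compiler.compliance", "unknown");
}

For plugins that roll their own file formats there is no such shortcut, which is why going through that plugin's public API is the better route.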

What's the best approach to incremental compilation when building a DSL using Eclipse?

As suggested by the Eclipse documentation, I have an org.eclipse.core.resources.IncrementalProjectBuilder that compiles each source file, and separately I also have an org.eclipse.ui.editors.text.TextEditor that can edit each source file. Each source file is compiled into its own compilation unit, but it can reference types from other (already compiled) source files.
Two tasks for which this is important are:
Compiling (to make sure the types we're using actually exist)
Autocomplete (to look up the type so we can see what properties/methods are present on it)
To accomplish this, I want to store a representation of all the compiled types in memory (referred to below as my "type store").
My question is two fold:
Task one above is performed by the builder and task two by the editor. So that they both have access to this type store, should I create a static store somewhere that they can both access, or does Eclipse provide a neater way to deal with this problem? Note that it is Eclipse, not me, that instantiates the builders and editors when they are needed.
When opening Eclipse, I don't want to have to rebuild the whole project just to re-populate my type store. My best solution so far is to persist this data somewhere and then repopulate my store from it (perhaps upon project open). Is this how other incremental compilers typically do it? I believe Java's approach is to use a special parser that efficiently extracts this data from the class files.
Any insights would be really appreciated. This is my first DSL.
This is an interesting question and one that doesn't have a simple solution. I'll try to describe a potential solution and also describe in a little bit more detail how JDT accomplishes incremental compilation.
First, a bit about JDT:
Yes, JDT does read class files for some of its information, but only for libraries that don't have source code. And this information is really only used for editing assistance (content assist, navigation, etc.).
JDT computes incremental compilation by keeping track of dependencies between compilation units as they are compiled. This state information is stored on disk and retrieved and updated after each compile.
As a more complete example, let's say that after a full build, JDT determines that A.java depends on B.java, which depends on C.java.
If there is a structural change in C.java (a structural change is a change that can affect outside files (e.g., adding/removing a non-private field or method)), then B.java will be recompiled. A.java will not be recompiled since there was no structural change in B.java.
After this bit of clarification on how JDT works, here are some possible answers to your questions:
Yes. This must be done through statically accessible global objects. JDT does this through the JavaCore and JavaModelManager objects. If you don't want to use global singletons, you can make your type store available through your plugin's bundle activator instance. The e4 project does allow dependency injection, which is probably even better (but is not really part of the core Eclipse APIs).
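A minimal sketch of the activator approach (TypeStore and the plug-in class are hypothetical; both the builder and the editor would call MyDslActivator.getDefault().getTypeStore()):

import org.eclipse.core.runtime.Plugin;
import org.osgi.framework.BundleContext;

public class MyDslActivator extends Plugin {
    private static MyDslActivator instance;
    private final TypeStore typeStore = new TypeStore(); // hypothetical in-memory type store

    @Override
    public void start(BundleContext context) throws Exception {
        super.start(context);
        instance = this; // Eclipse instantiates the activator; we just remember it
    }

    public static MyDslActivator getDefault() { return instance; }

    public TypeStore getTypeStore() { return typeStore; }
}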
I think persisting the information on the file system is your best bet. The only real way to determine incremental-compile dependencies is to do a full build, so you need to persist the information somewhere. Again, this is how JDT does it: the information is stored in your workspace's .metadata directory, somewhere under the org.eclipse.core.resources plugin. Have a look at the org.eclipse.jdt.internal.core.builder.State class to see the implementation.
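A simple way to do the same in your own plugin is the plug-in state location, which lives under [workspace]/.metadata/.plugins/<plugin-id>. A sketch (the file name is made up, and this assumes TypeStore implements java.io.Serializable; any other serialization format works just as well):

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;

// Inside the activator from the previous sketch: save on stop(), reload on start().
void saveTypeStore() throws IOException {
    File file = getStateLocation().append("typestore.bin").toFile(); // hypothetical file name
    try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(file))) {
        out.writeObject(typeStore);
    }
}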
So, this may not be the answer you are looking for, but I think this is the most promising way to approach your problem.

How to provide specific GWT implementations

Suppose I am working on exposing some of my server-side classes to a GWT application, but certain parts could be done much better using GWT-specific components (JSNI, for instance).
What are some techniques for doing so without being too hacky?
For instance, I am aware of using a subpackage and the <super-source/> tag, but this requires the package names to be different, which causes Eclipse to complain. The usual advice in the community is to tell Eclipse to treat that directory as a source folder, but then Eclipse complains that there are two classes with the same name.
Ideally, there would just be a way to keep everything in a single source tree, and actually have different classes which apply the alternate implementations. This would feel like a more OO approach.
I would like to add a suffix like _gwt to a class to accomplish this automatically, and I know I could write a script to do this kind of transformation, but that is a kludge for sure.
I've been considering using Google's GIN/GUICE libraries for my projects in general, and I think there might be some kind of a solution there, but I am not sure as I have not thoroughly investigated it.
What are some solutions you have tried in the past on GWT projects?
The easiest way to have split implementations is to use super-source code, but only enough to instantiate a uniquely-named instance or dispatch to a different method. Ideally, the super-source implementation is just a few lines long, and not so bad that you can't roll it by hand.
To work around the Eclipse/javac double-mapping and package-name issues, the GWT source tree uses two top-level roots for user code: user/src and user/super. For example, the AutoBeans package has a split implementation of JSON quoting and evaluation: one for the JVM and one for the browser.
There's really no non-kludgy way to implement super-source, as this is a feature well outside what you can specify in the language. Nothing lets you say "use this implementation in this environment" without the help of some external tool.
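For completeness, the super-source wiring itself is only one line in the module descriptor. With a layout where the browser variants live under a super/ directory that mirrors the real package structure, it would look roughly like this (module name and paths are illustrative):

<!-- com/example/MyModule.gwt.xml (illustrative) -->
<module>
  <source path="client"/>
  <!-- Files under com/example/super/ are re-rooted, so
       com/example/super/com/example/shared/Foo.java
       replaces com.example.shared.Foo when compiling to JavaScript,
       while the JVM keeps using the original class. -->
  <super-source path="super"/>
</module>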