Can I use Sorbet without sigs in the source code?

I would like to use Sorbet in my Ruby projects at work. To make the process as smooth as possible, I'd like to know whether it is possible to add static type checking using only RBI files in the sorbet/ folder.
The idea is to avoid adding signatures to the source code, so my colleagues do not complain, and instead maintain the signatures in RBI files on the side. This way I can start typing the code and benefiting from it in my local environment until coverage is advanced enough.
Thanks

Yes, it's possible for static checking, but not for runtime checking. That should be enough for your use case, though.
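For illustration, a minimal sketch of that setup (file path, class, and method are hypothetical): the signatures live in an RBI file, and the corresponding .rb source needs no annotations and no sorbet-runtime calls.

    # typed: true
    # sorbet/rbi/shims/invoice.rbi -- hypothetical shim; sigs are allowed in
    # RBI files without extend T::Sig, and they never execute at runtime.
    class Invoice
      sig { params(amount_cents: Integer).returns(String) }
      def format_amount(amount_cents); end
    end

srb tc will then check call sites of Invoice#format_amount statically, while runtime checking stays off simply because the source files contain no sigs.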

Related

How to prevent the Perl 6 compiler from changing the name of a dynamic link library

I'm making a Perl 6 package which contains some C source files that will be compiled into a dynamic link library. I found that the name of the library, such as libperl.so, is changed into something like "A858A3D6EC5363B3D3F59B1.so" after "zef install". However, the name is used in Python code as a module name (libperl), and after the change it is no longer a valid identifier. Is it possible to prevent the change? If so, what should I do?
I am not sure if it's possible to do that. Maybe it is.
Inspired by #raiph's link, however, I decided to create a soft link instead. Now the package works well.
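The workaround looks roughly like this (paths are illustrative; the hashed file is the one zef installed):

    # Point the expected name at the renamed library so it can still be
    # loaded as "libperl":
    ln -s /path/to/installed/resources/A858A3D6EC5363B3D3F59B1.so libperl.so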

GNU Autotools: what are "*.h.in" files?

I am new to autotools and can't find out where *.h.in files come from.
I am trying to create a small hello_demo for my library, where I include the *.h header and use it. make throws an error that the *.h.in file is not found.
Any guidance on how to do this the proper way would be much appreciated.
Assuming you're talking about config.h.in, it is generated by default by autoheader. Strictly speaking, though, you should not need it.
Make sure you're using autoreconf -fis to generate your autotools support files; that will call all the needed tools for the generation to work. I would also point you to my guide for the very minimal code you need to build a project, which does not involve autoheader at all.
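As a sketch of that minimal setup (project name and version are placeholders), a configure.ac that never involves autoheader could look like:

    # configure.ac -- no AC_CONFIG_HEADERS, so no config.h.in is generated
    AC_INIT([hello_demo], [0.1])
    AM_INIT_AUTOMAKE([foreign])
    AC_PROG_CC
    AC_CONFIG_FILES([Makefile])
    AC_OUTPUT

After autoreconf -fis you get configure, Makefile.in, and the rest; a config.h.in would only appear if you added AC_CONFIG_HEADERS([config.h]).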

Make Modelica class read-only in Dymola

Is there a way to make a user-written class read-only in Dymola? I want to avoid modifying it by accident when working on models that use it.
There are two ways I know of. The first is to make the files read-only on the file system; I'm fairly sure Dymola will recognize that and prevent modification.
There is also a way to add an annotation that is essentially a checksum or hash, but this is typically done by DS as a way of "signing" libraries. I don't think there is a way for ordinary users to perform this signing.
Have you checked the documentation? It might be covered there. I don't have access to a machine with Dymola on it right now to check.
Since Dymola 2017 FD01, classes can be locked.
Right-click a class in the package browser and select Lock...
This will create the annotation
__Dymola_LockedEditing="<reason-for-locking>"
and the class and its nested classes (e.g., classes in a package) are no longer editable.
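In source form, the result looks roughly like this (package name and reason text are placeholders):

    package MyLibrary
      // ... classes ...
      annotation(__Dymola_LockedEditing="Locked to prevent accidental edits");
    end MyLibrary;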

Compiling Sass with custom variables per request under Rails 3.1

In a Rails 3.1 app, one controller needs to have all its views compile whatever Sass stylesheets they might need per request using a set of custom variables. Ideally, the compilation must happen via the asset pipeline so that content-based asset names (ones that include an MD5 hash of the content) are generated. It is important for the solution to use pure Sass capabilities as opposed to resorting to, for example, ERB processing of Sass stylesheets.
From the research I've done here and elsewhere, the following seems like a possible approach:
Set up variable access
Create some type of variable-accessor bridge using custom Sass functions, e.g., as described by Konstantin Haase here (gist); see the sketch after this outline. This seems pretty easy to do.
Configure all variable access via a Sass partial, e.g., in _base.sass which is the Compass way. The partial can use the custom functions defined above. Also easy.
Capture all asset references
Decorate the asset_path method of the view object. I have this working well.
Resolve references using a custom subclass of Sprockets::Environment. This is also working well.
Force asset recompilation, regardless of file modification times
I have not found a good solution for this yet.
I've seen examples of kicking off Sass processing manually by instantiating a new Sass::Engine and passing custom data that will be available in Sass::Script::Functions::EvaluationContext. The problem with this approach is that I'd have to manage file naming and paths myself and I'd always run the risk of possible deviation from what Sprockets does.
I wasn't able to find any examples of forcing Sprockets processing on a per-request basis, regardless of file mod times, that also allows for custom variable passing.
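For reference, here is a sketch of the variable-accessor bridge from step (1), in the spirit of the Haase gist. The function name and the Thread.current storage are assumptions, not part of any library API:

    # Ruby Sass 3.x (Rails 3.1 era). The stylesheet calls user_color();
    # Ruby answers with a per-request value stashed by the controller.
    module Sass::Script::Functions
      def user_color
        hex = Thread.current[:user_color] || "336699"   # "rrggbb"
        rgb = [hex[0, 2], hex[2, 2], hex[4, 2]].map { |c| c.to_i(16) }
        Sass::Script::Color.new(rgb)
      end
      declare :user_color, []
    end

A partial such as _base.sass can then expose it as a plain variable ($user-color: user_color()), so the rest of the stylesheets stay pure Sass.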
I'd appreciate comments on the general approach as well as any specific pointers/suggestions on how to best handle (3).
Yes, it is possible. Look here: SASS: Set variable at compile time.
I wrote a solution to address it; I'll post it and push it here soon, in case you or someone else still needs it.
Sass is designed to be precompiled to CSS. Having Sprockets do this for every request for a view is not going to perform well: every request has to wait for the compilation to finish, and compilation is not fast (from a page-serving point of view).
The MD5 generation is within Sprockets, so if you are changing custom variables you will have to force a compilation on every single request to make sure the changes are seen, because the view is (probably) not going to know about them.
It sounds as though this is not really in the sweet-spot of the asset-pipeline, and you should look at doing something more optimised for truly dynamic CSS.
Sorry. :-)

What's the best approach to incremental compilation when building a DSL using Eclipse?

As suggested by the Eclipse documentation, I have an org.eclipse.core.resources.IncrementalProjectBuilder that compiles each source file; separately, I also have an org.eclipse.ui.editors.text.TextEditor that can edit each source file. Each source file is compiled into its own compilation unit, but it can reference types from other (already compiled) source files.
Two tasks for which this is important are:
Compiling (to make sure the types we're using actually exist)
Autocomplete (to look up the type so we can see what properties/methods are present on it)
To accomplish this, I want to store a representation of all the compiled types in memory (referred to below as my "type store").
My question is two fold:
Task one above is performed by the builder and task two by the editor. So that they both have access to this type store, should I create a static store somewhere that they can both access, or does Eclipse provide a neater way to deal with this problem? Note that it is Eclipse, not me, that instantiates the builders and editors when they are needed.
When opening Eclipse, I don't want to have to rebuild the whole project just to re-populate my type store. My best solution so far is to persist this data somewhere and re-populate the store from it (perhaps upon project open). Is this how other incremental compilers typically do it? I believe Java's approach is to use a special parser that efficiently extracts this data from the class files.
Any insights would be really appreciated. This is my first DSL.
This is an interesting question and one that doesn't have a simple solution. I'll try to describe a potential solution and also describe in a little bit more detail how JDT accomplishes incremental compilation.
First, a bit about JDT:
Yes, JDT does read class files for some of its information, but only for libraries that don't have source code. And this information is really only used for editing assistance (content assist, navigation, etc).
JDT computes incremental compilation by keeping track of dependencies between compilation units as they are compiled. This state information is stored on disk and retrieved and updated after each compile.
As a more complete example, let's say that after a full build, JDT determines that A.java depends on B.java, which depends on C.java.
If there is a structural change in C.java (a structural change is one that can affect outside files, e.g., adding or removing a non-private field or method), then B.java will be recompiled. A.java will not be recompiled, since there was no structural change in B.java.
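As a hedged illustration of that chain (each class would live in its own file):

    // C.java -- adding or removing a non-private member like value() is a
    // structural change, so B.java, which resolves against C, is recompiled.
    public class C {
        public int value() { return 42; }
        // Editing only a method body, or a private member, is non-structural:
        // dependents keep their previous compilation results.
    }

    // B.java depends on C ...
    public class B {
        int doubled() { return new C().value() * 2; }
    }

    // ... while A.java depends only on B, so it is recompiled only when
    // B's own structure changes.
    public class A {
        int quadrupled() { return new B().doubled() * 2; }
    }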
After this bit of clarification on how JDT works, here are some possible answers to your questions:
Yes. This must be done through statically accessible global objects. JDT does this through the JavaCore and JavaModelManager objects. If you don't want to use global singletons, you can make your type store available through your plug-in's Bundle activator instance. The e4 project does allow dependency injection, which is probably even better (but is not really part of the core Eclipse APIs).
I think persisting the information on the file system is your best bet. The only real way to determine incremental compile dependencies is to do a full build, so you need to persist the information somewhere. Again, this is how JDT does it: the information is stored in your workspace's .metadata directory, somewhere under the org.eclipse.core.resources plugin. You can have a look at the org.eclipse.jdt.internal.core.builder.State class to see the implementation.
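To make both points concrete, here is a rough sketch (all names hypothetical, and deliberately much simpler than JDT's State class) of a statically reachable type store that also persists itself, so a new session can re-populate it without a full rebuild:

    // TypeStore.java -- one store shared by the builder and the editor,
    // with naive Java serialization for persistence across sessions.
    import java.io.*;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public final class TypeStore implements Serializable {
        private static final long serialVersionUID = 1L;
        private static TypeStore instance;

        // Qualified type name -> description of its members; a String stands
        // in here for whatever representation your DSL needs.
        private final Map<String, String> types = new ConcurrentHashMap<>();

        public static synchronized TypeStore getInstance(File stateFile) {
            if (instance == null) {
                instance = load(stateFile);
            }
            return instance;
        }

        public void put(String name, String description) { types.put(name, description); }
        public String get(String name) { return types.get(name); }

        // Call from the builder after each build; in a real plug-in the
        // stateFile would live under Plugin.getStateLocation().
        public void save(File stateFile) throws IOException {
            try (ObjectOutputStream out =
                    new ObjectOutputStream(new FileOutputStream(stateFile))) {
                out.writeObject(this);
            }
        }

        private static TypeStore load(File stateFile) {
            if (stateFile.exists()) {
                try (ObjectInputStream in =
                        new ObjectInputStream(new FileInputStream(stateFile))) {
                    return (TypeStore) in.readObject();
                } catch (IOException | ClassNotFoundException e) {
                    // Corrupt or stale state: fall through to an empty store
                    // and let the builder schedule a full rebuild.
                }
            }
            return new TypeStore();
        }
    }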
So, this may not be the answer you are looking for, but I think this is the most promising way to approach your problem.