I'm writing a Kotlin program which, following convention, lives in src/main/kotlin/mypackage/*.kt, with each source file containing package mypackage.
I have used the IntelliJ IDEA option to create a test class, FooBarTest, which lives in src/test/kotlin/mypackage/FooBarTest.kt. So far, so good.
However, to my surprise, FooBarTest.kt does not contain package mypackage. This means the things it tests would need to be imported explicitly with separate import statements.
Is IntelliJ IDEA telling me a surprising truth, that unlike main source files, test source files should not specify a package?
Or is it making a mistake, omitting a package statement that should be there, and I should go ahead and put in the package mypackage statement by hand?
I think IDEA is making a mistake, or at least being less helpful than it could be.
Of course, there's no real necessity for test classes to be in the same package as the tested classes. But in my experience, it makes good sense: they're easier to find, and as you say, it avoids lots of import statements.
It also makes the file hierarchy align with the package hierarchy. Again, while in Kotlin there's no absolute necessity for that, it makes files easier to find, avoids unexpected clashes, and I've not yet found a reason to diverge from it.
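For the record, adding the declaration by hand is all it takes; the file then mirrors the main source set (the class body below is just a placeholder):

```kotlin
// src/test/kotlin/mypackage/FooBarTest.kt
package mypackage  // added manually, matching both the directory and the main sources

class FooBarTest {
    // tests here can reference mypackage types without any import statements
}
```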
I'm trying to use YGuard to obfuscate some parts of my program which contain encryption methods and other sensitive information (which I'll further protect in other ways once I figure this out).
Because the program is quite complex and contains quite a few libraries, it gives a series of warnings and finally fails with:
WARNING: Method initialize_ffi_type is native but com/sun/jna/Native is not kept/exposed.
WARNING: Method getAPIChecksum is native but com/sun/jna/Native is not kept/exposed.
[...]
yGuard was unable to resolve a class (java.lang.ClassNotFoundException: com.sun.tools.javac.parser.Parser$Factory)
Now, whatever that means, I'd like to:
exclude the libraries, which, being open source, have nothing to hide
obfuscate just the methods and variables of certain classes or packages and leave the rest untouched.
So far it seems yGuard requires me to specify what I don't want obfuscated, but I have far too many classes for that. I'd like to do the opposite: specify what I want obfuscated, then gradually grow the set of classes and packages that are covered.
Thanks
It is normal practice for obfuscators to have you specify what should be kept, rather than the other way around.
However, you can define library classpaths with the externalclasses rule (link). Classes defined on this path are neither obfuscated nor shrunk. The second error you are getting (ClassNotFoundException) indicates that you have not specified all the libraries your project depends on.
In order to obfuscate your code now, what you could do is:
Pack the code that you want to be obfuscated into one jar and define everything else as a library
Use a patternset in your keep rule (link) to define everything to be kept except the classes that you want obfuscated (see the sketch below).
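For illustration, here is a rough sketch of what the second option might look like in the Ant configuration. The jar names and package names are invented, and the exact element and attribute names should be double-checked against the yGuard manual:

```xml
<yguard>
  <inoutpair in="app.jar" out="app_obf.jar"/>

  <!-- Libraries on this path are resolved but neither obfuscated nor shrunk -->
  <externalclasses>
    <pathelement location="lib/jna.jar"/>
  </externalclasses>

  <rename>
    <keep>
      <!-- Keep everything (down to private members) except the sensitive package -->
      <class classes="private" methods="private" fields="private">
        <patternset>
          <include name="com.example.**"/>
          <exclude name="com.example.crypto.**"/>
        </patternset>
      </class>
    </keep>
  </rename>
</yguard>
```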
I'm writing tests for an OCaml module. Some of the functions in the module are not meant to be publicly visible, and so they're not included in the signature (.mli file).
I can't call these functions from my tests, because they're not visible outside of the module. So I'm having a hard time testing them. Is there a good way to get around this? For example, a way to tell ocamlc not to read the signature from the .mli file when it's compiling tests?
Some ideas:
Actually export the test functions, but use ocamldoc's stop comment (**/**) feature to avoid displaying the exports in the documentation.
Put all of your tests entirely in another module. However, this is difficult if you have abstract types because your tests may very well need access to the internal implementation.
Create a submodule Test where all your tests go. That way it is clear which functions are just for testing. Possibly combine this with the (**/**) feature to also hide the submodule from the documentation (a sketch follows this list).
I've heard that people sometimes keep their .mli files separate from their .ml files (in a different directory) so that they can compile with or without them (by telling ocamlc to look in the separate directory or not). I just tried a few experiments with this; I think it can be made to work, but it seems a little error-prone to me. Maybe you could put the tests of the internal functions into the module itself. Exporting the test functions might not violate modularity too badly, though of course it clutters up the module.
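Here is a minimal sketch of the Test-submodule idea combined with the stop comment (module and function names are made up):

```ocaml
(* foo.ml *)
let helper x = x * 2                 (* internal; deliberately absent from the public API *)
let public_api x = helper x + 1

module Test = struct
  (* exercises the internal helper directly *)
  let run () = assert (helper 3 = 6)
end

(* foo.mli *)
val public_api : int -> int

(**/**)
(* ocamldoc stops rendering here, so Test stays out of the generated docs *)
module Test : sig
  val run : unit -> unit
end
```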
As suggested by the Eclipse documentation, I have an org.eclipse.core.resources.IncrementalProjectBuilder that compiles each source file and separately I also have a org.eclipse.ui.editors.text.TextEditor that can edit each source file. Each source file is compiled into its own compilation unit, but it can reference types from other (already compiled) source files.
Two tasks for which this is important are:
Compiling (to make sure the types we're using actually exist)
Autocomplete (to look up the type so we can see what properties/methods are present on it)
To accomplish this, I want to store a representation of all the compiled types in memory (referred to below as my "type store").
My question is twofold:
Task one above is performed by the builder and task two by the editor. So that they both have access to this type store, should I create a static store somewhere that both can access, or does Eclipse provide a neater way to deal with this problem? Note that it is Eclipse, not me, that instantiates the builders and editors when they are needed.
When opening eclipse, I don't want to have to rebuild the whole project just so I can re-populate my type store. My best solution so far is to persist this data somewhere and then repopulate my store from that (perhaps upon project open). Is this how other incremental compilers typically do this? I believe Java's approach is to use a special parser that efficiently extracts this data from the class files.
Any insights would be really appreciated. This is my first DSL.
This is an interesting question and one that doesn't have a simple solution. I'll try to describe a potential solution and also describe in a little bit more detail how JDT accomplishes incremental compilation.
First, a bit about JDT:
Yes, JDT does read class files for some of its information, but only for libraries that don't have source code, and this information is really only used for editing assistance (content assist, navigation, etc.).
JDT computes incremental compilation by keeping track of dependencies between compilation units as they are compiled. This state information is stored on disk and retrieved and updated after each compile.
As a more complete example, let's say that after a full build, JDT determines that A.java depends on B.java, which depends on C.java.
If there is a structural change in C.java (a structural change is one that can affect other files, for example adding or removing a non-private field or method), then B.java will be recompiled. A.java will not be recompiled, since there was no structural change in B.java.
After this bit of clarification on how JDT works, here are some possible answers to your questions:
Yes. This must be done through statically accessible global objects. JDT does this through the JavaCore and JavaModelManager objects. If you don't want to use global singletons, you can make your type store available through your plugin's Bundle activator instance. The e4 project does allow dependency injection, which is probably even better (but is not really a part of the core Eclipse APIs).
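As a minimal sketch of the activator approach (TypeStore is a hypothetical class standing in for whatever your store looks like):

```java
import org.eclipse.core.runtime.Plugin;

public class MyDslPlugin extends Plugin {
    private static MyDslPlugin instance;
    private final TypeStore typeStore = new TypeStore();

    public MyDslPlugin() {
        instance = this; // Eclipse instantiates the activator exactly once
    }

    public static MyDslPlugin getDefault() {
        return instance;
    }

    public TypeStore getTypeStore() {
        return typeStore;
    }
}
```

Both the builder and the editor can then reach the store via MyDslPlugin.getDefault().getTypeStore(), without either one owning it.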
I think persisting the information on the file system is your best bet. The only real way to determine incremental compile dependencies is to do a full build, so you need to persist the information somewhere. Again, this is how JDT does it: the information is stored in your workspace's .metadata directory, somewhere in the org.eclipse.core.resources plugin. You can have a look at the org.eclipse.jdt.internal.core.builder.State class to see the implementation.
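Continuing the sketch above, the plug-in's state location is a natural place to persist the store between sessions. Plain Java serialization is just the simplest stand-in here, and the hypothetical TypeStore would need to implement Serializable for it to work:

```java
import java.io.*;

// Methods added to the MyDslPlugin sketch above
private File storeFile() {
    // getStateLocation() is the per-plug-in scratch area under .metadata
    return getStateLocation().append("typestore.dat").toFile();
}

public void saveTypeStore() throws IOException {
    try (ObjectOutputStream out =
             new ObjectOutputStream(new FileOutputStream(storeFile()))) {
        out.writeObject(typeStore);
    }
}

public TypeStore loadTypeStore() throws IOException, ClassNotFoundException {
    try (ObjectInputStream in =
             new ObjectInputStream(new FileInputStream(storeFile()))) {
        return (TypeStore) in.readObject();
    }
}
```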
So, this may not be the answer you are looking for, but I think this is the most promising way to approach your problem.
Suppose I am working on exposing some of my server-side classes to a GWT application, but certain parts could be done much better using GWT-specific components (like JSNI, for instance).
What are some techniques for doing so without being too hacky?
For instance, I am aware of using a subpackage and the <super-source/> tag, but this requires the package names to be different, which causes Eclipse to complain. The general solution in the community is to tell Eclipse to use that as a source folder, but then Eclipse complains about there being two classes with the same name.
Ideally, there would just be a way to keep everything in a single source tree, and actually have different classes which apply the alternate implementations. This would feel like a more OO approach.
I would like to be able to add a suffix like _gwt to a class and have this happen automatically. I know I could write a script to do this kind of transformation, but that is a kludge for sure.
I've been considering using Google's GIN/GUICE libraries for my projects in general, and I think there might be some kind of a solution there, but I am not sure as I have not thoroughly investigated it.
What are some solutions you have tried in the past on GWT projects?
The easiest way to have split implementations is to use super-source code, but only enough to instantiate a uniquely-named instance or dispatch to a different method. Ideally, the super-source implementation is just a few lines long, and not so bad that you can't roll it by hand.
To work around the Eclipse / javac double-mapping and package name issues, the GWT source uses two top-level roots for user code: user/src and user/super. For example, the AutoBeans package has a split-implementation of JSON quoting and evaluation, one for the JVM and one for the browser.
There's really no non-kludgy way to implement super-source, as this is a feature way outside what you can specify in the language. There's nothing that lets you say "use this implementation in this environment" without the use of some external tool.
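For reference, the wiring itself is a single line in the module file; the folder name "super" is conventional, and the layout below is only an example:

```xml
<!-- MyModule.gwt.xml -->
<module>
  <inherits name="com.google.gwt.user.User"/>
  <!-- Files under the "super" folder shadow same-named classes from the
       regular source path when the GWT compiler runs; the JVM never sees them -->
  <super-source path="super"/>
</module>
```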
I have developed a large MSBuild project to build a portion of our solution. There are a lot of things going on: XML parsing/replacing, Windows services, remote copy, etc. As a result, the file has grown really difficult to manage, despite my best efforts to add decorations in comments.
As a goof, I broke the main chunks of functionality out into separate files, like "XML.targets", "Services.targets", etc., and imported them into the main "Build.proj". The build still worked, and I immediately found it to be much more manageable.
However, all the info I have read on the Import feature of MSBuild is that it should be used to import reusable targets, i.e., those that can be consumed by any MSBuild project without modification. The separate files I'm creating here are the opposite: specific to one project, and they will break by default if used with anything else unless modified.
So I guess what I'm asking is, even though I can... should I? Is there an inherent danger in using Import strictly for the purpose of organizing a large project? Is there a better way to do this?
Thanks
No, there is no inherent danger. I think it's a good decision to split a large project into several .targets files specific to certain operations, since it reduces overall complexity. The idea behind reusable targets is that they should have as few dependencies on the other parts as possible. By analogy, you can think of separate .targets files as classes: the less coupled they are, the better, because a modification in one targets file will be less likely to break the whole process.

You can take a piece of paper, draw your targets files as points with your main project in the center, and draw all the connections between them. If one targets file overrides a target from another, expects some properties from it, or otherwise depends on it, then there is a connection. In the perfect scenario you'll get something like a star.
In short: you should if it reduces complexity.
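To make the layout concrete, here is a minimal sketch of the split described in the question (the imported file names come from the question; the target names are invented):

```xml
<!-- Build.proj -->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

  <!-- Project-specific pieces, split out purely for readability -->
  <Import Project="XML.targets"/>
  <Import Project="Services.targets"/>

  <!-- Orchestrates targets defined in the imported files -->
  <Target Name="Build" DependsOnTargets="TransformXml;DeployServices"/>

</Project>
```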