missing packages in lucene 4.0 snapshot - lucene

Does anybody know why there is no QueryParser, no IndexWriter.MaxFieldLength(25000), and some other classes in the Lucene 4.0 snapshot?
I'm having a hard time porting my code to this newer version, even though I'm following the code as given here: http://search-lucene.com/jd/lucene/overview-summary.html
How do I find the missing packages, and how do I get them? The snapshot jar doesn't seem to contain all the features.
thanks

Lucene has been re-architected, and some classes which used to be in the core module are now in submodules. You will now find the QueryParser stuff in the queryparser submodule. Similarly, lots of useful analyzers, tokenizers and token filters have been moved to the analysis submodule.
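For example, with the lucene-core, lucene-analyzers-common and lucene-queryparser jars on the classpath, parsing a query in 4.x looks roughly like the sketch below (based on the 4.x javadocs; the field name "contents" and the class name are placeholders for illustration, not a verified snippet):
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.Version;

public class ParseExample {
    public static void main(String[] args) throws Exception {
        // StandardAnalyzer now ships in the analyzers-common submodule,
        // QueryParser in the queryparser submodule.
        StandardAnalyzer analyzer = new StandardAnalyzer(Version.LUCENE_40);
        QueryParser parser = new QueryParser(Version.LUCENE_40, "contents", analyzer);
        Query query = parser.parse("missing packages");
        System.out.println(query);
    }
}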
Regarding IndexWriter, the maximum field length option has been deprecated; it is now recommended to wrap an analyzer with LimitTokenCountAnalyzer (in the analysis submodule) instead.
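A minimal sketch of that replacement, assuming the analyzers-common jar is available (class names per the 4.x javadocs; treat this as an illustration rather than a drop-in snippet):
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.miscellaneous.LimitTokenCountAnalyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;

public class LimitedIndexWriterExample {
    public static void main(String[] args) throws Exception {
        Directory dir = new RAMDirectory();

        // Stand-in for the old IndexWriter.MaxFieldLength(25000):
        // only the first 25000 tokens per field are indexed.
        Analyzer limited = new LimitTokenCountAnalyzer(
                new StandardAnalyzer(Version.LUCENE_40), 25000);

        IndexWriterConfig config = new IndexWriterConfig(Version.LUCENE_40, limited);
        IndexWriter writer = new IndexWriter(dir, config);
        // ... addDocument(...) calls go here ...
        writer.close();
    }
}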

Related

Is there a way to compare the source code differences between 2 PyPI packages?

I have a PyPI package that I built and stored on a PyPI server a few days back. Now I want to compare the source code of that already-built package against the code built today. Is there any way to do this?
I want to compare the already-built PyPI package with the newly built code, and only if there is a difference in the source code should a new package be created and uploaded to the PyPI server.
If you have only Python bytecode, you cannot get the corresponding source code back (that hypothetical transformation is called decompilation, and it is not possible in general; read e.g. about Rice's theorem), because any translation (such as the one done by the python program) from source code to bytecode loses some information (e.g. the names of local variables, or the comments explaining the intent of the code).
Deciding equality of the behavior of two functions by static analysis of their source code (and the observable behavior of your code is what you really care about) is an undecidable problem. Learn more about the λ-calculus; it is deeply related to that question.
The source code (by definition, the preferred form of code on which developers work) is not only for computers, but mostly for fellow developers: in other words, most of its value and its meaning is a social one (and that is what free software is about). Read more about the semantics of programs.
For example, renaming a variable from i to x may convey the implicit hypothesis that the intended dynamic runtime type of that variable's value has changed from an integer to a floating-point number.
Maybe you want some kind of package manager (or some version control system, if you deal with source code, or some build automation tool, if you build then install software out of it). Python has tooling to manage packages. The scons build automation tool uses Python, but there are many other build automation tools, GNU make being a common one (you could use it to drive compilation from .py source files to .pyc bytecode files and their installation). For version control, I recommend git.
PS. Your question is very unclear and smells like some XY problem.

What's deprecated in KitWare's guide to Finding libraries in CMake?

I need to write a CMake FindXYZ-type module. Googling, I've found this guide:
https://cmake.org/Wiki/CMake:How_To_Find_Libraries
from Kitware, but there's a disclaimer about it being deprecated. Which significant changes, if any, have been made to how these modules are written over the past, say, 6-7 years?
Yes, the CMake Wiki's content has now officially moved into CMake's documentation, so the "deprecated" warning is more a general note that the Wiki is no longer maintained.
In your case, the main part of the CMake Wiki's "How To Find Libraries" page moved to the cmake-packages chapter of CMake's documentation.
What has changed?
I think the major change over the last years is what Stephen Kelly in his "Embracing Modern CMake" talk called:
Modern CMake packages define IMPORTED targets
find_package(Foo REQUIRED)
add_executable(hello main.cpp)
target_link_libraries(hello
    Foo::Core
)
The same basic point is made in the cmake-developer - Find Modules chapter of CMake's documentation:
The traditional approach is to use variables for everything, including libraries and executables. This is what most of the existing find modules provided by CMake do.
The more modern approach is to behave as much like config-file packages as possible, by providing imported targets. This has the advantage of propagating Transitive Usage Requirements to consumers.
Details
You can see this "modern approach" as an extension of the previous methods (like in "FindZLIB: Add imported target and documentation" commit).
What should definitely be there (the core of all "Find Modules" for years now) is the find_package_handle_standard_args() macro.
This macro is built around the handling of the ..._FOUND cached variable.
My recommendation would be to concentrate on the imported targets; the ..._INCLUDE_DIRS and ..._LIBRARIES variables are just a side effect of having to cache your find results somewhere.
No, I don't think there are any significant changes. I still use it.
I think they are just trying to get you to look at their other documentation, like find_package.
When writing new Find modules, I normally just look at the other FindXXX.cmake files as examples/templates and go from there.

project.json versioning format

I'm looking for a formal definition of version number formats for .NET Core project.json files.
version
Visual Studio creates a default version number of "1.0.0-*". I would love for this to mean that the * gets updated on successive builds (it doesn't); the build version number is 1.0.0. What does the * mean, and what are the legal possibilities?
dependencies
I expected the dependency numbering to follow the NuGet versioning rules, given that KPM is basically a NuGet front end, but it doesn't appear to support bracket numbering (e.g. "[1,2)"): I get "not a valid version string" when I try anything other than a blank or x.x-* format.
Outside of the source, does anyone have a link to a formal definition?
I'm not sure what's wrong with looking into the source for a definition. I think that's the most accurate place to search, especially now that vNext is hosted on GitHub.
Looking at the exception described, we're pointed to SemanticVersion.cs.
In the method TryParseInternal, it's fairly obvious why you're running into issues when attempting to declare min/max versions that way. There is simply no handling for [,] or (,) built into that method.
If we look into the regular NuGet version specification, it's obvious that TryParseVersionSpec does have this handling built in.
As for documentation specifying acceptable formats, you'll probably have to wait until it's out of CTP status. If you believe it's an issue, you should document it in GitHub. The contributors are very responsive to these types of issues. Personally I'm not sure if there's a need for setting a maximum version of a dependency when it's deployed with your build.

Intellij Idea - how to maintain multiple versions of the package in one project?

I have a case as follows and don't know if there is any convenient solution:
I wrote a set of sources and put them into a package.
Next, I refactored it deeply for performance reasons.
Now I have a new version which contains at least one bug that I have to find.
I would like to have both versions of my package in one project and easily switch between them when I compile and run the test application.
Of course, simply compiling both of them and choosing at runtime runs into name conflicts.
Is there any smart way to solve this?
You are looking for a version control system (which is supported in IntelliJ IDEA).
Try git; it has good support in IDEA.
http://www.jetbrains.com/idea/webhelp/using-git-integration.html
http://www.jetbrains.com/idea/features/version_control.html
Here is a good link for git / vcs tutorials.
http://sixrevisions.com/resources/git-tutorials-beginners/

Examples of Semantic Version Names

I have been reading about semver. I really like the general idea. However, when it comes to putting it into practice, I feel like I'm missing some key pieces of information. I'm not sure where the name of a library fits in, or what to do with file variants. For instance, is the file name something like [framework]-[semver].min.js? Are there popular JavaScript frameworks that use semver? I don't know of any.
Thank you!
Let me try to explain.
If you are not developing a library that you would like to maintain for years to come, don't bother about it. If you prefer to version every piece of development, read on.
Suppose you are an architect or developer building a library that is meant to be used by hundreds of developers over time, in a distributed manner. You really need to be careful about what you are doing and about what your developers are adding (interesting features that tempt you to push those changes into the currently distributed file). How do you tell your library's users when and why to upgrade, and in what scenarios? People have always followed some sort of versioning, and interestingly, their schemes mostly work fine.
Then why do you need semver?
The idea is that there should be a concrete specification for a group of people to follow collectively, even if they already know it in their heads. With that thought, the authors made a specification: they observed and collected the best practices about versioning software from around the world and listed them on a single website, semver.org. Its main principles are:
Imagine you have already released your library with a version "lib.1.0.98". Now follow these rules for subsequent development.
Say your library is bundled and named xyz. Then,
Given a version number MAJOR.MINOR.PATCH (like xyz.MAJOR.MINOR.PATCH), increment the:
1. MAJOR version when you make incompatible API changes
(existing code that uses your library breaks if users adopt the new version without changing their programs),
2. MINOR version when you add functionality in a backwards-compatible manner
(existing code keeps working, and there may also be improvements in performance and features), and
3. PATCH version when you make backwards-compatible bug fixes.
Additional labels for pre-release and build metadata are available as extensions to the MAJOR.MINOR.PATCH format.
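As a purely illustrative sketch (not part of the SemVer specification itself, and the class and method names below are hypothetical), the increment rules above can be expressed in a few lines of Java:
// Hypothetical helper illustrating the SemVer increment rules.
public final class SemVer {
    public final int major;
    public final int minor;
    public final int patch;

    public SemVer(int major, int minor, int patch) {
        this.major = major;
        this.minor = minor;
        this.patch = patch;
    }

    // Incompatible API change: bump MAJOR, reset MINOR and PATCH.
    public SemVer nextMajor() { return new SemVer(major + 1, 0, 0); }

    // Backwards-compatible new functionality: bump MINOR, reset PATCH.
    public SemVer nextMinor() { return new SemVer(major, minor + 1, 0); }

    // Backwards-compatible bug fix: bump PATCH only.
    public SemVer nextPatch() { return new SemVer(major, minor, patch + 1); }

    @Override
    public String toString() { return major + "." + minor + "." + patch; }
}
For example, new SemVer(1, 0, 98).nextMinor() yields "1.1.0", while nextPatch() would yield "1.0.99".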
If you are not a developer, or are not in a position to develop a library against such a standard, you need not worry about semver at all.
Finally, the famous d3 library follows this practice.
Semantic Versioning only defines how to name your versions. It does not specify what you will do with your version number afterwards. You can put the version number in package names, you can store it in a properties file inside your application, or you can just publish it in a wiki. All those options are open to discussion and are not part of the problem space addressed by SemVer.
semver is used by npm and bower (and perhaps some other tools) for dependency management. Using semver it is possible to decide which versions of which packages to use when multiple libraries in use depend on the same library.
As others have said, semantic versioning is a standard versioning scheme that tells your users which versions of your library should be compatible with each other, and which ones are not.
The idea is to give your users more confidence that it's safe to upgrade to a newer patch/version, because it's tried, tested, and true to being backwards compatible with the previous version (minor increments). At least, that's what you are telling your users.
As far as tooling goes, I don't do much in JavaScript, but I typically let my build server handle stamping my assemblies etc. with the correct version. I have a static major number I bump whenever I make breaking changes, a static minor number I bump every time I add new features, and an auto-incrementing patch number for whenever I check in bug fixes.
Especially if this is a JavaScript library you plan to share on a public repository of some kind (nuget, gem, etc.), you probably want some form of automated packaging system, and you put the logic for specifying your version number in there (in the package metadata and in the name of the JavaScript file, which is typically the standard I've seen).
Take a look at sbt which is the Scala Build Tool. In it, we write dependencies like this:
val scalatest = "org.scalatest" %% "core" % "2.1.7" % "test"
val jodatime = "org.joda" % "jodatime" % "1.4.5"
Here the %% operator means "append the version of Scala that you're building for." Packaging things in this language generally creates JAR files with names like <my project>_<scala version>_<library version>.jar, which is quite handy for semantically naming things automagically. The % operator can be interpreted as "don't version this part."
That said, this convention resulted from the fact that the same library compiled against different Scala versions is not binary compatible across those versions. So it was more a result of the binary incompatibilities than a conscious design choice.