Is Skiko right now only available for JVM AWT?

Using this: https://github.com/JetBrains/skiko/
I was able to get the SkiaAwtSample to work; it shows a window with a grid of animating clocks and reports that the backend is OpenGL (I'm on Linux Mint 21 with the NVIDIA proprietary drivers installed). My first impression is that the performance seems average at best. I suspect that if I replicated this with plain old Java2D I'd get similar performance; Java2D's performance tends to be underrated. But it is not performance that I am after.
I want to stop investing in UI and graphics technologies that aren't portable.
The samples directory shows these 4 subdirectories:
SkiaAndroidSample SkiaAwtSample SkiaJsSample SkiaMultiplatformSample
When I try to use the build target in the SkiaJsSample directory, I get a long dependency-resolution error report that amounts to an unmet dependency: it wants org.jetbrains.skiko:skiko:0.0.0-SNAPSHOT with the attribute 'org.jetbrains.kotlin.platform.type' set to 'js'.
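In Gradle terms, the sample is effectively asking for something like this (a sketch reconstructed from the error message, not the actual build file):

    // build.gradle.kts of the JS sample, roughly (assumes the Kotlin
    // Multiplatform plugin and mavenLocal() among the repositories).
    kotlin {
        js { browser() }
        sourceSets {
            val jsMain by getting {
                dependencies {
                    // Resolution only succeeds if an artifact published with the
                    // attribute org.jetbrains.kotlin.platform.type=js is found.
                    implementation("org.jetbrains.skiko:skiko:0.0.0-SNAPSHOT")
                }
            }
        }
    }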
The DEVELOPMENT.md file only mentions building and publishing to the local Maven repository using :skiko:publishToMavenLocal.
Digging further, I tried :skiko-js-wasm-runtime:publishToMavenLocal, but no such task exists.
It seems only the AWT stuff is included in the GitHub repository. Isn't the whole thing open source? I can find WASM-related entries in online Maven repositories, but why can't we build it locally and publish it to our local Maven repos?

Related

Versioning APIs during internal development

In our team we have a number of APIs specified using the OpenAPI Specification (formerly Swagger). We use Maven and OpenAPI Generator to generate code, then build and publish the artifact to our local Nexus. We build our code on TeamCity. The artifact is given the version specified in Maven's pom.xml file.
During development we only use snapshot versions, that is, versions which can be overwritten and will be cleaned up. This is the opposite of release versions, which cannot be overwritten and need administrative privileges to clean up. The reason is that a developer usually changes a little bit at a time, which is much more convenient with snapshot versions. It also makes cleaning up outdated unreleased artifacts much easier.
Our problem is that from time to time a developer makes API changes but forgets to set a new version. This works fine locally, but when the code is built on TeamCity the changed API overwrites the artifact of an older version. A developer not working on that branch will then get a compile error, because the code does not match the API artifact being used.
What do others do? Is there a best practice, preferably with standard tools? We have tried many things and nothing works well. At the same time this issue is so basic that someone must have a good solution, or at least enough experience to point to the least bad solution.
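One low-tech guard, sketched below in Kotlin, is to have CI fail when the version recorded in the API spec and the Maven artifact version drift apart. This is only an illustration of the kind of check that can help; the file paths and the regex-based parsing are all assumptions:

    // Hypothetical CI pre-build check: fail when the OpenAPI spec's
    // info.version does not match the Maven project version, so an API
    // change forces an explicit version bump. The regexes are naive
    // (e.g. the first <version> in a pom may belong to the parent).
    import java.io.File

    fun main() {
        val spec = File("src/main/resources/api.yaml").readText()   // assumed path
        val pom = File("pom.xml").readText()
        val specVersion = Regex("""version:\s*["']?([\w.\-]+)""")
            .find(spec)?.groupValues?.get(1)
        val pomVersion = Regex("""<version>([\w.\-]+)</version>""")
            .find(pom)?.groupValues?.get(1)
        check(specVersion != null && specVersion == pomVersion) {
            "Spec version ($specVersion) != artifact version ($pomVersion): bump the version."
        }
    }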

Use Java 8 features (newer Janino version) in Pentaho Data Integration

Pentaho Data Integration 8.0.x uses Janino 2.5.16, released in 2010, to compile the User Defined Java Class step. There is a Pentaho JIRA for updating it to a newer Janino version, which would bring Java 8 related features in Pentaho v8.2.0 GA, but there is no info on when that will be released.
Is there any other way I can use a newer Janino version (janino-3.0.8.jar) with existing Pentaho for UDJC? I tried copying the updated JAR into lib and also added commons-compiler-3.0.8.jar to satisfy its dependency. Now when I open Spoon, I get the following error:
Please advise on how this can be achieved. I understand that just replacing the JAR may not be enough, but I want to know if something else can be done.
This is not easy. As your ClassNotFound error already shows, the public API of Janino has changed: some classes were removed, some were changed. What is the actual need driving the update?
If you need really complicated business logic, then create a custom plugin. Documentation and tutorials are available, and you can look into the sources of the current built-in plugins (the sources are available on GitHub).
What important things does the new version of Janino have that the old one doesn't (besides Java 8 support)? You could check out the Kettle engine, look into the sources of the UserDefinedJavaClass step, change the code to support the new Janino version, test it, make your own build of PDI Kettle, and try to send a pull request to the maintainers of the repository.
All of this is quite complicated. The step is built into the engine, so you have to make your own build, and your own build means you have to support it yourself. This is non-trivial: the project is huge, even bigger now, and it continues to evolve. I spent several days making my first custom build (of version 4, back when it used Ivy) just to understand the project better and debug complicated cases, and it was never used in production.
The maintainers of the repository must have a good reason to include your changes upstream; the changes must be well tested, it is a long procedure, and it is most probably not worth it. A lot has changed since 2010; as I recall from the release notes, newer versions of Java can already compile code at runtime.
My advice is to make your own plugin.
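If you do go the custom-plugin route, the Janino 3.x API itself is simple to call from your own code. A minimal sketch, assuming janino-3.0.8.jar and commons-compiler-3.0.8.jar are on the plugin's classpath:

    // Compile and evaluate an expression at runtime with Janino 3.x.
    import org.codehaus.janino.ExpressionEvaluator

    fun main() {
        val ee = ExpressionEvaluator()
        ee.setParameters(arrayOf("a", "b"), arrayOf(Int::class.java, Int::class.java))
        ee.setExpressionType(Int::class.java)
        ee.cook("a + b")                         // compiles the expression
        println(ee.evaluate(arrayOf<Any>(1, 2))) // prints 3
    }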

Repository for storing derived information (build artifacts)

I'm looking for a "repository" to store derived information (build artifacts).
We have a repository (currently Mercurial) to store our source code. When something is pushed to the source repository, the code goes through a continuous integration server, we do an incremental build, and as a result some DLLs change. These should be added to some "repository" so that everybody can use that version without needing to do the build again.
I'm looking for the following features:
It should be easy to update the source code and get the corresponding binaries (we could probably make a script for that)
You should easily get all binaries at once (not only those that changed during the last incremental build).
Binaries that weren't changed should only be stored once in the repository.
When updating the source code and the binaries only the changed binaries should be transferred (and not all binaries). This is similar to what happens for source code.
When updating to some version, only that version should be stored locally, not the complete history.
We should be able to remove certain versions from the binary "repository" after a while. However, if the DLLs are still necessary for subsequent incremental builds, they should of course not be completely removed from the "repository".
What would fit these requirements?
I agree with Manfred, what you are looking for is a binary repository manager. Besides the Nexus repository manager you should consider Artifactory.
As for the feature list you asked about:
As you have mentioned, the CI server should be responsible for identifying a change in version control and starting a build process which creates the binaries. The CI server/build tool should also deploy the generated binaries to the repository manager if the build was successful. Artifactory offers a build integration feature which takes care of deploying the binaries together with the build metadata.
Using the build integration feature of Artifactory, you can get a list of all the binaries generated by a specific build and download them as an archive. Artifactory provides a REST API for those actions.
There are different approaches to storing artifacts in a repository manager. Some tools store multiple copies of the same binary. Others, for example Artifactory, use checksum-based storage, which keeps only one copy per binary (keyed by its checksum). This pays off if you keep multiple copies of the same binary in different repositories, especially if you are dealing with large binaries (WAR files, Docker images, ISOs etc.). Another benefit is cheap copies/moves between repositories, which is a common practice for promotion workflows.
The Artifactory build integration uses checksum-based deployment, which deploys only binaries that do not already exist in Artifactory. For binaries which do exist and have not changed, it only creates a new reference to the existing binary, saving the need to send the actual bytes.
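To illustrate the idea, here is a conceptual sketch of checksum-keyed storage in Kotlin (an illustration only, not Artifactory's actual implementation):

    // Each blob is stored once, keyed by its hash; "deploying" an unchanged
    // binary just writes a new reference to the existing blob.
    import java.io.File
    import java.security.MessageDigest

    fun sha1(bytes: ByteArray): String =
        MessageDigest.getInstance("SHA-1").digest(bytes)
            .joinToString("") { "%02x".format(it) }

    fun deploy(blobStore: File, repoEntry: File, artifact: ByteArray) {
        val key = sha1(artifact)
        val blob = File(blobStore, key)
        if (!blob.exists()) blob.writeBytes(artifact) // bytes stored only once
        repoEntry.parentFile?.mkdirs()
        repoEntry.writeText(key)                      // cheap copy: just a pointer
    }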
Artifactory provides multiple options for cleaning up binaries, including built-in cleanup policies and the option to develop your own custom logic using user plugins and the Artifactory Query Language (AQL).
In addition, I highly recommend taking a look at the binary repository comparison matrix.
Disclaimer: I work for JFrog, the company behind Artifactory.
You are basically asking for a repository manager like the Nexus Repository Manager as you have correctly identified with the tags.
In terms of the specific requirements from your question, here are a couple of ideas.
Binary components are typically identified via coordinates that usually include some sort of name and version. A release and build process changes those and deploys the binaries to the repository. This allows you to match source code with binaries. You can also embed information like Git refs in the produced binaries (see the sketch after this list).
Accessing the binaries is typically done via HTTP, so it's easy. You then just have to determine what it means to get "all binaries".
Not duplicating binaries that are essentially the same can be supported by the underlying file system or the build tool. I have seen both approaches work. Often, however, it is not worth the effort, since storage is cheap.
There are various ways to automatically clean up repositories, including scheduled tasks that do it regularly. Worst case, you have to implement your own logic in an extension.
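As a sketch of the first point: with coordinates in place, matching a source revision to its binaries is just a matter of resolving the right version. A Gradle (Kotlin DSL) fragment with hypothetical coordinates (Maven coordinates work the same way):

    dependencies {
        implementation("com.example:foo-api:1.2.3")           // a released build
        implementation("com.example:foo-impl:1.2.4-SNAPSHOT") // latest dev build
    }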
Disclaimer: I work as community advocate and trainer for the Nexus Repository Manager with Sonatype.

How to install TinkerPop

I have just recently come across graph databases and Tinkerpop.
I am somewhat confused about how/what to install to use TinkerPop 2.5.0/2.6.0. Does it have to be installed on each database separately (as you would a plugin), or can I set it up once and then use it to access the different supported software?
My goal is to use it to try out 2 (possibly more) different databases (mainly Neo4j and OrientDB, or perhaps Titan) and be able to query them using Gremlin.
How you use TinkerPop is entirely dependent on what you intend to do with it. If you are just getting started, I suggest you simply download the Gremlin distribution, unpackage it and start the console with bin/gremlin.sh. Working in the REPL will help you learn quickly as the feedback time for trying things out is basically instantaneous. Even as your Gremlin code makes its way to production, you will find the Gremlin Console to be a good friend as it provides a way to try out ideas before committing them to code. It also provides a mechanism for maintaining/administering your database with Gremlin.
If you intend to use TinkerPop in a JVM-based application, then you will want to use a dependency management tool like Maven and reference the appropriate TinkerPop dependencies you'd like to use. Alternatively, I suppose you could manage the dependencies manually by downloading them individually from Maven Central and adding them to your path (though I wouldn't recommend that, for obvious reasons). My point in mentioning it is just to make clear that the TinkerPop library is just a set of JARs that can be included in your JVM development tools like any other.
How you work with a particular database depends on the one that you choose, but again the process is little different from what I described above. Neo4j is packaged with the Gremlin Console, so you can work with it right away there. For OrientDB, you will want to copy its dependencies into the Gremlin Console path (i.e. the /lib directory). If you are building an application, then Maven is again your friend: you simply reference the Neo4j or OrientDB Maven coordinates and all required dependencies will come with them.
Some implementations, like Titan, have separate prerequisites (e.g. install cassandra or hbase). In those cases, you will need to refer to their documentation for specifics on how to set them up.
All that said, if you are just getting started, I recommend that you look into TinkerPop3. It is the next major line of development for TinkerPop and quite different from its previous incarnations. It does not have all of the implementations in play as of yet, but database vendors are at work bringing them online. Everything I wrote about TinkerPop 2.x "installation" above generally applies to TinkerPop3; however, the TinkerPop3 Gremlin Console has a plugin system that can make it a little easier to bring in external dependencies, saving you from having to deal with them manually.
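For example, once the TinkerPop3 JARs are on a JVM project's classpath, a traversal over the bundled toy graph looks like this (Kotlin shown; package names are from the Apache TinkerPop line and may differ in early milestone releases):

    import org.apache.tinkerpop.gremlin.tinkergraph.structure.TinkerFactory

    fun main() {
        val graph = TinkerFactory.createModern()   // small in-memory sample graph
        val g = graph.traversal()
        val names = g.V().has("name", "marko")
            .out("knows")
            .values<String>("name")
            .toList()
        println(names)                             // e.g. [vadas, josh]
    }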

Archivable, replicable releases when building with Maven: is there a right way?

We have a largish standalone (i.e. not Java EE) commercial Java project (10,000+ classes, four or five SVN repositories, ten or twenty third-party libraries) that's in the process of switching over to Maven. Unfortunately only one engineer (in a team of a dozen or so distributed across three countries) has any prior Maven experience, so we're kind of figuring it out as we go.
In the old Ant way of doing things, we'd:
check out source code from three or four repositories
compile it all into a single monolithic JAR
release that (as part of a ZIP file with library JARs, an installer, various config files, etc.)
check the JAR into SVN so we had a record of what the customers had actually got.
Now, we've got a Maven repository full of artifacts, and a build process that depends on Maven having access to that repository. So if we need to replicate what we actually shipped to a customer, we need to do a build against a Maven repository that has all the proper versions of everything. This is doable, I guess, if in (some version of) the (SVN-controlled) POM files we set all the dependencies to released versions?
But it gives our release engineer the creepy-crawlies, because there doesn't seem to be any way:
to make sure that somebody doesn't clobber the copy of foo-api-1.2.3.jar on the WebDAV server by mistake (the WebDAV server has access control, but that wouldn't stop a buggy build script)
to detect it if they did
to recover afterwards
His idea is, for release builds, to use a local file system as the repository rather than the WebDAV server, and put that local repository under SVN control.
Our one Maven-experienced engineer doesn't like that -- I guess because he doesn't like putting binaries under version control? -- and suggests that maybe the professional version of the Nexus server can solve the clobbering or clobber-tracking/recovery problem.
Personally, I'm not happy (sorry, Sonatype readers) with shelling out money for a non-free build system when we haven't even seen any benefit from the free version yet, and there's no guarantee it will actually solve the problem.
So our choices seem to be:
WebDAV server
Pros: only one server, also accessible by devs, ...?
Cons: easy clobbering, no clobber-tracking/recovery
Local file system
Pros: can be placed under revision control
Cons: only works with the distribution script
Frankly, both of these seem like hacks to me, and I have to wonder if there isn't a better way to do this.
So: Is there a right thing to do here?
I'm not sure I get everything, but I would:
Use the maven-release-plugin (which automates the release process, i.e. executes all the steps documented in release:prepare); see the two commands sketched after this list.
Use WebDAV with an anonymous read-only and authenticated write policy (so only the release engineer can actually deploy released artifacts to the corporate repo).
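For reference, the release-plugin flow boils down to two invocations (-B enables batch mode):

    mvn -B release:prepare   # verifies the build, bumps the POM to a release
                             # version, tags SCM, moves to the next SNAPSHOT
    mvn release:perform      # checks out the tag, builds it, and deploys the
                             # artifacts to the remote repository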
There is no need to put generated artifacts under version control (as long as you have the POMs under version control). I don't see the benefit of using the local file system instead of WebDAV (it is not more secure; you can secure WebDAV as well), and I don't see what the commercial version of Nexus would solve here.
Nexus has a setting which prevents you from clobbering an already released artefact in a release repository.
For a team of about a dozen, the free version of Nexus should be enough.