Byte Buddy doesn't work on Zing

It looks like Byte Buddy doesn't work with the Zing JVM: the Java agent is initialized, but the interceptor is never activated. Has anyone managed to get it to work with Zing?

A short while ago, Azul started offering a free trial of Zing, and I was able to run Byte Buddy and all of its tests on their JDK. I can now confirm that Byte Buddy works on Zing without limitations, including retransformation.
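For context, the kind of agent involved here — one registered with -javaagent that installs a retransformation-capable transformer — can be sketched with nothing but the JDK's instrumentation API. The class names and the com/example/ package prefix below are illustrative; this is not Byte Buddy's API:

```java
import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.Instrumentation;
import java.security.ProtectionDomain;

// Entry point named by the Premain-Class manifest attribute;
// started with -javaagent:agent.jar on any JVM, Zing included.
public final class DemoAgent {

    public static void premain(String args, Instrumentation inst) {
        // The second argument enables retransformation support.
        inst.addTransformer(new LoggingTransformer(), true);
    }

    // A no-op transformer that only reports which classes it sees.
    static final class LoggingTransformer implements ClassFileTransformer {
        @Override
        public byte[] transform(ClassLoader loader, String className,
                                Class<?> classBeingRedefined,
                                ProtectionDomain domain, byte[] classfileBuffer) {
            if (className != null && className.startsWith("com/example/")) {
                System.out.println("agent saw " + className);
            }
            return null; // null means: leave the class bytes unchanged
        }
    }
}
```

If an agent like this prints nothing on a given JVM even for matching classes, the transformer was never invoked — which is the symptom the question describes.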


Karate UI executed from a Spring Boot application

I am running Karate UI tests that are kicked off from a Spring Boot application, which works fine. However, if the driver fails, Karate appears to kill the whole process. I saw that System.exit() is used in several places in the Karate API. Is there a strategy for avoiding the whole application being killed off by System.exit()? Must I create custom code that doesn't include System.exit()? Any suggestions?
Thanks,
Chris
I completely agree that System.exit() is not ideal. I remember that it made sense in the "fat jar" - so is that what you are using?
The best thing you could do is submit a PR or at least suggest which part of the code comes into play for your case. Karate certainly has not been designed to be kicked off from a web-app, so yes - this may need some investigation.
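Until something like that lands, one workaround is to fork the test runner into its own JVM, so that any System.exit() inside it terminates only the child process, never the Spring Boot application. A minimal sketch — the main class passed in is whatever entry point kicks off your suite; this is not a Karate API:

```java
import java.io.IOException;

// Launch a separate JVM for the runner so System.exit() in that
// code kills only the child process, not this application.
public final class ForkedRunner {

    public static int runForked(String mainClass)
            throws IOException, InterruptedException {
        String javaBin = System.getProperty("java.home") + "/bin/java";
        ProcessBuilder pb = new ProcessBuilder(
                javaBin,
                "-cp", System.getProperty("java.class.path"),
                mainClass);
        pb.inheritIO(); // forward the child's output to our console
        Process process = pb.start();
        return process.waitFor(); // exit code set by the child's System.exit()
    }
}
```

The parent can then inspect the exit code to decide whether the run succeeded, instead of dying along with the runner.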

How do I run Cucumber tests when testing a REST or GraphQL API

This is my first time playing with Cucumber and also creating a suite which tests an API. My question is: when testing the API, does it need to be running?
For example, here's what I have in my head:
Start the Express server as a background task.
Then, once it has booted up (how would I know that happened?), run the Cucumber tests.
I don't really know the best practices for this, which I think is the main problem here. Sorry.
It would be helpful to see a .travis.yml file or a bash script.
I can't offer you a working example. But I can outline how I would approach the problem.
Your goal is to automate the verification of a REST API or similar. That is, making sure that a web application responds in the expected way to a specific request.
For some reason you want to use Cucumber.
The first thing I would like to mention is that Behaviour-Driven Development, BDD, and Cucumber are not testing tools. The purpose of BDD and Cucumber is to act as a communication tool between those who know what the system should do, those who write code to make it happen, and those who verify the behaviour. That's why the examples are written in an almost natural language.
How would I approach the problem then?
I would verify the vast majority of the behaviour by calling the methods that make up the API from a unit test or a Cucumber scenario. That is, verify that they work properly without a running server. And without a database. This is fast and speed is important. I would probably verify more than 90% of the logic this way.
I would verify the wiring by firing up a server and verify that it is possible to reach the methods verified in the previous step. This is slow so I would do as little as possible here. I would, if possible, fire up the server from the code used to implement the verification. I would start the server as a part of the test setup.
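As for the "how would I know it has booted?" part: a common approach is to poll the server's port (or a health endpoint) until it accepts connections, as part of that test setup. Here is a sketch in Java using only the standard library; the same loop translates directly to Node or a shell script, and the host/port are whatever your server listens on:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public final class WaitForPort {

    // Poll until something is listening on host:port,
    // or give up after timeoutMillis.
    public static boolean waitForPort(String host, int port, long timeoutMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress(host, port), 250);
                return true; // connection accepted: the server is up
            } catch (IOException notYet) {
                try {
                    Thread.sleep(100); // back off briefly before retrying
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return false;
                }
            }
        }
        return false; // gave up: the server never came up in time
    }
}
```

The test setup starts the server, calls a helper like this, and only proceeds to the scenarios once it returns true.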
This didn't involve any external tools. It only involved your programming language and some libraries. The reason for doing it this way is that I want it to be as portable as possible. The fewer tools you use, the easier it gets to work with something.
It has happened that I have done some of the setup in my build tool and had it start a server before running the integration tests. This is usually more heavyweight and something I avoid if possible.
So, verify the behaviour without a server. Verify the wiring with a server. It is important to only verify the wiring in this step. The logic has been verified earlier, there is no need to repeat it.
Speed, as in a fast feedback loop, is very important. Building and testing the entire system should, ideally, take seconds rather than minutes.
I have a working example if you're interested (running on Travis).
I use docker-compose to launch the API and required components such as the database, then I run cucumber-js tests against the running stack.
docker-compose is also used for local development and testing.
I've also released a library to help writing cucumber for APIs, https://github.com/ekino/veggies.

Run 'HTTP-POST' method on application RUN (IntelliJ IDEA)

I am looking for a way to execute an HTTP POST request when I run my program. But it looks like there is no HTTP request option in IntelliJ's run configurations. I have thought of having a batch script execute a Java application that would do that, but that seems to just over-complicate things. Any suggestions?
Research:
https://www.jetbrains.com/idea/help/working-with-build-configurations.html
Nothing found that would help me do this.
I just created a simple Java application using the Apache HttpClient library which sends a POST request. Here is the Apache documentation.
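If you would rather avoid the extra dependency, the same POST can be sent with the HttpClient that ships with JDK 11+. The endpoint URL and JSON body below are placeholders to substitute with your own:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public final class PostOnRun {

    // Send a JSON POST and return the HTTP status code.
    static int post(String url, String jsonBody) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(url))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(jsonBody))
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        return response.statusCode();
    }

    public static void main(String[] args) throws Exception {
        // Placeholder endpoint and body; substitute your own service.
        int status = post("http://localhost:8080/notify",
                          "{\"event\":\"app-started\"}");
        System.out.println("status: " + status);
    }
}
```

Pointing an ordinary Application run configuration at this main class then fires the POST on every run, with no batch script involved.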

Artifice for Objective-C?

Is there an Objective-C version of Artifice?
If not, how would I design/develop/create it?
Related Questions
Mock HTTP response via Objective-C
Mock NSURLConnection
I think I might be able to help you here.
I have a Ruby library that is somewhat similar to Artifice, albeit more self-contained and built on top of Sinatra, called Mimic. I'm pretty happy with it, and one of my favourite features is that as well as being configured using its Ruby DSL (or using the Sinatra API directly), it can be configured remotely or from any process that speaks HTTP. This means you can use it in your Objective-C tests and configure it from the tests too (rather than having, say, a set of external fixtures in a Ruby file).
In the name of eating my own dog food, I recently converted the acceptance tests for my Objective-C RestClient port, Resty, to use Mimic. The Mimic daemon is started up as part of the build process and my stubs are configured directly in the tests, using a thin Objective-C wrapper around the Mimic REST API.
As you can see, I strive very hard for test clarity!
Those tests use OCUnit but you can use this with Kiwi. In fact, the assertEventually macro in the above tests was the basis of the asynchronous testing support that I ported to Kiwi.
I've since extracted the Objective-C wrapper for Mimic from LRResty and moved it into the Mimic repository. You may want to check out the Resty project to see how my project and the tests are configured. If you have any questions, please ask.
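For illustration, the core idea — a stub server whose canned responses are registered by the tests themselves — can be sketched in a few lines using the JDK's built-in HttpServer. This is only an analogy for the pattern; it is not Mimic's API (which is Ruby):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A tiny stub server: tests register canned responses per path,
// then point the code under test at http://localhost:<port>.
public final class StubServer {

    private final HttpServer server;
    private final Map<String, String> stubs = new ConcurrentHashMap<>();

    public StubServer() throws Exception {
        server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/", exchange -> {
            String body = stubs.get(exchange.getRequestURI().getPath());
            if (body == null) {
                exchange.sendResponseHeaders(404, -1); // no stub registered
                exchange.close();
                return;
            }
            byte[] bytes = body.getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, bytes.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(bytes);
            }
        });
        server.start();
    }

    public void stub(String path, String responseBody) {
        stubs.put(path, responseBody);
    }

    public int port() {
        return server.getAddress().getPort();
    }

    public void stop() {
        server.stop(0);
    }
}
```

A test constructs the stub server, registers the responses it needs, and tears it down afterwards — the same shape as configuring Mimic over its REST API from Objective-C tests.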
One caveat: I haven't found a way of getting these tests to run successfully in Xcode 4, using the "Test" option, due to the way that it runs. In Xcode 3, I rely on Run Script build phases to start and stop the Mimic daemon, but because Xcode 4 doesn't run the tests as part of the build process this doesn't work. I've tried to accomplish something similar using pre/post test actions but unfortunately these are woefully inadequate due to various bugs.
Bonus tip: I find the Charles debugging proxy a massive help when working with web services, and you can use it with Mimic too; the Objective-C wrapper can be proxied through Charles so you can see exactly what is happening, both in terms of stub configuration and actual HTTP requests (Mimic can even be configured to return some helpful debugging data in the response headers).
Do let me know if you have any questions.

Running a Groovy application under Maven

We have developed a Groovy application. During development we start it with the following command line:
C:\myapp>mvn grails:run-app
Without sending any request to the server, one can see the memory used by the Java process increasing and increasing. At startup about 100 MB are allocated, and a couple of hours later, without doing anything, the memory goes up to 300 MB.
When I start the application directly
C:\myapp> grails run-app
the memory consumption is somewhat different: without sending any request it stabilizes at around 110 MB, sometimes going up, sometimes coming down.
Although 300 MB is not critical, I would like to know whether this is a memory leak or not.
Is anybody seeing similar behaviour?
Thanks!
This could be a memory leak in Maven, but more likely a leak in the grails:run-app command. I would suggest posting this on the Grails development mailing list.
Why does this concern you? You should only be using these commands for development, not production as you'd be deploying a war file in production. If you're simply concerned, the Grails development mailing list is definitely the place for something like this.
I doubt there's a memory leak here.
It's perfectly normal for the JVM to wait on doing a full GC until it has to. That means if you allocate more memory, your java/groovy process will happily consume it.
Most likely you have different default memory settings for Maven vs. Grails. I'm not sure exactly how these properties are set on Windows, but they look something like:
GRAILS_OPTS="-Xms100m -Xmx110m"
MAVEN_OPTS="-Xms100m -Xmx300m"
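You can confirm which limits each launcher actually applied by printing the runtime's own view of the heap from inside the application; the numbers will track whatever -Xms/-Xmx values are in effect:

```java
public final class HeapReport {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        // maxMemory() reflects -Xmx; totalMemory() is what is
        // currently allocated; freeMemory() is free within that.
        System.out.println("max heap:  " + rt.maxMemory() / mb + " MB");
        System.out.println("allocated: " + rt.totalMemory() / mb + " MB");
        System.out.println("free:      " + rt.freeMemory() / mb + " MB");
    }
}
```

If the Maven-launched process reports a much larger max heap than the Grails-launched one, the "growth" is just the JVM deferring garbage collection inside a bigger allowance, not a leak.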