instanceof / checkcast secondary_super_cache JVM bug

There have been some posts and talks recently about this issue: https://bugs.openjdk.org/browse/JDK-8180450.
Posts: https://netflixtechblog.com/seeing-through-hardware-counters-a-journey-to-threefold-performance-increase-2721924a2822
And folks (https://twitter.com/forked_franz) who are working on it have even provided tests to reproduce it: https://github.com/RedHatPerf/type-pollution-agent
e.g. this test case:
https://github.com/franz1981/java-puzzles/blob/d1652dae963a970c87b9222d54d8b47e46f45ee9/src/main/java/red/hat/puzzles/polymorphism/CheckcastContentionTest.java
Can someone explain in more detail what's really going on in this test?
When we iterate through a collection of homogeneous objects and access each one there is no type-pollution penalty, but otherwise there is. Why is that?
One more test example:
https://gist.github.com/tjake/1b42331a11903980efeb4d3d7dd1df1b
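For context, the shape of code these tests exercise is, as far as I understand, roughly the following (a hypothetical sketch of my own, not the actual benchmark code, and a toy version like this may well be optimized differently by the JIT):

// Hypothetical sketch: one concrete class checked against two different interfaces
// from hot code on several threads.
interface A {}
interface B {}
final class Impl implements A, B {}

public class TypePollutionSketch {
    static volatile boolean sinkA, sinkB; // keep the checks from being dead-code eliminated

    public static void main(String[] args) throws InterruptedException {
        Object[] data = new Object[1024];
        for (int i = 0; i < data.length; i++) {
            data[i] = new Impl();
        }

        Runnable task = () -> {
            for (int iter = 0; iter < 100_000; iter++) {
                for (Object o : data) {
                    // Alternating secondary-super checks (A, then B) on the same receiver
                    // class: each miss rewrites Impl's single-entry secondary_super_cache,
                    // so concurrent threads keep dirtying the same cache line. If every
                    // check here were against the same interface, the cache would hit and
                    // the field would only be read, not written.
                    sinkA = o instanceof A;
                    sinkB = o instanceof B;
                }
            }
        };

        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(task);
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
    }
}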

Related

Is there a way we can chain scenarios in Karate like Java method chaining?

I have been using Karate for the past 6 months, and I am really impressed with the features it offers.
I know Karate is meant to test APIs individually, but we are also trying to use it for E2E tests that involve calling multiple scenarios step by step.
Our feature file looks like this:
1. Call Feature1:Scenario1
2. Call Feature2:Scenario2
.....
Note: we are re-using scenarios for both API testing and E2E testing. Sometimes I find it difficult to remember all the feature files.
Can we chain the scenario calls the way Java method chaining works? I doubt a feature file will let us do that. We would appreciate your suggestions; please let us know if you feel our approach is not correct.
First, I'd like to quote the documentation: https://github.com/intuit/karate#script-structure
Variables set using def in the Background will be re-set before every Scenario. If you are looking for a way to do something only once per Feature, take a look at callonce. On the other hand, if you are expecting a variable in the Background to be modified by one Scenario so that later ones can see the updated value - that is not how you should think of them, and you should combine your 'flow' into one Scenario. Keep in mind that you should be able to comment-out a Scenario or skip some via tags without impacting any others. Note that the parallel runner will run Scenario-s in parallel, which means they can run in any order.
So by default, I actually recommend that teams have Scenario-s with multiple API calls within them. There is nothing wrong with that, and I really don't understand why some folks assume that you should have a Scenario for every GET or POST etc. I thought the "hello world" example would have made that clear, but evidently not.
If you have multiple Scenario-s in a Feature, just run the feature, and all Scenario-s will be executed or "chained". So what's the problem?
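For example, running a folder of E2E features from Java looks something like this (a minimal sketch, assuming a recent Karate version with the Runner API and JUnit 5; the classpath:e2e path is a placeholder for wherever your E2E feature files live):

// Minimal sketch, assuming a recent Karate version (Runner API) and JUnit 5.
import com.intuit.karate.Results;
import com.intuit.karate.Runner;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

class E2eTestRunner {

    @Test
    void runE2eFeatures() {
        // Runs every Scenario in every feature under the given path. With one thread
        // the Scenario-s in a feature run top to bottom; with more threads they can
        // run in any order, as the docs quoted above warn.
        Results results = Runner.path("classpath:e2e").parallel(1);
        assertEquals(0, results.getFailCount(), results.getErrorMessages());
    }
}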
I think you need to change some of your assumptions. Karate is designed for integration testing. If you really need to have a separate set of tests that test one API at a time, please create separate feature files. The whole point of Karate is that there is so little code needed - that code-duplication is perfectly ok.
Let me point you to this article by Google. For test-automation, you should NOT be trying to re-use things all over the place. It does more harm than good.
For a great example of what happens when you try to apply "too much re-use" in Karate, see this: https://stackoverflow.com/a/54126724/143475

Is there a good way to run specflow tests in a new app domain?

Due to some constraints on our production code, we have some .NET services that need to be run with their own config file. We've been using app-domains to provide arbitrary config files to these services at test run time.
The problem comes when we try to use SpecFlow for these tests. Since each step is called separately, from an overall runner class that we don't have direct access to, pushing test data across app-domain boundaries for every single STEP is pretty messy and results in everything being wrapped in all sorts of odd lambdas. On top of that, serializability needs to be considered, when most of the time we shouldn't need to care about that in test code (internal data objects, that sort of thing).
Does anyone have a method by which SpecFlow can be convinced to run all of its steps in a provided app-domain, or generally just play nicer with the app-domain concept?
Would it be possible to write a plugin / test generator that did this, and if so, would it be very technically complicated? I had a look at that sort of extensibility but couldn't find the right place to start, so I may have missed it.
(I'm aware that "Refactor your service so you don't need arbitrary config files" would also solve the underlying problem, but for the purposes of this question please assume I can't do that - I'm interested in whether SpecFlow can be configured to solve this, whether on its own or by extending it.)
Edit: After some more investigation I think this -should- be possible by using a custom unit test generator plugin? The problem I then have is that there's basically zero documentation on that, and not many examples around on the internet. If you can give me a good example that I can look at and adapt, that would go a long way...

What is a sanity test/check

What is it and why is it used/useful?
A sanity test isn't limited in any way to the context of programming or software engineering. A sanity test is just a casual term to mean that you're testing/confirming/validating something that should follow very clear and simple logic. It's asking someone else to confirm that you are not insane and that what seems to make sense to you also makes sense to them... or did you down way too many energy drinks in the last 4 hours to maintain sanity?
If you're bashing your head on the wall completely at a loss as to why something very simple isn't working... you would ask someone to do a quick sanity test for you. Have them make sure you didn't overlook that semicolon at the end of your for loop the last 15 times you looked it over. Extremely simple example, really shouldn't happen, but sometimes you're too close to something to step back and see the whole. A different perspective sometimes helps to make sure you're not completely insane.
The difference between smoke and sanity, at least as I understand it, is that a smoke test is a quick test to see that, after a build, the application is good enough for testing. Then you do a sanity test, which tells you whether a particular functional area is good enough that it actually makes sense to proceed with tests on that area.
Example:
Smoke Test: I can launch the application and navigate through all the screens and application does not crash.
-If the application crashes or I cannot access all screens, something is really wrong with this build; there is "a fire" that needs to be extinguished ASAP and the version is not good for testing.
Sanity Test (For Users Management screen): I can get to Users Management screen, create a user and delete it.
So, the application passed the Smoke Test, and now I proceed to Sanity Tests for different areas. If I cannot rely on the application to create a user and delete it, it is worthless to test more advanced functionality like user expiration, logins, etc. However, if the sanity test has passed, I can go on with the tests of this area.
Good example is a sanity check for a database connection.
SELECT 1 FROM DUAL
It's a simple query to test the connection, see:
SELECT 1 from DUAL: MySQL
It doesn't test deep functionality, only that the connection is ok to proceed with.
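For illustration, a minimal sketch of such a check in plain JDBC (the JDBC URL is a placeholder, and the probe query would be SELECT 1 FROM DUAL on Oracle):

// A minimal sketch of a database-connection sanity check in plain JDBC.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class ConnectionSanityCheck {

    public static boolean isAlive(String jdbcUrl, String user, String password) {
        try (Connection conn = DriverManager.getConnection(jdbcUrl, user, password);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            // We only care that the round trip succeeds, not what value comes back.
            return rs.next();
        } catch (SQLException e) {
            // Sanity check failed: the connection is not OK to proceed with.
            return false;
        }
    }
}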
A sanity test or sanity check is a basic test to quickly evaluate whether a claim or the result of a calculation can possibly be true (http://en.wikipedia.org/wiki/Sanity_testing).
A smoke test is a quick test of a new build for its stability.
A sanity test is a test of a newly deployed environment.
The basic concept behind a sanity check is making sure that the results of running your code line up with the expected results. Other than being something that gets used far less often than it should, a proper sanity check helps ensure that what you're doing doesn't go completely out of bounds and do something it shouldn't as a result. The most common use for a sanity check is to debug code that's misbehaving, but even a final product can benefit from having a few in place to prevent unwanted bugs from emerging as a result of GIGO (garbage in, garbage out).
Relatedly, never underestimate the ability of your users to do something you didn't expect anyone would actually do. This is a lesson that many programmers never learn, no matter how many times it's taught, and sanity checks are an excellent tool to help you come to terms with it. "I'd never do that" is not a valid excuse for why your code didn't handle a problem, and good sanity checks can help prevent you from ever having to make that excuse.
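For illustration, a sanity check of that kind might look like this (a hypothetical sketch; the Order and applyDiscount names are made up purely for illustration):

// A hypothetical sketch of a sanity check that stops garbage input at the boundary.
public final class Order {

    private double total;

    public Order(double total) {
        this.total = total;
    }

    public void applyDiscount(double percent) {
        // Sanity check: a discount outside 0..100 percent cannot be what the caller
        // meant, so fail loudly here instead of quietly producing a garbage total.
        if (percent < 0 || percent > 100) {
            throw new IllegalArgumentException("discount percent out of range: " + percent);
        }
        total -= total * (percent / 100.0);
    }

    public double getTotal() {
        return total;
    }
}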
For a software application, a sanity test is a set of many tests that make a software version releasable to the public after the integration of new features and bug fixes. A sanity test means that while many issues could remain, the very critical issues, which could for example make someone lose money or data or crash the program, have been fixed. Therefore, if no critical issues remain, the version passes the sanity test. This is usually the last test done before release.
It is a basic test to make sure that something is simply working.
For example: connecting to a database. Or pinging a website/server to see if it is up or down.
The act of checking a piece of code (or anything else, e.g., a Usenet posting) for completely stupid mistakes.
Implies that the check is to make sure the author was sane when it was written;
e.g., if a piece of scientific software relied on a particular formula and was giving unexpected results, one might first look at the nesting of parentheses or the coding of the formula, as a sanity check, before looking at the more complex I/O or data structure manipulation routines, much less the algorithm itself.

How much code do you tend to write before you test? [closed]

I have noted over the years that I tend to write maybe a screen full of code, then test to make sure it does what it should.
Some of the benefits of this technique are:
Syntax errors are a result of the new code, so you don't have to look far to find the cause.
It is cheap to set up a temporary condition that lets you test the else clause of an if statement, so you can be sure to get error messages and the like correct while they are cheap to test.
How do you tend to code?
What benefits do you get by doing it that way?
EDIT: Like most of my questions, I really haven't set the context well enough. I am not really talking about unit test level granularity. I am referring to making sure the local bit of code does exactly what I intend it to, at the time of implementation.
I'd like to say I always write a unit test before I write the corresponding code to pass it, but I'd be lying.
I tend to code until I have something that should produce a well-defined observable behavior. Usually, this is a single public API function, sometimes a full class. This also encourages me to break down the problem into small functions with well-defined observable behavior. Most of my functions are smaller than a full screen. If a function is too complex to test, then it's probably badly designed from other perspectives anyhow.
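For instance, the granularity I mean looks roughly like this (a hypothetical sketch, JUnit 5 assumed, and slugify is a made-up example): one small function with a well-defined observable behaviour, and its test written as soon as it exists.

// A hypothetical sketch (JUnit 5 assumed; slugify is a made-up example).
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class SlugifyTest {

    // Small, self-contained behaviour: lower-case, collapse runs of non-alphanumerics
    // into single dashes, and trim dashes from the ends.
    static String slugify(String title) {
        return title.toLowerCase()
                .replaceAll("[^a-z0-9]+", "-")
                .replaceAll("(^-|-$)", "");
    }

    @Test
    void collapsesPunctuationAndWhitespace() {
        assertEquals("hello-world", slugify("Hello, World!"));
    }
}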
Personally I find I tend to write the obvious Interfaces and drag in the utility resources (be they C# libraries, CSS, whatever) before I actually write tests.
I think there's a balance between zealotry and experience to be struck.
This may sound silly, but I usually test the code I write after each "processing task". Meaning, if I open a file, I test the routine. If I connect to a Database and pull out a single record, I test that routine. Or sometimes I write a test that just exercises all the methods of a class just to see if they work.
I don't think I use a hard and fast rule, but mostly when I write code to perform a task, I test to "verify" it does what it's supposed to do.
Exactly as much as I have to. Sometimes that means a few hundred lines, especially if I'm adding a large system to an existing framework, when the application wouldn't even run without some part of it.
I suppose I follow the principle of testing whenever I can. Obviously that doesn't mean halfway through writing a loop, but when I'm done with the loop I'll try it out before moving on. The less you changed since the last test, the easier it'll be to figure out what was changed that caused your error condition. :)
I usually do what you describe, but I don't get a full page written before I test. I've found that if I write some code then write a test, I usually have to refactor the code to make it more testable. This seems a little bit wasteful, so I'm down to just a few lines of code before I write a unit test. I find that I'm moving closer and closer to strictly adhering to TDD.
I don't use TDD, but build what are effectively test stubs first, that become the actual application.
For instance, in a WinForms app, I build the buttons first, and test them. Then when I build the class, I test that the class's methods are being called by the UI.
Then, if for instance I'm going to put the actual work into a background worker, I build that with nothing inside it, and test that the Start/Progress/Complete handlers all fire, and are handled by the class that creates the BGW.
Then I start putting the functionality into the methods, and thus already have a tested test harness. It's very rare that I have to build a separate harness for this, since every increment is small, and tested before the next level of complexity is added.
The benefit is that I don't have to hold too much complexity in mind at a time, and very little is added without the foundations it relies on already being well tested.
I've never found unit testing to be any kind of issue - what I really want is automated testing at a higher level than that.
As you did not mention in which language environment you code...
As I work in Smalltalk, syntax is checked in the editor while I type, and whenever I accept a method, so that's not an issue. (For those who don't know Smalltalk: it is not file-based, but object oriented; that means that you add method-objects one at a time to a class object, and the system compiles each as it is "accepted" in the editor.)
For small methods which are algorithmic or which do not need a big framework/setup, I add a little comment which tests that method and which can be executed by a click. There is also a test-runner to extract all these and run them as a unit test.
For bigger stuff, a TestCase class is updated for every few methods and the test-runner button clicked from time to time, stopping me on a red light.
So I would say, a test is done for every 10 lines or so.
I admit, doing so requires a highly reactive and incremental IDE - otherwise, it cannot be done so easily and I would revert to, say, roughly a letter-size page of code before testing. I do not consider compilability as "a test", so syntactic correctness does not count.
EDIT: For your amusement, here is a concrete example from the Collection class:
For those who don't know Smalltalk:
quoted strings are comments;
+/- is an operator to create a measurement value;
/ creates fractions;
{...} is array creation;
the testcases at the end are directly executable (so called doIt) from within the editor.
sum
"sum up all elements.
This is implemented using a variant of the normal inject:into: pattern.
The reason for this is that it is not known whether we are dealing with number
(i.e. if 0 is a good initial value for the sum).
Consider a collection of measurement or physical objects, 0 would be the unitless
value and would not be appropriate to add with the unit-ed objects."
| sum sample |
sample := self anElement.
sum := self inject: sample into: [:accum :each | accum + each].
^ sum - sample.
"
TestCase should: [ { } sum ] raise:Error.
TestCase should: [ '' sum ] raise:Error.
TestCase assert: ( { 1 } sum = 1 ).
TestCase assert: ( { 1. 2. 3. 4. } sum = 10 ).
TestCase assert: ( (1 to:10) sum = 55 ).
TestCase assert: ( 'abc' asByteArray sum = 294 ).
TestCase assert: ( { 10 +/- 2.
20 +/- 4.
100 +/- 10 } sum = (130 +/- 16) ).
TestCase assert: ( { (1 / 9).
(1 / 7).
} sum = (16 / 63) ).
"
Depends on the size/scale of the project. If its a short program (trivial to compile and run), I will test it early and often any time I add in any new functionality. This lets me catch most errors quickly.
In a large project (company-size), I'll test my piece in isolation like this, IF I can. Otherwise, pay attention to tests on those daily builds.
In short, test as often as possible, so long as the compile/run time doesn't take so long you consider taking up office swordfighting!
I tend to test each feature of a program. Not each function, but a series of functions that form a feature.
The benefit this way is that I don't have the overhead of testing each function individually, but test them one after another as part of the feature.
The project I am on now is supposed to be Unit Test first then development, and for the most part it is, but sometimes the person writing the test and the person implementing are not always on the same page.
So I like having a unit test for checking the main functionality of the method needed, then having the person implementing the code to write several unit tests checking the various edges of code.
The older I get, the less code I write before running/testing.
In part, that's a consequence of technical advances: I started out writing code on COBOL coding sheets to be transformed into punched cards twice a week when the punch girl came in. I generally wouldn't even attempt a compile of a new program until it was largely complete and desk-checked, which was usually a couple of thousand lines and a few weeks.
These days, when I'm on my game, I don't write any code before testing, I write a test before coding. I'm weak and not always sure how to write the test, though, so sometimes I tell myself I'm being pragmatic by just doing it. It's surprising how often that turns out to have been a bad idea, though: code that I wrote as a consequence of TDD tends to be easier to test, easier to modify and mostly just better than code that got tests later.
But that's just me, YMMV.
Usually, as soon as I complete a function, I compile it, switch to the REPL, and test it with some ad hoc made up data (also edge cases). Sometimes (more often than I'd like) a few debug cycles (edit-compile-test) are necessary to get the desired behaviour. Of course, this kind of development style is only viable if you can individually compile functions into a running runtime that provides a REPL, otherwise you would spend too much time waiting for a complete compile. I use SBCL with SLIME.
I try to make the first time my code runs be via a unit test.
Sometimes I write the test first, sometimes I write the method/class first.
I like to feel good about myself,
therefore I like to give myself positive feedback often,
therefore I try to “prove” a new method works soon after I write it.

BDD GUI Automation

I've started a new role in my life. I was a front end web developer, but I've now been moved to testing web software, or more so, automating the testing of the software. I believe I am to pursue a BDD (Behavior Driven Development) methodology. I am fairly lost as to what to use, and how to piece it together.
The code that is being used/written is in Java to write a web interface for the application to test. I have documentation of the tests to run, but I've been curious how to go about automating it.
I've been directed to Cucumber as one of the "languages" to help with the automation. I have done some research and come across a web site with a synopsis of BDD tools/frameworks,
8 Best Behavior Driven Development (BDD) Tools and Testing Frameworks. This helped a little, but then I got a little confused about how to implement it. It seems that Selenium is a common denominator in a lot of the BDD frameworks for testing a GUI, but it still doesn't seem to help describe what to do.
I then came across the term Functional Testing tool, and I think that confused me even more. Do they all test a GUI?
I think the one that looked like it was all one package was SmartBear TestComplete, and then there is what seems to be another similar application by SmartBear called SmartBear TestLeft, but I think I saw that they still used Cucumber for BDDing it. There are a few others that looked like they might work as well, but I guess the other question is: what's the cheapest route?
I guess the biggest problem I have is how to make these tests more dynamic, as the UI/browser dimensions can easily change from system to system, and how to go about writing automation that can handle this and tie into a BDD methodology?
Does anyone have any suggestions here? Does anybody out there do this?
Thanks in advance.
BDD Architecture
BDD automation typically consists of a few layers:
The natural language steps
The wiring that ties the steps to their definition
The step definitions, which usually access page objects
Page objects, which provide all the capabilities of a page or widget
Automation over the actual code being exercised, often through the GUI.
The wiring between natural language steps and the step definitions is normally done by the BDD tool (Cucumber).
The automation is normally done using the automation tool (Selenium). Sometimes people do skip the GUI, perhaps targeting an API or the MVC layer instead. It depends how complex the functionality in your web page is. If in doubt, give Selenium a try. I've written automation frameworks for desktop apps; the principle's the same regardless.
Keeping it maintainable
To make the steps easy to maintain and change, keep the steps at a fairly high level. This is frequently referred to as "declarative" as opposed to "imperative". For instance, this is too detailed:
When Fred provides his receipt
And his receipt is scanned
And the cashier clicks "Refund to original card"
And the card is inserted...
Think about what the user is trying to achieve:
When Fred gets a refund to his original card
Generally a scenario will have a few Givens or Thens, but typically only one When (unless you have something like users interacting or time passing, where both events are needed to illustrate the behaviour).
Your page objects in this scenario might well be a "RefundPageObject" or perhaps, if that's too large, a "RefundToCardPageObject". This pattern allows multiple scenario steps to access the same capabilities without duplication, which means that if the way the capabilities are exercised changes, you only need to change them in one place.
Different page objects could also be used for different systems.
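To make that concrete, here is a hedged sketch of those layers using cucumber-java and Selenium; RefundPage and the element id are hypothetical, and how the WebDriver is created and shared (hooks, a DI container, etc.) is left to your own setup.

// A hedged sketch of the layers described above, using cucumber-java and Selenium.
import io.cucumber.java.en.When;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class RefundSteps {

    private final RefundPage refundPage;

    // However your setup supplies the WebDriver, the step definition itself only ever
    // talks to the page object.
    public RefundSteps(WebDriver driver) {
        this.refundPage = new RefundPage(driver);
    }

    // The "wiring": Cucumber matches the natural-language step to this method.
    @When("Fred gets a refund to his original card")
    public void fredGetsARefundToHisOriginalCard() {
        refundPage.refundToOriginalCard();
    }
}

// The page object owns the Selenium details, so multiple steps can reuse the same
// capability without duplicating locators.
class RefundPage {

    private final WebDriver driver;

    RefundPage(WebDriver driver) {
        this.driver = driver;
    }

    void refundToOriginalCard() {
        driver.findElement(By.id("refund-to-original-card")).click();
    }
}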
Getting started
If you're attacking this for the first time, start by getting an empty scenario that just runs and passes without doing anything (make the steps empty). When you've done this, you'll have successfully wired up Cucumber.
Write the production code that would make the scenario run. (This is the other way round from the way you'd normally do it; normally you'd write the scenario code first. I've found this is a good way to get started though.)
When you can run your scenario manually, add the automation directly to the steps (you've only got one scenario at this point). Use your favourite assertion package (JUnit) to check the outcome you're after. You'll probably need to change your code so that you can automate over it easily, e.g. by giving relevant test ids to elements in your webpage.
Once you've got one scenario running, try to write any subsequent scenarios first; this helps you think about your design and the testability of what you're about to do. When you start adding more scenarios, start extracting that automation out into page objects too.
Once you've got a few scenarios, have a think about how you might want to address different systems. Avoid using lots of "if" statements if you can; those are hard to maintain. Injecting different implementations of page objects is probably better (the frameworks may well support this by now; I haven't used them in a while).
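A hypothetical sketch of that idea, with all names made up for illustration: the page object becomes an interface, each system under test gets its own implementation, and the step definitions never change.

// A hypothetical sketch of injecting different implementations instead of "if" statements.
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

interface RefundCapability {
    void refundToOriginalCard();
}

// Implementation that drives the web UI through Selenium.
class WebRefundPage implements RefundCapability {

    private final WebDriver driver;

    WebRefundPage(WebDriver driver) {
        this.driver = driver;
    }

    @Override
    public void refundToOriginalCard() {
        driver.findElement(By.id("refund-to-original-card")).click();
    }
}

// Implementation for a different system (say, an internal API used by another channel);
// the step definitions depend only on RefundCapability, so they never change.
class ApiRefundClient implements RefundCapability {

    @Override
    public void refundToOriginalCard() {
        // e.g. call the refund endpoint directly -- details omitted in this sketch
    }
}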
Keep refactoring as you add more scenarios. If the steps are too big, split them up. If the page objects are too big, divide them into widgets. I like to organize my scenarios by user / stakeholder capabilities (normally related to the "when" but sometimes to the "then") then by different contexts.
So to summarize:
Write an empty scenario
Write the code to make that pass manually
Wire up the scenario using your automation tool; it should now run!
Write another scenario, this time writing the automation before the production code
Refactor the automation, moving it out of the steps into page objects
Keep refactoring as you add more scenarios.
Now you've got a fully wired BDD framework, and you're in a good place to keep going while making it maintainable.
A final hint
Think of this as living documentation, rather than tests. BDD scenarios hardly ever pick up bugs in good teams; anything they catch is usually a code design issue, so address it at that level. It helps people work out what the code does and doesn't do yet, and why it's valuable.
The most important part of BDD is having the conversations about how the code works. If you're automating tests for code that already exists, see if you can find someone to talk to about the complicated bits, at least, and verify your understanding with them. This will also help you to use the right language in the scenarios.
See my post on using BDD with legacy systems for more. There are lots of hints for beginners on that blog too.
Since you feel lost as to where to start, I will point you to some blog posts I have written that talk a bit about your problem.
Some categories that may help you:
http://www.thinkcode.se/blog/category/Cucumber
http://www.thinkcode.se/blog/category/Selenium
This rather long and old post might give you hints as well:
http://www.thinkcode.se/blog/2012/11/01/cucumberjvm-not-just-for-testing-guis
Notice that the versions are dated, but hopefully it can give some ideas as to what to look for.
I am not an expert on test automation, but I am currently working on this, so let me share some ideas and hope they will help you at your current stage.
We have used Selenium + Cucumber + IntelliJ for testing a web application. We have used TestComplete + Cucumber + IntelliJ for testing a Java desktop application.
As to testing the web application, we have provided a test mode in our web application, which allows us to get some useful details of the product and the environment, and also allows us to easily trigger events by clicking buttons and inputting text into the test panel under test mode.
I hope these are helpful for you.