Does JProfiler have a -javaagent alternative to the native agent?

I was planning to use JProfiler to...err...profile my current project. However, my target architecture is not one that is currently supported.
Does there exist a -javaagent alternative to the native agent, i.e. a Java agent that the JProfiler GUI can connect to remotely and do its thing?
NDAs and the like preclude me from including anything more specific.

The JProfiler agent uses the profiling interface JVMTI, which is a native interface. While it would be possible to record a small subset of the displayed data with a Java agent, it would require a duplicate implementation in Java code.
Java agents are less suitable for general profiling than native agents, since they have to allocate all their resources on the heap.
If you're just interested in memory profiling, you can use tools like jmap (included in the JDK) to extract an HPROF snapshot (for example, jmap -dump:format=b,file=heap.hprof <pid>) that can be opened in JProfiler.

Related

Communication between Petrel and standalone app packaged as plugin

We (our team) saw that it's possible to include a standalone app in a plugin. The app is used to modify Petrel's data in a specific way. See for example these plugins:
http://www.ocean.slb.com/Pages/Product.aspx?category=petrelgeophysics%28Petrel%29&cat=Petrel&pid=PCPT-B1%28Base%29&view=grid
http://www.ocean.slb.com/Pages/Product.aspx?category=petrelgeophysics%28Petrel%29&cat=Petrel&pid=PRPW-B1%28Base%29&view=grid
We want to do the same thing, so we have these questions:
How does the plugin perform the editing of Petrel's data?
Does Petrel (Ocean) provide any mechanisms for IPC or should we develop our own architecture for communications between managed plugin code and native app process?
For most Petrel data, it is only safe to modify it from the main thread of a Petrel plug-in.
If you already have a native process that does the number crunching, you will need to implement your own way to share the data between the plug-in and the native process. Eclipse does this by file sharing. If the overhead of IPC outweighs the actual computation, you may want to consider refactoring the native process to make it run inside the plug-in.
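If file sharing is the route you take, the native side can be as simple as writing its results to an agreed location that the plug-in then reads on Petrel's main thread. A minimal sketch of the native-process side only (the file name results.bin and the record layout are hypothetical; the managed plug-in would read the same layout):

#include <stdio.h>

/* Hypothetical result record; the layout must be agreed with the plug-in side. */
struct result {
    double x, y, z;
    double value;
};

int main(void) {
    struct result r = { 100.0, 200.0, 1500.0, 42.0 };

    /* Write to a temp file first, then rename, so the plug-in never reads a half-written file. */
    FILE *f = fopen("results.bin.tmp", "wb");
    if (!f) { perror("fopen"); return 1; }
    fwrite(&r, sizeof r, 1, f);
    fclose(f);

    rename("results.bin.tmp", "results.bin");
    return 0;
}

The write-then-rename step is the main design point: it gives the plug-in an atomic view of each result file without needing any further synchronization.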

Platform Independent vs Machine Independent [duplicate]

I sometimes wonder why Java is referred to as a platform independent language.
I couldn't find a proper explanation of the points below:
Is the JVM the same for Windows/Linux/Mac OS?
Is the bytecode generated the same for the same class in the above environments?
If the answer to the above questions is NO, then how is platform independence achieved?
Please help me out in learning this basic concept.
Is the JVM the same for Windows/Linux/Mac OS?
Not at all. The compiler is the same across platforms, but since the JVM is a native executable, the file itself is different on each platform, i.e. on Windows it is an .exe, on Linux it is a Linux executable, etc.
Is the bytecode generated the same for the same class in the above environments?
Yes. That is why Java is COMPILE ONCE, RUN ANYWHERE.
Before starting, please read this doc by Oracle.
Machine dependence: this means that whatever you build for your hardware architecture will not execute on another architecture. For example, if you have created an executable for an AMD architecture, it will not be able to run on Intel's architecture. Platform dependence means that an executable you created for Windows won't be able to run on Linux. Code written in assembly (provided by your processor) or machine language is machine dependent, but if you write code in C, C++, or Java then your code is machine independent, with that independence provided by the underlying OS.
Platform independence: if you create some C or C++ code, it becomes platform dependent because it produces an intermediate file, i.e. a compiled file that matches the instruction set and conventions provided by the underlying OS. So you need some mediator that can understand both the compiled output and the OS; Java achieves this with the JVM. Note: no language is machine independent if you remove the OS, which is itself a program, created using some language, that can talk directly to your underlying machine architecture. The OS is the program that takes your compiled code and runs it on top of the underlying architecture.
The meaning of platform independence is that you only have to distribute your Java program in one format.
This one format will be interpreted by JVMs on each platform (which are coded as different programs optimized for the platform they are on) such that it can run anywhere a JVM exists.
1) Is the JVM the same for Windows/Linux/Mac OS?
Answer: No, the JVM is different for each platform.
2) Is the bytecode generated the same for the same class in the above environments?
Answer: Yes, the generated bytecode will be the same.
The explanation below will give you more clarity.
{App1(Java code)------>App1byteCode}........{(JVM+MacOS) help work with App1,App2,App3}
{App2(Java Code)----->App2byteCode}........{(JVM+LinuxOS) help work with App1,App2,App3}
{App3(Java Code)----->App3byteCode}........{(JVM+WindowsOS) help work with App1,App2,App3}
How does this happen?
Answer: the JVM can read bytecode and respond in accordance with the underlying OS, because each JVM is built for its OS.
So we need a JVM that matches the platform.
But the main thing is that the programmer does not need any platform-specific knowledge and does not have to write the application with one specific platform in mind.
This flexibility of writing a program in the Java language, compiling it to bytecode, and running it on any machine (yes, you need a platform-DEPENDENT JVM to execute it) makes Java platform independent.
Java is called a platform independent language because virtually all you need to run your code on any operating system is that system's JVM.
The JVM "maps" your Java code's commands to the system's commands, so you don't have to change your code for each operating system; you just install that system's JVM (which is provided by Oracle).
The credo is "Write once, run anywhere."
Watch this 2-minute video tutorial; hopefully it will help you understand why Java is platform independent. Everything is explained in just 2 minutes and 37 seconds.
Why Java is platform independent?
https://www.youtube.com/watch?v=Vn8hdwxkyKI
And here is the explanation:
There are two steps required to run any Java program, i.e.
(i) compilation and
(ii) interpretation.
The Java compiler, commonly known as javac, is used to compile any Java file. During compilation, the compiler processes each statement of the Java file; if the program contains errors, it generates error messages on the output screen. On successful completion of compilation, the compiler creates a new file known as a class file (also called a binary coded file, bytecode file, or "magic code" file).
The generated class file is a binary file, so the Java interpreter, commonly known as java, is required to interpret each statement of the class file. After the interpretation process completes successfully, the machine generates output on the screen.
This generated class file depends only on the components provided by the Java interpreter (java) and does not depend on the tools and components available in the operating system.
Therefore, we can run a Java program on any type of operating system, provided a Java interpreter is available for that operating system. Hence, Java is known as a platform independent language.
Two things happen when you run an application in Java:
The Java compiler (javac) compiles the source into bytecode (stored in a .class file).
The Java bytecode (.class) is OS independent; it has the same format and extension on all the different OSs. But since it is not specific to any OS or other environment, nothing can run it directly (unless there is a machine whose native instruction set is bytecode, i.e. one that understands bytecode itself).
The JVM loads and executes the bytecode.
A virtual machine (VM) is a software implementation of a machine (i.e. a computer) that executes programs like a physical machine. Java also has a virtual machine called Java Virtual Machine (JVM).
The JVM has a class loader that loads the compiled Java bytecode into the Runtime Data Areas, and it has an execution engine that executes the Java bytecode. Importantly, the JVM itself is platform dependent: you will have a different JVM for different operating systems and other environments.
The execution engine must change the bytecode into a form that can be executed by the machine. This includes various tasks such as finding performance bottlenecks and recompiling (to native code) frequently used sections of code. The bytecode can be changed into that form in one of two ways:
Interpreter : Reads, interprets and executes the bytecode instructions one by one
JIT (Just-In-Time) compiler : The JIT compiler has been introduced to compensate for the disadvantages of the interpreter. The execution engine runs as an interpreter first, and at the appropriate time, the JIT compiler compiles the entire bytecode to change it to native code. After that, the execution engine no longer interprets the method, but directly executes using native code. Execution in native code is much faster than interpreting instructions one by one. The compiled code can be executed quickly since the native code is stored in the cache.
So, in summary, Java code is compiled into bytecode, which is platform independent, and Java has a virtual machine (JVM) specific to each platform (operating system, etc.) that can load and interpret that bytecode into machine-specific code.
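A small way to see that the compiled format really is the same everywhere: every .class file begins with the same four magic bytes, no matter which OS ran javac. A quick check in C (Main.class is just an example file name, e.g. produced by javac Main.java):

#include <stdio.h>

int main(void) {
    /* Main.class is any compiled Java class file. */
    FILE *f = fopen("Main.class", "rb");
    if (!f) { perror("fopen"); return 1; }

    unsigned char magic[4];
    if (fread(magic, 1, sizeof magic, f) == sizeof magic)
        /* Prints CA FE BA BE on every platform, because the class file format is OS independent. */
        printf("%02X %02X %02X %02X\n", magic[0], magic[1], magic[2], magic[3]);

    fclose(f);
    return 0;
}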
Refer to:
https://www.cubrid.org/blog/understanding-jvm-internals/
https://docs.oracle.com/javase/tutorial/getStarted/intro/definition.html

How can I load the AIR runtime as a in-process shared library from a C program

I'd like to build a special AIR launcher program in C along the lines of java.exe.
I've looked at running AIR programs with a process viewer and was able to locate the AIR runtime DLL that is being used. AIR programs are different than Java in that they are installed as platform-specific executables that bind the AIR runtime as an in-process shared library once they're launched (their icon is double-clicked by the user).
Well, I want to make an AIR launcher that is instead like the java.exe.
java.exe is launched as a platform OS process that binds to the Java runtime (JRE) as an in-process shared library. The Java application to be executed is specified as a command-line argument to java.exe. Once java.exe is running and the JVM is fully functional, the specified application class is loaded by the JVM class loader for execution. That Java application then takes over, in a sense "hijacking" the java.exe process. Of course, the application shows up in any process listing as the java.exe program that hosts it.
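For reference, the mechanism java.exe uses for this is the JNI Invocation API. A rough sketch of it in C (the class name Main is purely hypothetical, and error handling is omitted):

#include <jni.h>

int main(void) {
    JavaVM *jvm;
    JNIEnv *env;
    JavaVMOption options[1];
    JavaVMInitArgs vm_args;

    /* The kind of options java.exe builds from its own command line. */
    options[0].optionString = "-Djava.class.path=.";
    vm_args.version = JNI_VERSION_1_6;
    vm_args.nOptions = 1;
    vm_args.options = options;
    vm_args.ignoreUnrecognizedOptions = JNI_FALSE;

    /* Bind the JVM shared library (jvm.dll / libjvm.so) into this process. */
    if (JNI_CreateJavaVM(&jvm, (void **)&env, &vm_args) != JNI_OK)
        return 1;

    /* Load the application class and hand control to its main(String[]) method. */
    jclass cls = (*env)->FindClass(env, "Main");
    jmethodID mainId = (*env)->GetStaticMethodID(env, cls, "main", "([Ljava/lang/String;)V");
    jobjectArray args = (*env)->NewObjectArray(env, 0,
                            (*env)->FindClass(env, "java/lang/String"), NULL);
    (*env)->CallStaticVoidMethod(env, cls, mainId, args);

    (*jvm)->DestroyJavaVM(jvm);
    return 0;
}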
I want to make AIR app launching work like this. Why? So I can explore ways to hack AIR and perhaps overcome some of its many, many deficiencies. For instance, for starters I want to extend the AIR runtime experience with some new APIs that become available to the running AIR application.
My first order of business would be to:
Implement a binding interface of ActionScript 3 to C that is comparable to .NET PInvoke.
Add an API for process launching that is comparable to the APIs found in Java SE for doing this (Runtime.exec, ProcessBuilder, Process).
Add support for an AIR application to be able to interact with stdin, stdout, and stderr. Strangely, though Adobe added support for local file access in AIR, they have omitted interaction with these standard pipes (yet they are found on any OS platform that AIR supports).
Implement support for AMF over stdin, stdout, and stderr, so AIR (or Java or any AMF-capable language) apps can do interprocess communication by exchanging AMF objects. This would add a touch of Microsoft's PowerShell to AIR.
Currently Merapi provides an AMF bridge to Java, so that demonstrates the efficacy of this. Alas, Merapi has to use a localhost port and socket to do the interprocess communication, which is a clumsy way to go relative to using stdin/stdout/stderr pipes instead.
It sounds like you want to do some very hardcore AIR hacking. I don't think hosting the AIR runtime in your own process will be very easy. But you might consider embedding the Flash Player ActiveX Control. Since it is just a COM object, any COM application can CoCreateInstance() the Flash Player. The COM interface is not well documented, but here are some examples that might be helpful:
F-IN-BOX is a developer's library that enhances Macromedia Flash Player ActiveX features. It does not use its own engine to display movies but provides a wrapper around the official swflash.ocx/flash.ocx code instead.
How to embed Flash Player ActiveX using BoxedApp SDK
If you want to get even lower-level access, you could embed the open-source Tamarin AS3 VM. The code has an example command-line shell called "avmshell". If you build the Tamarin VM yourself, you can add new ActionScript classes implemented in native C++. Tamarin (and the Flash Player) implement many of their features using this "AVM glue" between AS and C++.
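To make the CoCreateInstance() suggestion concrete, here is a minimal sketch of instantiating the Flash Player COM object from C. The ProgID is the one the player normally registers; a real host must also implement the ActiveX container interfaces, which is omitted here:

#include <windows.h>
#include <objbase.h>
#include <stdio.h>

/* Build with: cl host.c ole32.lib uuid.lib */
int main(void) {
    CLSID clsid;
    IUnknown *flash = NULL;

    CoInitialize(NULL);

    /* ProgID registered by the Flash Player ActiveX control. */
    if (FAILED(CLSIDFromProgID(L"ShockwaveFlash.ShockwaveFlash", &clsid))) {
        printf("Flash Player ActiveX control is not registered\n");
        CoUninitialize();
        return 1;
    }

    /* Loads flash.ocx into this process as an in-proc COM server. */
    if (SUCCEEDED(CoCreateInstance(&clsid, NULL, CLSCTX_INPROC_SERVER,
                                   &IID_IUnknown, (void **)&flash))) {
        printf("Flash Player loaded in-process\n");
        /* A real host would now query for IOleObject etc. and provide an
           ActiveX container site before loading and scripting a movie. */
        flash->lpVtbl->Release(flash);
    }

    CoUninitialize();
    return 0;
}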
What my question proposed doing turns out to be prohibited by Adobe (at least for any potential commercial use):
From the Adobe® AIR™ Runtime Distribution FAQ:
Distribute or use the Adobe AIR runtime, installer files, or extracted installer files in an undocumented manner or purpose. For example, you may not distribute, call directly, or write wrappers for any of the Adobe AIR libraries or runtime components within your software application. Runtime.dll, Runtime executables, template.exe, and template.app are examples of Runtime Components.

Using Windows DLL from Linux

We need to interface to a 3rd-party app, but the company behind the app doesn't disclose the message protocol and provides only a Windows DLL to interface to.
Our application is Linux-based, so I cannot communicate with the DLL directly. I couldn't find any existing solution, so I'm considering writing a socket-based bridge between Linux and Windows; however, I'm sure it is not such a unique problem and somebody must have done it before.
Are you aware of any solution that allows calling Windows DLL functions from a C app on Linux? It can use Wine or a separate Windows PC, it doesn't matter.
Many thanks in advance.
I wrote a small Python module for calling into Windows DLLs from Python on Linux. It is based on IPC between a regular Linux/Unix Python process and a Wine-based Python process. Because I have needed it in too many different use-cases / scenarios myself, I designed it as a "generic" ctypes module drop-in replacement, which does most of the required plumbing automatically in the background.
Example: Assume you're in Python on Linux, you have Wine installed, and you want to call into msvcrt.dll (the Microsoft C runtime library). You can do the following:
from zugbruecke import ctypes
dll_pow = ctypes.cdll.msvcrt.pow
dll_pow.argtypes = (ctypes.c_double, ctypes.c_double)
dll_pow.restype = ctypes.c_double
print('You should expect "1024.0" to show up here: "%.1f".' % dll_pow(2.0, 10.0))
Source code (LGPL), PyPI package & documentation.
It's still a bit rough around the edges (i.e. alpha and insecure), but it does handle most types of parameters (including pointers).
Any solution is going to need a TCP/IP-based "remoting" layer between the DLL, which is running in a Windows-like environment, and your Linux app.
You'll need to write a simple PC app to expose the DLL functions, either using a homebrew protocol, or maybe XML-RPC, SOAP or JSON protocols. The RemObjects SDK might help you - but could be overkill.
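A sketch of what the Windows side of such a bridge could look like, using a homebrew one-function protocol (vendor.dll and its exported function compute are hypothetical, and error handling plus a real wire format are left out):

/* Windows-side stub: loads the vendor DLL and exposes one function over TCP.
   Build with: cl bridge.c ws2_32.lib */
#include <winsock2.h>
#include <windows.h>

typedef int (__stdcall *compute_fn)(int);   /* hypothetical signature of the DLL export */

int main(void) {
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);

    /* Load the vendor DLL and resolve the function we want to expose. */
    HMODULE dll = LoadLibraryA("vendor.dll");
    compute_fn compute = (compute_fn)GetProcAddress(dll, "compute");

    SOCKET srv = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(9000);              /* port number is arbitrary */
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(srv, (struct sockaddr *)&addr, sizeof addr);
    listen(srv, 1);

    for (;;) {
        SOCKET client = accept(srv, NULL, NULL);
        int request, reply;
        /* Protocol: 4-byte request in, 4-byte reply out (network byte order). */
        if (recv(client, (char *)&request, sizeof request, 0) == sizeof request) {
            reply = htonl(compute(ntohl(request)));
            send(client, (char *)&reply, sizeof reply, 0);
        }
        closesocket(client);
    }
}

The Linux side then just opens a TCP connection to the Windows box (or Wine process) and exchanges the same 4-byte messages; swapping this homebrew framing for XML-RPC or JSON is a matter of taste.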
I'd stick with a 'real' or virtualized PC. If you use Wine, the DLL developers are unlikely to offer any support.
MONO is also unlikely to be any help, because your DLL is probably NOT a .NET assembly.
This is a common problem. Fortunately, it now has a solution. Meet LoadLibrary, developed by Tavis Ormandy:
https://github.com/taviso/loadlibrary
I first stumbled across LoadLibrary in an article on Phoronix by Michael Larabel:
A Google researcher has been developing "LoadLibrary" as a means of being able to load Windows Dynamic Link Libraries (DLLs) that in turn can be used by native Linux code.
LoadLibrary isn't a replacement for Wine or the like but is intended to allow Windows DLL libraries to be loaded that can then be accessed by native Linux code, not trying to run Windows programs and the like on Linux but simply loading the libraries.
This project is being developed by Tavis Ormandy, a well known Google employee focused on vulnerability research. He worked on a custom PE/COFF loader based on old ndiswrapper code, the project that was about allowing Windows networking drivers to function on Linux.
LoadLibrary will handle relocations and imports and offers an API inspired by dlopen. LoadLibrary at this stage appears to be working well with self-contained Windows libraries and Tavis is using the project in part for fuzzing Windows libraries on Linux.
Tavis noted, "Distributed, scalable fuzzing on Windows can be challenging and inefficient. This is especially true for endpoint security products, which use complex interconnected components that span across kernel and user space. This often requires spinning up an entire virtualized Windows environment to fuzz them or collect coverage data. This is less of a problem on Linux, and I've found that porting components of Windows Antivirus products to Linux is often possible. This allows me to run the code I'm testing in minimal containers with very little overhead, and easily scale up testing."
More details on LoadLibrary for loading Windows DLLs on Linux are on GitHub, where he also demonstrated porting Windows Defender libraries to Linux.
Sometimes it is better to pick a small vendor over a large vendor, because the size of your business will carry more weight with them. We have certainly found this with AV engine vendors.
If you are sufficiently important to them, they should provide either a documented, supported protocol, a Linux build of the library, or the source code to the library.
Otherwise you'll have to run a Windows box in the loop using RPC as others have noted, which is likely to be very inconvenient, especially if the whole of the rest of your infrastructure runs Linux.
Will the vendor support the use of their library within a Windows VM? If performance is not critical, you might be able to do that.
Calling the DLL's functions themselves is of course only the tip of the iceberg. What if the DLL calls Win32? Then you'd have a rather massive linking problem. I guess Wine could help you out there; I'm not sure whether they provide a solution for that.
IMO, the best bet is to use Sockets. I have done this previously and it works like a charm.
An alternate approach is to use objdump -d to disassemble the DLL, and then recompile/reassemble it. Don't expect to be able to recompile the code unedited. You might get pure, unadulterated rubbish, or code full of Windows calls, or both. Look for individual functions. Functions are often delimited by a series of push instructions and end with a ret instruction.

Can you freeze a C/C++ process and continue it on a different host?

I was wondering if it is possible to generate a "core" file, copy it to another machine, and then continue execution from that core file on the other machine?
I have seen the gcore utility that will make a core file from a running process. But I do not think gdb can continue execution based on a core file.
Is there any way to just dump the heap/stack and restore those at a later point?
It's called process migration.
MOSIX and openMosix used to be able to do that. Nowadays it's easiest to migrate a whole VM.
On modern systems, not from a core file, no you can't. For freezing and restoring an individual process on Linux, CryoPID and the new kernel-based checkpoint and restart are in the works, but their abilities are currently quite limited. OpenVZ and other virtualization-like software can freeze and restore an entire system.
Also check out the Condor project. Condor can do that with parallel jobs as well. Condor also includes monitors that can automatically migrate your process when someone, for example, starts using their workstation again. It's really designed for utilizing spare cycles in networked environments.
This won't, in general, be sufficient to let an arbitrary process continue on another machine. In addition to the heap and stack state, there may also be open I/O handles, allocated hardware resources, etc.
Your options are either to explicitly write your software in a way that lets it dump state on a signal and later resume from the dumped state, or to run your software in a virtual machine and migrate that to the alternate host - Xen and Vmware both support freeze/restore as well as live migration.
That said, CryoPID attempts to do precisely this and occasionally succeeds.
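A minimal sketch of the explicit approach mentioned above: keep everything worth resuming in one plain structure, dump it to a file on a signal, and reload it at startup on the other host. The state layout here is hypothetical, and real code must keep pointers, handles, and other machine-specific values out of the saved state:

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

/* All resumable state lives in one plain struct: no pointers, no handles. */
struct app_state {
    long iteration;
    double result;
};

static struct app_state state;
static volatile sig_atomic_t dump_requested = 0;

static void on_sigusr1(int sig) { (void)sig; dump_requested = 1; }

static void dump_state(const char *path) {
    FILE *f = fopen(path, "wb");
    if (f) { fwrite(&state, sizeof state, 1, f); fclose(f); }
}

static int load_state(const char *path) {
    FILE *f = fopen(path, "rb");
    if (!f) return 0;
    int ok = fread(&state, sizeof state, 1, f) == 1;
    fclose(f);
    return ok;
}

int main(void) {
    signal(SIGUSR1, on_sigusr1);

    /* Resume from a checkpoint copied over from another machine, if one exists. */
    if (!load_state("checkpoint.bin")) {
        state.iteration = 0;
        state.result = 0.0;
    }

    for (;; state.iteration++) {
        state.result += 1.0;           /* stand-in for the real computation */
        if (dump_requested) {          /* kill -USR1 <pid> triggers a checkpoint */
            dump_state("checkpoint.bin");
            dump_requested = 0;
        }
        usleep(1000);
    }
}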
As of Feb. 2017, there's a fairly stable and mature tool called CRIU that depends on updates to the Linux kernel made in version 3.11 (as this was done in Sep. 2013, most modern distros should have it in their kernels).
It can be installed via apt by simply calling sudo apt-get install criu.
Instructions on how to use it are in the CRIU documentation; a typical dump/restore looks something like criu dump -t <pid> -D images/ --shell-job followed by criu restore -D images/ --shell-job.
In some cases, this can be done. For example, part of the Emacs build process is to load up all the Lisp libraries and then dump the memory image on disk for quick loading. Some other language interpreters do that too (I'm thinking of Lisp and Scheme implementations, mostly). However, they're specially designed for that kind of use, so I don't know what special things they have to do to allow that to work.
I think this would be very hard to do for a random program, but if you wrote a framework where all objects supported serialisation/deserialisation, you could serialise all objects used by your program, ship that elsewhere, and deserialise them at the other end.
The other people's answers about virtualisation are on the spot, too.
Depends on the machine. It's very doable in a very small embedded system, for instance. I think it's also implemented somewhat in Beowulf clusters and other supercomputeresque apps.
There are lots of reasons you can't do what you want very easily. For example, when you restore the core file on the other machine how do you resolve file descriptors that you process had open? What about sockets, named pipes, semaphores, or any other OS-level resource? Basically unless your system is specifically designed to handle such an operation you can't naively dump a core file and move it to another machine.
I don't believe this is possible. However, you might want to look into virtualization software - e.g. Xen - which makes it possible to freeze and move entire system images from one machine to another.