RAR password recovery on GPU using ATI Stream processor

I'm a newbie in GPU programming, and I'm working on brute-force RAR password recovery on the ATI Stream processor using the Brook+ language. I've found that kernels written in Brook+ can't call normal functions (only other kernel functions). My questions are:
1) How can I use the unrar.dll API (to extract archive files) in this situation? And is this the only way to program RAR password recovery?
2) What about cracking tools and ElcomSoft software that use the GPU? How do they work?
3) What exactly is the role of the code that runs on the GPU (ATI Stream processor or CUDA) in such a program?
4) Is NVIDIA/CUDA technology easier or more flexible than ATI/Brook+?

1) unrar.dll is a compiled dynamic link library. These execute on the CPU. GPUs have vastly different machine code and a very different execution model, so they can't run DLLs.
You could try to implement a callback from the GPU to the CPU via events, or build an x86 interpreter on the GPU, but these would almost certainly run slower than just running on the CPU.
Using unrar.dll is not the only way to program RAR password recovery. You could instead just build your own code for CPU and GPU from scratch.
2) They work by having the CPU code explicitly request that some GPU code (a kernel) run on the GPU; see the sketch at the end of this answer.
3) I don't know exactly. I would guess, though, that it has a GPU program that tries many different combinations and benefits from having them run in parallel.
4) CUDA is more mature than Brook+. Brook+ may be just as easy for simple tasks, but it isn't as fully featured. For new projects most people would now choose OpenCL over Brook+.
(I'm not sure what you're intending to do, but none of the above seems likely to enable anything sinister.)
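To make answers 2) and 3) concrete, here is a minimal sketch of the pattern: the CPU-side program sets up buffers, compiles a kernel, and asks the GPU to run it, while each GPU work-item derives one candidate from its own index and tests it. The sketch uses pyopencl and a toy check (it merely looks for the string "test") as illustrative assumptions; it is not a real RAR cracker and does not touch unrar.dll or the RAR key-derivation scheme.

import numpy as np
import pyopencl as cl

ALPHABET = b"abcdefghijklmnopqrstuvwxyz"
LENGTH = 4                              # candidate length
N = len(ALPHABET) ** LENGTH             # number of candidates to try

kernel_src = """
__kernel void try_candidates(__global const uchar *alphabet,
                             const uint alpha_len,
                             const uint length,
                             __global uint *found)
{
    uint gid = get_global_id(0);
    uchar candidate[8];
    uint n = gid;
    for (uint i = 0; i < length; ++i) {   // decode index -> candidate string
        candidate[i] = alphabet[n % alpha_len];
        n /= alpha_len;
    }
    // A real cracker would run the RAR key derivation and a decryption
    // check here; this toy check just looks for the candidate "test".
    if (candidate[0] == 't' && candidate[1] == 'e' &&
        candidate[2] == 's' && candidate[3] == 't')
        *found = gid;
}
"""

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags
alphabet_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR,
                         hostbuf=np.frombuffer(ALPHABET, dtype=np.uint8))
found = np.array([0xFFFFFFFF], dtype=np.uint32)
found_buf = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=found)

program = cl.Program(ctx, kernel_src).build()
# The CPU requests that N instances of the kernel run on the GPU.
program.try_candidates(queue, (N,), None, alphabet_buf,
                       np.uint32(len(ALPHABET)), np.uint32(LENGTH), found_buf)
cl.enqueue_copy(queue, found, found_buf)  # copy the result back to the CPU
print("matching candidate index:", found[0])

The CPU only orchestrates; the hundreds of thousands of candidate tests run concurrently on the GPU, which is the kind of parallelism guessed at in answer 3).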

Related

Simple explanation of terms while running configure command during Tensorflow installation

I am installing tensorflow from this link.
When I run the ./configure command, I see the following terms:
XLA JIT
GDR
VERBS
OpenCL
Can somebody explain in simple language what these terms mean and what they are used for?
XLA stands for 'Accelerated Linear Algebra'. The XLA page states that 'XLA takes graphs ("computations") [...] and compiles them into machine instructions for various architectures.' As far as I understand, this will take the computation you define in TensorFlow and compile it. Think of producing code in C, running it through the C compiler for the CPU, and loading the resulting shared library containing the code for the full computation, instead of making separate calls from Python to compiled functions for each part of your computation. Theano does something like this by default. JIT stands for 'just-in-time compiler', i.e. the graph is compiled 'on the fly'.
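To make the JIT part concrete, here is a minimal sketch assuming a recent TensorFlow 2.x build with XLA available (the 1.x ./configure option in the question exposes the same feature through different APIs):

import tensorflow as tf

# Ask XLA to JIT-compile the whole traced computation into one fused
# machine-code routine, instead of dispatching each op separately.
@tf.function(jit_compile=True)
def fused(x, y):
    return tf.reduce_sum(tf.nn.relu(x @ y + 1.0))

x = tf.random.normal((256, 256))
y = tf.random.normal((256, 256))
print(fused(x, y))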
GDR seems to be support for exchanging data between GPUs on different servers via GPUDirect. GPUDirect makes it possible for, e.g., the network card that receives data from another server over the network to write directly into the local GPU's memory without going through the CPU or main memory.
VERBS refers to the InfiniBand Verbs application programming interface ('library'). InfiniBand is a low-latency network used in many supercomputers, for example. It can be used for communication between servers when you want to run TensorFlow on more than one of them. The Verbs API is to InfiniBand what the Berkeley socket API is to TCP/IP communication (although Verbs offers many more communication options and different semantics optimized for performance).
OpenCL is a programming language with a C-like syntax suited for executing parallel computing tasks on CPUs and on non-CPU devices such as GPUs. Compared to C, however, there are certain restrictions, such as no support for recursion. One could probably say that OpenCL is to AMD what CUDA is to NVIDIA (although OpenCL is also used by other companies such as Altera).

Julia uses only 20-30% of my CPU. What should I do?

I am running a program that does numeric ODE integration in Julia. I am running Windows 10 (64-bit), with an Intel Core i7-4710MQ @ 2.50GHz (8 logical processors).
I noticed that when my code was running in Julia, at most 30% of the CPU was in use. After going through the parallelization documentation, I started Julia using:
C:\Users\*****\AppData\Local\Julia-0.4.5\bin\julia.exe -p 8 and expected to see improvements. I did not see them, however.
Therefore my question is the following:
Is there a special way I have to write my code in order for it to use the CPU more efficiently? Is this maybe a limitation posed by my operating system (Windows 10)?
I submit my code in the julia console with the command:
include("C:\\Users\\****\\AppData\\Local\\Julia-0.4.5\\13. Fast Filesaving Format.jl").
Within this code I use some additional packages with:
using ODE; using PyPlot; using JLD.
I measure the CPU usage with Windows' Task Manager.
The -p 8 option to julia starts 8 worker processes and disables multithreading in libraries like BLAS and FFTW, so that the workers don't oversubscribe the physical threads on the system; oversubscription kills performance in well-balanced distributed workloads.

If you want to get more speed out of -p 8, you need to distribute work between those workers, e.g. by having each of them do an independent computation, or by having them collaborate on a computation via SharedArrays. You can't just add workers and not change the program.

If you are using BLAS (doing lots of matrix multiplies) or FFTW (doing lots of Fourier transforms) and you don't use the -p flag, you'll automatically get multithreading from those libraries. Otherwise, there is no (non-experimental) user-level threading in Julia yet. There is experimental threading support, and version 1.0 will support threading, but I wouldn't recommend it yet unless you're an expert.
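In Julia 0.4 the relevant tools for distributing work are pmap, @parallel loops and SharedArrays. Purely to illustrate the general pattern, that extra worker processes only help when each one is handed an independent piece of work, here is a minimal sketch in Python (the function and parameter values are hypothetical stand-ins, not part of the question's code):

from multiprocessing import Pool

def integrate_one(param):
    # Hypothetical stand-in for one independent piece of work,
    # e.g. one ODE integration with its own parameter value.
    total = 0.0
    for i in range(1_000_000):
        total += (i * param) % 7.0
    return total

if __name__ == "__main__":
    params = [0.1 * k for k in range(8)]
    # Analogous to starting julia with -p 8 and then calling pmap:
    # each worker process receives its own independent task.
    with Pool(processes=8) as pool:
        results = pool.map(integrate_one, params)
    print(results)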

What is the relation between an interpreter and CPU?

I have read that an interpreter (VM) is software that executes code. I have also read that the CPU executes instructions. What is the difference between these two kinds of execution? The VM does not convert the bytecode into machine code. What does it do exactly?
The VM does not convert the bytecode into machine code.
A virtual machine does convert the bytecode into machine code. That is precisely its main purpose, because it allows you to execute your program on every OS and architecture where the virtual machine is present, without the need to recompile it. Plus, it can do other things, like security checks, etc.
EDIT
I am more used to the Java world, where the virtual machine actually compiles the bytecode into CPU instructions in order to speed things up (a lot). It seems, however, that in Python the code that carries out those instructions is part of the interpreter itself, which simply reads your program and does internally whatever is needed. I suggest you read your link, which seems to explain this quite well. Plus, I have read somewhere that Python is getting a JIT compiler too.
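To see the CPython case concretely, you can print the bytecode that the interpreter loop walks through; this small sketch uses the standard dis module (the add function is just an example):

import dis

def add(a, b):
    return a + b

# Print the CPython bytecode for add(). The CPU never executes these
# instructions directly; the interpreter, which is ordinary machine code,
# reads each one and carries out the corresponding work.
dis.dis(add)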

Can GPU be used to run programs that run on CPU?

Can the GPU be used to run programs that normally run on the CPU, like getting input from the keyboard and mouse, playing music, or reading the contents of a text file, using the Direct3D and OpenGL APIs?
The GPU has no direct access to any memory that is mapped by the OS to be accessed by client code (i.e. code that executes in user mode, with its instructions running on the CPU).
In addition, the GPU is not meant to perform tasks like these; it aims to perform floating-point arithmetic at high speed. And finally, you would never use Direct3D or OpenGL for anything that is not related to graphics, unless you are only going to use the compute shader.
General-purpose computations on the GPU, such as image manipulation or physics simulations, are performed with OpenCL or CUDA.
You can, however, gather any data on the CPU, send it to the GPU for further processing, and finally write it back again into memory accessible from the CPU.
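As a minimal sketch of that workflow, assuming pyopencl is available (CUDA would follow the same shape), with a gamma-correction kernel standing in for "further processing":

import numpy as np
import pyopencl as cl

# 1) Gather/prepare the data on the CPU (e.g. pixels read from a file).
pixels = np.random.rand(1_000_000).astype(np.float32)

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags

# 2) Send it to the GPU and run a numeric kernel there.
src = """
__kernel void gamma(__global float *p) {
    int i = get_global_id(0);
    p[i] = pow(p[i], 2.2f);
}
"""
buf = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=pixels)
cl.Program(ctx, src).build().gamma(queue, pixels.shape, None, buf)

# 3) Copy the result back into memory accessible from the CPU.
cl.enqueue_copy(queue, pixels, buf)
print(pixels[:5])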

GPU Processing in vb.net

I've got a program that takes about 24 hours to run. It's all written in VB.NET and it's about 2000 lines long. It's already multi-threaded and this works perfectly (after some sweat and tears). I typically run the processing with 10 threads, but I'd like to increase that to reduce processing time, which is where using the GPU comes into it. I've searched Google for everything related that I can think of, but no luck.
What I'm hoping for is a basic example of a vb.net project that does some general operations then sends some threads to the GPU for processing. Ideally I don't want to have to pay for it. So something like:
'Do some initial processing, e.g.:
Dim x As Integer
Dim y As Integer
Dim z As Integer
x = CInt(TextBox1.Text)
y = CInt(TextBox2.Text)
z = x * y
'Do some multi-threaded operations on the GPU, e.g. ...
'Show some output to the user once this has finished.
Any help or links will be much appreciated. I've read plenty of articles about it in C++ and other languages, but I'm rubbish at understanding other languages!
Thanks all!
Fraser
The VB.NET compiler does not compile onto the GPU; it compiles down to an intermediate language (IL) that is then just-in-time compiled (JITed) for the target architecture at runtime. Currently only x86, x64 and ARM targets are supported. CUDAfy (see below) takes the IL and translates it into CUDA C code. In turn this is compiled with NVCC to generate code that the GPU can execute. Note that this means you are limited to NVIDIA GPUs, as CUDA is not supported on AMD.
There are other projects that have taken the same approach, for example a Python to CUDA translator in Copperhead.
CUDAfy - A wrapper on top of the CUDA APIs with additional libraries for FFTs etc. There is also a commercial version. This does actually work.
CUDAfy Translator - Using SharpDevelop's decompiler ILSpy as a basis, the translator converts .NET code to CUDA C.
There are other projects that allow you to use GPUs from .NET languages. For example:
NMath - A set of math libraries that can be used from .NET and are GPU enabled.
There may be others, but these seem to be the main ones. If you decide to use CUDAfy then you will still need to invest some time in understanding enough of CUDA and how GPUs work to port your algorithm to fit the GPU data-parallel model, unless it is something that can be done out of the box with one of the math libraries.
It's important to realize that there is still a performance hit for accessing the GPU from a .NET language. You must pay a cost for moving (marshaling) data from the .NET managed runtime into the native runtime. The overhead here depends not only on the size but also on the type of the data, and on whether it can be marshaled without conversion. This is in addition to the cost of moving data from the CPU to the GPU, etc.