I found a tool (Intel Extreme Tuning Utility, OS: Windows 7 Ultimate x64) that I can use to change the GPU clock on my laptop (CPU: Intel Core i5-4210U, with built-in Intel HD Graphics 4400). To avoid any confusion, I marked the relevant slider in red on this screenshot of Intel XTU.
I would like to build the same functionality into my own program. It is enough if it works at least on my own processor (linked above).
My problem is that I do not know how to access the GPU clock rate, or the absolute GPU clock, or whatever actually exists behind the scenes. Any documentation or advice would be greatly appreciated.
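(As far as I know there is no documented Windows API for this; XTU appears to talk to its own driver. For illustration only, on Linux the i915 driver exposes the GT clock through sysfs, which at least shows what exists behind the scenes. A minimal sketch, assuming the standard gt_*_freq_mhz files under /sys/class/drm/card0/; the 900 MHz cap is an arbitrary example value, and this is not a Windows 7 solution.)

```cpp
// Illustrative sketch, Linux-only: read and cap the integrated GPU's
// clock via the i915 driver's sysfs files.
#include <fstream>
#include <iostream>
#include <string>

int main()
{
    const std::string base = "/sys/class/drm/card0/";

    // Current GT (graphics) frequency in MHz.
    std::ifstream cur(base + "gt_cur_freq_mhz");
    int mhz = 0;
    if (cur >> mhz)
        std::cout << "Current GPU clock: " << mhz << " MHz\n";

    // Writing gt_max_freq_mhz (requires root) caps the GPU clock,
    // roughly what the XTU slider does on Windows.
    std::ofstream maxFreq(base + "gt_max_freq_mhz");
    if (maxFreq)
        maxFreq << 900;  // assumed example value, in MHz

    return 0;
}
```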
Related
I'd like to buy an eGPU for my MacBook Pro to use for simple deep learning tasks. My setup is:
MacBook Pro (15-inch, 2017)
Graphics: Radeon Pro 555 2 GB
Intel HD Graphics 630 1536 MB
Version: Mojave 10.14.5
I understand that deep learning (i.e., using tensorflow-gpu) is not currently supported on my Mac. Given the past disputes between Nvidia and Apple, I assume Nvidia is reluctant to offer any kind of hacky solution with their graphics cards. That said, I was recommended the NVIDIA TITAN RTX or the NVIDIA Quadro GV100, but they're quite pricey at thousands of euros/dollars apiece. For now, I just want something to play around with.
I watched this and this to see how to configure a Mac with a CUDA-supported eGPU.
What Nvidia eGPU would you recommend for simple (i.e., not mega-large) data sets for deep learning? There seem to be so many models to choose from that it's not clear which would satisfy my needs. Would a GIGABYTE GeForce GTX 1050 Ti OC 4GB suffice?
I decided to give up on using my Mac with an external Nvidia graphics card. There are apparently some hacky solutions, but after reading many forum posts and online articles I figured the best way to proceed is simply to buy a new (gaming) desktop PC. One recurring theme was that an effective deep learning workstation should have at least 16 GB of RAM (32 GB or more is ideal), an Intel i7 or better, and 0.5 TB of SSD storage.
I am part of a team working on a 3D game engine that has a Vulkan rendering system. So far we have been testing on NVIDIA graphics cards, such as the GTX 970, and have had decent performance.
But recently we tested a scene on an AMD card and got really low fps:
For example, rendering a sponza scene:
AMD R9 Fury: 5 fps
NVIDIA GeForce GTX 970: 64 fps
The NVIDIA fps is not great, but much better than on AMD.
Do you guys have any idea what could be causing this difference in fps on the AMD card?
Or do you know how I could go about isolating what is causing the low fps on the AMD card?
Thanks in advance for your help.
AMD drivers have issues when a submission accesses a large number of VkDeviceMemory allocations. This is particularly a problem on Windows 7/8, which do not have WDDM 2.0. In fact, if you use too many (~1000) on Windows 7, it is easy to reproduce a BSOD. Nvidia drivers seem to be doing something behind the scenes and aren't subject to these limitations; however, as a result, their driver implementation may be hiding some opportunity for optimization from the user.
Regardless, the recommendation is to pool your memory allocations, such that VkImages and VkBuffers are sub-allocated from the same large VkDeviceMemory blocks. There is an open-source library called Vulkan Memory Allocator which attempts to aid in implementing this behavior (and it is, suspiciously enough, authored by AMD!).
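As a rough sketch of what using that library looks like (assuming a VkInstance, VkPhysicalDevice, and VkDevice already exist; this is illustrative, not your engine's actual code):

```cpp
// Sub-allocating buffers from pooled device memory with the
// Vulkan Memory Allocator (VMA). One .cpp file in the project must
// compile the implementation: #define VMA_IMPLEMENTATION first.
#include "vk_mem_alloc.h"

VmaAllocator CreateAllocator(VkInstance instance,
                             VkPhysicalDevice physicalDevice,
                             VkDevice device)
{
    VmaAllocatorCreateInfo allocatorInfo = {};
    allocatorInfo.instance       = instance;
    allocatorInfo.physicalDevice = physicalDevice;
    allocatorInfo.device         = device;

    VmaAllocator allocator = VK_NULL_HANDLE;
    vmaCreateAllocator(&allocatorInfo, &allocator);
    return allocator;
}

// Each call sub-allocates from one of VMA's large VkDeviceMemory
// blocks instead of doing a separate vkAllocateMemory per buffer,
// which keeps the number of VkDeviceMemory objects small.
void CreateVertexBuffer(VmaAllocator allocator, VkDeviceSize size,
                        VkBuffer* outBuffer, VmaAllocation* outAllocation)
{
    VkBufferCreateInfo bufferInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
    bufferInfo.size  = size;
    bufferInfo.usage = VK_BUFFER_USAGE_VERTEX_BUFFER_BIT |
                       VK_BUFFER_USAGE_TRANSFER_DST_BIT;

    VmaAllocationCreateInfo allocInfo = {};
    allocInfo.usage = VMA_MEMORY_USAGE_GPU_ONLY;  // device-local pool

    vmaCreateBuffer(allocator, &bufferInfo, &allocInfo,
                    outBuffer, outAllocation, nullptr);
}
```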
I am thinking about getting a Vive and I wanted to check if my PC can handle it. My motherboard and processor are pretty old (Asus M4A79XTD EVO ATX AM3 and AMD Phenom II X4 965 3.4GHz respectively) but I recently upgraded to a GeForce GTX 980 Ti graphics card.
When I ran the Steam VR test program, I was expecting it to say that my graphics card was OK but that my CPU was a bit too slow. Actually, it's the other way round. Screenshot of SteamVR:
Your system isn't capable of rendering low quality VR and it appears to be mostly bound by its GPU.
We recommend upgrading your Graphics Card
I've made sure I have updated my NVidia drivers.
When I look in GeForce Experience, I get the picture I was expecting to see:
GeForce Experience screenshot. It thinks my graphics card is OK but my processor doesn't meet the minimum spec.
But, since the Steam VR test is actually rendering stuff, whereas GeForce Experience just goes by the hardware I've got, it makes me think that my GPU should be capable but something about my setup is throttling it.
I'd love to know what the problem might be. Perhaps it's because I'm using an NVidia card on a motherboard with an AMD chipset?
Well, I never found out exactly what the cause was, but the problem is now resolved. I bought a new motherboard, processor, and RAM but kept the graphics card. After getting everything booted up, the system reports "high-quality VR" for both the CPU and the graphics card.
So, for whatever reason, it does seem like the MB/processor was throttling the graphics card in some way.
Steam VR only tests whether your rig can hold steady frame rates above 75 fps. I can run VR on my laptop and it's only got a GTX 960M; my CPU is a little more up to date (i7-6700K, 16 GB of DDR4). I also have a buddy who can run VR on a 780 Ti.
I am using an Xbox Kinect for Windows camera to capture skeleton data and RGB data. I retrieve 30 frames per second, calculate the joint positions of the human body, and then compute the angles between joints, which I want to store to a directory. I want my laptop/system to compute the joint values and angles faster, but at the moment my laptop computes them very slowly.
The specifications of my laptop are:
500 GB hard disk
6.00 GB RAM
1.7 GHz processor
Kindly tell me which system I should use to make these calculations faster. I really want a fast system/laptop for these computations. If anyone has an idea, please tell me.
Also, please tell me the complete specifications of such a system. I want to use the latest, fastest technology, or any machine that will resolve my issue.
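(For reference, the per-frame math here is cheap: the angle at a joint can be computed from the joint and its two neighbors with a dot product. A minimal sketch, using a hypothetical Vec3 type standing in for the Kinect SDK's joint-position type:)

```cpp
// Angle at joint B formed by its neighboring joints A and C.
// Vec3 is a hypothetical stand-in for the Kinect SDK's joint
// position type (e.g. a camera-space point).
#include <cmath>

struct Vec3 { float x, y, z; };

float JointAngleDegrees(const Vec3& a, const Vec3& b, const Vec3& c)
{
    // Vectors from the middle joint out to its two neighbors.
    const Vec3 u = { a.x - b.x, a.y - b.y, a.z - b.z };
    const Vec3 v = { c.x - b.x, c.y - b.y, c.z - b.z };

    const float dot  = u.x * v.x + u.y * v.y + u.z * v.z;
    const float lenU = std::sqrt(u.x * u.x + u.y * u.y + u.z * u.z);
    const float lenV = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);

    // Clamp to guard against rounding drifting outside [-1, 1].
    float cosTheta = dot / (lenU * lenV);
    if (cosTheta >  1.0f) cosTheta =  1.0f;
    if (cosTheta < -1.0f) cosTheta = -1.0f;

    return std::acos(cosTheta) * 180.0f / 3.14159265f;
}
```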
Your computer must have the following minimum capabilities:
32-bit (x86) or 64-bit (x64) processors
Dual-core, 2.66-GHz or faster processor
USB 2.0 bus dedicated to the Kinect
2 GB of RAM
Graphics card that supports DirectX 9.0c
Source: MSDN
Anyway, I suggest:
A desktop PC
with a 3 GHz (more is usually better) multi-core processor
with a GPU compatible with DirectX 11 and C++ AMP
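To give a rough idea of what C++ AMP offloading looks like (a toy kernel that squares an array; this is a pattern sketch only, not Kinect code):

```cpp
// Toy C++ AMP example: run a data-parallel loop on a DirectX 11 GPU.
// Compiles with MSVC; C++ AMP is a Microsoft-specific extension.
#include <amp.h>
#include <vector>

void SquareOnGpu(std::vector<float>& data)
{
    using namespace concurrency;

    // Wrap the host vector so the runtime can copy it to the GPU.
    array_view<float, 1> view(static_cast<int>(data.size()), data);

    // Each index runs as one GPU thread.
    parallel_for_each(view.extent, [=](index<1> i) restrict(amp) {
        view[i] = view[i] * view[i];
    });

    view.synchronize();  // copy results back to the host vector
}
```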
I am interested in trying out GPU programming. One thing that isn't clear to me is what hardware I need. Is it true that any PC with a graphics card is good enough? I have very little knowledge of GPU programming, so a gentle initial learning curve would be best. If I would have to resort to a lot of hacks just to run some tutorial because my hardware isn't good enough, I'd rather buy new hardware.
I have a retired PC (~10 years old) running Ubuntu Linux. I am not sure what graphics card it has; it must be some old one.
I am also planning to buy a new sub-$500 desktop, which according to my casual research normally has an AMD Radeon 7x or Nvidia GT 6x graphics card. I assume any new PC is good enough for learning the programming.
Anyway, any suggestion is appreciated.
If you want to use CUDA, you'll need a GPU from NVidia, and their site explains the compute capabilities of their different products.
If you want to learn OpenCL, you can start right now with an OpenCL implementation that has a CPU back-end. The basics of writing OpenCL code targeting CPUs or GPUs are the same; they differ mainly in performance tuning.
For GPU programming, any AMD or NVidia GPU made in the past several years will have some degree of OpenCL support, though there have been some new features introduced with newer generations that can't be easily emulated for older generations.
Intel's integrated GPUs in Ivy Bridge and later support OpenCL, but Intel only provides a GPU-capable OpenCL implementation for Windows, not Linux.
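A quick way to find out what your existing machines support is to enumerate the OpenCL platforms and devices present; a minimal sketch against the standard OpenCL C API:

```cpp
// List every OpenCL platform and device visible on this machine.
// Link with the OpenCL ICD loader (e.g. -lOpenCL on Linux).
#include <CL/cl.h>
#include <cstdio>

int main()
{
    cl_platform_id platforms[8];
    cl_uint numPlatforms = 0;
    clGetPlatformIDs(8, platforms, &numPlatforms);

    for (cl_uint p = 0; p < numPlatforms; ++p) {
        char name[256] = {};
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME,
                          sizeof(name), name, nullptr);
        std::printf("Platform: %s\n", name);

        cl_device_id devices[8];
        cl_uint numDevices = 0;
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL,
                       8, devices, &numDevices);
        for (cl_uint d = 0; d < numDevices; ++d) {
            char devName[256] = {};
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                            sizeof(devName), devName, nullptr);
            std::printf("  Device: %s\n", devName);
        }
    }
    return 0;
}
```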
Also be aware that there is a huge difference between mid-range and high-end GPUs in terms of compute capability, especially in how well double-precision arithmetic is supported. Some low-end GPUs don't support double precision at all, and mid-range GPUs often perform double-precision arithmetic 24 times slower than single-precision. When you want to do a lot of double-precision calculations, it's absolutely worth getting a compute-oriented GPU (Radeon 7900 series or GeForce Titan and up).
If you want a low-end system with non-trivial GPU power, your best bet at the moment is probably a system built around an AMD APU.