Closed 8 years ago as off-topic for Stack Overflow; not accepting answers.
From a history of graphics hardware:
Indeed, in the most recent hardware era, hardware makers have added features to GPUs that have somewhat... dubious uses in the field of graphics, but substantial uses in GPGPU tasks.
What is the author referring to here?
I would assume the author is referring to extra hardware features, as well as the abstractions added to support GPGPU initiatives such as CUDA and OpenCL.
From the description of CUDA:
CUDA has several advantages over traditional general-purpose computation on GPUs (GPGPU) using graphics APIs:
- Scattered reads – code can read from arbitrary addresses in memory.
- Shared memory – CUDA exposes a fast shared memory region (up to 48KB per Multi-Processor) that can be shared amongst threads. This can be used as a user-managed cache, enabling higher bandwidth than is possible using texture lookups.
- Faster downloads and readbacks to and from the GPU.
- Full support for integer and bitwise operations, including integer texture lookups.
These are all features that are relevant when implementing with CUDA and OpenCL, but are somewhat irrelevant (at least directly) to graphics APIs such as OpenGL. GPGPU features can still be leveraged in unconventional ways to supplement the traditional graphics pipeline.
The fast shared memory region that CUDA exposes, for example, is an additional hardware requirement with little direct use to OpenGL.
You can read this detailed document describing the architecture required for CUDA and how it differs from traditional graphics-only GPUs.
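To make the shared-memory point concrete, here is a minimal sketch of the "user-managed cache" idea using Numba's CUDA bindings (my own illustration, not taken from the quoted material; the block size and array length are arbitrary). Each block stages its slice of the input in fast on-chip shared memory and reduces it there before writing a single value back to global memory:

```python
import numpy as np
from numba import cuda, float32

TPB = 128  # threads per block; must be a compile-time constant for shared arrays

@cuda.jit
def block_sum(x, partial):
    # Fast on-chip shared memory, visible to all threads in this block.
    tile = cuda.shared.array(shape=TPB, dtype=float32)
    tid = cuda.threadIdx.x
    i = cuda.grid(1)

    # Stage one element per thread into the shared-memory "cache".
    tile[tid] = x[i] if i < x.size else 0.0
    cuda.syncthreads()

    # Tree reduction carried out entirely in shared memory.
    step = TPB // 2
    while step > 0:
        if tid < step:
            tile[tid] += tile[tid + step]
        cuda.syncthreads()
        step //= 2

    # One global-memory write per block.
    if tid == 0:
        partial[cuda.blockIdx.x] = tile[0]

x = np.random.rand(1 << 16).astype(np.float32)
blocks = (x.size + TPB - 1) // TPB
partial = np.zeros(blocks, dtype=np.float32)
block_sum[blocks, TPB](x, partial)
print(partial.sum(), x.sum())  # should agree to float precision
```

Nothing in the traditional graphics pipeline requires this kind of explicitly managed scratchpad, which is why it reads as a GPGPU-motivated addition.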
Closed 1 year ago as off-topic for Stack Overflow; not accepting answers.
I've recently installed OpenVINO following this tutorial, and at the start it warned me that it could not see the graphics card. I assumed it was referring to Intel HD Graphics (or similar), which I don't have; what I have is an Nvidia GTX 1080 Ti.
I haven't seen anyone talking about this and it is not mentioned in the installation guide, but can it even work with Nvidia graphics cards? And if not, what's the point of using OpenVINO?
OpenVINO™ toolkit extends computer vision and non-vision workloads across Intel® hardware, maximizing performance.
The OpenVINO™ toolkit is officially supported on Intel hardware only; it does not support other hardware, including Nvidia GPUs.
For supported Intel® hardware, refer to System Requirements.
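If you want to check what the toolkit actually detects on your machine, the Python API can list the available device plugins. A small sketch, assuming a recent openvino package (the import path has moved between releases); on an Nvidia-only system this will typically show just the CPU device:

```python
from openvino.runtime import Core  # recent releases; older ones used openvino.inference_engine

core = Core()
print(core.available_devices)  # e.g. ['CPU'], or ['CPU', 'GPU'] on Intel graphics
for device in core.available_devices:
    # Human-readable name of each plugin OpenVINO can use.
    print(device, core.get_property(device, "FULL_DEVICE_NAME"))
```

Note that the 'GPU' device here refers to Intel graphics; an Nvidia card will not appear in this list.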
Closed 2 years ago as off-topic for Stack Overflow; not accepting answers.
Can I run PyTorch or TensorFlow on Windows on a GPU that is also acting as the system's graphics card (e.g. when there are no graphics built into a Ryzen 3600 CPU)? If so, is there any downside, or would I be better off getting a CPU with built-in graphics?
Yes, it is possible to run, for example, TensorFlow on a GPU while also using that GPU to drive your system. You do not need a second graphics card or an integrated GPU.
Keep in mind that your graphics card will share memory and processing power between all your programs. GPU-intensive work might slow down the frame rate of your system and vice versa. Also keep an eye on memory usage.
I had tensorflow_gpu running a multi-layer CNN while playing AAA games (e.g. GTA V) on a Ryzen 3600. It worked on my very old NVIDIA GTX 260 (2GB of memory), but it crashed quite often because of the limited memory. I upgraded to a GTX 1080 with 8GB and it worked quite well. Needless to say, you can always fill up your GPU memory and crash, no matter the size.
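One way to make that sharing less painful is to stop TensorFlow from reserving the whole card up front, so the desktop keeps some headroom. A hedged sketch using the standard tf.config API (the 4096 MB cap is just an example value):

```python
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
for gpu in gpus:
    # Allocate GPU memory on demand instead of grabbing it all at startup.
    tf.config.experimental.set_memory_growth(gpu, True)

# Alternatively, cap TensorFlow at a fixed slice of the card so the
# display and other programs keep the rest (4096 MB is an arbitrary example):
# tf.config.set_logical_device_configuration(
#     gpus[0], [tf.config.LogicalDeviceConfiguration(memory_limit=4096)])

print("Visible GPUs:", gpus)
```

Memory growth has to be configured before the GPU is first used, so run this at the top of your script.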
Closed 6 years ago as needing more focus; not accepting answers.
I was looking at benchmarks and I can't see a difference; OpenGL 4.5 seems the same as Vulkan. Can an API affect graphics quality?
It is a bit broad, but it cannot hurt to have the motivation question answered.
This official video presentation discusses some of the differences: https://www.youtube.com/watch?v=iwKGmm3lw8Q
The Vulkan API is a complete rework.
It also gives the programmer more control (but exercising that control requires them to do more, and to know more).
Because of the above, any graphics application also requires a wholehearted rework; otherwise the benefits simply won't manifest. I don't keep up to date, but I think big engines like UE4 and Unity are still working out how to incorporate Vulkan in a non-naive manner.
Some benefits can already be seen in benchmarks, though not in every benchmark. Some workloads are fine for OpenGL, so Vulkan cannot show any improvement there. Some applications perhaps add Vulkan support only as an afterthought, making the comparison unfair. And optimizing the Vulkan drivers may not be a priority for some vendors (e.g. for older GPU cards).
The main benefit of Vulkan is on the CPU side. It may manifest in ways other than FPS, such as less fan noise (lower temperatures), longer battery life, and simply having more CPU free for other tasks.
Vulkan also gives more control to the programmer. If exploited, that may also translate into other non-FPS benefits, like improved input latency and less hitching.
Vulkan also demands less of the driver, hopefully making it easier to optimize and making GPU vendors more willing to adopt it and implement it even on older cards.
Everything else being the same (including the program itself, as far as it can be), there should be no difference in the resulting image quality, though pixel values can differ slightly here and there.
Closed 4 years ago as off-topic for Stack Overflow; not accepting answers.
I'm trying to measure the effects of CPU overcommitment on a KVM setup (both host and guest). I can tell that performance degrades when the number of vCPUs is increased, but ideally I want to look at a more objective metric (like CPU Ready in esxtop). Is there an equivalent of esxtop for KVM that provides a similar metric?
There is a fundamental difference between how you monitor VMs in KVM and how you monitor them with ESXi.
Since a lot of people run KVM in Linux, I'm going to assume your underlying OS is a Linux based one.
How do you get CPU Ready-like functionality with KVM?
With htop, enable the additional metrics and watch the gu (guest time) section. This tells you how much CPU time is being spent running guests.
Use virt-top, which tells you the overall CPU usage (among other things) of each guest (a scripted equivalent is sketched below).
The oversubscription principles that apply to ESXi also apply to KVM. Although KVM does not use CPU bonding (by default) like ESXi does, you still do not want to go beyond a 1:5 pCPU-to-vCPU ratio in KVM. Of course, this depends on how heavily you are utilizing the CPUs. You also do not want to give a guest more CPU cores than necessary: start with one core and move up.
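If you would rather script this than watch htop or virt-top, the libvirt Python bindings expose per-domain CPU time, which you can sample to get a rough per-guest utilization figure (a sketch, assuming the local qemu:///system URI and enough privileges to query it):

```python
import time
import libvirt

conn = libvirt.open("qemu:///system")
domains = conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE)

def cpu_time_ns(dom):
    # Aggregate CPU time the host has spent running this guest, in nanoseconds.
    return dom.getCPUStats(True)[0]["cpu_time"]

INTERVAL = 5  # seconds between samples
before = {dom.name(): cpu_time_ns(dom) for dom in domains}
time.sleep(INTERVAL)
for dom in domains:
    delta = cpu_time_ns(dom) - before[dom.name()]
    # 100% here means the guest kept one host CPU fully busy over the interval.
    print(f"{dom.name():20s} {delta / (INTERVAL * 1e9) * 100:6.1f}% of one pCPU")
conn.close()
```

This measures raw utilization rather than CPU Ready; the closest analogue of ready time is the scheduling delay of the guest's qemu threads on the host (or the steal time reported inside the guest).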
Closed 10 years ago as off-topic; not accepting answers.
As far as I understand it, embedded software is just software (running on a general-purpose CPU) that has little, if any, user input or configuration. Embedded software powers IP routers, cars, computer mice, etc.
My question is:
When (roughly) was the historical moment when embedded software was first considered cost-effective for some applications (rather than an equivalent technical solution not involving embedded software)? Which applications, and why?
Detail: Obviously there is a tradeoff between the cost of a CPU fast enough to perform X in software and the cost of designing hardware that performs X.
Embedded systems date from the Apollo moon landings; specifically, the Apollo Guidance Computer (AGC) is widely held to be one of the first examples of an embedded system.
Commercially, in the early 1970s, early microprocessors were being employed in products, most famously the 4-bit Intel 4004 used in the Busicom 141-PF. Bill Gates and Paul Allen saw the potential of embedded microprocessors early on with their pre-Microsoft endeavour, the Traf-O-Data traffic survey counter.
So I would suggest around 1971/72, with the introduction of the Intel 4004 and the more powerful 8-bit 8008. Note that unlike the still more powerful Intel 8080, which inspired the first home-brew microcomputers and the MITS Altair, the 4004 and 8008 were barely suitable for use as a general-purpose "computer" as such, so embedded computing systems pre-date general-purpose microcomputers.
I would dispute your characterisation of what an embedded system is; if you were asking that question, here's my answer to a similar question.