Question 1
If I want to build an application with OpenCL support, do I have any guarantee that the OpenCL.lib implementation from my vendor will work with devices from other vendors? If so, what is the difference between the implementations?
Question 2
Is it possible to use different OpenCL versions in the same application? For example, AMD has released a preview driver with OpenCL 2.0 support. On the other hand, the lovely company called Nvidia is still ignoring everything past OpenCL 1.1. It would be nice if I could write platform-specific code against different versions.
1: On Windows, OpenCL.lib is a static wrapper around OpenCL.dll, which is the ICD loader and exposes all of the available platforms. It is provided by Khronos and redistributed by the OpenCL platform vendors. So go ahead and link to it; it will work with whatever is installed (although if nothing is installed your application won't run because it can't find OpenCL.dll; that problem can be solved in other ways).
2: Yes. As long as the ICD loader is the latest version, you can get at the newer API on newer platforms/devices. Just don't call the new API on old devices; that will crash or worse.
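As a minimal sketch of what that looks like in practice (assuming the Khronos headers are installed and you link against OpenCL.lib), an application can enumerate every platform the ICD loader exposes and read each platform's reported OpenCL version before deciding which API level to use on it:

    #include <stdio.h>
    #include <CL/cl.h>

    int main(void) {
        cl_uint num_platforms = 0;
        clGetPlatformIDs(0, NULL, &num_platforms);   /* ask the ICD loader how many platforms exist */

        cl_platform_id platforms[16];
        if (num_platforms > 16) num_platforms = 16;
        clGetPlatformIDs(num_platforms, platforms, NULL);

        for (cl_uint i = 0; i < num_platforms; ++i) {
            char version[128] = {0};
            clGetPlatformInfo(platforms[i], CL_PLATFORM_VERSION,
                              sizeof(version), version, NULL);  /* e.g. "OpenCL 2.0 AMD-APP (...)" */
            printf("Platform %u: %s\n", i, version);
            /* Only call 2.0 entry points (clCreateCommandQueueWithProperties and friends)
               on platforms whose version string reports 2.0 or later. */
        }
        return 0;
    }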
Can iOS apps be compiled on the new M1 chipset?
Is there any schedule for official support?
The short answer is yes.
The latest version of Xcode (version 12) is compiled as a universal app. This means that it runs natively on both Intel-based and Apple Silicon machines. From Apple's website:
Xcode 12 is built as a Universal app that runs 100% natively on Intel-based CPUs and Apple Silicon for great performance and a snappy interface.* It also includes a unified macOS SDK that includes all the frameworks, compilers, debuggers, and other tools you need to build apps that run natively on Apple Silicon and the Intel x86_64 CPU.
This means that you should be able to compile iOS apps with the latest version of Xcode without a problem. It would be kind of crazy for Apple to release professional hardware (MacBook Pro) without this capability.
Keep in mind that a number of third-party applications may not work well on the ARM machines yet. VSCode is not currently supported on M1 devices (although Microsoft has said that support is coming). VSCode is an Electron-based app, which currently can't be emulated with Apple's Rosetta 2 platform. You might not use VSCode, but keep in mind that any Electron-based apps that you use may not work straight away.
If you exclusively use Xcode and don't critically rely on any third-party apps, you should be OK.
EDIT: I just noticed that you tagged your post with react-native. Information on compatibility is pretty slim at the moment, so I would be cautious. If you need a MacBook Pro for commercial work or school projects right now, then you run the risk of things not working as intended. The M1 MacBooks will undoubtedly support everything that you need as a developer in the future, and they're particularly good candidates for iOS development because of the parallels made possible by the shared ARM architecture.
If you're relying on a new machine to get work done right now, going with an Intel-based machine is probably the best option. For reference, I recently got an Intel-based 16" MacBook Pro through work because I need to get things done right now without any issues. The commercial value far outweighs the potential benefits that an M1 machine might bring in a year or two. If you're OK with running into some issues over the next few months, I'm sure the M1 machines will provide plenty of value for years ahead.
For now there are problems that prevent the application from compiling:
brew and CocoaPods have to be installed from a terminal running under Rosetta.
pod install / pod update fails because Flipper and some parts of React Native are not yet supported on the platform.
If you use Expo without the CLI, everything is OK.
Update: the CLI now works (after updating everything, including Homebrew and CocoaPods, to the latest versions).
From what I know, iOS apps can only be compiled on macOS, so it should work with whatever macOS itself runs on.
I have NVIDIA driver v378.92 installed, and according to the NVIDIA website, since driver version 377.14 the driver supports Vulkan API 1.0.42.1. My Vulkan SDK API version is 1.0.42.2. However, when I check my device support info using vkjson_info.exe from the Vulkan SDK, it states that only apiVersion 1.0.37 is supported.
I'm a bit confused about how this works; can anyone enlighten me?
The reported version could be limited by the Vulkan Loader/Runtime it finds. First, is this Windows or Linux?
If you have the Vulkan SDK 1.0.42.2 installed, can you run the VIA tool? It should generate an HTML output. If you look at the "Runtimes" section, you should see which ones are available and which one it's using. For best results, try running it from the same folder as vkjson_info.exe. But, it should give you a good idea if you just run it anywhere.
"1.0.42.1" is not a Vulkan version. Vulkan only has three levels (i.e. major.minor.patch). So the "1.0.37" is likely correct and the "1.0.42.1" is likely the version of some LunarG Vulkan SDK or possibly Vulkan Runtime that comes with it.
There are usually several types of versions flying around:
The Vulkan driver version. It is of the major.minor.patch format and is found in VkPhysicalDeviceProperties::apiVersion, or can be obtained with a tool such as VHCV.
Optionally, the SDK/Layers version on the end-user machine. LunarG Vulkan SDK versioning is of the form vulkan_major.vulkan_minor.vulkan_patch.optional_SDK_patch.
The Vulkan Runtime of the end-user machine — basically the Vulkan Loader DLL (if the application uses it). Both the SDK and the drivers install it (and the copies coexist), and it uses its own versioning scheme. The SDK also installs the Validation Layers to the system.
The SDK/Header on the application developer's machine. Versioning as described above. The vulkan.h header is always 1.0 and so has only a single-number version — VK_HEADER_VERSION (which currently matches the Vulkan patch version, but does not have to in the future).
The SDK/Header on the driver developer's machine. Versioning as described above. It should really be the same as the Vulkan driver version, and most likely the Vulkan RT installed by the driver will be the same version, but I think I have seen these differ.
It should not matter, because all patch versions are supposed to be compatible both ways (in reality not entirely — there were some changes, but driver makers seem to keep up so far by providing updated drivers, so it is not an issue). And in fact that is the only thing I could find in the driver documentation: "Vulkan 1.0" support.
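To make the first two items concrete, here is a minimal sketch (assuming the LunarG SDK headers and a loader are installed) that prints the header version the application was built against next to the apiVersion the driver reports for the first physical device:

    #include <stdio.h>
    #include <vulkan/vulkan.h>

    int main(void) {
        /* The header/SDK patch level the application was compiled against. */
        printf("Compiled against VK_HEADER_VERSION %u\n", VK_HEADER_VERSION);

        VkApplicationInfo app = { VK_STRUCTURE_TYPE_APPLICATION_INFO };
        app.apiVersion = VK_MAKE_VERSION(1, 0, 0);
        VkInstanceCreateInfo ici = { VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO };
        ici.pApplicationInfo = &app;

        VkInstance instance;
        if (vkCreateInstance(&ici, NULL, &instance) != VK_SUCCESS)
            return 1;

        uint32_t count = 1;
        VkPhysicalDevice gpu;
        if (vkEnumeratePhysicalDevices(instance, &count, &gpu) >= 0 && count > 0) {
            VkPhysicalDeviceProperties props;
            vkGetPhysicalDeviceProperties(gpu, &props);
            printf("Driver reports Vulkan %u.%u.%u\n",        /* e.g. 1.0.37 */
                   VK_VERSION_MAJOR(props.apiVersion),
                   VK_VERSION_MINOR(props.apiVersion),
                   VK_VERSION_PATCH(props.apiVersion));
        }
        vkDestroyInstance(instance, NULL);
        return 0;
    }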
I hope you are so enlightened now that you reached the ultimate state of boredom.
377 is a beta driver from https://developer.nvidia.com/vulkan-driver . There is no guarantee that a beta feature will be carried over to the subsequent release version, and according to http://vulkan.gpuinfo.org/listreports.php it wasn't (378 indeed has 1.0.37, while 377 has 1.0.42 and, more importantly, the extensions you want to try). Continue to use the beta for now if you want the features in it. As for the Layers and other SDK features, you should not need newer drivers — in fact you should always use the latest SDK to benefit from Validation Layer bugfixes and improvements.
I'm working on a project which uses the D2XX drivers from FTDI chip.
We are delivering the ftd2xx.dll file as part of our application. As far as I understand, the other files (e.g. ftdibus.sys) are installed on the system (at least for Windows) where the application runs. Linux is also a target for us, but let's ignore that for simplicity now.
My question is about the relation between these files. If, for example, I upgrade the ftd2xx.dll file delivered with our application, will users have to install the newest drivers? What if they do not?
In addition to the specific FTDI drivers, any general source of information on this area is also very welcome.
I used to work on a project that made use of FTDI's D2XX library.
There is no tight coupling between the exact versions of ftd2xx.dll and ftdibus.sys. I believe the interface between ftd2xx.dll and the kernel-mode driver is version-independent at the basic feature level (at least since version 2.04.06 or so). There are even functions in the DLL to query the DLL and driver versions, respectively.
Thus, it can happen that they are 'out of sync', i.e. ftd2xx.dll could be from a more recent release than ftdibus.sys or vice versa. That is not a problem as such.
You are of course in complete control of the ftd2xx.dll version, but how does the driver package get installed? Is it installed as part of your application, or do you rely on the user obtaining the driver from another source? If your application has an installer, it could be an option to include FTDI's driver package in the installation. Thereby you will know which driver version is available.
What can be really tricky (and what has caused me headaches) is if there are other devices on the PC that use FTDI's chip. If such a device is supplied with media containing an older version of the driver and the user chooses to install this driver, it will simply overwrite any existing version of the driver (e.g. the version that your installer provided). This is a potential cause of regression, because FTDI has resolved a lot of bugs in the driver over the years.
If I were you, I would check the driver version at runtime and compare it to a version that is known to work (the version that you tested your application with). If the driver is older, suggest the user to upgrade it. Otherwise I would assume it is compatible.
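A minimal sketch of that runtime check (assuming ftd2xx.h and the import library from FTDI's D2XX package are available; the version values come back in the packed format described in FTDI's programmer's guide):

    #include <stdio.h>
    #include <windows.h>
    #include "ftd2xx.h"

    int main(void) {
        DWORD dll_version = 0, drv_version = 0;

        /* Version of the ftd2xx.dll that the application actually loaded. */
        if (FT_GetLibraryVersion(&dll_version) == FT_OK)
            printf("ftd2xx.dll version: 0x%08lX\n", (unsigned long)dll_version);

        /* The kernel-mode driver version can only be read through an open device handle. */
        FT_HANDLE handle;
        if (FT_Open(0, &handle) == FT_OK) {
            if (FT_GetDriverVersion(handle, &drv_version) == FT_OK)
                printf("driver (ftdibus.sys) version: 0x%08lX\n", (unsigned long)drv_version);
            FT_Close(handle);
        }

        /* Compare drv_version against the version you qualified your application with,
           and suggest an upgrade to the user if the installed driver is older. */
        return 0;
    }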
When I create a new OS X application project, I notice many target options that confuse me quite a lot:
(1) The top-left setting of the Xcode window:
(2) The "Base SDK":
(3) "Deployment Target":
(4) Architectures:
Here comes my questions:
For (2) and (3), I think they are easier to understand. This is what I comprehend:
(2) This identifies what I develop with.
(3) This identifies what OS version my application will be used on.
Please tell me whether I am right...
But I could not understand (1). I just know that if I select 32-bit here, I cannot use ARC.
Nor do I understand (4). What are those? Do they represent the bit-width of the CPU? What is the difference between (1) and (4)?
I'll explain your items out of order.
The Base SDK
This defines the largest set of APIs you can use. You can use anything that existed as of the version number identified here. For example, if you use the 10.8 SDK, you can use -[NSColor CGColor] (introduced in 10.8), but not -[NSData base64EncodedDataWithOptions:] (first public in 10.9).
(Of course, you can also use anything older than that version.)
Accordingly, the SDK version is also known as the “max[imum] allowed” version in the Availability macros.
The SDK version also sometimes becomes important when Apple changes the behavior of an API. When they do that, they sometimes keep the old behavior around for applications linked with older SDKs. This is called an “on-or-after check”, as in “checks whether you're on 10.8 [SDK] or later”. (The concept and term pre-date Xcode having SDKs for each OS version. It used to just go by whatever OS you were running Xcode and building on.)
The Deployment Target
This is the minimum OS version you require. If something was removed in a prior version (rare, but it happens), you can't use it.
This tends to affect link-time and run-time things more than compile-time things. For example, ARC won't work if your deployment target is 10.5 or earlier.
Accordingly, the Deployment Target is also known as the “min[imum] required” version in the Availability macros.
The Info.plist can also specify a minimum OS version. Nowadays, this is set by default and it's set by macro expansion to the Deployment Target.
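To make the "max allowed" / "min required" naming concrete, here is a small sketch of how those two settings surface in C code through <AvailabilityMacros.h> (the specific version constants here are just an example; use whichever OS versions apply to you):

    #include <AvailabilityMacros.h>

    /* MAC_OS_X_VERSION_MAX_ALLOWED tracks the Base SDK,
       MAC_OS_X_VERSION_MIN_REQUIRED tracks the Deployment Target. */

    #if MAC_OS_X_VERSION_MAX_ALLOWED >= MAC_OS_X_VERSION_10_9
    /* 10.9 API is visible in the headers, so code using it compiles... */
    #endif

    #if MAC_OS_X_VERSION_MIN_REQUIRED < MAC_OS_X_VERSION_10_9
    /* ...but it may be missing at runtime, so guard any 10.9-only call. */
    #endif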
The Architectures build setting
Different CPUs have different architectures. Essentially, they fit into broad categories, such as:
PowerPC 32-bit (ppc)
PowerPC 64-bit (ppc64)
Intel 32-bit (i386)
Intel 64-bit (x86_64)
ARM 32-bit
ARM 64-bit
(PowerPC architectures aren't supported anymore. You can add them to the Architectures list, as ppc and ppc64, but Xcode will just ignore them.)
Macs nowadays have Intel processors. Almost all Intel Macs have 64-bit processors. You only need to worry about 32-bit Intel if you want to support Macs all the way back to 2006. That's probably more hassle than it's worth.
iOS devices run ARM processors, and most are still 32-bit. The A7 (iPhone 5S, iPad Air, iPad Mini with Retina Display) is 64-bit. But, if you run on an iOS Simulator, it's running on your Mac (it's a Simulator, not an emulator), so it'll target an Intel architecture (formerly always i386, but probably can now be x86_64 if needed).
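One way to see what the Architectures setting means in practice: the same source file is compiled once per architecture in the list, and the results are glued into one fat binary, so compiler-defined macros tell you which slice you are in. A small sketch:

    #include <stdio.h>

    int main(void) {
        /* Each architecture in the Architectures build setting gets its own
           compile of this file; lipo merges the slices into a fat binary. */
    #if defined(__x86_64__)
        puts("built for Intel 64-bit");
    #elif defined(__i386__)
        puts("built for Intel 32-bit");
    #elif defined(__arm64__)
        puts("built for ARM 64-bit");
    #elif defined(__arm__)
        puts("built for ARM 32-bit");
    #endif
        return 0;
    }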
The “top-left setting of Xcode window”
This is the build scheme and run destination. (Yes, it's two separate things in one pop-up menu. Actually, it's two separate pop-up menus in one control. Try it.)
“My Mac 64-bit” is the run destination. You'll be running the 64-bit version of your app on your Mac, not in an iOS Simulator or on an iOS device. Your choice for a Mac app is merely which architecture you want to run, and they should behave the same (this is, obviously, something you sometimes need to test).
iOS apps have more choices here. Some apps are iPhone-only, some are iPad-only, some are universal, and some may be set to build for both 32-bit and 64-bit architectures. You'll have a Simulator offered for each combination of form factor and architecture (e.g., iPhone Simulator 64-bit) you can run on. You'll also have the option to run your app on any iOS device that's connected and enabled for development (you get that prompt when you plug the device in while Xcode is running).
TL;DR
Deployment Target is the lowest OS version your app will run on.
Base SDK is the highest OS version you can use stuff from. If it didn't exist yet, it doesn't exist at all for you.
Architectures are the set of hardware your app will run on.
Run Destination is the hardware you're going to run it on from within Xcode.
Just like on most OSes these days, you can develop either a 32-bit or a 64-bit application. The "bitness" refers mostly to how memory addresses are structured (32-bit allows at most 4 GB to be addressed; 64-bit is left as an exercise to the reader). However, the chosen architecture usually has more implications (like the missing ARC support for 32-bit apps), such as how wide the CPU registers are, how much memory a structure uses in RAM, etc.
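A tiny sketch of what that "bitness" changes for the same source code; build it once as 32-bit and once as 64-bit and the sizes differ:

    #include <stdio.h>

    int main(void) {
        /* 4 bytes in a 32-bit build, 8 bytes in a 64-bit build on OS X. */
        printf("pointer size: %zu bytes\n", sizeof(void *));
        printf("long size:    %zu bytes\n", sizeof(long));
        return 0;
    }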
OS X also supports so-called fat binaries, that is, a bundle containing both 32-bit and 64-bit variants of your application. This is only needed if you normally prefer to run 64-bit code but want your app to also run on OS versions that only support 32-bit.
In Xcode you can define which architectures to build your project for: 32-bit only, 64-bit only, or a fat bundle. In the project settings you can set what is allowed, and in the top bar in Xcode you can quickly switch between the allowed architectures (your questions 1 and 4).
The base SDK determines what you compile your application against. If you select 10.7, for instance, you cannot use new APIs that were introduced in 10.8 or 10.9 (which might be perfectly fine if your application only needs to run on earlier OS versions). However, if you want to use new features dynamically when they are available, select the latest OS as the base SDK, check in code which OS you are running on, and only use new features if they are present. It is totally OK to compile an application with access to new features and run it on older systems, as long as you don't use the new APIs there (they are late-bound and hence only crash the first time you access them when they are not available). A sketch of such a check is shown below.
The deployment target determines the minimum OS version your application needs to run properly. This is a runtime check done when the application is started. The OS will refuse to start an application that is made for a later version.
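As a sketch of the late-bound check mentioned above (NewShinyCall is a hypothetical function standing in for any C API that exists in your Base SDK but not on every OS version your Deployment Target allows):

    #include <stdio.h>

    /* Hypothetical API for illustration: weakly linked because the Deployment
       Target is older than the OS release that introduced it.  On systems that
       lack the symbol, its address is NULL instead of crashing at load time. */
    extern void NewShinyCall(void) __attribute__((weak_import));

    void do_work(void) {
        if (NewShinyCall != NULL) {
            NewShinyCall();                      /* the running OS provides it */
        } else {
            puts("falling back to the older code path");
        }
    }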
I'm writing a little app that I want to distribute on different platforms, at least the three major ones.
Is it possible to use only Windows as the host OS to compile the binaries for Linux, Mac OS X, and the other supported platforms, without resorting to virtual machines?
Or should I ask around in some community for help compiling on, well, OS X actually, since I can virtualize a Linux machine quite easily?
It is possible to compile from one platform to another; it is called cross-compilation. You will find extensive information at http://www.stack.nl/~marcov/buildfaq.pdf
The buildfaq above contains sample cross-compilations:
from Windows to Linux,
from FreeBSD to AMD64 Linux
The FPC download page contains:
the i386-win32 to x86_64-win64 cross-compiler
the i386-win32 to arm-wince cross-compiler
The FPC mailing lists are at http://www.freepascal.org/maillist.var
You will find more informations about FPC at http://www.freepascal.org/moreinfo.var
(I'm the author of the buildfaq document above)
There are some limitations. You can't target x86 from PowerPC, because PowerPC lacks an "extended" type. But in general it works.
I have generated a complete Lazarus for OS X on Windows.
I would virtualize Linux, as even if you can cross-compile, it means you're not testing the binaries on their native platforms. OS X is a trickier problem.
It is not possible to compile from one platform to another. We have a Mac and use FPC quite often. If you need some help with compiling on a mac, drop me a message.