NVIDIA: Parallelizing code is easier in CUDA and OpenCL than on x86
A couple of weeks ago Intel announced its new Xeon Phi 5110P accelerator and, in the tradition of its aggressive marketing campaigns, touted it as a solution much easier to program than NVIDIA's Tesla and AMD's FirePro S Series accelerators; a campaign that, as we knew, would sooner or later draw a response from NVIDIA or AMD.
NVIDIA is the first to respond, and while it does not flatly deny Intel's claims, it points out that both its Tesla GPUs and its competitor's AMD FirePro cards, through programming environments such as CUDA, OpenCL, and OpenACC, make it easier to parallelize code than programming for multicore CPUs (a clear allusion to Xeon Phi), achieving up to 100 times (or more) the performance of CPUs.
NVIDIA does concede that such juicy performance gains (100 to 200 times) are not possible in every HPC scenario, since those figures appear only in cases where the code is not fully optimized for multicore CPUs. It mentions that optimizing the code can double CPU performance, or even yield gains of 5 to 10 times, which would obviously make the GPU's advantage far less dramatic.
Even so, NVIDIA believes that many developers will bet on GPUs, which, thanks to APIs like CUDA, OpenCL, and OpenACC, offer greater performance gains and an easier path than optimizing their applications for multicore x86.
– Too good to be true: bad coding versus GPGPU compute power (PC Perspective)
– Nvidia says large GPGPU speed-up claims were due to bad original code (The Inquirer)