CUDA: Difference between revisions

| developer = [[Nvidia]]
| released = {{Start date and age|2007|06|23}}
| latest_release_version = 12.4.16
| latest_release_date = {{Start date and age|2024|04}}
| operating_system = [[Windows]], [[Linux]]
| platform = [[#GPUs supported|Supported GPUs]]
| website = {{URL|https://developer.nvidia.com/cuda-zone}}
}}
In [[computing]], '''CUDA''' (originally '''Compute Unified Device Architecture''') is a proprietary<ref name=":0">{{Cite web |last=Shah |first=Agam |title=Nvidia not totally against third parties making CUDA chips |url=https://www.theregister.com/2021/11/10/nvidia_cuda_silicon/ |access-date=2024-04-25 |website=www.theregister.com |language=en}}</ref> [[parallel computing]] platform and [[application programming interface]] (API) that allows software to use certain types of [[graphics processing units]] (GPUs) for accelerated general-purpose processing, an approach called general-purpose computing on GPUs ([[GPGPU]]). The CUDA API is an extension of the C programming language that adds the ability to specify thread-level parallelism and GPU-specific operations (such as moving data between the CPU and the GPU).<ref>{{cite web |last1=Nvidia |title=What is CUDA? |url=https://nvidia.custhelp.com/app/answers/detail/a_id/2132/~/what-is-cuda%3F |website=Nvidia |access-date=21 March 2024}}</ref> CUDA is a software layer that gives direct access to the GPU's virtual [[instruction set]] and parallel computational elements for the execution of [[compute kernel]]s.<ref name="CUDA intro - TomsHardware">{{cite web |url=https://www.tomshardware.com/reviews/nvidia-cuda-gpu,1954.html |title=Nvidia's CUDA: The End of the CPU? |last=Abi-Chahla |first=Fedy |date=June 18, 2008 |publisher=Tom's Hardware |access-date=May 17, 2015}}</ref> In addition to drivers and runtime kernels, the CUDA platform includes compilers, libraries and developer tools to help programmers accelerate their applications.
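
A minimal sketch of these C extensions (illustrative only; the kernel name <code>vecAdd</code> and the sizes chosen are the author's examples, not taken from Nvidia's documentation) launches a GPU kernel that adds two vectors, with explicit host-to-device memory transfers:

<syntaxhighlight lang="cuda">
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

// __global__ marks a kernel: a function compiled for and executed on the GPU.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n)
        c[i] = a[i] + b[i];
}

int main(void) {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Device-specific operations: allocate GPU memory and copy inputs over.
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Thread-level parallelism: launch n threads in blocks of 256.
    vecAdd<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);

    // Copy the result back to the CPU.
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", h_c[0]);  // 3.0 if the launch succeeded

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}
</syntaxhighlight>

The <code><<<blocks, threads>>></code> launch syntax and the explicit <code>cudaMemcpy</code> calls are the two language-level additions the paragraph above refers to: the former expresses thread-level parallelism, the latter GPU device management.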
 
CUDA is designed to work with programming languages such as [[C (programming language)|C]], [[C++]], [[Fortran]] and [[Python (programming language)|Python]]. This accessibility makes it easier for specialists in parallel programming to use GPU resources, in contrast to prior APIs like [[Direct3D]] and [[OpenGL]], which required advanced skills in graphics programming.<ref>{{Cite news |url=https://www.videomaker.com/article/c15/19313-cuda-vs-opencl-vs-opengl |title=CUDA vs. OpenCL vs. OpenGL |last=Zunitch |first=Peter |date=2018-01-24 |work=Videomaker |access-date=2018-09-16 |language=en-US}}</ref> CUDA-powered GPUs also support programming frameworks such as [[OpenMP]], [[OpenACC]] and [[OpenCL]].<ref>{{Cite web |url=https://developer.nvidia.com/opencl |title=OpenCL |date=2013-04-24 |website=NVIDIA Developer |language=en |access-date=2019-11-04}}</ref><ref name="CUDA intro - TomsHardware" />
| colspan="4" rowspan="1" {{yes}}
|-
| Uniform Datapath <ref>[https://developer.download.nvidia.com/video/gputechconf/gtc/2019/presentation/s9839-discovering-the-turing-t4-gpu-architecture-with-microbenchmarks.pdf Dissecting the Turing GPU Architecture through Microbenchmarking]</ref>
| colspan="6" rowspan="1" {{no}}
| colspan="3" rowspan="1" {{yes}}
|}
 
===Technical specification===
<div style="overflow-x:auto">
{| class="wikitable" style="font-size:85%;"
</div>
 
===Multiprocessor architecture===
<div style="overflow-x:auto">
{| class="wikitable" style="font-size:85%;"
{{Main|OneAPI (compute acceleration)}}
 
'''oneAPI''' is an initiative based on open standards, created to support software development for multiple hardware architectures.<ref>{{Cite web |title=oneAPI Programming Model |url=https://www.oneapi.io/ |access-date=2024-07-27 |website=oneAPI.io |language=en-US}}</ref> The oneAPI libraries must implement open specifications that are discussed publicly by the Special Interest Groups, offering the possibility for any developer or organization to implement their own versions of oneAPI libraries.<ref>{{Cite web |title=Specifications {{!}} oneAPI |url=https://www.oneapi.io/spec/ |access-date=2024-07-27 |website=oneAPI.io |language=en-US}}</ref><ref>{{Cite web |title=oneAPI Specification — oneAPI Specification 1.3-rev-1 documentation |url=https://oneapi-spec.uxlfoundation.org/specifications/oneapi/v1.3-rev-1/ |access-date=2024-07-27 |website=oneapi-spec.uxlfoundation.org}}</ref>
oneAPI is open source, and all the corresponding libraries are published on its GitHub page.
 
Originally created by Intel, oneAPI has since been adopted by other hardware vendors, including Fujitsu and Huawei.
==== Unified Acceleration Foundation (UXL) ====
 
Unified Acceleration Foundation (UXL) is a new technology consortium working on the continuation of the oneAPI initiative, with the goal of creating a new open-standard accelerator software ecosystem and related open standards and specification projects through Working Groups and Special Interest Groups (SIGs). The goal is to offer open alternatives to Nvidia's CUDA. The main companies behind it are Intel, Google, ARM, Qualcomm, Samsung, Imagination, and VMware.<ref>{{Cite web |title=Exclusive: Behind the plot to break Nvidia's grip on AI by targeting software |website=[[Reuters]] |url=https://www.reuters.com/technology/behind-plot-break-nvidias-grip-ai-by-targeting-software-2024-03-25/ |access-date=2024-04-05}}</ref>
 
=== AMD ROCm ===