| developer = [[Nvidia]]
| released = {{Start date and age|2007|06|23}}
| latest_release_version = 12.
| latest_release_date = {{Start date and age|2024|
| operating_system = [[Windows]], [[Linux]]
| platform = [[#GPUs supported|Supported GPUs]]
| website = {{URL|https://developer.nvidia.com/cuda-zone}}
}}
In [[computing]], '''CUDA''' (originally '''Compute Unified Device Architecture''') is a proprietary<ref name=":0">{{Cite web |last=Shah |first=Agam |title=Nvidia not totally against third parties making CUDA chips |url=https://www.theregister.com/2021/11/10/nvidia_cuda_silicon/ |access-date=2024-04-25 |website=www.theregister.com |language=en}}</ref> [[parallel computing]] platform and [[application programming interface]] (API) that allows software to use certain types of [[graphics processing units]] (GPUs) for accelerated general-purpose processing, an approach called general-purpose computing on GPUs ([[GPGPU]]). The CUDA API is an extension of the C programming language that adds the ability to specify thread-level parallelism and GPU-specific operations, such as moving data between the CPU and the GPU.
CUDA is designed to work with programming languages such as [[C (programming language)|C]], [[C++]], [[Fortran]] and [[Python (programming language)|Python]]. This accessibility makes it easier for specialists in parallel programming to use GPU resources, in contrast to prior APIs like [[Direct3D]] and [[OpenGL]], which required advanced skills in graphics programming.<ref>{{Cite news |url=https://www.videomaker.com/article/c15/19313-cuda-vs-opencl-vs-opengl |title=CUDA vs. OpenCL vs. OpenGL |last=Zunitch |first=Peter |date=2018-01-24 |work=Videomaker |access-date=2018-09-16 |language=en-US}}</ref> CUDA-powered GPUs also support programming frameworks such as [[OpenMP]], [[OpenACC]] and [[OpenCL]].<ref>{{Cite web |url=https://developer.nvidia.com/opencl |title=OpenCL |date=2013-04-24 |website=NVIDIA Developer |language=en |access-date=2019-11-04}}</ref><ref name="CUDA intro - TomsHardware" />
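The following is a minimal sketch of this programming model (the kernel and variable names, such as <code>vectorAdd</code>, are illustrative rather than taken from any Nvidia sample): two input arrays are copied from CPU memory to GPU memory, a kernel is launched across many GPU threads, and the result is copied back.

<syntaxhighlight lang="cuda">
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

// Kernel: each GPU thread adds one pair of elements.
__global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) {
        c[i] = a[i] + b[i];
    }
}

int main(void) {
    const int n = 1 << 20;            // one million elements
    size_t bytes = n * sizeof(float);

    // Allocate and initialize host (CPU) memory.
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Allocate device (GPU) memory.
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_c, bytes);

    // Copy input data from the CPU to the GPU.
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Launch the kernel with enough 256-thread blocks to cover n elements.
    int threadsPerBlock = 256;
    int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    vectorAdd<<<blocks, threadsPerBlock>>>(d_a, d_b, d_c, n);

    // Copy the result back from the GPU to the CPU.
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", h_c[0]);    // expected: 3.0

    // Release GPU and CPU memory.
    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}
</syntaxhighlight>

Each thread computes a global index from its block and thread coordinates, so the per-element loop of an equivalent CPU program is replaced by many threads that each handle one element.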
| colspan="4" rowspan="1" {{yes}}
|-
| Uniform Datapath
| colspan="6" rowspan="1" {{no}}
| colspan="3" rowspan="1" {{yes}}
|}
===Technical specification===
<div style="overflow-x:auto">
{| class="wikitable" style="font-size:85%;"
</div>
===Multiprocessor architecture===
<div style="overflow-x:auto">
{| class="wikitable" style="font-size:85%;"
{{Main|OneAPI (compute acceleration)}}
'''oneAPI''' is an initiative based on open standards, created to support software development for multiple hardware architectures.<ref>{{Cite web |title=oneAPI Programming Model |url=https://www.oneapi.io/ |access-date=2024-07-27 |website=oneAPI.io |language=en-US}}</ref> The oneAPI libraries must implement open specifications that are discussed publicly by the Special Interest Groups, allowing any developer or organization to implement their own versions of the oneAPI libraries.<ref>{{Cite web |title=Specifications {{!}} oneAPI |url=https://www.oneapi.io/spec/ |access-date=2024-07-27 |website=oneAPI.io |language=en-US}}</ref><ref>{{Cite web |title=oneAPI Specification — oneAPI Specification 1.3-rev-1 documentation |url=https://oneapi-spec.uxlfoundation.org/specifications/oneapi/v1.3-rev-1/ |access-date=2024-07-27 |website=oneapi-spec.uxlfoundation.org}}</ref>
Originally developed by Intel, oneAPI has also been adopted by other hardware vendors, including Fujitsu and Huawei.
==== Unified Acceleration Foundation (UXL) ====
Unified Acceleration Foundation (UXL) is a technology consortium working on the continuation of the oneAPI initiative.
=== AMD ROCm ===