NVIDIA® T1000 Graphics Card, 1065-1395 MHz, 8 GB GDDR6
Full Performance in a Small Form Factor Solution
Built on the NVIDIA Turing GPU architecture, the NVIDIA T1000 8GB GPU is a powerful, low-profile solution that delivers the performance and capabilities demanding professional applications require in a compact professional graphics card. With 896 CUDA cores, 8 GB of GDDR6 memory, and the ability to drive up to four 5K displays, the NVIDIA T1000 8GB (or T1000) is ready to take your work to the next level.
Small form factor computing solutions are becoming more common as professionals look to minimize their desktop workstation footprint—without compromising performance. Today’s professional workflows require small form factor workstations to provide full-size features and performance in a compact package.
- Turing GPU Architecture Based on the state-of-the-art 12nm FFN (FinFET NVIDIA) high-performance manufacturing process customized for NVIDIA, and incorporating 896 CUDA cores, the NVIDIA T1000 8GB (or T1000) GPU is the most powerful single-slot professional solution for CAD, DCC, financial services industry (FSI), and visualization professionals seeking excellent performance in a compact and efficient form factor. The Turing GPU architecture enables the biggest leap in real-time computer graphics rendering since NVIDIA's introduction of programmable shaders in 2001.
- Advanced Shading Technologies
The Turing GPU architecture features the following new advanced shading technologies:
Mesh Shading: Compute-based geometry pipeline to speed geometry processing and culling on geometrically complex models and scenes. Mesh shading provides up to 2x performance improvement on geometry-bound workloads.
Variable Rate Shading (VRS): Gain rendering efficiency by varying the shading rate based on scene content, direction of gaze, and motion. Variable rate shading provides similar image quality with 50% reduction in shaded pixels.
Texture Space Shading: Object/texture space shading to improve the performance of pixel-shader-heavy workloads such as depth of field and motion blur. Texture space shading provides greater throughput with increased fidelity by reusing pre-shaded texels for pixel-shader-heavy VR workloads.
- Advanced Streaming Multiprocessor (SM) Architecture Combined shared memory and L1 cache improve performance significantly while simplifying programming and reducing the tuning required to attain best application performance. Each SM contains 96 KB of memory serving as both L1 cache and shared memory, which can be configured in various ways depending on the compute or graphics workload. For compute workloads, up to 64 KB can be allocated to the L1 cache or to shared memory; graphics workloads can allocate up to 48 KB for shared memory, with 32 KB for L1 and 16 KB for texture units. Combining the L1 data cache with shared memory reduces latency and provides higher bandwidth.
- High Performance GDDR6 Memory Built with Turing's vastly optimized GDDR6 memory subsystem for the industry's fastest graphics memory, this version of the NVIDIA T1000 features 8 GB of frame buffer capacity and 160 GB/s of peak bandwidth, double the throughput of the previous generation. NVIDIA T1000 8GB (or T1000) boards are the ideal platform for 3D professionals with demanding workloads involving large datasets and multi-display environments.
- Single Instruction, Multiple Thread (SIMT) New independent thread scheduling capability enables finer-grain synchronization and cooperation between parallel threads, allowing resources to be shared efficiently among small jobs.
- Mixed-Precision Computing Double the throughput and reduce storage requirements with 16-bit floating-point computing to enable the training and deployment of larger neural networks. With independent parallel integer and floating-point data paths, the Turing SM is also much more efficient on workloads with a mix of computation and addressing calculations.
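The storage side of the mixed-precision claim can be illustrated with a small CPU-side sketch (using NumPy rather than the GPU's FP16 path the bullet describes; the array here is a made-up stand-in for network weights):

```python
import numpy as np

# Hypothetical set of 1M network parameters, stored at full precision.
weights_fp32 = np.random.rand(1_000_000).astype(np.float32)

# Casting to 16-bit floating point halves the storage requirement:
# 4 MB -> 2 MB for the same parameter count.
weights_fp16 = weights_fp32.astype(np.float16)
print(weights_fp32.nbytes)  # 4000000 bytes
print(weights_fp16.nbytes)  # 2000000 bytes

# The trade-off is reduced precision: FP16 keeps ~3 decimal digits,
# so a small rounding error is introduced relative to FP32.
max_err = np.max(np.abs(weights_fp32 - weights_fp16.astype(np.float32)))
print(max_err < 1e-3)  # True for values in [0, 1)
```

On hardware, halving the storage per value is also what allows the doubled arithmetic throughput the bullet cites, since twice as many FP16 values fit through the same registers and data paths per clock.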
- Graphics Preemption Pixel-level preemption provides more granular control to better support time-sensitive tasks such as VR motion tracking.
- Compute Preemption Preemption at the instruction-level provides finer grain control over compute tasks to prevent long-running applications from either monopolizing system resources or timing out.
- H.264 and HEVC Encode/Decode Engines Deliver faster-than-real-time performance for transcoding, video editing, and other encoding applications with two dedicated H.264 and HEVC encode engines and a dedicated decode engine, all independent of the 3D/compute pipeline.
- NVIDIA GPU Boost 4.0 Automatically maximizes application performance without exceeding the power and thermal envelope of the card. Allows applications to stay in the boost clock state longer by operating under a higher temperature threshold before dropping to the base clock.
| Specification | Value |
| --- | --- |
| CUDA Cores | 896 |
| GPU Memory | 8 GB GDDR6 |
| Peak FP32 Performance | 2.5 TFLOPS |
| Memory Interface | 128-bit |
| Memory Bandwidth | 160 GB/s |
| Max Power Consumption | 50 W |
| System Interface | PCI Express 3.0 x16 |
| Display Connectors | 4x Mini DisplayPort (mDP) |
| Max Digital Resolution | 7680 x 4320 at 60 Hz |
| Max Displays | 4x 5K (5120 x 2880) at 60 Hz |
| HDR Support | Yes |
| Quad Buffered Stereo | Yes |
| NVENC / NVDEC | 2x encode engines (H.264 and HEVC), 1x decode engine |
| Form Factor | 2.713" H x 6.137" L, low-profile, single slot |
| Thermal Solution | Active fansink |
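The headline figures above cross-check against each other: peak FP32 throughput is CUDA cores x 2 FLOPs per core per clock (one fused multiply-add) x boost clock, and peak memory bandwidth is interface width x effective data rate per pin. A quick sanity check, assuming the 1395 MHz boost clock from the title line and a 10 Gbps effective GDDR6 data rate (implied by the 160 GB/s figure, not stated in the table):

```python
# Peak FP32: 896 CUDA cores, 2 FLOPs/core/clock (fused multiply-add),
# at the card's listed 1395 MHz boost clock.
cuda_cores = 896
boost_clock_hz = 1395e6
peak_fp32_tflops = cuda_cores * 2 * boost_clock_hz / 1e12
print(round(peak_fp32_tflops, 2))  # 2.5

# Peak bandwidth: 128-bit interface = 16 bytes per transfer,
# at an assumed 10 Gbps effective data rate per pin.
bus_width_bytes = 128 / 8
data_rate_hz = 10e9
peak_bw_gbs = bus_width_bytes * data_rate_hz / 1e9
print(peak_bw_gbs)  # 160.0
```

Both results match the table, which is a useful check when comparing boards whose datasheets quote clocks and bus widths but omit derived throughput numbers.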