IEI Mustang-T100-T5

Computing Accelerator Card with 5 x Google Coral Edge TPU, PCIe Gen2 x4 interface

» 5 x Google Edge TPU ML accelerators

» 20 TOPS peak performance (INT8)

» PCIe Gen2 x4 host interface

» Low-profile PCIe form factor

» Multiple-card support

» Approximately 15 W power consumption

» RoHS compliant

The Next Wave for Edge AI Computing with Coral Edge TPU™

Artificial intelligence (AI) has been applied across industries and has changed the world in recent years. Edge AI has taken on a vital role as customers move services to local devices to increase privacy and security and to minimize latency.

 

IEI's Mustang-T100 leverages Coral technology, a complete and fast toolkit for building products with local AI, to give AI developers a new inference platform at the edge. The Mustang-T100 integrates five Coral Edge TPU™ co-processors on a half-height, half-length PCIe card, delivering computing performance of up to 20 TOPS at extremely low power consumption (only 15 W). Moreover, backed by the well-established TensorFlow Lite community, it lets developers deploy existing models to their edge inference projects simply and smoothly. The Mustang-T100 is an ideal AI PCIe computing accelerator card for image classification, object detection, and image segmentation.

Coral Brings New Power to On-device Intelligence Solutions

The Coral Edge TPU™ is capable of performing 4 trillion operations per second (4 TOPS) using only 2 watts of power (that's 2 TOPS per watt), and can be connected over a PCIe Gen2 x1 or USB 2.0 interface. Coral's on-device ML acceleration technology helps developers build fast, efficient, and affordable solutions for the edge.
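The card-level figures quoted on this page follow directly from the per-chip numbers. A quick back-of-the-envelope check in Python, using only values stated above:

```python
# Per-chip figures quoted above for the Coral Edge TPU.
TOPS_PER_TPU = 4    # trillion int8 operations per second, per chip
WATTS_PER_TPU = 2   # watts consumed per chip
NUM_TPUS = 5        # Edge TPUs on the Mustang-T100-T5

total_tops = NUM_TPUS * TOPS_PER_TPU          # card-level peak throughput
tops_per_watt = TOPS_PER_TPU / WATTS_PER_TPU  # per-chip efficiency

print(total_tops)     # matches the card's rated 20 TOPS peak
print(tops_per_watt)  # 2 TOPS per watt, as stated for the Edge TPU
```

Note that the card's approximately 15 W budget covers more than the five chips alone (fan, bridge, and board overhead), so it is slightly above the 10 W the TPUs themselves would draw.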

High Compatibility from The Start

To support the diverse needs of IT and AI developers, the Mustang-T100 can run under various operating systems, such as Linux and Windows, and can be deployed on both x86 and Arm platforms to accelerate and maximize edge AI performance. Moreover, combined with TensorFlow Lite, there is no need to build ML models from the ground up: existing TensorFlow Lite models can be compiled to run entirely on the Edge TPU.

Multitasking or Pipelining, Select Your Inferencing Mode

Clients can choose between two inferencing modes for their edge AI applications, depending on their needs.

 

Multitasking function to run each model in parallel


If you need to run multiple models, you can assign each model to a specific Edge TPU and run them all in parallel for maximum computing performance.

Model pipelining to get faster throughput and low latency

For other scenarios that require very fast throughput or large models, pipelining your model allows you to execute different segments of the same model on different Edge TPUs. This can improve throughput for high-speed applications and can reduce total latency for large models.
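The throughput claim can be illustrated with a toy timing model (the segment times below are hypothetical, chosen only to show the arithmetic, not measured benchmarks):

```python
# Toy timing model of model pipelining across Edge TPUs.
# A model is split into segments, one per TPU; in steady state a new input
# can enter the pipeline as soon as the first segment is free.
segment_ms = [6.0, 5.0, 4.0]  # hypothetical per-segment execution times

single_device_ms = sum(segment_ms)      # one TPU runs every segment in turn
pipeline_latency_ms = sum(segment_ms)   # one input still traverses all segments
pipeline_interval_ms = max(segment_ms)  # steady-state gap between outputs

print(single_device_ms)     # time per inference on a single device
print(pipeline_interval_ms) # time between finished outputs once the pipeline fills
```

In this sketch, outputs arrive every 6 ms instead of every 15 ms once the pipeline is full. The latency benefit for large models comes from a separate effect: splitting a model lets each segment's parameters stay in its own Edge TPU's on-chip memory rather than spilling to slower host memory.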

Optimize Pre-trained Models at the Edge

The Mustang-T100 supports AutoML Vision Edge, so AI developers can use the Edge TPU to accelerate transfer learning from a pre-trained model directly on edge devices. Models can be retrained with fewer than 200 images to improve accuracy while saving training time.

Hardware Specifications

Form Factor: Low-profile PCIe card
CPU: 5 x Google Coral Edge TPU
Cooling Method / System Fan: Active fan
Dimensions (L x W x H): 167.64 mm x 64.41 mm x 18 mm
Operating Temperature: 0°C ~ 55°C (32°F ~ 131°F)
Storage Temperature: -20°C ~ 75°C (-4°F ~ 167°F)
Humidity: 5% ~ 90% RH, non-condensing
Safety & EMC: RoHS

Ordering Information

Mustang-T100-T5-R10: Computing Accelerator Card with 5 x Google Coral Edge TPU, PCIe Gen2 x4 interface, RoHS

Package Contents

  • 1 x Full height bracket

  • 1 x QIG