{"id":1321,"date":"2026-05-08T09:46:14","date_gmt":"2026-05-08T01:46:14","guid":{"rendered":"https:\/\/www.ndnlab.com\/?p=1321"},"modified":"2026-05-08T09:46:16","modified_gmt":"2026-05-08T01:46:16","slug":"a-switch-centric-in-network-architecture-for-accelerating-llm-inference-in-shared-memory-network","status":"publish","type":"post","link":"https:\/\/www.ndnlab.com\/?p=1321","title":{"rendered":"A Switch-Centric In-Network Architecture for Accelerating LLM Inference in Shared-Memory Network"},"content":{"rendered":"\n<h2 class=\"wp-block-heading\"><strong>1. Abstract<\/strong><\/h2>\n\n\n\n<p>This paper addresses the communication bottleneck in large-model inference, focusing on accelerating All-Reduce in Tensor Parallelism (TP) inference. As LLM parameter counts keep growing, a single GPU can rarely deliver low-latency inference on its own, so multi-accelerator parallelism has become the norm. In TP inference, however, an All-Reduce is typically required after the attention and MLP blocks of every layer to synchronize intermediate results across accelerators. These communication operations sit directly on the inference critical path and, unlike backpropagation in training, cannot be effectively overlapped with computation, so they visibly slow down end-to-end inference.<\/p>\n\n\n\n<p>Among existing solutions, NVIDIA NVLink SHARP (NVLS) already offloads part of the reduction into the switch, but it remains an accelerator-centric architecture: a GPU issues load instructions to trigger the in-switch reduction, the result must first return to the initiating GPU, and the GPU then pushes it back into the switch for broadcast. This incurs extra data round trips. Moreover, because NVLS relies on memory-semantic instructions, it struggles to support more flexible in-network operations, such as the in-network quantization (INQ) proposed in this paper.<\/p>\n\n\n\n<p>To address these problems, the paper proposes SCIN (Switch-Centric In-Network Architecture), a switch-centric in-network computing architecture for multi-accelerator shared-memory networks. At its core, SCIN places an In-Switch Accelerator (ISA) inside the switch; the ISA actively accesses each accelerator's memory, performs the All-Reduce, and writes the result directly back to the participating devices, reducing redundant data movement and synchronization overhead. The ISA also integrates a quantization module, allowing All-Reduce to run at 8-bit precision and achieve close to 2\u00d7 communication compression with almost no loss in model accuracy.<\/p>\n\n\n\n<p>For evaluation, the authors built a multi-FPGA prototype and used it to calibrate a network simulator. In simulations of an 8-GPU system, SCIN outperforms software ring All-Reduce by up to 8.7\u00d7 for small messages and up to 3.8\u00d7 for large messages; on LLaMA-2 models, it delivers up to 1.74\u00d7 TTFT and 1.34\u00d7 TPOT speedups. Overall, this paper does not merely optimize a communication algorithm; it rethinks, at the system-architecture level, whether collective communication in large-model inference should be driven by the GPUs or truly handed over to the switch.<\/p>\n\n\n\n
<figure class=\"wp-block-image size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"393\" src=\"https:\/\/www.ndnlab.com\/wp-content\/uploads\/2026\/05\/image-1024x393.png\"  class=\"wp-image-1322\" style=\"aspect-ratio:2.60566110895696;width:644px;height:auto\" srcset=\"https:\/\/www.ndnlab.com\/wp-content\/uploads\/2026\/05\/image-1024x393.png 1024w, https:\/\/www.ndnlab.com\/wp-content\/uploads\/2026\/05\/image-300x115.png 300w, https:\/\/www.ndnlab.com\/wp-content\/uploads\/2026\/05\/image-768x295.png 768w, https:\/\/www.ndnlab.com\/wp-content\/uploads\/2026\/05\/image-1536x590.png 1536w, https:\/\/www.ndnlab.com\/wp-content\/uploads\/2026\/05\/image.png 1578w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" title=\"A Switch-Centric In-Network Architecture for Accelerating LLM Inference in Shared-Memory Network\u63d2\u56fe\" alt=\"A Switch-Centric In-Network Architecture for Accelerating LLM Inference in Shared-Memory Network\u63d2\u56fe\" \/><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>2. 
Research Background and Motivation (Introduction)<\/strong><\/h2>\n\n\n\n<p>LLM inference increasingly depends on multi-accelerator systems. On one hand, model parameters and KV cache footprints keep growing beyond what a single GPU's memory can hold; on the other hand, online inference demands low latency, so tensor parallelism is used to partition the matrix computation across multiple accelerators. TP reduces the compute and memory pressure on each accelerator, but at the cost of heavy All-Reduce communication in every layer. The paper notes that in TP inference, each transformer layer typically performs one All-Reduce after attention and another after the MLP.<\/p>\n\n\n\n<p>The problem is especially severe at inference time. Gradient All-Reduce in training can sometimes overlap with backward-pass computation, but inference All-Reduce sits on the forward critical path: a GPU must wait for the communication to finish before executing the next layer, so communication latency translates directly into user-perceived inference latency. The paper further distinguishes two inference phases: the prefill phase has large All-Reduce messages and is mostly bandwidth-bound, while the decode phase has small but frequent All-Reduce messages and is mostly latency-bound. In other words, a good interconnect must deliver both high bandwidth and low latency.<\/p>\n\n\n\n<p>NVLS, which executes the reduction inside NVSwitch, already goes a step beyond pure software ring All-Reduce. Its problem is that the architecture still revolves around the GPU. A GPU first issues a pull request; after the switch completes the reduction, the result must return to the GPU, which then issues a push request for broadcast. This path inherently adds one useless data transfer. In addition, because NVLS triggers in-network operations via GPU memory instructions, the set of operations it can support is limited, and it cannot readily support more complex in-network processing such as INQ.<\/p>\n\n\n\n<p>The motivation of this paper is precisely this: if the main All-Reduce bottleneck already lies in the network, should the switch become the true executor? SCIN answers yes. It shifts the architecture from GPU-driven to switch-driven, letting the ISA actively read and write accelerator memory to perform reduction, quantization, and result write-back. In essence, this design upgrades the switch from a \u201cforwarding device\u201d into an \u201cactive compute node on the inference communication path.\u201d<\/p>\n\n\n\n
<figure class=\"wp-block-image size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"754\" height=\"708\" src=\"https:\/\/www.ndnlab.com\/wp-content\/uploads\/2026\/05\/image-1.png\"  class=\"wp-image-1323\" style=\"aspect-ratio:1.0650077760497667;width:364px;height:auto\" srcset=\"https:\/\/www.ndnlab.com\/wp-content\/uploads\/2026\/05\/image-1.png 754w, https:\/\/www.ndnlab.com\/wp-content\/uploads\/2026\/05\/image-1-300x282.png 300w\" sizes=\"auto, (max-width: 754px) 100vw, 754px\" title=\"A Switch-Centric In-Network Architecture for Accelerating LLM Inference in Shared-Memory Network\u63d2\u56fe1\" alt=\"A Switch-Centric In-Network Architecture for Accelerating LLM Inference in Shared-Memory Network\u63d2\u56fe1\" \/><\/figure>\n\n\n\n
<figure class=\"wp-block-image size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"742\" height=\"564\" src=\"https:\/\/www.ndnlab.com\/wp-content\/uploads\/2026\/05\/image-2.png\"  class=\"wp-image-1324\" style=\"aspect-ratio:1.3156621843861855;width:343px;height:auto\" srcset=\"https:\/\/www.ndnlab.com\/wp-content\/uploads\/2026\/05\/image-2.png 742w, https:\/\/www.ndnlab.com\/wp-content\/uploads\/2026\/05\/image-2-300x228.png 300w\" sizes=\"auto, (max-width: 742px) 100vw, 742px\" title=\"A Switch-Centric In-Network Architecture for Accelerating LLM Inference in Shared-Memory Network\u63d2\u56fe2\" alt=\"A Switch-Centric In-Network Architecture for Accelerating LLM Inference in Shared-Memory Network\u63d2\u56fe2\" \/><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>3. 
System Architecture and Overall Design (System Overview)<\/strong><\/h2>\n\n\n\n<p>The core design of SCIN is switch-centric in-network computing. Unlike NVLS, SCIN no longer lets the GPUs initiate and orchestrate the entire All-Reduce; instead, it adds an In-Switch Accelerator (ISA) to the switch. Through the shared-memory network, the ISA can directly access the memory of the accelerators attached to the switch: it actively issues read requests, completes the reduction inside the switch once the data is collected, and then writes the result directly back to each participating accelerator.<\/p>\n\n\n\n<p>This design brings several immediate benefits. First, it removes redundant transfers. In NVLS, the reduction result must first return to the initiating GPU before being broadcast to the others; SCIN can broadcast or write back directly after the in-switch reduction, cutting the detour. Second, it lowers synchronization overhead. Since the ISA sits at the center of the switch, endpoint accelerators only need to synchronize with the switch, without any extra inter-GPU coordination path. Third, it frees accelerator resources. SCIN pushes collective operations into the ISA as much as possible; the GPU handles only lightweight synchronization, leaving more resources for model computation.<\/p>\n\n\n\n<p>Another important design point of SCIN is the customizable ISA data plane. The authors use it not only for plain All-Reduce but also add quantization\/dequantization modules to the ISA pipeline to implement INQ All-Reduce, which compresses All-Reduce traffic to 8-bit inside the network. Whereas ring-based quantization requires more quantization rounds as the TP size grows, SCIN's INQ introduces only one extra quantization step, making the error much easier to control.<\/p>\n\n\n\n<p>Overall, SCIN's architecture is not a point optimization; it tackles the two bottlenecks of TP inference separately: in the decode phase, it reduces network hops and synchronization latency to solve the small-message low-latency problem; in the prefill phase, it uses INQ to cut large-message traffic and solve the bandwidth problem. This is what makes it more complete than ordinary communication optimizations.<\/p>\n\n\n\n
<figure class=\"wp-block-image size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"592\" src=\"https:\/\/www.ndnlab.com\/wp-content\/uploads\/2026\/05\/image-3-1024x592.png\"  class=\"wp-image-1325\" style=\"aspect-ratio:1.730620654133402;width:720px;height:auto\" srcset=\"https:\/\/www.ndnlab.com\/wp-content\/uploads\/2026\/05\/image-3-1024x592.png 1024w, https:\/\/www.ndnlab.com\/wp-content\/uploads\/2026\/05\/image-3-300x173.png 300w, https:\/\/www.ndnlab.com\/wp-content\/uploads\/2026\/05\/image-3-768x444.png 768w, https:\/\/www.ndnlab.com\/wp-content\/uploads\/2026\/05\/image-3.png 1336w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" title=\"A Switch-Centric In-Network Architecture for Accelerating LLM Inference in Shared-Memory Network\u63d2\u56fe3\" alt=\"A Switch-Centric In-Network Architecture for Accelerating LLM Inference in Shared-Memory Network\u63d2\u56fe3\" \/><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>4. Key Technical Design (Design and Implementation)<\/strong><\/h2>\n\n\n\n<p>SCIN's implementation mainly solves three problems: how the protocol supports ISA access to accelerator memory, how the ISA synchronizes with the accelerators, and how the switch efficiently handles out-of-order data and the All-Reduce internally.<\/p>\n\n\n\n<p>First, protocol extension. Conventional switch protocols are mainly responsible for packet forwarding, but SCIN requires the ISA to participate directly in memory transactions. Rather than rewriting the protocol wholesale, the authors add a 1-bit INC flag to the packet header to distinguish ordinary forwarding traffic from ISA-related traffic. Inside each switch port, the queues are split into two groups: one serves ordinary switch forwarding, the other serves ISA memory transactions. This lets the ISA actively access accelerator memory while minimizing interference with existing network functions.<\/p>\n\n\n\n<p>Second, the synchronization mechanism. All-Reduce in TP inference sits between attention and MLP, so any synchronization wait lands on the critical path. In SCIN, after finishing the preceding computation, each accelerator performs an atomic increment on a synchronization counter in the ISA and then waits on a local flag. Once the ISA sees that all participants have arrived, it executes the All-Reduce; after the write-back completes, it notifies each accelerator to resume computation. Whereas accelerator-centric approaches require GPUs to synchronize with one another across the switching network, SCIN shortens the synchronization path to a single hop between accelerator and switch, yielding lower synchronization latency.<\/p>\n\n\n\n<p>Third, the data-flow design inside the ISA. Because concurrent DMA accesses from multiple accelerators cause packets to return out of order, the ISA must reserve buffers in advance and place returned data in the correct slots. The paper proposes wave-based regulation: a large request is split into multiple waves, each wave occupies a portion of the buffer, and multiple waves are allowed to be outstanding in the network simultaneously. This hides synchronization gaps under a limited buffer budget and improves bandwidth utilization. Inside the ISA, a wave controller and a wave table manage request issuing, data buffering, reduction, write-back, and resource release.<\/p>\n\n\n\n<p>Finally, INQ All-Reduce. The authors adopt block-wise quantization, with every 64 hidden-dimension elements sharing one scale factor. At execution time, the ISA first reads the scale factors and activations, then completes dequantization, reduction, and quantization in its pipeline. The key point of this scheme is not \u201cquantization\u201d itself, but that the quantization happens inside the switch and happens only once, so errors do not accumulate with the number of communication rounds as in ring-based approaches.<\/p>\n\n\n\n
<figure class=\"wp-block-image size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"622\" height=\"418\" src=\"https:\/\/www.ndnlab.com\/wp-content\/uploads\/2026\/05\/image-4.png\"  class=\"wp-image-1326\" style=\"aspect-ratio:1.4880876544782775;width:403px;height:auto\" srcset=\"https:\/\/www.ndnlab.com\/wp-content\/uploads\/2026\/05\/image-4.png 622w, https:\/\/www.ndnlab.com\/wp-content\/uploads\/2026\/05\/image-4-300x202.png 300w\" sizes=\"auto, (max-width: 622px) 100vw, 622px\" title=\"A Switch-Centric In-Network Architecture for Accelerating LLM Inference in Shared-Memory Network\u63d2\u56fe4\" alt=\"A Switch-Centric In-Network Architecture for Accelerating LLM Inference in Shared-Memory Network\u63d2\u56fe4\" \/><\/figure>\n\n\n\n
<figure class=\"wp-block-image size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"644\" height=\"464\" src=\"https:\/\/www.ndnlab.com\/wp-content\/uploads\/2026\/05\/image-5.png\"  class=\"wp-image-1327\" style=\"aspect-ratio:1.3879787792222935;width:405px;height:auto\" srcset=\"https:\/\/www.ndnlab.com\/wp-content\/uploads\/2026\/05\/image-5.png 644w, https:\/\/www.ndnlab.com\/wp-content\/uploads\/2026\/05\/image-5-300x216.png 300w\" sizes=\"auto, (max-width: 644px) 100vw, 644px\" title=\"A Switch-Centric In-Network Architecture for Accelerating LLM Inference in Shared-Memory Network\u63d2\u56fe5\" alt=\"A Switch-Centric In-Network Architecture for Accelerating LLM Inference in Shared-Memory Network\u63d2\u56fe5\" \/><\/figure>\n\n\n\n
<figure class=\"wp-block-image size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"766\" height=\"326\" src=\"https:\/\/www.ndnlab.com\/wp-content\/uploads\/2026\/05\/image-6.png\"  class=\"wp-image-1328\" style=\"width:468px;height:auto\" srcset=\"https:\/\/www.ndnlab.com\/wp-content\/uploads\/2026\/05\/image-6.png 766w, https:\/\/www.ndnlab.com\/wp-content\/uploads\/2026\/05\/image-6-300x128.png 300w\" sizes=\"auto, (max-width: 766px) 100vw, 766px\" title=\"A Switch-Centric In-Network Architecture for Accelerating LLM Inference in Shared-Memory Network\u63d2\u56fe6\" alt=\"A Switch-Centric In-Network Architecture for Accelerating LLM Inference in Shared-Memory Network\u63d2\u56fe6\" \/><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>5. Prototype System and Experimental Setup (Prototype and Methodology)<\/strong><\/h2>\n\n\n\n<p>To show that SCIN is not a purely conceptual design, the authors built a multi-FPGA prototype consisting of 4 endpoint FPGAs that emulate accelerators and 1 switch FPGA that implements the switch and the ISA. The system uses the AMD Aurora IP for the physical and link layers; the transport layer provides independent buffers per memory-transaction type and uses credit-based flow control.<\/p>\n\n\n\n<p>In the prototype, each link consists of 4 GT lanes, each providing 32 Gbps of bidirectional bandwidth, for an aggregate bidirectional bandwidth of 128 Gbps per link. The flit size is 32B, the system runs at 250MHz, and the wave size is set to 4KB. The prototype measures an All-Reduce latency of 2.62 \u03bcs for 4KB messages and 2.27 ms for 16MB messages, with All-Reduce bandwidth utilization reaching 92.4% for large messages. This shows that SCIN's datapath runs on real hardware with high bandwidth utilization.<\/p>\n\n\n\n<p>However, an FPGA prototype cannot directly match a real GPU cluster in bandwidth or scale, so the authors further built a cycle-level network simulator calibrated against the FPGA prototype. After calibration, the simulated and measured results differ by less than 6%, indicating that the simulator can be used to evaluate a larger, more deployment-realistic 8-accelerator system.<\/p>\n\n\n\n<p>On the compute side, the authors profile LLaMA-2 models on H200 GPUs with TensorRT-LLM and combine the profiles with the network simulator to evaluate SCIN's impact on end-to-end TP inference. The quantization evaluation is based on the SmoothQuant codebase and measures the accuracy impact of INQ All-Reduce across multiple models and tasks.<\/p>\n\n\n\n
<figure class=\"wp-block-image size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"566\" height=\"534\" src=\"https:\/\/www.ndnlab.com\/wp-content\/uploads\/2026\/05\/image-7.png\"  class=\"wp-image-1329\" style=\"width:319px;height:auto\" srcset=\"https:\/\/www.ndnlab.com\/wp-content\/uploads\/2026\/05\/image-7.png 566w, https:\/\/www.ndnlab.com\/wp-content\/uploads\/2026\/05\/image-7-300x283.png 300w\" sizes=\"auto, (max-width: 566px) 100vw, 566px\" title=\"A Switch-Centric In-Network Architecture for Accelerating LLM Inference in Shared-Memory Network\u63d2\u56fe7\" alt=\"A Switch-Centric In-Network Architecture for Accelerating LLM Inference in Shared-Memory Network\u63d2\u56fe7\" \/><\/figure>\n\n\n\n
<figure class=\"wp-block-image size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"780\" height=\"460\" src=\"https:\/\/www.ndnlab.com\/wp-content\/uploads\/2026\/05\/image-8.png\"  class=\"wp-image-1330\" style=\"aspect-ratio:1.6957061305534873;width:454px;height:auto\" srcset=\"https:\/\/www.ndnlab.com\/wp-content\/uploads\/2026\/05\/image-8.png 780w, https:\/\/www.ndnlab.com\/wp-content\/uploads\/2026\/05\/image-8-300x177.png 300w, https:\/\/www.ndnlab.com\/wp-content\/uploads\/2026\/05\/image-8-768x453.png 768w\" sizes=\"auto, (max-width: 780px) 100vw, 780px\" title=\"A Switch-Centric In-Network Architecture for Accelerating LLM Inference in Shared-Memory Network\u63d2\u56fe8\" alt=\"A Switch-Centric In-Network Architecture for Accelerating LLM Inference in Shared-Memory Network\u63d2\u56fe8\" \/><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>6. 
\u5b9e\u9a8c\u7ed3\u679c\u4e0e\u6027\u80fd\u5206\u6790\uff08Evaluation\uff09<\/strong><\/h2>\n\n\n\n<p>\u9996\u5148\u662f INQ \u7684\u7cbe\u5ea6\u5f71\u54cd\u3002\u4f5c\u8005\u6bd4\u8f83\u4e86 INQ All-Reduce \u548c ring-based quantized All-Reduce\uff08RQ\uff09\u3002\u7ed3\u679c\u663e\u793a\uff0c\u5728 INT8\u3001block size=64 \u7684\u8bbe\u7f6e\u4e0b\uff0cINQ \u57fa\u672c\u4fdd\u6301 FP16 baseline \u7684 perplexity\uff0c\u4ec5\u6709\u6781\u5c0f\u9000\u5316\uff1b\u5728 INT4 \u66f4\u6fc0\u8fdb\u7684\u8bbe\u7f6e\u4e0b\uff0cINQ \u4e5f\u660e\u663e\u4f18\u4e8e RQ\u3002\u8fd9\u8bf4\u660e SCIN \u7684\u7f51\u7edc\u5185\u91cf\u5316\u786e\u5b9e\u6bd4 ring-based \u591a\u8f6e\u91cf\u5316\u66f4\u7a33\u5b9a\uff0c\u56e0\u4e3a\u5b83\u907f\u514d\u4e86\u91cf\u5316\u8bef\u5dee\u968f TP \u901a\u4fe1\u8f6e\u6570\u53cd\u590d\u7d2f\u79ef\u3002&nbsp;<\/p>\n\n\n\n<p>\u8fdb\u4e00\u6b65\u7684\u591a\u6a21\u578b\u7cbe\u5ea6\u5b9e\u9a8c\u4e5f\u652f\u6301\u8fd9\u4e00\u70b9\u3002\u4f5c\u8005\u5728 LLaMA-2-7B\u3001LLaMA-2-13B\u3001Mistral-7B\u3001Mixtral-8x7B \u7b49\u6a21\u578b\u4e0a\u6d4b\u8bd5 8-bit INQ All-Reduce\uff0c\u4efb\u52a1\u5305\u62ec MMLU \u548c\u591a\u4e2a commonsense QA benchmark\u3002\u603b\u4f53\u7ed3\u679c\u663e\u793a\uff0cINQ \u5728 FP16 \u548c FP8 \u6a21\u578b\u4e0a\u90fd\u53ea\u5e26\u6765\u6781\u5c0f\u7cbe\u5ea6\u6ce2\u52a8\uff0c\u6709\u4e9b\u4efb\u52a1\u751a\u81f3\u7565\u6709\u63d0\u5347\u3002\u56e0\u6b64\uff0c\u672c\u6587\u6700\u7ec8\u9009\u62e9 INT8\u3001block size=64 \u4f5c\u4e3a\u9ed8\u8ba4\u65b9\u6848\uff0c\u7528\u63a5\u8fd1\u65e0\u635f\u7684\u65b9\u5f0f\u6362\u53d6\u63a5\u8fd1 2 \u500d\u901a\u4fe1\u538b\u7f29\u3002&nbsp;<\/p>\n\n\n\n<p>\u5176\u6b21\u662f All-Reduce \u5e26\u5bbd\u548c\u5ef6\u8fdf\u3002\u6a21\u62df\u7ed3\u679c\u663e\u793a\uff0cSCIN \u5728\u5927\u6d88\u606f\u573a\u666f\u4e0b\u53ef\u4ee5\u63a5\u8fd1\u6700\u5927 payload bandwidth\uff1bINQ All-Reduce 
\u7531\u4e8e\u51cf\u5c11\u901a\u4fe1\u91cf\uff0c\u5728\u7b49\u6548\u5e26\u5bbd\u4e0a\u8fdb\u4e00\u6b65\u63d0\u9ad8\u3002\u4e0e\u8f6f\u4ef6 ring All-Reduce \u76f8\u6bd4\uff0cSCIN \u5bf9\u5c0f\u6d88\u606f\u548c\u5927\u6d88\u606f\u90fd\u6709\u660e\u663e\u52a0\u901f\uff0c\u5c0f\u6d88\u606f\u6700\u9ad8\u8fbe\u5230 8.7\u00d7\uff0c\u5927\u6d88\u606f\u6700\u9ad8\u8fbe\u5230 3.8\u00d7\u3002\u8fd9\u5206\u522b\u5bf9\u5e94 decode \u9636\u6bb5\u7684\u4f4e\u5ef6\u8fdf\u9700\u6c42\u548c prefill \u9636\u6bb5\u7684\u9ad8\u5e26\u5bbd\u9700\u6c42\u3002&nbsp;<\/p>\n\n\n\n<p>\u4f5c\u8005\u8fd8\u4e13\u95e8\u8bc4\u4f30\u4e86 wave regulation\u3002\u6ca1\u6709 wave regulation \u65f6\uff0c\u5373\u4f7f buffer \u80fd\u8986\u76d6 round-trip latency\uff0c\u4e5f\u53ea\u80fd\u8fbe\u5230\u5927\u7ea6 2\/3 \u603b\u5e26\u5bbd\uff1b\u52a0\u5165\u591a wave \u673a\u5236\u540e\uff0c\u968f\u7740 wave \u6570\u589e\u52a0\uff0c\u5e26\u5bbd\u660e\u663e\u63d0\u5347\uff0c\u5e76\u4e14 16 waves \u57fa\u672c\u8db3\u4ee5\u652f\u6491\u6ee1\u5e26\u5bbd\u3002\u8fd9\u4e2a\u5b9e\u9a8c\u8bf4\u660e\uff0cSCIN \u7684\u6027\u80fd\u5e76\u4e0d\u53ea\u662f\u6765\u81ea\u201c\u628a\u8ba1\u7b97\u653e\u5230\u4ea4\u6362\u673a\u91cc\u201d\uff0c\u8fd8\u4f9d\u8d56\u4e8e ISA \u5185\u90e8\u5bf9 buffer\u3001\u8bf7\u6c42\u548c\u8fd4\u56de\u6570\u636e\u7684\u7ec6\u7c92\u5ea6\u8c03\u5ea6\u3002&nbsp;<\/p>\n\n\n\n<p>\u6700\u540e\u662f\u7aef\u5230\u7aef LLM TP inference\u3002\u4f5c\u8005\u5728 LLaMA-2 \u7cfb\u5217\u6a21\u578b\u4e0a\u8bc4\u4f30 SCIN \u5bf9 TTFT \u548c TPOT \u7684\u5f71\u54cd\u3002\u7ed3\u679c\u663e\u793a\uff0c\u5728 FP16 \u4e0b\uff0cSCIN \u6700\u9ad8\u5e26\u6765 1.52\u00d7 TTFT \u548c 1.29\u00d7 TPOT \u52a0\u901f\uff1b\u5728 FP8 \u4e0b\uff0c\u7531\u4e8e\u8ba1\u7b97\u66f4\u5feb\u3001\u901a\u4fe1\u5360\u6bd4\u66f4\u9ad8\uff0cSCIN \u7684\u6536\u76ca\u66f4\u660e\u663e\uff0c\u6700\u9ad8\u8fbe\u5230 1.74\u00d7 TTFT \u548c 1.34\u00d7 TPOT\u3002\u8fd9\u4e5f\u8bf4\u660e\u4e00\u4e2a\u8d8b\u52bf\uff1a\u672a\u6765 GPU 
\u7b97\u529b\u7ee7\u7eed\u63d0\u5347\u540e\uff0c\u901a\u4fe1\u4f1a\u8d8a\u6765\u8d8a\u6210\u4e3a\u74f6\u9888\uff0cSCIN \u8fd9\u7c7b\u7f51\u7edc\u5185\u8ba1\u7b97\u67b6\u6784\u7684\u4ef7\u503c\u53ef\u80fd\u4f1a\u8fdb\u4e00\u6b65\u653e\u5927\u3002&nbsp;<\/p>\n\n\n\n<figure class=\"wp-block-image size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"441\" src=\"https:\/\/www.ndnlab.com\/wp-content\/uploads\/2026\/05\/image-9-1024x441.png\"  class=\"wp-image-1331\" style=\"aspect-ratio:2.3208426869279917;width:673px;height:auto\" srcset=\"https:\/\/www.ndnlab.com\/wp-content\/uploads\/2026\/05\/image-9-1024x441.png 1024w, https:\/\/www.ndnlab.com\/wp-content\/uploads\/2026\/05\/image-9-300x129.png 300w, https:\/\/www.ndnlab.com\/wp-content\/uploads\/2026\/05\/image-9-768x331.png 768w, https:\/\/www.ndnlab.com\/wp-content\/uploads\/2026\/05\/image-9.png 1360w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" title=\"A Switch-Centric In-Network Architecture for Accelerating LLM Inference in Shared-Memory Network\u63d2\u56fe9\" alt=\"A Switch-Centric In-Network Architecture for Accelerating LLM Inference in Shared-Memory Network\u63d2\u56fe9\" \/><\/figure>\n\n\n\n<figure class=\"wp-block-image size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"770\" height=\"908\" src=\"https:\/\/www.ndnlab.com\/wp-content\/uploads\/2026\/05\/image-10.png\"  class=\"wp-image-1332\" style=\"aspect-ratio:0.8480385050305894;width:363px;height:auto\" srcset=\"https:\/\/www.ndnlab.com\/wp-content\/uploads\/2026\/05\/image-10.png 770w, https:\/\/www.ndnlab.com\/wp-content\/uploads\/2026\/05\/image-10-254x300.png 254w, https:\/\/www.ndnlab.com\/wp-content\/uploads\/2026\/05\/image-10-768x906.png 768w\" sizes=\"auto, (max-width: 770px) 100vw, 770px\" title=\"A Switch-Centric In-Network Architecture for Accelerating LLM Inference in Shared-Memory Network\u63d2\u56fe10\" alt=\"A Switch-Centric In-Network Architecture for Accelerating LLM 
Inference in Shared-Memory Network\u63d2\u56fe10\" \/><\/figure>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"273\" src=\"https:\/\/www.ndnlab.com\/wp-content\/uploads\/2026\/05\/image-11-1024x273.png\"  class=\"wp-image-1333\" srcset=\"https:\/\/www.ndnlab.com\/wp-content\/uploads\/2026\/05\/image-11-1024x273.png 1024w, https:\/\/www.ndnlab.com\/wp-content\/uploads\/2026\/05\/image-11-300x80.png 300w, https:\/\/www.ndnlab.com\/wp-content\/uploads\/2026\/05\/image-11-768x204.png 768w, https:\/\/www.ndnlab.com\/wp-content\/uploads\/2026\/05\/image-11.png 1518w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" title=\"A Switch-Centric In-Network Architecture for Accelerating LLM Inference in Shared-Memory Network\u63d2\u56fe11\" alt=\"A Switch-Centric In-Network Architecture for Accelerating LLM Inference in Shared-Memory Network\u63d2\u56fe11\" \/><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>7. \u8d21\u732e\u4e0e\u7ed3\u8bba\uff08Contributions and Conclusion\uff09<\/strong><\/h2>\n\n\n\n<p>\u672c\u6587\u7684\u4e3b\u8981\u8d21\u732e\u53ef\u4ee5\u6982\u62ec\u4e3a\u56db\u70b9\uff1a<\/p>\n\n\n\n<p>\uff081\uff09\u63d0\u51fa <strong>SCIN<\/strong>\uff0c\u5373\u9996\u4e2a\u9762\u5411\u591a\u52a0\u901f\u5668\u5171\u4eab\u5185\u5b58\u7f51\u7edc\u7684 switch-centric in-network computing \u67b6\u6784\uff0c\u7528\u4ea4\u6362\u673a\u4e2d\u5fc3\u5f0f\u8bbe\u8ba1\u66ff\u4ee3\u4f20\u7edf accelerator-centric \u8bbe\u8ba1\u3002<\/p>\n\n\n\n<p>\uff082\uff09\u8bbe\u8ba1 <strong>In-Switch Accelerator\uff08ISA\uff09<\/strong> \u53ca\u5176\u914d\u5957\u901a\u4fe1\u673a\u5236\uff0c\u4f7f\u4ea4\u6362\u673a\u80fd\u591f\u4e3b\u52a8\u8bbf\u95ee\u52a0\u901f\u5668\u5185\u5b58\u3001\u6267\u884c All-Reduce\uff0c\u5e76\u76f4\u63a5\u5199\u56de\u7ed3\u679c\uff0c\u4ece\u800c\u51cf\u5c11\u5197\u4f59\u6570\u636e\u4f20\u8f93\u548c\u540c\u6b65\u5f00\u9500\u3002<\/p>\n\n\n\n<p>\uff083\uff09\u63d0\u51fa <strong>In-Network 
Quantization (INQ)<\/strong>, which performs All-Reduce quantization inside the ISA, lowering communication precision to 8-bit and achieving nearly 2x communication compression with almost no loss in model accuracy.<\/p>\n\n\n\n<p>(4) It implements a multi-FPGA prototype and uses a calibrated simulator to evaluate larger-scale systems, ultimately achieving clear TTFT and TPOT speedups on LLaMA-2 TP inference.&nbsp;<\/p>\n\n\n\n<p>This paper targets a well-chosen research problem. It does not stop at the level of 'communication is slow, so optimize the communication algorithm'; it further points out that existing in-network computing architectures are themselves still GPU-dominated, which constrains the data path, the synchronization path, and the set of supported operations. The key value of SCIN lies in shifting control of collective communication from the GPU side to the switch side, so that the switch genuinely takes on the computation and scheduling functions along the inference communication path.<\/p>\n\n\n\n<p>From a systems perspective, this work also reflects a broader trend: future LLM inference optimization will not be only a matter of model compression, kernel optimization, or scheduling policy; the interconnect itself will become part of AI system design. SCIN's contribution is not merely making All-Reduce faster, but offering a more extensible direction: in a shared-memory accelerator network, co-optimizing communication, quantization, and synchronization through programmable computation inside the switch.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>1. Abstract This paper studies the communication bottleneck in large-model inference, focusing on accelerating All-Reduce in Tensor Parallelism (TP) inference. As LLM parameter counts keep growing, a single GPU can hardly deliver low-latency inference on its own, and multi-accelerator parallelism has become the norm. In TP inference, however, an All-Reduce is typically needed after the attention and MLP blocks of every layer to synchronize the intermediate results on each accelerator. These communication operations sit directly on the inference critical path and cannot overlap with computation as well as backpropagation does in training, and thus &hellip; <a href=\"https:\/\/www.ndnlab.com\/?p=1321\">Continue reading <span 
class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":7,"featured_media":1322,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[5,23,6],"tags":[],"class_list":["post-1321","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-rengongzhineng","category-23","category-weilaiwangluo"],"_links":{"self":[{"href":"https:\/\/www.ndnlab.com\/index.php?rest_route=\/wp\/v2\/posts\/1321","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.ndnlab.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.ndnlab.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.ndnlab.com\/index.php?rest_route=\/wp\/v2\/users\/7"}],"replies":[{"embeddable":true,"href":"https:\/\/www.ndnlab.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1321"}],"version-history":[{"count":1,"href":"https:\/\/www.ndnlab.com\/index.php?rest_route=\/wp\/v2\/posts\/1321\/revisions"}],"predecessor-version":[{"id":1334,"href":"https:\/\/www.ndnlab.com\/index.php?rest_route=\/wp\/v2\/posts\/1321\/revisions\/1334"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.ndnlab.com\/index.php?rest_route=\/wp\/v2\/media\/1322"}],"wp:attachment":[{"href":"https:\/\/www.ndnlab.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=1321"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.ndnlab.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=1321"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.ndnlab.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=1321"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}