Parallel Architecture for High-Speed LZSS Encoding and Decoding

"平行架构用于高速LZSS数据编码/解码" 本文主要探讨了平行架构在实现高速LZSS(Lempel-Ziv-Storer-Szymanski)数据编码和解码中的应用。随着计算机技术的飞速发展,对高效信息传输和存储的需求日益增长,数据压缩技术因此成为一种关键的技术手段。LZSS编码是一种广泛使用的数据压缩方法,它通过消除LZ77编码表达中的冗余来提供较高的压缩比率。 在当前,大量数据在网络中的快速传输需求促使人们考虑专门的硬件来实现数据压缩。作者Toyota Fujioka和Hirotomo Aso提出了一个名为PAHL-LZSS(Parallel Architecture for High-Speed LZSS)的高速LZSS编码和解码并行处理架构。这个架构的目标是解决高速LZSS编码的需求,以满足大数据量的实时传输场景。 PAHL-LZSS架构的优势在于其能够通过积极利用生成代码的统计特性,降低计算负担,从而实现高效的压缩。在并行处理的框架下,该架构能够分解和分配任务,使得多个处理单元同时工作,大大提升了数据处理速度,减少了编码和解码的时间延迟。 LZSS编码的基本原理是查找输入数据中的重复模式,并用这些模式的引用替换它们,以减少数据的存储需求。PAHL-LZSS架构可能包含了查找、匹配和编码等多个并行处理模块,每个模块都针对特定的步骤进行优化,以最大化系统整体性能。 在实际应用中,这样的并行处理架构对于网络通信、数据存储和流媒体服务等领域具有重要意义,尤其是在需要实时处理大量数据的场景,如云计算、物联网(IoT)设备和高清晰度视频传输等。通过并行化处理,不仅可以提高数据传输速率,还可以减轻服务器的计算压力,提高系统的能源效率。 PAHL-LZSS架构展示了在硬件层面优化数据压缩算法的潜力,为高速数据处理提供了解决方案。通过并行处理和利用数据的统计特性,该架构能够在保持高效压缩的同时,显著提升LZSS编码的速度,为未来的高速数据传输和存储技术的发展开辟了新的路径。
