Can `msg_store_persistent` be deleted?
Time: 2023-08-03 20:03:07 · Views: 194
You can delete `msg_store_persistent`, but it is not recommended: it is RabbitMQ's persistent message store and is essential for message durability. If you delete it, durability can no longer be guaranteed, and unprocessed messages may be lost when the RabbitMQ server crashes or restarts.
If what you actually want is to disable persistence, mark messages as non-persistent instead of deleting the `msg_store_persistent` store. To do that, set the `delivery_mode` property to 1 when publishing, for example:
```python
channel.basic_publish(exchange='my_exchange',
                      routing_key='my_key',
                      body='Hello World',
                      properties=pika.BasicProperties(delivery_mode=1))
```
This ensures the message is not persisted, so it will be gone after the RabbitMQ server restarts.
Related question
```
Traceback (most recent call last):
  File "DT_001_X01_P01.py", line 150, in DT_001_X01_P01.Module.load_model
  File "/home/kejia/Server/tf/Bin_x64/DeepLearning/DL_Lib_02/mmdet/apis/inference.py", line 42, in init_detector
    checkpoint = load_checkpoint(model, checkpoint, map_location=map_loc)
  File "/home/kejia/Server/tf/Bin_x64/DeepLearning/DL_Lib_02/mmcv/runner/checkpoint.py", line 529, in load_checkpoint
    checkpoint = _load_checkpoint(filename, map_location, logger)
  File "/home/kejia/Server/tf/Bin_x64/DeepLearning/DL_Lib_02/mmcv/runner/checkpoint.py", line 467, in _load_checkpoint
    return CheckpointLoader.load_checkpoint(filename, map_location, logger)
  File "/home/kejia/Server/tf/Bin_x64/DeepLearning/DL_Lib_02/mmcv/runner/checkpoint.py", line 244, in load_checkpoint
    return checkpoint_loader(filename, map_location)
  File "/home/kejia/Server/tf/Bin_x64/DeepLearning/DL_Lib_02/mmcv/runner/checkpoint.py", line 261, in load_from_local
    checkpoint = torch.load(filename, map_location=map_location)
  File "torch/serialization.py", line 594, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "torch/serialization.py", line 853, in _load
    result = unpickler.load()
  File "torch/serialization.py", line 845, in persistent_load
    load_tensor(data_type, size, key, _maybe_decode_ascii(location))
  File "torch/serialization.py", line 834, in load_tensor
    loaded_storages[key] = restore_location(storage, location)
  File "torch/serialization.py", line 175, in default_restore_location
    result = fn(storage, location)
  File "torch/serialization.py", line 157, in _cuda_deserialize
    return obj.cuda(device)
  File "torch/_utils.py", line 71, in _cuda
    with torch.cuda.device(device):
  File "torch/cuda/__init__.py", line 225, in __enter__
    self.prev_idx = torch._C._cuda_getDevice()
  File "torch/cuda/__init__.py", line 164, in _lazy_init
    "Cannot re-initialize CUDA in forked subprocess. " + msg)
RuntimeError: Cannot re-initialize CUDA in forked subprocess.
To use CUDA with multiprocessing, you must use the 'spawn' start method
('异常抛出', None) DT_001_X01_P01 load_model ret=1, version=V1.0.0.0
```
Based on the error message, you are hitting a CUDA initialization error while using multiprocessing: CUDA cannot be re-initialized in a forked subprocess. To use CUDA across multiple processes, you must use the 'spawn' start method.
You can set the 'spawn' start method by adding the following to your main module:
```python
import multiprocessing as mp

if __name__ == '__main__':
    mp.set_start_method('spawn')
    # your main program logic goes here
```
Put your main program logic inside the `if __name__ == '__main__':` block and call `mp.set_start_method('spawn')` there. This makes 'spawn' the default start method for the current process, so child processes are spawned instead of forked and the CUDA initialization error is avoided. Note that `set_start_method` may only be called once per process, and it must run before any child process is created.
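As a runnable sketch of the pattern (plain multiprocessing only; the CUDA model-loading step is replaced by a placeholder, and `load_model` and `'model.pth'` are hypothetical names), a spawned child can safely do work that must not happen in a forked process:

```python
import multiprocessing as mp

def load_model(path, queue):
    # In the real code this is where CUDA would initialize (e.g. torch.load);
    # that is safe here because the child was spawned, not forked.
    queue.put(f"loaded {path}")

def main():
    ctx = mp.get_context('spawn')  # per-call alternative to set_start_method
    q = ctx.Queue()
    p = ctx.Process(target=load_model, args=('model.pth', q))
    p.start()
    result = q.get()   # blocks until the child reports back
    p.join()
    return result

if __name__ == '__main__':
    print(main())  # prints: loaded model.pth
```

Using `mp.get_context('spawn')` scopes the start method to this one context, which is convenient when other parts of the program still rely on the platform default.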
Hope this solution helps. If you have any other questions, feel free to ask.
All files should be placed under `app/school_apps/emms`
OK, here is an asynchronous task-queue example (using topthink/think-queue) with the files under the `app/school_apps/emms` directory:
First, add the queue connection and driver settings in the `config/queue.php` configuration file, e.g. using the Redis driver:
```php
return [
    'default'     => 'redis',
    'connections' => [
        'redis' => [
            'type'       => 'redis',
            'host'       => '127.0.0.1',
            'port'       => 6379,
            'password'   => '',
            'select'     => 0,
            'timeout'    => 0,
            'persistent' => false,
            'prefix'     => '',
        ],
    ],
];
```
Next, create a queue job class named `FileProcessJob.php` in the `app/school_apps/emms/job` directory with the following content (think-queue invokes a job's `fire` method by default):
```php
<?php
namespace app\school_apps\emms\job;

use think\queue\Job;

class FileProcessJob
{
    // think-queue calls fire() on the job class by default
    public function fire(Job $job, $data)
    {
        // Retrieve the file path passed with the job
        $filePath = $data['file_path'];

        // File-processing logic goes here,
        // e.g. reading, writing, or parsing the file

        // Delete the job once it has been handled successfully
        $job->delete();
    }
}
```
Then dispatch the job from a controller. In the controller method that should trigger the asynchronous task, add:
```php
<?php
namespace app\school_apps\emms\controller;

use app\school_apps\emms\job\FileProcessJob;
use think\facade\Queue;

class FileController
{
    public function upload()
    {
        // File-upload logic
        // ...

        // Path of the uploaded file
        $filePath = '/path/to/uploaded/file.txt';

        // Push the job onto the queue: pass the job class name and the payload
        // (Queue::push expects a class name, not a job instance)
        Queue::push(FileProcessJob::class, ['file_path' => $filePath]);

        return 'Job pushed to queue';
    }
}
```
Finally, start a queue consumer. think-queue registers its own console commands with ThinkPHP (there is no `Queue::listen()` facade method, so a custom command file is unnecessary); from the project root run:
```shell
php think queue:work
```
(or `php think queue:listen`, which keeps a long-running listener that restarts the worker between jobs).
Now, when the `upload` method of `FileController` is called, the file-processing job is pushed onto the queue and handled by the consumer.
Make sure the directories and namespaces match your actual project layout. Hope this helps!