[Configuration]
. broker:      amqp://guest@localhost:5672//
. app:         __main__:0x1012d8590
. concurrency: 8 (processes)
. events:      OFF (enable -E to monitor this worker)

[Queues]
. celery:      exchange:celery(direct) binding:celery

[2012-06-08 16:23:51,078: WARNING/MainProcess] celery@halcyon.local has started.
– The broker is the URL you specified in the broker argument in our celery module. You can also specify a different
broker on the command-line by using the -b option.
– Concurrency is the number of prefork worker processes used to process your tasks concurrently; when all of these
are busy doing work, new tasks will have to wait for one of the tasks to finish before they can be processed.
The default concurrency number is the number of CPUs on that machine (including cores). You can specify a custom
number using the celery worker -c option. There’s no recommended value, as the optimal number depends on a
number of factors, but if your tasks are mostly I/O-bound you can try to increase it. Experimentation has shown that
adding more than twice the number of CPUs is rarely effective, and is likely to degrade performance instead.
In addition to the default prefork pool, Celery also supports using Eventlet, Gevent, and running in a single thread (see
Concurrency).
– Events is an option that, when enabled, causes Celery to send monitoring messages (events) for actions occurring
in the worker. These can be used by monitor programs like celery events, and Flower – the real-time Celery
monitor, which you can read about in the Monitoring and Management guide.
– Queues is the list of queues that the worker will consume tasks from. The worker can be told to consume from
several queues at once, and this is used to route messages to specific workers as a means for Quality of Service,
separation of concerns, and prioritization, all described in the Routing Guide.
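As a rough sketch of how these worker options relate to application configuration (the priority queue name and the
settings shown below are assumptions for illustration, not part of the tutorial project), the broker, concurrency, and
queue values can also be declared on the app itself:

from celery import Celery
from kombu import Queue

app = Celery('proj', broker='amqp://guest@localhost:5672//')

# Roughly equivalent to passing -c 8 on the command-line: cap the prefork pool at 8 processes.
app.conf.worker_concurrency = 8

# Declare an extra queue next to the default 'celery' queue; the 'priority' name is
# hypothetical. A worker started with -Q celery,priority would consume from both.
app.conf.task_queues = (
    Queue('celery'),
    Queue('priority'),
)

Command-line options such as -b and -c still override these values at worker startup, so settings like the above act
as defaults.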
You can get a complete list of command-line arguments by passing in the --help flag:
$ celery worker --help
These options are described in more detail in the Workers Guide.
Stopping the worker
To stop the worker simply hit Control-c. A list of signals supported by the worker is detailed in the Workers Guide.
In the background
In production you’ll want to run the worker in the background; this is described in detail in the daemonization tutorial.
The daemonization scripts use the celery multi command to start one or more workers in the background:
$ celery multi start w1 -A proj -l info
celery multi v4.0.0 (latentcall)
> Starting nodes...
> w1.halcyon.local: OK
You can restart it too:
$ celery multi restart w1 -A proj -l info
celery multi v4.0.0 (latentcall)
> Stopping nodes...
> w1.halcyon.local: TERM -> 64024
> Waiting for 1 node.....
> w1.halcyon.local: OK
> Restarting node w1.halcyon.local: OK
celery multi v4.0.0 (latentcall)
> Stopping nodes...
> w1.halcyon.local: TERM -> 64052
or stop it:
$ celery multi stop w1 -A proj -l info
The stop command is asynchronous so it won’t wait for the worker to shut down. You’ll probably want to use the
stopwait command instead; this ensures all currently executing tasks are completed before exiting:
$ celery multi stopwait w1 -A proj -l info
Note: celery multi doesn’t store information about workers, so you need to use the same command-line arguments
when restarting. Only the same pidfile and logfile arguments must be used when stopping.
By default it’ll create pid and log files in the current directory. To protect against multiple workers launching on top of
each other, you’re encouraged to put these in a dedicated directory:
$ mkdir -p /var/run/celery
$ mkdir -p /var/log/celery
$ celery multi start w1 -A proj -l info --pidfile=/var/run/celery/%n.pid \
--logfile=/var/log/celery/%n%I.log
With the multi command you can start multiple workers, and there’s a powerful command-line syntax to specify
arguments for different workers too, for example:
$ celery multi start 10 -A proj -l info -Q:1-3 images,video -Q:4,5 data \
-Q default -L:4,5 debug
Here the first three workers consume from the images and video queues, workers 4 and 5 consume from the data
queue and log at the debug level, and the remaining workers consume from the default queue.
For more examples see the multi module in the API reference.
About the --app argument
The --app argument specifies the Celery app instance to use; it must be in the form of module.path:attribute.
But it also supports a shortcut form: if only a package name is specified, it’ll try to search for the app instance
in the following order:
With --app=proj:
1) an attribute named proj.app, or
2) an attribute named proj.celery, or
3) any attribute in the module proj where the value is a Celery application, or
If none of these are found it’ll try a submodule named proj.celery:
4) an attribute named proj.celery.app, or
5) an attribute named proj.celery.celery, or
6) any attribute in the module proj.celery where the value is a Celery application.
This scheme mimics the practices used in the documentation – that is, proj:app for a single contained module, and
proj.celery:app for larger projects.
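For example, a minimal sketch of the larger-project layout this lookup order supports (the file contents here are
illustrative assumptions, not the tutorial’s exact code) would place the app in a proj/celery.py submodule so that
--app=proj resolves through rule 4:

# proj/celery.py
from celery import Celery

# Because this attribute is named app inside the proj.celery submodule,
# celery -A proj worker finds it via the proj.celery.app rule above.
app = Celery('proj', broker='amqp://guest@localhost:5672//')

if __name__ == '__main__':
    app.start()

Naming the submodule celery.py and the attribute app is what makes both --app=proj and the explicit
--app=proj.celery:app form point at the same instance.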
Calling Tasks