use crate::task::JoinHandle;

cfg_rt_multi_thread! {
/// Runs the provided blocking function on the current thread without
/// blocking the executor.
///
/// In general, issuing a blocking call or performing a lot of compute in a
/// future without yielding is problematic, as it may prevent the executor
/// from driving other tasks forward. Calling this function informs the
/// executor that the currently executing task is about to block the thread,
/// so the executor is able to hand off any other tasks it has to a new
/// worker thread before that happens. See the [CPU-bound tasks and blocking
/// code][blocking] section for more information.
///
/// Be aware that although this function avoids starving other independently
/// spawned tasks, any other code running concurrently in the same task will
/// be suspended during the call to `block_in_place`. This can happen e.g.
/// when using the [`join!`] macro. To avoid this issue, use
/// [`spawn_blocking`] instead of `block_in_place`.
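///
/// For illustration, a sketch of that pitfall: while the closure below runs,
/// the other branch of [`join!`] makes no progress, because both branches
/// belong to the same task (`do_other_work` is a hypothetical stand-in for
/// any other future).
///
/// ```
/// use tokio::task;
///
/// # async fn do_other_work() {}
/// # async fn docs() {
/// tokio::join!(
///     // This branch blocks the whole task while the closure runs...
///     async { task::block_in_place(|| { /* heavy synchronous work */ }) },
///     // ...so this branch is suspended until the closure returns.
///     do_other_work()
/// );
/// # }
/// ```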
///
/// Note that this function cannot be used within a [`current_thread`] runtime
/// because in this case there are no other worker threads to hand off tasks
/// to. On the other hand, calling the function outside a runtime is
/// allowed. In this case, `block_in_place` just calls the provided closure
/// normally.
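///
/// As a small sketch of the behavior described above, calling
/// `block_in_place` outside a runtime simply runs the closure on the
/// current thread:
///
/// ```
/// use tokio::task;
///
/// // No runtime is involved; the closure is executed directly.
/// let answer = task::block_in_place(|| 6 * 7);
/// assert_eq!(answer, 42);
/// ```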
///
/// Code running behind `block_in_place` cannot be cancelled. When you shut
/// down the executor, it will wait indefinitely for all blocking operations
/// to finish. You can use [`shutdown_timeout`] to stop waiting for them
/// after a certain timeout. Be aware that this will still not cancel the
/// tasks — they are simply allowed to keep running after the method
/// returns.
///
/// [blocking]: ../index.html#cpu-bound-tasks-and-blocking-code
/// [`spawn_blocking`]: fn@crate::task::spawn_blocking
/// [`join!`]: macro@join
/// [`thread::spawn`]: fn@std::thread::spawn
/// [`shutdown_timeout`]: fn@crate::runtime::Runtime::shutdown_timeout
///
/// # Examples
///
/// ```
/// use tokio::task;
///
/// # async fn docs() {
/// task::block_in_place(move || {
///     // do some compute-heavy work or call synchronous code
/// });
/// # }
/// ```
///
/// Code running inside `block_in_place` may use `block_on` to reenter the
/// async context.
///
/// ```
/// use tokio::task;
/// use tokio::runtime::Handle;
///
/// # async fn docs() {
/// task::block_in_place(move || {
///     Handle::current().block_on(async move {
///         // do something async
///     });
/// });
/// # }
/// ```
///
/// # Panics
///
/// This function panics if called from a [`current_thread`] runtime.
///
/// [`current_thread`]: fn@crate::runtime::Builder::new_current_thread
pub fn block_in_place<F, R>(f: F) -> R
where
    F: FnOnce() -> R,
{
    crate::runtime::thread_pool::block_in_place(f)
}
}

cfg_rt! {
/// Runs the provided closure on a thread where blocking is acceptable.
///
/// In general, issuing a blocking call or performing a lot of compute in a
/// future without yielding is problematic, as it may prevent the executor from
/// driving other futures forward. This function runs the provided closure on a
/// thread dedicated to blocking operations. See the [CPU-bound tasks and
/// blocking code][blocking] section for more information.
///
/// Tokio will spawn more blocking threads when they are requested through this
/// function until the upper limit configured on the [`Builder`] is reached.
/// After reaching the upper limit, the tasks are put in a queue.
/// The thread limit is very large by default, because `spawn_blocking` is often
/// used for various kinds of IO operations that cannot be performed
/// asynchronously. When you run CPU-bound code using `spawn_blocking`, you
/// should keep this large upper limit in mind. When running many CPU-bound
/// computations, a semaphore or some other synchronization primitive should be
/// used to limit the number of computations executed in parallel (see the
/// semaphore example below). Specialized
/// CPU-bound executors, such as [rayon], may also be a good fit.
///
/// This function is intended for non-async operations that eventually finish on
/// their own. If you want to spawn an ordinary thread, you should use
/// [`thread::spawn`] instead.
///
/// Closures spawned using `spawn_blocking` cannot be cancelled. When you shut
/// down the executor, it will wait indefinitely for all blocking operations to
/// finish. You can use [`shutdown_timeout`] to stop waiting for them after a
/// certain timeout. Be aware that this will still not cancel the tasks — they
/// are simply allowed to keep running after the method returns.
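///
/// As a sketch of bounding that wait, a manually created runtime can be shut
/// down with [`shutdown_timeout`]; the one second timeout below is arbitrary:
///
/// ```
/// use std::time::Duration;
/// use tokio::runtime::Runtime;
///
/// let rt = Runtime::new().unwrap();
///
/// rt.block_on(async {
///     // This blocking task outlives the shutdown call below.
///     let _detached = tokio::task::spawn_blocking(|| {
///         std::thread::sleep(Duration::from_secs(10));
///     });
/// });
///
/// // Wait at most one second for blocking tasks to finish, then stop
/// // waiting. The closure above is not cancelled; it simply keeps running
/// // on its thread until it finishes or the process exits.
/// rt.shutdown_timeout(Duration::from_secs(1));
/// ```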
///
/// Note that if you are using the single-threaded runtime, this function will
/// still spawn additional threads for blocking operations. That runtime's
/// single thread is only used for asynchronous code.
///
/// # Related APIs and patterns for bridging asynchronous and blocking code
///
/// In simple cases, it is sufficient to have the closure accept input
/// parameters at creation time and return a single value (or struct/tuple, etc.).
///
/// For more complex situations in which it is desirable to stream data to or from
/// the synchronous context, the [`mpsc channel`] has `blocking_send` and
/// `blocking_recv` methods for use in non-async code such as the thread created
/// by `spawn_blocking`.
///
/// Another option is [`SyncIoBridge`] for cases where the synchronous context
/// is operating on byte streams. For example, you might use an asynchronous
/// HTTP client such as [hyper] to fetch data, but perform complex parsing
/// of the payload body using a library written for synchronous I/O.
///
/// Finally, see also [Bridging with sync code][bridgesync] for discussions
/// around the opposite case of using Tokio as part of a larger synchronous
/// codebase.
///
/// [`Builder`]: struct@crate::runtime::Builder
/// [blocking]: ../index.html#cpu-bound-tasks-and-blocking-code
/// [rayon]: https://docs.rs/rayon
/// [`mpsc channel`]: crate::sync::mpsc
/// [`SyncIoBridge`]: https://docs.rs/tokio-util/0.6/tokio_util/io/struct.SyncIoBridge.html
/// [hyper]: https://docs.rs/hyper
/// [`thread::spawn`]: fn@std::thread::spawn
/// [`shutdown_timeout`]: fn@crate::runtime::Runtime::shutdown_timeout
/// [bridgesync]: https://tokio.rs/tokio/topics/bridging
///
/// # Examples
///
/// Pass an input value and receive the result of the computation:
///
/// ```
/// use tokio::task;
///
/// # async fn docs() -> Result<(), Box<dyn std::error::Error>> {
/// // Initial input
/// let mut v = "Hello, ".to_string();
/// let res = task::spawn_blocking(move || {
///     // Stand-in for compute-heavy work or using synchronous APIs
///     v.push_str("world");
///     // Pass ownership of the value back to the asynchronous context
///     v
/// }).await?;
///
/// // `res` is the value returned from the thread
/// assert_eq!(res.as_str(), "Hello, world");
/// # Ok(())
/// # }
/// ```
///
/// Use a channel:
///
/// ```
/// use tokio::task;
/// use tokio::sync::mpsc;
///
/// # async fn docs() {
/// let (tx, mut rx) = mpsc::channel(2);
/// let start = 5;
/// let worker = task::spawn_blocking(move || {
///     for x in 0..10 {
///         // Stand in for complex computation
///         tx.blocking_send(start + x).unwrap();
///     }
/// });
///
/// let mut acc = 0;
/// while let Some(v) = rx.recv().await {
///     acc += v;
/// }
/// assert_eq!(acc, 95);
/// worker.await.unwrap();
/// # }
/// ```
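///
/// As suggested above, use a semaphore to limit how many CPU-bound
/// computations run in parallel (a sketch; the limit of two permits is
/// arbitrary):
///
/// ```
/// use std::sync::Arc;
/// use tokio::sync::Semaphore;
/// use tokio::task;
///
/// # async fn docs() {
/// // Allow at most two blocking computations at a time.
/// let semaphore = Arc::new(Semaphore::new(2));
/// let mut handles = Vec::new();
///
/// for i in 0..8u64 {
///     // Wait until one of the two permits is available.
///     let permit = semaphore.clone().acquire_owned().await.unwrap();
///     handles.push(task::spawn_blocking(move || {
///         // Stand-in for an expensive computation
///         let result = i * i;
///         // Dropping the permit lets the next computation start.
///         drop(permit);
///         result
///     }));
/// }
///
/// let mut sum = 0;
/// for handle in handles {
///     sum += handle.await.unwrap();
/// }
/// assert_eq!(sum, 140);
/// # }
/// ```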
#[track_caller]
pub fn spawn_blocking<F, R>(f: F) -> JoinHandle<R>
where
    F: FnOnce() -> R + Send + 'static,
    R: Send + 'static,
{
    crate::runtime::spawn_blocking(f)
}
}