I'm trying to intentionally exhaust an API limit (900 calls) by running the following function:
#[get("/exhaust")]
pub async fn exhaust(_pool: web::Data<PgPool>, config: web::Data<Arc<Settings>>) -> impl Responder {
let mut handles = vec![];
for i in 1..900 {
let inner_config = config.clone();
let handle = thread::spawn(move || async move {
println!("running thread {}", i);
get_single_tweet(inner_config.as_ref().deref(), "1401287393228038149")
.await
.unwrap();
});
handles.push(handle);
}
for h in handles {
h.join().unwrap().await;
}
HttpResponse::Ok()
My machine has 16 cores, so I expected the above to run roughly 16x faster than a single-threaded version, but it doesn't. In fact, it runs exactly as slowly as the single-threaded version.

Why is that? What am I missing?

Note: the `move || async move` part looks a little weird to me, but I got there by following suggestions from the compiler. It wouldn't let me put `async` next to the first `move` because async closures are unstable. Could that be the issue?

This code will indeed run your `async` blocks synchronously. An `async` block creates a type that implements `Future`, but one thing to know is that `Future`s don't start running on their own: they have to either be `await`-ed or given to an executor to run.
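Here is a minimal standalone sketch of that laziness (not taken from your code), using `futures::executor::block_on` as the executor:

```rust
use futures::executor::block_on;

fn main() {
    // The async block only *builds* a future; its body has not run yet.
    let fut = async {
        println!("this prints only once the future is actually driven");
    };
    println!("future created, nothing has run yet");

    // Handing the future to an executor (or `.await`-ing it inside another
    // future) is what finally runs the body.
    block_on(fut);
}
```
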
Calling `thread::spawn` with a closure that returns a `Future`, as you've done, will not execute them; the threads are simply creating the `async` block and returning. So the `async` blocks aren't actually being executed until you `await` them in the loop over `handles`, which will process the futures in order.

One way to fix this is to use `join_all` from the `futures` crate to run them all simultaneously.
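As a rough sketch (assuming the same `get_single_tweet` helper and `Settings` type from your own crate, and the actix-web / sqlx / futures dependencies implied by your snippet), that could look something like this:

```rust
use std::ops::Deref;
use std::sync::Arc;

use actix_web::{get, web, HttpResponse, Responder};
use futures::future::join_all;
use sqlx::PgPool;

// `Settings` and `get_single_tweet` are assumed to come from your own crate,
// exactly as in the question; adjust the imports to match your project.

#[get("/exhaust")]
pub async fn exhaust(_pool: web::Data<PgPool>, config: web::Data<Arc<Settings>>) -> impl Responder {
    // Build the futures without awaiting them; nothing runs yet at this point.
    let requests = (1..900).map(|i| {
        let inner_config = config.clone();
        async move {
            println!("running request {}", i);
            get_single_tweet(inner_config.as_ref().deref(), "1401287393228038149")
                .await
                .unwrap();
        }
    });

    // join_all polls every future concurrently on the current task, so the
    // requests overlap instead of completing one after another.
    join_all(requests).await;

    HttpResponse::Ok()
}
```

Note that `join_all` drives the futures concurrently on a single task rather than spreading them across OS threads; for I/O-bound work like HTTP calls that is usually what you want, and `thread::spawn` isn't needed at all.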