I'm trying to intentionally exhaust an API limit (900 calls) by running the following function:
```rust
#[get("/exhaust")]
pub async fn exhaust(_pool: web::Data<PgPool>, config: web::Data<Arc<Settings>>) -> impl Responder {
    let mut handles = vec![];
    for i in 1..=900 {
        let inner_config = config.clone();
        let handle = thread::spawn(move || async move {
            println!("running thread {}", i);
            get_single_tweet(inner_config.as_ref().deref(), "1401287393228038149")
                .await
                .unwrap();
        });
        handles.push(handle);
    }
    for h in handles {
        h.join().unwrap().await;
    }
    HttpResponse::Ok()
}
```
My machine has 16 cores so I expected the above to run 16x faster than a single-threaded function, but it doesn't. In fact it runs exactly as slow as the single-threaded version.
Why is that? What am I missing?
Note: the move || async move part looks a little weird to me, but I got there by following suggestions from the compiler. It wouldn't let me put async next to the first move due to async closures being unstable. Could that be the issue?
This code will indeed run your `async` blocks synchronously. An `async` block creates a type that implements `Future`, but one thing to know is that `Future`s don't start running on their own: they have to either be `.await`-ed or given to an executor to run.

Calling `thread::spawn` with a closure that returns a `Future`, as you've done, will not execute them; the threads simply create the `async` block and return. So the `async` blocks aren't actually executed until you `.await` them in the loop over `handles`, which processes the futures in order, one at a time.

One way to fix this is to use `join_all` from the `futures` crate to run them all concurrently.