How to use the co_await operator in C++ the simplest way?

What is the minimum set of things I need to do to use the C++ co_await operator like the C# await operator?

There is an article about coroutines on cppreference that uses a C#-like class task<> as the return type, but I can't find it in the standard library. When the function returns void, this predictably leads to a compile error.

task in that case is just a placeholder for the plumbing you put together yourself.

From https://en.cppreference.com/w/cpp/language/coroutines

Every coroutine must have a return type that satisfies a number of requirements

When you use one of the co_* keywords in your code you need to #include <coroutine>. The compiler then transforms your function into a coroutine. Example:

void csp_await() {
  bidirectional_channel<int> ch;

  co_await ch.send(1);
}

The above code results in the following error:

class "std::coroutine_traits" has no member "promise_type"

Whatever the compiler is doing, it is looking for a promise_type through std::coroutine_traits and cannot find one, because void has no nested promise_type.

To get past this we need to introduce the following code.

struct task_handle {};

struct task : std::coroutine_handle<task_handle> {
  using promise_type = task_handle;
};
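
This works because the primary template of std::coroutine_traits simply forwards to a nested promise_type alias on the return type. A quick sanity check, assuming the definitions above:

#include <coroutine>
#include <type_traits>

// The compiler maps the coroutine's return type to its promise type:
static_assert(std::is_same_v<std::coroutine_traits<task>::promise_type,
                             task::promise_type>);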

The return type of the csp_await function is changed to use our task type, which results in a different error:

"task::promise_type" has no member "initial_suspend"

So we add some more stuff:

struct task_handle;

struct task : std::coroutine_handle<task_handle> {
  using promise_type = task_handle;
};

struct task_handle {
  task                get_return_object() { return {task::from_promise(*this)}; }
  std::suspend_always initial_suspend() noexcept { return {}; } // start suspended
  std::suspend_always final_suspend() noexcept { return {}; }   // frame stays alive until destroy()
  void                return_void() {}
  void                unhandled_exception() {}                  // swallows exceptions; fine for a demo
};
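
Note that because initial_suspend returns suspend_always, the coroutine body won't run until someone resumes the handle, and because final_suspend also suspends, the frame has to be destroyed manually. A minimal driver, reusing the task and task_handle types above (hello is a made-up coroutine just for illustration):

#include <coroutine>
#include <iostream>

task hello() {
  std::cout << "inside the coroutine\n";
  co_return;
}

int main() {
  task t = hello(); // suspended at initial_suspend, body has not run yet
  t.resume();       // runs the body up to final_suspend
  t.destroy();      // final_suspend also suspends, so we free the frame ourselves
}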

The next thing to do is to make the send function of the bidirectional_channel coroutine-friendly. It is currently a blocking function with a void return type, which cannot be used as is.

template <typename T> struct send_async_awaiter {
  bidirectional_channel<T> &ch_;
  T                         val_;

  bool await_ready() const noexcept { return false; }   // always suspend
  void await_suspend(std::coroutine_handle<> handle) {} // nothing parks or resumes us yet
  void await_resume() noexcept {}
};

template <typename T> send_async_awaiter<T> send_async(bidirectional_channel<T> &ch, const T &val) {
  return send_async_awaiter{ch, val};
}

send_async_awaiter is our awaitable, coroutine-friendly type. send_async is what the send function would need to be turned into to be awaitable. This code compiles but it doesn't do anything.
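
Putting the pieces together, csp_await now looks like this (a sketch; send_async is the free-function stand-in for the member send, and since our initial_suspend always suspends, the body only runs if the caller resumes the returned task):

task csp_await() {
  bidirectional_channel<int> ch;

  co_await send_async(ch, 1); // suspends in await_suspend; nothing resumes it
}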

I don't want to make this about CSP, but I need to go into a little bit of detail about what needs to happen in the channel's send function. In this example I'm using an unbuffered channel, which should suspend until there is a ready receiver. Assuming the receiver isn't ready to begin with, we need to park the send operation somewhere. For this we only need to consider await_ready and await_suspend: await_ready returns false, so await_suspend is called next, and it hands us a type-erased handle to our coroutine that we can use to park it.
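
To make that concrete, await_suspend could park the handle inside the channel, roughly like this. This is only a sketch: parked_sender and parked_value are hypothetical members I'm assuming on the async channel, not part of the original bidirectional_channel, and a real implementation would also check for an already-waiting receiver in await_ready.

template <typename T> struct send_async_awaiter {
  bidirectional_channel<T> &ch_;
  T                         val_;

  bool await_ready() const noexcept { return false; } // a real channel would check for a waiting receiver
  void await_suspend(std::coroutine_handle<> handle) {
    ch_.parked_value  = val_;   // stash the value for the receiver
    ch_.parked_sender = handle; // park the send operation inside the channel
  }
  void await_resume() noexcept {} // a send produces no value
};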

The implementation of bidirectional_channel that I started out with is based on threads. It uses a mutex and condition variables to wake up blocked threads, but I don't think a bidirectional_channel_async type should be implemented in the same way. Instead, I'm going to consider an intrusive approach where the suspended coroutine is parked inside the channel. I think this makes sense because the channel is where I later need to find the parked coroutine to resume it when the receiver is ready. It also implies that we could write a completely single-threaded CSP library that uses cooperative instead of preemptive multithreading. I think adding parallelism to such an async CSP library would be easier than going the other way.
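
On the receive side, a single-threaded receiver could then pick up the parked value and resume the sender, along these lines (again a sketch using the same hypothetical members, plus a parked_receiver slot):

#include <utility> // std::exchange

template <typename T> struct recv_async_awaiter {
  bidirectional_channel<T> &ch_;

  // Ready immediately if a sender is already parked in the channel.
  bool await_ready() const noexcept { return bool(ch_.parked_sender); }
  void await_suspend(std::coroutine_handle<> handle) {
    ch_.parked_receiver = handle; // otherwise park the receiver instead
  }
  T await_resume() {
    T val = std::move(ch_.parked_value);
    if (auto sender = std::exchange(ch_.parked_sender, nullptr))
      sender.resume(); // wake the sender; its co_await completes
    return val;
  }
};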

I didn't go into all the details here, but it's a start. I'm kinda glad that it is decoupled like this, because you can build whatever you think is best on top of it.