gRPC service fails to allocate memory within a Kubernetes pod


I have a gRPC service deployed on Kubernetes, and all memory allocation goes through tcmalloc. I frequently run into out-of-memory issues within the pod.

Stacktrace here:

terminate called after throwing an instance of 'std::length_error'
  what():  basic_string::_M_create
*** Aborted at 1668932192 (unix time) try "date -d @1668932192" if you are using GNU date ***
PC: @                0x0 (unknown)
*** SIGABRT (@0x1) received by PID 1 (TID 0x7f2e45bd3700) from PID 1; stack trace: ***
    @     0x7f2e4c7d63c0 (unknown)
    @     0x7f2e4c4a918b gsignal
    @     0x7f2e4c488859 abort
    @     0x7f2e4c8a1951 (unknown)
    @     0x7f2e4c8ad47c (unknown)
    @     0x7f2e4c8ad4e7 std::terminate()
    @     0x7f2e4c8ad799 __cxa_throw
    @     0x7f2e4c8a4366 std::__throw_length_error()
    @     0x7f2e4c9459fc std::__cxx11::basic_string<>::_M_create()
    @     0x561821534960 __gnu_cxx::new_allocator<>::construct<>()
    @     0x56182153390b metrics::ReadReporter::Report()
    @     0x56182125cb54 std::_Function_handler<>::_M_invoke()
    @     0x56182151f2aa std::_Function_handler<>::_M_invoke()
    @     0x56182154092e std::_Function_handler<>::_M_invoke()
    @     0x56182153f5e5 file::HttpRequest::Invoke()
    @     0x7f2e4c8d9d84 (unknown)
    @     0x7f2e4c7ca609 start_thread
    @     0x7f2e4c585293 clone
    @                0x0 (unknown)
terminate called recursively
terminate called recursively
external/com_google_tcmalloc/tcmalloc/tcmalloc_policy.h:102] Unable to allocate (new failed) 8111000728417 @ 0x561822037742 0x56182200fc67 0x561822003b0f 0x561821534960 0x56182153390b 0x56182125cb54 0x56182125d10a 0x56182151f2aa 0x56182154092e 0x56182153f5e5 0x561821545432 0x7f2e4c8d9d84
See https://github.com/google/tcmalloc/tree/master/docs/stats.md for an explanation of this page
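
For context, the std::length_error at the top of the trace is what libstdc++ throws when a string is constructed with a length above basic_string::max_size(). A minimal sketch reproducing that error path (illustration only, not my actual metrics::ReadReporter::Report() code):

#include <cstddef>
#include <cstdio>
#include <stdexcept>
#include <string>

int main() {
  try {
    // With libstdc++, any count above basic_string::max_size() makes
    // _M_create throw std::length_error whose what() is
    // "basic_string::_M_create" -- before any memory is requested.
    std::string s(static_cast<std::size_t>(-1), 'x');
  } catch (const std::length_error& e) {
    std::printf("caught std::length_error: %s\n", e.what());
  }
  return 0;
}

In my service the exception apparently escapes the callback chain on a worker thread, so std::terminate() runs and the process aborts with SIGABRT, which is what the trace above shows.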

I have registered readiness and liveness probes for the service. The probe endpoint is simply a thread that listens and responds over TCP, roughly like the sketch below.
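
This is only a simplified illustration; the port number and the reply string are placeholders, and the real server also runs the gRPC service alongside this thread:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

#include <cstdint>
#include <thread>

// Plain TCP listener used only for the kubelet probes: accept a
// connection, send a short reply, close.
void ProbeListener(uint16_t port) {
  int fd = socket(AF_INET, SOCK_STREAM, 0);
  if (fd < 0) return;
  int one = 1;
  setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));

  sockaddr_in addr{};
  addr.sin_family = AF_INET;
  addr.sin_addr.s_addr = htonl(INADDR_ANY);
  addr.sin_port = htons(port);
  if (bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) return;
  listen(fd, 16);

  while (true) {
    int client = accept(fd, nullptr, nullptr);
    if (client < 0) continue;
    const char reply[] = "OK\n";
    ssize_t ignored = write(client, reply, sizeof(reply) - 1);
    (void)ignored;
    close(client);
  }
}

int main() {
  std::thread probe(ProbeListener, 8081);  // hypothetical probe port
  probe.join();                            // the real service runs gRPC here
  return 0;
}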

My questions are:

  1. Why doesn't the pod fail with an OOM (a SIGABRT was raised, as you can see in the stack trace) and get restarted by Kubernetes?
  2. I know I could probably give the container a larger memory limit and add a rate limiter to work around it, but I'm wondering what the best practice is for this kind of issue?