How to avoid SIGABRT when generating RSA Signature at EVP_SignFinal


I'm trying to generate an RSA signature with libopenssl in C++, but when I run my code I get a SIGABRT. I did some deep debugging into the libopenssl internals to see where the abort comes from; I'll come back to this below.

First I want to make clear that the RSA private key was successfully loaded from a .pem file, so I'm pretty sure that's not the problem's origin.

So my question is: what causes the SIGABRT, and how can I avoid it?

I'm doing this for my B.Sc. Thesis so I really appreciate your help :)


Signature Generation Function:

DocumentSignature* RSASignatureGenerator::generateSignature(ContentHash* ch, CryptographicKey* pK) throw(PDVSException) {
    OpenSSL_add_all_algorithms();
    OpenSSL_add_all_ciphers();
    OpenSSL_add_all_digests();

    if(pK == nullptr)
        throw MissingPrivateKeyException();

    if(pK->getKeyType() != CryptographicKey::KeyType::RSA_PRIVATE || !dynamic_cast<RSAPrivateKey*>(pK))
        throw KeyTypeMissmatchException(pK->getPem()->getPath().string(), "Generate RSA Signature");

    //get msg to encrypt
    const char* msg = ch->getStringHash().c_str();

    //get openssl rsa key
    RSA* rsaPK = dynamic_cast<RSAPrivateKey*>(pK)->createOpenSSLRSAKeyObject();

    //create openssl signing context
    EVP_MD_CTX* rsaSignCtx = EVP_MD_CTX_create();
    EVP_PKEY* priKey  = EVP_PKEY_new();
    EVP_PKEY_assign_RSA(priKey, rsaPK);

    //init ctxt
    if (EVP_SignInit(rsaSignCtx, EVP_sha256()) <=0)
        throw RSASignatureGenerationException();

    //add data to sign
    if (EVP_SignUpdate(rsaSignCtx, msg, std::strlen(msg)) <= 0) {
        throw RSASignatureGenerationException();
    }

    //create result byte signature struct
    DocumentSignature::ByteSignature* byteSig = new DocumentSignature::ByteSignature();
    //set size to max possible
    byteSig->size = EVP_MAX_MD_SIZE;
    //alloc buffer memory
    byteSig->data = (unsigned char*)malloc(byteSig->size);

    //do signing
    if (EVP_SignFinal(rsaSignCtx, byteSig->data, (unsigned int*) &byteSig->size, priKey) <= 0)
        throw RSASignatureGenerationException();


    DocumentSignature* res = new DocumentSignature(ch);
    res->setByteSignature(byteSig);

    EVP_MD_CTX_destroy(rsaSignCtx);
    //TODO open SSL Memory leaks -> where to free open ssl stuff?!

    return res;
}

RSA* rsaPK = dynamic_cast<RSAPrivateKey*>(pK)->createOpenSSLRSAKeyObject();

virtual RSA* createOpenSSLRSAKeyObject() throw (PDVSException) override {
        RSA* rsa = NULL;
        const char* c_string = _pem->getContent().c_str();
        BIO * keybio = BIO_new_mem_buf((void*)c_string, -1);

        if (keybio==NULL)
            throw OpenSSLRSAPrivateKeyObjectCreationException(_pem->getPath());

        rsa = PEM_read_bio_RSAPrivateKey(keybio, &rsa, NULL, NULL);

        if(rsa == nullptr)
            throw OpenSSLRSAPrivateKeyObjectCreationException(_pem->getPath());

        //BIO_free(keybio);

        return rsa;
    }

The SIGABRT originates in openssl/crypto/mem.c:

void CRYPTO_free(void *str, const char *file, int line)
{
    if (free_impl != NULL && free_impl != &CRYPTO_free) {
        free_impl(str, file, line);
        return;
    }

#ifndef OPENSSL_NO_CRYPTO_MDEBUG
    if (call_malloc_debug) {
        CRYPTO_mem_debug_free(str, 0, file, line);
        free(str);
        CRYPTO_mem_debug_free(str, 1, file, line);
    } else {
        free(str);
    }
#else
    free(str); // <<<<<<< HERE
#endif
}

The stack trace: (screenshot from the debugger, CLion/GDB-based; not reproduced here)

BEST ANSWER

I just found the bug (and I'm really not sure whether this is a libopenssl bug...)

//set size to max possible
byteSig->size = EVP_MAX_MD_SIZE;
//alloc buffer memory
byteSig->data = (unsigned char*)malloc(byteSig->size);

The problem was setting the buffer size to EVP_MAX_MD_SIZE!

The (in my opinion) very, very strange thing is that it worked when I kept the size uninitialized (not even set to 0, just "size_t size;").

The strange thing here is that you then also HAVE TO allocate memory just like I did. I don't understand this, because then an undefined amount of memory gets allocated.

What is really weird is that libopenssl internally sets the size back to 0 and allocates memory itself (I detected this by browsing the libopenssl source code).