Why does call to recv timeout when waiting for ICMPv6 ping replies?

I am trying to programmatically send and receive ICMPv6 ping packets from a Windows 7 machine. The code is adapted from existing code that successfully sends and receives IPv4 ping packets. The only differences I can see are that I am using IPv6 instead of IPv4, and that both the source and destination addresses are link-local.

The destination address I am pinging is fe80::b617:80ff:fe40:fe21%12 where the %12 selects the appropriate interface. Running ipconfig shows several network adapters on my machine:

Ethernet adapter Local Area Connection:

   Connection-specific DNS Suffix  . : dti.lan
   Link-local IPv6 Address . . . . . : fe80::49d5:a4a1:1d10:7e42%11
   IPv4 Address. . . . . . . . . . . : 192.168.0.71
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Default Gateway . . . . . . . . . : 192.168.0.1

Ethernet adapter ARM-dev-board-10.x.x.x:

   Connection-specific DNS Suffix  . : 
   Link-local IPv6 Address . . . . . : fe80::e540:1d52:7bf7:3e4%12
   IPv4 Address. . . . . . . . . . . : 10.86.11.123
   Subnet Mask . . . . . . . . . . . : 255.0.0.0
   IPv4 Address. . . . . . . . . . . : 172.16.17.6
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Default Gateway . . . . . . . . . : 172.16.17.1

and I am using the %12 scope id to select the fe80::e540:1d52:7bf7:3e4%12 link local address on the ARM-dev-board-10.x.x.x adapter.
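
As a sanity check that the %12 zone index is actually being applied, a minimal standalone snippet along these lines (assuming Winsock has already been initialised with WSAStartup and <ws2tcpip.h> is included) should report a scope id of 12:

    // check that getaddrinfo() turns the %12 suffix into sin6_scope_id
    struct addrinfo hints = { 0 };
    struct addrinfo* res  = nullptr;
    hints.ai_family = AF_INET6;
    hints.ai_flags  = AI_NUMERICHOST;          // the node string is a literal address
    if (getaddrinfo("fe80::b617:80ff:fe40:fe21%12", nullptr, &hints, &res) == 0)
    {
        const sockaddr_in6* sa = reinterpret_cast<const sockaddr_in6*>(res->ai_addr);
        printf("scope id = %lu\n", static_cast<unsigned long>(sa->sin6_scope_id));   // expect 12
        freeaddrinfo(res);
    }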

I have used Wireshark to monitor the network packets and I can see that my code correctly sends the ping request and that the target sends a ping reply back. The problem is that my code never receives the reply packet: the call to recv() times out (it returns SOCKET_ERROR and WSAGetLastError() returns 10060, WSAETIMEDOUT).

Is there some magic socket option I need to set to get Windows to pass the reply packets through to me?

I tried adding a call to bind() before the sendto() (it is still there, disabled, in the #if 0 block in the code below) but that didn't make any difference. I don't think I should need to call bind() anyway, since sendto() ought to bind the socket to the appropriate interface implicitly.

I am calling the following code with address = "fe80::b617:80ff:fe40:fe21%12" and timeoutInMs = 1000

bool Ping6Internal(const char* address, const int timeoutInMs)
    {
        bool result = false;
        int timeout = timeoutInMs;

        // get the destination address

        struct addrinfo* addrInfo = nullptr;
        struct addrinfo hints = { 0 };
        struct sockaddr_in6 dstAddr = { 0 };

        // We only care about IPV6 results
        hints.ai_family   = AF_INET6;
        hints.ai_socktype = SOCK_STREAM;
        hints.ai_flags    = AI_ADDRCONFIG;

        int errcode = getaddrinfo(address, nullptr, &hints, &addrInfo);
        if (errcode != 0)
        {
            // getaddrinfo() does not set errno, so perror() is not useful here
            fprintf(stderr, "[ERROR] getaddrinfo failed: %d\n", errcode);
            return false;
        }

        for (auto p = addrInfo; p; p = p->ai_next)
        {
            // Check to make sure we have a valid AF_INET6 address 
            if (p->ai_family == AF_INET6)
            {
                // Use memcpy since we're going to free the addrInfo variable
                memcpy(&dstAddr, p->ai_addr, p->ai_addrlen);
                dstAddr.sin6_family = AF_INET6;
                break;
            }
        }
        freeaddrinfo(addrInfo);

        SOCKET sockRaw = WSASocket(AF_INET6, SOCK_RAW, IPPROTO_ICMPV6, nullptr, 0, WSA_FLAG_OVERLAPPED);
        if (sockRaw == INVALID_SOCKET)
        {
            throw std::runtime_error("WSASocket failed");
        }
        int rv = setsockopt(sockRaw, SOL_SOCKET, SO_RCVTIMEO, reinterpret_cast<char*>(&timeout), sizeof(timeout));
        if (rv == SOCKET_ERROR)
        {
            closesocket(sockRaw);
            throw std::runtime_error("setsockopt SO_RCVTIMEO failed");
        }
        rv = setsockopt(sockRaw, SOL_SOCKET, SO_SNDTIMEO, reinterpret_cast<char*>(&timeout), sizeof(timeout));
        if (rv == SOCKET_ERROR)
        {
            closesocket(sockRaw);
            throw std::runtime_error("setsockopt SO_SNDTIMEO failed");
        }

        // Find out which local interface will be used when sending to this destination
        DWORD bytes;
        sockaddr_in6 srcAddr;
        rv = WSAIoctl(sockRaw, SIO_ROUTING_INTERFACE_QUERY, &dstAddr, sizeof(dstAddr),
                      (SOCKADDR *)&srcAddr, sizeof(srcAddr), &bytes, nullptr, nullptr);
        if (rv == SOCKET_ERROR)
        {
            closesocket(sockRaw);
            throw std::runtime_error("could not determine which interface to use");
        }
        string localAddress = FormatAddress((SOCKADDR*)&srcAddr, sizeof(srcAddr));

#if 0
        rv = bind(sockRaw, reinterpret_cast<struct sockaddr*>(&srcAddr), sizeof(srcAddr));
        if (rv == SOCKET_ERROR)
        {
            int errCode   = WSAGetLastError();
            string errMsg = GetErrorString(errCode);
        }
#endif 

        std::vector<char> icmpPacket(MAX_PACKET_SIZE);
        IcmpHeader* pHeaderTx = reinterpret_cast<IcmpHeader*>(icmpPacket.data());

        pHeaderTx->type      = ICMPV6_ECHO;
        pHeaderTx->code      = 0;
        pHeaderTx->checksum  = 0;
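        // the echo identifier field is 16 bits, so only the low 16 bits of the process id fit here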
        pHeaderTx->id        = static_cast<uint16_t>(GetCurrentProcessId());
        pHeaderTx->seqNum    = 0;
        pHeaderTx->timestamp = GetTickCount();

        // the upper 32 bits is the process id and the lower 32 bits is an incrementing counter
        pHeaderTx->uniqueId = (uint64_t(GetCurrentProcessId()) << 32) | g_threadData.GetNextCounter();

        const int headerSize = sizeof(IcmpHeader);
        const int packetSize = headerSize + DEF_PACKET_SIZE;
        std::fill(icmpPacket.data() + headerSize, icmpPacket.data() + packetSize, 'E');

        // Calculate the packet checksum.
        // The checksum is calculated over the IPv6 pseudo header plus the real packet.
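        // (pseudo header per RFC 8200: 16-byte source address, 16-byte destination address,
        //  32-bit upper-layer packet length, three zero bytes and the next-header value, 58 for ICMPv6)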

        IpV6PseudoHeader pseudoHeader = { 0 };
        pseudoHeader.srcAddress = srcAddr.sin6_addr;
        pseudoHeader.dstAddress = dstAddr.sin6_addr;
        pseudoHeader.length     = htonl(packetSize);   // length of the whole ICMPv6 message (header + payload)
        pseudoHeader.nextHeader = IPPROTO_ICMPV6;

        unsigned long sum = 0;
        const uint16_t* hdrU16 = reinterpret_cast<uint16_t*>(&pseudoHeader);
        for (int n = 0; n < sizeof(IpV6PseudoHeader) / 2; n++)
        {
            sum += hdrU16[n];
        }
        const uint16_t* dataU16 = reinterpret_cast<uint16_t*>(icmpPacket.data());
        for (int n = 0; n < packetSize / 2; n++)
        {
            sum += dataU16[n];
        }
        if (packetSize % 2)   // odd number of bytes so grab the final byte
        {
            sum += icmpPacket[packetSize - 1];
        }
        sum  = (sum >> 16) + (sum & 0xFFFF);
        sum += (sum >> 16);

        pHeaderTx->checksum = static_cast<uint16_t>(~sum);

        rv = sendto(sockRaw, icmpPacket.data(), packetSize, 0, (struct sockaddr*)&dstAddr, sizeof(dstAddr));

        if (rv != SOCKET_ERROR)
        {
            char recvBuf[MAX_PACKET_SIZE];
//            struct sockaddr_in6 from = { 0 };
//            sockaddr_storage from;

            auto now = boost::chrono::system_clock::now();
            const auto timeExpired = now + boost::chrono::milliseconds(timeoutInMs);

            do
            {
//                int fromSize = sizeof(from);
//                const int rv = recvfrom(sockRaw, recvBuf, MAX_PACKET_SIZE, 0, (struct sockaddr*) &from, &fromSize);
                const int rv = recv(sockRaw, recvBuf, MAX_PACKET_SIZE, 0);
                if (rv == SOCKET_ERROR)
                {
                    int errCode = WSAGetLastError();
                    string errMsg = GetErrorString(errCode);
                    break;
                }

                IcmpHeader* pHeaderRx = reinterpret_cast<IcmpHeader*>(recvBuf + IP_HEADER_SIZE);
                if (pHeaderRx->uniqueId == pHeaderTx->uniqueId)
                {
                    result = true;
                    break;
                }
                now = boost::chrono::system_clock::now();
            } while (now < timeExpired);
        }
        closesocket(sockRaw);

        return result;
    }
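
The IcmpHeader and IpV6PseudoHeader types (and constants such as ICMPV6_ECHO, DEF_PACKET_SIZE and MAX_PACKET_SIZE) are defined elsewhere in my code. For reference, the two structs look roughly like this (a sketch reconstructed from the fields used above, so the exact types and packing may differ):

    #pragma pack(push, 1)
    struct IcmpHeader                 // ICMPv6 echo header followed by my own payload fields
    {
        uint8_t  type;                // ICMPV6_ECHO (128) for an echo request
        uint8_t  code;                // 0
        uint16_t checksum;
        uint16_t id;
        uint16_t seqNum;
        uint32_t timestamp;           // start of the echo data
        uint64_t uniqueId;
    };

    struct IpV6PseudoHeader           // pseudo header used only for the checksum calculation
    {
        in6_addr srcAddress;
        in6_addr dstAddress;
        uint32_t length;              // upper-layer packet length, network byte order
        uint8_t  zero[3];
        uint8_t  nextHeader;          // IPPROTO_ICMPV6 (58)
    };
    #pragma pack(pop)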

Edited on 10 April 2019 to add:

Looking in the Windows Event Viewer at events with ID 5152 in the Security log, I can see this:

The Windows Filtering Platform has blocked a packet.

Application Information:
    Process ID:     4
    Application Name:   System

Network Information:
    Direction:      Inbound
    Source Address:     fe80::b617:80ff:fe40:fe21
    Source Port:        0
    Destination Address:    fe80::e540:1d52:7bf7:3e4
    Destination Port:       129
    Protocol:       58

Filter Information:
    Filter Run-Time ID: 648218
    Layer Name:     Receive/Accept
    Layer Run-Time ID:  46

which looks like the Windows Firewall has blocked the ping reply (protocol 58 is ICMPv6, and for ICMP traffic the "port" fields in these events carry the ICMP type and code, so 129 is an echo reply). After turning off the firewall (and temporarily disabling Sophos Endpoint as well) I no longer see the ID 5152 events saying the packet has been filtered, but my program still isn't receiving the ping reply packet :-(
