golang http request error panic recover

I am fairly new to coding in Go and am struggling with the panic/recover process for a bad URL request. Below is a script that queries a list of URLs and prints the responses. Occasionally a bad URL is entered or a server is down, the HTTP request fails, and the program panics. I am not clear on how to recover from this and continue. I want the program to recover from the panic, record the bad URL and the error, and then continue down the list, printing the failed URL and error alongside the normal response data for the rest.

package main

import (
    "fmt"
    "net/http"
)

var urls = []string{
    "http://www.google.com",        //good url, 200
    "http://www.googlegoogle.com/", //bad url
    "http://www.zoogle.com",        //500 example
}

//CONCURRENT HTTP REQUESTS -------------------------------------------
func MakeRequest(url string, ch chan<- string) {
    resp, err := http.Get(url)
    if err != nil {
        fmt.Println("Error Triggered", err)
        ch <- fmt.Sprintf("err: %s", err)
    }
    ch <- fmt.Sprintf("url: %s, status: %s ", url, resp.Status) // put response into a channel
    resp.Body.Close()
}

func main() {
    output := make([][]string, 0) //define an array to hold responses

    //PANIC RECOVER------------------------------
    defer func() { //catch or finally
        if r := recover(); r != nil { //catch
            fmt.Println("Recover Triggered: ", r)
        }
    }()

    //MAKE URL REQUESTS----------------------------------------------
    for _, url := range urls {
        ch := make(chan string)                 //create a channel for each request
        go MakeRequest(url, ch)                 //make concurrent http request
        output = append(output, []string{<-ch}) //append output to an array
    }

    //PRINT OUTPUT ----------------------
    for _, value := range output {
        fmt.Println(value)
    }
}

I am looking for an output similar to:

[url: http://www.google.com, status: 200 OK ]

[url: http://www.googlegoogle.com, err: no such host]

[url: http://www.zoogle.com, status: 500 Internal Server Error ]

There are 2 answers below.

Answer 1

Thanks Jim B. I assumed the panic was triggered by the request itself, but it was actually the attempt to read resp.Status after a failed request: when http.Get returns an error, resp is nil, so dereferencing it panics. I modified my error handling to send resp.Status into ch only when there is no error; in the error case, I send a different message into ch carrying the error value instead. No recover is needed, since no panic is triggered.

func MakeRequest(url string, ch chan<- string) {
    resp, err := http.Get(url)
    if err != nil {
        ch <- fmt.Sprintf("url: %s, err: %s ", url, err) // report the failure; never touch resp
        return
    }
    defer resp.Body.Close()
    ch <- fmt.Sprintf("url: %s, status: %s ", url, resp.Status) // put the response status into the channel
}

Output is now:

[url: http://www.google.com, status: 200 OK ]

[url: http://www.googlegoogle.com/, err: Get http://www.googlegoogle.com/: dial tcp: lookup www.googlegoogle.com: no such host ]

[url: http://www.zoogle.com, status: 500 Internal Server Error ]
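As an aside: even with this fix, the loop in the question creates a new channel per URL and blocks on <-ch before starting the next request, so the requests actually run one at a time. A minimal sketch of a genuinely concurrent fan-out over a single shared channel, reusing the corrected MakeRequest above, might look like this (results arrive in completion order, not input order):

    func main() {
        ch := make(chan string) // one shared channel for all responses

        for _, url := range urls { // start every request before reading anything
            go MakeRequest(url, ch)
        }

        for range urls { // collect exactly one message per URL
            fmt.Println(<-ch)
        }
    }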

Answer 2

The only place I would (and do) put a recover is in a "fault barrier".

A "fault barrier" is the highest available place to centrally catch problems. It is generally the point where a new goroutine gets spawned (e.g., per HTTP accept). In a ServeHTTP method, you may want to catch and log individual panics without restarting the server (usually such panics are trivial nil pointer dereferences). You might also see recovers in places where you cannot otherwise check for a condition you need to know about, such as whether a file handle is already closed (just close it and handle a possible panic).

I have a large code base that uses recover only two or three times, and only for the cases mentioned; even then, I do it only to ensure that the problem is specifically logged. I would recover just to log the message even if I was still going to os.Exit and have the script that launched me restart me.
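A minimal sketch of that last pattern, recovering only to log before exiting (run is a hypothetical stand-in for the program's real work; debug.Stack comes from runtime/debug):

    import (
        "log"
        "os"
        "runtime/debug"
    )

    func main() {
        defer func() {
            if r := recover(); r != nil {
                // Log the panic with a stack trace, then exit non-zero so a
                // supervising script can restart the process.
                log.Printf("fatal panic: %v\n%s", r, debug.Stack())
                os.Exit(1)
            }
        }()
        run() // hypothetical: the program's actual work
    }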