Is it necessary to close the Body in the http.Response object in golang?

Andrii Kushch
7 min read · Mar 1, 2021

Intro

In this article, I want to answer one question about the Go http package: is it necessary to close the Body in the http.Response object?

First, let’s make the question more formal. Take a look at the following code example:

response, err := http.Get(url)

This line sends an HTTP GET request to url using the default Go HTTP client. There are some caveats to using the default client, but that is not the topic of this article.

According to the Go documentation, if the Body inside the response is not closed and read to EOF, the client may not reuse a persistent TCP connection to the server. Let's investigate what this means and what problems it can cause.

Despite the title of this article, I would like to answer a broader question: how do we handle a response correctly? Is it necessary to both close and read the Body? Is it enough to do only one of these actions? Or can we just ignore it?

Code

The Go version I will use in this article is go1.16 linux/amd64, and the OS is Ubuntu 18.04.3 LTS (kernel 4.15.0-58-generic).

I have written some example code that we can use to investigate the behavior of each approach and see what kind of issues each of them can cause.

Here is a simple web application (server.go) we will use for our experiment:

package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
)

const url = "localhost:8080"
const endpoint = "/endpoint"

func main() {
	fmt.Printf("pid: %d\n", os.Getpid())

	http.HandleFunc(endpoint, func(writer http.ResponseWriter, request *http.Request) {
		_, err := writer.Write([]byte("OK"))
		if err != nil {
			log.Fatalln(err)
		}
	})

	if err := http.ListenAndServe(url, nil); err != nil {
		log.Fatalln(err)
	}
}

This program starts a web server that listens for incoming requests at http://localhost:8080/endpoint. It returns a response with status code 200 and the string OK in the response body.

Here is the client (client.go) that I will use to send requests to this web server.

package main

import (
	"flag"
	"fmt"
	"io"
	"io/ioutil"
	"log"
	"net/http"
	"os"
)

const url = "localhost:8080"
const endpoint = "/endpoint"

func main() {
	fmt.Printf("pid: %d\n", os.Getpid())

	t := flag.String("type", "readandclose", `specify the type of request to send:
close - send and close
read - send and read
readandclose - send, read and close
nothing - send
`)
	n := flag.Int("number", 5, `specify the number of requests to send`)
	flag.Parse()

	var fn func() error

	switch *t {
	case "close":
		fn = makeRequestAndCloseBody
	case "read":
		fn = makeRequestAndReadBody
	case "readandclose":
		fn = makeRequestAndReadAndCloseBody
	case "nothing":
		fn = makeRequest
	default:
		log.Fatalln("unknown request type")
	}

	for i := 0; i < *n; i++ {
		if err := fn(); err != nil {
			log.Fatalln(err)
		}
	}
}

func makeRequestAndReadAndCloseBody() error {
	res, err := http.Get("http://" + url + endpoint)
	if err != nil {
		return err
	}
	defer res.Body.Close()

	_, err = io.Copy(ioutil.Discard, res.Body)
	return err
}

func makeRequestAndReadBody() error {
	res, err := http.Get("http://" + url + endpoint)
	if err != nil {
		return err
	}

	_, err = io.Copy(ioutil.Discard, res.Body)
	return err
}

func makeRequestAndCloseBody() error {
	res, err := http.Get("http://" + url + endpoint)
	if err != nil {
		return err
	}

	return res.Body.Close()
}

func makeRequest() error {
	_, err := http.Get("http://" + url + endpoint)
	return err
}

The client will accept two parameters:

  • the type [close, read, nothing, readandclose] defines how the response is handled: only close, only read, do nothing, or read and close.
  • the number is how many requests to send.

Tools

Before starting the experiment, let’s say a few words about what we expect and how we can observe it.

The expectation is that incorrect response handling will create a resource leak. There are a few ways to observe it.

Let $PID be the process ID of the process we observe.

procfs

Using procfs, if your system supports it, you can get information about open file descriptors and the network state. Inside the directory /proc/$PID/fd, you can see all file descriptors associated with a process.

vagrant@vagrant:~$ ls -la /proc/1482/fd
total 0
dr-x------ 2 vagrant vagrant 0 Feb 27 16:02 .
dr-xr-xr-x 9 vagrant vagrant 0 Feb 27 16:01 ..
lrwx------ 1 vagrant vagrant 64 Feb 27 16:02 0 -> /dev/pts/1
lrwx------ 1 vagrant vagrant 64 Feb 27 16:02 1 -> /dev/pts/1
lrwx------ 1 vagrant vagrant 64 Feb 27 16:02 2 -> /dev/pts/1
lrwx------ 1 vagrant vagrant 64 Feb 27 16:02 3 -> 'anon_inode:[eventpoll]'
lr-x------ 1 vagrant vagrant 64 Feb 27 16:02 4 -> 'pipe:[23349]'
l-wx------ 1 vagrant vagrant 64 Feb 27 16:02 5 -> 'pipe:[23349]'
lrwx------ 1 vagrant vagrant 64 Feb 27 16:02 6 -> 'socket:[23356]'

The file /proc/net/tcp contains information about all TCP connections. The format is described here.

vagrant@vagrant:~$ cat /proc/net/tcp
sl local_address rem_address st tx_queue rx_queue tr tm->when retrnsmt uid timeout inode
...
4: 0100007F:ABDA 0100007F:1F90 01 00000000:00000076 02:000000E2 00000000 1000 0 23356 4 0000000000000000 20 4 1 10 -1
5: 0100007F:1F90 0100007F:ABDA 01 00000076:00000000 01:00000014 00000000 1000 0 23357 4 0000000000000000 20 4 1 10 -1
...

lsof and netstat

There is another way to do the same thing: the lsof and netstat utilities. The output will look something like this:

vagrant@vagrant:~$ lsof -p 1482
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
client 1482 vagrant cwd DIR 0,50 224 121 /vagrant/client
client 1482 vagrant rtd DIR 253,0 4096 2 /
client 1482 vagrant txt REG 0,50 6416745 137 /vagrant/client/client
client 1482 vagrant mem REG 253,0 47568 2097645 /lib/x86_64-linux-gnu/libnss_files-2.27.so
client 1482 vagrant mem REG 253,0 2030544 2097578 /lib/x86_64-linux-gnu/libc-2.27.so
client 1482 vagrant mem REG 253,0 144976 2097665 /lib/x86_64-linux-gnu/libpthread-2.27.so
client 1482 vagrant mem REG 253,0 170960 2097554 /lib/x86_64-linux-gnu/ld-2.27.so
client 1482 vagrant 0u CHR 136,1 0t0 4 /dev/pts/1
client 1482 vagrant 1u CHR 136,1 0t0 4 /dev/pts/1
client 1482 vagrant 2u CHR 136,1 0t0 4 /dev/pts/1
client 1482 vagrant 3u a_inode 0,13 0 9567 [eventpoll]
client 1482 vagrant 4r FIFO 0,12 0t0 23349 pipe
client 1482 vagrant 5w FIFO 0,12 0t0 23349 pipe
client 1482 vagrant 6u IPv4 23356 0t0 TCP localhost:43994->localhost:http-alt (ESTABLISHED)
vagrant@vagrant:~$ netstat -nptuxo
...
tcp 118 0 127.0.0.1:43994 127.0.0.1:8080 ESTABLISHED 1482/./client keepalive (27.78/0/0)
tcp 0 118 127.0.0.1:8080 127.0.0.1:43994 ESTABLISHED 1438/./server on (0.20/0/0)
...

The Experiment

To conduct the experiment, we need to build and start the web application.

go build server.go
./server

Then build and run the client with different parameters, checking each time which resources it uses.

go build client.go

Do Nothing:

./client --number 1000000 --type nothing

In my case, the number of file descriptors for the client and server processes went up to 1024.

vagrant@vagrant:~$ ls -la /proc/$PID/fd | wc -l
1027
# 1024 + 3 lines (".", "..", and "total 0")

Simultaneously, on the server side, I saw an error:

Accept error: accept tcp 127.0.0.1:8080: accept4: too many open files; retrying in 1s

It means I have reached the maximum number of open file descriptors per process. You can check this limit in your environment with the following command:

vagrant@vagrant:~$ ulimit -n
1024

What does it mean? Neither reading nor closing the Body is the wrong approach. Your program will create new connections until the limit of open descriptors is reached. After that, you cannot send any new requests until the underlying OS frees resources by closing connections.

Close

./client --number 1000000 --type close

If we close the Body but do not read it, we see that the process does not accumulate a massive number of file descriptors. Instead, it creates and closes a new one for each request.

You can see it in the fd list of the client process (the number 23356 changes each time):

vagrant@vagrant:~$ ls -la  /proc/1482/fd
total 0
...
lrwx------ 1 vagrant vagrant 64 Feb 27 16:02 6 -> 'socket:[23356]'

Or in the DEVICE column of the lsof output:

vagrant@vagrant:~$ lsof -p 1482
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
client 1482 vagrant 6u IPv4 23356 0t0 TCP localhost:43994->localhost:http-alt (ESTABLISHED)

Another interesting thing: if you run the netstat command, you will see many connections in the TIME_WAIT state. But is that a problem?

First of all, TIME_WAIT is a valid and useful state for a connection. It prevents several potential issues in network communication. More information is in RFC 1337.

Nevertheless:

  • It consumes some memory because this information has to be stored in the system.
  • It reduces the number of free ports that the system can use.
  • It might create additional CPU load while the system searches for a free port number that the program can use.

For most applications, this will not be a problem. If it is, there are ways to reconfigure your system to mitigate the issue.

Read

./client --number 1000000 --type read

When I only read the Body but did not close it, only one connection was created and reused for all requests, which is good.

Important: we should read the Body completely. If it is read only partially, many file descriptors will remain open, which causes the same problem. You can try it by modifying the makeRequestAndReadBody function as follows:

func makeRequestAndReadBody() error {
	res, err := http.Get("http://" + url + endpoint)
	if err != nil {
		return err
	}

	var oneByteBuff [1]byte
	_, err = io.ReadFull(res.Body, oneByteBuff[:])

	return err
}

Read and close

./client --number 1000000 --type readandclose

In this case, I got the same result as in the Read scenario, with one advantage: even if the Body is not read to the end, there is no issue with open file descriptors. This approach is preferable.

Conclusion

In this article, I have shown the results of different approaches to handling the response Body from a default HTTP call in Go. I tried four scenarios:

  • do nothing with Body
  • read Body
  • close Body
  • read and close Body

The read-and-close scenario is the winner; the other scenarios failed under various circumstances. This result matches the official documentation.

