How to send a subprocess's stdout to a file without any buffering in Python?


I am trying to use the subprocess module to send a command's stdout to a log file. I want the user to be able to run tail -f logfile to watch the logs at the same time.

However, I have observed that the subprocess module buffers the output for a long time before writing it to the file. Is there any way to avoid this buffering behavior?

file_stdout=open("/var/log/feeder.log","w")
file_stderr=open("/var/log/feeder.err","w")
proc = subprocess.Popen("python /etc/feeder/feeder.py -i " + input_file + " -o " + output_file + " -r " + str(rate) + " -l " +str(lines), stdout=file_stdout, stderr=file_stderr, shell=True)

When I run tail -f /var/log/feeder.log I would like to see the streaming output. Any way to achieve this?

4 Answers

Best answer:

You can't pass a string like "/var/log/feeder.log" as stdout; as the docs make clear, Popen takes a file object or a file descriptor (a number).

So the problem is almost certainly the way the file was opened: the open call above leaves the buffering argument at its default. As the docs say:

When no buffering argument is given, the default buffering policy works as follows:

  • Binary files are buffered in fixed-size chunks; the size of the buffer is chosen using a heuristic trying to determine the underlying device’s “block size” and falling back on io.DEFAULT_BUFFER_SIZE. On many systems, the buffer will typically be 4096 or 8192 bytes long.

  • “Interactive” text files (files for which isatty() returns True) use line buffering. Other text files use the policy described above for binary files.

(This is the Python 3.x version of open; things are different in 2.x, but the basic problem is equivalent, and so is the solution.)

So, it's going to write in chunks of, e.g., 8192 bytes.

If you want unbuffered output, you can use buffering=0 (and of course make sure to open the file in binary mode). Or just use os.open and pass the fd, and let subprocess create its own file object.
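For example, here is a minimal sketch of both variants, using temporary paths in place of /var/log/feeder.log and a trivial inline child in place of feeder.py:

```python
import os
import subprocess
import sys
import tempfile

log_dir = tempfile.mkdtemp()
log_path = os.path.join(log_dir, "feeder.log")
err_path = os.path.join(log_dir, "feeder.err")

# buffering=0 disables Python-level buffering; it requires binary mode.
file_stdout = open(log_path, "wb", buffering=0)

# Alternatively, pass a raw file descriptor and let subprocess use it directly.
err_fd = os.open(err_path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)

proc = subprocess.Popen(
    [sys.executable, "-c", "print('hello')"],
    stdout=file_stdout,
    stderr=err_fd,
)
proc.wait()
file_stdout.close()
os.close(err_fd)

with open(log_path) as f:
    print(f.read().strip())  # hello
```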


While we're at it, you probably shouldn't be using shell=True. The shell could theoretically introduce buffering of its own, and, more importantly, it isn't doing you any good and will cause all kinds of problems if, say, any of those strings contains spaces. Also, you may want to use sys.executable instead of 'python' for the program name; that ensures the child script runs under the same Python interpreter as the parent, rather than whatever version happens to be first on the PATH.
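A sketch of the same call in list form (the variable values here are placeholders for the question's input_file, output_file, rate, and lines):

```python
import sys

# Placeholder values standing in for the question's variables:
input_file = "my input.txt"   # a name with a space is safe in list form
output_file = "output.txt"
rate = 100
lines = 10

cmd = [
    sys.executable, "/etc/feeder/feeder.py",
    "-i", input_file,
    "-o", output_file,
    "-r", str(rate),
    "-l", str(lines),
]
# Pass this straight to subprocess.Popen(cmd, stdout=..., stderr=...);
# no shell is involved, and each argument reaches the child intact.
print(cmd[:2])
```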

Another answer:

You need to pass a file object or file descriptor, not a file name, and the file must be opened in append mode:

file_stdout = open('output_log', 'a+')
file_stderr = open('error_log', 'a+')
proc = subprocess.Popen(["python", "/etc/feeder/feeder.py",
                         "-i", input_file, "-o", output_file,
                         "-r", str(rate), "-l", str(lines)],
                        stdout=file_stdout, stderr=file_stderr)


Another answer:

You are using subprocess.Popen incorrectly; pass the command as a list of arguments instead of a single string:

proc = subprocess.Popen(["python", "/etc/feeder/feeder.py",
    "-i", input_file,
    "-o", output_file,
    "-r", str(rate),
    "-l", str(lines)],
    stdout=file_stdout, stderr=file_stderr)

subprocess is not buffering anything; the called Python process does the buffering. You have to call sys.stdout.flush() in /etc/feeder/feeder.py to push the data out to the file.
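Inside feeder.py that could look something like this (a sketch; the real script's contents are unknown):

```python
import sys

def emit(line):
    # Write one log line and flush immediately so `tail -f` sees it right away.
    sys.stdout.write(line + "\n")
    sys.stdout.flush()
    return line

written = [emit("processed line %d" % i) for i in range(3)]
```

On Python 3, print(line, flush=True) achieves the same thing in one call.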

Another answer:

Do

proc = subprocess.Popen("python -u /etc/feeder/feeder.py -i " + input_file + " -o " + output_file + " -r " + str(rate) + " -l " +str(lines), stdout=file_stdout, stderr=file_stderr, shell=True)

Notice the -u: it makes the child Python interpreter's stdout and stderr unbuffered, so output reaches the log file as soon as it is written.
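A small self-contained demonstration of the effect, using a temporary file and an inline child script in place of feeder.py: with -u, the child's first line of output is visible in the log while the child is still running.

```python
import os
import subprocess
import sys
import tempfile
import time

log_path = os.path.join(tempfile.mkdtemp(), "demo.log")
# The child prints one line, then stays alive; without -u that line would sit
# in the child's stdio buffer, because its stdout is a file, not a tty.
child_code = "import time; print('started'); time.sleep(30)"

with open(log_path, "wb") as log:
    proc = subprocess.Popen([sys.executable, "-u", "-c", child_code], stdout=log)
    early_output = ""
    deadline = time.time() + 10
    while time.time() < deadline and "started" not in early_output:
        time.sleep(0.1)
        with open(log_path) as f:
            early_output = f.read()
    proc.terminate()
    proc.wait()

print(repr(early_output))
```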