I have no problem remembering what redirection is, nor that it (typically) involves some combination of the symbols <, >, &, 1, and 2. I can recognize it when I see it, know when I want to use it, but then always have to look up what order the symbols go in.
Since writing things down is how I learn and remember them, I thought this was worth writing about (which is my reason for nearly every post I write).
Each Linux process has three default file descriptors allocated by the kernel: stdin, stdout, and stderr (0, 1, and 2 respectively). In Bash (and POSIX sh) we can redirect stdin and stdout from/to files with < and > respectively.
# Redirect command's output to output.txt
command >output.txt
# Redirect input to command from input.txt
command <input.txt
When you use the > output redirection, the output file is overwritten. If you want to append to the output file, use >>.
# Append command's output to output.txt
command >>output.txt
You can redirect a command's stderr to a file with 2>.
# Redirect command's stderr to errors.txt
command 2>errors.txt
This is often used to silence error output by redirecting it to /dev/null.
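For example:
# Discard command's error output entirely
command 2>/dev/null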
Being able to redirect a command's stdin from a file is useful, but writing the input to a file first, just so you can redirect that file into the command, is often too much overhead. Instead, you can use a here document:
$ cat <<EOF
> Line 1
> Line 2
> EOF
Line 1
Line 2
Any indentation inside the heredoc is preserved in the output. Sometimes this is undesirable, so you can use command <<-EOF to strip leading tabs. Note that they do have to be hard tabs; if that doesn't suit you, you can use multi-line strings instead.
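A sketch of the tab-stripping form (the leading whitespace before each line below must be a single hard tab):
$ cat <<-EOF
> 	Line 1
> 	Line 2
> 	EOF
Line 1
Line 2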
You can use here strings too. These are single-string stdin redirections, useful for sending the value of a variable to a command. Note that a here string will have a single newline appended.
$ VAR=test
$ cat <<<"$VAR"
test
So long as you're redirecting stdout and stderr to files (as opposed to other file descriptors), redirecting them independently is easy.
command >stdout.txt 2>stderr.txt
You can redirect all of a command's output (both stdout and stderr) to a file with >&file (in Bash) or >file 2>&1 (in POSIX sh). While less sightly, I recommend using the latter, as it's POSIX sh compliant, and you can swap the 1 and 2 to redirect stdout to stderr instead.
You can use this to do logging in a script where the stdout output is intended to be machine-readable.
function log() {
    echo "[$(date --utc --iso-8601=seconds)] $*" >&2
}
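Calling it produces something like this (the timestamp shown here is just illustrative):
log "starting build"
# => [2024-01-01T00:00:00+00:00] starting build   (written to stderr)
Because the message goes to stderr, it still reaches the terminal even when the script's stdout is redirected to a file or piped elsewhere.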
Although, if you're looking for real logging, you should probably use something off-the-shelf instead of reinventing the wheel again.
You can use this to capture all output from a command and save it to a file (although maybe consider capturing stdout and stderr separately), but there's a gotcha with redirection ordering. To redirect stderr to stdout, and then redirect stdout to a file, you would do
command >output.txt 2>&1
Whereas, if you did command 2>&1 >output.txt, only command's stdout would be directed to output.txt, because after making stderr a copy of stdout, you redirect stdout!
In general, the n>&m (or n<&m for input redirection) syntax, for file descriptors n and m, means "make the n file descriptor be a copy of m". If m is -, then the n file descriptor is closed. If n isn't given, it's assumed to be stdin for input redirection, or stdout for output redirection.
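A sketch of these forms, using file descriptor 3 as a scratch descriptor (log.txt is just an arbitrary file name here):
# Save stdout on fd 3, point stdout at a file, then restore it and close fd 3
exec 3>&1              # fd 3 is now a copy of stdout
exec 1>log.txt         # stdout now goes to log.txt
echo "this ends up in log.txt"
exec 1>&3              # make stdout a copy of the saved descriptor again
exec 3>&-              # close fd 3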
The syntax n>&m- moves (as opposed to copies, as above with n>&m) the m file descriptor to n; that is, m is duplicated onto n and then closed. I might not be creative enough, but I can't think of a reason you'd do this. According to this Unix Stack Exchange answer, moving file descriptors is rare, non-standard, complicated, and hard to follow.
You can do exec 6<input.txt to open input.txt for reading with file descriptor 6. You can then use all of the same redirections with the 6 file descriptor.
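For example, a sketch of reading from that descriptor a couple of lines at a time and then closing it:
exec 6<input.txt        # open input.txt for reading on fd 6
read -r first <&6       # read the first line from fd 6
read -r second <&6      # the next read continues where the previous one stopped
echo "$first / $second"
exec 6<&-               # close fd 6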
When you pipe two commands together with command1 | command2, you are, in a very real sense, piping command1's stdout into command2's stdin. But what if you want to pipe both command1's stdout and stderr to command2? You can do either command1 2>&1 | command2 or, if using Bash, command1 |& command2 (similar to the >&file shorthand above).
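For example, assuming a build.sh that prints warnings on stderr, either form lets you grep across both streams:
# POSIX sh
./build.sh 2>&1 | grep -i warning
# Bash shorthand
./build.sh |& grep -i warning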
You can use the plog function from Raimon Grau's Shell Field Guide to introspect stderr and stdout for pipelines:
function plog() {
    local label="${1:-plog}"
    tee >(sed "s/^/[$label] /" 1>&2)
}
which you can use to inspect what a particular phase of a pipeline is doing. This can be invaluable!
$ seq 0 4 |
plog seq |
tr '0-9' 'a-z'
a
b
c
d
e
[seq] 0
[seq] 1
[seq] 2
[seq] 3
[seq] 4
TODO: I'd rather the plog output be interwoven with the tr output.
What if you want to save a command's stderr to a file while still printing it? This requires process substitution, but can be done with
command 2> >(tee stderr.txt >&2)
This uses 2> to redirect stderr to a "file", while the >() process substitution supplies a filename connected to tee's stdin. Note that tee writes its input back to stdout, so we redirect its output with >&2 so that command's stderr and stdout aren't merged.
This could be useful for saving compiler stderr output to a file for easy diagnostics while continuing to output it (on stderr even, to minimize disruption to anything else that might consume its output) to the console.
./build.sh 2> >(tee compiler-warnings.txt >&2)
If you want to use this technique in a pipeline, you need to pay special attention to the order of the redirections. Compare the two following examples.
$ cat test.sh
#!/bin/bash
echo "stderr" >&2
echo "stdout"
$ ./test.sh 2> >(tee stderr.txt >&2) 2>&1 | cat
stderr
stdout
$ cat stderr.txt
$ wc -l stderr.txt
0 stderr.txt
$ # This is the one you probably want.
$ ./test.sh 2>&1 2> >(tee stderr.txt >&2) | cat
stderr
stdout
$ cat stderr.txt
stderr
$ wc -l stderr.txt
1 stderr.txt
You can create a named pipe (a name on the filesystem!) with mkfifo pipename. You may then read from, or write to, the pipename pipe. Since it's a special file on the filesystem, all the regular file permissions apply.
$ mkfifo /tmp/example-pipe
$ ls -l /tmp/example-pipe
prw-rw-r-- 1 nots nots 0 Nov 21 14:23 /tmp/example-pipe
$ echo "test" >/tmp/example-pipe
Notice that this echo command blocks! This is because the output hasn't been read yet. So we can either run echo "test" >/tmp/example-pipe & in the background, or read from /tmp/example-pipe from another terminal with
$ cat /tmp/example-pipe
test
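Or, to keep everything in one terminal, run the writer in the background so its blocking doesn't matter:
# The writer blocks in the background until the reader opens the pipe
echo "test" >/tmp/example-pipe &
cat /tmp/example-pipe   # prints "test" and unblocks the writer
rm /tmp/example-pipe    # it's a plain filesystem entry, so clean it up like any other file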
Redirections also work on compound commands, so you can feed a file into a while read loop line by line:
while read line; do
    echo "$line"
done <input.txt
or, using a pipe instead:
cat input.txt | while read line; do
    echo "$line"
done
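One difference between the two forms worth knowing: the piped version runs the loop in a subshell, so variables set inside it don't survive. A quick sketch, assuming an input.txt with three lines:
count=0
while read line; do
    count=$((count + 1))
done <input.txt
echo "$count"   # prints 3 - the loop ran in the current shell
count=0
cat input.txt | while read line; do
    count=$((count + 1))
done
echo "$count"   # prints 0 - the loop ran in a subshell, so count was never updated here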
There are also some special files you can redirect to and from:
/dev/kmsg - write to the kernel log. Useful for troubleshooting long boot times caused by init scripts.
/dev/fd/$fd - a Bashism to read/write to/from file descriptor $fd
/dev/{stdin,stdout,stderr} - a synonym for /dev/fd/{0,1,2}
/dev/tcp/$host/$port - Bash attempts to open the corresponding $host:$port TCP socket
/dev/udp/$host/$port - same as /dev/tcp/, but with UDP
To try the TCP one, start a listener in terminal #1:
nc -l localhost 1234
Then, in another terminal, do
echo "test" >/dev/tcp/localhost/1234
This is similar to echo "test" | nc localhost 1234, except that it closes the connection once the write finishes. It's equivalent to echo "test" | nc -N localhost 1234.
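These pseudo-devices work for reading as well. As a rough sketch (assuming example.com is reachable on port 80), you can speak just enough HTTP using nothing but redirection; the <> operator opens a descriptor for both reading and writing:
# Open a bidirectional TCP connection on fd 3, send a request, and read the reply
exec 3<>/dev/tcp/example.com/80
printf 'GET / HTTP/1.0\r\nHost: example.com\r\n\r\n' >&3
cat <&3      # prints the HTTP response
exec 3<&-    # close the connection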