I have a case where I want to use input from a file as the format for printf()
in awk. My formatting works when I set it in a string within the code, but it doesn't work when I load it from input.
Here's a tiny example of the problem:
$ # putting the format in a variable works just fine:
$ echo "" | awk -vs="hello:\t%s\n\tfoo" '{printf(s "bar\n", "world");}'
hello: world
foobar
$ # But getting the format from an input file does not.
$ echo "hello:\t%s\n\tfoo" | awk '{s=$0; printf(s "bar\n", "world");}'
hello:\tworld\n\tfoobar
$
So ... format substitutions work ("%s"), but not special characters like tab and newline. Any idea why this is happening? And is there a way to "do something" to input data to make it usable as a format string?
UPDATE #1:
As a further example, consider the following using bash here-strings:
[me@here ~]$ awk -vs="hello: %s\nworld: %s\n" '{printf(s, "foo", "bar");}' <<<""
hello: foo
world: bar
[me@here ~]$ awk '{s=$0; printf(s, "foo", "bar");}' <<<"hello: %s\nworld: %s\n"
hello: foo\nworld: bar\n[me@here ~]$
As far as I can see, the same thing happens with multiple different awk interpreters, and I haven't been able to locate any documentation that explains why.
UPDATE #2:
The code I'm trying to replace currently looks something like this, with nested loops in shell. At present, awk is only being used for its printf, and could be replaced with a shell-based printf:
#!/bin/sh
while read -r fmtid fmt; do
while read cid name addy; do
awk -vfmt="$fmt" -vcid="$cid" -vname="$name" -vaddy="$addy" \
'BEGIN{printf(fmt,cid,name,addy)}' > /path/$fmtid/$cid
done < /path/to/sampledata
done < /path/to/fmtstrings
Example input would be:
## fmtstrings:
1 ID:%04d Name:%s\nAddress: %s\n\n
2 CustomerID:\t%-4d\t\tName: %s\n\t\t\t\tAddress: %s\n
3 Customer: %d / %s (%s)\n
## sampledata:
5 Companyname 123 Somewhere Street
12 Othercompany 234 Elsewhere
My hope was that I'd be able to construct something like this to do the entire thing with a single call to awk, instead of having nested loops in shell:
awk '
NR==FNR { fmts[$1]=$2; next; }
{
for(fmtid in fmts) {
outputfile=sprintf("/path/%d/%d", fmtid, custid);
printf(fmts[fmtid], $1, $2) > outputfile;
}
}
' /path/to/fmtstrings /path/to/sampledata
Obviously, this doesn't work, both because of the actual topic of this question and because I haven't yet figured out how to elegantly make awk join $2..$n into a single variable. (But that's the topic of a possible future question.)
FWIW, I'm using FreeBSD 9.2 with its built-in awk, but I'm open to using gawk if a solution can be found with that.
Why so lengthy and complicated an example? This demonstrates the problem:
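For instance, a minimal pair along these lines (a stand-in for the original demonstration, using the "a\t%s" format referred to below):
$ awk -v s='a\t%s' 'BEGIN{printf s "\n", "b"}'
a	b
$ echo 'a\t%s' | awk '{printf $0 "\n", "b"}'
a\tb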
In the first case, the string "a\t%s" is a string literal and so is interpreted twice - once when the script is read by awk and then again when it is executed, so the \t is expanded on the first pass and then at execution awk has a literal tab char in the formatting string. In the second case awk still has the characters backslash and t in the formatting string - hence the different behavior.
You need something to interpret those escaped chars and one way to do that is to call the shell's printf and read the results (corrected per @EtanReiser's excellent observation that I was using double quotes where I should have had single quotes, implemented here by \047, to avoid shell expansion):
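For instance, something along these lines (a sketch of the idea rather than the exact script; the getline loop just reassembles the multi-line result):
$ echo 'hello:\t%s\n\tfoo' | awk '{
    # let the shell printf expand the escapes and fill in the %s, then read
    # the result back; \047 is a single quote, so the data is not subject to
    # shell expansion
    cmd = "printf \047" $0 "bar\\n\047 \047world\047"
    res = ""
    while ((cmd | getline line) > 0)
        res = res line "\n"
    close(cmd)
    printf "%s", res
}'
hello:	world
	foobar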
If you don't need the result in a variable, you can just call system(). If you just wanted the escape chars expanded, so you don't need to provide the %s args in the shell printf call, you'd just need to escape all the % chars (watching out for already-escaped % chars). You could call awk instead of the shell printf if you prefer. Note that this approach, while clumsy, is much safer than calling an eval, which might just execute an input line like rm -rf /*.*!
With help from Arnold Robbins (the creator of gawk), and Manuel Collado (another noted awk expert), here is a script which will expand single-character escape sequences; an alternative that should be functionally equivalent but not gawk-specific is sketched below:
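A portable sketch of that idea (the function name, the escape table, and the sample invocation are choices made for this sketch, not their original code):
echo 'hello:\t%s\n\tfoo' | awk '
# expandEscapes: turn single-character escape sequences such as \t and \n
# in a runtime string into the characters they stand for
function expandEscapes(s,    out, i, n, c, nxt, esc) {
    esc["n"] = "\n"; esc["t"] = "\t"; esc["r"] = "\r"
    esc["b"] = "\b"; esc["f"] = "\f"; esc["v"] = "\v"
    esc["\\"] = "\\"; esc["\""] = "\""; esc["/"] = "/"
    out = ""; n = length(s)
    for (i = 1; i <= n; i++) {
        c = substr(s, i, 1)
        nxt = substr(s, i + 1, 1)
        if (c == "\\" && nxt in esc) { out = out esc[nxt]; i++ }
        else                           out = out c
    }
    return out
}
{ printf(expandEscapes($0) "bar\n", "world") }'
Fed the sample line from the question, this prints the same tab-and-newline-expanded result that the -v version produced.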
If you care to, you can expand the concept to octal and hex escape sequences by changing the split() RE to also recognize them, and then converting the digits that follow the \\ - a hex value or an octal value - into the character they represent.
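For instance, a gawk-only sketch of the numeric part, using match() and strtonum() here rather than extending a split() RE:
gawk '
function expandNumEscapes(s,    out, digits) {
    # \xHH (hex) or \NNN (octal) -> the byte it names
    while (match(s, /\\(x[0-9a-fA-F]{1,2}|[0-7]{1,3})/)) {
        digits = substr(s, RSTART + 1, RLENGTH - 1)       # e.g. "x41" or "101"
        out = out substr(s, 1, RSTART - 1) sprintf("%c", strtonum("0" digits))
        s = substr(s, RSTART + RLENGTH)
    }
    return out s
}
BEGIN { print expandNumEscapes("A\\x42\\103") }   # prints ABC
'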
That's a cool question. I don't know the answer in awk, but in perl you can use eval:
PS. Be aware of the code injection danger when you use eval in any language; not just eval - any system call can't be done blindly.
Example in Awk:
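A sketch of the danger, with a harmless $(id) standing in for anything nastier:
$ echo 'hello $(id)' | awk '{ system("printf \"" $0 "\\n\"") }'
Here the input line lands inside the shell's double quotes, so the embedded $(id) is executed as a command - exactly the kind of injection the single-quote (\047) version above avoids.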
What if the input was $(rm -rf /)? You can guess what would happen :)
ikegami adds:
Why would you even think of using eval to convert \n to newlines and \t to tabs?
Short version:
What you are trying to do is called templating. I would suggest that shell tools are not the best tools for this job. A safe way to go would be to use a templating library such as Template Toolkit for Perl, or Jinja2 for Python.
Graham,
Ed Morton's solution is the best (and perhaps only) one available.
I'm including this answer for a better explanation of WHY you're seeing what you're seeing.
A string is a string. The confusing part here is WHERE awk does the translation of \t to a tab, \n to a newline, etc. It appears NOT to be the case that the backslash and t get translated when used in a printf format. Instead, the translation happens at assignment, so that awk stores the tab as part of the format rather than translating when it runs the printf.
And this is why Ed's function works. When read from stdin or a file, no assignment is performed that will implement the translation of special characters. Once you run the command s="a\tb"; in awk, you have a three character string containing no backslash or t.
Evidence: compare what awk reports as the length of the string when it is assigned as a literal vs when the same four characters are read from input, as in the sketch below. And there you go.
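For instance (not the original commands, just one way to see it):
$ awk 'BEGIN { s = "a\tb"; print length(s) }'    # assigned in the program
3
$ printf 'a\\tb\n' | awk '{ print length($0) }'  # the same four characters read as input
4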
As I say, Ed's answer provides an excellent function for what you need. But if you can predict what your input will look like, you can probably get away with a simpler solution. Knowing how this stuff gets parsed, if you have a limited set of characters you need to translate, you may be able to survive with something simple like:
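Perhaps something like this sketch, which handles only \t and \n:
$ echo 'hello:\t%s\n\tfoo' | awk '{
    s = $0
    gsub(/\\t/, "\t", s)     # literal backslash-t -> tab
    gsub(/\\n/, "\n", s)     # literal backslash-n -> newline
    printf(s "bar\n", "world")
}'
hello:	world
	foobar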
The problem lies in the non-interpretation of the special characters \t and \n by echo: it makes sure that they are understood as as-is strings, and not as tabulations and newlines. This behavior can be controlled by the -e flag you give to echo, without changing your awk script at all: tada!! :)
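For instance, with bash's builtin echo and the question's awk script unchanged (the \n of the original format is swapped for \t in this sketch, because once -e turns \n into a real newline, awk would see two separate records):
$ echo -e "hello:\t%s\tfoo" | awk '{s=$0; printf(s "bar\n", "world");}'
hello:	world	foobar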
EDIT: Ok, so after the point rightfully raised by Chrono, we can devise this other answer corresponding to the original request to have the pattern read from a file:
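For instance (myfile standing in for the file that holds the format string):
$ echo 'hello:\t%s\n\tfoo' > myfile
$ echo "" | awk -v s="$(echo -e "$(cat myfile)")" '{printf(s "bar\n", "world");}'
hello:	world
	foobar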
Of course in the above we have to be careful with the quoting, as the $(cat myfile) is not seen by awk but interpreted by the shell.
@Ed Morton's answer explains the problem well.
A simple workaround is to read all the format strings into an awk variable, using command substitution, and have awk itself split them back into individual format strings (see the sketch below).
Using GNU awk or mawk:
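Something along these lines, perhaps (sampleData, the handling of the leading format id, and the /path/<fmtid>/<custid> output layout from the question are assumptions here; the target directories must already exist):
awk -v formats="$(tr '\n' '\3' <fmtStrings)" '
BEGIN {
    # awk expanded \t and \n while assigning -v formats=...; now split the
    # combined string back apart on the \3 bytes, one format string each
    n = split(formats, aFormats, "\3")
    for (i = 1; i <= n; i++) {
        if ((sp = index(aFormats[i], " ")) == 0) continue   # skip empty last piece
        fmts[substr(aFormats[i], 1, sp - 1)] = substr(aFormats[i], sp + 1)
    }
}
{
    custid = $1; name = $2
    addr = $3
    for (i = 4; i <= NF; i++) addr = addr " " $i
    for (fmtid in fmts)
        printf(fmts[fmtid], custid, name, addr) > ("/path/" fmtid "/" custid)
}
' sampleData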
Note: With BWK awk (the awk that OS X and FreeBSD ship), this almost works, but - sadly - split() still splits by newlines, despite being given an explicit separator - this smells like a bug. Observed on versions 20070501 (OS X 10.9.4) and 20121220 (FreeBSD 10.0).
Explanation:
tr '\n' '\3' <fmtStrings replaces actual newlines in the format-strings file with \3 (0x3) characters, so as to be able to later distinguish them from the \n escape sequences embedded in the lines, which awk turns into actual newlines when assigning to variable formats (as desired). \3 (0x3) - the ASCII end-of-text char. - was arbitrarily chosen as an auxiliary separator that is assumed not to be present in the input file. Note that using \0 (NUL) is NOT an option, because awk interprets that as an empty string, causing split() to split the string into individual characters.
In the BEGIN block of the awk script, split(formats, aFormats, "\3") then splits the combined format strings back into individual format strings.