I'd like to know the difference (with examples if possible) between CR LF (Windows), LF (Unix) and CR (Macintosh) line break types.
Since there's no answer stating just this, summarized succinctly:
- CR: Carriage Return (Mac pre-OSX)
- LF: Line Feed (Unix, Mac OSX)
- CR LF: Carriage Return and Line Feed (Windows)
If you see ASCII code in a strange format, they are merely the number 13 and 10 in a different radix/base, usually base 8 (octal) or base 16 (hexadecimal).
http://www.bluesock.org/~willg/dev/ascii.html
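To make the "different radix" point concrete, here is a small Python sketch (my own illustration, not from the linked table) that prints CR and LF in decimal, octal, and hex, and shows how the three conventions look as raw string data:

```python
# CR and LF are just the bytes 13 and 10; only the base they are
# written in changes (octal 15/12, hex 0D/0A).
for name, ch in [("CR", "\r"), ("LF", "\n")]:
    code = ord(ch)
    print(f"{name}: decimal {code}, octal 0o{code:o}, hex 0x{code:02X}")

print("Windows:", repr("line1\r\nline2"))  # CR LF
print("Unix:   ", repr("line1\nline2"))    # LF
print("Old Mac:", repr("line1\rline2"))    # CR
```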
Here are the details:
The sad state of "record separators" or "line terminators" is a legacy of the dark ages of computing.
Now, we take it for granted that anything we want to represent is in some way structured data and conforms to various abstractions that define lines, files, protocols, messages, markup, whatever.
But once upon a time this wasn't exactly true. Applications built in control characters and device-specific processing. The brain-dead systems that required both CR and LF simply had no abstraction for record separators or line terminators. The CR was necessary to get the teletype or video display to return to column one, and the LF (today NL, same code) was necessary to get it to advance to the next line. I guess the idea of doing something other than dumping the raw data to the device was too complex.
Unix and Mac actually specified an abstraction for the line end, imagine that. Sadly, they specified different ones. (Unix, ahem, came first.) And naturally, they used a control code that was already "close" to S.O.P.
Since almost all of our operating software today is a descendent of Unix, Mac, or MS operating SW, we are stuck with the line ending confusion.
Jeff Atwood has a recent blog post about this: The Great Newline Schism
Here is the essence from Wikipedia:
CR and LF are control characters, respectively coded 0x0D (13 decimal) and 0x0A (10 decimal). They are used to mark a line break in a text file. As you indicated, Windows uses two characters, the CR LF sequence; Unix only uses LF, and the old Mac OS (pre-OSX Macintosh) used CR.
An apocryphal historical perspective:
As indicated by Peter, CR = Carriage Return and LF = Line Feed, two expressions that have their roots in old typewriters / TTYs. LF moved the paper up (but kept the horizontal position identical) and CR brought back the "carriage" so that the next character typed would be at the leftmost position on the paper (but on the same line). CR+LF did both, i.e. prepared to type a new line. As time went by, the physical semantics of the codes no longer applied, and since memory and floppy disk space were at a premium, some OS designers decided to use only one of the characters; they just didn't communicate very well with one another ;-)
Most modern text editors and text-oriented applications offer options/settings etc. that allow the automatic detection of the file's end-of-line convention and to display it accordingly.
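As a rough sketch of how such auto-detection can work (a simplification I'm adding for illustration; real editors also handle files that mix conventions), an application can just look at which byte sequences appear, checking CR LF first since it contains both single-byte conventions:

```python
# Naive end-of-line detection: check the two-byte CRLF sequence
# before the single-byte LF and CR conventions.
def detect_eol(data: bytes) -> str:
    if b"\r\n" in data:
        return "CRLF (Windows)"
    if b"\n" in data:
        return "LF (Unix)"
    if b"\r" in data:
        return "CR (old Mac)"
    return "no line breaks"

print(detect_eol(b"a\r\nb"))  # CRLF (Windows)
print(detect_eol(b"a\nb"))    # LF (Unix)
print(detect_eol(b"a\rb"))    # CR (old Mac)
```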
It's really just about which bytes are stored in a file.
CR is a byte code for carriage return (from the days of typewriters) and LF, similarly, for line feed. They just refer to the bytes that are placed as end-of-line markers.

Way more information, as always, on Wikipedia.
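To illustrate the point that only the stored bytes differ: Python's "universal newlines" mode (the default when reading text) translates all three conventions to "\n", so the same logical lines come back regardless of which convention the file used:

```python
import io

# The same two logical lines stored with three different
# end-of-line conventions; universal newlines normalizes all of them.
for raw in (b"one\r\ntwo", b"one\ntwo", b"one\rtwo"):
    text = io.TextIOWrapper(io.BytesIO(raw)).read()
    print(text.splitlines())  # ['one', 'two'] in every case
```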