Can someone tell me the very basics of how computer programming works?

Posted 2019-01-31 01:47

Question:

What makes all the words of a programming language actually do anything? I mean, what's actually happening to make the computer know what all of those words mean? If I verbally tell my computer to do something, it doesn't do it, because it doesn't understand. So how exactly can these human words written into a language actually cause the computer to do some desirable activity?

Answer 1:

It all starts with the CPU or processor. Each processor type has a defined set of instructions it's able to perform. These instructions operate over ones and zeroes, which in turn represent whatever you wish them to: numbers, letters, even the instructions themselves.

At the lowest level, a zero is represented by the presence of a certain voltage (usually near 0V) at a transistor, and a one by the presence of a different voltage (CPU dependent, say 5V).

The machine instructions themselves are sets of zeroes and ones placed in special locations in the processor called registers. The processor takes an instruction and its operands from specific locations, performs the operation, and places the result in yet another location; then it fetches the next instruction, and so on, until it runs out of instructions to perform or is turned off.

A simple example. Let's say the machine instruction 001 means add two numbers.

Then you write a program that adds two numbers, usually like this:

4 + 5

Then you pass this text to a compiler, which will generate the adequate machine code for the processor you will run the program on. (Sidenote: you can compile code to be run on a different processor from the one you are currently running on; this process is called cross compilation, and it's useful, for instance, on embedded platforms.) The compiler will end up generating, roughly,

001 00000100 00000101

along with additional boilerplate machine code to get the 001 instruction to where the instruction pointer will fetch it and the binary-encoded numbers into data registers (or RAM).
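
To make that concrete, here is the same addition as a tiny C program. The encoding in the comment reuses the invented 001 opcode from above; a real processor's encoding would differ, but the idea is the same.

#include <stdio.h>

int main(void) {
    int sum = 4 + 5;     /* might compile to something like: 001 00000100 00000101 */
    printf("%d\n", sum); /* prints 9 */
    return 0;
}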

The process of generating machine code from structured languages is fairly complex, and it places limits on how close to natural language these languages can end up looking. That's why you can't write a program in English: there's too much ambiguity in it for a compiler to be able to generate the proper sequence of zeroes and ones.

The instructions CPUs can execute are fairly basic and simple: addition, division, negation, reading from RAM, writing to RAM, reading from a register, and so on.

The next question is, how can these simple instructions over numbers generate all the wonders we see in computing (internet, games, movie players, etc.)?

It basically boils down to the creation of adequate models. For instance, a 3D game engine has a mathematical model that represents the game world and can calculate the positions and collisions of game objects based on it.

These models are built from very many of these small instructions, and this is where high-level languages (which are not machine code) really shine: they raise the abstraction level, so you can think closer to the model you want to implement. That lets you reason about things like how to efficiently calculate the soldier's next position based on the input received from the controller, instead of being unable to reason at all because you are too busy trying not to forget a zero.
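
As a rough sketch of what "thinking closer to the model" looks like (all the names here are made up for illustration), notice how the C code below reads like the game world rather than like strings of zeroes and ones:

typedef struct { float x, y; } soldier_t;            /* hypothetical game object */
typedef struct { float dx, dy; } controller_input_t; /* hypothetical input */

/* Move the soldier according to controller input and elapsed time. */
void update_soldier(soldier_t *s, controller_input_t in, float dt) {
    s->x += in.dx * dt;  /* each of these lines compiles down */
    s->y += in.dy * dt;  /* to many small machine instructions */
}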

A crucial moment was the jump from assembly language (a language very similar to machine code; it was the first programming language, and it's CPU specific, with every assembly instruction translating directly into machine code) to C (which is portable among different CPUs and sits at a higher level of abstraction than assembly: each line of C code represents many machine code instructions). This was a huge productivity increase for programmers: they no longer had to port programs between different CPUs, and they could think much more easily about the underlying models, leading to the continued increase in software complexity we've seen (and even demanded) from the 1970s until today.

The missing link is how to control what is done with all that information and how to receive input from external sources: displaying images on the screen, writing information to a hard drive, printing an image, or receiving keystrokes from a keyboard. This is all made possible by the rest of the hardware in the computer, which is controlled in a way similar to the CPU: you place data and instructions in certain transistors of the graphics card, the network card, the hard drive, or the RAM. The CPU has instructions that allow it to place data or instructions into (or read information out of) the proper locations of the different pieces of hardware.
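
One common mechanism behind this is memory-mapped I/O: a device exposes its control registers at fixed memory addresses, so ordinary reads and writes reach the hardware. Here is a minimal sketch in C; the addresses and the status bit are invented for illustration (real values come from the device's datasheet):

#include <stdint.h>

#define UART_DATA   ((volatile uint8_t *)0x10000000)  /* invented device address */
#define UART_STATUS ((volatile uint8_t *)0x10000004)  /* invented device address */

void uart_send(uint8_t byte) {
    while ((*UART_STATUS & 0x01) == 0)
        ;                  /* wait until the device reports it's ready */
    *UART_DATA = byte;     /* this write goes to the hardware, not plain RAM */
}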

Another thing relevant to the existence of what we have today is that all modern computers come with big programs called operating systems that manage all the basic stuff, like talking to hardware and error handling (what happens if a program crashes, and so on). In addition, many modern programming environments come with a lot of already-written code (standard libraries) to handle many basic tasks, like drawing on a screen or reading a file. These libraries will in turn ask the operating system to talk to the hardware on their behalf.

If these weren't available, programming would be a very, very hard and tedious task, as every program you wrote would have to recreate the code to draw a single letter on the screen or to read a single bit from each specific type of hard drive, for example.
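
For a feel of what the operating system and standard libraries buy you, here is standard C writing to a file in a handful of lines; every drive-specific detail is hidden beneath these library calls, which in turn make requests to the operating system:

#include <stdio.h>

int main(void) {
    FILE *f = fopen("notes.txt", "w"); /* the C library asks the OS to open a file */
    if (f == NULL)
        return 1;                      /* the OS reported a failure */
    fputs("hello, disk\n", f);         /* buffered by the library, written via the OS */
    fclose(f);                         /* flush the buffer and release the OS handle */
    return 0;
}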

It seems I got carried away; I hope you got something out of this :-)



Answer 2:

A computer programming language is actually a highly abstracted language that is converted down into a very very basic language that computers actually understand.

Basically, computers really only understand machine language, which is a basic language implemented in binary (1's and 0's). One level above this is assembly language, which is a very primitive language that is at least human readable.

In a high level language, we might have something like:

Person.WalkForward(10 steps)

In machine code it would be:

Lift Person's Left Foot Up
Lean Forward
Place Left Foot Down
Lift Right Foot Up
Lean Forward
Place Right Foot Down
etc.

Now obviously, nobody wants to write programs that tell the computer every little repetitive thing to do, so we have tools called compilers.

A compiler takes a higher level language that is easier for a human to understand, and converts it down to machine code, so that the computer can run it.
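
To see one concrete flavor of that conversion, here is a sketch in C of a compiler's-eye view of a loop: the friendly form on top, and roughly the test-and-jump form it gets lowered to, which maps almost one-to-one onto machine instructions. (take_step is just a stand-in for one little repetitive action.)

#include <stdio.h>

static void take_step(void) { printf("step\n"); } /* one little repetitive thing */

/* What you write: */
void walk_forward(int steps) {
    for (int i = 0; i < steps; i++)
        take_step();
}

/* Roughly what the compiler lowers it to: */
void walk_forward_lowered(int steps) {
    int i = 0;
top:
    if (i >= steps) goto done; /* compare and conditional jump */
    take_step();               /* the loop body */
    i = i + 1;                 /* increment */
    goto top;                  /* jump back */
done:
    return;
}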



Answer 3:

A good book that talks about computers for non-engineers is 'Code' by Charles Petzold. I don't recall if it covers exactly your question, but I think so. If you are interested enough to go further, it's a good choice.




Answer 4:

In the simplest case, a program called a compiler takes the programming language words you write and converts them to machine language which the computer can understand. Compilers understand a specific programming language (C#, Java, etc) which has very specific rules about how you explain to the compiler what you want it to do.

Interpretation and understanding of those rules is most of what Stack Overflow is about. :)
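
As a tiny end-to-end example, here is a complete program written following the rules of one such language, C:

/* hello.c */
#include <stdio.h>

int main(void) {
    printf("Hello, World!\n"); /* the words we write */
    return 0;
}

Running a compiler over it, for example gcc hello.c -o hello, produces an executable full of machine instructions; ./hello then makes the processor run them directly.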



Answer 5:

Programming is where you take a series of steps which solve a certain problem, and write them out in a certain language which requires certain syntax. When you have described those steps in the language, you can use a compiler (as per Greg's comment) which translates from that language into one the computer can interpret.

The art lies in making sure you describe the steps well enough :)



Answer 6:

You could compare how programming works to translating between languages. Say you were on a desert island with 2 other people. You only speak French. Person number 1 (we'll call him Fred) only speaks French and Japanese. Person 2 (Bob) only speaks Japanese. Say you need to ask Bob to go help you gather some firewood. Imagine in this case you are the program and Bob is the computer. You say to Fred in French "Can you tell Bob to come help me." Fred translates into Japanese and asks Bob to help you. In this case Fred would be the compiler. He translates the request into something Bob can understand. That is sort of how a computer program works.

There is a good How Stuff Works article that explains things.

I personally didn't really understand how computers could work the way they do until I took a digital electronics class. Prior to that, the whole idea of how computers could work at all was a mystery to me. After I built a binary counter, it all made sense.



Answer 7:

Several people have already provided summaries of the translation process from a typical programming language down to actual machine code that a processor can execute.

To understand this process, it's helpful to have a concrete feel for what it's actually like to program at the level of machine code. Once you have an understanding of what the processor itself can do, it's easier to understand higher-level programming constructs as the abbreviations they are.

But unfortunately, writing machine code for a desktop computer is not much fun.

As an alternative, there is a great old game called Corewar in which you write little programs using a simplified machine language. These programs then battle each other for survival. You can write basic programs in the raw machine language, and then there's a system of macros so you don't have to repeat yourself so much, and that's the first step towards a full-featured language.

Another easy, rewarding, but low-level thing to do is to program a simple embedded controller like an Arduino. There are lots of easy introductions like this one available. You'll still be using a compiler, but the resulting machine code is easier to understand (if you want to) because the capabilities of the processor are so much simpler.

Both of these are great ways to get a feel for how digital computers actually work.
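
To give a flavor of the Arduino route, here is the classic minimal example (written in Arduino's dialect of C/C++; pinMode, digitalWrite, delay, and LED_BUILTIN are the standard Arduino API), which blinks the board's built-in LED once per second:

void setup() {
    pinMode(LED_BUILTIN, OUTPUT);    /* configure the on-board LED pin as an output */
}

void loop() {
    digitalWrite(LED_BUILTIN, HIGH); /* LED on */
    delay(1000);                     /* wait one second */
    digitalWrite(LED_BUILTIN, LOW);  /* LED off */
    delay(1000);
}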



Answer 8:

Very simply, computer programming is the act of giving the computer a set of instructions in a language that it can understand. Programmers normally write instructions in a high-level programming language, and those instructions then get translated into the binary language that the computer understands. Programs that do this translation are called compilers.



Answer 9:

I have two crazy suggestions. Go back in time!

1. Get a programmable calculator.

A programmable calculator is a regular calculator: you can do the usual stuff with it, like enter numbers, enter operation signs, and, after pressing the equals key, read the result on the tiny display. In addition, a programmable calculator can store short sequences of keystrokes as a program, which can later be "replayed" with a single keypress. Say you store this sequence as a program (one instruction per line):

(Start)
*
2
+
1
=
(Stop)

Now you have a custom operation: pressing the "program" key (or whichever key you've assigned it to) runs the sequence without your further assistance, multiplying the contents of the display by 2 and adding 1 - this is a program!
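
For comparison, here is the same stored sequence written as a C function (the name is purely illustrative); the idea of a replayable series of steps is identical:

double program_key(double display) {
    return display * 2 + 1; /* the stored keystrokes: * 2 + 1 = */
}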

Later you can try more advanced techniques: storing temporary results in memory, branching on a result.

Pros:

  • A calculator is a familiar environment. You already have the basics.
  • It's simple. You don't have to learn a lot of instructions and programming techniques.
  • Modern programming languages are far from the ground, while programmable calculators "are on it". You will learn the fundamentals: memory, branching, elementary operations. Computers work much the same way (at the machine language level).
  • You will meet low-level problems: memory limits, division by zero.
  • It's extremely cool.

Cons:

  • It's obsolete. You have to learn a modern programming language, which will be different.
  • It's uncomfortable (hm, it's a pro instead: you can't use comfortable click-and-play toys). Maybe you can't even save your code.
  • It's not a trivial task to get one. You can try eBay. Also, there will be no problem with documentation: most models have large user groups on the internet.

The best choice IMHO is the TI-59.

2. Learn the BASIC language with the help of an emulator.

When you turn on a machine which has a built-in BASIC interpreter, it's ready to accept your commands and performs them just as you type. First, you can try some instructions in command mode, e.g.:

PRINT 5*4

It will print "20" on the next line, wow. When you've played enough in command mode, you can organize the instructions into programs, which you can then run, edit, and enhance.

Pros:

  • BASIC is a real programming language, designed for education.
  • When you later meet a modern programming language and discover the differences, you will see the progress of programming techniques (e.g. procedural programming, structures, etc.) over the last 30 years.
  • It's cool, especially if you get a real one, not just an emulator.

Cons:

  • It's obsolete. Almost 30 years have passed since that era.
  • Old machines are compact, closed systems. There are no files (in the form we use them today), folders, or file types, which may confuse beginners.

My favourite BASIC system is the Commodore 16 (Plus/4), which is very similar to the famous C64, but more comfortable. I prefer the YAPE emulator; it can save/load memory snapshots or BASIC programs to a file.



Answer 10:

The CPU has a register (a thingy that can store a number) called an instruction pointer. The instruction pointer says what address in memory to pull an instruction from. The CPU pulls an instruction, executes it, and goes on to the next instruction, unless the instruction that it executed said to go somewhere else. These instructions are fairly weak. Add this number to that number. Store this number over there. Take the next instruction from over there. A program called a compiler analyzes your program and turns it into machine code (I'm skipping a few steps here).
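
A toy sketch of that fetch-execute cycle in C, using three invented opcodes rather than any real CPU's instruction set:

#include <stdio.h>

enum { HALT = 0, ADD = 1, JUMP = 2 }; /* made-up instructions */

static void run(const int *memory) {
    int ip = 0;  /* instruction pointer: where to pull the next instruction from */
    int acc = 0; /* a register that can store a number */
    for (;;) {
        switch (memory[ip]) {
        case ADD:  acc += memory[ip + 1]; ip += 2; break;  /* add, then move on */
        case JUMP: ip = memory[ip + 1];            break;  /* "go somewhere else" */
        case HALT: printf("result: %d\n", acc);    return; /* stop executing */
        }
    }
}

int main(void) {
    int program[] = { ADD, 4, ADD, 5, HALT }; /* computes 4 + 5 */
    run(program);
    return 0;
}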

Two books that will give you a decent overview of this process are SICP and TECS.



Answer 11:

As people are already noting, there's some pre-program (usually a "compiler") which translates the words of your program into a more long-winded "lower level" language called "machine code".

The machine code consists of very simple instructions which are already "understood" by (or at least make sense in terms of) the underlying processor: for example, an instruction which copies data out of a memory location and into a special part of the processor called the "accumulator", where simple arithmetic can be done to it, or an instruction to add the contents of two of the accumulator's slots.

The complex, sophisticated programs you see (including the compilers and interpreters of higher-level languages) are all ultimately built out of these simple instructions being run millions of times.



Answer 12:

The computer has a preset set of instructions: things it already knows how to do. A computer program is simply a list of instructions that the computer will execute sequentially.

Early programs were written directly in machine language. To make their lives easier, programmers started to create abstractions to simplify the programs they needed to write. Over time, more and more abstractions were added, like layers of an onion, but it all boils down to essentially the same thing: executing a series of instructions.

If you want to learn about programming without focusing on compilers, technology, etc., you get a good indication of what a program is when you start to create 3D scenes in Alice, a free tool from Carnegie Mellon University. You end up learning how to program without trying to learn programming.

If, however, you want to learn more about the technical details, your best bet would be to look at some intro computer science university textbooks. The article How C Programming Works might also give you some answers.



Answer 13:

Basically, you start out with something simple such as:

print("Hello World");

Then you simply sprinkle syntactic sugar and magic tokens over it until it does what you want!