I agree with Architect! My response is basically another way of saying entropy is the primary difference.
Both code and natural language require interpretation. Grammar matters much more to code than to written languages, though some parts of code are not interpreted at all, like string literals. A mistake in code usually results in complete failure, while the same kind of mistake has almost no effect on spoken language.
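As a minimal sketch of that fragility (the snippet and its typo are made up for illustration): a single missing character that a human reader would gloss over stops the Python parser cold.

```python
# A human shrugs off "Helo, world", but one missing character
# in code makes the whole program unrunnable.
good = 'print("Hello, world")'
bad = 'print("Hello, world"'  # missing closing parenthesis

for source in (good, bad):
    try:
        exec(compile(source, "<example>", "exec"))
    except SyntaxError as err:
        print("complete failure:", err.msg)
```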
The biggest difference, in my opinion, is that spoken and written language is re-interpreted every time the information is communicated. Programming languages, on the other hand, are interpreted or compiled into a different language and then executed according to a consistent set of rules. Even with computers the interpretation can change over time or behave differently on different hardware configurations, but each version of the interpreter is technically a different language, which is why software only works on specific version ranges of the interpreter. People, by contrast, can change their interpretation of the same sentence after each reading.
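Python's own history is a handy illustration: the same expression means different things to different interpreter versions, which is exactly why programs pin themselves to a version range. A rough sketch (runnable under Python 3):

```python
import sys

# The meaning of "7 / 2" depends on which interpreter runs it:
# Python 2 yields 3 (integer division), Python 3 yields 3.5 (true division).
# Programs therefore declare which interpreter versions they support.
if sys.version_info < (3, 0):
    raise RuntimeError("this script assumes the Python 3 rules for '/'")

print(7 / 2)  # 3.5 under Python 3's division rules
```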
Also, written and spoken languages are not so well defined. Programming languages, by contrast, are precisely defined by the code/mechanism that processes them.
Another thing to consider is that binary 010101010001 is not exactly binary. It may be represented as 1's and 0's, but in reality it's highs and lows (or frequency-shift keying, etc.). This is where code can be misinterpreted. An example is when binary data is transferred over a wireless network: it is carried on a wave, which is not binary, and the receiver reads the high points as 1's and the low points as 0's. I'm oversimplifying how it works to illustrate my point; look up digital modulation if you want to learn more.
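To make that concrete (with made-up numbers and the same oversimplification), here is a sketch of a receiver turning analog samples into bits by thresholding; the "binary" only exists after this step:

```python
# Simplified "demodulation": compare each analog sample against a threshold.
# Real radios use proper digital modulation schemes (ASK, FSK, PSK, ...);
# this only illustrates that the 1's and 0's are an interpretation.
samples = [0.9, 0.1, 0.8, 0.2, 0.95, 0.05]  # made-up voltage readings
threshold = 0.5

bits = [1 if sample > threshold else 0 for sample in samples]
print(bits)  # [1, 0, 1, 0, 1, 0]
```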
If written and spoken languages were precise, they would be less effective. People can derive understanding from parts of words, etymology, context, and so on. We use something like fuzzy logic, which lets us read and understand words that a computer could not handle without fuzzy matching of its own (or a database of misspellings, like a spell checker that uses fuzzy logic to suggest possible replacement words). There are a few good examples of this (and a small code sketch after them):
1. When you see a road sign but cannot make out the letters, you can still read it. You can approximate the width and shape of each letter, and the length of each word, to come up with a small range of possibilities. Then, using other information such as memory, you can narrow it down to a single result.
2. You can read sentences fluently even when most of the words are jumbled. This is called Typoglycemia, and you've probably seen examples like this before:
"I cdn'uolt blveiee taht I cluod aulaclty uesdnatnrd waht I was rdanieg: the phaonmneel pweor of the hmuan mnid. Aoccdrnig to a rseearch taem at Cmabrigde Uinervtisy, it deosn't mttaer in waht oredr the ltteers in a wrod are, the olny iprmoatnt tihng is taht the frist and lsat ltteer be in the rghit pclae. The rset can be a taotl mses and you can sitll raed it wouthit a porbelm. Tihs is bcuseae the huamn mnid deos not raed ervey lteter by istlef, but the wrod as a wlohe. Scuh a cdonition is arppoiatrely cllaed Typoglycemia .
"Amzanig huh? Yaeh and you awlyas thguoht slpeling was ipmorantt."
Imagine a computer trying to read and understand that!
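A computer can get partway there with the "database of possible replacements" approach mentioned earlier. Here is a minimal sketch using Python's difflib to pick the closest known word for each jumbled token; the tiny vocabulary is made up for the example.

```python
import difflib

# A toy vocabulary standing in for a real dictionary.
vocabulary = ["research", "important", "understand", "reading", "problem"]

def best_guess(word, vocab=vocabulary):
    """Return the closest known word, or the input if nothing is close enough."""
    matches = difflib.get_close_matches(word.lower(), vocab, n=1, cutoff=0.6)
    return matches[0] if matches else word

for jumbled in ["rseearch", "iprmoatnt", "uesdnatnrd", "rdanieg"]:
    print(jumbled, "->", best_guess(jumbled))
```

Even this recovers only some of the jumbled words from the passage above and stumbles on the rest, which is the point: the fuzzy matching we can bolt onto software is still far more rigid than the reading a human does effortlessly.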