Improving Quiz Scores

Posted by David Evans on 26 Oct 2011 in Announcements, Quizzes, Readings | 19 comments

Some students didn’t do as well on the quiz as I would have hoped. Unlike Quiz 2, this quiz really was very straightforward (with the possible exception of question #2, on the information in a nucleotide), so I have to assume that the only reason anyone got below a 4 is that they aren’t keeping up with the reading assignments.

Since my main intent in the quizzes is to encourage you to actually do the reading (which I am pretty confident you will find worthwhile and interesting if you do), there is an opportunity to improve your quiz grade by demonstrating that you have read the chapters from The Information. You can do this by posting a comment here that either explains the most interesting thing you read in Chapters 7 and 8, or raises an interesting question based on something you read in these chapters. Your comment should not substantially duplicate a previous comment. If you want to reply to a previous comment by contributing something new, though, that is encouraged.

Good comments will be enough to raise your Quiz 3 score to full credit.


19 Responses to “Improving Quiz Scores”

  1. Kevin Liu says:
    27 October 2011 at 12:03 am

    As the general public became aware of the telegraph and began to use it, a unique language developed. Since messages were charged by length, senders found ways to make their telegrams shorter and shorter while keeping the same meaning. The moment I read that, I couldn’t help but think about how, in the modern era, with the advent of texting, people began to do roughly the same thing: finding more and more ways to write things with less text in order to save money. Of course, texters never really had to develop a complex code to hide their messages, but the parallels are still there.
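
    A minimal sketch of the idea in Python: substitute short code words for common phrases, as the commercial telegraph codebooks did. The phrases and codes below are made up for illustration, not taken from any historical code.

      # A toy telegraph codebook: common phrases map to short code words.
      # These entries are invented for illustration, not a historical code.
      CODEBOOK = {
          "arriving tomorrow morning": "AMORN",
          "send money immediately": "SMIMM",
          "all is well": "ALWEL",
      }

      def encode(message, codebook=CODEBOOK):
          """Replace each known phrase with its code word to shorten a telegram."""
          for phrase, code in codebook.items():
              message = message.replace(phrase, code)
          return message

      print(encode("all is well send money immediately"))  # "ALWEL SMIMM": 11 characters, down from 34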

    • Chi Zhang says:
      27 October 2011 at 9:56 pm

      That’s very true, Kevin, seeing as how emoticons are expressed with colons and parentheses and “you” became “u.” But that excludes the Android technology Swype, which essentially wants to encourage people to spell things correctly. True, you can sort of “train” it to understand what you mean so that you don’t have to be so precise every time, but there are a lot of instances, especially with long words, where the program steps in and essentially spells for you.

  2. Hannah Beattie says:
    27 October 2011 at 11:02 am

    I thought the section on paradoxes was incredibly interesting, albeit confusing. “This statement is false.” I loved how Gleick connected it to recursion; it was so nice to see the parallels between The Information and the course material. Also, this REALLY BLEW MY MIND: that a RANDOM message contains MORE information than an actual sentence!

    • David Evans says:
      29 October 2011 at 5:05 pm

      This is one of the most paradoxical things about information. In some sense, the random message contains NO information; in another sense, it contains the most. Another way to view information is as the length of the shortest program that could produce something effectively indistinguishable from the content. Then, a random message can be produced by quite a short program, much shorter than the program that produces something indistinguishable from Hamlet.

      This post discusses some ideas behind this: The First Law of Complexodynamics, most succinctly expressed by this image:
      [image]
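
      A minimal sketch of the “shortest program” view in Python (the seed and alphabet are arbitrary choices):

        import random

        # A few lines of code produce a megabyte of random-looking text...
        random.seed(1120)
        random_looking = "".join(random.choice("abcdefghijklmnopqrstuvwxyz ")
                                 for _ in range(1_000_000))

        # ...whereas a program that produces Hamlet essentially has to embed
        # the full text, so it is roughly as long as Hamlet itself. In the
        # Kolmogorov sense, the seeded generator is a short description of
        # this particular string; a string with no such structure has no
        # description much shorter than the string itself.
        print(len(random_looking), random_looking[:40])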

      • Filip says:
        1 December 2011 at 8:48 pm

        …But then a specific, unique random message cannot be produced by any program that does not explicitly state the message. There are no exploitable structures. Wouldn’t the informational content be tantamount to Kolmogorov complexity, which is maximal for random messages (or strings, since we are dealing with computer programs now)? In this sense, it carries the most information again.
        While it is exactly this message that has no ‘useful’ meaning, meaning is not information. The lack of structure makes it interesting and boring at the same time.

        That’s a nice picture.

  3. Deirdre Regan says:
    27 October 2011 at 12:55 pm

    I thought the section that spoke about Shannon’s thesis on genetics was pretty impressive. Shannon wasn’t a geneticist, but somehow managed to use generalizations of concepts such as alleles to write his thesis, “An Algebra for Theoretical Genetics.”
    I’m sure most intelligent people with science backgrounds would struggle to read and make sense of his thesis… I find it incredible that Shannon was capable of creating and proving these (accurate) theories from the ground up, with little scientific knowledge to stand on.
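
    For a sense of the kind of calculation Shannon’s thesis generalized, here is a Punnett-square sketch in Python. This is ordinary Mendelian bookkeeping, not Shannon’s actual algebra:

      from itertools import product
      from collections import Counter

      def cross(parent1, parent2):
          """Offspring genotype frequencies for one gene: each parent passes
          one of its two alleles with equal probability."""
          offspring = Counter("".join(sorted(pair))
                              for pair in product(parent1, parent2))
          total = sum(offspring.values())
          return {genotype: n / total for genotype, n in offspring.items()}

      print(cross("Aa", "Aa"))  # {'AA': 0.25, 'Aa': 0.5, 'aa': 0.25}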

    • David Evans says:
      29 October 2011 at 5:10 pm

      You can read his actual thesis here.

      Although the structure of DNA was not yet known when Shannon wrote this, quite a lot about genetics was, so there was a good basis for starting to think about genetics more algebraically. (As far as I know, his PhD thesis had little real impact, unlike his Master’s thesis, which invented digital logic.)

  4. Christopher Smith says:
    27 October 2011 at 4:07 pm

    The whole discussion of “Shannon’s Rat” in Chapter 8 was both an interesting example of early robotics and an intriguing parallel to modern programs and programming methods. In high school I had to design a computerized robot (through the JKarel Java guide) that followed maze-traversal logic similar to that of the early maze-solving robot Claude Shannon created. The fact that Shannon’s colleagues called the device a “brain” similarly previewed the ethical questions that have arisen throughout the history of computing. The robot’s artificial intelligence was, and still is, a startling example of what even a few bits are capable of doing, and its inclusion in the book stresses this point with great clarity.
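
    A minimal maze-traversal sketch in Python, in the spirit of Shannon’s machine: wander depth-first, remembering visited squares, until the goal is found. The grid encoding and the recursive strategy are illustrative assumptions, not Shannon’s actual relay circuit:

      # 0 = open square, 1 = wall; the maze itself is made up for illustration.
      MAZE = [
          [0, 0, 1, 0],
          [1, 0, 1, 0],
          [0, 0, 0, 0],
          [0, 1, 1, 0],
      ]
      GOAL = (3, 3)

      def solve(pos=(0, 0), visited=None):
          """Return a list of squares from pos to GOAL, or None if unreachable."""
          visited = set() if visited is None else visited
          if pos == GOAL:
              return [pos]
          visited.add(pos)
          r, c = pos
          for nr, nc in [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]:
              if (0 <= nr < 4 and 0 <= nc < 4 and MAZE[nr][nc] == 0
                      and (nr, nc) not in visited):
                  rest = solve((nr, nc), visited)
                  if rest:
                      return [pos] + rest
          return None

      print(solve())  # [(0, 0), (0, 1), (1, 1), (2, 1), (2, 2), (2, 3), (3, 3)]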

  5. cls2be says:
    27 October 2011 at 8:25 pm

    I found the comparison between a logical machine and the brain to be particularly interesting. Both Wiener and Shannon offered the conclusion that the brain employs relays (neurons). The fact that they drew this parallel in the infancy of both computing and neuroscience is fascinating. They also recognized that the similarities are restricted to digital messages, and that analog messages are carried by hormones. The way they were able to link neuroscience, computer science, and logic at a time when we were only beginning to grasp these concepts is very impressive.

    • Aleck Berry says:
      31 October 2011 at 12:30 am

      I also found Wiener’s argument that the brain is (at least partly) a logical machine to be fascinating. To me, Boolean algebra always seemed like an artificial human invention that was just used in computers. The fact that the brain has logical relays suggests that binary information is an intrinsic part of nature. It is like the “atoms of Democritus,” as Warren McCulloch said: an indivisible unit of information.
      The existence of logic and binary information in nature would make computer science applicable to many other fields of academia, as these chapters show in the connections with biology, philosophy, and psychology.
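
      McCulloch’s own neuron model makes the relay idea concrete: a unit fires exactly when its inputs reach a threshold, and choosing thresholds gives Boolean gates. A minimal sketch in Python (the two-input gates are the standard textbook examples):

        # McCulloch-Pitts neurons: a unit fires (outputs 1) iff the sum of
        # its inputs reaches its threshold.
        def neuron(inputs, threshold):
            return 1 if sum(inputs) >= threshold else 0

        def AND(a, b):   # fires only when both inputs fire
            return neuron([a, b], threshold=2)

        def OR(a, b):    # fires when at least one input fires
            return neuron([a, b], threshold=1)

        for a in (0, 1):
            for b in (0, 1):
                print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))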

  6. Michael Carmone says:
    27 October 2011 at 8:26 pm

    I didn’t know that Morse was an artist. The fact that he enhanced telegraphy after a trip to France was amazing. World History books never specify the telegraph’s origins, but Gleick traces them to France during the Reign of Terror. Claude Chappe, the Frenchman from Chapter 5, used a system of rotating arms to send signals, and Napoleon Bonaparte strengthened demand for telegraphy for private purposes, even though sending a message took more than six minutes and the towers had to be built within sight of each other. Morse saw these telegraph towers on his trip to France and got his idea: while other physicists were trying to use objects like needles to send messages, he decided to use an electric current, and he used the technology to build lines from Baltimore to DC and from New York City to Philadelphia.

  7. Diana Naim says:
    28 October 2011 at 11:37 am

    As someone who is well versed in the biological sciences, particularly genetics, I found it extremely interesting how computational logic was implemented to show the diversification of alleles and to create an entire theoretical model to explain the intricacies of genetic coding. Additionally, I found it fascinating how physical concepts such as entropy can be vividly applied to both language and logic. Entropy, which to me is the measure of disorder in the physical realm, becomes applied to language, so that when a string is constructed, entropy decreases. Finally, the link between neurology and circuitry is interesting considering the limitations of neuroscience knowledge at the time. The parallels drawn between the two fields by two early thinkers gave birth to the primary mode of modeling the brain today.

    As a side note (not particularly applicable to Chapters 7 and 8), I most enjoy how Gleick provides a vivid summary of methods of historical communication and the advancements in both logic and computation that allowed for progression from simple African drums, to the telegraph, to even higher orders of communication.

  8. Emmett says:
    28 October 2011 at 5:43 pm

    I find it interesting that psychology as we know it today began taking shape at the same time as computer science. It seems like a subject that would have been firmly grounded in the academic world sooner than CS. The formation of modern psychology is also fascinating. I had learned about the development of the psychological field in previous classes, but I didn’t know that information theory played such an integral part in it. The experiments that really started the cognitive revolution were all about information retention, a subject that was simultaneously being researched and quantified by mathematicians and engineers. Computer science paved the way for psychology; the concept of computing machines provided a template for the human mind.

  9. Austin Collier says:
    28 October 2011 at 9:35 pm

    One of the most interesting ideas addressed in these chapters is the one that Turing and Shannon mutually conceived: that machines could have the capability to think creatively and ultimately learn new information. This was a radical concept when they first discussed it in the early 1940s. Transitioning from the customary mechanical model of a machine (i.e., the Analytical Engine) to the idea that computers could use higher-level logic to compose extemporaneously seems quite revolutionary. Shannon suggested incorporating music into a computer (which is why I was hesitant not to choose music for #4) almost seven decades ago! When first reading this, I couldn’t help but think about how imbued society is with digital media today, and what the world might look like if ideas like Shannon’s and Turing’s had never surfaced. I also enjoyed reading the part of Chapter 8 where this idea actually plays out among such brilliant minds as Warren McCulloch, Ashby, and Turing through unassuming meetings in a hospital basement in London, just as inconspicuous as Steve Jobs and Wozniak’s garage origins. I find the Imitation Game that Turing offered to this group thought-provoking: it describes a thought experiment to test whether a computer could think like a human, in line with his and Shannon’s ideas.

  10. Li-Chang Wang says:
    29 October 2011 at 2:48 am

    Chapters 7 and 8 finally touch on what the book title suggests: information. Throughout these chapters, information is defined in a way that never occurred to me. When I think of the word information, I see it as just basic facts about some specific topic. However, Shannon described information as “order wrenched from disorder.” Information is entropy and uncertainty. This was clearly not how I defined information; actually, it is quite the opposite. Information to me was certainty: factual ideas that one obtains by researching. How does this make sense?

    After thinking for a while, I realized that Shannon had a brilliant perspective on information. One learns from uncertainty. Doubt is the key to knowledge. By being uncertain about something, you learn by testing your uncertainty. Maybe information can be considered the process by which uncertainty is made certain.
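
    Shannon made this quantitative: the entropy H = -Σ p(x) log2 p(x) measures the average uncertainty per symbol of a source. A minimal sketch in Python, estimating per-character entropy from letter frequencies (the sample strings and seed are arbitrary choices):

      import math
      import random
      from collections import Counter

      def entropy(text):
          """Per-character Shannon entropy, H = -sum p(c) * log2 p(c),
          estimated from the character frequencies of the text itself."""
          counts = Counter(text)
          total = len(text)
          return -sum((n / total) * math.log2(n / total) for n in counts.values())

      english = "to be or not to be that is the question" * 100
      random.seed(1120)
      uniform = "".join(random.choice("abcdefghijklmnopqrstuvwxyz ")
                        for _ in range(len(english)))

      # English reuses a few letters heavily, so its estimated entropy falls
      # well below the ~4.75 bits/char (log2 27) of the uniform random string.
      print(f"english: {entropy(english):.2f} bits/char")
      print(f"random:  {entropy(uniform):.2f} bits/char")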

  11. Reid says:
    29 October 2011 at 4:00 pm

    The main thing that interested me was at the beginning of Chapter 7. The book explores the abstract and logical side of math when it discusses unsolved problems like the Entscheidungsproblem, Fermat’s Last Theorem, and the Goldbach conjecture. The book also goes into solving problems by using logic and deductive reasoning rather than calculation. Then the abstract side of math is touched upon with Hilbert’s three questions: Is mathematics complete? Is mathematics consistent? Is mathematics decidable? I had never thought of mathematics in that light, and it really got me thinking and challenged what I had previously believed about math. I found the idea that mathematics has much more complexity than just formulas, equations, and calculations to be very intriguing.

    • David Evans says:
      29 October 2011 at 4:59 pm

      Yes, we’ll talk about this in class on Nov 21 and Nov 28 (and in Chapter 12 of the book).

  12. Josh Whelan says:
    30 October 2011 at 2:48 pm

    The most interesting concept I read in The Information was in Chapter 7, when Gleick explains how Shannon came up with a system to reduce excess noise in his communication device. He proposed examining language in terms of probability. Certain letters are followed by a particular letter most of the time, as with “qu”; in this way, much of the information of the u is already held in the q, reducing the need for extra information. This is a great system, but my question is: what happens when a combination does not occur the way it usually does? Does the machine read the wrong information?
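
    The answer, roughly, is that a probabilistic model assigns a low (but nonzero) probability to the unusual pair, so the unexpected letter simply costs more bits to transmit; nothing is read “wrong.” A minimal sketch in Python of estimating the probabilities of the letter following “q” (the sample corpus is made up):

      from collections import Counter, defaultdict

      corpus = "the queen quietly quizzed the quaint quartet about iraq and qatar"

      # For each letter, count the distribution of the letter that follows it.
      following = defaultdict(Counter)
      for a, b in zip(corpus, corpus[1:]):
          following[a][b] += 1

      q_dist = following["q"]
      total = sum(q_dist.values())
      for ch, n in q_dist.most_common():
          print(f"P({ch!r} | 'q') = {n}/{total}")
      # 'u' follows 'q' 5 times out of 7 here, but ' ' (iraq) and 'a' (qatar)
      # still get nonzero probability.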

  13. Janie Willner says:
    5 November 2011 at 3:00 pm

    For some reason I didn’t see this post when it came out, so here is my response. I thought that the description of Wiener’s book in Chapter 8 was really fascinating. He discussed a lot of things that I had never thought about, such as how to describe the difference between analog and digital. This has already been mentioned, but the decision to use Boolean logic was also really interesting. Boolean logic is the only logic system I’ve ever studied, so it never even hit me that there were other options. I also liked the parallels he drew between the human brain and computing machines in terms of dichotomy: 1s and 0s, firing and repose, etc.

