Friday, November 1, 2013

How do you know when you are right?

I have been thinking about this problem, the problem of correctness (which is what I mean by "right"), for a good number of years, but never formally.  This post was written a bit on a lark, so forgive the lack of intellectual rigour.  So what sorts of strategies do we have at our disposal:
  1. Faith - Assume another has all the knowledge and take it as The Truth(tm) without a good reason for confidence.
    1. Example: My fortune cookie said ... and while it has never been right, it will be this time.
  2. Personal History - Using your personal knowledge to judge the correctness. 
    1. Example: You've known your smart friend for years and he typically doesn't lead you astray, so you trust this to be true.
    2. Example: My teacher said the earth is round.
  3. History - If it ain't broke, don't fix it.
    1. Example: My grandfather worked as a miner, my father worked as a miner, therefore I should work as a miner.
  4. Senses - Observe and deduce.  
    1. Example: I see a black sky every night, so the sky must be black everywhere during the night.
  5. Logic - Attempt to determine rightness by applying a set of rules.
    1. Example: If I am floating, I am not on Earth.
  6. Scientific Method - Test it and test it until you believe you have a more accurate world view.
    1. Example: 5 out of 6 times I tried to log in with this username/password, it succeeded.  Therefore, this username/password must be correct.
  7. Probability - Attempt to assign probabilities that a given item is right.  This might be via the scientific method, your senses, etc.  
    1. Example: If I am floating, I have a 60% chance of being in space, a 39% chance of being on the vomit comet and a 1% chance of something outside of my experience.
  8. Research - Use the "Faith" principle, but take multiple sources.  Intentionally look for counter arguments.  Intentionally look for consensus.
    1. Example: According to Lou Adler and Venkat Rao, you should ask an interviewee to tell stories about their successes.  Others, like Nick Corcodilos, note that different questions are useful for different requirements.
  9. Random / Sounds Good - Choose a value at random or seemingly at random and claim it is right.  This might be an intentional lie or it could be the first thing that 'popped' into your head.
    1. Example: Q: How many wheels does a typical truck have?  A: 16.  Q: How did you get the value?  Well I knew a little, but I basically guessed.
    2. Example: Q: Are you a better than average driver? A: Yes. (80-90% of the American population says yes, thus some are lying or 'randomly' choosing yes because it sounds good).
  10. Assume - Sometimes we just assume something is the truth without knowledge.  Cultural ideas often work this way.
    1. Example: Barber shops are where you get your hair cut. (Not always; some are also brothels for example.)
  11. Null - I refuse to answer the question or I don't know.
    1. Example: Q: What is the smallest atomic unit in the universe?  A: I don't know.  (Even shades of this, like "smaller than a truck" or "smaller than an atom", could fall into this category.)
  12. ???
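Strategies 6 and 7 can be combined into a rough numeric sketch. The snippet below is my own toy illustration, not something from the list above; the choice of Laplace (add-one) smoothing is an assumption I'm making about how to hedge a small sample, and the function name is invented for the example. It turns the "5 out of 6 logins succeeded" case into an estimated probability that the credentials are correct:

```python
def estimate_success_rate(trials, successes):
    """Laplace (add-one) estimate of the underlying success probability.

    With few trials, this pulls the estimate toward 0.5 rather than
    trusting the raw ratio, which is one way to avoid over-claiming
    "rightness" from a small sample.
    """
    return (successes + 1) / (trials + 2)

# The "5 out of 6 logins succeeded" example from strategy 6:
raw = 5 / 6                                  # naive observed rate
smoothed = estimate_success_rate(6, 5)       # hedged estimate
print(f"Observed rate: {raw:.2f}, smoothed estimate: {smoothed:.2f}")
```

The gap between the raw 0.83 and the smoothed 0.75 is one concrete way of expressing the 'feel' of probability mentioned later in the comments: the fewer the trials, the less the raw ratio should be trusted.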
I am wondering: what other ways do people "assign" the idea of "right" (as in correct) to a statement?  Do you find that you use several of these, and which of these strategies are the most helpful?  Care to add to the list?

One last question I have in mind on this topic, one I think Scott Adams hit reasonably well with BOCTAOE (But Of Course There Are Obvious Exceptions): how much rigour do we need in any given statement to be clear?  Does the audience matter?  Does the % accuracy matter?  "Login works" might be right, but not right for all customers (or under all loads).  If I had to mention all exclusions of rightness in all my statements, I might end up needing a EULA just to let someone hear me (or read my works).  Last but not least, is our inability to recall the exact truth just a plain human flaw, making the concept of being 'right' null in and of itself?  Should "right" really mean "within an order of magnitude of the 'truth'"?

I'm not sure I know the answers to these questions, but I think there is a high probability that I have an opinion which may be within an order of magnitude of the truth, but for now I'm going to say I don't know.


  1. Interesting topic to explore. One other system I can think of is an extension of Personal Knowledge, which is reviews/recommendations. These don't necessarily need to come from people you know and trust, especially in the internet age.
    Example: people are more likely to buy a book off Amazon if enough people recommended the book.

  2. I think XKCD noted one particular flaw with this idea.  I'm not saying it doesn't belong in the list, just that it's interesting how 'untrustable' sources can be aggregated into something we believe to be more trustable. Thanks for the interesting thought. :)

  3. I'm with Karl Popper. I don't know if I'm right; I can know if I'm wrong, and I can come to a point where coming up with more hypotheses to disprove my idea is less interesting to me than something else. So, in other words:

    "I'm not sure I know the answers to these questions, but I think there is a high probability that I have an opinion which maybe within an order of magnitude of the truth, but for now I'm going to say I don't know."

    It's a lot like that.

    1. How do we deal with bad data and falsification? As wiki notes, several major theories might have been thrown out if we had just stuck to the 'known facts'. What are the limits of testing and of the ability to falsify a theory? In business, unlike science, we have a very finite time scale to test a theory, so do we hold onto 'falsified' theories for longer or shorter periods of time?

      In asking those questions, I find probability to be a useful tool. I don't mean exact percentages, but the 'feel' of probability. "This is probably wrong, but let's check a few more things" is reasonable to me. In engineering I often explore very unlikely ideas that I thought I had disproven, only to find later that my model was simply wrong.