N.B. If you’re after a quick answer then see here, if you want an in-depth outline see here, or if you want to know how science works see here – this blog is more concerned with the broader conceptual framework within which science fits.
Knowledge is an interesting concept – how can we really “know” anything? How do we determine truth from untruth? Does knowledge even require what is “known” to be true? I don’t think so – I think it merely needs to appear true.
The human brain looks for explanations – being able to identify cause and effect is a powerful capability; after all, it underpins all human achievement. For example, had our ancestors been unable to identify that seeds grow into plants, we could never have established agriculture (and subsequently civilisation).
There are a variety of ways in which we make links between cause and effect, from straightforward reflexive Pavlovian classical conditioning, through more complex methods of identifying concept-based causation, to the rigorous statistical analysis of double-blind randomised controlled trials in modern biomedical research (which marks our current best attempt at linking cause to effect, whilst minimising the influence of coincidental factors). However, one of the most common ways in which we find explanations is by relating an observed occurrence to an observed outcome – we look for a correlation.
Of course, the trouble with correlations is that you will often be spotting a relationship that doesn’t really exist. Factor A might occur at the same time, or increase at the same rate, as factor B, yet the apparent link between them may actually be driven by other, unrelated factors. For example, seasonal sales of ice-cream in the UK can be directly correlated with seasonal umbrella sales in Australia – obviously they are not directly related to each other, but they share the factor of seasonality in their respective hemispheres. So a summer in the Northern Hemisphere sees more ice-cream being bought, whilst in the Southern Hemisphere it is winter and people are buying umbrellas to keep off the rain. This is a simple illustration chosen for its clarity; unfortunately, most of the time we find it very difficult to identify what the factors involved in a correlation actually are – but that doesn’t stop us drawing conclusions from what we see, or think we see.
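The ice-cream/umbrella example can be sketched numerically. The figures below are invented purely for illustration: both series are driven by the same twelve-month cycle and nothing else, yet a standard Pearson correlation between them comes out near-perfect – a strong correlation with no causation whatsoever.

```python
import math

# Invented monthly figures for one year, each driven only by the season.
months = range(12)
# UK ice-cream sales peak in the northern summer...
uk_ice_cream = [100 + 80 * math.sin(2 * math.pi * (m - 3) / 12) for m in months]
# ...which is the Australian winter, so umbrella sales follow the same cycle.
aus_umbrellas = [50 + 40 * math.sin(2 * math.pi * (m - 3) / 12) for m in months]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(uk_ice_cream, aus_umbrellas)
print(round(r, 3))  # essentially 1.0: perfectly correlated, yet causally unrelated
```

The correlation is perfect here only because the toy data were built that way; real data would be noisier, but the shared seasonal driver would still produce a strong correlation between two series that never influence each other.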
So what else do we use as a way of acquiring knowledge, beyond drawing inspiration and conclusions about causation from what we experience? I suppose the other source of knowledge would be social/cultural – what we learn from others (link to pdf). This kind of knowledge can be supported by our own observations and experience, but it can also help inform us beyond our own experience. The trouble is, both personal and cultural knowledge are prone to inaccuracy – after all, cultural knowledge was personal once. Moreover, cultural knowledge has to be communicated, which provides opportunity for error to creep in; it also presupposes honesty – making it vulnerable to both genuine error and deliberate manipulation and misinformation.
Some people may also want to add divine inspiration as a source of knowledge; after all, various religious texts are supposed (by some) to be transcriptions of the word of God. This concept of divine inspiration can be grouped with cultural knowledge, since no-one has direct access to the original source material, meaning that only copies or interpretations made by error-prone humans are available. The inconsistencies throughout religious texts (examples from the Bible and Quran) and the numerous examples of their claims not being supported by subsequent observation, or even plain logic (examples from the Bible and Quran), clearly identify existing religious texts as the works of mere mortals (if we assume that an omnipotent god should know what S/He’s talking about).
So I suggest that we have two main sources of knowledge (or routes of learning) – what we’re told and what we’ve experienced. We use a combination of these sources of knowledge to inform our opinions and decisions, but we also use them as a foundation on which to formulate new ideas that themselves may contribute to further knowledge. It is this processing of information that humans are so good at, to the point where we take it for granted – it can be very difficult to pick apart just how we arrive at conclusions about the world.
However, we each process information in different ways: some people are quick to pick up or develop new ideas in an unstructured way (let’s call this a heuristic approach), whilst others may be slower and more systematic (let’s call this an algorithmic approach); some people rely on intuition and emotionally informed processes, whilst others are more calculating or emotionally detached. One big difference in method of processing is doubt (a disposition towards rejection of information), compared to belief (a disposition towards acceptance of information). Opposite extremes of this spectrum could be considered cynicism and gullibility.
It is this information processing that actually determines what (we think) we know. Such knowledge exists in our brain conceptually – supported only by the information and processes that we used to construct it. Of course between the errors in determining cause and effect, the errors of communication, the errors of misinformation and the errors arising from our imperfect ability to process information, we end up with a knowledge product that is likely to be incomplete, inaccurate or just plain incorrect. This is where science comes in.
Science is a method that is used to test what we think we know. It is intended to provide a structured approach to critically assess knowledge, where ideas and assumptions arising from the processing of what we’re told and what we’ve experienced are compared against the material world (see links at top for more detail). Importantly, science can be applied by people with any kind of thought processing – as long as they stick to the procedure. If people don’t stick to the procedure it ceases to be science, which means that the outcomes cannot be treated with the same degree of confidence.
Science is not the only way to test knowledge, but it is the most thorough and powerful method at our disposal. Science cannot claim to offer truth, because the iterative nature of science means that it must always be open to questioning. Science is not values-based and although ethical considerations are imposed on the practice of the scientific method, they are not part of it. Science is atheistic, because it functions without recourse to belief in gods, but it is not antitheistic since it can only deal with the natural: the supernatural is beyond the scope of the scientific method. Science is not about the interpretations or conclusions drawn by scientists, it is about experimental results demonstrating that a testable idea has a foundation in observable fact, or not.
Therefore, when people start suggesting that science is somehow against their opinion, what they are actually saying is that the best available observations of fact are against their opinion. This is something that is common in people who are making claims for which there is only evidence in the form of anecdote or received cultural knowledge (or both). The reason that science is against their opinion is that their opinion is not supported by evidence stripped of its errors and co-factors.
When people accuse science of being closed-minded it tends to be because science doesn’t support their opinion and they would rather cling onto their unsupported opinion than change it to fit the evidence. The scientific method could not (and certainly should not) care less about opinions or preconceived doctrine or dogma; indeed the dogmatic adherence to a set of opinions is the antithesis of science and is an indicator of ideological stagnation. Science is only useful as a tool when applied by people who are willing to change their mind to suit the evidence. For some reason this flexibility is seen by many as a weakness, but in words attributed to John Maynard Keynes, “When the facts change, I change my mind. What do you do, sir?” Surely it’s better for future knowledge if we build on facts rather than go into denial about our mistakes? We learn from our mistakes, and the alternative is building knowledge on an unstable foundation of misinformation and lies.
That said, science is not perfect. There is always room for improvement and scientists are just people, some of whom are ambitious, inept, dishonest or greedy. However, the scientific community is founded on robust confrontation, suspicion and scepticism about other scientists and their ideas. This means that the outcomes of the scientific method tend to be pretty well assessed (links to pdf) before they ever see public light (unless the journal publishers flout the process or the scientists involved try to sidestep peer review by going straight to the public press), or at least they tend to be found out if they’ve been faked (that’s what reproducibility of results is all about). Bear in mind that other methods of testing knowledge don’t have any controls at all – so although science is far from perfect, it’s far better than methods of enquiry that have no internal critical review process.
In short, science helps us to identify what we actually know from what we think we know.