The problem of irreproducible results in science is often attributed to increased competition for shrinking research funding and pressure to publish, yet reproducibility has been an issue since the beginning of modern science. At the heart of the issue is the use of statistical analyses to determine which results are significant.
While the choice of appropriate, standardized statistical tests is an issue that needs to be addressed, I believe a more fundamental problem is how the narrative of science is taught and perpetuated. Science classes in high school and college often teach how the world works without emphasizing how that knowledge was acquired. At best, the seminal experiments and theories of a field are taught as elegant works carefully crafted by brilliant minds (see Kekulé's dream, the Miller-Urey experiment, etc.). While many scientists are undoubtedly brilliant, transmitting scientific knowledge through the narrative of the genius also transmits the expectation that a specific conclusion can always be extracted from a set of data. The drive to "fit" one's data to a particular, neat conclusion can lead to false conclusions. Related to this is publication bias: the preference for publishing positive results over negative ones. This leaves an incomplete picture of science and can obscure potentially useful data.
Overall, there is pressure in the culture of science to generate data that supports elegant theories of how the world works. The publication process itself perpetuates this idea – that scientific data is ready to be shared only once it can be neatly wrapped up in a cohesive narrative. This model worked well when routine data collection and dissemination were scarce, and "scientist" was synonymous with "natural philosopher." Today, the proliferation of scientists and the abundance of data have made the circumstances of data collection as important as the data itself for drawing accurate conclusions. The publication process should involve more frequent and open publication of smaller data sets as a basis for critical discussion - a model embraced by the recent website PubPeer. Ultimately, addressing the problems of bias and irreproducibility will require more open communication about data collection and analysis.