"Most of this author's written works are pure BS, but I do agree with the fundamental principle that researchers should maintain independence from conflicts of interest." The WHOLE POINT of the peer review process and scientific method as a whole is to detect and correct errors DESPITE any such "conflicts of interest". In the long run, THE TRUTH WILL OUT---ALWAYS.
Inaccurate data due to fraud is RARE, and severely punished when detected---even in "non-science" academic venues. Witness what is happening to Bellesiles and his anti-gun "revised history".
Your faith in peer review is a bit misplaced. There are plenty of examples of peer review failing. While it is better than nothing, it is still quite lacking.
Without getting into biased peer reviewers with an agenda against the reviewee (there are plenty of articles you can easily find), the following sources on peer review failures might be of interest to you:
Effect on the Quality of Peer Review of Blinding Reviewers and Asking Them to Sign Their Reports published by JAMA.
This particular study inserted 8 deliberate errors into a paper that had already been accepted for publication. The paper was then sent out to 420 reviewers for peer review. Only 53% (221) of the reviewers returned a peer review report. Out of those 221 reviews, the study found that:
the mean number of errors found was 2,
only 10% identified 4 or more errors, and
16% didn't detect any errors.
In the BMJ-published book A Difficult Balance: Editorial Peer Review in Medicine, the author cited the following examples of peer review failing to detect fraud:
John Darsee published 44 invalid papers based on falsified results,
Joseph Cort published 2 papers on a molecule that had not been synthesized, and
Elias Alsabti published 60 plagiarized papers.
The author also points out some abuses by peer reviewers, including:
20% of 300 cancer researchers who were denied NIH grants had their data pirated by reviewers, and
Researcher Alsabti obtained a paper that had been sent to a reviewer who had died, added his own name plus two fictitious authors, and published it.
Another BMJ-published book, Peer Review in Health Sciences, gave the following examples of the abuse of peer review:
Bridges retained a rhodopsin paper for several weeks, then declined to review it for PNAS and published a similar paper in Science, and
Cistron Biotechnology v. Immunex ended in a $21 million out-of-court settlement after a gene sequence was obtained during review and patented.
In the journal Behavioral & Brain Sciences, institutional biases were exposed (Peters DP & Ceci SJ, 1982, 5: 187-95). Twelve published articles from prestigious institutions were resubmitted to the same 12 psychology journals 18-32 months after publication, with the authors and affiliations changed (to a fictitious institution). Only 3 articles were recognised as duplicates; of the rest, 1 was accepted, while 8 of the previously published articles were rejected, the stated reason being weak methodology.
Published papers often have deficiencies that should have been caught during peer review. Several studies have quantified the rates of deficiencies in published papers:
no detail on sample size in 89%,
no confidence intervals in 86%,
inadequate detail on randomisation in 60%,
no details of concealment in 77%,
defects in 18-68% of abstracts, and
errors in 4-67% (median 36%) of citations.
And here are a couple of cases where reviewers of papers completely ignored the evidence and dismissed important findings. The first paper on radioimmunoassay was rejected, and in the case of hepatitis B, the peer reviewer thought the hepatitis B particles were dirt on the microscope slide.
Based on the above studies and instances, it is demonstrable that peer review:
does fail to identify important work,
does fail to identify errors,
does fail to detect fraud, and
does not guarantee quality or accuracy.