
The Journal of Dead Ends

Ratmonk | Posts: 4 | Joined: Tue Jun 08, 2010 2:42 pm

The Journal of Dead Ends

Hello,

I was talking to two friends, both PhD students in the field of genetics, and they expressed their frustration about the lack of a journal which collects papers describing experiments that failed to yield useful results. They suggested that the lack of such a database of "failed" experiments led to replication of those same experiments by other researchers who weren't aware that they had already been done. They also suggested that the inability of scientists to publish such papers, and thus receive credit for their work, added to the pressure to produce "useful" results, and this in turn encouraged an environment in which new methodologies were ignored in favour of more conservative ones.

What I am curious about is whether anyone here knows of such a journal and, if you don't, whether you think it would be useful to have one?
The reason I have posted this in the suggestion forum is that the LoR seems a good place to conduct a survey of scientists and researchers to find instances of unintentional replication of "failed" experiments.
Wed Jun 09, 2010 2:44 am
sgrunterundt | Posts: 254 | Joined: Thu Dec 17, 2009 8:23 am | Location: Niels Bohr Institute, Copenhagen | Gender: Tree

Re: The Journal of Dead Ends

I have discussed that idea quite a bit. I am afraid it wouldn't work because people wouldn't want to have a long list of publications in "Journal of Failures", and therefore wouldn't publish.

But it would be nice. One could avoid trying and failing to do something that hundreds of others have already tried and failed to do.
Wed Jun 09, 2010 8:20 am
Case | Posts: 1080 | Joined: Sun Feb 28, 2010 9:40 pm | Gender: Cake

Re: The Journal of Dead Ends

No, that doesn't make sense. It would make sense for everyone to publish studies even if no effect was found. There's still a bias towards publishing studies where effects were found.
I am determined that my children shall be brought up in their father's religion, if they can find out what it is.
Charles Lamb (1775 - 1834)

Atheism is a non-prophet organization.
Wed Jun 09, 2010 10:17 am
Aught3 (Moderator) | Posts: 4290 | Joined: Fri Feb 27, 2009 3:36 am | Location: New Zealand | Gender: Male

Re: The Journal of Dead Ends

Also, there are so many ways your work can fail that it would be hard to write up useful information about what went wrong. And if you could isolate the error that precisely, it would be fairly easy for you to repeat the experiment and get it right.
Wanderer, there is no path, the path is made by walking.
Wed Jun 09, 2010 10:33 am
JustBusiness17 | Posts: 1484 | Joined: Fri Jan 22, 2010 9:29 am | Location: Earth, Solar System, Milky Way, The Universe, Etc etc... | Gender: Male

Re: The Journal of Dead Ends

Maybe it's my ignorance, but I thought falsification experiments were half the battle in science. Why wouldn't it be useful to compile a database like that? It might get quite big quite quickly, but it seems to me that it would help with the design and control of future experiments.
ttyl
Tue Jun 22, 2010 6:58 am
DeathofSpeech | Posts: 220 | Joined: Tue Jul 20, 2010 2:28 pm | Location: I can't know with certainty | Gender: Tree

Re: The Journal of Dead Ends

Ratmonk wrote: Hello,

I was talking to two friends, both PhD students in the field of genetics, and they expressed their frustration about the lack of a journal which collects papers describing experiments that failed to yield useful results. They suggested that the lack of such a database of "failed" experiments led to replication of those same experiments by other researchers who weren't aware that they had already been done. They also suggested that the inability of scientists to publish such papers, and thus receive credit for their work, added to the pressure to produce "useful" results, and this in turn encouraged an environment in which new methodologies were ignored in favour of more conservative ones.

What I am curious about is whether anyone here knows of such a journal and, if you don't, whether you think it would be useful to have one?
The reason I have posted this in the suggestion forum is that the LoR seems a good place to conduct a survey of scientists and researchers to find instances of unintentional replication of "failed" experiments.


I can actually think of really good reasons this is a bad idea.
It qualifies the entire experiment as a failure, where that is an assumption and that smells like voodoo.

How do you separate out the failures from the successes when you stop before you find out why it didn't work?
Especially when you will never be able to account for all of the variables until you can reproduce the failure predictably?
If you can't make it succeed predictably then you have no basis for being able to reproduce a failure predictably.

If you can't isolate why it failed then you have no basis for reporting a failure.
Reason Bran, with two scoops of Objectivity in every box and loaded with Bran fiber goodness...
...you'll never be full of shit again.


Science - You can see why it works.
Religion - You can't see why it doesn't work
Tue Jul 20, 2010 8:02 pm
Ratmonk | Posts: 4 | Joined: Tue Jun 08, 2010 2:42 pm

Re: The Journal of Dead Ends

How do you separate out the failures from the successes when you stop before you find out why it didn't work?

This is my point. Do you think that the way academic publishing currently works might provide an incentive for researchers to stop before they find out why something didn't work? And if that is the case, might the opportunity to publish failures-in-progress alleviate such pressure? I ask only because my friends seemed to feel that they have had to drop lines of research before they had been exhausted, because of the pressure to publish.
Sun Jul 25, 2010 8:22 am
DeathofSpeech | Posts: 220 | Joined: Tue Jul 20, 2010 2:28 pm | Location: I can't know with certainty | Gender: Tree

Re: The Journal of Dead Ends

Ratmonk wrote: How do you separate out the failures from the successes when you stop before you find out why it didn't work?

This is my point. Do you think that the way academic publishing currently works might provide an incentive for researchers to stop before they find out why something didn't work? And if that is the case, might the opportunity to publish failures-in-progress alleviate such pressure? I ask only because my friends seemed to feel that they have had to drop lines of research before they had been exhausted, because of the pressure to publish.


Pressure to publish is very real and I suppose one would have to consider it an evolutionary pressure. You either do your work in a manner that allows you to compete or perish. That may or may not be unfortunate.

I think one must qualify "failure": if any conclusion can be reached based on evidence, then even when that conclusion does not support the initial hypothesis, the experiment adds to the information available, for a known, specific reason. It cannot be called a failure just because the results provide evidence contrary to the initial hypothesis.

Example: the Michelson-Morley experiment.

An experiment can only be qualified as a failure if no conclusion can be reached.
Only then are you left with nothing to publish.

The more I think on it, the less helpful it seems to publish failures (where no conclusion can be reached).
Reason Bran, with two scoops of Objectivity in every box and loaded with Bran fiber goodness...
...you'll never be full of shit again.


Science - You can see why it works.
Religion - You can't see why it doesn't work
Sun Jul 25, 2010 10:49 am
Ratmonk | Posts: 4 | Joined: Tue Jun 08, 2010 2:42 pm

Re: The Journal of Dead Ends

I do take your point, but I still don't understand why such a paper wouldn't be useful as a signpost saying "dead end, don't bother". If failures aren't published, how can other researchers know whether or not they are about to embark on a four-year project which someone else has already shown to be fruitless? Surely it is a part of the study of methodology.
Mon Jul 26, 2010 9:54 am
DeathofSpeech | Posts: 220 | Joined: Tue Jul 20, 2010 2:28 pm | Location: I can't know with certainty | Gender: Tree

Re: The Journal of Dead Ends

Ratmonk wrote: I do take your point, but I still don't understand why such a paper wouldn't be useful as a signpost saying "dead end, don't bother". If failures aren't published, how can other researchers know whether or not they are about to embark on a four-year project which someone else has already shown to be fruitless? Surely it is a part of the study of methodology.


Because if someone swapped grape Kool-Aid for the ninhydrin solution and nobody thinks to check, it may be a personal failure, but publishing it as a warning that "dragons be here" is counterproductive.

What you are suggesting is that it would be excusable to stop research or commercial development based upon "dunno."

There is no place in science for "dunno" except as a question.
"Dunno" is never an answer.

If a publication can't add to knowledge, then the least one can expect is that it not stand in the way.
Reason Bran, with two scoops of Objectivity in every box and loaded with Bran fiber goodness...
...you'll never be full of shit again.


Science - You can see why it works.
Religion - You can't see why it doesn't work
Tue Jul 27, 2010 12:43 am
Ratmonk | Posts: 4 | Joined: Tue Jun 08, 2010 2:42 pm

Re: The Journal of Dead Ends

Human error holds no water here, as any experiment or piece of research may contain mistakes which go unnoticed. I was wrong to write "don't bother"; of course we should bother to go down the same line as before, but we should be informed when we do so.
So you've designed an experiment and want to check if anyone has tried it before. You see that someone did ten years ago and failed to get a useful result, so you think "OK, they ballsed it up somehow". But what if thirty different people have tried it independently of each other and all failed to get a useful result? Surely that would cause you to re-examine your design, which might in turn lead to a better experiment which does finally add some knowledge. I agree with your last post as a response to my last post, which I see was badly worded; however, I still do not see why positive and negative results are published and null results are not.
Tue Jul 27, 2010 1:04 am
DeathofSpeech | Posts: 220 | Joined: Tue Jul 20, 2010 2:28 pm | Location: I can't know with certainty | Gender: Tree

Re: The Journal of Dead Ends

Ratmonk wrote: Human error holds no water here, as any experiment or piece of research may contain mistakes which go unnoticed. I was wrong to write "don't bother"; of course we should bother to go down the same line as before, but we should be informed when we do so.
So you've designed an experiment and want to check if anyone has tried it before. You see that someone did ten years ago and failed to get a useful result, so you think "OK, they ballsed it up somehow". But what if thirty different people have tried it independently of each other and all failed to get a useful result? Surely that would cause you to re-examine your design, which might in turn lead to a better experiment which does finally add some knowledge. I agree with your last post as a response to my last post, which I see was badly worded; however, I still do not see why positive and negative results are published and null results are not.


In science we do exclude lines of inquiry, but we do so for a substantial reason.

Let's use Pegasus as an example.

If we apply reason, and cladistic phylogenetics, we can discount Pegasi.
There is no path through which an animal could inherit all of the genetic features required for it to have the traits of Pegasus.
There are no mammalian hexapods; therefore, regardless of whether some creatures evolved forelimbs into wings, there is no line through which an animal could also inherit an additional set of limbs which did not.
We must not exclude them solely by their absence from the taxonomic record, but we must exclude that for which no possible line of genetic inheritance exists.
It is a positive disproof rather than an exclusion by absent evidence.

We can then say that inquiry into the existence of pink Pegasi is a frivolous inquiry, and be justified.
That is a valid exclusion and necessary to avoid redundancy of effort. We MUST exclude that which isn't possible to make inquiry efficient.
We must have substantial reason to exclude.

When we design an inquiry we substantiate the line of logic behind it with precedence whenever possible.
We apply precedence either directly or through some apparently suitable analogy.
If we don't get the results we seek we may not exclude the strategy as possibly valid unless we can demonstrate that either the analogy used, or the precedent used was invalid.

Publishing a failure excludes not merely the inquiry, but the foundation upon which the inquiry was based.
If that can be done in such a way as to provide positive proof that the precedent is invalid, then the inquiry did not fail.
If it can't provide a reason that the foundation of the inquiry was in error, either an error in the analogy employed or an error in the precedent upon which the analogy was based, then it becomes an exclusion by absence rather than positive proof that the inquiry is without merit.

"I did it this way and it didn't work," is not sufficient. Unless you can isolate all possible reasons that the inquiry failed.
No, you can't exclude human error at that point because unless you know exactly why the inquiry failed then it is always possible that you overlooked something.
One can't exclude human error by the absence of information... only by positive proof that the error lies elsewhere.

Thirty people before you can fail, and the information is meaningless unless a positive conclusion can be reached that the inquiry deserves no merit.

Yes, that means that science on the whole is a rather tedious and repetitious process.
It does not advance by leaps (usually) but by slow measured progress of building upon precedence.
Reason Bran, with two scoops of Objectivity in every box and loaded with Bran fiber goodness...
...you'll never be full of shit again.


Science - You can see why it works.
Religion - You can't see why it doesn't work
Tue Jul 27, 2010 12:51 pm
AndromedasWake (League Legend) | Posts: 598 | Joined: Fri Feb 20, 2009 11:38 pm | Location: Captain's Chair, League HQ | Gender: Cake

Re: The Journal of Dead Ends

I like this suggestion, but it is not strictly directed at the actual site so I'm moving it to Sci/Math which seems to be most appropriate for its topic.
(( "We are 'star-stuff'. We are a way for the cosmos to know itself." - Carl Sagan | Music! | Twitter - [ AndromedasWake | SiriusStargazer ] ))
Mon Aug 02, 2010 12:46 pm
monitoradiation | Posts: 566 | Joined: Sat Feb 21, 2009 11:05 pm

Re: The Journal of Dead Ends

As much as I like what the OP suggests, there are simply too many ways to fail for them all to be recorded. (I know Failblog tries to capture them all... that's why there's so much lulz on the interwebs.)

I think the most parsimonious way, going forward, is to present what methods work; from those it may then be inferred which don't. If there are ways that work that we haven't found yet, then that's a consequence of being cautious yet inefficient. I don't think science, as an enterprise, needs a timeline for completion.
Tue Aug 03, 2010 6:48 pm
DeathofSpeech | Posts: 220 | Joined: Tue Jul 20, 2010 2:28 pm | Location: I can't know with certainty | Gender: Tree

Re: The Journal of Dead Ends

I'm going to 180 here and change the question...
Rather than a log of failures of the experiment itself, let's see if we can get something more general by tracking only the dependencies.

A database that records reagents, devices, software, and environmental factors. Anything and everything used.

Batch number, lot number, serial number, etc... trackable information.

The search would have to be naive. In other words, each entry would not record the experiment but would only record a list of the dependencies.
Each item would be listed in a strict format.
The search could not assume suitability for purpose for any of the dependencies, but the properties of a dependency which are most essential to the inquiry could be sub-tabled... like "sulfur content not higher than 0.0003%", if that is important to the inquiry.

The search would only correlate a failure condition in otherwise undescribed inquiries with the dependencies they have in common.

Something like this might be useful before the experiment had to be abandoned and would not prejudice future inquiries without cause.
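To make the idea concrete, here is a minimal sketch of how such a naive dependency log and correlation search might look, assuming a small SQLite store. The table layout, field names, and example lot numbers are all illustrative assumptions, not a description of any existing system:

# A minimal sketch, not a real system: all table names, fields, and the
# correlation query below are assumptions made for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE inquiry (
    id      INTEGER PRIMARY KEY,
    outcome TEXT NOT NULL              -- 'ok' or 'failed'; the inquiry itself stays undescribed
);
CREATE TABLE dependency (
    inquiry_id INTEGER REFERENCES inquiry(id),
    kind       TEXT NOT NULL,          -- reagent, device, software, environmental factor
    identifier TEXT NOT NULL,          -- batch/lot/serial number in a strict format
    property   TEXT,                   -- optional sub-tabled property, e.g. 'sulfur <= 0.0003%'
    PRIMARY KEY (inquiry_id, kind, identifier)
);
""")

def record_inquiry(outcome, dependencies):
    # Log only the outcome and the dependency list, never the experiment itself.
    cur = conn.execute("INSERT INTO inquiry (outcome) VALUES (?)", (outcome,))
    conn.executemany(
        "INSERT INTO dependency (inquiry_id, kind, identifier, property) VALUES (?, ?, ?, ?)",
        [(cur.lastrowid, kind, ident, prop) for (kind, ident, prop) in dependencies],
    )

def suspect_dependencies(min_failures=2):
    # Naive correlation: dependencies shared by several failed inquiries.
    return conn.execute("""
        SELECT d.kind, d.identifier, COUNT(*) AS failures
        FROM dependency d JOIN inquiry i ON i.id = d.inquiry_id
        WHERE i.outcome = 'failed'
        GROUP BY d.kind, d.identifier
        HAVING COUNT(*) >= ?
        ORDER BY failures DESC
    """, (min_failures,)).fetchall()

# Two unrelated failed inquiries sharing one (hypothetical) reagent lot flag that lot as suspect.
record_inquiry("failed", [("reagent", "ninhydrin lot 4411", None),
                          ("device", "spectrometer SN-220", None)])
record_inquiry("failed", [("reagent", "ninhydrin lot 4411", None),
                          ("software", "fitter v2.1", None)])
print(suspect_dependencies())  # [('reagent', 'ninhydrin lot 4411', 2)]

The point of the design is that the log never describes the inquiry itself, only its outcome and its dependencies, so a run of failures sharing one reagent lot can flag that lot as suspect without prejudicing the underlying line of research.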
Reason Bran, with two scoops of Objectivity in every box and loaded with Bran fiber goodness...
...you'll never be full of shit again.


Science - You can see why it works.
Religion - You can't see why it doesn't work
Thu Aug 05, 2010 1:02 am