Originally posted by ALightinDarkness
As a PhD student in the social sciences, I have a very good grasp of it.
As you and I both know the media absolutely loves spinning academic research whenever it fits their agenda.
They do. Indeed, media reporting of science is generally dreadful.
And you and I both know simple correlations have no meaning in terms of causation. And since research has been done in this area, and NO
ONE has found causation, it's pretty clear what is going on here - an agenda.
That I disagree with. A reliable, strong, significant correlation is suggestive of some form of causation (could be a to b, b to a, unknown(s) to a &
b). It does not mean a causes b (or b causes a), but it is certainly something that would say to me: time to get into this relationship in detail. If
it had no meaning, it wouldn't even be worth the time, no? Yet I see scientists using bivariates all the time.
If you think that only multiple regressions are appropriate in this type of study, you have just discarded much very fine and robust research out of hand.
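Here's a quick sketch of that 'unknown(s) to a & b' case, if it helps (my own toy simulation in Python, nothing to do with the study's data; the coefficients and seed are arbitrary): a third variable driving both a and b will manufacture a healthy bivariate correlation all by itself.

    import numpy as np

    # Toy illustration: an unmeasured common cause c drives both a and b.
    # Neither a nor b causes the other, yet they correlate.
    rng = np.random.default_rng(0)
    n = 1000
    c = rng.normal(size=n)             # the hidden third variable
    a = 0.8 * c + rng.normal(size=n)   # a depends only on c
    b = 0.8 * c + rng.normal(size=n)   # b depends only on c

    print(f"r(a, b) = {np.corrcoef(a, b)[0, 1]:.2f}")  # comes out around .4

Which is exactly why I said a correlation is a prompt to investigate the relationship, not a verdict on its direction.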
I had to laugh at you thinking an R squared above 0.65 would somehow be impossible - it's pretty reasonable, especially if someone were doing
a quality regression that has a ton of controlling variables, in which case the R squared would naturally be higher than normal and they would need to
run some other statistics to make sure the results were not due to the high number of controlling variables.
Aye, but we are not talking about multiple regression in this study. I also never said impossible. I think I said 'fairly rare', and for a bivariate
it is. And in this case expecting an R^2 of .65 to be, what was it you said, 'generous', is pretty laughable. r = .8 (which is what an R^2 of .65 implies for a bivariate) is a very, very strong correlation.
And your 'spurious correlations' blah was indicative of someone who just wants to dismiss this issue out of hand.
I'm not; I raised my concerns earlier in the thread. But your approach is more troubling to me, especially so if you are a PhD student.
IQ is known to be only one measure of intelligence and the literature is in much disagreement about its validity. If they were interested in
finding out the truth, they would have included educational variables and other standardized test scores in a multivariate.
I agree. But if you go on to be an independent researcher, you'll learn to use what you have to hand.
An r squared of 0.36 is pretty laughably low, explaining only 36% of the variation and not controlling for anything obvious. I'd
love to know what the p is, but I'll take your word on it that it's < 0.05. I'd also want to know how these people were surveyed to determine how
generalizable it was. But hey, even bad research gets into journals as long as it fits someone's bias.
But it certainly isn't meaningless. I'm sorry, it just isn't.
Oh, trust me. If you ever find r = .6, it is more likely than not to be p < .05. Not certainly, but very likely.
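If you want to see why, here's a back-of-the-envelope check (my own sketch; the sample sizes are just picked for illustration). The standard t-test for a Pearson correlation is t = r * sqrt((n-2)/(1-r^2)) on n-2 degrees of freedom:

    from scipy import stats

    def pearson_p(r, n):
        # Two-sided p-value for a Pearson correlation r at sample size n.
        t = r * ((n - 2) / (1 - r**2)) ** 0.5
        return 2 * stats.t.sf(abs(t), df=n - 2)

    for n in (10, 20, 50):
        print(f"r = .6, n = {n:2d}: p = {pearson_p(0.6, n):.4f}")
    # r = .6 clears p < .05 from roughly n = 12 upwards

So unless the sample were absolutely tiny, an r of .6 will come in significant.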
And again, I agree. Bad research does get published. Peer-review is a necessary but not sufficient form of quality control. I take that bias comment
as pretty much poor form. Peer-review generally involves much more than one individual, so you are taking pot-shots at probably 3 experts with
differing views here. Indeed, implicitly at people you don't even know who reviewed this manuscript.
Also, I'm not sure who in the world all of a sudden set standards for what was a moderate or strong relationship...I'm pretty sure we
didn't have a conference about that...and I'm pretty sure the professor I worked for as an RA who taught quantitative methods would laugh at me if I
told her those guidelines. Perhaps 0.65 is a bit high overall, but it would certainly need to be higher than 0.36 if everything was properly controlled.
.65 for R^2 is much too high as some sort of barrier for acceptable relationships in behavioural science. I'll tell you a little story: I used to
work in pharmaceutical research all those years ago, and I would knock off assay calibrations of .99999999999999999 all the time. When I moved into
behavioural science as a UG, I did a study and found a correlation of something like .35, which I said was poor. And it is! For chemistry. But I was
told it was actually a moderate relationship. It's just something you might have missed or overlooked. Around .3-.4 is moderate, .5+ is strong.
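To put numbers on that (a little sketch of my own, using the conventions I was taught rather than anything from this paper; for a bivariate correlation, R^2 is just r squared):

    def describe(r):
        # Rough behavioural-science conventions: ~.3-.4 moderate, .5+ strong.
        label = "strong" if abs(r) >= 0.5 else "moderate" if abs(r) >= 0.3 else "small"
        return f"r = {r:.2f} -> R^2 = {r**2:.2f} ({label})"

    for r in (0.99, 0.60, 0.35):
        print(describe(r))
    # r = 0.99 -> R^2 = 0.98 (strong)    the chemistry-grade calibration
    # r = 0.60 -> R^2 = 0.36 (strong)    the correlation at issue here
    # r = 0.35 -> R^2 = 0.12 (moderate)  a typical behavioural finding

So the R^2 of .36 you find laughable corresponds to r = .6, which by those conventions is a strong effect for this kind of work.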
I'd also want to know how these people were surveyed to determine how generalizable it was. But hey, even bad research gets into journals as
long as it fits someone's bias.
I also want to read the full study. I'm still unclear how they really went about it, so I won't take it at face value for now. But you do appear to
just want to dismiss it.
It has FAILED in an attempt to imply causation, and a random correlation is just that - spurious until proven otherwise.
Not spurious at all. That's the wrong approach. It shows a significant negative correlation between two variables; that is not random. And it is most
likely not spurious. That's what the stats are for. If we are at p < .05, we are talking about less than a 5% chance of a correlation this strong turning up if there were really nothing there.
So we accept it tentatively and study further. Maybe someone will get out there and do a more thorough analysis using your favoured multivariate approach.
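And for what p < .05 actually buys us, a quick null simulation (again my own sketch; n = 50 and the seed are arbitrary choices): correlate pairs of pure noise many times and see how often they fake significance.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    trials, n, false_pos = 10000, 50, 0
    for _ in range(trials):
        x, y = rng.normal(size=n), rng.normal(size=n)  # no real relationship
        r, p = stats.pearsonr(x, y)
        if p < 0.05:
            false_pos += 1
    print(f"{false_pos / trials:.1%} of null correlations hit p < .05")  # ~5%

That 5% is exactly the error rate we sign up for, which is why replication and further study matter, not dismissal.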
I think thou doth want to use questionable research tactics to verify your world view, my dear.
Not my research, so you might want to direct your complaints to Lynn et al and the journal editor.
The key phrase of the day is: construct validity.
Yeah, great phrase.
So, you question the relationship between IQ and intelligence? OK, fair enough, I raised a similar concern earlier in the thread. However, it does
measure something of interest that has been shown to have predictive validity in numerous ways (neurological, social, educational, behavioural).
Or do you question the veracity of their measure of belief/non-belief? I'm interested myself as to how they went about it. I have an inkling, and if
I'm correct - not the way I would have gone about it. But you sometimes have to work with what you have.
[edit on 13-6-2008 by melatonin]