
The Extraterrestrial Hypothesis and the null hypothesis

posted on Apr, 16 2014 @ 01:57 AM
reply to post by BayesLike
 


You said:


Quite frankly, because of your inexperience with analyzing data, you totally misunderstood his statement. You do have to use all of the data in the sample, you can't pick and choose to look at only the data you like. What you are doing is ignoring 99.999999999% of the data and selecting a few cases which you like. That isn't permitted.


Tell me, exactly what 99.99999999% of the data am I ignoring? Could you point me to this data?



posted on Apr, 16 2014 @ 02:01 AM
reply to post by neoholographic
 



The Null is ASSUMED to be true. YOU DON'T TEST THE NULL. The only thing you can do is demonstrate the null is false. This is why it's called falsification, Horshack.

Let's look at the heart attack and aspirin example again. The alternative hypothesis is looking for a relationship between aspirin and the prevention of heart attacks. The whole idea behind falsification is to put the null in a position of strength: it's assumed to be true.

You then give people aspirin and see if there's a correlation between taking aspirin and preventing heart attacks.


You obviously read the wiki article but left out some stuff:

For instance, a certain drug may reduce the chance of having a heart attack. Possible null hypotheses are "this drug does not reduce the chances of having a heart attack" or "this drug has no effect on the chances of having a heart attack". The test of the hypothesis consists of administering the drug to half of the people in a study group as a controlled experiment. If the data show a statistically significant change in the people receiving the drug, the null hypothesis is rejected.
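To make the quoted example concrete, here is a minimal sketch of how that kind of test is typically carried out in code (all counts are invented for illustration, not real trial data): the null of "no effect" is rejected only if the gap between the drug group and the placebo group is too large to be plausible under the null.

import math

# Hypothetical counts for illustration only -- not real trial data.
drug_attacks, drug_n = 30, 1000        # heart attacks in the drug group
placebo_attacks, placebo_n = 50, 1000  # heart attacks in the placebo group

p1 = drug_attacks / drug_n
p2 = placebo_attacks / placebo_n
p_pool = (drug_attacks + placebo_attacks) / (drug_n + placebo_n)

# Two-proportion z-test. Under the null ("the drug has no effect"),
# any difference between the two rates is just sampling noise around zero.
se = math.sqrt(p_pool * (1 - p_pool) * (1 / drug_n + 1 / placebo_n))
z = (p1 - p2) / se

# One-sided p-value: probability of a gap at least this favorable to the
# drug if the null were true (normal approximation, Phi(z) via erfc).
p_value = 0.5 * math.erfc(-z / math.sqrt(2))

alpha = 0.05  # significance level chosen before looking at the data
print(f"p-value = {p_value:.4f}")
print("reject the null" if p_value < alpha else "fail to reject the null")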




posted on Apr, 16 2014 @ 02:10 AM
reply to post by neoholographic
 

Yes, I am bringing up crop circles because it's part of the data that you are accusing me of ignoring. It's important because it illustrates a severe problem with your presentation. You essentially included crop circles in your "evidence" pile but also concluded crop circles are not alien. There is absolutely no clear definition of anything you are trying to show. Your "data" is not actual data that can be analyzed objectively by any statistical process whatsoever.



posted on Apr, 16 2014 @ 02:11 AM
reply to post by ZetaRediculian
 


Wow, just wow.

It's sad when people can't even accept something so simple to grasp. You said:


For instance, a certain drug may reduce the chance of having a heart attack. Possible null hypotheses are "this drug does not reduce the chances of having a heart attack" or "this drug has no effect on the chances of having a heart attack". The test of the hypothesis consists of administering the drug to half of the people in a study group as a controlled experiment. If the data show a statistically significant change in the people receiving the drug, the null hypothesis is rejected.


You're not testing the null, you're testing the hypothesis that a certain drug may reduce the chance of a heart attack. The null is a statement of truth that says this drug doesn't reduce the chances of having a heart attack. If the data show the drug helped people, then the null is rejected.

The null is assumed to be true and that's why it's called FALSIFICATION.

You're showing that a statement assumed to be true is false. You don't test the null, it's assumed to be true. This puts the onus on those who support the alternative hypothesis to reject the null.



posted on Apr, 16 2014 @ 02:25 AM

neoholographic
First let me say, you don't test the null hypothesis. The null is assumed to be true. Here's an example:


Not quite correct, my friend. The null hypothesis is never assumed to be true. In a very real sense, it is considered false or we would not be doing the test in the first place. The other view of this (your view) is what is taught in kindergarten-level statistics. We start people off with very simple ideas because that is all they can usually cope with. Later on, if they take more classes, they get exposed to a more complete picture of what is going on.

When we test a hypothesis, what we are doing is determining whether the alternate explains the data collected better than can be expected under the null hypothesis in the presence of noise. This is normally approached in hypothesis testing through a p-value. A p-value is (roughly speaking) the probability of observing data at least as extreme as what was actually observed, assuming the null hypothesis is true. We don't falsify anything, ever. In fact, we can't. We also can't prove anything. We can only make statements about the probability the data could have occurred given different assumed distributions and parameters (and forms of the model used).
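As a loose illustration of that p-value idea (a toy simulation with invented numbers, not the analysis being asked for in this thread): assume the null is true, replay the aspirin experiment many times, and count how often chance alone produces a result at least as extreme as the one observed.

import numpy as np

# Toy simulation of the p-value idea, using the aspirin example with
# invented numbers. Under the null, aspirin does nothing, so both groups
# share the same underlying heart-attack rate.
rng = np.random.default_rng(0)

n_per_group = 1000     # hypothetical group size
baseline_rate = 0.04   # hypothetical heart-attack rate if aspirin has no effect
observed_gap = 20      # hypothetical result: 20 fewer attacks in the aspirin group

trials = 20_000
control = rng.binomial(n_per_group, baseline_rate, size=trials)
treated = rng.binomial(n_per_group, baseline_rate, size=trials)

# Fraction of null-world replays that look at least as extreme as the data.
p_value = np.mean((control - treated) >= observed_gap)
print(f"simulated p-value ~= {p_value:.4f}")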

I have not seen you refer to a model or your p-value for your analysis and long ago concluded you are basically unable to do an appropriate analysis. But you are really, really deeply into hand-waving, name-calling, insulting others, and hyperbole! That said, I assume you are maybe still in high school and for some reason arrogantly believe you actually know something about data analysis. You don't have a clue. In fact, you are badly confused, as Zeta and others have noted.



posted on Apr, 16 2014 @ 02:45 AM

ZetaRediculian
reply to post by usertwelve
 



This is the guts of it. The null hypothesis is a statistics problem, not a scientific method problem. The statistical evidence weighs in favor of the ETH which falsifies the null hypothesis. I suppose it comes down to a matter of individual taste.

It's only a statistical problem if there are actual statistics to show. I haven't seen any, and nothing has been defined. "Statistical evidence" needs to be in a statistical format. Bluebook was an attempt at that. Showing YouTube videos and links to UFO websites is not statistical information.


Statistical in this sense means that, as data comes in, a picture builds up. Myriad pieces of data create a statistical drift towards ETH.



posted on Apr, 16 2014 @ 02:48 AM

neoholographic
Tell me, exactly what 99.99999999% of the data am I ignoring? Could you point me to this data?


That you don't know is telling. The data you need is a sample of all the observations made under either all conditions or under specific observational conditions in a controlled setting. Because you are not doing an experiment (controlled setting), you have to default back to all available data. This isn't a very good analytical setting because we don't collect data for the vast majority of observations -- we just collect cases which seem "interesting." What is missing, unfortunately, is the vast majority of the data. That data is all the observations everyday people make about objects under different conditions all the time. Why? Because UFOs are unidentifieds -- and the null involves identifieds.
It's OK, and preferable in all cases, to work with samples. But we don't have samples, we just have selected interesting observations which are an extremely small fraction of all observations.

Why all observational data? Well, we have to know what the null hypothesis looks like. Then and only then can we specify what the tail (the interesting observations) should look like under the null hypothesis. First, though -- we need a really cleanly stated null hypothesis, which you have never stated. And we need a really cleanly stated alternate hypothesis. Then we need a model which contains both the null hypothesis and the alternate as possible outcomes. I'm speaking loosely here so that you have a chance of following, but there is a rigorous way to do this which is way beyond your knowledge level. And yes, it is possible to do this (specify null, alternate, and the model plus assumptions for the error distribution) for UFO observations.

Once all that is in place, we can go collect a sample -- but we have to collect all type of observations under all types of conditions. Some will (hopefully) contain UFOs. If the sample is large enough, it will.

There are some other things which can be done which are less statistically appropriate, but we still have to know the properties of normal misidentification and normal lack of identification to get anywhere with any sort of rational analysis. Doable -- and maybe simpler -- but still an expensive proposition.

Anything else is hand-waving and not an analysis of a rational sort. It's just self-delusion that any sort of even semi-valid analysis has occurred at this point.



posted on Apr, 16 2014 @ 03:00 AM

EnPassant
Statistical in this sense means that, as data comes in, a picture builds up. Myriad pieces of data create a statistical drift towards ETH.


No -- Zeta is right. All that is occurring here is an enumeration of selected special cases. You can't come up with a frequency of any of these observations within all normal observations, all normal misidentifieds, and all normal unidentifieds. You can't even do that in simple sub-categories, such as radar observations. Without that knowledge, you have no hope of determining if these UFO cases are abnormal (not typical under the null hypothesis) because these cases are extremely rare when you consider all observations.

For instance, even under the null in the radar subcategory, there is a frequency of radar contacts which are totally inappropriate for known objects. We would expect the same for unknown objects too. But we have to break that frequency out by conditions known to affect radar. Can that be done? Yes. But the data does not exist at this point in time. Without a program to collect the data to understand the null, we could consider working with a panel of radar experts and designers to make some guesses at the frequency of radar errors under different conditions. But that would not be satisfying in the way a real analysis would be, and it would be subject to a lot of (valid) criticism.



posted on Apr, 16 2014 @ 03:01 AM
reply to post by BayesLike
 


It is not a matter of screening out a few events. It is a matter of identifying emergent themes and backing up good sightings with evidence from other domains. The themes in question are stalling car engines, falling leaf motion, etc. There are enough sightings of this nature to formulate the hypothesis.



posted on Apr, 16 2014 @ 03:07 AM

EnPassant

It is not a matter of screening out a few events. It is a matter of identifying emergent themes and backing up good sightings with evidence from other domains. The themes in question are stalling car engines, falling leaf motion, etc. There are enough sightings of this nature to formulate the hypothesis.


Car engines fail without UFOs. How often does that happen under different conditions -- and in different parts of the country, city, etc.? So you have some cases where somebody saw a light and their car failed. So what? Is that frequency of cases within the normal probability of occurrence or not? This is how you have to work with this data. Car engines do fail with or without lights being spotted. A few cases? Not so impressive, as I'd expect some just considering how often car engines do fail. Millions of cases in a particular week and only when lights are spotted -- now that would be impressive (rare) under the null hypothesis!



posted on Apr, 16 2014 @ 03:15 AM

Phage
reply to post by EnPassant
 


To falsify it only requires a significant falsification of the EVIDENCE or an alternative explanation for that evidence.
No. What you are talking about would be invalidation of evidence in favor of the hypothesis. A hypothesis is not falsified by invalidation of evidence. A hypothesis can be weakened by invalidation of evidence but it is not falsified by it.

Falsification of a hypothesis is carried out by verification of the null hypothesis. Falsification is carried out by obtaining evidence which validates the null. The OP states that the null is: "No UFOs are controlled by extraterrestrials." This is not a falsifiable hypothesis.



Ok. The null should be: the ETH is not based on a reasonable appraisal of the evidence. The hypothesis is that the ETH is a reasonable assumption, considering the evidence.



posted on Apr, 16 2014 @ 03:24 AM

BayesLike

EnPassant

It is not a matter of screening out a few events. It is a matter of identifying emergent themes and backing up good sightings with evidence from other domains. The themes in question are stalling car engines, falling leaf motion, etc. There are enough sightings of this nature to formulate the hypothesis.


Car engines fail without UFOs. How often does that happen under different conditions -- and in different parts of the country, city, etc.? So you have some cases where somebody saw a light and their car failed. So what? Is that frequency of cases within the normal probability of occurrence or not? This is how you have to work with this data. Car engines do fail with or without lights being spotted. A few cases? Not so impressive, as I'd expect some just considering how often car engines do fail. Millions of cases in a particular week and only when lights are spotted -- now that would be impressive (rare) under the null hypothesis!


But it is not just engines failing. It is engine failure under a specific set of conditions. Conditions that precipitate a series of events that are well documented.



posted on Apr, 16 2014 @ 03:30 AM

EnPassant
Ok. The null should be: the ETH is not based on a reasonable appraisal of the evidence. The hypothesis is that the ETH is a reasonable assumption, considering the evidence.


That would not work and it would not be testable. Let's work with something simple like car engines failing.

A testable null would be something like: a car fails when a light is seen overhead 0.x% of the time, and the alternate would be something like: a car fails at greater than 0.x% when a light is seen overhead. Now, the 0.x% would need to be the expected frequency of a car failing if the light overhead was not associated with cars failing. Depending on what 0.x% is, we know what sample size to work with; we can do a hypothesis test to determine if the presence of a light overhead makes a difference. Once it is known whether it does or doesn't, you may then apply your interpretation of what that means.

That interpretation would be subject to some debate perhaps, but the association between the car failing and the light would be determined with a known level of confidence and would not be debatable.

In a similar manner with other events, this is a way that you can get to a valid set of hypotheses to test which can be interpreted as involving (after interpretation) UFOs and potentially aliens. The effects on the cars, radar, missile silos, and so forth would not be debatable. Only the interpretation would be debatable.
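A rough sketch of that kind of test, with every number invented for illustration (in practice the baseline 0.x% would have to be estimated from real data on car failures when no light is present):

from scipy.stats import binom

# All numbers are hypothetical. The baseline rate stands in for the "0.x%"
# above: how often cars fail when no light is seen overhead.
baseline_rate = 0.001    # expected failure rate with no light present
n_light_events = 5000    # hypothetical sample: events where a light was seen overhead
failures_seen = 12       # hypothetical count of car failures among those events

# Null: failures during light sightings occur at the baseline rate.
# Alternate: they occur at a higher rate.
# p-value: probability of at least this many failures if the null were true.
p_value = binom.sf(failures_seen - 1, n_light_events, baseline_rate)

alpha = 0.01  # cutoff fixed before the data are collected
print(f"expected under the null: {n_light_events * baseline_rate:.1f} failures")
print(f"observed: {failures_seen}, p-value = {p_value:.4f}")
print("reject the null" if p_value < alpha else "fail to reject the null")

Whether an excess (if found) is then read as UFO interference or something mundane is the interpretation step; the test itself only says whether the failure rate during sightings is abnormally high.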



posted on Apr, 16 2014 @ 03:47 AM
reply to post by EnPassant
 


You can only test things you can observe and measure. So, instead of involving "aliens" as part of the hypothesis we work with the effects the aliens are expected to produce. These effects are measurable (countable is a type of measure). With the appropriate set of hypotheses, we can know if the effects are abnormal relative to blind chance. If they are abnormal, you then may have support for an alien interpretation. But leave the aliens in the interpretation.

If you look at the topic that Neo seems to like to go to -- quantum mechanics -- this is exactly what is done. There are measurable effects which can be described to have a frequency under a null hypothesis and an alternate. The interpretation is that (for example) a Higgs has been detected if the data fits the alternate better than the null. But they don't actually observe a Higgs, they observe a whole set of effects attributed to the Higgs at a higher rate than expected due to blind chance. After they get enough evidence, they may finally feel comfortable announcing the Higgs has been found.

Where the Higgs guys can have a problem, though, is collecting data until something becomes significant. That is not really allowed, as it will always happen if you collect enough data. You have to have a predefined stopping rule for collecting the data and then do the test whether you want to do it or not. I don't know whether a set stopping rule for data collection was or was not in place in the Higgs case; it probably wasn't. And I believe the confidence level was unusually easy, something like a p-value of 0.10 where 0.01 would be more reasonable for a major announcement in fundamental particles. So, to be convincing, their little experiment may need to be repeated many times, and detection might ultimately be decided to not have actually occurred 10 years from now. I'm not an expert in the Higgs arena, so don't take that as fact ... but this is how these things are actually done in many labs. It's much more likely to work this way in a lab if continued funding is at risk. Some scientists take chances with their careers when they really shouldn't; funding is a powerful incentive for early announcement and announcement with less than pre-determined confidence levels (we usually set a necessary p-value to meet as a cutoff ahead of the data collection).
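The stopping-rule point can be shown with a toy simulation (it has nothing to do with the actual Higgs analysis; the numbers are made up): here the null is true by construction, yet peeking at the data after every batch and stopping as soon as p < 0.05 "rejects" it far more than 5% of the time.

import numpy as np
from scipy.stats import norm

# Toy simulation: the null is TRUE (the data are pure noise with mean 0),
# but we test after every new batch and stop the moment p < 0.05.
rng = np.random.default_rng(1)
alpha, batch, max_batches, runs = 0.05, 20, 50, 2000

false_positives = 0
for _ in range(runs):
    data = np.empty(0)
    for _ in range(max_batches):
        data = np.append(data, rng.normal(0.0, 1.0, batch))  # true mean is 0
        z = data.mean() / (data.std(ddof=1) / np.sqrt(len(data)))
        p = 2 * norm.sf(abs(z))  # two-sided test of "mean = 0"
        if p < alpha:            # peeking: stop as soon as it looks significant
            false_positives += 1
            break

print(f"false-positive rate with peeking: {false_positives / runs:.2%} "
      f"(nominal rate was {alpha:.0%})")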



posted on Apr, 16 2014 @ 10:00 AM

neoholographic
reply to post by ZetaRediculian
 


Wow, just wow.

It's sad when people can't even accept something so simple to grasp. You said:

No. I didn't say anything. That was a direct quote from the Wiki article you referenced and was in response to your "Test what?" It also uses the same example that you used "heart attacks". Somehow you quoted some stuff from the wiki but left out "The test of the hypothesis". I even highlighted that part but somehow the bold disappeared?

So, not my quote; it's from your link, where you used the same example. I am putting the bold back:



For instance, a certain drug may reduce the chance of having a heart attack. Possible null hypotheses are "this drug does not reduce the chances of having a heart attack" or "this drug has no effect on the chances of having a heart attack". The test of the hypothesis consists of administering the drug to half of the people in a study group as a controlled experiment. If the data show a statistically significant change in the people receiving the drug, the null hypothesis is rejected.




You're not testing the null, you're testing the hypothesis that a certain drug may reduce the chance of a heart attack.

Test what?



posted on Apr, 16 2014 @ 10:20 AM
reply to post by BayesLike
 


You're just all over the place. You said:


Why? Because UFOs are unidentifieds -- and the null involves identifieds.
It's OK, and preferable in all cases, to work with samples. But we don't have samples, we just have selected interesting observations which are an extremely small fraction of all observations.


This makes no sense. You couldn't answer the question so you just rambled about nothing. This was the question.


Tell me, exactly what 99.99999999% of the data am I ignoring? Could you point me to this data?


You just stumbled through a long-winded post and said nothing. What does that mean? The null involves identifieds?

The null was "no U.F.O.'s are controlled by extraterrestrials."

All you have to do to falsify the null is show that there's no correlation between U.F.O.'s and radar reports, U.F.O.'s and trace evidence, U.F.O.'s and physical evidence, U.F.O.'s and eyewitness accounts and close encounters. Or show there's a better explanation for these correlations.

If I say:

"No Toyota's come in blue"

I don't need to look at all Toyota's in order to falsify the null. When I spot a blue Toyota the null which is assumed to be true is now (FALSE) ified. This is why it's called falsification.



posted on Apr, 16 2014 @ 10:32 AM
reply to post by ZetaRediculian
 


Zeta, you're making yourself look bad if you will not accept or even try to understand a simple scientific method of ensuring that those who support the alternative hypothesis have to falsify the null which is assumed to be true.

You can put "test the hypothesis" in bold or capital letters.

When they say test the hypothesis, they're talking about the drug helping to prevent heart attacks.

What does null mean?


1: having no legal or binding force : invalid
2: amounting to nothing : nil
3: having no value : insignificant


The reason it's called the null is that it represents zero. It represents nothing. You can't test nothing. In the case of science, the null hypothesis is assumed true, so there's nothing to test. You can only demonstrate that it's false by showing a correlation between two measurements.



posted on Apr, 16 2014 @ 10:36 AM
reply to post by neoholographic
 


So what are you testing?



posted on Apr, 16 2014 @ 10:38 AM

neoholographic
reply to post by ZetaRediculian
 


Zeta, you're making yourself look bad if you will not accept or even try to understand a simple scientific method of ensuring that those who support the alternative hypothesis have to falsify the null which is assumed to be true.




Neo, you're making Zeta look good by repeatedly demanding proof of a negative.



posted on Apr, 16 2014 @ 10:41 AM
reply to post by ZetaRediculian
 


The correlation between two measurements. You're testing the alternative hypothesis in order to falsify the null.

In this case you would be testing if there's a correlation between the drug and the prevention of heart attacks. The null hypothesis is assumed to be true. You don't test the null, you falsify it by testing the alternative hypothesis and showing a correlation between two measurements.


