The study of beliefs in dialogue has proceeded along two separate lines: the creation of logical formalisms which capture the inferences that it seems reasonable for agents to make, and the construction of implemented models that approximate what agents actually do. In our project to connect the two approaches we have created a belief model based on a reason maintenance system, and here we present a theory in nonmonotonic logic which describes the behaviour of this model. We start out by examining the consequences of combining the use of modal operators (to represent nested beliefs) with autoepistemic operators (as a means of nonmonotonic inference). We then look at how some general principles of belief modelling can be represented, such as the persistence and ascription of beliefs, before applying our findings to the construction of a theory for the specific domain in which we are interested.
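As a purely illustrative sketch (not taken from the abstract above), the combination it describes can be pictured as modal belief operators for nesting plus an autoepistemic operator for defaults; the agent names and the particular axioms below are assumptions made only to show the shape such a theory can take.

```latex
% Illustrative sketch only: nested beliefs plus autoepistemic defaults.
% B_s \varphi : the system believes \varphi;  B_u \varphi : the user believes \varphi.
% L \psi : \psi is in the reasoner's belief set (the autoepistemic operator).

% Default ascription: if the system believes \varphi and it is consistent to
% assume the user believes it too, ascribe the belief to the user.
B_s \varphi \wedge \neg L \neg B_u \varphi \rightarrow B_u \varphi

% Default persistence: a belief held at time t persists to t+1 unless
% something is believed to the contrary.
B_s^{t} \varphi \wedge \neg L \neg B_s^{t+1} \varphi \rightarrow B_s^{t+1} \varphi
```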
Autoepistemic logic is a formal logic for representing and reasoning about knowledge of one's own knowledge. While propositional logic can only express facts, autoepistemic logic can also express knowledge of facts and lack of knowledge about facts.
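The hallmark autoepistemic inference uses a modal operator L ("it is believed that") to turn an absence of knowledge into a conclusion; a minimal sketch, with an illustrative atom:

```latex
% L p reads "p is in the agent's belief set".
% Lacking any evidence for rain, the agent concludes it is not raining:
\neg L\,\mathit{rain} \rightarrow \neg \mathit{rain}
% The inference is non-monotonic: if "rain" is later added as a premise,
% the conclusion \neg rain must be withdrawn.
```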
Proof-theoretic formalization of a non-monotonic logic begins with the adoption of certain non-monotonic rules of inference, and then prescribes contexts in which these non-monotonic rules may be applied in admissible deductions. This is typically accomplished by means of fixed-point equations that relate the sets of premises to the sets of their non-monotonic conclusions. Default logic and autoepistemic logic are the most common examples of non-monotonic logics that have been formalized in this way.[1]
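A minimal sketch of such a fixed-point equation, in the form standardly used for autoepistemic logic (a stable expansion): a set of conclusions E must reproduce itself once the premises T are closed under introspection about E.

```latex
% E is a stable expansion of the premise set T iff
E = \mathrm{Cn}\bigl( T \cup \{\, L\varphi \mid \varphi \in E \,\} \cup \{\, \neg L\varphi \mid \varphi \notin E \,\} \bigr)
% Cn is classical consequence (with L\varphi treated as atomic); the
% non-monotonic conclusions of T are exactly the members of such an E.
```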
Model-theoretic formalization of a non-monotonic logic begins by restricting the semantics of a suitable monotonic logic to certain special models, for instance minimal models, and then derives the set of non-monotonic rules of inference, possibly with restrictions on the contexts in which these rules may be applied, so that the resulting deductive system is sound and complete with respect to the restricted semantics. Unlike some proof-theoretic formalizations, which suffered from well-known paradoxes and were often hard to evaluate with respect to their consistency with the intuitions they were supposed to capture, model-theoretic formalizations are paradox-free and leave little, if any, room for confusion about which non-monotonic patterns of reasoning they cover. Formalizations of non-monotonic reasoning that initially revealed undesirable or paradoxical properties, or failed to capture the intended intuitions, but were later successfully formalized by model-theoretic means include first-order circumscription, the closed-world assumption, and autoepistemic logic.[1]
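The closed-world assumption is the simplest of these examples: keeping only the minimal models of a theory amounts to adding the negation of every ground atom the theory does not entail. A minimal sketch, with illustrative atoms:

```latex
% CWA(T) adds the negations of all underivable ground atoms:
\mathrm{CWA}(T) = T \cup \{\, \neg p \mid p \text{ is a ground atom and } T \nvdash p \,\}
% Example: if T = \{\mathit{flight}(a,b)\}, then \neg\mathit{flight}(b,a) \in \mathrm{CWA}(T),
% since T does not entail \mathit{flight}(b,a); the unique minimal model of T
% makes exactly the listed atoms true.
```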
In logic, a rule of inference, inference rule, or transformation rule is a logical form consisting of a function which takes premises, analyzes their syntax, and returns a conclusion (or conclusions). For example, the rule of inference modus ponens takes two premises, one in the form "If p then q" and another in the form "p", and returns the conclusion "q". The rule is valid with respect to the semantics of classical logic (as well as the semantics of many other non-classical logics), in the sense that if the premises are true (under an interpretation), then so is the conclusion.
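A rough sketch of this view of a rule as a purely syntactic function, here for modus ponens; the representation of formulae as nested tuples is an assumption made only for illustration:

```python
# Illustrative sketch: modus ponens as a function from a set of premises to
# the set of conclusions it licenses. A conditional "if p then q" is encoded
# as the tuple ('->', 'p', 'q'); this encoding is an assumption, not a
# standard library format.

def modus_ponens(premises):
    """Return every conclusion obtainable from the premises by modus ponens."""
    conclusions = set()
    for formula in premises:
        # Look for a conditional ('->', antecedent, consequent) ...
        if isinstance(formula, tuple) and len(formula) == 3 and formula[0] == '->':
            _, antecedent, consequent = formula
            # ... whose antecedent also occurs among the premises.
            if antecedent in premises:
                conclusions.add(consequent)
    return conclusions

# From "If p then q" and "p" the rule returns "q":
print(modus_ponens({('->', 'p', 'q'), 'p'}))  # {'q'}
```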
Typically, a rule of inference preserves truth, a semantic property. In many-valued logic, it preserves a general designation. But a rule of inference's action is purely syntactic, and does not need to preserve any semantic property: any function from sets of formulae to formulae counts as a rule of inference. Usually only rules that are recursive are important; i.e. rules such that there is an effective procedure for determining whether any given formula is the conclusion of a given set of formulae according to the rule. An example of a rule that is not effective in this sense is the infinitary ω-rule.[1]
Popular rules of inference in propositional logic include modus ponens, modus tollens, and contraposition. First-order predicate logic uses rules of inference to deal with logical quantifiers. See List of rules of inference for examples.
Originally posted by sadybull
Well, it looks like someone was threatened for looking into their shenanigans. I quote a post, and I will post the link.
The person who wrote this comment is named Salvador. Have a look for yourself.
translate.google.com...://terraeantiqvae.com/profiles/blogs/iruna-veleia-y-sus-3%3Fid%3D2043782%253ABlogPost%253A6583 9%26page%3D3&prev=/search%3Fq%3Doscar%2Bescribano%2Bayndryl%26client%3Dfirefox-a%26rls%3Dorg.mozilla:en-USfficial%26biw%3D1024%26bih%3D467
edit on 22-7-2013 by sadybull because: bad
That link....quote then copy and paste....is about a false item of antiquity with modern inscriptions in Basque.
The Iruña-Veleia site had been granted unusually large funding of 3.72 million euros by the Basque regional government. In 2006, a series of sensational findings at Iruña-Veleia were announced to the press by the director of the archeological mission. These included what would have been the oldest non-onomastic texts in Basque, which were hailed as the first evidence of written Basque.
The discovery of a series of inscriptions and drawings on pottery fragments was also announced, some of which refer to Egyptian history and some of which are even written in Egyptian hieroglyphs. Finally, the finding of the earliest representation of the Calvary (the crucifixion of Jesus) found anywhere to date was announced.[5]
However, none of these findings were submitted to any scholarly journal or any serious expert assessment.
Fabricated pieces
Eventually, all these inscriptions turned out to be a fabrication, as concluded by the 26 experts who analyzed the data for almost 10 months and whose findings went public on November 19, 2008. The texts were described as "crude manipulation," "incoherent," containing texts and words both "incorrect and non-existent", and as being so "obviously false as to be almost comical."[6] The case has been dubbed the "biggest archaeological fraud in the history of the Iberian Peninsula"[7] and "the product of an elaborate hoax."[8]
The regional government of Alava is pursuing legal action against the perpetrators of the fraud.[9][1]