- Counterfactuals are a special form of if-then conditionals in which
the antecedent is believed to be false. They are ubiquitous and are used extensively in our daily linguistic discourse. In general, a counterfactual is true
in virtue of a connection between its antecedent and its consequent, and such a connection we construe as causal.
However, the suspicious notion of metaphysical necessity involved here
presents a number of problems. The immediate problem is that counterfactuals cannot be interpreted by a simple truth-functional semantics, i.e., the truth-table method. We need modal notions such as "possibility" and "necessity" to handle them appropriately in a given context. Moreover, counterfactuals are
highly context sensitive, and they make sense only when we evaluate them under
an "appropriate theory". A semantics of counterfactuals that takes care of such a notion is forthcoming.
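The possible-worlds idea can be illustrated with a small sketch (this is not the formal semantics proposed here; the world representation and the similarity measure below are simplifying assumptions): a counterfactual A > C is taken to hold when C is true at all A-worlds closest to the actual world.

```python
from itertools import product

def closest_worlds(actual, antecedent, worlds, distance):
    """Among worlds satisfying the antecedent, keep those minimally
    distant from the actual world (a Lewis-style similarity ordering)."""
    a_worlds = [w for w in worlds if antecedent(w)]
    if not a_worlds:
        return []  # vacuous case: no antecedent-worlds at all
    d_min = min(distance(actual, w) for w in a_worlds)
    return [w for w in a_worlds if distance(actual, w) == d_min]

def counterfactual(actual, antecedent, consequent, worlds, distance):
    """A > C holds iff C is true at every closest antecedent-world."""
    return all(consequent(w)
               for w in closest_worlds(actual, antecedent, worlds, distance))

# Worlds over two propositions: fire (finger in fire), burnt.
atoms = ("fire", "burnt")
worlds = [dict(zip(atoms, vals)) for vals in product([True, False], repeat=2)]

# Hamming distance as a toy similarity measure, except that the causal
# connection fire -> burnt is treated as a law: worlds violating it are remote.
def distance(u, w):
    penalty = 100 if (w["fire"] and not w["burnt"]) else 0
    return sum(u[p] != w[p] for p in atoms) + penalty

actual = {"fire": False, "burnt": False}  # the antecedent is actually false
print(counterfactual(actual, lambda w: w["fire"], lambda w: w["burnt"],
                     worlds, distance))
# True: at the closest fire-worlds, the finger is burnt
```

The causal connection enters only through the similarity measure: lawlike regularities are held fixed (violating them makes a world remote), which is what makes "If I had put my finger in the fire, it would have burnt" come out true even though the antecedent is false.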
-
Counterfactuals allow us to learn from "experiences" we may never
actually have. For instance, we can learn from counterfactuals such as "If I had put my finger in the fire, it would have burnt" or "If I had taken Anacin, I would have been relieved of my headache". In both cases we can learn from the experiences of others. In philosophy, counterfactuals are widely used to
distinguish law statements from accidental generalizations. In AI,
they are widely used in natural language understanding, diagnosis,
finding inconsistencies in databases, and planning. Counterfactuals often make assertions about belief change: they are
embedded in revision processes, and each revision corresponds to accepting
a counterfactual as true.
However, the problem of belief revision is
twofold. First, how to accommodate new beliefs when new information generates
inconsistencies in the knowledge base. Second, how to preserve core
beliefs when new information, obtained from the most reliable source,
overrides old beliefs.
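The link between counterfactuals and revision can be sketched in Ramsey-test form: accept "A > C" iff C holds in the belief set revised by A. The literal-based belief sets and the naive revision operator below are illustrative assumptions, not the AGM construction itself.

```python
def negate(lit):
    """Flip the polarity of a propositional literal, e.g. 'rain' <-> '-rain'."""
    return lit[1:] if lit.startswith("-") else "-" + lit

def consistent(beliefs):
    """A set of literals is consistent iff no atom occurs with both polarities."""
    return not any(negate(b) in beliefs for b in beliefs)

def revise(beliefs, new):
    """Naive revision: retract the contradicting literal (if any), then add
    the new one -- a toy stand-in for an AGM-style revision operator."""
    revised = {b for b in beliefs if b != negate(new)}
    revised.add(new)
    assert consistent(revised)
    return revised

def ramsey_accepts(beliefs, antecedent, consequent, laws):
    """Ramsey test: accept 'antecedent > consequent' iff the consequent holds
    in the revised belief set, closed under the given causal laws."""
    revised = revise(beliefs, antecedent)
    changed = True
    while changed:  # close under laws, given as (cause, effect) literal pairs
        changed = False
        for cause, effect in laws:
            if cause in revised and effect not in revised:
                revised = revise(revised, effect)
                changed = True
    return consequent in revised

# "If I had taken Anacin, I would have been relieved of my headache."
K = {"-anacin", "headache"}          # actually: no Anacin, and a headache
laws = [("anacin", "-headache")]     # the causal law held fixed in revision
print(ramsey_accepts(K, "anacin", "-headache", laws))  # True
```

Accepting the counterfactual thus corresponds to one revision step: the antecedent overrides the conflicting belief, while the causal law is preserved and propagates its effect.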
- I am working on the problem of finding an appropriate semantics for causal
counterfactuals within the purview of VCU2 (Variably Conditional Update)
logics, based on possible worlds. The main question that concerns us is how one can handle them within the belief revision frameworks AGM (revision) and KM (update). The problem can be put precisely as follows: existing
logics (Grahne's VCU2) fall short of a satisfactory account of why theorems such as CSO,
CV, and CC are valid in VCU2 although they are intuitively invalid; in the same way, certain theorems render unintuitive counterfactuals intuitive.
Hence, we propose an extension of Grahne's VCU2 logics. The work also
includes the role of causality in the belief revision process, especially
in generating the ordering of beliefs and the preservation of core
beliefs.
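For reference, the schemata in question, as standardly formulated in the conditional-logic literature (writing ">" for the counterfactual conditional; their precise formulation within Grahne's VCU2 may differ):

```latex
\begin{align*}
\text{(CSO)}\quad & \bigl((A > B) \land (B > A)\bigr) \to \bigl((A > C) \leftrightarrow (B > C)\bigr) \\
\text{(CV)}\quad  & \bigl((A > C) \land \lnot(A > \lnot B)\bigr) \to \bigl((A \land B) > C\bigr) \\
\text{(CC)}\quad  & \bigl((A > B) \land (A > C)\bigr) \to \bigl(A > (B \land C)\bigr)
\end{align*}
```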