What is a minimal set of axioms you must accept in order to do science and self-improvement?
Here is what OpenAI o1 suggested:
Axiom 1 (Mathematical and Logical Consistency): Assume a consistent framework for reasoning, probability, and mathematics. This is a bare minimum for rational inference.
Axiom 2 (Learnability): There exist sufficiently compressible (lawlike) explanations that can unify past, present, and future observations, enabling us to distinguish better theories from worse through observation and criticism.
Axiom 3 (Resource Reasoning): When forming priors, distributing credence over an unbounded complexity space forces you to give higher effective credence to simpler theories. This ensures that simpler theories get tested first and, if successful, rapidly gain credence.
Axiom 4 (Cognitive/Observational Reliability): Our faculties (or instruments) are good enough to let us propose, refine, and test theories iteratively. We don’t need absolute certainty here—just enough trust to get started and not dismiss the entire enterprise.
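The claim in Axiom 3 can be made concrete with a small numerical sketch. The code below is not from the original answer; it assumes a common simplicity-weighted prior, credence proportional to 2**-k for a theory of description length k (a Solomonoff-style choice), and the complexity values k=2 and k=10 are made up for illustration:

```python
# Sketch of Axiom 3: spreading credence over an unbounded complexity
# space with a 2**-k weighting gives simpler theories a large head start.

def complexity_prior(ks):
    """Normalized prior with weight 2**-k for each complexity k."""
    weights = [2.0 ** -k for k in ks]
    total = sum(weights)
    return [w / total for w in weights]

def bayes_update(prior, likelihoods):
    """Standard Bayes rule: posterior proportional to prior * likelihood."""
    post = [p * l for p, l in zip(prior, likelihoods)]
    total = sum(post)
    return [p / total for p in post]

# Two hypothetical theories: a simple one (k=2) and a complex one (k=10).
ks = [2, 10]
prior = complexity_prior(ks)

# If both theories fit an observation equally well, the credence ratio is
# preserved: the simple theory keeps 2**(10-2) = 256 times the credence.
posterior = bayes_update(prior, [0.9, 0.9])
```

With equal likelihoods the prior ratio carries through unchanged, which is the point of the axiom: the simple theory is effectively "tested first" because almost all credence sits on it until evidence says otherwise.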