> The axioms don't have to be true. You can still talk about their implications in absolute terms.
No, you can't. Two mathematicians using different rules of inference (say, those of classical vs. intuitionistic logic) will arrive at different theorems, even if they begin from the same axioms.
> "Assuming a=0 implies a=0" is absolutely true.
Assuming that “a = 0” and implication are both expressible in our formal system, it would indeed be very weird (though not a priori impossible) if “a = 0” didn't imply “a = 0”. But I have no problem imagining a formal system where “a = 0” isn't expressible (e.g., because equality is inexpressible), or implication isn't expressible (e.g., geometric logic), or implication has strange properties to someone only familiar with the classical and/or intuitionistic interpretation of “implication” (e.g., linear logic).
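To be fair, in any system where equality and implication have their usual classical or intuitionistic readings, the statement is a one-line proof. A sketch in Lean 4, with `a : Nat` chosen purely for illustration:

```lean
-- “a = 0 → a = 0” is just the identity function on proofs,
-- valid classically and intuitionistically alike.
example (a : Nat) : a = 0 → a = 0 :=
  fun h => h
```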
You can't have the same axioms with different rules of inference. The rules of inference are axioms.
By the axioms being the same, you mean the ink-shapes making up the symbols on a piece of paper being the same, not the actual meaning behind them.
Edit: Since you added more content to this reply later on, let me respond. In this case I was talking specifically about a formal system where a=0 makes sense and is expressible, but to keep things short and concise I chose not to write out all the details. So the rebuttal is moot.
Axioms and rules of inference are fundamentally different:
(0) An axiom is a statement internal to a mathematical theory that is assumed to be true. That is, inside of a mathematical theory, you don't need to prove that its axioms hold. However, if you want to construct a model of a mathematical theory, you need to prove externally that the axioms hold. In return, you get the theory's theorems (suitably interpreted) for free.
(1) A rule of inference exists in an external metatheory, where the original mathematical theory we were studying is treated as a syntactic object (also known as the object language), in very much the same way a compiler treats the program being compiled as a (possibly annotated) syntax tree. A rule of inference defines a class of valid syntactic transformations, but doesn't concern itself with the impact these transformations have on the meaning of the phrases in the object language.
It is perfectly sensible to consider the effect of changing the rules of inference on an axiomatic system.
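The distinction can be made concrete with a toy sketch (Python; the encoding and all names are mine, purely illustrative): axioms are formulas handed to the theory as data, while rules of inference are functions on syntax that live in the metatheory.

```python
# Object language: formulas as nested tuples (syntax trees); strings are atoms.

# Axioms live *inside* the theory: formulas assumed true, given as data.
AXIOMS = [("=", "a", "0")]          # e.g. an axiom asserting a = 0

# Rules of inference live *outside*, in the metatheory: they are
# functions that transform syntax, much like compiler passes.
def and_intro(a, b):
    """From derivations of A and of B, derive A ∧ B -- purely syntactic."""
    return ("and", a, b)

theorem = and_intro(AXIOMS[0], AXIOMS[0])
print(theorem)  # prints ('and', ('=', 'a', '0'), ('=', 'a', '0'))
```

Swapping `and_intro` for a different set of such functions changes which trees are derivable, even if `AXIOMS` stays byte-for-byte identical.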
> Axioms and rules of inference are fundamentally different:
Things are not that simple, because you can often convert axioms to rules of inference or vice versa, without changing the set of derivable consequences.
As an example, consider pure first-order logic (FOL). As one extreme, you can present FOL with just one rule of inference (Modus Ponens), see for example [1]. The other extreme is Gentzen's sequent calculus [2] which has only one axiom (A |- A), everything else being a rule of inference. Most presentations of FOL are between these extremes.
[1] A mathematical introduction to logic, by H. Enderton
[2] Untersuchungen über das logische Schliessen I, by G. Gentzen
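To see how far a single rule can be pushed, here is a sketch (Python; the encoding is mine, not Enderton's) of the classic Hilbert-style derivation of A → A from the K and S axiom schemas using only Modus Ponens:

```python
def imp(a, b):
    return ("->", a, b)

def mp(a, a_implies_b):
    """Modus Ponens, the only rule: from A and A -> B, conclude B."""
    op, antecedent, consequent = a_implies_b
    assert op == "->" and antecedent == a, "rule does not apply"
    return consequent

def K(a, b):    # axiom schema K: A -> (B -> A)
    return imp(a, imp(b, a))

def S(a, b, c): # axiom schema S: (A -> (B -> C)) -> ((A -> B) -> (A -> C))
    return imp(imp(a, imp(b, c)), imp(imp(a, b), imp(a, c)))

A = "A"
step1 = S(A, imp(A, A), A)  # (A -> ((A->A) -> A)) -> ((A -> (A->A)) -> (A -> A))
step2 = K(A, imp(A, A))     # A -> ((A->A) -> A)
step3 = mp(step2, step1)    # (A -> (A->A)) -> (A -> A)
step4 = K(A, A)             # A -> (A->A)
step5 = mp(step4, step3)    # A -> A
print(step5)                # prints ('->', 'A', 'A')
```

In the sequent-calculus extreme, the same work is carried by rules alone, starting from the lone axiom A |- A.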
> Things are not that simple, because you can often convert axioms to rules of inference or vice versa, without changing the set of derivable consequences.
Yes, but the derivations themselves will change.
> As an example, consider pure first-order logic (FOL). As one extreme, you can present FOL with just one rule of inference (Modus Ponens), see for example [1]. The other extreme is Gentzen's sequent calculus [2] which has only one axiom (A |- A), everything else being a rule of inference. Most presentations of FOL are between these extremes.
Yep. I'm aware of the phenomenon that a single mathematical object of type T (say, infinity-categories) may admit multiple presentations by objects of type T' (say, model categories). But, just because two objects of type T' present the same object of type T (e.g., two Quillen-equivalent categories), it doesn't mean that they are equal in all respects (e.g., the category of simplicial sets is much nicer than the category of topological spaces).
There is a difference, but it's not clear quite what it is. For example, it becomes progressively harder to get nice (with cut elimination and finite rule schemata) sequent calculi for richer axiomatic systems: I have not come across a nice (in the above sense) sequent-style formalisation of ZFC set theory. Have you?
If you have different rules, you have a different object and the meaning of the axiom is different. Even if it is written using the same symbols.
To continue the compiler analogy:
Just because the ASCII sequence "int c=0;" means different things in C and Java, doesn't imply "int c=0;" is meaningless when specifically talking about only C.
The problem with your analogy is that C and Java have completely different abstract syntaxes. A better analogy would be taking the syntax of an existing programming language, and completely changing its meaning. For instance, consider the effect of making Racket use lazy evaluation (Lazy Racket), or making Haskell use strict evaluation (the upcoming -XStrict pragma). Strict and lazy languages validate different sets of equational laws (and hence compiler optimizations!), neither of which is a subset of the other, so this is perhaps a more interesting example than switching between intuitionistic and classical logic.
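The strict/lazy difference can be sketched in Python by modelling lazy arguments as thunks (zero-argument lambdas); the names here are mine, purely illustrative:

```python
def diverge():
    # Stands in for a non-terminating or erroring computation ("bottom").
    raise RuntimeError("bottom")

# In a lazy language, const x y = x holds for *all* y, even bottom:
def const_lazy(x, y_thunk):
    return x                    # y_thunk is never forced

# In a strict language, arguments are evaluated before the call,
# so the same equational law fails when y is bottom:
def const_strict(x, y):
    return x

print(const_lazy(1, lambda: diverge()))   # prints 1

try:
    const_strict(1, diverge())            # diverges before const_strict runs
except RuntimeError:
    print("bottom reached under strict evaluation")
```

The law "const x y = x for all y" is exactly the kind of equation a lazy compiler may exploit and a strict one may not, and vice versa for laws like "f x is evaluated exactly once".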
What you're saying is more or less equivalent to “two C implementations targeting different [architectures / operating systems / whatever] are actually implementations of two different programming languages”.
Ah, lovely! You just arrived exactly where I was trying to get.
> I'm saying the meaning behind statements are different.
Yes, exactly! And, just like a single C program can have two different meanings under two different implementations, the same axiomatic system can have two different meanings when deducing its consequences (proving theorems) using different rules of inference.
Not necessarily. It's very possible for one system to embed inside the other. In this case you can often talk about exactly the same axiom/theorem/whatever within different inferential systems.
An interesting example is the Axiom of Choice which is, naively encoded, a theorem of intuitionistic logic. That said, we can use flattening to embed classical logic inside of intuitionistic logic and then recover a whole family of propositions of which AC is one and also see why it has a difficult time holding in IPL.
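For the curious, the "flattening" referred to here could be, for instance, the Gödel–Gentzen double-negation translation, one standard way of embedding classical logic into intuitionistic logic (a common presentation, sketched from memory):

```latex
\begin{align*}
P^N &\equiv \lnot\lnot P && \text{($P$ atomic)} \\
(A \land B)^N &\equiv A^N \land B^N \\
(A \lor B)^N &\equiv \lnot(\lnot A^N \land \lnot B^N) \\
(A \to B)^N &\equiv A^N \to B^N \\
(\forall x\, A)^N &\equiv \forall x\, A^N \\
(\exists x\, A)^N &\equiv \lnot \forall x\, \lnot A^N
\end{align*}
```

Classical logic proves A exactly when intuitionistic logic proves A^N, which is one precise sense in which the "same" statement acquires different meanings under different rules.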
This seems to be a technical objection that dodges the meat of the parent's comment. In what sense is it true that a given theorem follows from a given set of axioms and a given choice of inference rules?
A “theorem” is by definition a statement in a mathematical theory that has a proof. In what sense is this true? Well, that depends on what “true” means in your model... :-p
I agree with the parent comment. At the heart of your disagreement is that you seem to use the definition of axioms from logic, while your opponents use "definitions" from metamathematics.
Umm, I've been of the understanding that we never change the inference rules. Can you point to some branch of mathematics that involves a change of inference rules?
At my level of understanding, it seems a reasonable characterization of intuitionistic logic to say that it operates with a more restricted set of inference rules. It was the parent who made the stronger claim about differences here, and they speak a bit more to the topic in nearby comments.
This is not correct. There is no such thing as absolute truth. Something can only be true within a set of previously agreed constraints and rules. For example, I can simply imagine a scenario where "a=0 implies a=0" is considered to be false, because I define it to be so.
Although I know you meant that jokingly, presumably the OP's statement is only true within the parameters of a world where "there is no such thing as absolute truth"; therefore, the statement isn't absolutely true, it's just true within the confines of his own (subjective) world view. The actual world may, in fact, have absolute truth.
As was mentioned earlier, we can't think about "absolute truth" in terms of math, because math is an applied science. It is like a language. The root science is physics, and only it owns "absolute truth": the real axioms and rules of implication.
Axioms are only viewed as true because they are defined as such.
It's obvious when applying math to the real world. If you're talking about objects on the Earth's surface, for example, at first Euclid's geometry will get you good results, because the objects you're working with can be assumed to fulfill its axioms. However, when your scale gets bigger, you'll have to apply a more complicated geometric apparatus.
The easiest analogy for axioms is an interface in software engineering: you don't care what the object really is, but as long as it exhibits certain properties, you can prove theorems about it. These will be true for any object with those properties, and exactly as true as those properties are actually fulfilled by the real-life object.
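A minimal sketch of that analogy (Python; the `Monoid` interface and its instances are hypothetical examples): anything built from the interface's "axioms" alone works for every implementation.

```python
from abc import ABC, abstractmethod

class Monoid(ABC):
    """The 'axioms': an identity element and a binary operation
    (assumed associative).  Nothing is said about what the carrier is."""
    @abstractmethod
    def unit(self): ...
    @abstractmethod
    def combine(self, a, b): ...

def mconcat(m, xs):
    """A 'theorem'-like generic result: folding a list is well-defined
    for ANY object satisfying the interface."""
    out = m.unit()
    for x in xs:
        out = m.combine(out, x)
    return out

class IntSum(Monoid):
    def unit(self): return 0
    def combine(self, a, b): return a + b

class StrConcat(Monoid):
    def unit(self): return ""
    def combine(self, a, b): return a + b

print(mconcat(IntSum(), [1, 2, 3]))       # prints 6
print(mconcat(StrConcat(), ["a", "b"]))   # prints ab
```

And just as with axioms, the generic result is only as trustworthy as the implementation's actual adherence to the assumed properties.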
> Axioms are better thought of as universally accepted truths
Not really universally accepted. You just have to specify which set of axioms you are using. Different problems may call for different axioms. Like if you want to prove theorems about the security of some computer system, you'll define a set of axioms that allow proving interesting results for that purpose. If you want to prove theorems about sets, you'll want to specify what set axioms you are using. You may even try to specify interesting minimal sets of axioms that make your theorem true. There is no reason to use the same axioms for everything, or to assume they are True in some teleological sense.
Omega is an amazing number. We know every digit in its binary expansion is either 0 or 1, yet for some of those digits it is impossible to formulate why the digit has the value it does. It is impossible to come up with a train of thought that explains it. There is no explanation that can be written down on a piece of paper.
So we are left with the weird conclusion:
1. It has the value it has for no reason.
2. It has the value it has for a reason that is impossible to formulate.
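For context, Omega is Chaitin's halting probability: for a prefix-free universal machine, the sum over all halting programs p of 2^(-|p|). A toy Python sketch shows the shape of the sum; the "machine" here is fake and decidable (a real universal machine's halting problem is undecidable, which is exactly why Omega is uncomputable):

```python
import itertools

def halts(program):
    # FAKE machine for illustration only: a bit-string "halts" iff it
    # contains "11".  Chosen purely so the example terminates.
    return "11" in program

def is_minimal(program):
    # Prefix-free condition: the program halts, but no proper prefix does.
    return halts(program) and not any(
        halts(program[:k]) for k in range(1, len(program)))

def omega_lower_bound(max_len):
    # Omega = sum over minimal halting programs p of 2 ** -len(p).
    # Enumerating programs up to max_len gives a lower bound that
    # only grows as max_len increases.
    return sum(
        2.0 ** -n
        for n in range(1, max_len + 1)
        for bits in itertools.product("01", repeat=n)
        if is_minimal("".join(bits)))

print(omega_lower_bound(12))  # a partial sum strictly between 0 and 1
```

With a real machine you can still compute lower bounds this way, but you can never know how close you are: pinning down the n-th bit would settle the halting problem for all programs up to roughly n bits long.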
Nah, Omega is just defined as a number which contains countably many bits of algorithmic information. Since algorithmic information is equivalent to thermodynamic information (and likewise relates to quantum information if you head in that direction...), the whole "you can't calculate Omega" thing is really just a way of saying, "You can't have an infinite-precision physical measurement encoded into a finite physical system."
Chaitin makes far too much metaphysics of his work.
I don't really understand the insight behind Chaitin's number. It's unknowable, but so are a lot of things. The halting problem already says that there are programs for which we can't know whether they will halt or not. And before that, Gödel showed there are simple logical sentences which are true yet unprovable.
In the case of modern countries: The contract is the law + the Constitution (or your local equivalent if not in the US) and your signature is living in and benefiting from the nation/society you live in.
The problem isn't power, it's the behavior generated by the rules of the system. Because agents only receive output proportional to the capital they possess, they are incentivized to accumulate capital, and the system ends up optimizing for capital instead.
As the system progresses, agents get the choice between losing their allocation or using it to accumulate capital. This is unfortunate since, theoretically, the whole point of capitalism is to maximize long-term consumption.
It's a faulty analogy. In fact, a paperclip maximizer goes against every single principle of capitalism out there.
The paperclip maximizer is bad exactly because it isn't based on capitalism; it will maximize paperclips regardless of whether someone values them or not! At _any cost_, it will maximize paperclips.
Markets, on the other hand, have the goal of providing products/services that people are willing to pay for. They adapt: the more people value something, the more capital it will attract, and the more resources will be allocated to it.
You've identified one of the key defects in capitalism today: the disconnect from a "true" market where value is measured by folks' willingness to pay for things.
Not more capitalism :), just determining a better way to manage the money supply than a few folks on the board of central banks. If we could somehow connect an increase or decrease of the money supply to a classic market mechanism (e.g. people buying and selling), I think it would make a very positive impact on everyone.
That's partially true. However if you force TCP connections (in Toxic this is done with the -t flag) your IP is effectively hidden from your contacts because all your traffic gets relayed by TCP nodes in the network. The downside is that forced TCP connections are slower and less reliable.
The reason Tox doesn't have built in anonymity is because strong anonymity has a massive impact on quality (especially streaming data like audio/video). Our goal is to steal normal (non-paranoid) users from Skype and get everyone and their mother using strong encryption. In order to achieve that we need to have comparable quality rather than something that feels like you're using a 28.8k modem.
And again, anyone who actually wants anonymity still has that option.
If broad adoption is truly your goal you have a branding problem. Normal people will not use a tool called 'toxic' which prints crazy ASCII letters in the command line. You might as well ask soccer moms to hold the line at occupy wall street.
This is false. The axioms don't have to be true. You can still talk about their implications in absolute terms.
"Assuming a=0 implies a=0" is absolutely true. Regardless of whether a actually is 0.