Goodstein’s Theorem

May 20, 2008

In the previous post, I talked about a special kind of sequence. Such sequences are called Goodstein sequences, and Goodstein’s theorem states that every Goodstein sequence eventually terminates at 0. Now, this is a rather counter-intuitive result given the way the sequence grows. Remember, in every iteration, you replace every occurrence of the base — in the terms and in their exponents — by the next bigger base, whereas you subtract just 1 from the whole sum. Surely the number is increasing very rapidly notwithstanding the subtractions? For example, even a small first number, say 4, increases in this manner: 4, 26, 41, 60, 83, 109, 139, 173,…, 3319048, …, 93850207,…

How can this sum start reducing and converge to 0? One has to look more carefully to see what is happening. Let us start with a smaller number, say 3.

Iteration 1 (hereditary base-2):
(1) 3 = 2 + 1
(2) 3 + 1
(3) 3 + 1 – 1 = 3
Iteration 2 (hereditary base-3):
(1) 3 = 3
(2) 4
(3) 4 – 1 = 3
Iteration 3 (hereditary base-4):
(1) 3 = 3
(2) 3
(3) 3 – 1 = 2

See what’s happening? Firstly, note that in step (2) of iteration 1, the last term, which is 1, does not get increased, because it is less than the current base. Say the base is n. Then increasing the base leaves the numbers 1 through (n – 1) unchanged. For example, 2 in base 4 is 4^0 + 4^0, and 2 in base 5 is 5^0 + 5^0. Correct? So, in this case, the last term is not increased, and there is a subtraction following it.

What happens is, even as you are increasing the bases of the terms, the repeated subtraction of 1 is slowly “eating” into the last term of the sum. This process may be arbitrarily slow, but eventually the exponent of the last term “falls off”, the last term becomes smaller than the current base, and it ceases to increase. The subtraction can then eat away the last term entirely, before it starts attacking the next term (which now becomes the last term).

So you can now see that the sum does in fact start reducing, albeit after numerous steps, even for small numbers. In fact, you can check this by hand only for the starting numbers 1, 2 and 3. You can write a program to check bigger numbers, though it might take extremely long.
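Such a program is short to sketch. Here is a minimal Python version (the helper names bump and goodstein_length are my own); it confirms by brute force that the sequences starting at 1, 2 and 3 terminate after 1, 3 and 5 steps respectively:

```python
def bump(n, b):
    """Interpret n in hereditary base-b notation and replace every b by b + 1.

    We peel off the base-b digits of n; each digit sits at some exponent,
    and the exponent itself is bumped recursively (that is what makes the
    notation "hereditary")."""
    if n == 0:
        return 0
    result, exp = 0, 0
    while n > 0:
        n, digit = divmod(n, b)
        if digit:
            result += digit * (b + 1) ** bump(exp, b)
        exp += 1
    return result

def goodstein_length(start):
    """Number of steps until the Goodstein sequence starting at `start` hits 0."""
    n, base, steps = start, 2, 0
    while n > 0:
        n = bump(n, base) - 1   # increase the base, then subtract 1
        base += 1
        steps += 1
    return steps

print([goodstein_length(k) for k in (1, 2, 3)])   # prints [1, 3, 5]
```

Do not try this on 4, though: the sequence starting at 4 reaches 0 only after an astronomically large number of steps.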


Now, another important thing about Goodstein’s theorem is that it is one of the Gödel-type theorems: the truth of Goodstein’s theorem cannot be proved from the axioms of first-order arithmetic (Peano arithmetic). There is a proof of this unprovability, as well as a proof of Goodstein’s theorem itself, using techniques from outside Peano arithmetic.


An interesting sequence

May 16, 2008

Mandar has an interesting set of posts on convergence and divergence of infinite series. It made me recall a fascinating result I had come across once. It is not about an infinite series, though. But it is a very interesting sequence.

The sequence requires a notation called hereditary base-n notation, which is best explained with an example. Let us write the number 26 in its hereditary base-2 notation. First, we write 26 as a sum of powers of 2.
26 = 2^4 + 2^3 + 2^1
Next, the exponents are written in base 2 as well, and so on recursively.
So, 26 = 2^(2^2) + 2^(2+1) + 2

Similarly, the hereditary base-3 notation of 1000 is the following.

1000 = 3^6 + 3^5 + 3^3 + 1 = 3^(2·3) + 3^(3+2) + 3^3 + 1

Note that no number bigger than n appears anywhere in the notation. Also, we can write the terms as a product of a power of the base and a coefficient smaller than n. For example, 26 can be written in hereditary base-3 notation as 2·3^2 + 2·3 + 2.
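For concreteness, here is a small Python sketch (the function name is my own) that renders a number in hereditary base-n notation, using ^ for powers and * for the coefficients:

```python
def hereditary(n, b):
    """Render n in hereditary base-b notation, e.g. hereditary(26, 3) -> '2*3^(2) + 2*3 + 2'."""
    if n == 0:
        return "0"
    terms, exp = [], 0
    while n > 0:
        n, digit = divmod(n, b)
        if digit:
            if exp == 0:
                terms.append(str(digit))            # constant term, smaller than b
            else:
                # the exponent itself is written in hereditary base-b notation
                power = str(b) if exp == 1 else "%d^(%s)" % (b, hereditary(exp, b))
                terms.append(power if digit == 1 else "%d*%s" % (digit, power))
        exp += 1
    return " + ".join(reversed(terms))

print(hereditary(26, 2))   # prints 2^(2^(2)) + 2^(2 + 1) + 2
print(hereditary(26, 3))   # prints 2*3^(2) + 2*3 + 2
```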

Now, let us take a number, say 26, and do the following:

  1. Take the number. Start with base n = 2.
  2. Express the number in hereditary base-n notation.
  3. Form the next number by changing all the n’s to (n + 1)’s. That is, “increase” the base of the representation by 1. [n = n + 1]
  4. Subtract 1 from the resulting number and go to step 2. [number = number – 1]

Let’s take an example. Let’s start with 4.

4 = 2^2 (step 2)

3^3 (step 3)

3^3 – 1 = 26 (step 4) — Iteration 1

26 = 2·3^2 + 2·3 + 2 (step 2)

2·4^2 + 2·4 + 2 (step 3)

2·4^2 + 2·4 + 2 – 1 = 41 (step 4) — Iteration 2

41 = 2·4^2 + 2·4 + 1 (step 2)

2·5^2 + 2·5 + 1 (step 3)

2·5^2 + 2·5 + 1 – 1 = 60 (step 4) — Iteration 3

60 = 2·5^2 + 2·5 (step 2)

2·6^2 + 2·6 (step 3)

2·6^2 + 2·6 – 1 = 83 (step 4) — Iteration 4

83 = 2·6^2 + 6 + 5 (step 2)

2·7^2 + 7 + 5 (step 3)

2·7^2 + 7 + 5 – 1 = 109 (step 4) — Iteration 5

And so on. Now the question is, does this sequence converge (or terminate)? If it converges, what does it converge to? Or if you think it does not converge, explain why.

As always, people who already know this result may please defer commenting.

Regular Graphs: “Clique Insertion”

March 12, 2008

After talking about the basics of regular graphs, let me now introduce something more interesting. When we make changes to a regular graph topology, like addition or removal of edges or nodes, the resulting topology might not remain regular. An interesting question then is the following: are there ways to modify a given regular graph topology, while maintaining its regularity?

Here I introduce a simple operation called “clique insertion” that “extends” a graph (makes it bigger by adding more nodes) while preserving its regularity. The basic idea is due to sri. Later I developed it a bit further, and it now looks like a useful result.

Given an r-regular graph of n nodes, the graph can be extended to an r-regular graph of n + (r – 1) nodes by a process of “node insertions” (or “clique insertion”).

Explanation: Let us start with the minimally connected non-trivial regular graph: a circle with n nodes. We can see that by “inserting” a node between any two nodes, we can extend the graph into a bigger graph while preserving the regularity. In other words, cut an arbitrary edge into two, and make their loose ends incident on a newly added node. We can go on inserting any number of nodes in this fashion. We just get bigger circles, all of which have a regularity of 2.


It is pretty simple to see in a circle. Now, let’s go to a slightly more complicated topology. Consider a 3-regular graph. We can get a bigger 3-regular graph by inserting 2 nodes (as explained above), and adding another edge between the 2 nodes.


It is equivalent to inserting a clique of exactly r – 1 nodes (where r is the regularity) into the existing graph (refer to the figure below). For arbitrary n and arbitrary r, we can construct an (n + r – 1)-node graph by inserting exactly r – 1 nodes and adding edges among them to form an (r – 1)-node clique.



In the figure, we have a 4-regular graph (incidentally, it is the complete graph K5). We now do a “clique insertion” operation on the graph to get a new 4-regular graph of 8 nodes. The nodes marked in brown form the clique.


The above result can be extended thus: given an r-regular graph of n nodes, we can construct an r-regular graph of n + m·(r – 1) nodes, where m is a positive integer. So, we can construct arbitrarily large regular graphs of a given regularity r by inserting any number of cliques of size exactly r – 1.
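A minimal sketch of the operation, assuming the graph is stored as a dict mapping integer node ids to neighbour sets (the function name and representation are my own):

```python
def clique_insert(adj, r):
    """Extend an r-regular graph by r - 1 nodes, preserving r-regularity.

    Each new node subdivides one existing edge (gaining degree 2), and the
    new nodes are then joined into an (r - 1)-clique (gaining degree r - 2
    each), so every new node ends up with degree 2 + (r - 2) = r."""
    adj = {v: set(ns) for v, ns in adj.items()}              # work on a copy
    edges = sorted({tuple(sorted((u, v))) for u in adj for v in adj[u]})
    new_nodes = [max(adj) + 1 + i for i in range(r - 1)]
    for w, (u, v) in zip(new_nodes, edges[: r - 1]):         # any r - 1 distinct edges
        adj[u].discard(v); adj[v].discard(u)                 # cut the edge u-v
        adj[w] = {u, v}                                      # splice w into it
        adj[u].add(w); adj[v].add(w)
    for i, w in enumerate(new_nodes):                        # clique among the new nodes
        for x in new_nodes[i + 1:]:
            adj[w].add(x); adj[x].add(w)
    return adj

k5 = {v: set(range(5)) - {v} for v in range(5)}              # the 4-regular K5
g = clique_insert(k5, 4)
print(len(g), all(len(ns) == 4 for ns in g.values()))        # prints 8 True
```

The demo reproduces the example in the figure: K5 (4-regular, 5 nodes) grows into a 4-regular graph on 8 nodes.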




Usefulness: This has an important application in distributed systems, where node joins and failures are frequent. Several distributed hash tables (DHTs) require a symmetric degree distribution. They also need to accommodate nodes joining and leaving.

  • For one, this result can be developed into a measure of the amount of disruption in symmetry that can be expected when the topology changes.
  • Also, it can be used to design topology changes so as to preserve symmetry, perhaps by using dummy nodes.

Question: Is there a similar way to make a graph smaller while preserving the regularity?

That was one example of preserving a structural property of a graph. It would be interesting to find out whether we can have a whole set of such operators. We can look at several other properties as well: diameter, average path length, centrality distributions, connectivity, etc. I shall define this problem more formally in a later post.

Regular Graphs: Bounds

March 12, 2008

In this post, I discuss a few results about connected regular graphs that are fairly obvious. However, to the best of my knowledge, they have not been discussed explicitly in the literature.

  • If a connected graph of n (n > 2) nodes is regular, then it is at least 2-regular. In other words, the lower bound on the regularity r of a connected graph of more than 2 nodes is 2.

Proof: First, note that a connected 0-regular graph is a single node and a connected 1-regular graph is a single edge (2 nodes), so a connected regular graph with n > 2 nodes must have r ≥ 2. The bound is achieved: start with a minimally connected graph of n > 2 nodes. It is a straight-line graph with n − 1 edges, in which every node, except the two end nodes, is connected to a “predecessor” and a “successor”. To make this regular, we add one more edge between the two end nodes. The graph is now a circle, with every node connected to 2 other nodes, making it the minimal connected regular graph.

  • The upper bound on the regularity r of an n-node (n > 0) graph is n − 1.

Proof: In a graph of n nodes, every node can connect to a maximum of n − 1 nodes, forming a complete graph. Hence the above result follows.

  • As a corollary of the above result, we have: the lower bound on the number of nodes n required to construct an r-regular graph is r + 1.

Regular Graphs: Basics

March 11, 2008

A regular graph is a graph wherein all nodes have the same degree. In an r-regular graph, all nodes have degree r. The number r is called the regularity of the graph. Regular graphs are undirected. (At least, I have not seen them defined for directed graphs.) They need not be connected. So, a 0-regular graph is a set of isolated nodes; a 1-regular graph is a set of disjoint edges; and so on.

Another way to define regular graphs is to say the following: if in a graph the minimum degree is equal to the maximum degree, the graph is regular.

What I am more interested in are connected regular graphs. Despite the fact that connected regular graphs exhibit interesting properties, there does not seem to be an extensive body of literature on them. Nonetheless, in this series of posts, I will try to put down as much of my understanding of regular graphs as possible. In this series of posts, regular graphs mean connected regular graphs unless otherwise mentioned.

Let us first start with some basic properties of regular graphs.

  1. Since the total degree of any graph is even (2 × the number of edges), regular graphs with odd regularity cannot be constructed on an odd number of nodes. For example, we cannot have a 3-regular graph with 7 nodes. (Connectedness does not matter.)
  2. Degree distribution is uniform for regular graphs.
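Property 1 gives a quick feasibility check, sketched below (the function name is my own; the two necessary conditions, r < n and n * r even, are in fact also known to be sufficient for an r-regular simple graph to exist):

```python
def regular_graph_possible(n, r):
    """Necessary conditions for an r-regular simple graph on n nodes:
    each node needs r distinct neighbours (so r < n), and the total
    degree n * r must be even (it is twice the number of edges)."""
    return 0 <= r < n and (n * r) % 2 == 0

print(regular_graph_possible(7, 3))   # prints False: 7 * 3 = 21 is odd
print(regular_graph_possible(5, 4))   # prints True: K5 is 4-regular
```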

I had some misconceptions regarding regular graphs until recently. I had imagined that regularity corresponds to connectivity, which is untrue.

  1. Regularity need not correspond to connectivity. An r-regular graph can have a connectivity value much less than r.
  2. On a related note, betweenness centrality distribution need not be uniform for regular graphs. It can be skewed. This is clear from the graph below.

What this implies is that regular graphs are robust only to a certain extent. They ensure uniform degree distribution but not uniform centrality distribution. Centrality distribution is a better measure of robustness than degree distribution.

Infinite trees have no leaves

March 11, 2008

Often our mind conveniently ignores special or unlikely cases. But then completeness is an essential virtue of any mathematical proof.

I was listening to a lecture on graph theory, paying only partial attention since the speaker was covering very basic concepts. He started talking about trees: first he gave the different ways a tree can be defined; then he presented some properties of trees. For example, he said, “There is at least one vertex in a tree with degree one.” This is, of course, not completely correct. I am not sure why he made that mistake. The statement should be, “There are at least two vertices in a tree with degree one.” This is pretty simple to prove.

The proof is by contradiction: let’s assume there are fewer than two vertices in the tree with degree one. So, there is either one vertex with degree 1 or none. Let’s consider the two cases separately.

(1) If there are no degree-1 vertices in the tree, then every vertex has degree at least 2. So, the total degree of the tree is at least 2n, where n is the number of nodes. Now, we know that the total degree of a graph is twice the number of edges (Handshaking Lemma). So our tree must have at least n edges, whereas a tree on n nodes has, by definition, exactly n − 1 edges. Thus, we cannot have a situation where no node has degree 1.

(2) If there is exactly one vertex with degree 1, and every other vertex has degree at least 2, the total degree of the tree is at least 2(n − 1) + 1 = 2n − 1. But a tree on n nodes has exactly n − 1 edges, so by the Handshaking Lemma its total degree is exactly 2(n − 1), which is less than 2n − 1. So we have another contradiction; this situation is also not possible. Hence we must have at least two vertices with degree 1.
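The claim is also easy to sanity-check by machine. A small Python sketch (the tree generator and helper names are my own) grows random trees by attaching each new node to a random earlier one, then counts the degree-1 vertices:

```python
import random

def random_tree(n, seed=0):
    """Grow a random tree on nodes 0..n-1: each new node attaches to a
    uniformly random earlier node, so the graph stays connected and acyclic."""
    rng = random.Random(seed)
    adj = {0: set()}
    for i in range(1, n):
        parent = rng.randrange(i)
        adj[i] = {parent}
        adj[parent].add(i)
    return adj

def leaf_count(adj):
    """Number of degree-1 vertices."""
    return sum(1 for ns in adj.values() if len(ns) == 1)

# Every finite tree with at least 2 nodes has at least 2 leaves.
assert all(leaf_count(random_tree(n, seed=n)) >= 2 for n in range(2, 200))
```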

This is alright. But what brought my attention back to the lecture was what he said next: “This does not hold true for infinite trees.” Well, yes, it does not hold for infinite trees! The proof above leans on the tree being finite: edge-counting arguments of that sort make no sense for infinite graphs. In fact, a lot of the graph theorems that we prove or use don’t hold for infinite graphs. Sometimes we state this clearly, sometimes we don’t. It does not matter for most practical purposes. At the same time, it’s always a good idea not to ignore theoretical completeness.

[Postscript: The basic result in graph theory about the relation between the total degree and the number of edges is called the Handshaking Lemma, after the relation between the number of people at a party and the number of handshakes that take place (assuming at least 2 people attend). I don’t like this analogy very much: most graphs that arise from it are disconnected, which complicates matters.]

Counting vs Measuring

March 11, 2008

Probability and statistics has been my weakest point in the last few years. I feel enraged that I was never taught even a half-decent course during school or my undergrad. Or perhaps they did cover some topics that I don’t recall much of. After some effort, I am feeling much more comfortable with topics in P and S. It is essentially the lack of familiarity that often causes trouble.

But that’s not the point.

I recently watched a series of lectures on elementary probability and statistics presented by an instructor from De Anza College. The presentation is easy to follow and very clear. I took to the lectures right from the first few minutes.

While describing the different kinds of data that we deal with in P and S — qualitative and quantitative (discrete and continuous) — the instructor neatly explained how one can decide whether a piece of quantitative data is discrete or continuous. It’s the difference between “how many?” and “how much?”. In other words, you count discrete data, whereas you measure continuous data.

Pretty elementary. But goes a long way.