Howardfs policy iteration(policy improvement algorithm)

 

We consider the following maximization problem with a log utility function:

 

 

subject to

 

where  and .
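For concreteness, a standard specification consistent with this setup is the stochastic Brock–Mirman growth model; the functional forms and parameter restrictions below are our assumption, not taken verbatim from the notes:

\[
\max_{\{C_t,\,K_{t+1}\}_{t=0}^{\infty}} \; \mathbb{E}_0 \sum_{t=0}^{\infty} \beta^t \ln C_t
\qquad \text{s.t.} \qquad C_t + K_{t+1} = A_t K_t^{\alpha},
\]

with $0 < \beta < 1$, $0 < \alpha < 1$, full depreciation, and a stochastic productivity process $\{A_t\}$.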

 

The Bellman equation, with the expectation taken over next period's shock, is
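Under the specification assumed above, a consistent form is

\[
V(K, A) = \max_{0 \le K' \le A K^{\alpha}} \Big\{ \ln\!\big(A K^{\alpha} - K'\big) + \beta\, \mathbb{E}\big[\,V(K', A') \mid A\,\big] \Big\}.
\]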

 

 

We first set a feasible policy function as

 

   for .
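A natural candidate (an assumption on our part) is a constant-saving-rate rule,

\[
K_{t+1} = g_0(K_t, A_t) = s\, A_t K_t^{\alpha}, \qquad s \in (0, 1),
\]

which is feasible because it leaves consumption $C_t = (1 - s) A_t K_t^{\alpha} > 0$ for every $K_t > 0$.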

 

Consequently, we have

 

 

matching what we set up yesterday.
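Evaluating a log-linear policy of this kind yields a value function that is log-linear in the state. As a sketch under the assumed model with i.i.d. shocks,

\[
V_0(K, A) = E + F \ln K + G \ln A, \qquad F = \frac{\alpha}{1 - \alpha\beta}, \qquad G = \frac{1}{1 - \alpha\beta},
\]

where only the intercept $E$ depends on the saving rate $s$.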

 

Then we have

We also have

by the same kind of argument as before.
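Concretely, with i.i.d. shocks the conditional expectation that enters the Bellman operator is

\[
\mathbb{E}\big[\,V_0(K', A') \mid A\,\big] = E + F \ln K' + G\, \mathbb{E}[\ln A'],
\]

so only the $F \ln K'$ term varies with the choice of $K'$.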

 

The Bellman equation between the first two periods is
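Under the assumptions above, this improvement step solves a one-shot maximization against the evaluated value function:

\[
V_1(K, A) = \max_{K'} \Big\{ \ln\!\big(A K^{\alpha} - K'\big) + \beta\, \mathbb{E}\big[\,V_0(K', A') \mid A\,\big] \Big\}.
\]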

 

FOC w.r.t. Kf is

  for some multiplier $\lambda$.

Thus
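Carrying out this step under the assumed forms (a sketch, not the original algebra):

\[
\frac{1}{A K^{\alpha} - K'} = \frac{\beta F}{K'}
\quad\Longrightarrow\quad
K' = \frac{\beta F}{1 + \beta F}\, A K^{\alpha} = \alpha\beta\, A K^{\alpha},
\]

using $F = \alpha/(1-\alpha\beta)$, so $\beta F/(1+\beta F) = \alpha\beta$. A single improvement step therefore jumps from any constant-saving-rate policy to the known optimal saving rate $\alpha\beta$.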

 

On the other hand, the FOC w.r.t. $L$ is

Substituting $K'$ into this, we get

Thus

Hence

   in terms of .
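If $L$ denotes a labor choice, a common variant (again our assumption) takes $u = \ln C + \theta \ln(1 - L)$ and $Y = A K^{\alpha} L^{1-\alpha}$; combining the FOC w.r.t. $L$ with $K' = \alpha\beta Y$ and $C = (1 - \alpha\beta) Y$ gives a constant labor supply:

\[
\frac{\theta}{1 - L} = \frac{1-\alpha}{(1-\alpha\beta)\,L}
\quad\Longrightarrow\quad
L = \frac{1-\alpha}{(1-\alpha) + \theta(1-\alpha\beta)}.
\]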

We also have

 

The detailed calculation of the initial value function is as follows:

 

 


where

starting from  and  when t=1.
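As a sketch of the recursion such an evaluation produces (under the i.i.d. specification assumed above), iterating $V_{t+1} = \ln C + \beta\, \mathbb{E}[V_t]$ along the fixed policy gives, for the capital coefficient,

\[
F_{t+1} = \alpha\,(1 + \beta F_t), \qquad F_1 = \alpha,
\]

which converges to $F = \alpha/(1 - \alpha\beta)$; the intercept and the shock loading obey analogous linear recursions from their $t = 1$ values.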

Here,

where

starting from  and  when t=1.
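To make the whole procedure concrete, here is a minimal numerical sketch of Howard's policy iteration for the model assumed above, on a discretized state space. All parameter values, the grid, and the two-point shock are hypothetical; the result is checked against the closed-form policy $K' = \alpha\beta A K^{\alpha}$.

# Hypothetical numerical sketch of Howard's policy improvement for the
# assumed Brock-Mirman model: u(c) = ln(c), k' + c = A * k**alpha, iid shocks.
# Parameter values, grid sizes, and the two-point shock are all assumptions.
import numpy as np

alpha, beta = 0.36, 0.96
A_vals = np.array([0.9, 1.1])          # two-point iid productivity shock
probs = np.array([0.5, 0.5])           # P(A' = A_vals[j]) for every current A
k_grid = np.linspace(0.05, 0.5, 200)   # capital grid

nk, na = len(k_grid), len(A_vals)
# Feasible consumption c[i, j, m] = A_j * k_i**alpha - k'_m
output = A_vals[None, :] * k_grid[:, None] ** alpha          # (nk, na)
c = output[:, :, None] - k_grid[None, None, :]               # (nk, na, nk)
util = np.where(c > 0, np.log(np.maximum(c, 1e-12)), -1e10)  # penalize infeasible

policy = np.zeros((nk, na), dtype=int)  # initial guess: save the lowest k'
for it in range(50):
    # --- Policy evaluation: solve V = u_g + beta * E[V'] for the fixed policy ---
    # Stack V row-major over (k, A) and solve the linear system exactly.
    n = nk * na
    u_g = util[np.arange(nk)[:, None], np.arange(na)[None, :], policy]  # (nk, na)
    T = np.zeros((n, n))
    for i in range(nk):
        for j in range(na):
            kp = policy[i, j]
            for jp in range(na):
                T[i * na + j, kp * na + jp] = beta * probs[jp]
    V = np.linalg.solve(np.eye(n) - T, u_g.reshape(n)).reshape(nk, na)

    # --- Policy improvement: one-shot maximization against the evaluated V ---
    W = V @ probs                                            # E[V(k', A')], (nk,)
    new_policy = np.argmax(util + beta * W[None, None, :], axis=2)
    if np.array_equal(new_policy, policy):
        break
    policy = new_policy

# Compare with the known closed-form optimum k' = alpha*beta*A*k**alpha.
k_prime = k_grid[policy]
closed_form = alpha * beta * output
print("iterations:", it + 1)
print("max |numerical - closed form|:", np.abs(k_prime - closed_form).max())

The evaluation step solves the linear system $(I - \beta P_g)V = u_g$ exactly; replacing it with a single application of the Bellman operator would turn this back into plain value function iteration, which is precisely the distinction Howard's algorithm exploits.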