Replication step 2 - Figures and tables

This notebook creates the figures and tables of "Using the Sequence-Space Jacobian to Solve and Estimate Heterogeneous-Agent Models".

  • Figure 1: Impulse responses of capital to TFP shocks in the Krusell-Smith model
  • Figure 2: Jacobian and fake news matrix in the Krusell-Smith model
  • Figure 4: Equivalence between the SSJ and Reiter methods
  • Figure 5: Impulse response error as a function of the truncation horizon
  • Figure 6: Simulations and second moments in Krusell-Smith
  • Figure 7: Nonlinear impulse responses and transitional dynamics for two-asset HANK
  • Figure D.1: Accuracy of HA Jacobian for Krusell-Smith
  • Figure D.2: Accuracy of HA Jacobian for one-asset HANK
  • Figure E.1: Equivalence with Dynare for Smets-Wouters
  • Figure E.2: Equivalence with Dynare for Herbst-Schorfheide
  • Figure F.1-2: Estimation results for Krusell-Smith
  • Figure F.3-4: Estimation results for one-asset HANK (shocks)
  • Figure F.5-6: Estimation results for one-asset HANK (shocks and parameters)
  • Figure F.7-8: Estimation results for two-asset HANK (shocks)
  • Figure F.9-10: Estimation results for two-asset HANK (shocks and parameters)
  • Table 1: Summary of computing times
  • Table 2: Direct and fake news algorithms to compute Jacobians
  • Table 3: Estimation times
  • Table B.1: Calibration of Krusell-Smith
  • Table B.2: Calibration of one-asset HANK
  • Table B.3: Calibration of two-asset HANK
  • Table C.1: Computing times for G
  • Table C.2: Computing times for G, efficient vs flat DAG
  • Table C.3: Computing times for impulse responses
  • Table F.1: Estimated parameters for Smets-Wouters and Herbst-Schorfheide
  • Table F.2: Estimated parameters for Krusell-Smith
  • Table F.3: Estimated parameters for one-asset HANK
  • Table F.4: Estimated parameters for two-asset HANK

You first need to run the model notebooks to generate the inputs.

Solutions from the Reiter method are generated by the following notebooks in the import_export/autodiff folder:

  • reiter_ks.ipynb
  • reiter_hank1.ipynb

Jacobians computed with automatic differentiation are generated by the following notebooks in the same folder:

  • jac_accuracy_ks.ipynb
  • jac_accuracy_hank.ipynb
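
Before running, you can check that the required inputs are in place. A minimal sketch (the file list is illustrative, not exhaustive):

import os

required = ['import_export/results/ks.npy',
            'import_export/results/hd_ks.npy',
            'import_export/autodiff/impulse_responses/irf_reiter_ks.csv']  # extend as needed
missing = [f for f in required if not os.path.exists(f)]
if missing:
    print('Missing inputs:', missing)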
In [1]:
# Python packages
import numpy as np
import pandas as pd
from scipy.io import loadmat
import seaborn as sns
import matplotlib.pyplot as plt
import inspect

# Models
from models import krusell_smith as ks
from models import hank_1a
from models import hank_2a

# Auxiliary functions
from toolkit import aux_fn as aux
from toolkit import aux_speed as aux_jac

# Figure fonts
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
plt.rc('text.latex', preamble=r'\usepackage{mathpazo}')
plt.rc('font', size=15)

# Save figures?
savefig = False
In [2]:
# Color palette for optimal black-and-white print
cpalette = sns.color_palette("cubehelix", 5)
sns.palplot(cpalette)
sns.set_palette("cubehelix", 5)

Fun fact: these colors are called regal blue, san felix, muddy waters, plum, pale cornflower blue.

Figure 1 - Impulse responses of capital to TFP shocks in Krusell-Smith

In [3]:
# Load model
ss, G = np.load('import_export/results/hd_ks.npy', allow_pickle=True)
T = 300
Tplot = 51

# Fig 1a
rhos = np.array([0.3, 0.5, 0.7, 0.8, 0.9])
dZ = rhos ** (np.arange(T)[:, np.newaxis])
dK = G['K']['Z'] @ dZ

plt.figure(figsize=(6, 4.5))
for s in range(len(rhos)): plt.plot(dK[:Tplot, s], linewidth=2, label=r'$\rho$=' + str(rhos[s]))
plt.ylabel(r'\% deviation from ss')
plt.xlabel(r'Time $t$')
plt.axhline(y=0, color='#808080', linestyle=':')
plt.legend(framealpha=0)
plt.xlim([0, Tplot - 1])
plt.tight_layout()
if savefig: plt.savefig('import_export/figures/fig1_ks_rho.pdf', format='pdf')

# Fig 1b: news shocks realized at dates s
news = np.array([5, 10, 15, 20, 25])
dZ = np.arange(T)[:, np.newaxis] == news
dK = G['K']['Z'] @ dZ

plt.figure(figsize=(6, 4.5))
for s in range(len(news)): plt.plot(dK[:Tplot, s], linewidth=2, label=r'$s$=' + str(news[s]))
plt.ylabel(r'\% deviation from ss')
plt.xlabel(r'Time $t$')
plt.axhline(y=0, color='#808080', linestyle=':')
plt.legend(framealpha=0)
plt.xlim([0, Tplot - 1])
plt.ylim([-.4, 0.9])
plt.tight_layout()
if savefig: plt.savefig('import_export/figures/fig1_ks_news.pdf', format='pdf')
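
Since the first-order solution is linear in the shock path, the response to any anticipated TFP path is the same matrix product, so responses scale and add. A minimal sanity check using the G loaded above (dZ1, dZ2, dK_mix are illustrative names):

dZ1 = 0.9 ** np.arange(T)          # AR(1) path
dZ2 = np.arange(T) == 10           # news shock realized at s=10
dK_mix = G['K']['Z'] @ (2 * dZ1 + dZ2)
assert np.allclose(dK_mix, 2 * G['K']['Z'] @ dZ1 + G['K']['Z'] @ dZ2)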

Figure 2 - Jacobian and fake news matrix in Krusell-Smith

In [4]:
colors = plt.rcParams['axes.prop_cycle'].by_key()['color']

# Load model
T = 300
ss, G = np.load('import_export/results/ks.npy', allow_pickle=True)
J, F = aux_jac.all_JFs(ks.backward_iterate, ss, T, {'r': {'r': 1}, 'w': {'w': 1}})

# Fake news matrix and Jacobian for Krusell-Smith
news = np.array([0, 25, 50, 75, 100])

# fake news col 0
plt.figure(figsize=(4, 3))
plt.plot(F['A']['r'][:, 0], linewidth=2, label=r'$s$=0')
plt.xlabel(r'Time $t$')
plt.xticks(np.arange(0, T+1, 100))
plt.axhline(y=0, color='#808080', linestyle=':')
plt.legend(framealpha=0)
plt.xlim([0, T])
plt.tight_layout()
if savefig: plt.savefig('import_export/figures/fig2_fake0.pdf', format='pdf')

# fake news cols > 0
plt.figure(figsize=(4, 3))
plt.gca().set_prop_cycle(color=colors[1:len(news)])
for s in news[1:]:
    plt.plot(F['A']['r'][:, s], linewidth=2, label=r'$s$=' + str(s))
plt.xlabel(r'Time $t$')
plt.xticks(np.arange(0, T+1, 100))
plt.axhline(y=0, color='#808080', linestyle=':')
plt.legend(framealpha=0)
plt.yticks([0, 0.1, 0.2])
plt.xlim([0, T])
plt.tight_layout()
if savefig: plt.savefig('import_export/figures/fig2_fake_rest.pdf', format='pdf')

# jacobian
plt.figure(figsize=(4, 3))
for s in news:
    plt.plot(J['A']['r'][:, s], linewidth=2, label=r'$s$=' + str(s))
plt.xlabel(r'Time $t$')
plt.xticks(np.arange(0, T+1, 100))
plt.yticks(np.arange(0, 13, 4))  # tick range hardcoded for this calibration
plt.axhline(y=0, color='#808080', linestyle=':')
plt.xlim([0, T])
plt.legend(framealpha=0)
plt.tight_layout()
if savefig: plt.savefig('import_export/figures/fig2_jac.pdf', format='pdf')
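
The Jacobian and fake news matrix plotted above are linked by the recursion J[t, s] = J[t-1, s-1] + F[t, s], with the first row and column of J equal to those of F. A minimal consistency check, assuming aux_jac.all_JFs returns J and F for the same inputs:

J_check = F['A']['r'].copy()
for t in range(1, T):
    J_check[t, 1:] += J_check[t - 1, :-1]   # accumulate along diagonals
assert np.allclose(J_check, J['A']['r'])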

Figure 4 - Equivalence between the SSJ and Reiter methods

In [5]:
sns.set_palette("cubehelix", 4)
cpalette = sns.color_palette("cubehelix", 4)

# Load series
irf_reiter = pd.read_csv('import_export/autodiff/impulse_responses/irf_reiter_ks.csv', header=None).values
irf_ssj_autodiff = pd.read_csv('import_export/autodiff/impulse_responses/irf_ssj_autodiff_ks.csv', header=None).values
irf_ssj_numdiff = pd.read_csv('import_export/autodiff/impulse_responses/irf_ssj_numdiff_ks.csv', header=None).values
irf_ssj_2sided = pd.read_csv('import_export/autodiff/impulse_responses/irf_ssj_twosidednumdiff_ks.csv', header=None).values

# Compute distance from Reiter
irf_dist_autodiff = np.abs(irf_ssj_autodiff - irf_reiter)
irf_dist_numdiff = np.abs(irf_ssj_numdiff - irf_reiter)
irf_dist_2sided = np.abs(irf_ssj_2sided - irf_reiter)

# Plots
plt.figure(figsize=(6, 4.5))
plt.plot(100 * irf_reiter, label='Reiter', linewidth=2)
plt.plot(100 * irf_ssj_numdiff, label='SSJ with 1-sided num. diff.', linewidth=2, linestyle=':')
plt.plot(100 * irf_ssj_2sided, label='SSJ with 2-sided num. diff.', linewidth=2, linestyle='--')
plt.plot(100 * irf_ssj_autodiff, label='SSJ with auto. diff.', linewidth=2, linestyle='-.')
plt.ylabel(r'\% deviation from ss')
plt.xlabel(r'Time $t$')
plt.xlim([0, len(irf_reiter) - 1])
plt.legend(framealpha=0, loc='upper right')
plt.tight_layout()
if savefig: plt.savefig(f'import_export/figures/fig5_reiter_ks1.pdf', format='pdf')

plt.figure(figsize=(6, 4.5))
plt.plot(100 * irf_dist_numdiff, label='SSJ with 1-sided num. diff.', linewidth=2, linestyle=':', color=cpalette[1])
plt.plot(100 * irf_dist_2sided, label='SSJ with 2-sided num. diff.', linewidth=2, linestyle='--', color=cpalette[2])
plt.plot(100 * irf_dist_autodiff, label='SSJ with auto. diff.', linewidth=2, linestyle='-.', color=cpalette[3])
plt.xlabel(r'Time $t$')
plt.yscale('log')
plt.xlim([0, len(irf_reiter) - 1])
plt.tight_layout()
plt.ylim(1E-12, 1E-1)
plt.legend(framealpha=0, loc='upper right')
if savefig: plt.savefig(f'import_export/figures/fig5_reiter_ks2.pdf', format='pdf')
In [6]:
# Load series
irf_reiter = pd.read_csv('import_export/autodiff/impulse_responses/irf_reiter_hank.csv', header=None).values
irf_ssj_autodiff = pd.read_csv('import_export/autodiff/impulse_responses/irf_ssj_autodiff_hank.csv', header=None).values
irf_ssj_numdiff = pd.read_csv('import_export/autodiff/impulse_responses/irf_ssj_numdiff_hank.csv', header=None).values
irf_ssj_2sided = pd.read_csv('import_export/autodiff/impulse_responses/irf_ssj_twosidednumdiff_hank.csv', header=None).values

# Compute distance from Reiter
irf_dist_autodiff = np.abs(irf_ssj_autodiff - irf_reiter)
irf_dist_numdiff = np.abs(irf_ssj_numdiff - irf_reiter)
irf_dist_2sided = np.abs(irf_ssj_2sided - irf_reiter)

# Plots
plt.figure(figsize=(6, 4.5))
plt.plot(100 * irf_reiter, label='Reiter', linewidth=2)
plt.plot(100 * irf_ssj_numdiff, label='SSJ with 1-sided num. diff.', linewidth=2, linestyle=':')
plt.plot(100 * irf_ssj_2sided, label='SSJ with 2-sided num. diff.', linewidth=2, linestyle='--')
plt.plot(100 * irf_ssj_autodiff, label='SSJ with auto. diff.', linewidth=2, linestyle='-.')
plt.ylabel(r'\% deviation from ss')
plt.xlabel(r'Time $t$')
plt.xlim([0, len(irf_reiter) - 1])
plt.legend(framealpha=0, loc='upper right')
plt.tight_layout()
if savefig: plt.savefig(f'import_export/figures/fig5_reiter_ha1.pdf', format='pdf')

plt.figure(figsize=(6, 4.5))
plt.plot(100 * irf_dist_numdiff, label='SSJ with 1-sided num. diff.', linewidth=2, linestyle=':', color=cpalette[1])
plt.plot(100 * irf_dist_2sided, label='SSJ with 2-sided num. diff.', linewidth=2, linestyle='--', color=cpalette[2])
plt.plot(100 * irf_dist_autodiff, label='SSJ with auto. diff.', linewidth=2, linestyle='-.', color=cpalette[3])
plt.xlabel(r'Time $t$')
plt.yscale('log')
plt.xlim([0, len(irf_reiter) - 1])
plt.tight_layout()
plt.ylim(1E-13, 1E-1)
plt.legend(framealpha=0, loc='upper right')
if savefig: plt.savefig(f'import_export/figures/fig5_reiter_ha2.pdf', format='pdf')

Figure 5 - Truncation horizon

In [7]:
sns.set_palette("cubehelix", 3)

# Load data
T_list, rmse_ks_rho90, rmse_hank_rho90, rmse_hank2_rho90, rmse_ks_rho99, rmse_hank_rho99, rmse_hank2_rho99 = np.load(
    'import_export/results/truncation.npy', allow_pickle=True)

# Plot transitory shock
plt.figure(figsize=(6, 4.5))
plt.plot(T_list[:-1], rmse_ks_rho90[:-1], linewidth=2, label='Krusell-Smith')
plt.plot(T_list[:-1], rmse_hank_rho90[:-1], linewidth=2, label='One-asset HANK', linestyle='--')
plt.plot(T_list[:-1], rmse_hank2_rho90[:-1], linewidth=2, label='Two-asset HANK', linestyle='-.')
plt.ylabel(r'RMSE in $dY/Y_{ss}$ in first 100 periods')
plt.ylim(1E-9, 1)
plt.xlim(100, 500)
plt.xlabel(r'Truncation horizon')
plt.yscale('log')
plt.legend(framealpha=0, loc='upper right')
plt.tight_layout()
if savefig: plt.savefig(f'import_export/figures/fig6_truncation1.pdf', format='pdf')

# Plot persistent shock
plt.figure(figsize=(6, 4.5))
plt.plot(T_list[:-1], rmse_ks_rho99[:-1], linewidth=2, label='Krusell-Smith')
plt.plot(T_list[:-1], rmse_hank_rho99[:-1], linewidth=2, label='One-asset HANK', linestyle='--')
plt.plot(T_list[:-1], rmse_hank2_rho99[:-1], linewidth=2, label='Two-asset HANK', linestyle='-.')
plt.ylabel(r'RMSE in $dY/Y_{ss}$ in first 100 periods')
plt.ylim(1E-9, 1)
plt.xlim(100, 500)
plt.xlabel(r'Truncation horizon')
plt.yscale('log')
plt.legend(framealpha=0, loc='upper right')
plt.tight_layout()
if savefig: plt.savefig(f'import_export/figures/fig6_truncation2.pdf', format='pdf')

Figure 6 - Simulations and second moments for Krusell-Smith

In [8]:
sns.set_palette("cubehelix", 4)

# Load model jacobian
ss, G = np.load('import_export/results/ks.npy', allow_pickle=True)
sigma = 0.02
rho = 0.9
T = 300
lag = 60

## Compute simulations
T_simul = 200
G['Z'] = {}
G['Z']['Z'] = np.identity(T)    # add shock itself to G matrix to compute simulated path
outputs = ['Z', 'Y', 'C', 'K']  # define outputs to compute
np.random.seed(2)
simul = aux.simulate(G, ['Z'], outputs, {'Z': rho}, {'Z': sigma}, T, T_simul)

# Compute impulse response
dZ = rho ** (np.arange(T))
dY, dC, dK = G['Y']['Z'] @ dZ, G['C']['Z'] @ dZ, G['K']['Z'] @ dZ
dX = np.stack([dZ, dY, dC, dK], axis=1)


# Compute covariance
def all_covariances_oneshock(dX, sigma, T):
    dft = np.fft.rfftn(dX, s=(2 * T - 2,), axes=(0,))
    total = sigma ** 2 * (dft.conjugate()[:, :, np.newaxis] * dft[:, np.newaxis, :])
    return np.fft.irfftn(total, s=(2 * T - 2,), axes=(0,))[:T]


Sigma = all_covariances_oneshock(dX, sigma, T)

# get sd of each series and correlation
sd = np.sqrt(np.diag(Sigma[0, ...]))
correl = (Sigma / sd) / (sd[:, np.newaxis])

## Figures

# format
ls = np.arange(-lag, lag + 1)
corrs_l_positive = correl[:lag + 1, 0, :]
corrs_l_negative = correl[lag:0:-1, :, 0]
corrs_combined = np.concatenate([corrs_l_negative, corrs_l_positive])

# Plot simulations
plt.figure(figsize=(6, 4.5))
plt.plot(100 * simul['Z'] / ss['Z'], linewidth=2, label=r'$dZ/Z$')
plt.plot(100 * simul['Y'] / ss['Y'], linewidth=2, label=r'$dY/Y$')
plt.plot(100 * simul['C'] / ss['C'], linewidth=2, label=r'$dC/C$')
plt.plot(100 * simul['K'] / ss['K'], linewidth=2, label=r'$dK/K$')
plt.legend(framealpha=0, loc='upper right')
plt.ylabel(r'\% deviation from ss')
plt.xlim(0, T_simul)
plt.xlabel(r'Time $t$')
plt.tight_layout()
if savefig: plt.savefig('import_export/figures/fig8_simul.pdf', format='pdf')

# Plot moments
plt.figure(figsize=(6, 4.5))
plt.plot(ls, corrs_combined[:, 0], linewidth=2, label=r'$dZ$')
plt.plot(ls, corrs_combined[:, 1], linewidth=2, label=r'$dY$')
plt.plot(ls, corrs_combined[:, 2], linewidth=2, label=r'$dC$')
plt.plot(ls, corrs_combined[:, 3], linewidth=2, label=r'$dK$')
plt.legend(framealpha=0, loc='upper right')
plt.xlim(-lag, lag)
plt.xlabel(r'Lag $l$')
plt.tight_layout()
if savefig: plt.savefig('import_export/figures/fig8_autocov.pdf', format='pdf')
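
With this padding (2T-2), the FFT returns the autocovariances exactly for lags 0 through T-2; at lag 0 the formula reduces to sigma^2 * dX'dX, which gives a quick consistency check (a sketch, not part of the original figures):

Sigma0_direct = sigma ** 2 * dX.T @ dX   # lag-0 autocovariance computed directly
assert np.allclose(Sigma[0], Sigma0_direct)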

Figure 7 - Nonlinear impulse responses and transitional dynamics in two-asset HANK

In [9]:
sns.set_palette("cubehelix", 3)

# Load solution
td_nonlinear, td_linear, td_newss, ss = np.load('import_export/results/hank_2a_transition.npy', allow_pickle=True)

# Plot transitory shock
Tplot = 11
plt.figure(figsize=(6, 4.5))
plt.plot(100 * td_linear[:Tplot], linewidth=2, label='Linear', linestyle='-')
plt.plot(100 * td_nonlinear[:Tplot], linewidth=2, label='Nonlinear', linestyle='--')
plt.axhline(y=0, color='#808080', linestyle=':')
plt.legend(framealpha=0)
plt.ylabel(r'\% deviation from ss')
plt.xlim(0, 10)
plt.xlabel(r'Time $t$')
plt.xticks([0, 5, 10])
plt.tight_layout()
if savefig: plt.savefig('import_export/figures/fig9_nonlinear.pdf', format='pdf')

# Plot permanent shock
Tplot = 200
plt.figure(figsize=(6, 4.5))
plt.plot(100 * (td_newss['Y'][:Tplot] - ss['Y']) / ss['Y'], label=r'$Y$')
plt.plot(100 * (td_newss['N'][:Tplot] - ss['N']) / ss['N'], label=r'$N$', linestyle='--')
plt.plot(100 * (td_newss['K'][:Tplot] - ss['K']) / ss['K'], label=r'$K$', linestyle='-.')
plt.axhline(y=0, color='#808080', linestyle=':')
plt.legend(framealpha=0)
plt.ylabel(r'\% deviation from initial ss')
plt.xlim(-0.3, Tplot)
plt.xlabel(r'Time $t$')
plt.tight_layout()
if savefig: plt.savefig('import_export/figures/fig9_newss.pdf', format='pdf')

Figure D.1 - Accuracy of HA Jacobian for Krusell-Smith

In [10]:
sns.set_palette("cubehelix", 5)
colors = plt.rcParams['axes.prop_cycle'].by_key()['color']

# Loop over selected columns s of the HA Jacobian
for s in ['0', '50', '100']:
    # Load series
    J_fakenews_1S = pd.read_csv('import_export/autodiff/jacobians/J_fakenews_oneside_ks_' + s + '.csv').values
    J_direct_1S = pd.read_csv('import_export/autodiff/jacobians/J_direct_oneside_ks_' + s + '.csv').values
    J_fakenews_2S = pd.read_csv('import_export/autodiff/jacobians/J_fakenews_twoside_ks_' + s + '.csv').values
    J_direct_2S = pd.read_csv('import_export/autodiff/jacobians/J_direct_twoside_ks_' + s + '.csv').values
    J_fakenews_AD = pd.read_csv('import_export/autodiff/jacobians/J_fakenews_ad_ks_' + s + '.csv').values
    J_direct_AD = pd.read_csv('import_export/autodiff/jacobians/J_direct_ad_ks_' + s + '.csv').values

    # Plot and save graph
    plt.figure(figsize=(4, 3))
    plt.plot(np.abs(J_fakenews_1S - J_direct_AD), linewidth=2)
    plt.plot(np.abs(J_direct_1S - J_direct_AD), linewidth=2, linestyle='--')
    plt.plot(np.abs(J_fakenews_2S - J_direct_AD), linewidth=2)
    plt.plot(np.abs(J_direct_2S - J_direct_AD), linewidth=2, linestyle='--')
    plt.plot(np.abs(J_fakenews_AD - J_direct_AD), linewidth=2)
    if s == '0':
        plt.ylabel(r'vs Direct, auto. diff.')
    plt.xlabel(r'Time $t$')
    plt.yscale('log')
    plt.xlim([0, T - 1])
    plt.tight_layout()
    if savefig: plt.savefig('import_export/figures/figC1_ks_' + s + '.pdf', format='pdf')

# Now deal with legends
label_list = ['Fake-news, 1-sided', 'Direct, 1-sided', 'Fake-news, 2-sided', 'Direct, 2-sided',
              'Fake-news, auto. diff.']
plt.figure(figsize=(11, 1))
plt.plot(np.abs(J_fakenews_1S - J_direct_AD), linewidth=2, label=label_list[0])
plt.plot(np.abs(J_direct_1S - J_direct_AD), linewidth=2, label=label_list[1], linestyle='--')
plt.plot(np.abs(J_fakenews_2S - J_direct_AD), linewidth=2, label=label_list[2])
plt.plot(np.abs(J_direct_2S - J_direct_AD), linewidth=2, label=label_list[3], linestyle='--')
plt.plot(np.abs(J_fakenews_AD - J_direct_AD), linewidth=2, label=label_list[4])
plt.ylim(1, 2)
plt.axis('off')
plt.legend(framealpha=0, loc='center', ncol=3);
if savefig: plt.savefig(f'import_export/figures/figC1_ks_title.pdf', format='pdf');

Figure D.2 - Accuracy of HA Jacobian for one-asset HANK

Requires outputs from automatic differentiation in the autodiff/jacobians folder.

In [11]:
for s in ['0', '50', '100']:
    # Load series
    J_fakenews_1S = pd.read_csv('import_export/autodiff/jacobians/J_fakenews_oneside_hank_' + s + '.csv').values
    J_direct_1S = pd.read_csv('import_export/autodiff/jacobians/J_direct_oneside_hank_' + s + '.csv').values
    J_fakenews_2S = pd.read_csv('import_export/autodiff/jacobians/J_fakenews_twoside_hank_' + s + '.csv').values
    J_direct_2S = pd.read_csv('import_export/autodiff/jacobians/J_direct_twoside_hank_' + s + '.csv').values
    J_fakenews_AD = pd.read_csv('import_export/autodiff/jacobians/J_fakenews_ad_hank_' + s + '.csv').values
    J_direct_AD = pd.read_csv('import_export/autodiff/jacobians/J_direct_ad_hank_' + s + '.csv').values

    # Plot and save graph
    plt.figure(figsize=(4, 3))
    plt.plot(np.abs(J_fakenews_1S - J_direct_AD), linewidth=2)
    plt.plot(np.abs(J_direct_1S - J_direct_AD), linewidth=2, linestyle='--')
    plt.plot(np.abs(J_fakenews_2S - J_direct_AD), linewidth=2)
    plt.plot(np.abs(J_direct_2S - J_direct_AD), linewidth=2, linestyle='--')
    plt.plot(np.abs(J_fakenews_AD - J_direct_AD), linewidth=2)
    if s == '0':
        plt.ylabel(r'vs Direct, auto. diff.')
    plt.xlabel(r'Time $t$')
    plt.yscale('log')
    plt.xlim([0, T - 1])
    plt.tight_layout()
    if savefig: plt.savefig('import_export/figures/figC1_hank_' + s + '.pdf', format='pdf')

# Now deal with legends
label_list = ['Fake-news, 1-sided', 'Direct, 1-sided', 'Fake-news, 2-sided', 'Direct, 2-sided',
              'Fake-news, auto. diff.']
plt.figure(figsize=(11, 1))
plt.plot(np.abs(J_fakenews_1S - J_direct_AD), linewidth=2, label=label_list[0])
plt.plot(np.abs(J_direct_1S - J_direct_AD), linewidth=2, label=label_list[1], linestyle='--')
plt.plot(np.abs(J_fakenews_2S - J_direct_AD), linewidth=2, label=label_list[2])
plt.plot(np.abs(J_direct_2S - J_direct_AD), linewidth=2, label=label_list[3], linestyle='--')
plt.plot(np.abs(J_fakenews_AD - J_direct_AD), linewidth=2, label=label_list[4])
plt.ylim(1, 2)
plt.axis('off')
plt.legend(framealpha=0, loc='center', ncol=3);
if savefig: plt.savefig(f'import_export/figures/figC1_hank_title.pdf', format='pdf')

Figure E.1 - Equivalence with Dynare for Smets-Wouters

In [12]:
sns.set_palette("cubehelix", 7)

# Load data from python
x, diff_irf, irf, irf_dist = np.load('import_export/results/sw.npy', allow_pickle=True)
shocks_list = ['eps_a', 'eps_b', 'eps_g', 'eps_I', 'eps_i', 'eps_p', 'eps_w']
label_list = ['Productivity', 'Monetary', 'Gov. spending', 'Investment', 'Discount factor', 'Price markup',
              'Wage markup']

# swap order of discount factor shock and MP shock to be consistent with Herbst-Schorfheide
irf[[4, 1]] = irf[[1, 4]]
irf_dist[[4, 1]] = irf_dist[[1, 4]]

plt.figure(figsize=(6, 4.5))
for k in range(len(shocks_list)): plt.plot(irf[k], label=label_list[k], linewidth=2)
plt.axhline(y=0, color='#808080', linewidth=2, linestyle=':')
plt.ylabel(r'\% deviation from ss')
plt.xlabel(r'Time $t$')
plt.xlim(0, 50)
plt.tight_layout()
if savefig: plt.savefig(f'import_export/figures/figB3_sw1.pdf', format='pdf')

plt.figure(figsize=(6, 4.5))
for k in range(len(shocks_list)): plt.plot(irf_dist[k], label=label_list[k], linewidth=2)
plt.ylabel(r'\% deviation from ss')
plt.xlabel(r'Time $t$')
plt.ylim(1E-15, 1E-3)
plt.xlim(0, 50)
plt.yscale('log')
plt.tight_layout()
if savefig: plt.savefig(f'import_export/figures/figB3_sw2.pdf', format='pdf')

plt.figure(figsize=(11, 1))
for k in range(len(shocks_list)): plt.plot(irf_dist[k], label=label_list[k], linewidth=2)
plt.ylim(1, 2)
plt.axis('off')
plt.legend(framealpha=0, loc='center', ncol=4);
if savefig: plt.savefig(f'import_export/figures/figB3_sw3.pdf', format='pdf')

Figure E.2 - Equivalence with Dynare for Herbst-Schorfheide

In [13]:
sns.set_palette("cubehelix", 3)

# Load data from python
x, diff_irf, irf, irf_dist = np.load('import_export/results/hs.npy', allow_pickle=True)
label_list = ['Productivity', 'Monetary', 'Gov. spending']
shocks_list = ['y_epsz', 'y_epsR', 'y_epsg']

plt.figure(figsize=(6, 4.5))
for k in range(len(shocks_list)): plt.plot(irf[k], label=label_list[k], linewidth=2)
plt.axhline(y=0, color='#808080', linewidth=2, linestyle=':')
plt.ylabel(r'\% deviation from ss')
plt.xlabel(r'Time $t$')
plt.xlim(0, 50)
plt.tight_layout()
if savefig: plt.savefig(f'import_export/figures/figB3_hs1.pdf', format='pdf')

plt.figure(figsize=(6, 4.5))
for k in range(len(shocks_list)): plt.plot(irf_dist[k], label=label_list[k], linewidth=2)
plt.ylabel(r'\% deviation from ss')
plt.xlabel(r'Time $t$')
plt.xlim(0, 50)
plt.yscale('log')
plt.tight_layout()
if savefig: plt.savefig(f'import_export/figures/figB3_hs2.pdf', format='pdf')

plt.figure(figsize=(11, 1))
for k in range(len(shocks_list)): plt.plot(irf_dist[k], label=label_list[k], linewidth=2)
plt.ylim(1, 2)
plt.axis('off')
plt.legend(framealpha=0, loc='center', ncol=4)
if savefig: plt.savefig(f'import_export/figures/figB3_hs3.pdf', format='pdf')

Figures F.1 and F.2 - Estimation results for Krusell-Smith

In [14]:
sns.set_palette("cubehelix", 5)
plt.rc('font', size=20)
In [15]:
# Load data
x, para_avg, para_5, para_95, Nsim, acceptancerate, rmean, kernel_support, kernel_density, para_sim = np.load(
    'import_export/results/ks_est.npy', allow_pickle=True)

# Plot
plt_title = ['TFP shock s.d.', 'TFP shock AR-1', 'TFP shock MA-1']
plt.rcParams.update({'figure.max_open_warning': 0})

for i in range(para_avg.shape[0]):
    # recursive means
    plt.figure(figsize=(.8 * 6, .8 * 4.5))
    plt.plot(rmean[i], linewidth=2)
    plt.ylabel(r'Parameter value')
    plt.xlabel(r'Draw')
    plt.title(plt_title[i])
    plt.tight_layout()
    if savefig: plt.savefig(f'import_export/figures/fig_MH_ks_avg{i}.pdf', format='pdf')

    # posterior
    plt.figure(figsize=(.8 * 6, .8 * 4.5))
    plt.plot(kernel_support[i], kernel_density[i], linewidth=2, label='Posterior density')
    plt.axvline(np.mean(para_sim[i]), linewidth=2, color=cpalette[1], linestyle=':', label='Recursive average')
    plt.axvline(x[i], linewidth=2, color=cpalette[2], linestyle=':', label='Mode from optimization')
    plt.xlabel(r'Parameter value')
    plt.ylabel(r'Density')
    plt.title(plt_title[i])
    plt.xlim(kernel_support[i][0], kernel_support[i][-1])
    plt.tight_layout()
    if savefig: plt.savefig(f'import_export/figures/fig_MH_ks_density{i}.pdf', format='pdf')

# Create legend as well
plt.figure(figsize=(11, 1))
plt.axvline(linewidth=2, label='Posterior density')
plt.axvline(linewidth=2, color=cpalette[1], linestyle=':', label='Recursive average')
plt.axvline(linewidth=2, color=cpalette[2], linestyle=':', label='Mode from optimization')
plt.xlim(1, 2)
plt.axis('off')
plt.legend(framealpha=0, loc='center', ncol=4);
if savefig: plt.savefig(f'import_export/figures/fig_MH_legend.pdf', format='pdf')

Figures F.3 and F.4 - Estimation results for one-asset HANK (shocks)

In [16]:
# Load data
x, para_avg, para_5, para_95, Nsim, acceptancerate, rmean, kernel_support, kernel_density, para_sim = np.load(
    'import_export/results/hank_1a_est.npy', allow_pickle=True)

# Plot
plt_title = ['Monetary shock s.d.', 'Monetary shock AR-1', 'G shock s.d.', 'G shock AR-1', 'P markup shock s.d.',
             'P markup shock AR-1']
plt.rcParams.update({'figure.max_open_warning': 0})

for i in range(para_avg.shape[0]):
    # recursive means
    plt.figure(figsize=(.8 * 6, .8 * 4.5))
    plt.plot(rmean[i], linewidth=2)
    plt.ylabel(r'Parameter value')
    plt.xlabel(r'Draw')
    plt.title(plt_title[i])
    plt.tight_layout()
    if savefig: plt.savefig(f'import_export/figures/fig_MH_hank_1a_avg{i}.pdf', format='pdf')

    # posterior
    plt.figure(figsize=(.8 * 6, .8 * 4.5))
    plt.plot(kernel_support[i], kernel_density[i], linewidth=2, label='Posterior density')
    plt.axvline(np.mean(para_sim[i]), linewidth=2, color=cpalette[1], linestyle=':', label='Recursive average')
    plt.axvline(x[i], linewidth=2, color=cpalette[2], linestyle=':', label='Mode from optimization')
    plt.xlabel(r'Parameter value')
    plt.ylabel(r'Density')
    plt.title(plt_title[i])
    plt.xlim(kernel_support[i][0], kernel_support[i][-1])
    plt.tight_layout()
    if savefig: plt.savefig(f'import_export/figures/fig_MH_hank_1a_density{i}.pdf', format='pdf')

Figures F.5 and F.6 - Estimation results for one-asset HANK (shocks + parameters)

In [17]:
# Load data
x, para_avg, para_5, para_95, Nsim, acceptancerate, rmean, kernel_support, kernel_density, para_sim = np.load(
    'import_export/results/hank_1afull_est.npy', allow_pickle=True)

# Plot
plt_title = ['Monetary shock s.d.', 'Monetary shock AR-1', 'G shock s.d.', 'G shock AR-1', 'P markup shock s.d.',
             'P markup shock AR-1', r'$\phi$', r'$\phi_y$', r'$\kappa$']
plt.rcParams.update({'figure.max_open_warning': 0})

for i in range(para_avg.shape[0]):
    # recursive means
    plt.figure(figsize=(.8 * 6, .8 * 4.5))
    plt.plot(rmean[i], linewidth=2)
    plt.ylabel(r'Parameter value')
    plt.xlabel(r'Draw')
    plt.title(plt_title[i])
    plt.tight_layout()
    if savefig: plt.savefig(f'import_export/figures/fig_MH_hank_1afull_avg{i}.pdf', format='pdf')

    # posterior
    plt.figure(figsize=(.8 * 6, .8 * 4.5))
    plt.plot(kernel_support[i], kernel_density[i], linewidth=2, label='Posterior density')
    plt.axvline(np.mean(para_sim[i]), linewidth=2, color=cpalette[1], linestyle=':', label='Recursive average')
    plt.axvline(x[i], linewidth=2, color=cpalette[2], linestyle=':', label='Mode from optimization')
    plt.xlabel(r'Parameter value')
    plt.ylabel(r'Density')
    plt.title(plt_title[i])
    plt.xlim(kernel_support[i][0], kernel_support[i][-1])
    plt.tight_layout()
    if savefig: plt.savefig(f'import_export/figures/fig_MH_hank_1afull_density{i}.pdf', format='pdf')

Figures F.7 and F.8 - Estimation results for two-asset HANK (shocks)

In [18]:
# Load data
x, para_avg, para_5, para_95, Nsim, acceptancerate, rmean, kernel_support, kernel_density, para_sim = np.load(
    'import_export/results/hank_2a_est.npy', allow_pickle=True)

# Plot
plt_title = ['TFP shock s.d.', 'TFP shock AR-1', 'G shock s.d.', 'G shock AR-1', r'$\beta$ shock s.d.',
             r'$\beta$ shock AR-1',
             'Investment shock s.d.', 'Investment shock AR-1', 'Monetary shock s.d.', 'Monetary shock AR-1',
             'P markup shock s.d.', 'P markup shock AR-1', 'W markup shock s.d.', 'W markup shock AR-1']
plt.rcParams.update({'figure.max_open_warning': 0})

for i in range(para_avg.shape[0]):
    # recursive means
    plt.figure(figsize=(.8 * 6, .8 * 4.5))
    plt.plot(rmean[i], linewidth=2)
    plt.ylabel(r'Parameter value')
    plt.xlabel(r'Draw')
    plt.title(plt_title[i])
    plt.tight_layout()
    if savefig: plt.savefig(f'import_export/figures/fig_MH_hank_2a_avg{i}.pdf', format='pdf')

    # posterior
    plt.figure(figsize=(.8 * 6, .8 * 4.5))
    plt.plot(kernel_support[i], kernel_density[i], linewidth=2, label='Posterior density')
    plt.axvline(np.mean(para_sim[i]), linewidth=2, color=cpalette[1], linestyle=':', label='Recursive average')
    plt.axvline(x[i], linewidth=2, color=cpalette[2], linestyle=':', label='Mode from optimization')
    plt.xlabel(r'Parameter value')
    plt.ylabel(r'Density')
    plt.title(plt_title[i])
    plt.xlim(kernel_support[i][0], kernel_support[i][-1])
    plt.tight_layout()
    if savefig: plt.savefig(f'import_export/figures/fig_MH_hank_2a_density{i}.pdf', format='pdf')

Figures F.9 and F.10 - Estimation results for two-asset HANK (shocks + parameters)

In [19]:
# Load data
x, para_avg, para_5, para_95, Nsim, acceptancerate, rmean, kernel_support, kernel_density, para_sim = np.load(
    'import_export/results/hank_2afull_est.npy', allow_pickle=True)

# Plot
plt_title = ['TFP shock s.d.', 'TFP shock AR-1', 'G shock s.d.', 'G shock AR-1', r'$\beta$ shock s.d.',
             r'$\beta$ shock AR-1',
             'Investment shock s.d.', 'Investment shock AR-1', 'Monetary shock s.d.', 'Monetary shock AR-1',
             'P markup shock s.d.', 'P markup shock AR-1', 'W markup shock s.d.', 'W markup shock AR-1',
             r'$\phi$', r'$\phi_y$', r'$\kappa_p$', r'$\kappa_w$', r'$\epsilon_I$']
plt.rcParams.update({'figure.max_open_warning': 0})

for i in range(para_avg.shape[0]):
    # recursive means
    plt.figure(figsize=(.8 * 6, .8 * 4.5))
    plt.plot(rmean[i], linewidth=2)
    plt.ylabel(r'Parameter value')
    plt.xlabel(r'Draw')
    plt.title(plt_title[i])
    plt.tight_layout()
    if savefig: plt.savefig(f'import_export/figures/fig_MH_hank_2afull_avg{i}.pdf', format='pdf')

    # posterior
    plt.figure(figsize=(.8 * 6, .8 * 4.5))
    plt.plot(kernel_support[i], kernel_density[i], linewidth=2, label='Posterior density')
    plt.axvline(np.mean(para_sim[i]), linewidth=2, color=cpalette[1], linestyle=':', label='Recursive average')
    plt.axvline(x[i], linewidth=2, color=cpalette[2], linestyle=':', label='Mode from optimization')
    plt.xlabel(r'Parameter value')
    plt.ylabel(r'Density')
    plt.title(plt_title[i])
    plt.xlim(kernel_support[i][0], kernel_support[i][-1])
    plt.tight_layout()
    if savefig: plt.savefig(f'import_export/figures/fig_MH_hank_2afull_density{i}.pdf', format='pdf')

Table 1 - Summary of computing times

In [20]:
# Load data
times_fakenews = np.load('import_export/results/fake_news_speed.npy', allow_pickle=True)[0]
times_ha = times_fakenews['fake'][0]
times_irf = np.load('import_export/results/irf_speed.npy', allow_pickle=True)[0]
times_irf['DAG'][0]['hd_ks'] = times_irf['DAG'][0]['ks']
times_G = np.load('import_export/results/G_speed.npy', allow_pickle=True)[0]
times_G['DAG'][0]['hd_ks'] = times_G['DAG'][0]['ks']
times_est = {}
times_est['ks'] = np.load('import_export/results/ks_speed.npy', allow_pickle=True)[0]
times_est['hd_ks'] = np.load('import_export/results/hd_ks_speed.npy', allow_pickle=True)[0]
times_est['ha'] = np.load('import_export/results/hank_1a_speed.npy', allow_pickle=True)[0]
times_est['hafull'] = np.load('import_export/results/hank_1afull_speed.npy', allow_pickle=True)[0]
times_est['ha2'] = np.load('import_export/results/hank_2a_speed.npy', allow_pickle=True)[0]
times_est['ha2full'] = np.load('import_export/results/hank_2afull_speed.npy', allow_pickle=True)[0]
times_nonlinear = np.load('import_export/results/nonlinear_speed.npy', allow_pickle=True)[0]

# Format tables
t1, t2, t3, t4, t5, t6, t7, t8, t9, t10, t11 = '\n', '', '', '', '\n', '', '', '', '', '', '\n'
tss, tsimul = '\n', ''
para_list = ['Krusell-Smith', 'HD Krusell-Smith', 'one-asset HANK', 'two-asset HANK']
id_list = ['ks', 'hd_ks', 'ha', 'ha2']
for n, var in enumerate(id_list):
    t1 += id_list[n] + '\t'
    tss += ("{:.2f}").format(times_est[var]['ss']) + '\t'
    t2 += ("{:.2f}").format(times_ha[var]) + '\t'
    t3 += ("{:.4f}").format(times_irf['DAG'][0][var]) + '\t'
    t4 += ("{:.4f}").format(times_G['DAG'][0][var]) + '\t'
    t5 += ("{:.4f}").format(times_est[var]['lk']) + '\t'
    t6 += ("{:.2f}").format(times_est[var]['est']) + '\t'
    t7 += ("{:.0f}").format(times_est[var]['mh']) + '\t'
    t11 += ("{:.2f}").format(times_nonlinear[var]) + '\t'
    tsimul += ("{:.4f}").format(times_est[var]['sim']) + '\t'

t8 = '\n\t' + '\t' + ("{:.3f}").format(times_est['hafull']['lk']) + '\t' + ("{:.3f}").format(
    times_est['ha2full']['lk']) + '\t'
t9 = '\t' + '\t' + ("{:.0f}").format(times_est['hafull']['est']) + '\t' + ("{:.0f}").format(
    times_est['ha2full']['est']) + '\t'
t10 = '\t' + '\t' + ("{:.0f}").format(times_est['hafull']['mh']) + '\t' + ("{:.0f}").format(
    times_est['ha2full']['mh']) + '\t'

# Print results
print('\nComputing times (in seconds)')
print(t1)
print(tss)
print(t2)
print(t3)
print(t4)
print(tsimul)
print(t5)
print(t6)
print(t7)
print(t8)
print(t9)
print(t10)
print(t11)
Computing times (in seconds)

ks	hd_ks	ha	ha2	

0.43	46.47	1.93	16.76	
0.09	9.97	0.36	4.12	
0.0009	0.0009	0.0134	0.0311	
0.0040	0.0040	0.0571	0.2161	
0.0050	0.0050	0.0279	0.0883	

0.0007	0.0007	0.0023	0.0143	
0.06	0.06	0.62	15.50	
136	136	553	2935	

		0.061	0.273	
		12	534	
		12154	50554	

0.34	27.00	1.28	14.55	

Table 2 - Direct and fake news algorithms to compute Jacobians

In [21]:
# Load data
times = np.load('import_export/results/fake_news_speed.npy', allow_pickle=True)[0]

# Format tables
t1, t2, t3, t4, t5, t6, t7, t8, t9 = '\n', '\n', '', '', '\n', '', '', '', ''
para_list = ['Krusell-Smith', 'HD Krusell-Smith', 'one-asset HANK', 'two-asset HANK']
id_list = ['ks', 'hd_ks', 'ha', 'ha2']
for n, var in enumerate(id_list):
    t1 += id_list[n] + '\t'
    t2 += ("{:.0f}").format(times['direct'][0][var]) + '\t'
    t3 += ("{:.0f}").format(times['direct'][1][var]) + '\t'
    t4 += ("{:.0f}").format(times['direct'][2][var]) + '\t'
    t5 += ("{:.3f}").format(times['fake'][0][var]) + '\t'
    t6 += ("{:.3f}").format(times['fake'][1][var]) + '\t'
    t7 += ("{:.3f}").format(times['fake'][2][var]) + '\t'
    t8 += ("{:.3f}").format(times['fake'][3][var]) + '\t'
    t9 += ("{:.3f}").format(times['fake'][4][var]) + '\t'

# Print results
print('\nComputing times (in seconds)')
print(t1)
print(t2)
print(t3)
print(t4)
print(t5)
print(t6)
print(t7)
print(t8)
print(t9)
Computing times (in seconds)

ks	hd_ks	ha	ha2	

22	1952	170	990	
14	1189	147	881	
8	762	23	109	

0.095	9.968	0.358	4.116	
0.064	7.952	0.255	3.704	
0.009	1.017	0.019	0.115	
0.017	0.995	0.070	0.279	
0.003	0.003	0.014	0.018	

Table 3 - Estimation times

In [22]:
# Load data
times = {}
times['ks'] = np.load('import_export/results/ks_speed.npy',allow_pickle=True)[0]
times['ha'] = np.load('import_export/results/hank_1a_speed.npy',allow_pickle=True)[0]
times['hafull'] = np.load('import_export/results/hank_1afull_speed.npy',allow_pickle=True)[0]
times['ha2'] = np.load('import_export/results/hank_2a_speed.npy',allow_pickle=True)[0]
times['ha2full'] = np.load('import_export/results/hank_2afull_speed.npy',allow_pickle=True)[0]

# Format tables
t1, t2, t3, t4, t5, t6, t7, t8, t9, t10, t11 = '\t', '\nLk (ms)\t', '1 (ms)\t', '2 (ms)\t', '3 (ms)\t', '\nOpt (s)\t', 'eval\t', '\nMH (s)\t', 'eval\t', 'rate\t', 'c\t'
para_list = ['Krusell-Smith', 'hank shocks', 'hank shocks + para', 'two-asset hank shocks','two-asset hank shocks + para']
id_list = ['ks','ha','hafull','ha2','ha2full']
for n, var in enumerate(id_list):
    t1 += id_list[n] + '\t'
    t2 += ("{:.3f}").format(1000*times[var]['lk']) + '\t'
    t3 += ("{:.3f}").format(1000*times[var]['lk_step1']) + '\t'
    t4 += ("{:.3f}").format(1000*times[var]['lk_step2']) + '\t'
    t5 += ("{:.3f}").format(1000*times[var]['lk_step3']) + '\t'
    t6 += ("{:.2f}").format(times[var]['est']) + '\t'
    t7 += ("{:.0f}").format(times[var]['nfev']) + '\t'
    t8 += ("{:.2f}").format(times[var]['mh']) + '\t'
    t9 += '100K\t'
    t10 += ("{:.3f}").format(times[var]['rate']) + '\t'
    t11 += ("{:.2f}").format(times[var]['c']) + '\t'

# Print results
print('\nEstimation times\n')
print(t1)
print(t2)
print(t3)
print(t4)
print(t5)
print(t6)
print(t7)
print(t8)
print(t9)
print(t10)
print(t11)
Estimation times

	ks	ha	hafull	ha2	ha2full	

Lk (ms)	0.694	2.299	60.931	14.321	273.197	
1 (ms)	0.018	0.109	58.809	1.300	260.287	
2 (ms)	0.043	0.156	0.156	0.766	0.761	
3 (ms)	0.633	2.034	1.966	12.255	12.149	

Opt (s)	0.06	0.62	11.57	15.50	533.58	
eval	81	237	440	1034	5720	

MH (s)	135.75	552.96	12153.66	2935.19	50553.63	
eval	100K	100K	100K	100K	100K	
rate	0.253	0.248	0.253	0.255	0.241	
c	2.50	1.10	0.65	0.40	0.10	

Table B.1 - Krusell-Smith calibration

In [23]:
ss = ks.ks_ss()
sig = inspect.signature(ks.ks_ss)

print('r     = ', ss['r'])
print('sigma = ', ss['eis']**(-1))
print('alpha = ', ss['alpha'])
print('delta = ', ss['delta'])
print(sig.parameters['rho'])
print(sig.parameters['sigma'])
print('ne    = ', len(ss['e_grid']))
print('nk    = ', len(ss['a_grid']))
r     =  0.01
sigma =  1.0
alpha =  0.11
delta =  0.025
rho=0.966
sigma=0.5
ne    =  7
nk    =  500

Table B.2 - One-asset HANK calibration

In [24]:
ss = hank_1a.hank_ss()
sig = inspect.signature(hank_1a.hank_ss)

print('beta  = ', round(ss['beta'],3) , ' (r =', ss['r'],')')
print('vphi  = ', round(ss['vphi'],3))
print('sigma = ', ss['eis']**(-1))
print('nu    = ', ss['frisch']**(-1))
print('bbar  = ', ss['a_grid'][0])
print(sig.parameters['rho_s'])
print(sig.parameters['sigma_s'])
print('mu    = ', ss['mu'])
print('kappa = ', ss['kappa'])
print('B     = ', ss['B'])
print('G     = ', round(ss['Y']-ss['C'],3))
print('phi   = ', ss['phi'])
print('phiy  = ', 0)
print('ne    = ', len(ss['e_grid']))
print('nk    = ', len(ss['a_grid']))
beta  =  0.982  (r = 0.005 )
vphi  =  0.786
sigma =  2.0
nu    =  2.0
bbar  =  0.0
rho_s=0.966
sigma_s=0.5
mu    =  1.2
kappa =  0.1
B     =  5.6
G     =  -0.0
phi   =  1.5
phiy  =  0
ne    =  7
nk    =  500

Table B.3 - Two-asset HANK calibration

In [25]:
ss = hank_2a.hank_ss(noisy=False)
sig = inspect.signature(hank_2a.hank_ss)

print('beta  = ', round(ss['beta'],3) , ' (r =', ss['r'],')')
print('sigma = ', ss['eis']**(-1))
print('chi0  = ', ss['chi0'])
print('chi1  = ', round(ss['chi1'],3), ' (B =', round(ss['B'],3),')')
print('chi2  = ', ss['chi2'])
print('bbar  = ', ss['a_grid'][0])
print(sig.parameters['rho_z'])
print(sig.parameters['sigma_z'])
print('----------')
print('vphi   = ', round(ss['vphi'],3), '( N=',ss['N'],')')
print('nu     = ', ss['frisch']**(-1))
print('muw    = ', ss['muw'])
print('kappaw = ', ss['kappaw'])
print('----------')
print('Z      = ', round(ss['Z'],3), '( Y=',ss['Y'],')')
print('alpha  = ', round(ss['alpha'],3), '( K=',ss['K'],')')
print('mup    = ', round(ss['mup'],3), '( p+Bg=',round(ss['A']+ss['B'],3),')')
print('delta  = ', round(ss['delta'],3))
print('kappap = ', ss['kappap'])
print('----------')
print('omega  = ', ss['omega'])
print('----------')
print('tau    = ', round(ss['tax'],3))
print('G      = ', ss['G'])
print('Bg     = ', ss['Bg'])
print('phi    = ', ss['phi'])
print('phiy   = ',0)
print('----------')
print('ne    = ', len(ss['e_grid']))
print('nb    = ', len(ss['b_grid']))
print('na    = ', len(ss['a_grid']))
beta  =  0.976  (r = 0.0125 )
sigma =  2.0
chi0  =  0.25
chi1  =  6.416  (B = 1.04 )
chi2  =  2
bbar  =  0.0
rho_z=0.966
sigma_z=0.92
----------
vphi   =  1.713 ( N= 1 )
nu     =  1.0
muw    =  1.1
kappaw =  0.1
----------
Z      =  0.468 ( Y= 1 )
alpha  =  0.33 ( K= 10 )
mup    =  1.015 ( p+Bg= 14.0 )
delta  =  0.02
kappap =  0.1
----------
omega  =  0.005
----------
tau    =  0.356
G      =  0.2
Bg     =  2.8
phi    =  1.5
phiy   =  0
----------
ne    =  3
nb    =  50
na    =  70

Table C.1 - Computing times for G

In [26]:
# Load data
times = np.load('import_export/results/G_speed.npy',allow_pickle=True)[0]

# Format tables
t1, t2, t3, t4, t5 = '\n', '\n', '', '', ''
para_list = ['Krusell-Smith', 'one-asset HANK', 'two-asset HANK']
id_list = ['ks','ha','ha2']
for n, var in enumerate(id_list):
    t1 += id_list[n] + '\t'
    t2 += ("{:.1f}").format(1000*times['DAG'][0][var]) + '\t'
    t3 += ("{:.1f}").format(1000*times['DAG'][1][var]) + '\t'
    t4 += ("{:.1f}").format(1000*times['DAG'][2][var]) + '\t'
    t5 += ("{:.1f}").format(1000*times['DAG'][3][var]) + '\t'

# Print results
print('\nComputing times (in ms)')
print(t1)
print(t2)
print(t3)
print(t4)
print(t5)
Computing times (in ms)

ks	ha	ha2	

4.0	57.1	216.1	
0.7	7.0	33.2	
1.4	27.2	53.5	
2.0	22.9	129.4	

Table C.2 - Computing times for G, efficient vs flat DAG

In [27]:
# Load data
times = np.load('import_export/results/G_speed.npy',allow_pickle=True)[0]

# Format tables
t1, t2, t3 = '\n', '\n', ''
para_list = ['Krusell-Smith', 'one-asset HANK', 'two-asset HANK']
id_list = ['ks','ha','ha2']
for n, var in enumerate(id_list):
    t1 += id_list[n] + '\t'
    t2 += ("{:.1f}").format(1000*times['DAG'][0][var]) + '\t'
    t3 += ("{:.1f}").format(1000*times['no_DAG'][0][var]) + '\t'

# Print results
print('\nComputing times (in ms)')
print(t1)
print(t2)
print(t3)
Computing times (in ms)

ks	ha	ha2	

4.0	57.1	216.1	
27.1	168.2	1551.8	

Table C.3 - Computing times for impulse responses

In [28]:
# Load data
times = np.load('import_export/results/irf_speed.npy',allow_pickle=True)[0]

# Format tables
t1, t3, t4, t5, t6, t7 = '\n', '\n', '', '', '', ''
para_list = ['Krusell-Smith', 'one-asset HANK', 'two-asset HANK']
id_list = ['ks','ha','ha2']
for n, var in enumerate(id_list):
    t1 += id_list[n] + '\t'
    t3 += ("{:.1f}").format(1000*times['DAG'][0][var]) + '\t'
    t4 += ("{:.1f}").format(1000*times['DAG'][1][var]) + '\t'
    t5 += ("{:.1f}").format(1000*times['DAG'][2][var]) + '\t'
    t6 += ("{:.1f}").format(1000*times['DAG'][3][var]) + '\t'
    t7 += ("{:.1f}").format(1000*times['DAG'][4][var]) + '\t'

# Print results
print('\nComputing times (in ms)')
print(t1)
print(t3)
print(t4)
print(t5)
print(t6)
print(t7)
Computing times (in ms)

ks	ha	ha2	

0.9	13.4	31.1	
0.4	5.1	21.8	
0.1	0.1	0.4	
0.5	8.0	7.8	
0.1	0.3	1.1	

Table F.1a - Estimated parameters for Smets-Wouters

In [29]:
# Load data from python
x, diff_irf, irf, irf_dist = np.load('import_export/results/sw.npy', allow_pickle=True)

# Load estimates from dynare
order_dynare = [0, 7, 1, 8, 2, 9, 3, 10, 4, 11, 5, 12, 14, 6, 13, 15]
x_dynare = loadmat('import_export/dynare/smets_wouters/sw_dynarecode_mode')['xparam1'][order_dynare].T[0]

# Format tables
t1, t2, t3, t4, t5, t6 = 'Params\t', 'SSJ\t', 'Dynare\t', '\nParams\t', 'SSJ\t', 'Dynare\t'
para_list = ['sd_a', 'ar_a', 'sd_b', 'ar_b', 'sd_g', 'ar_g', 'sd_I', 'ar_I', 'sd_i', 'ar_i', 'sd_p', 'ar_p', 'ma_p',
             'sd_w', 'ar_w', 'ma_w']
for n, var in enumerate(para_list[:10]):
    t1 += para_list[n] + '\t'
    t2 += ("{:.3f}").format(x[n]) + '\t'
    t3 += ("{:.3f}").format(x_dynare[n]) + '\t'
for n, var in enumerate(para_list[10:]):
    t4 += para_list[10:][n] + '\t'
    t5 += ("{:.3f}").format(x[10:][n]) + '\t'
    t6 += ("{:.3f}").format(x_dynare[10:][n]) + '\t'

# Print results
print('\nMode of the posterior distribution\n')
print(t1)
print(t2)
print(t3)
print(t4)
print(t5)
print(t6)

print(
    f'\nMaximum distance in irfs between sequence jacobian and dynare across all shocks and series: {np.max(diff_irf):.3}')
print(
    f'Maximum distance in estimation between sequence jacobian and dynare: {100 * np.max(np.abs((x_dynare - x) / x)):.2}%')
Mode of the posterior distribution

Params	sd_a	ar_a	sd_b	ar_b	sd_g	ar_g	sd_I	ar_I	sd_i	ar_i	
SSJ	0.446	0.978	0.246	0.251	0.589	0.971	0.461	0.662	0.229	0.086	
Dynare	0.446	0.978	0.245	0.252	0.589	0.970	0.460	0.663	0.229	0.086	

Params	sd_p	ar_p	ma_p	sd_w	ar_w	ma_w	
SSJ	0.135	0.975	0.740	0.256	0.976	0.925	
Dynare	0.133	0.975	0.735	0.256	0.975	0.924	

Maximum distance in irfs between sequence jacobian and dynare across all shocks and series: 5.55e-05
Maximum distance in estimation between sequence jacobian and dynare: 1.1%

Table F.1b - Estimated parameters for Herbst-Schorfheide

In [30]:
# Load data
x, diff_irf, irf_python, irf_dist = np.load('import_export/results/hs.npy', allow_pickle=True)

# Load estimates from Dynare and Matlab
x_dynare = loadmat('import_export/dynare/herbst_schorfheide/hs_dynarecode_mode.mat')['xparam1']

# put in the right order
para_list = ['tau', 'kappa', 'psi1', 'psi2', 'sd_r', 'ar_r', 'sd_g', 'ar_g', 'sd_z', 'ar_z', 'cons_r', 'cons_pi', 'cons_y']
order_dynare = [3, 4, 5, 6, 0, 7, 1, 8, 2, 9, 10, 11, 12]
order_python = [6, 7, 8, 9, 0, 5, 1, 2, 3, 4, 10, 11, 12]
estimate_dynare = x_dynare[order_dynare].T[0]
estimate_python = x[order_python]

# Format tables
t1, t2, t3 = 'Params\t', 'SSJ\t', 'Dynare\t'
for n, var in enumerate(para_list):
    t1 += para_list[n] + '\t'
    t2 += ("{:.4f}").format(estimate_python[n]) + '\t'
    t3 += ("{:.4f}").format(estimate_dynare[n]) + '\t'

# Print results
print('\nMode of the posterior distribution\n')
print(t1)
print(t2)
print(t3)

print(
    f'\nMaximum distance in irfs between sequence jacobian and dynare across all shocks and series: {np.max(diff_irf):.3}')
print(
    f'Maximum distance in estimation between sequence jacobian and dynare: {100 * np.max(np.abs((estimate_dynare - estimate_python) / estimate_python)):.2}%')
Mode of the posterior distribution

Params	tau	kappa	psi1	psi2	sd_r	ar_r	sd_g	ar_g	sd_z	ar_z	cons_r	cons_pi	cons_y	
SSJ	2.3162	1.0000	1.9684	0.4754	0.1905	0.7978	0.6531	0.9908	0.1855	0.9252	0.3043	3.4468	0.6214	
Dynare	2.3164	1.0000	1.9684	0.4753	0.1905	0.7978	0.6530	0.9903	0.1855	0.9252	0.3051	3.4472	0.6213	

Maximum distance in irfs between sequence jacobian and dynare across all shocks and series: 9.5e-14
Maximum distance in estimation between sequence jacobian and dynare: 0.27%

Table F.2 - Estimated parameters for Krusell-Smith

In [31]:
# Load data
x, para_avg, para_5, para_95, Nsim, acceptancerate, rmean, kernel_support, kernel_density, para_sim = np.load(
    'import_export/results/ks_est.npy', allow_pickle=True)

# Format tables
t1, t2, t3, t4, t5, t6 = '', '', '', '', '', ''
para_list = ['sd', 'ar1', 'ma1']
for n, var in enumerate(para_list):
    t1 += para_list[n] + '\t'
    t2 += ("{:.3f}").format(x[n]) + '\t'
    t3 += para_list[n] + '\t'
    t4 += ("{:.3f}").format(para_avg[n]) + '\t'
    t5 += ("{:.3f}").format(para_5[n]) + '\t'
    t6 += ("{:.3f}").format(para_95[n]) + '\t'

# Print results
print('\nEstimation results:')
print(t1)
print(t2)
print(f'\nNumber of simulations: {Nsim}')
print(f'Acceptance rate: {acceptancerate:.3}')
print('\nRecursive averages:')
print(t3)
print(t4)
print(t5)
print(t6)
Estimation results:
sd	ar1	ma1	
0.179	0.908	0.032	

Number of simulations: 200000
Acceptance rate: 0.253

Recursive averages:
sd	ar1	ma1	
0.182	0.908	0.047	
0.165	0.862	0.015	
0.200	0.950	0.096	

Table F.3a - Estimated parameters for one-asset HANK (shocks)

In [32]:
# Load data
x, para_avg, para_5, para_95, Nsim, acceptancerate, rmean, kernel_support, kernel_density, para_sim = np.load(
    'import_export/results/hank_1a_est.npy', allow_pickle=True)

# Format tables
t1, t2, t3, t4, t5, t6 = '', '', '', '', '', ''
para_list = ['sd_m', 'ar_m', 'sd_g', 'ar_g', 'sd_p', 'ar_p']
for n, var in enumerate(para_list):
    t1 += para_list[n] + '\t'
    t2 += ("{:.3f}").format(x[n]) + '\t'
    t3 += para_list[n] + '\t'
    t4 += ("{:.3f}").format(para_avg[n]) + '\t'
    t5 += ("{:.3f}").format(para_5[n]) + '\t'
    t6 += ("{:.3f}").format(para_95[n]) + '\t'

# Print results
print('\nEstimation results:')
print(t1)
print(t2)
print(f'\nNumber of simulations: {Nsim}')
print(f'Acceptance rate: {acceptancerate:.3}')
print('\nRecursive averages:')
print(t3)
print(t4)
print(t5)
print(t6)
Estimation results:
sd_m	ar_m	sd_g	ar_g	sd_p	ar_p	
0.429	0.529	0.580	0.872	0.099	0.881	

Number of simulations: 200000
Acceptance rate: 0.248

Recursive averages:
sd_m	ar_m	sd_g	ar_g	sd_p	ar_p	
0.434	0.525	0.584	0.870	0.101	0.878	
0.393	0.478	0.514	0.838	0.091	0.849	
0.478	0.569	0.662	0.900	0.112	0.905	

Table F.3b - Estimated parameters for one-asset HANK (shocks + parameters)

In [33]:
# Load data
x, para_avg, para_5, para_95, Nsim, acceptancerate, rmean, kernel_support, kernel_density, para_sim = np.load(
    'import_export/results/hank_1afull_est.npy', allow_pickle=True)

# Format tables
t1, t2, t3, t4, t5, t6 = '', '', '', '', '', ''
para_list = ['sd_m', 'ar_m', 'sd_g', 'ar_g', 'sd_p', 'ar_p', 'phi', 'phi_y', 'kappa']
for n, var in enumerate(para_list):
    t1 += para_list[n] + '\t'
    t2 += ("{:.3f}").format(x[n]) + '\t'
    t3 += para_list[n] + '\t'
    t4 += ("{:.3f}").format(para_avg[n]) + '\t'
    t5 += ("{:.3f}").format(para_5[n]) + '\t'
    t6 += ("{:.3f}").format(para_95[n]) + '\t'

# Print results
print('\nEstimation results:')
print(t1)
print(t2)
print(f'\nNumber of simulations: {Nsim}')
print(f'Acceptance rate: {acceptancerate:.3}')
print('\nRecursive averages:')
print(t3)
print(t4)
print(t5)
print(t6)
Estimation results:
sd_m	ar_m	sd_g	ar_g	sd_p	ar_p	phi	phi_y	kappa	
0.419	0.463	0.569	0.833	0.092	0.913	1.320	0.126	0.140	

Number of simulations: 200000
Acceptance rate: 0.253

Recursive averages:
sd_m	ar_m	sd_g	ar_g	sd_p	ar_p	phi	phi_y	kappa	
0.430	0.460	0.581	0.821	0.096	0.909	1.352	0.143	0.144	
0.385	0.393	0.508	0.771	0.068	0.875	1.231	0.061	0.105	
0.481	0.524	0.662	0.868	0.128	0.942	1.492	0.250	0.186	

Table F.4a - Estimated parameters for two-asset HANK (shocks)

In [34]:
# Load data
x, para_avg, para_5, para_95, Nsim, acceptancerate, rmean, kernel_support, kernel_density, para_sim = np.load(
    'import_export/results/hank_2a_est.npy', allow_pickle=True)

# Format tables
t1, t2, t3, t4, t5, t6 = '', '', '', '', '', ''
para_list = ['sd_z', 'ar_z', 'sd_g', 'ar_g', 'sd_beta', 'ar_beta', 'sd_inv', 'ar_inv', 'sd_m', 'ar_m', 'sd_p', 'ar_p',
             'sd_w', 'ar_w']
for n, var in enumerate(para_list):
    t1 += para_list[n] + '\t'
    t2 += ("{:.3f}").format(x[n]) + '\t'
    t3 += para_list[n] + '\t'
    t4 += ("{:.3f}").format(para_avg[n]) + '\t'
    t5 += ("{:.3f}").format(para_5[n]) + '\t'
    t6 += ("{:.3f}").format(para_95[n]) + '\t'

# Print results
print('\nEstimation results:')
print(t1)
print(t2)
print(f'\nNumber of simulations: {Nsim}')
print(f'Acceptance rate: {acceptancerate:.3}')
print('\nRecursive averages:')
print(t3)
print(t4)
print(t5)
print(t6)
Estimation results:
sd_z	ar_z	sd_g	ar_g	sd_beta	ar_beta	sd_inv	ar_inv	sd_m	ar_m	sd_p	ar_p	sd_w	ar_w	
0.072	0.944	0.437	0.503	0.093	0.941	0.174	0.779	0.142	0.830	0.091	0.904	0.373	0.875	

Number of simulations: 200000
Acceptance rate: 0.255

Recursive averages:
sd_z	ar_z	sd_g	ar_g	sd_beta	ar_beta	sd_inv	ar_inv	sd_m	ar_m	sd_p	ar_p	sd_w	ar_w	
0.073	0.941	0.441	0.499	0.093	0.938	0.179	0.775	0.146	0.827	0.092	0.903	0.377	0.872	
0.066	0.911	0.400	0.448	0.085	0.906	0.147	0.731	0.123	0.798	0.083	0.881	0.343	0.844	
0.080	0.968	0.487	0.548	0.103	0.967	0.214	0.816	0.172	0.856	0.101	0.923	0.414	0.899	

Table F.4b - Estimated parameters for two-asset HANK (shocks + parameters)

In [35]:
# Load data
x, para_avg, para_5, para_95, Nsim, acceptancerate, rmean, kernel_support, kernel_density, para_sim = np.load(
    'import_export/results/hank_2afull_est.npy', allow_pickle=True)

# Format tables
t1, t2, t3, t4, t5, t6, t1bis, t2bis, t3bis, t4bis, t5bis, t6bis = '', '', '', '', '', '', '', '', '', '', '', ''
para_list = ['sd_z', 'ar_z', 'sd_g', 'ar_g', 'sd_beta', 'ar_beta', 'sd_inv', 'ar_inv', 'sd_m', 'ar_m', 'sd_p', 'ar_p',
             'sd_w', 'ar_w', '\nphi', 'phi_y', 'kappa_p', 'kappa_w', 'eps_I']
for n, var in enumerate(para_list[:14]):
    t1 += para_list[n] + '\t'
    t2 += ("{:.3f}").format(x[n]) + '\t'
    t3 += para_list[n] + '\t'
    t4 += ("{:.3f}").format(para_avg[n]) + '\t'
    t5 += ("{:.3f}").format(para_5[n]) + '\t'
    t6 += ("{:.3f}").format(para_95[n]) + '\t'

for n, var in enumerate(para_list[14:]):
    t1bis += para_list[14:][n] + '\t'
    t2bis += ("{:.3f}").format(x[14:][n]) + '\t'
    t3bis += para_list[14:][n] + '\t'
    t4bis += ("{:.3f}").format(para_avg[14:][n]) + '\t'
    t5bis += ("{:.3f}").format(para_5[14:][n]) + '\t'
    t6bis += ("{:.3f}").format(para_95[14:][n]) + '\t'

# Print results
print('\nEstimation results:')
print(t1)
print(t2)
print(t1bis)
print(t2bis)
print(f'\nNumber of simulations: {Nsim}')
print(f'Acceptance rate: {acceptancerate:.3}')
print('\nRecursive averages:')
print(t3)
print(t4)
print(t5)
print(t6)
print(t3bis)
print(t4bis)
print(t5bis)
print(t6bis)
Estimation results:
sd_z	ar_z	sd_g	ar_g	sd_beta	ar_beta	sd_inv	ar_inv	sd_m	ar_m	sd_p	ar_p	sd_w	ar_w	
0.071	0.970	0.467	0.292	0.093	0.971	0.089	0.867	0.655	0.844	0.059	0.888	0.142	0.648	

phi	phi_y	kappa_p	kappa_w	eps_I	
1.203	0.086	0.035	0.009	0.267	

Number of simulations: 200000
Acceptance rate: 0.241

Recursive averages:
sd_z	ar_z	sd_g	ar_g	sd_beta	ar_beta	sd_inv	ar_inv	sd_m	ar_m	sd_p	ar_p	sd_w	ar_w	
0.072	0.951	0.643	0.952	0.096	0.943	0.413	0.656	0.663	0.737	0.049	0.891	0.155	0.586	
0.066	0.921	0.543	0.914	0.087	0.913	0.360	0.594	0.487	0.671	0.032	0.851	0.116	0.432	
0.080	0.979	0.783	0.986	0.104	0.969	0.473	0.714	0.874	0.799	0.070	0.923	0.202	0.734	

phi	phi_y	kappa_p	kappa_w	eps_I	
1.297	2.932	0.030	0.011	0.502	
1.021	2.564	0.010	0.007	0.349	
1.764	3.519	0.064	0.017	0.670