8  Fitting GLMMs in JAGS (Part 2)

8.1 Multilevel Logistic Regression

This section covers multilevel logistic regression, applied to the analysis of political polls in the United States. The goal is to model the probability that an individual supports the Republican candidate as a function of demographic and geographic characteristics, using a hierarchical structure that captures variability across states and across groups of individuals.

8.1.1 Data preparation

Poll data and previous election results are used to build the model. The data-preparation code, adapted to the tidyverse, is shown below.

library(tidyverse)
── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──
✔ dplyr     1.1.4     ✔ readr     2.1.5
✔ forcats   1.0.0     ✔ stringr   1.5.2
✔ ggplot2   4.0.0     ✔ tibble    3.3.0
✔ lubridate 1.9.4     ✔ tidyr     1.3.1
✔ purrr     1.1.0     
── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
✖ dplyr::filter() masks stats::filter()
✖ dplyr::lag()    masks stats::lag()
ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors
library(arm)
Loading required package: MASS

Attaching package: 'MASS'

The following object is masked from 'package:dplyr':

    select

Loading required package: Matrix

Attaching package: 'Matrix'

The following objects are masked from 'package:tidyr':

    expand, pack, unpack

Loading required package: lme4

arm (Version 1.14-4, built: 2024-4-1)

Working directory is /home/lbarboza/Dropbox/Cursos/Actuales/SP1653_2025/NotasClase/ModelosMixtos
library(haven)

data(state)
state.abbr <- c(state.abb[1:8], "DC", state.abb[9:50])  # 50 state abbreviations plus DC: 51 units
dc <- 9
not.dc <- c(1:8, 10:51)
region <- c(3,4,4,3,4,4,1,1,5,3,3,4,4,2,2,2,2,3,3,1,1,1,2,2,3,2,4,2,4,1,1,4,1,3,2,2,3,4,1,1,3,2,3,3,4,1,3,4,1,2,4)  # region (1-5) of each unit; region 5 is DC alone

polls <- read_dta("../ARM_Data/election88/polls.dta")

polls.subset <- polls %>% 
  filter(survey == 8) %>% 
  rename(y = bush)

presvote <- read_dta('../ARM_Data/election88/presvote.dta')
v.prev <- presvote$g76_84pr
candidate.effects <- read.table("../ARM_Data/election88/candidate_effects.dat", header = TRUE)

v.prev[not.dc] <- v.prev[not.dc] + 
  (candidate.effects$X76 + candidate.effects$X80 + candidate.effects$X84) / 3

n.edu <- max(polls.subset$edu)

polls.subset <- polls.subset %>% 
  mutate(
    age.edu = n.edu * (age - 1) + edu,   # combined age-education index (1 to 16)
    v.prev.full = v.prev[state],
    region.full = region[state]
  )
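To see what the combined index does: with n.edu = 4 education categories, age.edu lays the 4 × 4 age-by-education grid out as a single index running from 1 to 16. A quick self-contained check:

```r
# Combined index: cell (age, edu) of a 4 x 4 grid mapped to 1..16
n.edu <- 4
age <- 2
edu <- 3
age.edu <- n.edu * (age - 1) + edu
age.edu  # 7
```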

8.1.2 Model specification

Each observation \(y_i\) records the respondent's vote:

  • \(y_i = 1\): supports the Republican candidate.
  • \(y_i = 0\): supports the Democratic candidate.

The probability of support is modeled through the logistic link:

\[ \Pr(y_i = 1) = \text{logit}^{-1}(X_i \beta) \]

where \(X_i\) contains the demographic and geographic predictors. The observations are assumed independent conditional on the set of random effects.
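Numerically, the inverse logit maps any linear-predictor value to a probability in (0, 1); base R provides it as plogis(), and the arm package loaded above provides invlogit(). A minimal illustration:

```r
# Inverse logit: eta in (-Inf, Inf)  ->  probability in (0, 1)
invlogit <- function(eta) 1 / (1 + exp(-eta))

eta <- c(-2, 0, 2)
round(invlogit(eta), 3)  # 0.119 0.500 0.881
```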

8.1.3 A simple multilevel logistic model

This model includes two individual-level predictors (female and black) plus an intercept that varies by state:

\[ \Pr(y_i = 1) = \text{logit}^{-1}(\alpha_{j[i]} + \beta_{\text{female}} \cdot \text{female}_i + \beta_{\text{black}} \cdot \text{black}_i), \quad i = 1, \dots, n \]

where the intercept varies by state:

\[ \alpha_j \sim N(\mu_\alpha, \sigma_{\text{state}}^2), \quad j = 1, \dots, 51 \]

This model captures individual-level effects and geographic differences across states simultaneously.
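In JAGS, this simple model could be specified roughly as follows (a sketch for illustration; the priors are an assumption and the model file used in the course may differ):

```
model {
  for (i in 1:n) {
    y[i] ~ dbern(p[i])
    logit(p[i]) <- a[state[i]] + b.female * female[i] + b.black * black[i]
  }
  for (j in 1:n.state) {
    a[j] ~ dnorm(mu.a, tau.state)    # state-varying intercept
  }
  mu.a ~ dnorm(0, 0.0001)
  b.female ~ dnorm(0, 0.0001)
  b.black ~ dnorm(0, 0.0001)
  tau.state <- pow(sigma.state, -2)  # JAGS uses precision = 1 / variance
  sigma.state ~ dunif(0, 100)
}
```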

8.1.4 Full model with non-nested factors

The model is extended to include demographic interactions and state-level predictors. The variables considered are:

  • Demographic:

    • Sex × ethnicity
    • Age × education (four categories each)
    • The 16 age-by-education interaction combinations.
  • Geographic:

    • Region of the country (5 categories)
    • Average Republican vote share in the three previous elections, adjusted by region.

8.1.4.1 Model structure

\[ \Pr(y_i = 1) = \text{logit}^{-1}(\beta_0 + \beta_{\text{female}} \cdot \text{female}_i + \beta_{\text{black}} \cdot \text{black}_i + \beta_{\text{female.black}} \cdot \text{female}_i \cdot \text{black}_i + \alpha^{\text{age}}_{k[i]} + \alpha^{\text{edu}}_{l[i]} + \alpha^{\text{age.edu}}_{k[i],l[i]} + \alpha^{\text{state}}_{j[i]}) \]

with:

\[ \alpha^{\text{state}}_j \sim N(\alpha^{\text{region}}_{m[j]} + \beta_{\text{v.prev}} \cdot \text{v.prev}_j, \sigma_{\text{state}}^2) \]

8.1.4.2 Hierarchical distributions of the coefficients

\[ \alpha^{\text{age}}_k \sim N(0, \sigma_{\text{age}}^2), \quad k = 1, \dots, 4 \]

\[ \alpha^{\text{edu}}_l \sim N(0, \sigma_{\text{edu}}^2), \quad l = 1, \dots, 4 \]

\[ \alpha^{\text{age.edu}}_{k,l} \sim N(0, \sigma_{\text{age.edu}}^2) \]

\[ \alpha^{\text{region}}_m \sim N(0, \sigma_{\text{region}}^2), \quad m = 1, \dots, 5 \]
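Combining the likelihood in 8.1.4.1 with these hierarchical distributions, the JAGS model can be sketched as follows (an illustration consistent with the equations above; the actual file codigoJAGS/logistic.jags may differ in details such as the priors and the p.bound boundary trick discussed in the comments below):

```
model {
  for (i in 1:n) {
    y[i] ~ dbin(p.bound[i], 1)
    p.bound[i] <- max(0, min(1, p[i]))  # keeps p inside [0, 1]
    logit(p[i]) <- b.0 + b.female * female[i] + b.black * black[i] +
      b.female.black * female[i] * black[i] +
      b.age[age[i]] + b.edu[edu[i]] +
      b.age.edu[age[i], edu[i]] + b.state[state[i]]
  }
  for (k in 1:n.age) { b.age[k] ~ dnorm(0, tau.age) }
  for (l in 1:n.edu) { b.edu[l] ~ dnorm(0, tau.edu) }
  for (k in 1:n.age) {
    for (l in 1:n.edu) { b.age.edu[k, l] ~ dnorm(0, tau.age.edu) }
  }
  for (j in 1:n.state) {
    b.state[j] ~ dnorm(b.state.hat[j], tau.state)
    b.state.hat[j] <- b.region[region[j]] + b.v.prev * v.prev[j]
  }
  for (m in 1:n.region) { b.region[m] ~ dnorm(0, tau.region) }
  b.0 ~ dnorm(0, 0.0001)
  b.female ~ dnorm(0, 0.0001)
  b.black ~ dnorm(0, 0.0001)
  b.female.black ~ dnorm(0, 0.0001)
  b.v.prev ~ dnorm(0, 0.0001)
  tau.age <- pow(sigma.age, -2)
  sigma.age ~ dunif(0, 100)
  tau.edu <- pow(sigma.edu, -2)
  sigma.edu ~ dunif(0, 100)
  tau.age.edu <- pow(sigma.age.edu, -2)
  sigma.age.edu ~ dunif(0, 100)
  tau.state <- pow(sigma.state, -2)
  sigma.state ~ dunif(0, 100)
  tau.region <- pow(sigma.region, -2)
  sigma.region ~ dunif(0, 100)
}
```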

8.1.4.3 Fitting the model in JAGS

library(R2jags)
Loading required package: rjags
Loading required package: coda

Attaching package: 'coda'
The following object is masked from 'package:arm':

    traceplot
Linked to JAGS 4.3.0
Loaded modules: basemod,bugs

Attaching package: 'R2jags'
The following object is masked from 'package:coda':

    traceplot
The following object is masked from 'package:arm':

    traceplot
attach(polls.subset)
The following object is masked from package:MASS:

    survey
state <- as.numeric(factor(state))  # Ensure consecutive state indices

data_jags <- list(
  y = y,
  female = female,
  black = black,
  age = age,
  edu = edu,
  state = state,
  region = region,
  v.prev = v.prev,
  n = length(y),
  n.age = length(unique(age)),
  n.edu = length(unique(edu)),
  n.state = length(unique(state)),
  n.region = length(unique(region))
)

inits <- function() {
  list(
    b.0 = rnorm(1),
    b.female = rnorm(1),
    b.black = rnorm(1),
    b.female.black = rnorm(1),
    b.age = rnorm(data_jags$n.age),
    b.edu = rnorm(data_jags$n.edu),
    b.age.edu = matrix(rnorm(data_jags$n.age * data_jags$n.edu), nrow = data_jags$n.age),
    b.state = rnorm(data_jags$n.state),
    b.region = rnorm(data_jags$n.region),
    b.v.prev = rnorm(1),
    sigma.age = runif(1, 0, 100),
    sigma.edu = runif(1, 0, 100),
    sigma.age.edu = runif(1, 0, 100),
    sigma.state = runif(1, 0, 100),
    sigma.region = runif(1, 0, 100)
  )
}

params <- c("b.0", "b.female", "b.black", "b.female.black", "b.age", "b.edu", 
            "b.age.edu", "b.state", "b.region", "b.v.prev", 
            "sigma.age", "sigma.edu", "sigma.age.edu", "sigma.state", "sigma.region")

jags_fit <- jags(
  data = data_jags,
  inits = inits,
  parameters.to.save = params,
  model.file = "codigoJAGS/logistic.jags",
  n.chains = 3,
  n.iter = 5000,
  n.burnin = 1000,
  n.thin = 10
)
module glm loaded
Compiling model graph
   Resolving undeclared variables
   Allocating nodes
Graph information:
   Observed stochastic nodes: 2015
   Unobserved stochastic nodes: 266
   Total graph size: 17210

Initializing model
print(jags_fit)
Inference for Bugs model at "codigoJAGS/logistic.jags", fit using jags,
 3 chains, each with 5000 iterations (first 1000 discarded), n.thin = 10
 n.sims = 1200 iterations saved. Running time = 186.68 secs
                mu.vect sd.vect     2.5%      25%      50%      75%    97.5%
b.0              -0.231   0.813   -2.283   -0.617   -0.097    0.311    1.125
b.age[1]          0.048   0.199   -0.446   -0.010    0.042    0.128    0.374
b.age[2]         -0.073   0.213   -0.644   -0.118   -0.038    0.008    0.260
b.age[3]          0.024   0.200   -0.450   -0.023    0.022    0.102    0.330
b.age[4]         -0.062   0.213   -0.605   -0.111   -0.023    0.019    0.232
b.age.edu[1,1]   -0.034   0.151   -0.397   -0.100   -0.010    0.033    0.253
b.age.edu[2,1]    0.077   0.168   -0.200   -0.011    0.031    0.154    0.490
b.age.edu[3,1]    0.020   0.145   -0.298   -0.049    0.009    0.081    0.342
b.age.edu[4,1]   -0.170   0.209   -0.684   -0.290   -0.103   -0.008    0.067
b.age.edu[1,2]    0.064   0.136   -0.156   -0.009    0.033    0.131    0.395
b.age.edu[2,2]   -0.106   0.147   -0.441   -0.175   -0.068   -0.004    0.104
b.age.edu[3,2]    0.012   0.124   -0.247   -0.047    0.003    0.074    0.284
b.age.edu[4,2]   -0.006   0.135   -0.290   -0.064   -0.001    0.054    0.257
b.age.edu[1,3]    0.064   0.146   -0.161   -0.015    0.025    0.133    0.421
b.age.edu[2,3]   -0.019   0.132   -0.308   -0.073   -0.005    0.040    0.239
b.age.edu[3,3]    0.006   0.137   -0.265   -0.057    0.000    0.065    0.322
b.age.edu[4,3]    0.069   0.156   -0.191   -0.017    0.029    0.140    0.437
b.age.edu[1,4]   -0.003   0.133   -0.291   -0.064   -0.001    0.054    0.297
b.age.edu[2,4]   -0.039   0.125   -0.335   -0.099   -0.016    0.021    0.199
b.age.edu[3,4]    0.000   0.128   -0.274   -0.060    0.000    0.066    0.264
b.age.edu[4,4]    0.054   0.144   -0.188   -0.017    0.023    0.116    0.408
b.black          -1.646   0.330   -2.294   -1.862   -1.647   -1.423   -1.049
b.edu[1]         -0.173   0.255   -0.808   -0.262   -0.126   -0.019    0.168
b.edu[2]         -0.050   0.227   -0.547   -0.124   -0.027    0.040    0.328
b.edu[3]          0.142   0.237   -0.338    0.024    0.131    0.256    0.599
b.edu[4]          0.006   0.225   -0.536   -0.067    0.009    0.103    0.406
b.female         -0.089   0.100   -0.282   -0.155   -0.086   -0.021    0.101
b.female.black   -0.213   0.430   -1.028   -0.507   -0.221    0.081    0.611
b.region[1]      -0.206   0.327   -0.848   -0.366   -0.169   -0.023    0.365
b.region[2]       0.022   0.310   -0.561   -0.102    0.012    0.133    0.631
b.region[3]       0.072   0.310   -0.490   -0.049    0.043    0.185    0.749
b.region[4]      -0.008   0.317   -0.651   -0.130   -0.002    0.103    0.612
b.region[5]       0.304   0.597   -0.393   -0.021    0.100    0.440    2.019
b.state[1]        1.344   0.916   -0.094    0.722    1.197    1.824    3.581
b.state[2]        0.978   0.930   -0.503    0.341    0.835    1.456    3.294
b.state[3]        0.631   0.958   -1.067    0.010    0.478    1.149    3.036
b.state[4]        0.577   0.861   -0.806   -0.006    0.427    0.959    2.798
b.state[5]        0.616   0.891   -0.837    0.020    0.481    1.062    2.900
b.state[6]        0.801   0.907   -0.692    0.196    0.673    1.248    3.017
b.state[7]        0.312   0.937   -1.309   -0.297    0.192    0.792    2.684
b.state[8]        0.428   0.919   -1.218   -0.150    0.343    0.966    2.578
b.state[9]        0.789   0.834   -0.486    0.229    0.648    1.171    2.938
b.state[10]       0.990   0.916   -0.520    0.382    0.835    1.458    3.295
b.state[11]       0.541   0.926   -1.041   -0.067    0.434    1.025    2.787
b.state[12]       0.399   0.853   -1.000   -0.158    0.268    0.788    2.516
b.state[13]       0.957   0.939   -0.567    0.357    0.792    1.441    3.267
b.state[14]       0.249   0.905   -1.243   -0.325    0.136    0.703    2.508
b.state[15]       1.106   0.914   -0.358    0.476    0.965    1.574    3.387
b.state[16]       0.868   0.875   -0.565    0.294    0.740    1.344    3.006
b.state[17]       0.907   0.919   -0.562    0.295    0.746    1.383    3.222
b.state[18]       0.538   0.896   -1.001   -0.046    0.422    0.999    2.726
b.state[19]       0.189   0.942   -1.298   -0.443    0.050    0.695    2.468
b.state[20]       0.157   0.898   -1.374   -0.417    0.021    0.630    2.378
b.state[21]       0.454   0.845   -0.923   -0.112    0.307    0.894    2.632
b.state[22]       0.133   0.855   -1.259   -0.444    0.027    0.571    2.331
b.state[23]       1.146   0.897   -0.269    0.523    0.994    1.588    3.435
b.state[24]       0.612   0.868   -0.899    0.054    0.473    1.064    2.840
b.state[25]       0.676   0.958   -0.947    0.024    0.542    1.215    2.884
b.state[26]       0.714   0.880   -0.760    0.142    0.608    1.136    2.925
b.state[27]       0.823   0.949   -0.787    0.174    0.712    1.401    3.038
b.state[28]       0.892   1.085   -0.934    0.166    0.698    1.493    3.474
b.state[29]       0.762   0.888   -0.674    0.180    0.620    1.197    2.933
b.state[30]       0.437   0.951   -1.092   -0.214    0.299    0.922    2.791
b.state[31]       0.214   0.867   -1.200   -0.331    0.073    0.616    2.405
b.state[32]       1.095   0.876   -0.303    0.487    0.974    1.550    3.198
b.state[33]       0.478   0.890   -1.081   -0.121    0.383    0.968    2.664
b.state[34]       1.011   0.860   -0.405    0.446    0.854    1.452    3.208
b.state[35]       0.879   0.958   -0.667    0.253    0.738    1.366    3.274
b.state[36]       0.310   0.920   -1.208   -0.281    0.167    0.787    2.566
b.state[37]       0.553   0.889   -0.886   -0.019    0.395    0.988    2.807
b.state[38]       0.450   0.905   -1.084   -0.161    0.344    0.914    2.717
b.state[39]       0.951   0.870   -0.503    0.364    0.844    1.425    3.027
b.state[40]       0.421   0.845   -1.012   -0.120    0.347    0.876    2.411
b.state[41]       1.396   0.903   -0.035    0.782    1.259    1.851    3.612
b.state[42]       0.855   0.873   -0.558    0.267    0.704    1.288    3.073
b.state[43]       1.328   0.928   -0.129    0.685    1.194    1.850    3.603
b.state[44]       0.953   0.998   -0.685    0.288    0.789    1.463    3.378
b.state[45]       1.224   0.931   -0.287    0.592    1.080    1.716    3.548
b.state[46]       0.561   0.891   -0.948   -0.003    0.427    1.008    2.745
b.state[47]       0.796   0.939   -0.811    0.182    0.632    1.255    3.108
b.state[48]       0.404   0.877   -1.034   -0.195    0.294    0.836    2.601
b.state[49]       0.582   0.874   -0.975   -0.006    0.482    1.098    2.683
b.v.prev          1.299   1.497   -1.259    0.320    1.064    2.062    4.913
sigma.age         0.207   0.286    0.008    0.061    0.127    0.230    1.019
sigma.age.edu     0.143   0.104    0.004    0.058    0.134    0.207    0.390
sigma.edu         0.323   0.413    0.011    0.121    0.228    0.369    1.322
sigma.region      0.417   0.492    0.025    0.136    0.267    0.490    1.911
sigma.state       0.424   0.103    0.239    0.354    0.420    0.488    0.645
deviance       2609.724  11.551 2588.998 2602.000 2609.113 2616.305 2634.646
                Rhat n.eff
b.0            1.382    10
b.age[1]       1.121    86
b.age[2]       1.114   230
b.age[3]       1.137   140
b.age[4]       1.124   180
b.age.edu[1,1] 1.030   260
b.age.edu[2,1] 1.022   230
b.age.edu[3,1] 1.007  1200
b.age.edu[4,1] 1.063    45
b.age.edu[1,2] 1.001  1200
b.age.edu[2,2] 1.037    67
b.age.edu[3,2] 1.016   640
b.age.edu[4,2] 1.031   500
b.age.edu[1,3] 1.005   440
b.age.edu[2,3] 1.022  1200
b.age.edu[3,3] 1.005  1200
b.age.edu[4,3] 1.025   240
b.age.edu[1,4] 1.005  1000
b.age.edu[2,4] 1.010   510
b.age.edu[3,4] 1.007  1200
b.age.edu[4,4] 1.010   260
b.black        1.006   380
b.edu[1]       1.098    56
b.edu[2]       1.116    50
b.edu[3]       1.063    85
b.edu[4]       1.108    61
b.female       1.002  1100
b.female.black 1.005   390
b.region[1]    1.059   990
b.region[2]    1.092   210
b.region[3]    1.088   260
b.region[4]    1.088  1000
b.region[5]    1.337    13
b.state[1]     1.329    11
b.state[2]     1.372    10
b.state[3]     1.337    10
b.state[4]     1.395     9
b.state[5]     1.373    10
b.state[6]     1.348    10
b.state[7]     1.315    11
b.state[8]     1.274    12
b.state[9]     1.393    10
b.state[10]    1.359    10
b.state[11]    1.300    12
b.state[12]    1.389     9
b.state[13]    1.370    10
b.state[14]    1.365    10
b.state[15]    1.367    10
b.state[16]    1.335    11
b.state[17]    1.347    10
b.state[18]    1.307    11
b.state[19]    1.367    10
b.state[20]    1.365    10
b.state[21]    1.377    10
b.state[22]    1.357    10
b.state[23]    1.351    10
b.state[24]    1.375    10
b.state[25]    1.334    10
b.state[26]    1.347    10
b.state[27]    1.277    12
b.state[28]    1.303    11
b.state[29]    1.401     9
b.state[30]    1.346    10
b.state[31]    1.392     9
b.state[32]    1.343    10
b.state[33]    1.302    11
b.state[34]    1.383    10
b.state[35]    1.363    10
b.state[36]    1.346    10
b.state[37]    1.389    10
b.state[38]    1.339    10
b.state[39]    1.314    11
b.state[40]    1.263    13
b.state[41]    1.355    10
b.state[42]    1.400     9
b.state[43]    1.312    11
b.state[44]    1.320    11
b.state[45]    1.378    10
b.state[46]    1.372    10
b.state[47]    1.367    10
b.state[48]    1.380    10
b.state[49]    1.244    13
b.v.prev       1.356    10
sigma.age      1.012  1200
sigma.age.edu  1.117    28
sigma.edu      1.010   410
sigma.region   1.086    30
sigma.state    1.023   330
deviance       1.013   170

For each parameter, n.eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor (at convergence, Rhat=1).

DIC info (using the rule: pV = var(deviance)/2)
pV = 66.0 and DIC = 2675.7
DIC is an estimate of expected predictive error (lower deviance is better).

8.1.4.4 Model diagnostics

library(CalvinBayes)
Loading required package: ggformula
Loading required package: scales

Attaching package: 'scales'
The following object is masked from 'package:arm':

    rescale
The following object is masked from 'package:purrr':

    discard
The following object is masked from 'package:readr':

    col_factor
Loading required package: ggiraph
Loading required package: ggridges

New to ggformula?  Try the tutorials: 
    learnr::run_tutorial("introduction", package = "ggformula")
    learnr::run_tutorial("refining", package = "ggformula")
Loading required package: bayesplot
This is bayesplot version 1.14.0
- Online documentation and vignettes at mc-stan.org/bayesplot
- bayesplot theme set to bayesplot::theme_default()
   * Does _not_ affect other ggplot2 plots
   * See ?bayesplot_theme_set for details on theme setting

Attaching package: 'CalvinBayes'
The following object is masked from 'package:bayesplot':

    rhat
The following object is masked from 'package:datasets':

    HairEyeColor
diag_mcmc(as.mcmc(jags_fit), parName = "b.0")

diag_mcmc(as.mcmc(jags_fit), parName = "b.region[1]")

8.1.4.5 Interpretation and comments

In the logistic part of the model, the data distribution is a binomial with \(N = 1\), so \(y_i = 1\) with probability \(p_i\) and \(0\) otherwise. To keep \(p_i\) within \([0, 1]\) numerically, a boundary constraint (p.bound) is defined. The model uses the full multilevel formulation, with nested loops over the age-education interactions.

Although this parameterization can be slow to converge (note the Rhat values around 1.3-1.4 for the state effects in the output above), it preserves the basic structure that is useful for conceptual interpretation, and it can later be improved through reparameterization (see Section 19.4 of the reference text).

8.2 Logistic Model with Redundant Parameters

8.2.1 Fitting the model with redundant parameters
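In the redundant (centered) parameterization of Gelman and Hill (Section 19.4), each batch of coefficients receives a free mean mu that is later subtracted out, which often speeds mixing; only the adjusted, identifiable quantities (the .adj parameters monitored below) are reported. A sketch of the key JAGS fragment for one batch (an assumption for illustration; the actual file codigoJAGS/logistic_centered.jags is not shown):

```
  # Redundant mean for the age batch: the b.age[k] are not identified
  # individually, but the centered versions b.age.adj[k] are.
  for (k in 1:n.age) {
    b.age[k] ~ dnorm(mu.age, tau.age)
    b.age.adj[k] <- b.age[k] - mean(b.age[])
  }
  mu.age ~ dnorm(0, 0.0001)
  # The adjusted intercept absorbs the redundant batch means:
  mu.adj <- b.0 + mean(b.age[]) + mean(b.edu[]) + mean(b.age.edu[,]) + mean(b.region[])
```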

#rm(y)
attach(polls.subset)
The following object is masked _by_ .GlobalEnv:

    state
The following objects are masked from polls.subset (pos = 9):

    age, age.edu, black, edu, female, org, region.full, state, survey,
    v.prev.full, weight, y, year
The following object is masked from package:MASS:

    survey
# Prepare data for JAGS
data_jags <- list(
  y = y,
  female = female,
  black = black,
  age = age,
  edu = edu,
  state = state,
  region = region,
  v.prev = v.prev,
  n = length(y),
  n.age = length(unique(age)),
  n.edu = length(unique(edu)),
  n.state = length(unique(state)),
  n.region = length(unique(region))
)

# Initial values for the parameters
inits <- function() {
  list(
    b.0 = rnorm(1),
    b.female = rnorm(1),
    b.black = rnorm(1),
    b.female.black = rnorm(1),
    b.age = rnorm(data_jags$n.age),
    b.edu = rnorm(data_jags$n.edu),
    b.age.edu = matrix(rnorm(data_jags$n.age * data_jags$n.edu), nrow = data_jags$n.age),
    b.state = rnorm(data_jags$n.state),
    b.region = rnorm(data_jags$n.region),
    b.v.prev = rnorm(1),
    sigma.age = runif(1, 0, 100),
    sigma.edu = runif(1, 0, 100),
    sigma.age.edu = runif(1, 0, 100),
    sigma.state = runif(1, 0, 100),
    sigma.region = runif(1, 0, 100),
    mu.age = rnorm(1),
    mu.edu = rnorm(1),
    mu.age.edu = rnorm(1),
    mu.region = rnorm(1)
  )
}

# Parameters to monitor
params <- c("mu.adj", "b.0", "b.female", "b.black", "b.female.black", 
            "b.age.adj", "b.edu.adj", "b.age.edu.adj", "b.state", "b.region.adj", 
            "b.v.prev", "sigma.age", "sigma.edu", "sigma.age.edu", "sigma.state", 
            "sigma.region", "mu.age", "mu.edu", "mu.age.edu", "mu.region")

# Fit the model with JAGS
jags_fit <- jags(
  data = data_jags, 
  inits = inits, 
  parameters.to.save = params, 
  model.file = "codigoJAGS/logistic_centered.jags",  # redundant (centered) parameterization
  n.chains = 3, 
  n.iter = 5000, 
  n.burnin = 1000, 
  n.thin = 10
)
Compiling model graph
   Resolving undeclared variables
   Allocating nodes
Graph information:
   Observed stochastic nodes: 2015
   Unobserved stochastic nodes: 270
   Total graph size: 17254

Initializing model
# View summary of the model fit
print(jags_fit)
Inference for Bugs model at "codigoJAGS/logistic_centered.jags", fit using jags,
 3 chains, each with 5000 iterations (first 1000 discarded), n.thin = 10
 n.sims = 1200 iterations saved. Running time = 181.908 secs
                    mu.vect sd.vect     2.5%      25%      50%      75%
b.0                   0.488   3.520   -4.703   -2.640   -0.106    4.159
b.age.adj[1]          0.068   0.091   -0.086    0.000    0.055    0.128
b.age.adj[2]         -0.060   0.089   -0.244   -0.115   -0.051   -0.001
b.age.adj[3]          0.039   0.082   -0.110   -0.010    0.027    0.086
b.age.adj[4]         -0.047   0.097   -0.264   -0.100   -0.030    0.010
b.age.edu.adj[1,1]   -0.042   0.155   -0.427   -0.110   -0.020    0.033
b.age.edu.adj[2,1]    0.091   0.167   -0.177   -0.004    0.055    0.160
b.age.edu.adj[3,1]    0.022   0.137   -0.243   -0.051    0.010    0.086
b.age.edu.adj[4,1]   -0.168   0.192   -0.633   -0.269   -0.116   -0.019
b.age.edu.adj[1,2]    0.064   0.130   -0.163   -0.011    0.041    0.126
b.age.edu.adj[2,2]   -0.108   0.137   -0.413   -0.195   -0.078   -0.007
b.age.edu.adj[3,2]    0.016   0.119   -0.248   -0.040    0.011    0.078
b.age.edu.adj[4,2]   -0.001   0.136   -0.282   -0.062   -0.003    0.063
b.age.edu.adj[1,3]    0.061   0.138   -0.169   -0.017    0.035    0.129
b.age.edu.adj[2,3]   -0.017   0.125   -0.291   -0.080   -0.011    0.043
b.age.edu.adj[3,3]    0.003   0.130   -0.270   -0.062    0.000    0.068
b.age.edu.adj[4,3]    0.075   0.153   -0.165   -0.015    0.040    0.151
b.age.edu.adj[1,4]   -0.001   0.129   -0.275   -0.066   -0.001    0.061
b.age.edu.adj[2,4]   -0.045   0.127   -0.335   -0.106   -0.026    0.022
b.age.edu.adj[3,4]   -0.003   0.120   -0.267   -0.061   -0.001    0.056
b.age.edu.adj[4,4]    0.054   0.139   -0.196   -0.021    0.029    0.124
b.black              -1.664   0.335   -2.348   -1.876   -1.654   -1.444
b.edu.adj[1]         -0.160   0.126   -0.415   -0.248   -0.157   -0.063
b.edu.adj[2]         -0.036   0.088   -0.212   -0.091   -0.029    0.016
b.edu.adj[3]          0.166   0.115   -0.023    0.077    0.164    0.243
b.edu.adj[4]          0.030   0.093   -0.158   -0.027    0.023    0.087
b.female             -0.092   0.096   -0.281   -0.155   -0.088   -0.025
b.female.black       -0.204   0.429   -1.040   -0.495   -0.207    0.094
b.region.adj[1]      -0.223   0.209   -0.705   -0.340   -0.189   -0.050
b.region.adj[2]      -0.012   0.168   -0.415   -0.081    0.001    0.081
b.region.adj[3]       0.038   0.168   -0.363   -0.034    0.034    0.131
b.region.adj[4]      -0.043   0.175   -0.509   -0.105   -0.011    0.052
b.region.adj[5]       0.241   0.503   -0.335   -0.024    0.074    0.319
b.state[1]            0.773   1.507   -2.277   -0.187    1.034    1.795
b.state[2]            0.411   1.507   -2.552   -0.603    0.625    1.449
b.state[3]            0.056   1.516   -2.971   -0.910    0.266    1.106
b.state[4]            0.004   1.495   -3.049   -0.930    0.261    1.023
b.state[5]            0.057   1.525   -3.031   -0.892    0.312    1.102
b.state[6]            0.229   1.512   -2.875   -0.722    0.489    1.228
b.state[7]           -0.235   1.558   -3.501   -1.179    0.017    0.855
b.state[8]           -0.156   1.561   -3.319   -1.127    0.103    0.949
b.state[9]            0.222   1.495   -2.862   -0.681    0.495    1.243
b.state[10]           0.440   1.490   -2.563   -0.493    0.698    1.473
b.state[11]          -0.040   1.548   -3.185   -1.009    0.181    1.070
b.state[12]          -0.165   1.498   -3.217   -1.058    0.112    0.843
b.state[13]           0.389   1.500   -2.565   -0.635    0.669    1.409
b.state[14]          -0.329   1.542   -3.451   -1.233   -0.071    0.770
b.state[15]           0.532   1.524   -2.598   -0.419    0.795    1.587
b.state[16]           0.303   1.521   -2.791   -0.635    0.575    1.341
b.state[17]           0.330   1.505   -2.714   -0.589    0.591    1.344
b.state[18]          -0.022   1.548   -3.228   -0.957    0.214    1.071
b.state[19]          -0.391   1.534   -3.493   -1.373   -0.158    0.657
b.state[20]          -0.407   1.507   -3.446   -1.377   -0.164    0.630
b.state[21]          -0.106   1.503   -3.155   -0.984    0.143    0.921
b.state[22]          -0.444   1.523   -3.602   -1.383   -0.127    0.614
b.state[23]           0.588   1.512   -2.465   -0.363    0.884    1.621
b.state[24]           0.037   1.520   -3.080   -0.917    0.318    1.079
b.state[25]           0.111   1.552   -3.050   -0.909    0.366    1.176
b.state[26]           0.148   1.517   -2.970   -0.788    0.389    1.175
b.state[27]           0.245   1.544   -2.893   -0.698    0.479    1.332
b.state[28]           0.339   1.528   -2.652   -0.645    0.508    1.378
b.state[29]           0.183   1.498   -2.847   -0.764    0.437    1.175
b.state[30]          -0.136   1.520   -3.209   -1.082    0.081    0.913
b.state[31]          -0.355   1.496   -3.357   -1.282   -0.084    0.664
b.state[32]           0.536   1.511   -2.503   -0.387    0.781    1.565
b.state[33]          -0.045   1.555   -3.252   -0.956    0.176    1.048
b.state[34]           0.448   1.501   -2.556   -0.469    0.740    1.458
b.state[35]           0.304   1.508   -2.690   -0.677    0.537    1.316
b.state[36]          -0.252   1.527   -3.357   -1.165    0.043    0.793
b.state[37]          -0.005   1.497   -3.019   -0.972    0.225    0.986
b.state[38]          -0.112   1.546   -3.238   -1.052    0.141    0.943
b.state[39]           0.400   1.510   -2.719   -0.514    0.685    1.399
b.state[40]          -0.118   1.580   -3.503   -1.038    0.097    1.032
b.state[41]           0.857   1.516   -2.204   -0.122    1.113    1.884
b.state[42]           0.277   1.499   -2.751   -0.666    0.529    1.283
b.state[43]           0.773   1.551   -2.486   -0.244    1.020    1.864
b.state[44]           0.435   1.541   -2.673   -0.523    0.626    1.448
b.state[45]           0.663   1.493   -2.273   -0.300    0.884    1.676
b.state[46]           0.007   1.505   -3.101   -0.928    0.241    1.005
b.state[47]           0.230   1.499   -2.749   -0.689    0.463    1.254
b.state[48]          -0.166   1.518   -3.241   -1.105    0.105    0.891
b.state[49]           0.034   1.579   -3.249   -0.914    0.318    1.180
b.v.prev              1.160   1.625   -1.302    0.107    0.838    1.890
mu.adj                0.448   0.091    0.271    0.387    0.446    0.512
mu.age                0.416   2.879   -3.623   -1.717   -0.501    3.360
mu.age.edu           -0.294   2.665   -3.805   -2.075   -1.496    2.762
mu.edu               -0.290   2.395   -3.885   -2.323   -0.768    1.942
mu.region            -0.452   1.849   -5.005   -1.275    0.073    0.852
sigma.age             0.238   0.507    0.006    0.068    0.140    0.263
sigma.age.edu         0.149   0.096    0.015    0.073    0.137    0.208
sigma.edu             0.349   0.468    0.020    0.134    0.226    0.394
sigma.region          0.399   0.568    0.007    0.119    0.236    0.469
sigma.state           0.434   0.103    0.240    0.364    0.429    0.499
deviance           2609.009  11.561 2587.716 2600.988 2608.294 2616.152
                      97.5%  Rhat n.eff
b.0                   6.278 5.010     3
b.age.adj[1]          0.276 1.002   880
b.age.adj[2]          0.093 1.007  1200
b.age.adj[3]          0.219 1.003  1200
b.age.adj[4]          0.109 1.003   640
b.age.edu.adj[1,1]    0.252 1.000  1200
b.age.edu.adj[2,1]    0.518 1.008   680
b.age.edu.adj[3,1]    0.337 1.003  1200
b.age.edu.adj[4,1]    0.072 1.002   950
b.age.edu.adj[1,2]    0.370 1.004   440
b.age.edu.adj[2,2]    0.101 1.003   710
b.age.edu.adj[3,2]    0.270 1.001  1200
b.age.edu.adj[4,2]    0.295 1.000  1200
b.age.edu.adj[1,3]    0.391 1.004  1200
b.age.edu.adj[2,3]    0.250 1.003   650
b.age.edu.adj[3,3]    0.276 1.004  1000
b.age.edu.adj[4,3]    0.456 1.003   690
b.age.edu.adj[1,4]    0.273 1.003  1000
b.age.edu.adj[2,4]    0.185 1.000  1200
b.age.edu.adj[3,4]    0.259 1.000  1200
b.age.edu.adj[4,4]    0.394 1.005   740
b.black              -1.006 1.000  1200
b.edu.adj[1]          0.054 1.001  1200
b.edu.adj[2]          0.132 1.002  1200
b.edu.adj[3]          0.394 1.002  1200
b.edu.adj[4]          0.217 1.001  1200
b.female              0.090 1.001  1200
b.female.black        0.650 1.000  1200
b.region.adj[1]       0.056 1.056    52
b.region.adj[2]       0.287 1.057    64
b.region.adj[3]       0.348 1.073    51
b.region.adj[4]       0.233 1.115    33
b.region.adj[5]       1.765 1.185    24
b.state[1]            3.512 2.400     4
b.state[2]            3.092 2.385     4
b.state[3]            2.805 2.333     4
b.state[4]            2.726 2.478     4
b.state[5]            2.844 2.415     4
b.state[6]            3.021 2.356     4
b.state[7]            2.480 2.296     4
b.state[8]            2.457 2.270     5
b.state[9]            2.931 2.511     4
b.state[10]           3.158 2.425     4
b.state[11]           2.691 2.346     4
b.state[12]           2.465 2.488     4
b.state[13]           3.157 2.371     4
b.state[14]           2.477 2.356     4
b.state[15]           3.232 2.392     4
b.state[16]           3.002 2.405     4
b.state[17]           3.081 2.385     4
b.state[18]           2.640 2.359     4
b.state[19]           2.433 2.343     4
b.state[20]           2.301 2.446     4
b.state[21]           2.587 2.463     4
b.state[22]           2.230 2.426     4
b.state[23]           3.311 2.430     4
b.state[24]           2.713 2.448     4
b.state[25]           2.842 2.315     4
b.state[26]           2.861 2.409     4
b.state[27]           2.968 2.265     5
b.state[28]           3.339 2.163     5
b.state[29]           2.994 2.443     4
b.state[30]           2.727 2.336     4
b.state[31]           2.378 2.498     4
b.state[32]           3.224 2.415     4
b.state[33]           2.837 2.366     4
b.state[34]           3.176 2.447     4
b.state[35]           3.145 2.286     4
b.state[36]           2.463 2.354     4
b.state[37]           2.668 2.418     4
b.state[38]           2.621 2.377     4
b.state[39]           3.072 2.392     4
b.state[40]           2.510 2.371     4
b.state[41]           3.622 2.414     4
b.state[42]           3.077 2.454     4
b.state[43]           3.529 2.307     4
b.state[44]           3.216 2.222     5
b.state[45]           3.442 2.389     4
b.state[46]           2.748 2.411     4
b.state[47]           2.966 2.330     4
b.state[48]           2.580 2.453     4
b.state[49]           2.678 2.401     4
b.v.prev              5.349 1.203    17
mu.adj                0.615 1.001  1200
mu.age                5.658 5.652     3
mu.age.edu            4.483 5.851     3
mu.edu                3.540 3.513     4
mu.region             1.852 2.524     4
sigma.age             0.894 1.009   280
sigma.age.edu         0.360 1.007   410
sigma.edu             1.469 1.007   400
sigma.region          1.742 1.035    61
sigma.state           0.652 1.001  1200
deviance           2633.379 1.000  1200

For each parameter, n.eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor (at convergence, Rhat=1).

DIC info (using the rule: pV = var(deviance)/2)
pV = 66.9 and DIC = 2675.9
DIC is an estimate of expected predictive error (lower deviance is better).

8.2.2 Estimating average opinion by state using the model's inferences

8.2.2.1 Overview

The logistic regression model provides a way to estimate the probability that any adult in a given demographic group and state prefers Bush. Using these probabilities, we can compute weighted averages to estimate the proportion of Bush supporters in different subsets of the population.

Data preparation

Using U.S. Census data, we build a set of 3264 cells (crossings of demographics and states), where each cell represents a unique combination of:

  • Sex
  • Ethnicity
  • Age
  • Education level
  • State

Each cell contains the number of people matching that combination, stored in the data frame census.

Computing the expected support for Bush (y.pred)

After fitting the model in JAGS and obtaining n.sims simulated replicates, we compute y.pred, the predicted probability of supporting Bush for each demographic cell in each simulation.

# We assume `census` contains the demographic and state information for each cell
library(foreign)
library(tidyverse)
census <- read.dta("../ARM_Data/election88/census88.dta")

census <- census %>% filter(state <= 49)
attach.jags(jags_fit)

L <- nrow(census)  # Number of census cells
y.pred <- array(NA, c(n.sims, L))  # Matrix to store the predictions

for (l in 1:L) {
  y.pred[, l] <- invlogit(
    b.0 + b.female * census$female[l] +
    b.black * census$black[l] +
    b.female.black * census$female[l] * census$black[l] +
    b.age.adj[, census$age[l]] + b.edu.adj[, census$edu[l]] +
    b.age.edu.adj[, census$age[l], census$edu[l]] + b.state[, census$state[l]]
  )
}

8.2.3 Estimating average support by state

For each state \(j\), we estimate the average response by taking a weighted sum of the predictions across the 64 demographic categories within the state. This weighted average reflects the expected proportion of Bush supporters in each state.

The weighted average for state \(j\) is computed as:

\[ y^{\text{state}}_{\text{pred}, j} = \frac{\sum_{l \in j} N_l \theta_l}{\sum_{l \in j} N_l} \]

where:

  • \(N_l\) is the population count of demographic group \(l\) in state \(j\).
  • \(\theta_l\) is the predicted probability of support for Bush in group \(l\).

n.state <- max(census$state)  # Number of states
# Array to store the state-level predictions
y.pred.state <- array(NA, c(n.sims, n.state))

# Weighted state-level averages of the predictions
for (s in 1:n.sims) {
  for (j in 1:n.state) {
    ok <- census$state == j  # Cells belonging to state j
    y.pred.state[s, j] <- sum(census$N[ok] * y.pred[s, ok]) / sum(census$N[ok])
  }
}

8.2.4 Summarizing the predictions by state

For each state, we compute a point estimate and a 50% prediction interval from the n.sims simulations. This summarizes the proportion of adults in each state predicted to support Bush.

# Array to store the summary statistics
state.pred <- array(NA, c(3, n.state))

# Compute the 50% interval (25th–75th percentiles) and the median for each state
for (j in 1:n.state) {
  state.pred[, j] <- quantile(y.pred.state[, j], c(0.25, 0.5, 0.75))
}

8.2.5 Interpretation

The object state.pred contains:

  • the 25th percentile (lower bound of the 50% interval),
  • the median (point prediction), and
  • the 75th percentile (upper bound of the 50% interval)

of the proportion of adults in each state who supported Bush. These estimates incorporate the demographic variation within each state and provide information on the predicted level of support by state.
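As an illustration, the summaries can be arranged in a readable table. This is only a sketch: it reuses `state.pred` and `n.state` from the code above and the `state.abbr` vector defined earlier, and it assumes that the state indices 1..n.state in the census file follow the order of `state.abbr` (that mapping depends on how the file codes states, so treat it as an assumption); the column names are ours.

```r
# Hypothetical display of the state-level summaries (assumed index-to-abbreviation mapping)
library(tidyverse)

state.summary <- tibble(
  state  = state.abbr[1:n.state],
  q25    = state.pred[1, ],   # lower bound of the 50% interval
  median = state.pred[2, ],   # point prediction
  q75    = state.pred[3, ]    # upper bound of the 50% interval
) %>%
  arrange(desc(median))       # states with highest predicted Bush support first
```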

8.3 Overdispersed Poisson model in JAGS

8.3.1 General context

The study analyzes rates of police stops in New York City by ethnic group and police precinct, allowing for:

  • Crossed (non-nested) effects of ethnicity and precinct.
  • An observation-level overdispersion component.
  • A logarithmic offset proportional to the previous year's number of arrests, scaled to 15 months.

Each observation \((e,p)\) represents a combination of ethnic group \(e\) and precinct \(p\). The response variable \(y_{ep}\) is the number of stops for that group and area.

8.3.2 Structure of the hierarchical model

Overdispersed Poisson model with offset:

\[ \begin{aligned} y_{ep} &\sim \text{Poisson}\left(\frac{15}{12}\,n_{ep}\,e^{\mu+\alpha_e+\beta_p+\epsilon_{ep}}\right),\\ \alpha_e &\sim \mathrm{N}(0,\sigma_\alpha^2),\\ \beta_p &\sim \mathrm{N}(0,\sigma_\beta^2),\\ \epsilon_{ep} &\sim \mathrm{N}(0,\sigma_\epsilon^2). \end{aligned} \]

  • \(\alpha_e\): random effect for ethnic group.
  • \(\beta_p\): random effect for precinct.
  • \(\epsilon_{ep}\): observation-level overdispersion term.
  • \(\log\left(\frac{15}{12}n_{ep}\right)\) acts as a fixed (known) offset.

8.3.3 Data preparation in R

Simulating data according to the model described above:

set.seed(5454)  # for reproducibility
# "True" parameters of the simulation
E  <- 3                    # ethnic groups
P  <- 60                   # precincts
mu <- -1.0                 # intercept on the log scale
alpha_true <- c( 0.40, 0.10, -0.50 ) # ethnicity effects: blacks, hispanics, whites (sum != 0; to be centered later)
sigma_beta     <- 0.50     # between-precinct sd
sigma_epsilon  <- 0.30     # overdispersion sd (log scale)
lambda_arresto <- 50       # baseline mean number of arrests per cell (before scaling)
scale_15m      <- 15/12    # 15-month factor

# Full design (all ethnicity-precinct combinations)
design <- expand_grid(
  eth = factor(1:E, labels = c("black","hispanic","white")),
  precinct = factor(1:P)
) %>%
  # Assign a unique ID per observation (for the OLRE)
  mutate(obs_id = row_number())

# Precinct random effects
beta_precinct <- rnorm(P, mean = 0, sd = sigma_beta)
names(beta_precinct) <- levels(design$precinct)

# Observation-level overdispersion (OLRE, normal on the log scale)
eps_obs <- rnorm(nrow(design), mean = 0, sd = sigma_epsilon)

# Previous year's arrests: n_ep ~ Poisson(lambda_arresto), kept >= 1
n_prev <- rpois(nrow(design), lambda = lambda_arresto) %>% pmax(1)

# Expected "stops" rate (on the log scale), following (15.1):
# log E[y_ep] = log( (15/12) * n_ep ) + mu + alpha_e + beta_p + eps_ep
linpred <- log(scale_15m * n_prev) +
           mu +
           alpha_true[as.integer(design$eth)] +
           beta_precinct[as.character(design$precinct)] +
           eps_obs

# Response: y_ep ~ Poisson( exp(linpred) )
y <- rpois(nrow(design), lambda = exp(linpred))

# Build the final data frame
sim_data <- design %>%
  mutate(
    n_prev = n_prev,
    y      = y
  )

glimpse(sim_data)
Rows: 180
Columns: 5
$ eth      <fct> black, black, black, black, black, black, black, black, black…
$ precinct <fct> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18…
$ obs_id   <int> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18…
$ n_prev   <dbl> 54, 45, 46, 38, 45, 56, 50, 39, 42, 49, 56, 51, 46, 48, 55, 5…
$ y        <int> 80, 37, 48, 87, 43, 39, 24, 23, 6, 23, 45, 23, 53, 27, 46, 10…
# Log offset: log( (15/12) * n_prev )
sim_data <- sim_data %>%
  mutate(offset_log = log(scale_15m * n_prev))

The data must be passed to JAGS as a list, for example:

library(R2jags)

data_jags <- list(
  N = nrow(sim_data),
  y = sim_data$y,
  offset = log(15/12 * sim_data$n_prev),
  eth = as.numeric(sim_data$eth),
  precinct = as.numeric(sim_data$precinct),
  n_eth = length(unique(sim_data$eth)),
  n_precinct = length(unique(sim_data$precinct))
)

8.3.4 Initialization scheme and parameters to save

inits <- function() list(
  mu = rnorm(1),
  sigma_eth = runif(1, 0, 1),
  sigma_precinct = runif(1, 0, 1),
  sigma_eps = runif(1, 0, 1)
)

params <- c("mu", "sigma_eth", "sigma_precinct", "sigma_eps", "b_eth", "b_precinct")
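The model file `codigoJAGS/poisson_sobredisp.jags` referenced below is not reproduced in the text. A minimal sketch consistent with the model in Section 8.3.2 and with the names in `data_jags`, `inits`, and `params` might look as follows; the specific priors (a vague normal on `mu`, uniform priors on the standard deviations) are our assumption, not necessarily those of the original file.

```
model {
  for (i in 1:N) {
    y[i] ~ dpois(lambda[i])
    # offset + intercept + crossed random effects + observation-level overdispersion
    log(lambda[i]) <- offset[i] + mu + b_eth[eth[i]] +
                      b_precinct[precinct[i]] + eps[i]
    eps[i] ~ dnorm(0, tau_eps)
  }
  for (e in 1:n_eth)      { b_eth[e] ~ dnorm(0, tau_eth) }
  for (p in 1:n_precinct) { b_precinct[p] ~ dnorm(0, tau_precinct) }

  mu ~ dnorm(0, 1.0E-4)
  # Assumed uniform priors on the standard deviations (JAGS uses precisions)
  sigma_eth      ~ dunif(0, 100);  tau_eth      <- pow(sigma_eth, -2)
  sigma_precinct ~ dunif(0, 100);  tau_precinct <- pow(sigma_precinct, -2)
  sigma_eps      ~ dunif(0, 100);  tau_eps      <- pow(sigma_eps, -2)
}
```

Note that such a model implies 180 + 60 + 3 + 4 = 247 unobserved stochastic nodes, matching the graph information printed by JAGS below.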

8.3.5 Running the model

fit_jags <- jags(
  data = data_jags,
  inits = inits,
  parameters.to.save = params,
  model.file = "codigoJAGS/poisson_sobredisp.jags",
  n.chains = 3,
  n.iter = 5000,
  n.burnin = 1000,
  n.thin = 5
)
Compiling model graph
   Resolving undeclared variables
   Allocating nodes
Graph information:
   Observed stochastic nodes: 180
   Unobserved stochastic nodes: 247
   Total graph size: 1338

Initializing model
print(fit_jags)
Inference for Bugs model at "codigoJAGS/poisson_sobredisp.jags", fit using jags,
 3 chains, each with 5000 iterations (first 1000 discarded), n.thin = 5
 n.sims = 2400 iterations saved. Running time = 10.092 secs
                mu.vect sd.vect     2.5%      25%      50%      75%    97.5%
b_eth[1]          0.397   1.448   -2.043    0.068    0.378    0.713    2.989
b_eth[2]          0.119   1.446   -2.313   -0.214    0.100    0.428    2.701
b_eth[3]         -0.464   1.445   -2.915   -0.800   -0.477   -0.151    2.061
b_precinct[1]     0.599   0.188    0.230    0.473    0.597    0.722    0.971
b_precinct[2]     0.261   0.195   -0.145    0.133    0.267    0.394    0.628
b_precinct[3]     0.165   0.195   -0.212    0.034    0.159    0.300    0.558
b_precinct[4]     0.950   0.181    0.593    0.829    0.953    1.071    1.307
b_precinct[5]     0.618   0.183    0.270    0.495    0.622    0.740    0.974
b_precinct[6]    -0.088   0.202   -0.486   -0.225   -0.088    0.052    0.304
b_precinct[7]    -0.175   0.200   -0.562   -0.317   -0.169   -0.033    0.207
b_precinct[8]    -0.005   0.194   -0.386   -0.131   -0.001    0.121    0.374
b_precinct[9]    -0.556   0.216   -0.981   -0.704   -0.553   -0.408   -0.125
b_precinct[10]   -0.057   0.201   -0.442   -0.188   -0.058    0.078    0.330
b_precinct[11]    0.192   0.193   -0.197    0.064    0.194    0.325    0.571
b_precinct[12]    0.120   0.194   -0.255   -0.009    0.120    0.255    0.482
b_precinct[13]    0.307   0.191   -0.070    0.175    0.309    0.439    0.666
b_precinct[14]   -0.128   0.199   -0.517   -0.264   -0.126    0.009    0.253
b_precinct[15]    0.534   0.187    0.165    0.411    0.531    0.659    0.902
b_precinct[16]    0.726   0.181    0.356    0.611    0.727    0.850    1.072
b_precinct[17]   -0.046   0.200   -0.439   -0.186   -0.045    0.087    0.344
b_precinct[18]   -0.831   0.239   -1.322   -0.992   -0.824   -0.665   -0.383
b_precinct[19]    0.907   0.180    0.556    0.786    0.908    1.028    1.251
b_precinct[20]   -0.880   0.238   -1.348   -1.039   -0.878   -0.716   -0.435
b_precinct[21]    0.433   0.188    0.071    0.304    0.436    0.561    0.805
b_precinct[22]    0.292   0.190   -0.081    0.168    0.294    0.419    0.660
b_precinct[23]   -0.090   0.204   -0.492   -0.219   -0.093    0.047    0.306
b_precinct[24]   -0.114   0.206   -0.518   -0.258   -0.111    0.030    0.286
b_precinct[25]   -0.899   0.245   -1.380   -1.062   -0.896   -0.733   -0.413
b_precinct[26]    0.053   0.200   -0.339   -0.084    0.055    0.188    0.443
b_precinct[27]   -0.669   0.219   -1.109   -0.810   -0.664   -0.529   -0.252
b_precinct[28]   -0.051   0.202   -0.441   -0.186   -0.048    0.085    0.335
b_precinct[29]   -0.660   0.222   -1.095   -0.814   -0.658   -0.509   -0.229
b_precinct[30]    0.279   0.189   -0.095    0.151    0.283    0.407    0.656
b_precinct[31]    0.049   0.203   -0.350   -0.088    0.053    0.191    0.437
b_precinct[32]   -0.024   0.198   -0.421   -0.157   -0.024    0.109    0.354
b_precinct[33]    0.299   0.194   -0.085    0.171    0.299    0.426    0.674
b_precinct[34]    0.141   0.192   -0.227    0.008    0.145    0.267    0.515
b_precinct[35]   -0.050   0.203   -0.447   -0.186   -0.051    0.086    0.354
b_precinct[36]    0.465   0.187    0.107    0.340    0.463    0.591    0.835
b_precinct[37]   -0.329   0.207   -0.725   -0.467   -0.331   -0.189    0.071
b_precinct[38]   -0.072   0.202   -0.475   -0.205   -0.071    0.062    0.327
b_precinct[39]    0.884   0.184    0.518    0.753    0.891    1.011    1.237
b_precinct[40]   -0.040   0.195   -0.429   -0.165   -0.040    0.091    0.340
b_precinct[41]   -0.711   0.227   -1.177   -0.860   -0.708   -0.557   -0.294
b_precinct[42]    0.287   0.188   -0.086    0.159    0.288    0.414    0.650
b_precinct[43]   -0.748   0.234   -1.219   -0.900   -0.745   -0.594   -0.292
b_precinct[44]    0.086   0.189   -0.297   -0.036    0.085    0.213    0.460
b_precinct[45]   -0.123   0.199   -0.504   -0.255   -0.124    0.008    0.263
b_precinct[46]    0.075   0.195   -0.292   -0.058    0.069    0.206    0.465
b_precinct[47]    0.132   0.193   -0.247    0.002    0.134    0.262    0.503
b_precinct[48]   -0.366   0.208   -0.783   -0.505   -0.367   -0.226    0.027
b_precinct[49]   -0.942   0.249   -1.424   -1.108   -0.944   -0.783   -0.447
b_precinct[50]   -0.223   0.205   -0.628   -0.364   -0.221   -0.080    0.175
b_precinct[51]    0.410   0.184    0.058    0.284    0.411    0.536    0.760
b_precinct[52]   -0.267   0.209   -0.695   -0.409   -0.261   -0.126    0.129
b_precinct[53]    0.275   0.188   -0.092    0.148    0.274    0.400    0.653
b_precinct[54]   -0.534   0.222   -0.984   -0.679   -0.536   -0.379   -0.112
b_precinct[55]   -0.226   0.208   -0.629   -0.367   -0.228   -0.085    0.175
b_precinct[56]   -0.657   0.231   -1.112   -0.804   -0.657   -0.503   -0.212
b_precinct[57]    0.007   0.198   -0.374   -0.129    0.007    0.143    0.401
b_precinct[58]   -0.015   0.198   -0.401   -0.148   -0.017    0.125    0.372
b_precinct[59]    0.601   0.180    0.235    0.480    0.604    0.724    0.937
b_precinct[60]    0.436   0.184    0.066    0.316    0.436    0.559    0.802
mu               -1.126   1.446   -3.741   -1.448   -1.101   -0.794    1.329
sigma_eps         0.271   0.030    0.214    0.250    0.270    0.290    0.334
sigma_eth         1.579   2.341    0.274    0.530    0.860    1.593    8.388
sigma_precinct    0.511   0.058    0.401    0.470    0.511    0.546    0.624
deviance       1056.501  19.060 1020.500 1043.323 1056.156 1069.032 1095.065
                Rhat n.eff
b_eth[1]       1.003  2300
b_eth[2]       1.002  2400
b_eth[3]       1.003  2400
b_precinct[1]  1.001  2400
b_precinct[2]  1.001  2400
b_precinct[3]  1.002  1200
b_precinct[4]  1.000  2400
b_precinct[5]  1.001  2400
b_precinct[6]  1.001  2400
b_precinct[7]  1.002  1400
b_precinct[8]  1.002  1500
b_precinct[9]  1.008   280
b_precinct[10] 1.001  2400
b_precinct[11] 1.002  1600
b_precinct[12] 1.001  2400
b_precinct[13] 1.001  2400
b_precinct[14] 1.001  2400
b_precinct[15] 1.000  2400
b_precinct[16] 1.002  1600
b_precinct[17] 1.002  1400
b_precinct[18] 1.001  2400
b_precinct[19] 1.002  1900
b_precinct[20] 1.001  2400
b_precinct[21] 1.001  2400
b_precinct[22] 1.001  2400
b_precinct[23] 1.002  1100
b_precinct[24] 1.000  2400
b_precinct[25] 1.001  2400
b_precinct[26] 1.003   830
b_precinct[27] 1.007   320
b_precinct[28] 1.002  1200
b_precinct[29] 1.006   380
b_precinct[30] 1.001  2400
b_precinct[31] 1.001  2400
b_precinct[32] 1.002  1100
b_precinct[33] 1.002  1300
b_precinct[34] 1.003   680
b_precinct[35] 1.001  2400
b_precinct[36] 1.001  2400
b_precinct[37] 1.002  1100
b_precinct[38] 1.003   850
b_precinct[39] 1.001  2400
b_precinct[40] 1.004   510
b_precinct[41] 1.001  2400
b_precinct[42] 1.002  1100
b_precinct[43] 1.000  2400
b_precinct[44] 1.001  2400
b_precinct[45] 1.001  2400
b_precinct[46] 1.003   650
b_precinct[47] 1.001  2400
b_precinct[48] 1.000  2400
b_precinct[49] 1.001  2400
b_precinct[50] 1.001  2400
b_precinct[51] 1.002  1100
b_precinct[52] 1.001  2400
b_precinct[53] 1.004   620
b_precinct[54] 1.001  2400
b_precinct[55] 1.006   360
b_precinct[56] 1.003   660
b_precinct[57] 1.002  1100
b_precinct[58] 1.002  1700
b_precinct[59] 1.002  1700
b_precinct[60] 1.002  1000
mu             1.003  2400
sigma_eps      1.004   590
sigma_eth      1.002  1900
sigma_precinct 1.001  2400
deviance       1.003   730

For each parameter, n.eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor (at convergence, Rhat=1).

DIC info (using the rule: pV = var(deviance)/2)
pV = 181.3 and DIC = 1237.8
DIC is an estimate of expected predictive error (lower deviance is better).

8.3.6 Model diagnostics

mcmc_list <- as.mcmc(fit_jags)

## Trace and density plots for the scale parameters
diag_mcmc(mcmc_list, parName = "mu")

diag_mcmc(mcmc_list, parName = "sigma_eth")

diag_mcmc(mcmc_list, parName = "sigma_precinct")

diag_mcmc(mcmc_list, parName = "sigma_eps")

## Gelman-Rubin (R-hat) summary
gelman_diag <- gelman.diag(mcmc_list, autoburnin = FALSE, multivariate = FALSE)
gelman_diag
Potential scale reduction factors:

               Point est. Upper C.I.
b_eth[1]            1.003      1.005
b_eth[2]            1.002      1.004
b_eth[3]            1.003      1.005
b_precinct[1]       1.000      1.000
b_precinct[10]      1.000      1.001
b_precinct[11]      1.001      1.004
b_precinct[12]      1.001      1.002
b_precinct[13]      1.001      1.002
b_precinct[14]      1.000      1.001
b_precinct[15]      1.000      1.000
b_precinct[16]      1.002      1.006
b_precinct[17]      1.002      1.006
b_precinct[18]      1.001      1.003
b_precinct[19]      1.001      1.004
b_precinct[2]       1.000      1.002
b_precinct[20]      1.001      1.003
b_precinct[21]      1.001      1.001
b_precinct[22]      1.001      1.002
b_precinct[23]      1.002      1.006
b_precinct[24]      0.999      0.999
b_precinct[25]      1.000      1.002
b_precinct[26]      1.002      1.009
b_precinct[27]      1.006      1.023
b_precinct[28]      1.002      1.006
b_precinct[29]      1.006      1.020
b_precinct[3]       1.001      1.006
b_precinct[30]      1.000      1.002
b_precinct[31]      1.000      1.001
b_precinct[32]      1.002      1.007
b_precinct[33]      1.001      1.006
b_precinct[34]      1.002      1.010
b_precinct[35]      1.001      1.003
b_precinct[36]      1.000      1.000
b_precinct[37]      1.001      1.006
b_precinct[38]      1.002      1.009
b_precinct[39]      1.001      1.003
b_precinct[4]       0.999      1.000
b_precinct[40]      1.004      1.014
b_precinct[41]      1.000      1.001
b_precinct[42]      1.001      1.006
b_precinct[43]      1.000      1.001
b_precinct[44]      1.000      1.001
b_precinct[45]      1.001      1.002
b_precinct[46]      1.003      1.011
b_precinct[47]      1.000      1.002
b_precinct[48]      0.999      1.000
b_precinct[49]      1.000      1.001
b_precinct[5]       1.000      1.000
b_precinct[50]      1.001      1.001
b_precinct[51]      1.002      1.007
b_precinct[52]      1.000      1.002
b_precinct[53]      1.003      1.012
b_precinct[54]      1.001      1.003
b_precinct[55]      1.005      1.020
b_precinct[56]      1.003      1.011
b_precinct[57]      1.002      1.007
b_precinct[58]      1.001      1.004
b_precinct[59]      1.001      1.004
b_precinct[6]       1.000      1.001
b_precinct[60]      1.002      1.007
b_precinct[7]       1.001      1.005
b_precinct[8]       1.001      1.004
b_precinct[9]       1.007      1.026
deviance            1.003      1.010
mu                  1.003      1.005
sigma_eps           1.003      1.012
sigma_eth           1.059      1.069
sigma_precinct      1.001      1.003