Sensitivity analysis
Load code
First, we load the required packages and include some helper functions:
include("helper_functions.jl")
include("Derivatives_HelperFunctions.jl")load all T₁-mapping methods:
include("T1_mapping_methods.jl")All simulations in this analysis are performed with the generalized Bloch model:
MT_model = gBloch()
Set parameters
We analyze 9 ROIs in total and use the MT parameters from the paper Unconstrained quantitative magnetization transfer imaging: Disentangling T₁ of the free and semi-solid spin pools:
ROI = [:WM, :anteriorCC, :posteriorCC, :GM, :caudate, :putamen, :pallidum, :thalamus, :hippocampus]
m0s = [0.212, 0.237, 0.235, 0.098, 0.113, 0.118, 0.164, 0.158, 0.097]
T1f = [1.84, 1.77, 1.80, 2.46, 1.95, 1.84, 1.664, 2.02, 2.65]
T2f = [76.9, 69.9, 76.3, 83, 73.3, 67.4, 59.3, 70.8, 91] .* 1e-3
Rex = [13.6, 13.4, 13.5, 14.0, 13.8, 14.9, 15.8, 14.2, 15.3]
T1s = [0.34, 0.349, 0.350, 0.42, 0.432, 0.385, 0.351, 0.396, 0.376]
T2s = [12.5, 14.5, 12.6, 14.4, 15.1, 15.4, 14.9, 13.0, 13.0]
For consistency, we use times instead of rates throughout
Tex = 1 ./ Rex
and we neglect B₀ and B₁ inhomogeneities:
ω0 = 0
B1 = 1
We define vectors of symbols for all MT parameters and the derivatives ∂T₁ᵒ/∂pⱼᴹᵀ:
p = [:m0s, :T1f, :T2f, :Tex, :T1s, :T2s]
j = [:dT1odm0s, :dT1odT1f, :dT1odT2f, :dT1odTex, :dT1odT1s, :dT1odT2s]
Compute derivatives
Since the calculation of the derivatives is computationally expensive, the results are pre-computed and saved in a CSV file. To recompute the derivatives, set the following parameter to true:
calculate_jacobian = false
We store all derivatives in a single DataFrame (saved in the file jacobian.csv), concatenating all ROIs and T₁-mapping methods:
Nrows = length(ROI) * length(T1_functions)
if calculate_jacobian
    df = DataFrame(
        seq_name=Vector{String}(undef, Nrows),
        seq_type=Vector{Symbol}(undef, Nrows),
        ROI=Vector{Symbol}(undef, Nrows),
        m0s=Vector{Float64}(undef, Nrows),
        T1f=Vector{Float64}(undef, Nrows),
        T2f=Vector{Float64}(undef, Nrows),
        Tex=Vector{Float64}(undef, Nrows),
        T1s=Vector{Float64}(undef, Nrows),
        T2s=Vector{Float64}(undef, Nrows),
        dT1odm0s=Vector{Float64}(undef, Nrows),
        dT1odT1f=Vector{Float64}(undef, Nrows),
        dT1odT2f=Vector{Float64}(undef, Nrows),
        dT1odTex=Vector{Float64}(undef, Nrows),
        dT1odT1s=Vector{Float64}(undef, Nrows),
        dT1odT2s=Vector{Float64}(undef, Nrows),
    )
    Threads.@threads for iROI ∈ eachindex(ROI)
        _p = (m0s[iROI], T1f[iROI], T2f[iROI], Tex[iROI], T1s[iROI], T2s[iROI])
        Threads.@threads for iseq ∈ eachindex(T1_functions)
            df_idx = iseq + length(T1_functions) * (iROI - 1)
            # The following line performs the actual computation of the derivatives.
            # The functions in `T1_functions` expect rates for several MT parameters, so we call them with `1/T`.
            # `T₂ˢ` is rescaled to ensure stability of the finite difference algorithm.
            # The result is stored in `_j` to avoid overwriting the global vector of derivative symbols `j`.
            _j = jacobian(central_fdm(5, 1), p -> T1_functions[iseq](p[1], 1 / p[2], 1 / p[3], 1 / p[4], 1 / p[5], 1e-6p[6]), _p)[1]
            df[df_idx, :seq_name] = seq_name[iseq]
            df[df_idx, :seq_type] = seq_type[iseq]
            df[df_idx, :ROI] = ROI[iROI]
            df[df_idx, :m0s] = m0s[iROI]
            df[df_idx, :T1f] = T1f[iROI]
            df[df_idx, :T2f] = T2f[iROI]
            df[df_idx, :Tex] = Tex[iROI]
            df[df_idx, :T1s] = T1s[iROI]
            df[df_idx, :T2s] = T2s[iROI]
            df[df_idx, :dT1odm0s] = _j[1]
            df[df_idx, :dT1odT1f] = _j[2]
            df[df_idx, :dT1odT2f] = _j[3]
            df[df_idx, :dT1odTex] = _j[4]
            df[df_idx, :dT1odT1s] = _j[5]
            df[df_idx, :dT1odT2s] = _j[6]
        end
    end
    CSV.write("$(get_main_path())/jacobian.csv", df)
else
    df = DataFrame(CSV.File("$(get_main_path())/jacobian.csv"))
end
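As an optional sanity check, a single derivative can be recomputed with the same finite-difference scheme and compared against the stored table. This is a minimal sketch only: it assumes the T1_functions and parameter vectors defined above, and that row 1 of df corresponds to the first sequence and the first ROI, as implied by the indexing df_idx = iseq + length(T1_functions) * (iROI - 1):
iROI, iseq = 1, 1  # hypothetical spot check: first ROI (WM) and first sequence
_p = (m0s[iROI], T1f[iROI], T2f[iROI], Tex[iROI], T1s[iROI], T2s[iROI])
_j = jacobian(central_fdm(5, 1), x -> T1_functions[iseq](x[1], 1 / x[2], 1 / x[3], 1 / x[4], 1 / x[5], 1e-6x[6]), _p)[1]
isapprox(_j[1], df[1, :dT1odm0s]; rtol=1e-3)  # compare ∂T₁ᵒ/∂m₀ˢ to the first table entry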
show(df; allrows=false, allcols=true)
225×15 DataFrame
 Row │ seq_name                           seq_type  ROI          m0s      T1f      T2f      Tex        T1s      T2s      dT1odm0s  dT1odT1f  dT1odT2f      dT1odTex  dT1odT1s  dT1odT2s
     │ String                             String7   String15     Float64  Float64  Float64  Float64    Float64  Float64  Float64   Float64   Float64       Float64   Float64   Float64
─────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
   1 │ IR Stanisz et al.                  IR        WM             0.212     1.84   0.0769  0.0735294    0.34      12.5  -2.10292  0.267415  -0.000110262  1.30037   1.37382    0.000508036
   2 │ IR Stikhov et al.                  IR        WM             0.212     1.84   0.0769  0.0735294    0.34      12.5  -2.28826  0.222543  -0.0506166    0.912075  1.02529   -0.000920414
   3 │ IR Preibisch et al.                IR        WM             0.212     1.84   0.0769  0.0735294    0.34      12.5  -2.46361  0.228864   0.20441      1.52069   1.06454    0.00100851
   4 │ IR Shin et al.                     IR        WM             0.212     1.84   0.0769  0.0735294    0.34      12.5  -2.3551   0.246296  -0.022935     0.743902  1.27175   -1.53697e-5
   5 │ LL Shin et al.                     LL        WM             0.212     1.84   0.0769  0.0735294    0.34      12.5  -2.23983  0.251802  -0.00230623   0.658765  1.39083   -7.00089e-5
   6 │ IR Lu et al.                       IR        WM             0.212     1.84   0.0769  0.0735294    0.34      12.5  -2.22802  0.252726  -0.00395377   0.66974   1.40637   -5.633e-7
   7 │ LL Stikhov et al.                  LL        WM             0.212     1.84   0.0769  0.0735294    0.34      12.5  -2.24     0.18384    0.207381     1.85413   0.707039   0.00407056
   8 │ vFA Stikhov et al.                 vFA       WM             0.212     1.84   0.0769  0.0735294    0.34      12.5  -2.30226  0.298184   0.0182888    2.22977   1.23039    0.0082391
  ⋮  │                 ⋮                     ⋮           ⋮          ⋮        ⋮        ⋮         ⋮         ⋮        ⋮        ⋮         ⋮           ⋮           ⋮         ⋮           ⋮
 219 │ vFA CSMT w/ B1rms = 2uT Teixeira…  vFA       hippocampus    0.097     2.65   0.091   0.0653595    0.376     13.0  -7.14758  0.199928   0.119478     3.20735   0.484602   0.0100345
 220 │ MP2RAGE Marques et al.             MP2RAGE   hippocampus    0.097     2.65   0.091   0.0653595    0.376     13.0  -6.38943  0.349287   1.0598       1.6882    1.25555   -6.6475e-5
 221 │ MPRAGE Wright et al.               MP2RAGE   hippocampus    0.097     2.65   0.091   0.0653595    0.376     13.0  -6.28059  0.394443  -0.00842034   0.53405   1.53763   -9.50526e-5
 222 │ IR EPI Wright et al.               IR        hippocampus    0.097     2.65   0.091   0.0653595    0.376     13.0  -6.28075  0.395435  -0.00386423   0.670863  1.55602    0.000378673
 223 │ IR ad. Reynolds et al.             IR        hippocampus    0.097     2.65   0.091   0.0653595    0.376     13.0  -6.48347  0.376248  -0.0332049    1.16102   1.3979     0.000621703
 224 │ IR sinc Reynolds et al.            IR        hippocampus    0.097     2.65   0.091   0.0653595    0.376     13.0  -6.60535  0.361462  -0.00268402   1.05062   1.3209     0.00289585
 225 │ SR Reynolds et al.                 SR        hippocampus    0.097     2.65   0.091   0.0653595    0.376     13.0  -6.24995  0.4047     0.232175     1.09583   1.62443    0.00010026
                                                                                                                                            210 rows omitted
Figure 1
Fig. 1 is a scatter plot; _x ensures the desired alignment along the x-axis, and ymax ensures a uniform scaling of all subplots:
_x = Float64.(map(x -> findfirst(isequal(x), unique(df.ROI)), df.ROI))
_x .+= (map(x -> findfirst(isequal(x), sort(unique(df.seq_type))), df.seq_type) .- 3) ./ 8
ymax = maximum([maximum(abs.(df[!, j[i]] .* mean(df[!, p[i]]))) for i ∈ eachindex(j)]) * 1.1
pall = similar(j, Plots.Plot)
ylabels = ["|∂T₁ᵒ/∂m₀ˢ⋅μ(m₀ˢ)| (s)", "|∂T₁ᵒ/∂T₁ᶠ⋅μ(T₁ᶠ)| (s)", "|∂T₁ᵒ/∂T₂ᶠ⋅μ(T₂ᶠ)| (s)", "|∂T₁ᵒ/∂Tₓ⋅μ(Tₓ)| (s)", "|∂T₁ᵒ/∂T₁ˢ⋅μ(T₁ˢ)| (s)", "|∂T₁ᵒ/∂T₂ˢ⋅μ(T₂ˢ)| (s)"]
for id ∈ eachindex(j)
    pall[id] = scatter(_x, abs.(df[!, j[id]] .* mean(df[!, p[id]])),
        group=df.seq_type,
        hover=df.seq_name,
        xticks=id == length(j) ? (1:9, String.(unique(df.ROI))) : (1:9, ["" for _ ∈ 1:9]),
        ylim=(0, ymax),
        ylabel=ylabels[id],
        legend_position=:none,
    )
end
plt = plot(pall..., layout=(6, 1), size=(800, 1500))
Derivatives of the observed T₁ᵒ with respect to the 6 MT parameters. We calculated the derivatives for 9 brain regions of interest (ROIs) and for 25 pulse sequences, grouped into different sequence types (cf. legend). Here, WM denotes white matter, CC the corpus callosum, and GM gray matter. The derivatives ∂T₁ᵒ/∂pᵢᴹᵀ are normalized by pᵢᴹᵀ, averaged over all ROIs, to allow for a comparison between the parameters.
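In these plots, the quantity on the y-axis is |∂T₁ᵒ/∂pᵢᴹᵀ ⋅ μ(pᵢᴹᵀ)|, with μ(pᵢᴹᵀ) taken over all rows of the table (cf. the scatter call above), which is why all panels share the unit seconds and the common limit ymax.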
Linear mixed model
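For each derivative ∂T₁ᵒ/∂pᵢᴹᵀ, the loop below fits a linear mixed-effects model with the six MT parameters as fixed effects and with random intercepts for the ROI, the sequence type, and the individual sequence. In informal notation, matching the @formula call in the loop:
∂T₁ᵒ/∂pᵢᴹᵀ ≈ a₀ + a(m₀ˢ)⋅m₀ˢ + a(T₁ᶠ)⋅T₁ᶠ + a(T₂ᶠ)⋅T₂ᶠ + a(Tₓ)⋅Tₓ + a(T₁ˢ)⋅T₁ˢ + a(T₂ˢ)⋅T₂ˢ + b(ROI) + b(seq. type) + b(ind. seq) + ϵ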
First, we initialize a few DataFrame objects that will be filled in the for-loop below.
df_pred = DataFrame(
    dT1odm0s_simulated=Vector{Float64}(undef, Nrows),
    dT1odT1f_simulated=Vector{Float64}(undef, Nrows),
    dT1odT2f_simulated=Vector{Float64}(undef, Nrows),
    dT1odTex_simulated=Vector{Float64}(undef, Nrows),
    dT1odT1s_simulated=Vector{Float64}(undef, Nrows),
    dT1odT2s_simulated=Vector{Float64}(undef, Nrows),
    dT1odm0s_predicted=Vector{Float64}(undef, Nrows),
    dT1odT1f_predicted=Vector{Float64}(undef, Nrows),
    dT1odT2f_predicted=Vector{Float64}(undef, Nrows),
    dT1odTex_predicted=Vector{Float64}(undef, Nrows),
    dT1odT1s_predicted=Vector{Float64}(undef, Nrows),
    dT1odT2s_predicted=Vector{Float64}(undef, Nrows),
    color=Vector{Int}(undef, Nrows),
    ROI=Vector{Symbol}(undef, Nrows),
)
r2 = DataFrame()
r2[!, Symbol("∂T₁ᵒ / ∂pᵢᴹᵀ")]=Symbol[]
r2[!, Symbol("μ(|∂T₁ᵒ/∂pᵢᴹᵀ|) ⋅ μ(pᵢᴹᵀ)")]=Float64[]
r2[!, Symbol("σ(∂T₁ᵒ/∂pᵢᴹᵀ) / μ(|∂T₁ᵒ/∂pᵢᴹᵀ|)")]=Float64[]
r2[!, Symbol("R²(fixed)")]=Float64[]
r2[!, Symbol("R²(ROI)")]=Float64[]
r2[!, Symbol("R²(seq. type)")]=Float64[]
r2[!, Symbol("R²(ind. seq)")]=Float64[]
r2[!, Symbol("R²(full)")]=Float64[]
r2_fixed = DataFrame()
r2_fixed[!, Symbol("∂T₁ᵒ / ∂pᵢᴹᵀ")]=Symbol[]
r2_fixed[!, Symbol("R²(m₀ˢ)")]=Float64[]
r2_fixed[!, Symbol("R²(T₁ᶠ)")]=Float64[]
r2_fixed[!, Symbol("R²(T₂ᶠ)")]=Float64[]
r2_fixed[!, Symbol("R²(Tₓ)")]=Float64[]
r2_fixed[!, Symbol("R²(T₁ˢ)")]=Float64[]
r2_fixed[!, Symbol("R²(T₂ˢ)")]=Float64[]
r2_fixed[!, Symbol("R²(fixed)")]=Float64[]
fixed_model = DataFrame()
fixed_model[!, Symbol("∂T₁ᵒ / ∂pᵢᴹᵀ")]=Symbol[]
fixed_model[!, Symbol("a₀")]=Float64[]
fixed_model[!, Symbol("a(m₀ˢ)")]=Float64[]
fixed_model[!, Symbol("a(T₁ᶠ) (1/s)")]=Float64[]
fixed_model[!, Symbol("a(T₂ᶠ) (1/s)")]=Float64[]
fixed_model[!, Symbol("a(Tₓ) (1/s)")]=Float64[]
fixed_model[!, Symbol("a(T₁ˢ) (1/s)")]=Float64[]
fixed_model[!, Symbol("a(T₂ˢ) (1/s)")]=Float64[]
fixed_model[!, Symbol("m₀ˢ = 0")]=Float64[]
pall = similar(j, Plots.Plot)
for id ∈ eachindex(j)
    μⱼ = mean(abs.(df[!, j[id]])) * mean(df[!, p[id]])
    cv = std(df[!, j[id]]) / mean(abs.(df[!, j[id]]))
    frm = @formula(derivative ~ 1 + m0s + T1f + T2f + Tex + T1s + T2s + (1 | seq_type) + (1 | seq_name) + (1 | ROI))
    model = fit(MixedModel, term(j[id]) ~ frm.rhs, df)
    pred = sum(param -> model.βs[param] .* df[!, param], Symbol.(frm.rhs[collect(typeof.(frm.rhs) .== Term)]))
    σ²_fixed = var(pred; corrected=false)
    σ²_ROI = model.σs[:ROI][1]^2
    σ²_seqname = model.σs[:seq_name][1]^2
    σ²_seqtype = model.σs[:seq_type][1]^2
    σ²_ϵ = model.σ^2
    σ²_all = σ²_fixed + σ²_ROI + σ²_seqtype + σ²_seqname + σ²_ϵ
    r²_fixed = σ²_fixed / σ²_all
    r²_ROI = σ²_ROI / σ²_all
    r²_seqname = σ²_seqname / σ²_all
    r²_seqtype = σ²_seqtype / σ²_all
    r²_full = (σ²_fixed + σ²_ROI + σ²_seqtype + σ²_seqname) / σ²_all
    fe = fixef(model)
    m0s_intercept = fe[1] + mean(df.T1f) * fe[3] + mean(df.T2f) * fe[4] + mean(df.Tex) * fe[5] + mean(df.T1s) * fe[6] + mean(df.T2s) * fe[7]
    push!(r2, (j[id], μⱼ, cv, r²_fixed, r²_ROI, r²_seqtype, r²_seqname, r²_full))
    Δr² = shapley_regression(df, j[id], p)
    push!(r2_fixed, (j[id], Δr²..., sum(Δr²)))
    push!(fixed_model, (j[id], fe..., m0s_intercept))
    # Plot: simulated vs. full-model prediction
    dlim = maximum(df[!, j[id]]) - minimum(df[!, j[id]])
    xlim = (minimum(df[!, j[id]]) - 0.15dlim, maximum(df[!, j[id]]) + 0.15dlim)
    available_markers = [:circle, :rect, :diamond, :hexagon, :cross, :xcross, :utriangle, :x, :dtriangle]
    unique_ROI = unique(df.ROI)
    label_to_marker = Dict(label => available_markers[i] for (i, label) in enumerate(unique_ROI))
    markers = [label_to_marker[label] for label in df.ROI]
    pall[id] = scatter(df[!, j[id]], predict(model);
        group=df.seq_type,
        hover=df.seq_name .* "; " .* String.(df.ROI),
        m=markers,
        title = j[id],
        xlabel=id ∈ [5,6] ? "simulated" : "",
        ylabel=id ∈ [1,3,5] ? "predicted" : "",
        legend_position=:outerbottomright,
        xlim,
        ylim=xlim
    )
    plot!(pall[id], [xlim...], [xlim...], label="ideal", lc=:white, ls=:dash)
end
Table 1
Print the results of the mixed effects model fit:
r2
| Row | ∂T₁ᵒ / ∂pᵢᴹᵀ | μ(|∂T₁ᵒ/∂pᵢᴹᵀ|) ⋅ μ(pᵢᴹᵀ) | σ(∂T₁ᵒ/∂pᵢᴹᵀ) / μ(|∂T₁ᵒ/∂pᵢᴹᵀ|) | R²(fixed) | R²(ROI) | R²(seq. type) | R²(ind. seq) | R²(full) |
|---|---|---|---|---|---|---|---|---|
| | Symbol | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 |
| 1 | dT1odm0s | 0.556933 | 0.440659 | 0.964721 | 0.0 | 0.0 | 0.0210201 | 0.985741 | 
| 2 | dT1odT1f | 0.680899 | 0.329069 | 0.661442 | 0.0 | 0.0 | 0.322802 | 0.984244 | 
| 3 | dT1odT2f | 0.00756273 | 1.96182 | 0.0170527 | 0.0 | 0.198561 | 0.683024 | 0.898637 | 
| 4 | dT1odTex | 0.0982651 | 0.510041 | 0.110739 | 0.0 | 0.494323 | 0.364271 | 0.969334 | 
| 5 | dT1odT1s | 0.390455 | 0.283929 | 0.23283 | 0.000234572 | 0.0130214 | 0.722424 | 0.968509 | 
| 6 | dT1odT2s | 0.0475484 | 0.984586 | 0.0145129 | 0.0 | 0.63195 | 0.263944 | 0.910407 | 
This table corresponds to Tab. 1 in the paper. The column μ(|∂T₁ᵒ/∂pᵢᴹᵀ|) ⋅ μ(pᵢᴹᵀ) denotes the mean derivative, normalized by the average parameter value, and serves as a measure of the sensitivity of T₁ᵒ to the respective parameter. The column σ(∂T₁ᵒ/∂pᵢᴹᵀ) / μ(|∂T₁ᵒ/∂pᵢᴹᵀ|) denotes the coefficient of variation. The coefficient of determination of the full model, R²(full), is dissected into its components: R²(full) = R²(fixed) + R²(ROI) + R²(seq. type) + R²(ind. seq). R²(fixed) captures all fixed effects, that is, the degree to which variations of the pᵢᴹᵀ between the ROIs explain the derivatives' variability. R²(ROI) captures the ROI identifier as a random variable, potentially modeling inter-ROI variations not captured by the linear model of the pᵢᴹᵀ. R²(seq. type) captures the degree to which the sequence type, that is, the groups inversion-recovery, Look-Locker, saturation-recovery, variable flip angle, and MP²RAGE, explains the variability of the derivatives, and R²(ind. seq) captures the contribution of each individual sequence.
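Each component is the variance ratio computed in the loop above: R²(fixed) = σ²(fixed)/σ²(total) and R²(ROI) = σ²(ROI)/σ²(total), and analogously for the sequence type and the individual sequences, where σ²(total) = σ²(fixed) + σ²(ROI) + σ²(seq. type) + σ²(ind. seq) + σ²(ϵ). Here, σ²(fixed) is the variance of the fixed-effects prediction, the other σ² terms are the variances of the corresponding random intercepts, and σ²(ϵ) is the residual variance. Because the residual variance is excluded from the numerator of R²(full), R²(full) remains below 1.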
Table 2
r2_fixed
| Row | ∂T₁ᵒ / ∂pᵢᴹᵀ | R²(m₀ˢ) | R²(T₁ᶠ) | R²(T₂ᶠ) | R²(Tₓ) | R²(T₁ˢ) | R²(T₂ˢ) | R²(fixed) |
|---|---|---|---|---|---|---|---|---|
| | Symbol | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 |
| 1 | dT1odm0s | 0.236289 | 0.328903 | 0.203157 | 0.0683046 | 0.0982412 | 0.0298259 | 0.96472 | 
| 2 | dT1odT1f | 0.244748 | 0.0608523 | 0.0252796 | 0.0522312 | 0.176062 | 0.102258 | 0.66143 | 
| 3 | dT1odT2f | 0.00602466 | 0.00203602 | 0.000862497 | 0.00220685 | 0.00370932 | 0.00221337 | 0.0170527 | 
| 4 | dT1odTex | 0.0082649 | 0.0397436 | 0.0317466 | 0.00488157 | 0.00945707 | 0.0166449 | 0.110739 | 
| 5 | dT1odT1s | 0.0207222 | 0.0614811 | 0.0516094 | 0.0130687 | 0.0413631 | 0.0445847 | 0.232829 | 
| 6 | dT1odT2s | 0.000965823 | 0.00507139 | 0.0044098 | 0.000746721 | 0.000482004 | 0.00283735 | 0.0145131 | 
This table corresponds to Tab. 2 in the paper and analyzes the fixed effects. R²(fixed) is separated into the individual effects with Shapley regression. Note that R²(fixed) = R²(m₀ˢ) + R²(T₁ᶠ) + R²(T₂ᶠ) + R²(Tₓ) + R²(T₁ˢ) + R²(T₂ˢ).
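The Shapley decomposition itself is implemented in the helper function shapley_regression (included above and not reproduced here). As an illustration of the idea only (a minimal sketch assuming an ordinary-least-squares R² as the value function and the GLM, StatsModels, and Combinatorics packages; this is not the implementation used above), such a decomposition can be written as:
using Combinatorics, GLM, StatsModels, DataFrames  # hypothetical sketch only
function shapley_r2_sketch(data::DataFrame, target::Symbol, predictors::Vector{Symbol})
    # R² of an OLS fit of `target` on the predictor subset `S` (0 for the empty set)
    r2_of(S) = isempty(S) ? 0.0 : r2(lm(term(target) ~ term(1) + sum(term.(S)), data))
    n = length(predictors)
    Δr² = zeros(n)
    for (i, pᵢ) ∈ enumerate(predictors)
        for S ∈ powerset(setdiff(predictors, [pᵢ]))
            # Shapley weight of the coalition S
            w = factorial(length(S)) * factorial(n - length(S) - 1) / factorial(n)
            # marginal R² gain of adding pᵢ to S, weighted and accumulated
            Δr²[i] += w * (r2_of(vcat(S, [pᵢ])) - r2_of(S))
        end
    end
    return Δr²  # sums to the R² of the full model with all predictors
end
Each predictor's share is its marginal R² gain, averaged over all coalitions of the remaining predictors with the usual Shapley weights, so the shares sum to the R² of the full model; the values in the table above are the analogous contributions to R²(fixed) of the mixed model.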
Figure 2
plt = plot(pall..., layout=(3, 2), size=(800, 1000))
Validation of the mixed-effects model, where "simulated" denotes the simulated derivatives and "predicted" the output of the mixed model. The sequence type is color-coded, while the marker shape identifies the region of interest. Here, WM denotes white matter, CC the corpus callosum, and GM gray matter. The dashed line represents the perfect fit.
Table A1
fixed_model
| Row | ∂T₁ᵒ / ∂pᵢᴹᵀ | a₀ | a(m₀ˢ) | a(T₁ᶠ) (1/s) | a(T₂ᶠ) (1/s) | a(Tₓ) (1/s) | a(T₁ˢ) (1/s) | a(T₂ˢ) (1/s) | m₀ˢ = 0 |
|---|---|---|---|---|---|---|---|---|---|
| | Symbol | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 |
| 1 | dT1odm0s | 3.27099 | 5.54264 | -2.9847 | -39.1503 | 61.8731 | -0.68595 | -0.20586 | -4.38218 | 
| 2 | dT1odT1f | 0.22123 | -1.44109 | -0.157866 | 3.61564 | -2.01963 | 0.831118 | 0.0160723 | 0.569857 | 
| 3 | dT1odT2f | 0.1658 | -0.376151 | 0.0265572 | -1.23998 | -0.130622 | 0.0929158 | 0.00253391 | 0.188097 | 
| 4 | dT1odTex | 1.10669 | 1.40698 | 0.909191 | -3.20033 | -5.16708 | -2.99036 | -0.019092 | 0.928565 | 
| 5 | dT1odT1s | 1.51722 | 1.09856 | 0.546026 | -2.39964 | -4.95085 | -2.56266 | -0.0181253 | 0.862619 | 
| 6 | dT1odT2s | -0.00133644 | 0.00399748 | 0.00163344 | -0.0156081 | 0.0175875 | 0.00170419 | -9.99052e-5 | 0.00125829 | 
Coefficients of the fixed effects for each derivative's mixed model fit, including the intercept a₀. The last column denotes the derivatives assuming m₀ˢ = 0 and the mean values for the other parameters. For m₀ˢ = 0, the MT model reduces to the Bloch model, and a perfect statistical model should result in ∂T₁ᵒ/∂T₁ᶠ = 1 and ∂T₁ᵒ/∂Tₓ = ∂T₁ᵒ/∂T₁ˢ = ∂T₁ᵒ/∂T₂ˢ = 0. This is clearly not the case, highlighting the limitations of the mixed-effects model when extrapolating.
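In terms of the coefficients in this table, the last column equals a₀ + a(T₁ᶠ)⋅μ(T₁ᶠ) + a(T₂ᶠ)⋅μ(T₂ᶠ) + a(Tₓ)⋅μ(Tₓ) + a(T₁ˢ)⋅μ(T₁ˢ) + a(T₂ˢ)⋅μ(T₂ˢ), where μ(⋅) denotes the mean over all ROIs, i.e., the fixed-effects prediction evaluated at m₀ˢ = 0 (cf. the computation of m0s_intercept in the loop above).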