Non-Linear Least Squares Fitting
This section gives a brief overview of the interface to fit the generalized Bloch model to hybrid-state free precession data. We use the LsqFit.jl package and supply the algorithm with analytic gradients that are calculated with the calculatesignal_linearapprox function, which implements the linear approximation of the generalized Bloch model for a train of rectangular RF pulses.
Basic Interface
This tutorial uses the following packages:
using MRIgeneralizedBloch
using MAT
using LinearAlgebra
using Plots
and we demonstrate the concept using the RF pulse train that we published in the paper Rapid quantitative magnetization transfer imaging: utilizing the hybrid state and the generalized Bloch model:
control = matread(normpath(joinpath(pathof(MRIgeneralizedBloch), "../../docs/control_MT_v3p2_TR3p5ms_discretized.mat")))
α = control["alpha"]
TRF = control["TRF"]
TR = 3.5e-3
t = TR .* (1:length(TRF));
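For orientation, the loaded pulse train can be visualized with the Plots package imported above (a minimal sketch, not part of the original script; it assumes α is stored in radians and TRF in seconds, consistent with the units used throughout this example):
p_control = plot(xlabel="t (s)", legend=:topright)
plot!(p_control, t, vec(α), label="α (rad)") # flip angles of the RF pulse train
plot!(p_control, t, 1e3 .* vec(TRF), label="TRF (ms)") # pulse durations, scaled to ms for visibility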
As an example, we assume the following ground truth parameters:
m0s = 0.15
R1f = 0.5 # 1/s
R2f = 17 # 1/s
Rx = 30 # 1/s
R1s = 3 # 1/s
T2s = 12e-6 # s
ω0 = 100 # rad/s
B1 = 0.9; # in units of B1_nominal
Next, we precompute the linear approximation
R2slT = precompute_R2sl();
and simulate the signal:
s = calculatesignal_linearapprox(α, TRF, TR, ω0, B1, m0s, R1f, R2f, Rx, R1s, T2s, R2slT)
s = vec(s)
1142-element Vector{ComplexF64}:
-0.3252174288893315 + 0.00938912166837356im
-0.23144911831664405 + 0.004770838501329179im
-0.12903777535997424 + 0.011349799622266895im
-0.056787606424231515 + 0.0007970576142408911im
-0.026755594378778585 + 0.003982695309854096im
-0.01673038309211195 - 0.0022524739525476727im
-0.012251527817036773 + 0.0027438145833248856im
-0.007219584139105733 - 0.0018204651504731412im
-0.009658435998686056 + 0.0012105680918308816im
-0.004850045141186905 - 0.0003889148428230819im
⋮
0.00935617815421774 + 0.00011234342445982951im
0.010555789247299562 + 0.00020142668135760577im
0.013338250629915492 + 0.0004095201366372554im
0.019556535785238114 + 0.0008491703773060018im
0.03288203849050157 + 0.0017547684481495536im
0.06717248746066432 + 0.004745654341955107im
0.15801535875047534 + 0.012316532464395217im
0.2654078941316742 + 0.009131261105131041im
0.35381356507971934 + 0.010214700429185606im
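The simulated signal contains one complex-valued data point per TR, so its length matches that of the pulse train (a quick sanity check, not part of the original script):
length(s) == length(TRF) # expected to return true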
To make this example a bit more realistic, we add complex-valued Gaussian noise:
s .+= 0.01 * randn(ComplexF64, size(s));
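The standard deviation of 0.01 refers to the normalized signal scale; a rough peak signal-to-noise ratio can be estimated as follows (a minimal sketch with an ad hoc SNR definition, not part of the original analysis):
σ = 0.01 # standard deviation of the complex-valued noise added above
SNR_peak = maximum(abs.(s)) / σ # roughly 35 for the signal shown above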
Now we can fit the model to the noisy data:
qM = fit_gBloch(s, α, TRF, TR; R2slT=R2slT)
MRIgeneralizedBloch.qMTparam(0.9982667951213906 - 0.00467223499086084im, 0.1596899173109596, 0.5377120191230745, 17.992646774707527, 19.340028552281787, 2.8752266138024085, 1.2e-5, 92.27299300401852, 0.89038065232014, 0.33305383580197606, LsqFit.LsqFitResult{…}(…))
The last keyword argument is optional. It allows recycling the precomputed R2sl, which improves speed. If not specified, it is re-calculated internally.
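To gauge the benefit of recycling R2slT, one could time both variants, e.g., with Julia's @time macro (a rough sketch; absolute timings depend on hardware):
@time fit_gBloch(s, α, TRF, TR; R2slT=R2slT); # re-uses the precomputed linear approximation
@time fit_gBloch(s, α, TRF, TR);              # re-computes R2sl internally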
The results are stored in a struct, and we can access the fitted parameters with
qM.m0s
0.1596899173109596
qM.R1f # 1/s
0.5377120191230745
qM.R2f # 1/s
17.992646774707527
qM.Rx # 1/s
19.340028552281787
qM.R1s # 1/s
2.8752266138024085
1e6qM.T2s # μs
12.0
qM.ω0 # rad/s
92.27299300401852
qM.B1 # 1/B1_nominal
0.89038065232014
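Since the ground truth is known in this simulation, the fitted parameters can be compared to it directly (a minimal sketch):
println("m0s: fitted = ", round(qM.m0s; digits=3), ", truth = ", m0s)
println("R1f: fitted = ", round(qM.R1f; digits=3), ", truth = ", R1f, " 1/s")
println("B1:  fitted = ", round(qM.B1; digits=3), ", truth = ", B1)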
We can also simulate the signal with the fitted parameters
s_fitted = calculatesignal_linearapprox(α, TRF, TR, qM.ω0, qM.B1, qM.m0s, qM.R1f, qM.R2f, qM.Rx, qM.R1s, qM.T2s, R2slT)
s_fitted = vec(s_fitted);
and compare it to the noisy data:
p = plot(xlabel="t (s)", ylabel="signal (normalized)", legend=:topleft)
plot!(p, t, real.(s), label="Re(s)")
plot!(p, t, imag.(s), label="Im(s)")
plot!(p, t, real.(s_fitted), label="Re(s_fitted)")
plot!(p, t, imag.(s_fitted), label="Im(s_fitted)")
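The quality of the fit can also be judged from the residual, which should be on the order of the added noise (a minimal sketch that extends the plot above):
plot!(p, t, abs.(s .- s_fitted), label="|s - s_fitted|")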
Bounds and Fixed Parameters
The above example uses the default bounds
reM0 = (-Inf, 1, Inf)
imM0 = (-Inf, 0, Inf)
m0s = ( 0, 0.2, 1)
R1f = ( 0, 0.3, Inf)
R2f = ( 0, 15, Inf)
Rx = ( 0, 20, Inf)
R1s = ( 0, 3, Inf)
T2s = (8e-6,1e-5,12e-6)
ω0 = (-Inf, 0, Inf)
B1 = ( 0, 1, 1.5)
where the three entries refer to (minimum, start_value, maximum) (cf. fit_gBloch).
With keyword arguments, one can modify each of these bounds. For example:
qM = fit_gBloch(s, α, TRF, TR; R2slT=R2slT, m0s = (0.1, 0.3, 0.5))
MRIgeneralizedBloch.qMTparam(0.9982099333848325 - 0.004677978913332954im, 0.16092223619935414, 0.5399034311868554, 18.01453245556514, 18.842457299682298, 2.871955484744232, 1.2e-5, 92.2648661715259, 0.8906385101781277, 0.3330521048870099, LsqFit.LsqFitResult{…}(…))
This starts the fit at m0s = 0.3 and uses a lower bound of 0.1 and an upper bound of 0.5. Alternatively, one can also fix parameters to specified values:
qM = fit_gBloch(s, α, TRF, TR; R2slT=R2slT, ω0 = 0, B1 = 1)
MRIgeneralizedBloch.qMTparam(0.9623493001314775 + 0.0006247541439508438im, 0.18612155043781226, 0.45976267143589783, 14.608264480190991, 27.498292707439635, 2.780579887717563, 8.0e-6, 0, 1, 0.3369004046285255, LsqFit.LsqFitResult{…}(…))
In this case, the derivatives with respect to ω0 and B1 are not calculated, and the fitted values simply equal the fixed inputs:
qM.ω0
0
qM.B1
1
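Custom bounds and fixed values can also be combined in a single call (a minimal sketch that merely combines the two keyword forms shown above):
qM = fit_gBloch(s, α, TRF, TR; R2slT=R2slT, m0s=(0.1, 0.3, 0.5), ω0=0, B1=1)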
Linear Compression
As originally suggested by McGivney et al. for MR Fingerprinting, the manifold of signal evolutions or fingerprints is low rank, and it is often beneficial to reconstruct images directly in this domain. We can calculate a low-rank basis with
sv = Array{ComplexF64}(undef, length(s), 50)
for i=1:size(sv,2)
    # draw random (ω0, B1, m0s, R1f, R2f, Rx, R1s, T2s) values within plausible ranges
    sv[:,i] = calculatesignal_linearapprox(α, TRF, TR, 500randn(), 0.8 + 0.4rand(), rand(), rand(), 20rand(), 30rand(), 3rand(), 8e-6+5e-6rand(), R2slT)
end
u, _, _ = svd(sv)
u = u[:,1:9];
where the rank 9 was chosen heuristically. The noisy signal can be compressed with
sc = u' * s
9-element Vector{ComplexF64}:
2.1015975428589426 - 0.03719971709136826im
-0.17734175899428967 + 0.019298087325503154im
0.2961051742773636 + 0.011738151163018472im
0.2330686929414954 + 0.08934188811709468im
0.16395317110542076 - 0.00999937439532056im
-0.021514340563465108 + 0.019806068571058627im
0.073456562807343 - 0.0859584111310483im
0.03742581073559734 - 0.004144431959588149im
0.008096051780501254 + 0.007297969924577735im
and fitted by calling fit_gBloch with the keyword argument u=u:
qM = fit_gBloch(sc, α, TRF, TR; R2slT=R2slT, u=u)
MRIgeneralizedBloch.qMTparam(1.0153556276789146 - 0.003953242044046788im, 0.14779405528095785, 0.5586063176869526, 18.64803659640963, 18.896927361334765, 2.506655002054887, 1.2e-5, 77.0875124232642, 0.8840380823929265, 0.02084883290509469, LsqFit.LsqFitResult{…}(…))
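To gauge how well the rank-9 basis captures the signal, one can project the compressed data back into the time domain and compute the relative error (a minimal sketch, not part of the original script):
s_approx = u * sc # back-projection of the compressed signal
norm(s .- s_approx) / norm(s) # relative error; also contains the part of the noise that lies outside the basis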
Apparent $R_1$
The above fits treat R1f and R1s of the free and the semi-solid pool as independent parameters. As we discussed in our paper, many publications in the literature assume an apparent R1a = R1f = R1s. The corresponding model can be fitted by specifying fit_apparentR1=true:
R1a = 1 # 1/s
s = calculatesignal_linearapprox(α, TRF, TR, ω0, B1, m0s, R1a, R2f, Rx, R1a, T2s, R2slT)
qM = fit_gBloch(vec(s), α, TRF, TR; fit_apparentR1=true, R1a = (0, 0.7, Inf), R2slT=R2slT)
MRIgeneralizedBloch.qMTparam(0.9999999999999029 - 2.9353833270856017e-17im, 0.14999999999999586, 1.0000000000000766, 16.9999999999995, 30.00000000000871, 1.0000000000000766, 1.1999999999995426e-5, 100.0000000000033, 0.900000000000004, 5.5665512302381396e-14, LsqFit.LsqFitResult{…}(…))
Here, we specified the limits of R1a explicitly, which is optional.
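With fit_apparentR1=true, a single apparent relaxation rate is fitted, so the corresponding fields of the result coincide (a quick check based on the output above):
qM.R1f ≈ qM.R1s # both equal the fitted apparent R1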
This page was generated using Literate.jl.