Non-Linear Least Square Fitting

This section gives a brief overview of the interface to fit the generalized Bloch model to hybrid-state free precession data. We use the LsqFit.jl package and supply the algorithm with analytic gradients that are calculated with the calculatesignal_linearapprox function that implements the linear approximation of the generalized Bloch model for a train of rectangular RF pulses.

Basic Interface

This tutorial uses the following packages:

using MRIgeneralizedBloch
using MAT
using LinearAlgebra
using Plots

and we demonstrate the concept using the RF pulse train from our paper Rapid quantitative magnetization transfer imaging: utilizing the hybrid state and the generalized Bloch model:

control = matread(normpath(joinpath(pathof(MRIgeneralizedBloch), "../../docs/control_MT_v3p2_TR3p5ms_discretized.mat")))
α   = control["alpha"]
TRF = control["TRF"]

TR = 3.5e-3
t = TR .* (1:length(TRF));
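The time axis assigns one sample per TR. With the 1142 pulses of this particular train (cf. the length of the signal vector below), the total duration comes out to roughly 4 s; a minimal stand-alone sketch:

```julia
TR_demo = 3.5e-3          # repetition time in s, as above
n_TR    = 1142            # number of RF pulses in this particular train
t_demo  = TR_demo .* (1:n_TR)

t_demo[end]               # total duration of the pulse train ≈ 3.997 s
```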

As an example, we assume the following ground-truth parameters:

m0s = 0.15
R1f = 0.5 # 1/s
R2f = 17 # 1/s
Rx = 30 # 1/s
R1s = 3 # 1/s
T2s = 12e-6 # s
ω0 = 100 # rad/s
B1 = 0.9; # in units of B1_nominal

We precompute the linear approximation:

R2slT = precompute_R2sl();

and simulate the signal:

s = calculatesignal_linearapprox(α, TRF, TR, ω0, B1, m0s, R1f, R2f, Rx, R1s, T2s, R2slT)
s = vec(s)
1142-element Vector{ComplexF64}:
  -0.32521731534875903 + 0.009389118419680724im
  -0.23144917764360304 + 0.004770811454991214im
   -0.1290377467849806 + 0.01134984015106346im
   -0.0567876651345546 + 0.0007970042626803417im
  -0.02675559139202616 + 0.003982755900190034im
  -0.01673039007555239 - 0.0022525327725476467im
 -0.012251559494301099 + 0.0027438655695917425im
 -0.007219555756391133 - 0.0018205028945847297im
 -0.009658488390963555 + 0.0012105896643953551im
 -0.004850006834127853 - 0.00038891966697234177im
                       ⋮
  0.009356175012659632 + 0.00011234338454070802im
  0.010555785673979995 + 0.00020142661069043585im
  0.013338246077603587 + 0.00040951999381068783im
  0.019556529056948786 + 0.0008491700809056521im
   0.03288202710045458 + 0.0017547678348315665im
   0.06717246407976532 + 0.0047456526833682im
   0.15801530357449572 + 0.012316528152781963im
   0.26540780134223896 + 0.009131257906679586im
   0.35381344155560296 + 0.010214696894837408im

To make this example a bit more realistic, we add complex valued Gaussian noise:

s .+= 0.01 * randn(ComplexF64, size(s));
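In Julia, `randn(ComplexF64)` draws circularly symmetric complex normal samples with unit total variance, i.e. the real and imaginary parts each carry variance 1/2. The factor 0.01 therefore sets the overall noise standard deviation relative to the normalized signal; a minimal sketch checking this scaling:

```julia
using Statistics

# complex Gaussian noise with overall standard deviation 0.01
n = 0.01 * randn(ComplexF64, 100_000)

std(n)          # ≈ 0.01 (total standard deviation)
std(real.(n))   # ≈ 0.01/√2 per component
```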

Now we can fit the model to the noisy data:

qM = fit_gBloch(s, α, TRF, TR; R2slT=R2slT)
MRIgeneralizedBloch.qMTparam(1.0348271502066184 + 0.003867153616122628im, 0.13370265608115106, 0.5190522410816119, 17.783606822848373, 34.28496645834916, 2.512010523913911, 1.2e-5, -5.4409459119238015, 0.8947133210545734, 0.33506434179879474, LsqFit.LsqFitResult{…}(…))

The last keyword argument is optional. It allows one to reuse the precomputed R2sl, which improves speed; if not specified, R2sl is recalculated internally.

The results are stored in a struct, and we can access the fitted parameters with

qM.m0s
0.13370265608115106
qM.R1f # 1/s
0.5190522410816119
qM.R2f # 1/s
17.783606822848373
qM.Rx # 1/s
34.28496645834916
qM.R1s # 1/s
2.512010523913911
1e6qM.T2s # μs
12.0
qM.ω0 # rad/s
-5.4409459119238015
qM.B1 # 1/B1_nominal
0.8947133210545734
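The fitted values scatter around the ground truth because of the added noise. For example, m0s is recovered to within roughly 11% in this particular noise realization:

```julia
m0s_true   = 0.15                   # ground truth from above
m0s_fitted = 0.13370265608115106    # fitted value from above

abs(m0s_fitted - m0s_true) / m0s_true   # relative error ≈ 0.11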

We can also simulate the signal with the fitted parameters

s_fitted = calculatesignal_linearapprox(α, TRF, TR, qM.ω0, qM.B1, qM.m0s, qM.R1f, qM.R2f, qM.Rx, qM.R1s, qM.T2s, R2slT)
s_fitted = vec(s_fitted);

and compare it to the noisy data:

p = plot(xlabel="t (s)", ylabel="signal (normalized)", legend=:topleft)
plot!(p, t, real.(s), label="Re(s)")
plot!(p, t, imag.(s), label="Im(s)")
plot!(p, t, real.(s_fitted), label="Re(s_fitted)")
plot!(p, t, imag.(s_fitted), label="Im(s_fitted)")
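Beyond the visual comparison, the agreement can be summarized with a normalized root-mean-square error. A minimal sketch with toy vectors standing in for s and s_fitted from above:

```julia
using LinearAlgebra

# stand-ins for the noisy data and the fitted signal
s_toy        = [1.0 + 0.1im, 0.5 - 0.05im, 0.25 + 0.02im]
s_fitted_toy = [0.98 + 0.09im, 0.52 - 0.04im, 0.24 + 0.03im]

# relative residual norm: 0 would indicate a perfect fit
nrmse = norm(s_toy - s_fitted_toy) / norm(s_toy)
```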

Bounds and Fixed Parameters

The above example uses the default bounds

reM0 = (-Inf,    1,   Inf)
imM0 = (-Inf,    0,   Inf)
m0s  = (   0,  0.2,     1)
R1f  = (   0,  0.3,   Inf)
R2f  = (   0,   15,   Inf)
Rx   = (   0,   20,   Inf)
R1s  = (   0,    3,   Inf)
T2s  = (8e-6, 1e-5, 12e-6)
ω0   = (-Inf,    0,   Inf)
B1   = (   0,    1,   1.5)

where the three entries refer to (minimum, start_value, maximum) (cf. fit_gBloch).

With keyword arguments, one can modify each of these bounds. For example:

qM = fit_gBloch(s, α, TRF, TR; R2slT=R2slT, m0s  = (0.1, 0.3, 0.5))
MRIgeneralizedBloch.qMTparam(1.03355493262202 + 0.003760996591661712im, 0.13364077197826466, 0.5160350037495423, 17.809192898404113, 34.2815113722717, 2.5624099088408556, 1.2e-5, -3.589942964931003, 0.8936035477529637, 0.33506539239123706, LsqFit.LsqFitResult{…}(…))

starts the fit at m0s = 0.3 and uses a lower bound of 0.1 and an upper bound of 0.5. Alternatively, one can also fix parameters to specified values:

qM = fit_gBloch(s, α, TRF, TR; R2slT=R2slT, ω0 = 0, B1 = 1)
MRIgeneralizedBloch.qMTparam(0.9761606832726175 + 0.003229845981024707im, 0.16616219762680726, 0.4451326270202993, 14.608867501033648, 37.098345976061196, 2.7126321474312958, 8.0e-6, 0, 1, 0.33805980839964667, LsqFit.LsqFitResult{…}(…))

In this case, the derivatives with respect to ω0 and B1 are not calculated, and the results are fixed accordingly:

qM.ω0
0
qM.B1
1
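Each bound passed to fit_gBloch is a (minimum, start_value, maximum) tuple, so the start value has to lie inside the interval. A small hypothetical helper (not part of MRIgeneralizedBloch) to sanity-check custom bounds before fitting:

```julia
# hypothetical helper: verify that the start value lies inside [min, max]
function check_bounds(name, (lo, start, hi))
    lo ≤ start ≤ hi || error("$name: start value $start outside [$lo, $hi]")
    return true
end

check_bounds(:m0s, (0.1, 0.3, 0.5))      # passes
check_bounds(:T2s, (8e-6, 1e-5, 12e-6))  # passes
```

This helper is purely illustrative; fit_gBloch enforces its bounds internally via LsqFit.jl.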

Linear Compression

As originally suggested by McGivney et al. for MR Fingerprinting, the manifold of signal evolutions or fingerprints is low rank, and it is often beneficial to reconstruct images directly in this domain. We can calculate a low-rank basis with

sv = Array{ComplexF64}(undef, length(s), 50)
for i=1:size(sv,2)
    sv[:,i] = calculatesignal_linearapprox(α, TRF, TR, 500randn(), 0.8 + 0.4rand(), rand(), rand(), 20rand(), 30rand(), 3rand(), 8e-6+5e-6rand(), R2slT)
end
u, _, _ = svd(sv)
u = u[:,1:9];

where a rank of 9 was chosen heuristically. The noisy signal can be compressed with

sc = u' * s
9-element Vector{ComplexF64}:
     2.095232191795673 + 0.07062335326507223im
   0.31182551677438886 - 0.20171528828098245im
    0.3402738966065719 - 0.05530539921624074im
  -0.14587886049515875 + 0.017255460534054773im
    0.0679029312395589 + 0.020642152730977997im
   0.11054193407258323 - 0.01576559752974921im
    0.0679530642359181 + 0.01914663822240742im
   0.09858663181839403 + 0.08861455789387537im
 -0.021786151035901508 - 0.004813548004764301im

and fitted by calling fit_gBloch with the keyword argument u=u:

qM = fit_gBloch(sc, α, TRF, TR; R2slT=R2slT, u=u)
MRIgeneralizedBloch.qMTparam(0.9869404824853937 - 0.001505334063416182im, 0.14011830977402992, 0.33233342451894365, 17.024536900721152, 43.45457720343623, 4.183435086430274, 8.783621861326565e-6, 89.42688237208401, 0.8904456302093543, 0.028024322448403666, LsqFit.LsqFitResult{…}(…))
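The compressed fit relies on u capturing the signal manifold: projecting a signal onto the first left-singular vectors and expanding back recovers it up to the truncation error. A self-contained sketch with a synthetic rank-3 dictionary standing in for sv above:

```julia
using LinearAlgebra

# synthetic dictionary: 200 time points, 50 signals spanning a rank-3 subspace
basis_true = randn(200, 3)
sv_toy = basis_true * randn(3, 50)

u_toy, _, _ = svd(sv_toy)
u_toy = u_toy[:, 1:3]          # keep rank 3

s_sig  = sv_toy[:, 1]          # one "signal" from the dictionary
sc_sig = u_toy' * s_sig        # compress to 3 coefficients
s_back = u_toy * sc_sig        # expand back to the time domain

# round trip is exact (up to numerical precision) for in-subspace signals
norm(s_back - s_sig) / norm(s_sig)   # ≈ 0
```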

Apparent $R_1$

The above fits treat R1f and R1s of the free and the semi-solid spin pool as independent parameters. As discussed in our paper, many publications in the literature assume an apparent R1a = R1f = R1s. The corresponding model can be fitted by specifying fit_apparentR1=true:

R1a = 1 # 1/s
s = calculatesignal_linearapprox(α, TRF, TR, ω0, B1, m0s, R1a, R2f, Rx, R1a, T2s, R2slT)
qM = fit_gBloch(vec(s), α, TRF, TR; fit_apparentR1=true, R1a = (0, 0.7, Inf), R2slT=R2slT)
MRIgeneralizedBloch.qMTparam(0.9999999999999134 + 8.486795534874881e-17im, 0.14999999999999608, 1.0000000000000748, 16.99999999999966, 30.000000000003325, 1.0000000000000748, 1.1999999999998139e-5, 100.00000000000044, 0.9000000000000087, 2.2541084629588707e-14, LsqFit.LsqFitResult{…}(…))

Here, we specified the limits of R1a explicitly, which is optional.


This page was generated using Literate.jl.