The data association module in the front-end includes a short-term data association block and a long-term one.
Short-term data association is responsible for associating corresponding features in consecutive sensor measurements.
Odometry constraints:
$$ x_{i+1} \: \sim \: \mathcal{N}(f(x_{i}, u_{i}), \Sigma_{i}) $$
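For concreteness, here is a minimal sketch of sampling from this motion model for a planar robot (the $[x, y, \theta]$ pose parameterization and the `propagate_pose` name are illustrative assumptions, not part of any particular SLAM system):

```python
import numpy as np

def propagate_pose(x_i, u_i, Sigma_i, rng=None):
    """Sample x_{i+1} ~ N(f(x_i, u_i), Sigma_i) for a planar robot.

    x_i: current pose [x, y, theta]
    u_i: odometry increment in the robot frame [dx, dy, dtheta]
    Sigma_i: 3x3 odometry covariance
    """
    if rng is None:
        rng = np.random.default_rng()
    x, y, th = x_i
    dx, dy, dth = u_i
    # f(x_i, u_i): compose the body-frame increment onto the current pose.
    mean = np.array([
        x + dx * np.cos(th) - dy * np.sin(th),
        y + dx * np.sin(th) + dy * np.cos(th),
        th + dth,
    ])
    return rng.multivariate_normal(mean, Sigma_i)
```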
Features between consecutive images can be tracked using feature matching methods or optical flow algorithms.
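One common choice is pyramidal Lucas-Kanade optical flow on Shi-Tomasi corners; the sketch below assumes OpenCV, and the parameter values (`maxCorners`, `qualityLevel`, etc.) are placeholders that would be tuned per application:

```python
import cv2

def track_features(prev_gray, curr_gray):
    """Track Shi-Tomasi corners from prev_gray to curr_gray with pyramidal LK optical flow."""
    prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                       qualityLevel=0.01, minDistance=7)
    curr_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                      prev_pts, None)
    good = status.ravel() == 1
    # Return matched point pairs (N, 2) in the previous and current image.
    return prev_pts[good].reshape(-1, 2), curr_pts[good].reshape(-1, 2)
```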
Long-term data association (or loop closure) is in charge of associating new measurements with older landmarks. It is responsible for correcting the drift accumulated due to sensor noise.
Loop closure constraints:
$$ x_{j} \: \sim \: \mathcal{N}(f(x_{i}, u_{ij}), \Lambda_{ij}) $$
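As a rough illustration of how loop-closure candidates might be detected, the sketch below brute-forces ORB descriptor matching of the current frame against stored keyframes (the function name and thresholds are assumptions; practical systems use a bag-of-words place-recognition database instead of exhaustive matching):

```python
import cv2

def detect_loop_candidate(curr_img, keyframe_imgs, min_matches=50):
    """Return the index of the most similar stored keyframe, or None.

    Naive brute-force check: match ORB descriptors of the current frame
    against every stored keyframe image.
    """
    orb = cv2.ORB_create(nfeatures=1000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    _, curr_desc = orb.detectAndCompute(curr_img, None)
    best_idx, best_count = None, 0
    for idx, kf_img in enumerate(keyframe_imgs):
        _, kf_desc = orb.detectAndCompute(kf_img, None)
        if curr_desc is None or kf_desc is None:
            continue
        matches = matcher.match(curr_desc, kf_desc)
        if len(matches) > best_count:
            best_idx, best_count = idx, len(matches)
    return best_idx if best_count >= min_matches else None
```

Once a candidate is accepted, the relative transform $u_{ij}$ between the two keyframes is estimated (e.g. with PnP or ICP, as below) and added as a loop-closure constraint.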
Once features are tracked, PnP or ICP algorithms can be used to estimate the visual odometry.
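A minimal PnP-based sketch, assuming OpenCV and known camera intrinsics `K` (the 3D-2D correspondences would come from the tracked features and existing map points):

```python
import cv2
import numpy as np

def visual_odometry_pnp(points_3d, points_2d, K):
    """Estimate the camera pose from 3D landmarks and their 2D projections (PnP + RANSAC).

    points_3d: (N, 3) landmark coordinates in the map frame
    points_2d: (N, 2) corresponding pixel observations in the current image
    K: 3x3 camera intrinsic matrix
    """
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        points_3d.astype(np.float64), points_2d.astype(np.float64),
        K, distCoeffs=None, reprojectionError=3.0)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)   # rotation: map frame -> camera frame
    return R, tvec               # camera extrinsics for the current frame
```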
When more than one reliable source of odometry is available, EKF-based sensor fusion is used to reduce the uncertainty in the motion estimates.
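A minimal EKF sketch for a planar robot, where wheel odometry drives the prediction step and a second source (e.g. visual odometry expressed as a pose measurement) drives the update; the state layout and class name are assumptions made for illustration:

```python
import numpy as np

class PoseEKF:
    """Fuse two odometry sources: one as the motion model, one as a pose measurement."""

    def __init__(self, x0, P0):
        self.x = np.asarray(x0, dtype=float)   # state: [x, y, theta]
        self.P = np.asarray(P0, dtype=float)   # state covariance

    def predict(self, u, Q):
        """Propagate the state with the odometry increment u = [dx, dy, dtheta]."""
        x, y, th = self.x
        dx, dy, dth = u
        self.x = np.array([x + dx * np.cos(th) - dy * np.sin(th),
                           y + dx * np.sin(th) + dy * np.cos(th),
                           th + dth])
        # Jacobian of the motion model with respect to the state.
        F = np.array([[1.0, 0.0, -dx * np.sin(th) - dy * np.cos(th)],
                      [0.0, 1.0,  dx * np.cos(th) - dy * np.sin(th)],
                      [0.0, 0.0,  1.0]])
        self.P = F @ self.P @ F.T + Q

    def update(self, z, R):
        """Correct with a pose measurement z = [x, y, theta] from the second source."""
        H = np.eye(3)                                  # measurement model: z = x + noise
        y = z - H @ self.x                             # innovation
        y[2] = np.arctan2(np.sin(y[2]), np.cos(y[2]))  # wrap the angle residual
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)            # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(3) - K @ H) @ self.P
```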
Local Bundle Adjustment is performed around the newly inserted keyframes.
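A rough sketch of what such a local refinement could look like with a generic least-squares solver (here SciPy; the parameterization of each pose as an angle-axis rotation plus translation 6-vector and all function names are assumptions, and production systems use dedicated solvers such as g2o or Ceres with sparse Jacobians):

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def reprojection_residuals(params, n_kf, n_pts, K, obs):
    """obs: list of (keyframe_idx, point_idx, u, v) pixel observations."""
    poses = params[:n_kf * 6].reshape(n_kf, 6)      # per keyframe: [rotvec, t]
    points = params[n_kf * 6:].reshape(n_pts, 3)
    res = []
    for kf, pt, u, v in obs:
        R = Rotation.from_rotvec(poses[kf, :3]).as_matrix()
        p_cam = R @ points[pt] + poses[kf, 3:]      # landmark in the camera frame
        proj = K @ p_cam                            # pinhole projection
        res.extend([proj[0] / proj[2] - u, proj[1] / proj[2] - v])
    return np.array(res)

def local_bundle_adjustment(poses0, points0, K, obs):
    """Jointly refine the keyframe poses and landmarks in the local window."""
    n_kf, n_pts = len(poses0), len(points0)
    x0 = np.hstack([np.asarray(poses0).ravel(), np.asarray(points0).ravel()])
    sol = least_squares(reprojection_residuals, x0,
                        args=(n_kf, n_pts, K, obs), method="trf")
    return (sol.x[:n_kf * 6].reshape(n_kf, 6),
            sol.x[n_kf * 6:].reshape(n_pts, 3))
```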
The robot odometry and loop-closure transformations, along with their measurement uncertainties, are passed from the front-end to the back-end.
A weighted non-linear solver such as the Levenberg-Marquardt (LM) algorithm is used to obtain the optimal robot poses from the relative transformations $U$:
$$ X^{*} = \underset{X}{\operatorname{argmax}} \: P(X \mid U) $$
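A small 2D pose-graph sketch of this optimization, using SciPy's wrapper around MINPACK's Levenberg-Marquardt; the constraint tuples `(i, j, u_ij, sqrt_info)` carry the relative measurements and the square roots of their information matrices (everything here is an illustrative assumption, not a specific back-end's API):

```python
import numpy as np
from scipy.optimize import least_squares

def relative_pose(xi, xj):
    """Relative SE(2) transform from pose xi to pose xj, each [x, y, theta]."""
    dx, dy = xj[:2] - xi[:2]
    c, s = np.cos(xi[2]), np.sin(xi[2])
    dth = xj[2] - xi[2]
    return np.array([c * dx + s * dy,
                     -s * dx + c * dy,
                     np.arctan2(np.sin(dth), np.cos(dth))])

def residuals(flat_poses, constraints):
    poses = flat_poses.reshape(-1, 3)
    res = []
    for i, j, u_ij, sqrt_info in constraints:       # odometry and loop-closure edges
        err = relative_pose(poses[i], poses[j]) - u_ij
        err[2] = np.arctan2(np.sin(err[2]), np.cos(err[2]))  # wrap the angle error
        res.extend(sqrt_info @ err)                 # weight by measurement confidence
    res.extend(1e6 * poses[0])   # anchor the first pose at the origin (gauge fixing)
    return np.array(res)

def optimize_pose_graph(initial_poses, constraints):
    sol = least_squares(residuals, np.asarray(initial_poses, dtype=float).ravel(),
                        args=(constraints,), method="lm")
    return sol.x.reshape(-1, 3)
```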
The global map is assembled from the optimized poses and the local point clouds. Thus we obtain the solution to the full SLAM problem:
$$ P(X_{t}, M \mid U_{t-1}, Z_{t}) $$
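A minimal sketch of that assembly step, assuming each keyframe stores a local point cloud and the back-end returns 4x4 world-from-keyframe transforms (the names are illustrative):

```python
import numpy as np

def assemble_global_map(optimized_poses, local_clouds):
    """Transform each keyframe's local cloud into the world frame and concatenate.

    optimized_poses: list of 4x4 homogeneous transforms T_world_keyframe
    local_clouds: list of (N_i, 3) point clouds in each keyframe's frame
    """
    global_points = []
    for T, cloud in zip(optimized_poses, local_clouds):
        R, t = T[:3, :3], T[:3, 3]
        global_points.append(cloud @ R.T + t)   # p_world = R @ p_local + t
    return np.vstack(global_points)
```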