Filters
EkfSplineTracker = EKFSplineTracker
module-attribute
BoxParticleFilter = EuclideanBoxParticleFilter
module-attribute
DecorrelatedScGpTracker = DecorrelatedSCGPTracker
module-attribute
SCGPTracker = FullSCGPTracker
module-attribute
ScGpTracker = FullSCGPTracker
module-attribute
IMM = InteractingMultipleModelFilter
module-attribute
JPDAF = JointProbabilisticDataAssociationFilter
module-attribute
MemEkfStarTracker = MEMEKFStarTracker
module-attribute
MemEkfTracker = MEMEKFTracker
module-attribute
MemSoekfTracker = MEMSOEKFTracker
module-attribute
SphericalHarmonicsExtendedObjectTracker = SphericalHarmonicsEOTTracker
module-attribute
__all__ = ['AbstractAxialFilter', 'AbstractDummyFilter', 'AxialKalmanFilter', 'AbstractExtendedObjectTracker', 'AbstractFilter', 'AbstractFilterManifoldMixin', 'AbstractGridFilter', 'AbstractHypersphereSubsetFilter', 'AbstractMultipleExtendedObjectTracker', 'AbstractMultitargetTracker', 'AbstractNearestNeighborTracker', 'AbstractParticleFilter', 'AbstractTrackerWithLogging', 'AssociationHypothesis', 'AssociationResult', 'BinghamFilter', 'BoxParticleFilter', 'CircularFilterMixin', 'CircularParticleFilter', 'CircularUKF', 'CostThresholdGate', 'DecorrelatedSCGPTracker', 'DecorrelatedScGpTracker', 'EKFSplineTracker', 'EkfSplineTracker', 'EuclideanBoxParticleFilter', 'EuclideanFilterMixin', 'EuclideanParticleFilter', 'FullSCGPTracker', 'ExtendedObjectAssociationResult', 'ExtendedObjectEstimate', 'FixedLagBuffer', 'FourierRHMTracker', 'GoalConditionedReplayIMMFilter', 'GoalConditionedReplayParticleFilter', 'GoalConditionedReplayParticleIMMFilter', 'GGIWTracker', 'GlobalNearestNeighbor', 'JPDAF', 'JointProbabilisticDataAssociationFilter', 'IMM', 'InteractingMultipleModelFilter', 'GPRHMTracker', 'HypercylindricalFilterMixin', 'HypercylindricalParticleFilter', 'HyperhemisphereCartProdParticleFilter', 'HyperhemisphericalFilterMixin', 'HyperhemisphericalGridFilter', 'HyperhemisphericalParticleFilter', 'HypersphericalDummyFilter', 'HypersphericalFilterMixin', 'HypersphericalParticleFilter', 'HypersphericalUKF', 'HypertoroidalDummyFilter', 'HypertoroidalFilterMixin', 'HypertoroidalFourierFilter', 'HypertoroidalParticleFilter', 'KalmanFilter', 'UnscentedKalmanFilter', 'UKFOnManifolds', 'KernelSMEFilter', 'LinBoundedFilterMixin', 'LinBoundedParticleFilter', 'LinPeriodicFilterMixin', 'BernoulliComponent', 'ManifoldExponentialMovingAverage', 'MeasurementRecord', 'MeasurementTimeBuffer', 'MultiBernoulliTracker', 'MultipleExtendedObjectStepResult', 'LinPeriodicParticleFilter', 'MEMEKFStarTracker', 'MEMEKFTracker', 
'MEMSOEKFTracker', 'MemEkfStarTracker', 'MemEkfTracker', 'MemSoekfTracker', 'NISGate', 'OutOfSequenceKalmanUpdater', 'OutOfSequenceParticleUpdater', 'OutOfSequenceResult', 'PartitionedSO3ProductParticleFilter', 'PiecewiseConstantFilter', 'ProbabilityThresholdGate', 'RandomMatrixTracker', 'SCGPTracker', 'ScGpTracker', 'TimestampedItem', 'TopKGate', 'association_result_from_hypotheses', 'build_global_nearest_neighbor_associator', 'build_kalman_measurement_initiator', 'build_linear_gaussian_hypothesis_associator', 'build_linear_gaussian_predictor', 'build_linear_gaussian_updater', 'filter_hypotheses', 'gate_hypotheses', 'hypotheses_to_cost_matrix', 'hypotheses_to_log_likelihood_matrix', 'hypotheses_to_probability_matrix', 'hypothesis_cost', 'infer_hypothesis_shape', 'linear_gaussian_association_hypotheses', 'missed_detection_hypothesis', 'quaternion_grid_transition_density', 'Track', 'TrackManager', 'TrackManagerStepResult', 'TrackStatus', 'solve_global_nearest_neighbor', 'student_t_covariance_scale', 'retrodict_linear_gaussian', 'retrodict_linear_gaussian_state', 'SE2FilterMixin', 'SE2UKF', 'SO3ProductParticleFilter', 'so3_right_multiplication_grid_transition', 'SphericalHarmonicsEOTTracker', 'SphericalHarmonicsExtendedObjectTracker', 'StateSpaceSubdivisionFilter', 'ToroidalFilterMixin', 'ToroidalParticleFilter', 'ToroidalWrappedNormalFilter', 'VonMisesFilter', 'VonMisesFisherFilter', 'WrappedNormalFilter']
module-attribute
AbstractAxialFilter
Bases: AbstractFilter
Abstract base class for filters on the hypersphere with antipodal symmetry.
composition_operator = None
instance-attribute
composition_operator_derivative = None
instance-attribute
__init__(initial_filter_state=None)
get_point_estimate()
AbstractDummyFilter
Bases: AbstractFilter
Abstract dummy filter that does nothing on predictions and updates.
Subclasses should call super().__init__ with the initial distribution.
dist
property
filter_state
property
writable
__init__(initial_filter_state)
set_state(dist)
predict_identity(noise_distribution)
predict_nonlinear(f, *args, **kwargs)
predict_nonlinear_via_transition_density(transition_density, *args)
update_identity(noise_distribution, measurement)
update_nonlinear(likelihood, measurement=None)
get_estimate()
get_point_estimate()
AbstractExtendedObjectTracker
Bases: AbstractTrackerWithLogging
log_prior_extents = log_prior_extents
instance-attribute
log_posterior_extents = log_posterior_extents
instance-attribute
prior_extents_over_time = self.history.register('prior_extents', pad_with_nan=True)
instance-attribute
posterior_extents_over_time = self.history.register('posterior_extents', pad_with_nan=True)
instance-attribute
filter_state
property
__init__(log_prior_estimates=False, log_posterior_estimates=False, log_prior_extents=False, log_posterior_extents=False)
store_prior_estimates()
store_posterior_estimates()
store_prior_extent()
store_posterior_extents()
get_point_estimate()
abstractmethod
Retrieve the estimated kinematic state and extent of the object, all flattened into a single vector.
Returns: - A vector representing both the estimated kinematic state and extent.
get_point_estimate_kinematics()
abstractmethod
Retrieve the estimated kinematic state of the object.
Returns: - A vector representing the estimated kinematic state.
get_point_estimate_extent(flatten_matrix=False)
abstractmethod
Retrieve the estimated extent of the object.
Parameters: - flatten_matrix: whether to flatten the extent matrix or not
Returns: - A matrix or vector representing the estimated extent.
get_contour_points(n)
abstractmethod
plot_point_estimate()
AbstractFilter
Bases: ABC
Abstract base class for all filters.
history = HistoryRecorder()
instance-attribute
filter_state
property
writable
dim
property
Convenience function to get the dimension of the filter. Override if the filter is not directly based on a distribution.
__init__(initial_filter_state)
abstractmethod
get_point_estimate()
Get a point estimate.
record_history(name, value, pad_with_nan=False, copy_value=True)
Append a value to a named history and return the updated history.
clear_history(name=None)
Clear a named history or all registered histories.
record_filter_state(history_name='filter_state')
Store a deep-copied snapshot of the current filter state.
record_point_estimate(history_name='point_estimate')
Store the current point estimate as a padded numeric history.
plot_filter_state()
Plot the filter state.
AbstractGridFilter
Bases: AbstractFilter
filter_state
property
writable
Expose the parent property so we can attach a setter to it.
__init__(state_init)
update_nonlinear(likelihood, z)
update_model(measurement_model, z)
Update the grid state using a reusable likelihood model.
Parameters
measurement_model : object
Model object exposing either likelihood_values(z, grid) or a
callable likelihood(z, grid) attribute. The method must return
one likelihood value per point of self.filter_state.get_grid().
z : array-like
Measurement passed to the model.
Notes
This adapter preserves the existing :meth:update_nonlinear API and
simply delegates to it after extracting the likelihood capability from
the model object.
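The capability extraction described in the Notes can be sketched in isolation. The attribute names likelihood_values and likelihood come from the docstring above; the helper name extract_likelihood is illustrative only:

```python
def extract_likelihood(measurement_model, z):
    """Return a function grid -> likelihood values for measurement z.

    Prefers the model's likelihood_values(z, grid) method and falls back
    to a callable likelihood(z, grid) attribute, mirroring the adapter
    behavior described above (helper name is illustrative).
    """
    if hasattr(measurement_model, "likelihood_values"):
        return lambda grid: measurement_model.likelihood_values(z, grid)
    likelihood = getattr(measurement_model, "likelihood", None)
    if not callable(likelihood):
        raise TypeError(
            "model exposes neither likelihood_values nor a callable likelihood"
        )
    return lambda grid: likelihood(z, grid)
```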
predict_model(transition_model)
Predict using a reusable grid transition-density model.
Parameters
transition_model : object
Model object exposing either transition_density_for_filter(self)
or a transition_density attribute compatible with this filter's
density-based prediction method.
Notes
This generic adapter is intentionally conservative: concrete grid
filters still define the actual density-based prediction method. If a
filter does not expose predict_nonlinear_via_transition_density, a
clear NotImplementedError is raised.
plot_filter_state()
AbstractMultipleExtendedObjectTracker
Bases: AbstractMultitargetTracker
Base class for trackers of multiple extended objects.
The canonical high-level output is a list of :class:ExtendedObjectEstimate
instances. get_point_estimate remains available for compatibility with
:class:AbstractMultitargetTracker and returns one vectorized estimate per
extracted object.
extraction_threshold = extraction_threshold
instance-attribute
max_extracted_objects = max_extracted_objects
instance-attribute
extract_confirmed_only = bool(extract_confirmed_only)
instance-attribute
log_prior_extents = bool(log_prior_extents)
instance-attribute
log_posterior_extents = bool(log_posterior_extents)
instance-attribute
log_prior_measurement_rates = bool(log_prior_measurement_rates)
instance-attribute
log_posterior_measurement_rates = bool(log_posterior_measurement_rates)
instance-attribute
log_cardinality = bool(log_cardinality)
instance-attribute
log_associations = bool(log_associations)
instance-attribute
prior_measurement_rates_over_time = None
instance-attribute
posterior_measurement_rates_over_time = None
instance-attribute
cardinality_over_time = None
instance-attribute
associations_over_time = None
instance-attribute
latest_step_result = None
instance-attribute
prior_extents_over_time = self.history.register('prior_extents', pad_with_nan=True)
instance-attribute
posterior_extents_over_time = self.history.register('posterior_extents', pad_with_nan=True)
instance-attribute
__init__(extraction_threshold=None, max_extracted_objects=None, extract_confirmed_only=True, log_prior_estimates=True, log_posterior_estimates=True, log_prior_extents=False, log_posterior_extents=False, log_prior_measurement_rates=False, log_posterior_measurement_rates=False, log_cardinality=False, log_associations=False)
predict(dt=None, dynamic_model=None, process_noise=None, survival_probability=None, birth_model=None, **kwargs)
abstractmethod
Propagate the multi-object posterior one time step.
update(measurements, measurement_model=None, meas_noise_cov=None, detection_probability=None, clutter_model=None, measurement_partitions=None, sensor_state=None, **kwargs)
abstractmethod
Update from one complete scan of measurements.
measurement_partitions may contain precomputed cells or alternative
partitions for trackers that separate partitioning from association.
step(measurements, predict_kwargs=None, update_kwargs=None)
Run a complete predict/update step and record enabled histories.
get_object_estimates(extraction_threshold=None, max_objects=None, confirmed_only=None)
abstractmethod
Return extracted extended-object estimates.
get_number_of_targets(confirmed_only=None)
Return the number of currently extracted objects.
get_track_labels(extraction_threshold=None, max_objects=None, confirmed_only=None)
Return labels of extracted objects.
get_point_estimate(flatten_vector=False, include_extent=True, extraction_threshold=None, max_objects=None, confirmed_only=None)
Return vectorized extracted-object estimates.
With flatten_vector=False, each column represents one object. With
flatten_vector=True, the matrix is flattened for compatibility with
existing multi-target logging helpers.
get_point_estimate_kinematics(extraction_threshold=None, max_objects=None, confirmed_only=None)
Return kinematic estimates only.
get_point_estimate_extents(extraction_threshold=None, max_objects=None, confirmed_only=None)
Return extent estimates only.
get_measurement_rate_estimates(extraction_threshold=None, max_objects=None, confirmed_only=None, unavailable_value=None)
Return measurement-rate estimates where available.
get_contour_points(n, labels=None, scaling_factor=1.0, **kwargs)
abstractmethod
Return drawable contour points for extracted objects.
get_cardinality_distribution()
Return the cardinality distribution if represented explicitly.
get_expected_number_of_targets()
Return expected cardinality when it can be inferred from estimates.
prune(*args, **kwargs)
Optional complexity-reduction hook.
merge(*args, **kwargs)
Optional component-merging hook.
cap(*args, **kwargs)
Optional component-capping hook.
reduce(*args, **kwargs)
Run optional pruning, merging, and capping hooks.
store_prior_extents()
Record prior extent estimates.
store_posterior_extents()
Record posterior extent estimates.
store_prior_measurement_rates()
Record prior measurement-rate estimates.
store_posterior_measurement_rates()
Record posterior measurement-rate estimates.
clear_history(name=None)
Clear histories and keep MEOT-specific mirrors synchronized.
ExtendedObjectAssociationResult
dataclass
Association result for one extended-object measurement scan.
In multiple extended-object tracking, one object can generate multiple detections in one scan. Therefore, associations are represented as object-to-measurement-cell assignments rather than one-to-one target/measurement pairs.
object_to_measurement_indices = field(default_factory=dict)
class-attribute
instance-attribute
clutter_indices = field(default_factory=list)
class-attribute
instance-attribute
birth_cell_indices = field(default_factory=list)
class-attribute
instance-attribute
global_hypotheses = None
class-attribute
instance-attribute
selected_partition = None
class-attribute
instance-attribute
log_likelihood = None
class-attribute
instance-attribute
diagnostics = field(default_factory=dict)
class-attribute
instance-attribute
__init__(object_to_measurement_indices=dict(), clutter_indices=list(), birth_cell_indices=list(), global_hypotheses=None, selected_partition=None, log_likelihood=None, diagnostics=dict())
ExtendedObjectEstimate
dataclass
Extracted estimate of one extended object.
The extent field is intentionally untyped because different EOT models use different shape parameterizations, e.g., random matrices, star-convex radial functions, polygons, or subobject collections.
label
instance-attribute
kinematics
instance-attribute
extent
instance-attribute
existence_probability = None
class-attribute
instance-attribute
measurement_rate = None
class-attribute
instance-attribute
weight = None
class-attribute
instance-attribute
covariance = None
class-attribute
instance-attribute
extent_uncertainty = None
class-attribute
instance-attribute
status = None
class-attribute
instance-attribute
source_component = None
class-attribute
instance-attribute
metadata = field(default_factory=dict)
class-attribute
instance-attribute
__init__(label, kinematics, extent, existence_probability=None, measurement_rate=None, weight=None, covariance=None, extent_uncertainty=None, status=None, source_component=None, metadata=dict())
MultipleExtendedObjectStepResult
dataclass
Summary of one predict/update step of a MEOT tracker.
estimates = field(default_factory=list)
class-attribute
instance-attribute
association = None
class-attribute
instance-attribute
created_labels = field(default_factory=list)
class-attribute
instance-attribute
deleted_labels = field(default_factory=list)
class-attribute
instance-attribute
confirmed_labels = field(default_factory=list)
class-attribute
instance-attribute
cardinality_distribution = None
class-attribute
instance-attribute
expected_number_of_objects = None
class-attribute
instance-attribute
diagnostics = field(default_factory=dict)
class-attribute
instance-attribute
__init__(estimates=list(), association=None, created_labels=list(), deleted_labels=list(), confirmed_labels=list(), cardinality_distribution=None, expected_number_of_objects=None, diagnostics=dict())
AbstractMultitargetTracker
Bases: AbstractTrackerWithLogging
__init__(log_prior_estimates=True, log_posterior_estimates=True)
store_prior_estimates()
store_posterior_estimates()
get_point_estimate(flatten_vector=False)
abstractmethod
get_number_of_targets()
abstractmethod
AbstractNearestNeighborTracker
Bases: AbstractMultitargetTracker
association_param = association_param or {}
instance-attribute
filter_state
property
writable
dim
property
__init__(initial_prior=None, association_param=None, log_prior_estimates=True, log_posterior_estimates=True)
find_association(measurements, measurement_matrix, cov_mats_meas)
abstractmethod
This method must be implemented in subclasses.
get_number_of_targets()
predict_linear(system_matrices, sys_noises, inputs=None)
update_linear(measurements, measurement_matrix, covMatsMeas, pairwise_cost_matrix=None)
get_point_estimate(flatten_vector=False)
AbstractParticleFilter
Bases: AbstractFilter
resampling_criterion
property
writable
Criterion deciding whether to resample after an update.
None preserves the historical behavior and always resamples.
Otherwise, the callable receives the current weighted filter state and
must return a truthy value if the particle set should be resampled.
filter_state
property
writable
__init__(initial_filter_state=None, resampling_criterion=None)
set_resampling_criterion(criterion)
Set the post-update resampling criterion and return the filter.
should_resample()
Return whether the current weighted particle set should resample.
The default criterion, None, always returns True to retain the
previous update behavior.
resample()
Manually resample particles according to their current weights.
The particle locations are sampled with replacement from the current weighted particle set, and the resulting weights are reset to uniform.
resample_if_needed()
Resample if the configured criterion requests it.
Returns
bool
True if resampling was performed, otherwise False.
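A common concrete criterion is effective-sample-size thresholding. The sketch below assumes the weighted filter state exposes its normalized weights as a w attribute; that attribute name and the helper name are assumptions for illustration:

```python
import numpy as np

def make_ess_criterion(min_ratio=0.5):
    """Build a resampling criterion that fires when the effective sample
    size ESS = 1 / sum(w_i^2) drops below min_ratio * n_particles.

    Assumes the filter state exposes weights as .w (illustrative name).
    """
    def criterion(filter_state):
        w = np.asarray(filter_state.w, dtype=float)
        w = w / w.sum()                 # renormalize against numeric drift
        ess = 1.0 / np.sum(w ** 2)      # effective sample size
        return ess < min_ratio * w.size
    return criterion
```

Such a callable could then be passed to set_resampling_criterion; with None left as the criterion, the filter keeps its always-resample behavior.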
predict_identity(noise_distribution)
predict_model(transition_model)
Predict using a reusable particle transition model.
predict_nonlinear(f, noise_distribution=None, function_is_vectorized=True, shift_instead_of_add=True)
predict_nonlinear_nonadditive(f, samples, weights)
update_model(measurement_model, measurement=None)
Update using a reusable particle measurement model.
update_identity(meas_noise, measurement, shift_instead_of_add=True)
update_nonlinear_using_likelihood(likelihood, measurement=None)
association_likelihood(likelihood)
AbstractTrackerWithLogging
Bases: ABC
log_prior_estimates = log_prior_estimates
instance-attribute
log_posterior_estimates = log_posterior_estimates
instance-attribute
history = HistoryRecorder()
instance-attribute
prior_estimates_over_time = None
instance-attribute
posterior_estimates_over_time = None
instance-attribute
prior_extents_over_time = None
instance-attribute
posterior_extents_over_time = None
instance-attribute
__init__(log_prior_estimates=False, log_posterior_estimates=False)
record_history(name, value, pad_with_nan=False, copy_value=True)
Append a value to a named history and return the updated history.
clear_history(name=None)
Clear a named history or all registered histories.
AssociationHypothesis
dataclass
Pairwise association score between one track and one measurement.
measurement_index=None denotes a missed-detection hypothesis. Costs are
interpreted as values to minimize; likelihoods and probabilities are values
to maximize. Conversion helpers use the most explicit available score in
the order cost, normalized_innovation_squared, log_likelihood,
then probability.
track_index
instance-attribute
measurement_index
instance-attribute
cost = None
class-attribute
instance-attribute
log_likelihood = None
class-attribute
instance-attribute
probability = None
class-attribute
instance-attribute
innovation = None
class-attribute
instance-attribute
innovation_covariance = None
class-attribute
instance-attribute
normalized_innovation_squared = None
class-attribute
instance-attribute
accepted = True
class-attribute
instance-attribute
reason = None
class-attribute
instance-attribute
metadata = None
class-attribute
instance-attribute
is_missed_detection
property
Return whether the hypothesis represents an unmatched track.
with_acceptance(accepted, reason=None)
Return a copy with updated gate acceptance metadata.
__init__(track_index, measurement_index, cost=None, log_likelihood=None, probability=None, innovation=None, innovation_covariance=None, normalized_innovation_squared=None, accepted=True, reason=None, metadata=None)
CostThresholdGate
Gate hypotheses by maximum minimization cost.
threshold = float(threshold)
instance-attribute
missing_cost = float(missing_cost)
instance-attribute
__init__(threshold, *, missing_cost=np.inf)
accepts(hypothesis)
__call__(hypothesis)
NISGate
Gate association hypotheses by normalized innovation squared.
threshold = float(threshold)
instance-attribute
__init__(threshold=None, *, measurement_dim=None, confidence=None)
accepts(hypothesis)
Return whether hypothesis is accepted by the NIS threshold.
__call__(hypothesis)
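The underlying computation is the standard chi-square gate: the NIS y^T S^{-1} y is accepted when it falls below a chi-square quantile whose degrees of freedom equal the measurement dimension. A standalone sketch of that computation (5.991 is the well-known 95% chi-square quantile for two degrees of freedom; the helper name is illustrative):

```python
import numpy as np

def normalized_innovation_squared(innovation, innovation_cov):
    """NIS y^T S^-1 y for innovation y with innovation covariance S."""
    y = np.asarray(innovation, dtype=float)
    s = np.asarray(innovation_cov, dtype=float)
    return float(y @ np.linalg.solve(s, y))

CHI2_95_2D = 5.991  # 95% chi-square quantile, 2 degrees of freedom

nis = normalized_innovation_squared([1.0, 2.0], np.eye(2))  # 1 + 4 = 5.0
accepted = nis <= CHI2_95_2D
```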
ProbabilityThresholdGate
Gate hypotheses by minimum probability or likelihood.
threshold = float(threshold)
instance-attribute
use_likelihood = bool(use_likelihood)
instance-attribute
__init__(threshold, *, use_likelihood=False)
accepts(hypothesis)
__call__(hypothesis)
TopKGate
Keep the best k hypotheses per track or per measurement.
k = int(k)
instance-attribute
mode = mode
instance-attribute
missing_cost = float(missing_cost)
instance-attribute
__init__(k, *, mode='track', missing_cost=np.inf)
filter(hypotheses)
Return hypotheses accepted by the top-k rule.
accepts(hypothesis)
__call__(hypotheses)
AxialKalmanFilter
Bases: AbstractAxialFilter
Kalman Filter for directional estimation with antipodal symmetry.
Works for antipodally symmetric complex numbers (2D unit vectors) and quaternions (4D unit vectors).
References: - Gerhard Kurz, Igor Gilitschenski, Simon Julier, Uwe D. Hanebeck, Recursive Bingham Filter for Directional Estimation Involving 180 Degree Symmetry, Journal of Advances in Information Fusion, 9(2):90-105, December 2014.
dim
property
Manifold dimension (1 for complex/circle, 3 for quaternions).
filter_state
property
writable
__init__()
predict_identity(gauss_w)
Predict assuming identity system model with noise gauss_w.
Computes x(k+1) = x(k) ⊕ w(k), where ⊕ is complex or quaternion multiplication.
Parameters: gauss_w (GaussianDistribution): system noise with unit vector mean
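For the quaternion case, ⊕ is the Hamilton product. A self-contained sketch of that composition, independent of the filter class (the [w, x, y, z] storage convention is an assumption):

```python
import numpy as np

def quaternion_multiply(p, q):
    """Hamilton product p ⊕ q for quaternions stored as [w, x, y, z]."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw * qw - px * qx - py * qy - pz * qz,
        pw * qx + px * qw + py * qz - pz * qy,
        pw * qy - px * qz + py * qw + pz * qx,
        pw * qz + px * qy - py * qx + pz * qw,
    ])
```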
update_identity(gauss_v, z)
Update assuming identity measurement model with noise gauss_v.
Computes z(k) = x(k) ⊕ v(k), where ⊕ is complex or quaternion multiplication.
Parameters: gauss_v (GaussianDistribution): measurement noise with unit vector mean z (array): measurement as a unit vector of shape (2,) or (4,)
get_point_estimate()
Return the mean of the current filter state.
BinghamFilter
Bases: AbstractFilter
Recursive filter based on the Bingham distribution.
Supports antipodally symmetric complex numbers (2D) and quaternions (4D).
References: - Gerhard Kurz, Igor Gilitschenski, Simon Julier, Uwe D. Hanebeck, Recursive Bingham Filter for Directional Estimation Involving 180 Degree Symmetry, Journal of Advances in Information Fusion, 9(2):90-105, December 2014. - Igor Gilitschenski, Gerhard Kurz, Simon J. Julier, Uwe D. Hanebeck, Unscented Orientation Estimation Based on the Bingham Distribution, IEEE Transactions on Automatic Control, January 2016.
filter_state
property
writable
__init__()
predict_identity(bw)
Predict assuming identity system model with Bingham noise.
Computes x(k+1) = x(k) (*) w(k), where (*) is complex or quaternion multiplication and w(k) ~ bw.
Parameters: bw (BinghamDistribution): noise distribution
predict_nonlinear(a, bw)
Predict assuming nonlinear system model with Bingham noise.
Computes x(k+1) = a(x(k)) (*) w(k) using a sigma-point approximation.
Parameters: a (callable): nonlinear system function mapping R^n -> R^n bw (BinghamDistribution): noise distribution
update_identity(bv, z)
Update assuming identity measurement model with Bingham noise.
Applies the measurement z using likelihood based on Bingham noise bv.
Parameters: bv (BinghamDistribution): measurement noise distribution z (numpy.ndarray): measurement as a unit vector of shape (dim+1,)
get_point_estimate()
Return the mode of the current distribution as a point estimate.
CircularParticleFilter
Bases: HypertoroidalParticleFilter, CircularFilterMixin
Sequential importance resampling particle filter on the circle.
References
Kurz, G., Gilitschenski, I., & Hanebeck, U. D. (2015). Recursive Bayesian Filtering in Circular State Spaces. arXiv preprint.
__init__(n_particles)
Initialize the CircularParticleFilter.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| n_particles | Union[int, int32, int64] | number of particles | required |
compute_association_likelihood(likelihood)
Compute the likelihood of association based on the PDF of the likelihood and the filter state.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| likelihood | | likelihood object with a PDF method | required |

Returns:

| Type | Description |
|---|---|
| float64 | association likelihood value |
CircularUKF
Bases: AbstractFilter, CircularFilterMixin
A modified unscented Kalman filter for circular distributions.
The state is represented as a 1-D :class:GaussianDistribution on the
circle [0, 2*pi).
filter_state
property
writable
__init__(alpha=0.001, beta=2.0, kappa=0.0)
Initialise with a standard Gaussian at 0 with unit variance.
Parameters
alpha: UKF sigma-point spread parameter (default 1e-3).
beta: UKF prior-distribution parameter (default 2.0, optimal for Gaussian).
kappa: UKF secondary scaling parameter (default 0.0).
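These are the standard scaled-unscented-transform parameters. The sigma-point weights they induce for an n-dimensional state follow the usual formulas; a sketch of that computation, not necessarily the class's internal implementation:

```python
import numpy as np

def ukf_weights(n, alpha=1e-3, beta=2.0, kappa=0.0):
    """Mean and covariance weights of the scaled unscented transform."""
    lam = alpha ** 2 * (n + kappa) - n   # scaling parameter lambda
    c = n + lam
    wm = np.full(2 * n + 1, 0.5 / c)     # mean weights, sigma points 1..2n
    wc = np.full(2 * n + 1, 0.5 / c)     # covariance weights, 1..2n
    wm[0] = lam / c                      # center point, mean weight
    wc[0] = lam / c + (1.0 - alpha ** 2 + beta)  # center point, covariance
    return wm, wc
```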
predict_identity(gauss_sys)
Predict assuming identity system model: x(k+1) = x(k) + w(k) mod 2pi, where w(k) is additive noise described by gauss_sys.
Parameters
gauss_sys:
Distribution of additive system noise (converted to
:class:GaussianDistribution if necessary).
predict_nonlinear(f, gauss_sys)
Predict assuming a nonlinear system model: x(k+1) = f(x(k)) + w(k) mod 2pi, where w(k) is additive noise described by gauss_sys.
Parameters
f: Function from [0, 2pi) to [0, 2pi).
gauss_sys: Distribution of additive system noise.
update_identity(gauss_meas, z)
Update assuming identity measurement model: z(k) = x(k) + v(k) mod 2pi, where v(k) is additive noise described by gauss_meas.
Parameters
gauss_meas: Distribution of additive measurement noise.
z: Scalar measurement in [0, 2*pi).
update_nonlinear(f, gauss_meas, z, measurement_periodic=False)
Update assuming a nonlinear measurement model: z(k) = f(x(k)) + v(k) if measurement_periodic is False, or z(k) = f(x(k)) + v(k) mod 2pi if measurement_periodic is True, where v(k) is additive noise described by gauss_meas.
Parameters
f: Measurement function from [0, 2*pi) to R^d.
gauss_meas: Distribution of additive measurement noise.
z: Measurement vector of shape (d,).
measurement_periodic: Whether the measurement is a periodic quantity.
get_point_estimate()
Return the mean of the current state estimate.
EKFSplineTracker
Bases: AbstractExtendedObjectTracker
EKF tracker for a 2-D extended object with a closed quadratic spline extent.
The state is [x, y, orientation, speed, turn_rate, scale_x, scale_y].
The extent is represented by fixed body-frame spline control points and two
estimated scale factors. Measurements are associated to the closest point on
the currently predicted closed spline, then corrected with an EKF update.
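The motion model for the kinematic part is not spelled out here; a plausible reading of the state layout [x, y, orientation, speed, turn_rate] is a constant-turn-rate step, sketched below as a single Euler step. This is an illustrative assumption, not necessarily the tracker's exact model:

```python
import numpy as np

def ct_predict(kinematic_state, dt):
    """One Euler step of an assumed constant-turn-rate model for
    [x, y, orientation, speed, turn_rate] (illustrative only)."""
    x, y, theta, v, omega = kinematic_state
    return np.array([
        x + dt * v * np.cos(theta),  # position advances along heading
        y + dt * v * np.sin(theta),
        theta + dt * omega,          # heading rotates with the turn rate
        v,                           # speed assumed constant
        omega,                       # turn rate assumed constant
    ])
```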
measurement_dim = 2
instance-attribute
kinematic_dim = 5
instance-attribute
state_dim = 7
instance-attribute
control_points = self._validate_control_points(control_points)
instance-attribute
num_control_points = self.control_points.shape[0]
instance-attribute
state = concatenate([kinematic_state, scale_state])
instance-attribute
covariance = self._as_covariance_matrix(covariance, self.state_dim, 'covariance')
instance-attribute
process_noise = self._as_covariance_matrix(process_noise, self.state_dim, 'process_noise', require_positive_semidefinite=False)
instance-attribute
measurement_noise = self._as_covariance_matrix(measurement_noise, self.measurement_dim, 'measurement_noise', require_positive_semidefinite=False)
instance-attribute
dt = float(dt)
instance-attribute
acceleration_variance = float(acceleration_variance)
instance-attribute
turn_rate_variance = float(turn_rate_variance)
instance-attribute
scale_process_noise = float(scale_process_noise)
instance-attribute
scale_correction = bool(scale_correction)
instance-attribute
orientation_correction = bool(orientation_correction)
instance-attribute
finite_difference_step = float(finite_difference_step)
instance-attribute
closest_point_grid_size = int(closest_point_grid_size)
instance-attribute
closest_point_iterations = int(closest_point_iterations)
instance-attribute
last_quadratic_form = None
instance-attribute
__init__(control_points=None, kinematic_state=None, scale_state=None, covariance=None, process_noise=None, measurement_noise=None, dt=1.0, acceleration_variance=0.0, turn_rate_variance=0.0, scale_process_noise=0.0, scale_correction=True, orientation_correction=True, finite_difference_step=1e-05, closest_point_grid_size=11, closest_point_iterations=8, log_prior_estimates=False, log_posterior_estimates=False, log_prior_extents=False, log_posterior_extents=False)
default_control_points()
staticmethod
Return a rounded-rectangle control polygon used by the toolbox demo.
predict(dt=None, process_noise=None)
update(measurements, R=None)
get_scaled_control_points(global_frame=False)
get_point_estimate()
get_point_estimate_kinematics()
get_point_estimate_extent(flatten_matrix=False)
get_contour_points(n=100, scaling_factor=1.0)
get_bounding_box(n=100)
EuclideanBoxParticleFilter
Bases: AbstractParticleFilter, EuclideanFilterMixin
Box particle filter for Euclidean state spaces.
The state is represented by a weighted mixture of uniform hyperrectangles. Prediction propagates boxes through an inclusion function. Correction contracts predicted boxes and multiplies the weights by the contracted volume divided by the predicted volume.
default_box_half_width = LinearBoxParticleDistribution._coerce_half_width(box_half_width, dim)
instance-attribute
split_resampled_boxes = split_resampled_boxes
instance-attribute
filter_state
property
writable
__init__(n_particles, dim, box_half_width=0.5, resampling_criterion=None, split_resampled_boxes=True)
predict_identity(noise_distribution=None, process_noise_bounds=None, scaling_factor=3.0)
Predict with identity dynamics and bounded additive process noise.
predict_interval(inclusion_function, process_noise_bounds=None, function_is_vectorized=True)
Predict boxes using an interval inclusion function.
inclusion_function must return (lower, upper) for the image of
the input boxes. In vectorized mode it receives arrays with shape
(n_boxes, dim); otherwise it is called once per box with vectors of
shape (dim,).
predict_nonlinear(f, noise_distribution=None, function_is_vectorized=True, shift_instead_of_add=False, process_noise_bounds=None, inclusion_function=None, scaling_factor=3.0)
Predict through nonlinear dynamics.
For guaranteed interval propagation, pass inclusion_function. If no
inclusion function is supplied, the method encloses the images of the box
corners. Corner enclosure is exact for affine maps and often useful for
monotone maps, but it is not a mathematically guaranteed inclusion for an
arbitrary nonlinear function.
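Corner enclosure propagates all 2^d corners of a box through f and takes componentwise minima and maxima of the images. The sketch below shows the construction on an affine map, where the enclosure is exact; the helper name is illustrative:

```python
import itertools
import numpy as np

def corner_enclosure(f, lower, upper):
    """Enclose the image of box [lower, upper] under f by the componentwise
    min/max of the corner images. Exact for affine f; for a general
    nonlinear f this is only a heuristic, not a guaranteed inclusion."""
    corners = np.array(list(itertools.product(*zip(lower, upper))))
    images = np.array([f(c) for c in corners])
    return images.min(axis=0), images.max(axis=0)

# Affine map: rotation by 90 degrees plus a shift -- enclosure is exact here.
A = np.array([[0.0, -1.0], [1.0, 0.0]])
b = np.array([1.0, 0.0])
lo, hi = corner_enclosure(
    lambda x: A @ x + b, np.array([0.0, 0.0]), np.array([1.0, 2.0])
)
```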
update_identity_box(measurement_lower, measurement_upper=None, measurement_noise_bounds=None)
Update by intersecting state boxes with an identity measurement box.
If the measurement model is z = x + v and measurement_noise_bounds
is supplied as (v_lower, v_upper), the state constraint is
x in [z_lower - v_upper, z_upper - v_lower].
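The interval arithmetic behind this constraint is one subtraction per bound: with z = x + v, the state is x = z - v, so the largest admissible noise shifts the lower state bound and the smallest shifts the upper. A minimal numeric sketch (helper name illustrative):

```python
import numpy as np

def identity_measurement_state_box(z_lower, z_upper, v_lower, v_upper):
    """State box implied by z = x + v with v in [v_lower, v_upper]."""
    z_lower, z_upper = np.asarray(z_lower, float), np.asarray(z_upper, float)
    v_lower, v_upper = np.asarray(v_lower, float), np.asarray(v_upper, float)
    # x = z - v, so the extremes of v bound x from the opposite sides.
    return z_lower - v_upper, z_upper - v_lower

x_lo, x_hi = identity_measurement_state_box([2.0], [3.0], [-0.5], [0.5])
```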
update_contracted(contractor, measurement=None, likelihood=None)
Update by contracting predicted boxes.
contractor receives (lower, upper) or (lower, upper,
measurement) and returns contracted (lower, upper) arrays. The Box
PF likelihood is the contracted volume divided by the predicted volume.
An optional additional likelihood can be multiplied in, evaluated at the
contracted box centers.
update_nonlinear_using_likelihood(likelihood, measurement=None)
Fallback update using a point likelihood at box centers.
association_likelihood(likelihood)
effective_sample_size()
Return the usual particle-filter effective sample size.
resample(split_resampled_boxes=None)
Multinomially resample boxes and split duplicates along their widest side.
EuclideanParticleFilter
Bases: AbstractParticleFilter, EuclideanFilterMixin
Particle filter on Euclidean state spaces.
filter_state
property
writable
Get the filter state.
__init__(n_particles, dim)
predict_nonlinear(f, noise_distribution=None, function_is_vectorized=True, shift_instead_of_add=False)
Predict for nonlinear system model.
FourierRHMTracker
Bases: AbstractExtendedObjectTracker
Star-convex Random Hypersurface Model with Fourier coefficients.
The extent is represented by a radial function
r(phi) = b0 / 2 + sum_k a_k cos(k phi) + c_k sin(k phi).
Measurements are processed with the squared RHM pseudo-measurement from
Baum and Hanebeck's star-convex RHM, using an augmented UKF over the shape
and position state, the random scale variable, and additive measurement
noise.
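The radial profile above is simple to evaluate. The coefficient ordering [b0, a_1, c_1, ..., a_K, c_K] used here is an assumption for illustration; the tracker's internal layout may differ.

```python
import numpy as np

def radial_function(phi, coeffs):
    # r(phi) = b0/2 + sum_k a_k cos(k phi) + c_k sin(k phi),
    # with coeffs ordered [b0, a_1, c_1, ..., a_K, c_K] (assumed ordering).
    n_harmonics = (len(coeffs) - 1) // 2
    r = np.full_like(np.asarray(phi, dtype=float), coeffs[0] / 2.0)
    for k in range(1, n_harmonics + 1):
        r = r + coeffs[2 * k - 1] * np.cos(k * phi) + coeffs[2 * k] * np.sin(k * phi)
    return r

coeffs = np.array([2.0, 0.3, 0.0])   # base radius 1 plus a small first cosine harmonic
phi = np.linspace(0.0, 2 * np.pi, 8, endpoint=False)
r = radial_function(phi, coeffs)
```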
n_harmonics = int(n_harmonics)
instance-attribute
n_fourier_coefficients = 2 * self.n_harmonics + 1
instance-attribute
state_dim = self.n_fourier_coefficients + 2
instance-attribute
fourier_coefficients = self._as_vector(fourier_coefficients, self.n_fourier_coefficients, 'fourier_coefficients')
instance-attribute
kinematic_state = self._as_vector(kinematic_state, 2, 'kinematic_state')
instance-attribute
covariance = self._as_square_matrix(covariance, self.state_dim, 'covariance')
instance-attribute
scale_mean = float(scale_mean)
instance-attribute
scale_variance = float(scale_variance)
instance-attribute
ukf_alpha = float(ukf_alpha)
instance-attribute
ukf_beta = float(ukf_beta)
instance-attribute
ukf_kappa = float(ukf_kappa)
instance-attribute
covariance_regularization = float(covariance_regularization)
instance-attribute
latest_pseudo_measurement = None
instance-attribute
latest_innovation_covariance = None
instance-attribute
__init__(n_harmonics, fourier_coefficients=None, kinematic_state=None, covariance=None, coefficient_covariance=0.02, kinematic_covariance=0.3, initial_radius=1.0, scale_mean=0.7, scale_variance=0.08, ukf_alpha=1.0, ukf_beta=0.0, ukf_kappa=0.0, covariance_regularization=1e-09, log_prior_estimates=False, log_posterior_estimates=False, log_prior_extents=False, log_posterior_extents=False)
fourier_basis(phi)
Return the Fourier basis vector or basis matrix for angle phi.
evaluate_radius(phi)
Evaluate the star-convex radial function at one or more angles.
get_point_estimate()
get_point_estimate_kinematics()
get_point_estimate_extent(flatten_matrix=False)
get_extents_on_grid(n=100)
get_contour_points(n=100)
predict_identity(sys_noise=None)
predict_linear(system_matrix, sys_noise=None, inputs=None)
predict(*args, **kwargs)
update(measurements, meas_noise_cov, scale_mean=None, scale_variance=None)
Update from one or more 2-D measurements.
measurements may be a single vector, a (2, n) matrix, or a
(n, 2) matrix. Each column/row is processed as one RHM contour or
interior-source observation.
GGIWTracker
Bases: AbstractExtendedObjectTracker
Gamma-Gaussian-inverse-Wishart tracker for one extended object.
The posterior is represented by a Gaussian kinematic state, an inverse-Wishart
extent model, and a Gamma model for the expected number of detections per
scan. The public extent estimate follows the same ellipse convention used in
several extended-object tracking references: E[X] = V / (nu - 2d - 2),
where V is the inverse-Wishart scale matrix and d is the measurement
dimension.
The update uses a centroid Kalman step for the kinematics, a scatter update for the extent sufficient statistics, and a conjugate Gamma-Poisson count update. This keeps the implementation compatible with PyRecEst's existing full covariance state representation instead of assuming a separable kinematic/extent covariance.
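The two point-estimate conventions just stated can be checked numerically. The numbers below are made up for illustration; only the formulas E[X] = V / (nu - 2d - 2) and E[gamma] = a / b are taken from the description above.

```python
import numpy as np

V = np.array([[8.0, 0.0], [0.0, 2.0]])   # inverse-Wishart scale matrix
nu = 10.0                                 # extent degrees of freedom
d = 2                                     # measurement dimension
extent_mean = V / (nu - 2 * d - 2)        # posterior mean extent ellipse matrix

gamma_shape, gamma_rate = 12.0, 3.0
rate_mean = gamma_shape / gamma_rate      # expected detections per scan
```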
kinematic_state = array(kinematic_state)
instance-attribute
covariance = self._as_covariance_matrix(covariance, self.kinematic_state.shape[0], 'covariance')
instance-attribute
measurement_dim = extent.shape[0]
instance-attribute
extent_degrees_of_freedom = float(extent_degrees_of_freedom)
instance-attribute
gamma_shape = float(gamma_shape)
instance-attribute
gamma_rate = float(gamma_rate)
instance-attribute
extent_innovation_weight = float(extent_innovation_weight)
instance-attribute
subtract_measurement_noise_from_scatter = bool(subtract_measurement_noise_from_scatter)
instance-attribute
latest_log_likelihood = None
instance-attribute
measurement_matrix = None
instance-attribute
extent_scale = self._symmetrize(extent)
instance-attribute
extent
property
Return the posterior mean extent matrix.
__init__(kinematic_state, covariance, extent, extent_degrees_of_freedom, gamma_shape, gamma_rate, measurement_matrix=None, extent_is_scale=False, extent_innovation_weight=1.0, subtract_measurement_noise_from_scatter=False, log_prior_estimates=False, log_posterior_estimates=False, log_prior_extents=False, log_posterior_extents=False)
get_measurement_rate_estimate()
Return the posterior mean of the Poisson measurement rate.
get_point_estimate()
get_point_estimate_kinematics()
get_point_estimate_extent(flatten_matrix=False)
predict_linear(system_matrix, sys_noise, inputs=None, extent_forgetting_factor=1.0, measurement_rate_forgetting_factor=1.0)
Predict with a linear kinematic model and optional information decay.
predict(*args, **kwargs)
Alias for :meth:predict_linear to match existing EOT tracker APIs.
update(measurements, meas_mat=None, meas_noise_cov=None, extent_innovation_weight=None)
Update from all detections generated by the target in one scan.
get_contour_points(n, scaling_factor=1.0)
GlobalNearestNeighbor
Bases: AbstractNearestNeighborTracker
Global nearest-neighbor tracker for linear/Gaussian multitarget tracking.
Besides the built-in geometric association costs, this implementation can
optionally fuse an externally computed pairwise_cost_matrix of shape
(n_targets, n_meas). This is useful for domains such as longitudinal
calcium-imaging cell tracking where association should depend on arbitrary
pairwise cues like ROI overlap, footprint correlation, or appearance
embeddings in addition to centroid distance.
__init__(initial_prior=None, association_param=None, log_prior_estimates=True, log_posterior_estimates=True)
find_association(measurements, measurement_matrix, cov_mats_meas, warn_on_no_meas_for_track=True, pairwise_cost_matrix=None)
Find the minimum-cost measurement-to-track assignment.
Parameters
measurements : array-like, shape (dim_meas, n_meas)
Measurements for the current update step.
measurement_matrix : array-like
Linear measurement model.
cov_mats_meas : array-like
Measurement covariance matrix or per-measurement covariance tensor.
warn_on_no_meas_for_track : bool, optional
Whether to emit a warning when a track remains unassigned.
pairwise_cost_matrix : array-like, optional
Additional target/measurement association costs of shape
(n_targets, n_meas). These costs are added to the geometric cost
matrix before running the Hungarian algorithm.
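A minimal sketch of this cost fusion using SciPy's Hungarian solver (the matrices are invented for illustration) shows how an external cue can override a purely geometric assignment:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Geometric costs alone would assign target 0 -> measurement 0 and
# target 1 -> measurement 1; a strong appearance penalty on the (1, 1)
# pair flips the optimal assignment.
geometric_cost = np.array([[1.0, 5.0], [4.0, 2.0]])   # (n_targets, n_meas)
pairwise_cost = np.array([[0.0, 0.0], [0.0, 10.0]])   # e.g. appearance dissimilarity
total_cost = geometric_cost + pairwise_cost
rows, cols = linear_sum_assignment(total_cost)        # minimum-cost assignment
```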
update_linear(measurements, measurement_matrix, covMatsMeas, pairwise_cost_matrix=None)
Update the tracker with an optional additional association cost matrix.
GoalConditionedReplayIMMFilter
Bases: AbstractFilter, EuclideanFilterMixin
Goal-conditioned replay filter with discrete goals and IMM-style mode switching.
Parameters
initial_state:
Either a GaussianDistribution or a tuple (mean, covariance).
The mean/covariance may describe either:
- position only, with shape (position_dim,) and
(position_dim, position_dim)
- concatenated position/velocity, with shape (2 * position_dim,) and
(2 * position_dim, 2 * position_dim).
candidate_goals:
Candidate replay goals with shape (n_goals, position_dim).
For 1-D position spaces, a 1-D array is interpreted as multiple scalar
candidate goals.
dt:
Default time step used by :meth:predict_replay.
attraction_strength:
Strength of the smooth goal attraction.
velocity_decay:
Velocity retention in the smooth mode.
jump_fraction:
Fraction of the remaining distance to the goal traversed during a
jump-mode prediction.
jump_velocity_decay:
Velocity retention in the jump mode.
jump_probability:
Default probability of switching from the smooth mode to the jump mode.
Ignored if mode_transition_matrix is provided.
jump_stickiness:
Default probability of staying in the jump mode. Ignored if
mode_transition_matrix is provided.
smooth_sys_noise_cov, jump_sys_noise_cov:
Process-noise covariances for the smooth and jump modes.
goal_transition_matrix:
Row-stochastic matrix with shape (n_goals, n_goals) and entries
P(goal_t = j | goal_{t-1} = i).
mode_transition_matrix:
Row-stochastic matrix with shape (2, 2) and entries
P(mode_t = j | mode_{t-1} = i).
goal_prior, mode_prior:
Initial probability vectors over candidate goals and motion modes.
initial_velocity_covariance:
Covariance used to augment a position-only initial state.
covariance_regularization:
Small diagonal term added after covariance updates for numerical
stability.
weight_floor:
Lower bound used when taking logarithms of mixture weights.
finite_difference_epsilon:
Step size for numerical Jacobians in :meth:predict_nonlinear and
:meth:update_nonlinear if no analytic Jacobian is supplied.
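When no mode_transition_matrix is supplied, jump_probability and jump_stickiness combine into the default row-stochastic 2x2 matrix (this mirrors the attribute definition listed below):

```python
import numpy as np

# Rows: current mode (smooth, jump); columns: next mode (smooth, jump).
jump_probability, jump_stickiness = 0.05, 0.05
mode_transition_matrix = np.array([
    [1.0 - jump_probability, jump_probability],  # from smooth
    [1.0 - jump_stickiness, jump_stickiness],    # from jump
])
```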
mode_names = ('smooth', 'jump')
class-attribute
instance-attribute
position_dim = parsed_goals.shape[1]
instance-attribute
state_dim = 2 * self.position_dim
instance-attribute
n_goals = parsed_goals.shape[0]
instance-attribute
n_modes = 2
instance-attribute
n_components = self.n_goals * self.n_modes
instance-attribute
dt = float(dt)
instance-attribute
attraction_strength = float(attraction_strength)
instance-attribute
velocity_decay = float(velocity_decay)
instance-attribute
jump_fraction = float(jump_fraction)
instance-attribute
jump_velocity_decay = float(jump_velocity_decay)
instance-attribute
covariance_regularization = float(covariance_regularization)
instance-attribute
weight_floor = float(weight_floor)
instance-attribute
finite_difference_epsilon = float(finite_difference_epsilon)
instance-attribute
smooth_sys_noise_cov = self._as_square_matrix(smooth_sys_noise_cov if smooth_sys_noise_cov is not None else 0.05 * eye(self.state_dim), self.state_dim, 'smooth_sys_noise_cov')
instance-attribute
jump_sys_noise_cov = self._as_square_matrix(jump_sys_noise_cov if jump_sys_noise_cov is not None else 1.0 * eye(self.state_dim), self.state_dim, 'jump_sys_noise_cov')
instance-attribute
goal_transition_matrix = eye(self.n_goals)
instance-attribute
mode_transition_matrix = self._validate_transition_matrix(array([[1.0 - jump_probability, jump_probability], [1.0 - jump_stickiness, jump_stickiness]]), self.n_modes, 'mode_transition_matrix')
instance-attribute
goal_prior = self._validate_probability_vector(full((self.n_goals,), 1.0 / self.n_goals) if goal_prior is None else goal_prior, self.n_goals, 'goal_prior')
instance-attribute
mode_prior = self._validate_probability_vector(array([1.0, 0.0]) if mode_prior is None else mode_prior, self.n_modes, 'mode_prior')
instance-attribute
filter_state
property
writable
dim
property
position_slice
property
velocity_slice
property
candidate_goals
property
goal_candidates
property
component_weights
property
component_filter_states
property
goal_probabilities
property
mode_probabilities
property
last_update_log_marginal
property
__init__(initial_state, candidate_goals=None, *, goal_candidates=None, dt=1.0, attraction_strength=1.0, velocity_decay=0.95, jump_fraction=0.9, jump_velocity_decay=0.25, jump_probability=0.05, jump_stickiness=0.05, smooth_sys_noise_cov=None, jump_sys_noise_cov=None, goal_transition_matrix=None, mode_transition_matrix=None, goal_prior=None, mode_prior=None, initial_velocity_covariance=None, covariance_regularization=1e-09, weight_floor=1e-300, finite_difference_epsilon=1e-05)
from_factorized_priors(position_distribution, candidate_goals=None, velocity_distribution=None, *, goal_candidates=None, goal_prior=None, mode_prior=None, initial_velocity_covariance=None, **kwargs)
classmethod
Construct the filter from separate position/velocity priors.
position_distribution and velocity_distribution may be Gaussian
distributions, tuples (mean, covariance), or any objects exposing
compatible first and second moments through mean() and either
covariance() or C.
initialize_from_state_priors(position_prior, velocity_prior=None, *, goal_prior=None, mode_prior=None)
Reset the filter from separate position / velocity priors.
This is a convenience instance-level analogue of
:meth:from_factorized_priors.
get_point_estimate()
get_state_estimate()
get_position_estimate()
get_velocity_estimate()
get_goal_estimate()
get_position_distribution()
get_velocity_distribution()
get_goal_distribution()
get_goal_posterior_weights(candidate_goals=None)
most_likely_goal_index()
most_likely_goal()
most_likely_mode_index()
most_likely_mode()
reweight_components(likelihoods=None, log_likelihoods=None, *, return_log_marginal=False)
Reweight only the discrete hypotheses.
This is useful when an external spike / replay likelihood is already available and only the discrete goal/mode posterior should be updated.
predict(dt=None, smooth_sys_noise_cov=None, jump_sys_noise_cov=None)
predict_replay(dt=None, smooth_sys_noise_cov=None, jump_sys_noise_cov=None)
predict_goal_conditioned(dt=None, smooth_sys_noise_cov=None, jump_sys_noise_cov=None)
Predict with the built-in goal-conditioned replay dynamics.
predict_identity(sys_noise_cov, sys_input=None)
predict_linear(system_matrix, sys_noise_cov, sys_input=None)
predict_nonlinear(fx, sys_noise_cov, jacobian=None, dt=None, **fx_args)
Predict with a shared nonlinear dynamics model for all components.
update_identity(measurement, meas_noise, return_log_marginal=False)
update_position(measurement, meas_noise, return_log_marginal=False)
update_position_measurement(measurement, meas_noise, return_log_marginal=False)
update_linear(measurement, measurement_matrix, meas_noise, return_log_marginal=False)
update_nonlinear(measurement, hx, cov_mat_meas, jacobian=None, return_log_marginal=False, **hx_args)
Update with a nonlinear measurement function via EKF linearization.
update_velocity(measurement, meas_noise, return_log_marginal=False)
update_goal(measurement, meas_noise, return_log_marginal=False)
Reweight discrete goal hypotheses from a measurement in goal space.
This does not alter the continuous component states. It only updates the posterior weights over discrete goal/mode hypotheses.
association_likelihood_identity(measurement, meas_noise)
association_likelihood_linear(measurement, measurement_matrix, meas_noise)
association_likelihood_position(measurement, meas_noise)
association_likelihood_velocity(measurement, meas_noise)
association_likelihood_goal(measurement, meas_noise)
GoalConditionedReplayParticleFilter
Bases: EuclideanParticleFilter
Particle filter for goal-conditioned replay with sparse jumps.
Parameters
n_particles:
Number of particles.
position_dim:
Dimension of replayed position, velocity, and latent goal.
initial_state:
Optional prior over the full augmented state [z, v, g]. Supported
inputs are a LinearDiracDistribution, any linear distribution with
matching dimension, or a tuple (mean, covariance) which is converted
to GaussianDistribution and sampled.
dt, alpha, beta:
Parameters of the default replay transition.
attraction_field:
Callable implementing the goal-conditioned control field. It may be
vectorized across particles or evaluated particle by particle. If
omitted, the default field is g - z.
goal_transition:
Optional deterministic transition for goals. It may accept one of the
signatures goal_transition(goals),
goal_transition(goals, positions, velocities), or
goal_transition(goals, positions, velocities, dt).
process_noise:
Dense process noise added in velocity space.
goal_noise:
Additive noise applied after the deterministic goal transition.
jump_probability:
Bernoulli probability of a sparse jump for each particle and prediction
step.
jump_distribution:
Distribution for sparse velocity jumps.
position_jump_distribution:
Distribution for sparse direct position jumps.
goal_reset_probability:
Bernoulli probability of resetting / remapping the latent goal before
the control field is evaluated.
goal_reset_distribution:
Distribution from which reset goals are drawn. If omitted and a
candidate-goal bank is stored, resets are sampled from that bank.
candidate_goals, candidate_goal_weights:
Optional discrete goal bank used for initialization, goal resets, and
posterior summaries.
initial_position_distribution, initial_velocity_distribution,
initial_goal_distribution:
Optional factorized priors used when initial_state is omitted.
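One plausible reading of the default replay transition, with the stated default attraction field g - z, is the damped-attraction step below. The exact update implemented by the filter may differ, so treat this purely as intuition for the roles of dt, alpha, and beta.

```python
import numpy as np

def replay_step(z, v, g, dt=1.0, alpha=0.95, beta=1.0):
    # Hypothetical sketch: velocity decays with alpha and is pulled toward
    # the latent goal by beta * (g - z); position integrates the new velocity.
    v_new = alpha * v + beta * (g - z) * dt
    z_new = z + v_new * dt
    return z_new, v_new

z = np.array([0.0, 0.0])   # replayed position
v = np.array([0.0, 0.0])   # replay velocity
g = np.array([1.0, 0.0])   # latent goal
z, v = replay_step(z, v, g)
```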
position_dim = position_dim
instance-attribute
spatial_dim = position_dim
instance-attribute
state_dim = 3 * position_dim
instance-attribute
dt = float(dt)
instance-attribute
alpha = alpha
instance-attribute
beta = beta
instance-attribute
attraction_field = attraction_field if attraction_field is not None else self._default_attraction_field
instance-attribute
goal_transition = goal_transition
instance-attribute
process_noise = process_noise
instance-attribute
goal_noise = goal_noise
instance-attribute
jump_probability = float(jump_probability)
instance-attribute
jump_distribution = jump_distribution
instance-attribute
position_jump_distribution = position_jump_distribution
instance-attribute
goal_reset_probability = float(goal_reset_probability)
instance-attribute
goal_reset_distribution = goal_reset_distribution
instance-attribute
filter_state = self._coerce_initial_state(initial_state)
instance-attribute
n_particles
property
position_slice
property
velocity_slice
property
goal_slice
property
position_particles
property
velocity_particles
property
goal_particles
property
candidate_goals
property
last_update_log_marginal
property
last_transition_diagnostics
property
last_jump_fraction
property
last_goal_remap_fraction
property
last_position_proposal_fraction
property
__init__(n_particles, position_dim=None, initial_state=None, *, spatial_dim=None, dt=1.0, alpha=0.95, beta=1.0, attraction_field=None, goal_transition=None, process_noise=None, goal_noise=None, jump_probability=0.0, jump_distribution=None, position_jump_distribution=None, goal_reset_probability=0.0, goal_reset_distribution=None, candidate_goals=None, candidate_goal_weights=None, initial_position_distribution=None, initial_velocity_distribution=None, initial_goal_distribution=None)
from_factorized_priors(n_particles, position_dim=None, position_distribution=None, velocity_distribution=None, goal_distribution=None, *, spatial_dim=None, candidate_goals=None, goal_prior_weights=None, **kwargs)
classmethod
initialize_from_factorized_priors(position_distribution=None, velocity_distribution=None, goal_distribution=None, *, candidate_goals=None, goal_prior_weights=None)
Initialize the particle cloud from factorized component priors.
initialize_from_state_priors(position_prior=None, velocity_prior=None, goal_prior=None, weights=None, *, candidate_goals=None, goal_prior_weights=None)
Backward-compatible alias for factorized initialization.
Each prior may be given either as a distribution, a single state vector to broadcast to all particles, or an explicit particle matrix.
set_state_components(positions, velocities=None, goals=None, weights=None)
Directly set particle states from component samples or distributions.
set_candidate_goals(candidate_goals, goal_prior_weights=None)
sample_goals_from_candidates(candidate_goals=None, goal_prior_weights=None)
Replace the current goal particles by samples from a goal bank.
get_state_estimate()
get_point_estimate()
get_position_estimate()
get_velocity_estimate()
get_goal_estimate()
get_position_distribution()
get_velocity_distribution()
get_goal_distribution()
position_distribution()
velocity_distribution()
goal_distribution()
get_goal_posterior_weights(candidate_goals=None)
Approximate posterior mass over a candidate-goal bank.
predict(**kwargs)
predict_goal_conditioned(**kwargs)
predict_replay(dt=None, alpha=None, beta=None, attraction_field=None, goal_transition=None, process_noise=None, goal_noise=None, jump_probability=None, jump_distribution=None, position_jump_distribution=None, goal_reset_probability=None, goal_reset_distribution=None, attraction_field_is_vectorized=None, gradient_is_vectorized=None, function_is_vectorized=None, use_semi_implicit_position_update=False)
Predict one replay step under the goal-conditioned sparse-jump model.
association_likelihood(likelihood, measurement=None)
update_nonlinear_using_likelihood(likelihood, measurement=None, return_log_marginal=False)
update_position_likelihood(likelihood, measurement=None, return_log_marginal=False)
apply_position_proposal(position_proposal, proposal_weights=None, *, proposal_probability=1.0)
Rejuvenate position particles from a measurement-guided proposal.
position_proposal may be a linear distribution, a (mean,
covariance) tuple, or a matrix of candidate positions. Candidate
matrices are sampled with proposal_weights when provided. Velocity
and goal particles, particle weights, and IMM mode indices in subclasses
are left unchanged.
update_position_likelihood_with_proposal(likelihood, measurement=None, *, position_proposal, proposal_weights=None, proposal_probability=1.0, return_log_marginal=False)
Update by position likelihood, then refresh positions from a proposal.
This is useful when the observation likelihood is available on a grid or candidate bank: the usual likelihood update preserves Bayesian reweighting/resampling of the augmented particles, while the proposal step moves a configurable subset of position particles onto measurement-supported states for subsequent replay predictions.
update_linear(measurement, measurement_matrix, meas_noise, return_log_marginal=False)
association_likelihood_linear(measurement, measurement_matrix, meas_noise)
update_identity(meas_noise, measurement, shift_instead_of_add=True, return_log_marginal=False)
update_position(measurement, meas_noise, return_log_marginal=False)
update_velocity(measurement, meas_noise, return_log_marginal=False)
update_goal(measurement, meas_noise, return_log_marginal=False)
update_position_measurement(measurement, meas_noise, return_log_marginal=False)
association_likelihood_position(measurement, meas_noise)
association_likelihood_velocity(measurement, meas_noise)
association_likelihood_goal(measurement, meas_noise)
GoalConditionedReplayParticleIMMFilter
Bases: GoalConditionedReplayParticleFilter
Goal-conditioned particle filter with IMM-style motion-mode switching.
The mode variable is represented by one discrete mode index per particle and
is resampled together with the continuous particles during measurement
updates. This keeps mode posteriors usable after likelihood updates while
retaining the particle-filter interface used by
:class:GoalConditionedReplayParticleFilter.
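The per-particle Markov mode switch described above can be sketched as follows (an illustration of the mechanism, not the library internals): each particle's discrete mode index is resampled from the transition-matrix row selected by its current mode.

```python
import numpy as np

rng = np.random.default_rng(0)
T = np.array([[0.9, 0.1], [0.3, 0.7]])   # row-stochastic mode transition matrix
modes = np.array([0, 0, 1, 1, 0])        # current mode index per particle
cum = np.cumsum(T[modes], axis=1)        # per-particle transition CDF
u = rng.random(modes.shape[0])
new_modes = (u[:, None] >= cum).sum(axis=1)  # inverse-CDF categorical sampling
```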
mode_names = ('stationary', 'diffusion', 'momentum', 'goal_directed', 'jump')
class-attribute
instance-attribute
n_modes = len(self.mode_names)
instance-attribute
mode_transition_matrix = self._prepare_mode_transition_matrix(mode_transition_matrix, mode_stickiness)
instance-attribute
mode_prior = self._validate_probability_vector(ones((self.n_modes,)) / self.n_modes if mode_prior is None else mode_prior, self.n_modes, 'mode_prior')
instance-attribute
stationary_velocity_decay = float(stationary_velocity_decay)
instance-attribute
diffusion_velocity_decay = float(diffusion_velocity_decay)
instance-attribute
momentum_velocity_decay = float(momentum_velocity_decay)
instance-attribute
jump_fraction = float(jump_fraction)
instance-attribute
jump_velocity_decay = float(jump_velocity_decay)
instance-attribute
mode_indices
property
Current discrete mode index for each particle.
mode_probabilities
property
Weighted posterior probability for each motion mode.
last_mode_transition_fraction
property
__init__(n_particles, position_dim=None, initial_state=None, *, spatial_dim=None, mode_transition_matrix=None, mode_prior=None, mode_stickiness=0.95, stationary_velocity_decay=0.0, diffusion_velocity_decay=0.0, momentum_velocity_decay=0.95, jump_fraction=0.9, jump_velocity_decay=0.25, **kwargs)
most_likely_mode_index()
most_likely_mode()
set_mode_indices(mode_indices)
Set one discrete mode index per particle.
sample_modes_from_prior(mode_prior=None)
Sample particle mode indices from a prior probability vector.
predict_replay(dt=None, alpha=None, beta=None, attraction_field=None, goal_transition=None, process_noise=None, goal_noise=None, jump_probability=None, jump_distribution=None, position_jump_distribution=None, goal_reset_probability=None, goal_reset_distribution=None, attraction_field_is_vectorized=None, gradient_is_vectorized=None, function_is_vectorized=None, use_semi_implicit_position_update=False, *, mode_transition_matrix=None, stationary_velocity_decay=None, diffusion_velocity_decay=None, momentum_velocity_decay=None, jump_fraction=None, jump_velocity_decay=None)
Predict one replay step after Markov switching the particle modes.
DecorrelatedSCGPTracker
Bases: FullSCGPTracker
SCGP tracker variant with zeroed kinematic-shape cross covariance.
predict(*args, **kwargs)
update(*args, **kwargs)
FullSCGPTracker
Bases: AbstractExtendedObjectTracker
Full star-convex Gaussian-process tracker.
The state is [x, y, orientation, velocity, turn_rate, f_1, ..., f_n] by
default. Set velocities=False to use the reduced kinematic state
[x, y, orientation]. Unlike :class:GPRHMTracker, this variant keeps one
joint covariance over kinematics and GP extent coefficients, so update and
prediction steps can propagate kinematic-shape correlations.
measurement_dim = 2
instance-attribute
phi_pts = linspace(0.0, 2 * pi, n_base_points, endpoint=False)
instance-attribute
kernel_params = kernel_params
instance-attribute
kinematic_state = self._normalize_kinematic_state(kinematic_state, velocities)
instance-attribute
kinematic_dim = self.kinematic_state.shape[0]
instance-attribute
velocities = self.kinematic_dim == 5
instance-attribute
shape_state = self._normalize_shape_state(shape_state, n_base_points)
instance-attribute
shape_dim = self.shape_state.shape[0]
instance-attribute
state = concatenate([self.kinematic_state, self.shape_state])
instance-attribute
kinematic_covariance = self._as_covariance_matrix(0.1 * eye(self.kinematic_dim) if kinematic_covariance is None else kinematic_covariance, self.kinematic_dim, 'kinematic_covariance')
instance-attribute
shape_covariance = self._as_covariance_matrix(self._k_uu if shape_covariance is None else shape_covariance, self.shape_dim, 'shape_covariance')
instance-attribute
covariance = linalg.block_diag(self.kinematic_covariance, self.shape_covariance)
instance-attribute
dt = float(dt)
instance-attribute
sys_noise = self._as_covariance_matrix(zeros((self.kinematic_dim, self.kinematic_dim)) if sys_noise is None else sys_noise, self.kinematic_dim, 'sys_noise', require_positive_semidefinite=False)
instance-attribute
acceleration_variance = float(acceleration_variance)
instance-attribute
extent_forgetting_rate = float(extent_forgetting_rate)
instance-attribute
reference_extent = self._normalize_shape_state(reference_extent, n_base_points)
instance-attribute
radial_noise_variance = float(radial_noise_variance)
instance-attribute
measurement_noise = self._as_covariance_matrix(zeros((self.measurement_dim, self.measurement_dim)) if measurement_noise is None else measurement_noise, self.measurement_dim, 'measurement_noise', require_positive_semidefinite=False)
instance-attribute
scale_mean = float(scale_mean)
instance-attribute
scale_variance = float(scale_variance)
instance-attribute
alpha = float(alpha)
instance-attribute
beta = float(beta)
instance-attribute
kappa = float(kappa)
instance-attribute
last_quadratic_form = None
instance-attribute
last_active_measurement_indices = None
instance-attribute
last_measurement_weights = None
instance-attribute
__init__(n_base_points, kinematic_state=None, kinematic_covariance=None, shape_state=None, shape_covariance=None, joint_covariance=None, velocities=True, kernel_params=(2.0, pi / 4), dt=1.0, sys_noise=None, acceleration_variance=0.0, extent_forgetting_rate=0.0, reference_extent=None, radial_noise_variance=0.0, measurement_noise=None, scale_mean=1.0, scale_variance=0.0, alpha=1.0, beta=2.0, kappa=2.0, log_prior_estimates=False, log_posterior_estimates=False, log_prior_extents=False, log_posterior_extents=False)
predict(dt=None, sys_noise=None)
update(measurements, R=None, s_hat=None, sigma_squared_s=None, measurement_weights=None, active_measurement_mask=None)
Update the tracker with optional per-measurement reliabilities.
measurement_weights scales each measurement covariance block as
R_i / weight_i. Zero-weight or masked measurements are skipped.
active_measurement_mask can be used to explicitly disable cluttered,
occluded, or otherwise unsupported measurements.
get_point_estimate()
get_point_estimate_kinematics()
get_point_estimate_shape()
get_point_estimate_extent(flatten_matrix=False)
get_extents_on_grid(n=100, angles=None, body_frame=False)
get_contour_points(n=100, scaling_factor=1.0)
get_bounding_box(n=100)
GPRHMTracker
Bases: AbstractExtendedObjectTracker
kernel = sin_kernel
instance-attribute
phi_pts = linspace(0.0, 2 * pi, n_base_points, endpoint=False)
instance-attribute
m = zeros(2)
instance-attribute
H = eye(2)
instance-attribute
C_m = 0.1 * eye(2)
instance-attribute
p = zeros_like(self.phi_pts)
instance-attribute
C_p = 0.1 * eye(self.phi_pts.shape[0])
instance-attribute
A_fun = lambda phi: linalg.solve(K_p, K_fun(phi).T).T
instance-attribute
C_e_fun = lambda phi: self.kernel(phi, phi) - K_fun(phi) @ linalg.solve(K_p, K_fun(phi).T).T
instance-attribute
__init__(n_base_points, dimension=2, velocities=False, kernel_params=(2.0, pi / 4), log_prior_estimates=True, log_posterior_estimates=True, log_prior_extents=True, log_posterior_extents=True)
get_point_estimate()
get_point_estimate_kinematics()
get_point_estimate_extent(flatten_matrix=False)
get_extents_on_grid(n=100)
get_contour_points(n=100)
update(z, R, s_hat=1, sigma_squared_s=0)
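The A_fun / C_e_fun attributes above encode standard Gaussian-process conditioning: the radius at a query angle is a linear read-out A(phi) @ f of the base-point values f, with residual variance C_e(phi). The sketch below uses a periodic squared-exponential kernel as a stand-in for the tracker's sin_kernel (an assumption for illustration); at a base point the read-out reduces to that point's value.

```python
import numpy as np

def kernel(a, b, sigma=2.0, length=np.pi / 4):
    # Periodic kernel on angles (stand-in for sin_kernel).
    d = np.subtract.outer(a, b)
    return sigma**2 * np.exp(-np.sin(0.5 * d) ** 2 / length**2)

phi_pts = np.linspace(0.0, 2 * np.pi, 8, endpoint=False)  # base points
K_p = kernel(phi_pts, phi_pts) + 1e-9 * np.eye(8)         # jitter for stability
phi = np.array([0.0])                                     # query at a base point
K_q = kernel(phi, phi_pts)
A = np.linalg.solve(K_p, K_q.T).T                         # A(phi) = K(phi) K_p^{-1}
C_e = kernel(phi, phi) - K_q @ np.linalg.solve(K_p, K_q.T)
# At a base point: A is (numerically) a unit vector and C_e is near zero.
```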
HypercylindricalParticleFilter
Bases: AbstractParticleFilter, HypercylindricalFilterMixin
__init__(n_particles, bound_dim, lin_dim)
predict_nonlinear(f, noise_distribution=None, function_is_vectorized=True, shift_instead_of_add=True)
HyperhemisphereCartProdParticleFilter
Bases: AbstractParticleFilter
filter_state
property
writable
__init__(n_particles, dim_hemisphere, n_hemispheres)
Constructor
Parameters: n_particles (int > 0): Number of particles to use; dim_hemisphere (int > 0): Dimension of each hyperhemisphere; n_hemispheres (int > 0): Number of hyperhemisphere factors in the Cartesian product
set_state(new_state)
Sets the current system state
Parameters: new_state (HyperhemisphericalDiracDistribution): New state
predict_nonlinear_each_part(f, noise_distribution=None, function_is_vectorized=True, shift_instead_of_add=True)
Predicts the next state for each hyperhemisphere
HyperhemisphericalGridFilter
Bases: AbstractGridFilter, HyperhemisphericalFilterMixin
Grid-based recursive Bayesian filter on the hyperhemisphere.
The state is represented as a :class:HyperhemisphericalGridDistribution.
Ported from libDirectional's HyperhemisphericalGridFilter.m.
filter_state
property
writable
__init__(no_of_grid_points, dim, grid_type='leopardi_symm')
Parameters
no_of_grid_points : int
Number of grid points on the hemisphere.
dim : int
Manifold dimension of the hyperhemisphere (e.g. 2 for S2-half).
grid_type : str
Grid type, defaults to 'leopardi_symm'.
predict_identity(d_sys)
Predict assuming an identity system model with noise d_sys.
Supported: :class:HyperhemisphericalWatsonDistribution,
:class:WatsonDistribution, symmetric two-component
:class:HypersphericalMixture of VMF distributions.
Parameters
d_sys : AbstractDistribution
System noise distribution with the same dim as the filter.
predict_nonlinear_via_transition_density(f_trans)
Perform prediction using a precomputed transition density.
Parameters
f_trans : SdHalfCondSdHalfGridDistribution
Must use the same grid as the current filter state.
update_identity(meas_noise, z)
Perform a measurement update assuming an identity measurement model.
Supported noise: :class:HyperhemisphericalWatsonDistribution,
:class:WatsonDistribution, :class:VonMisesFisherDistribution
(with mu[-1] == 0), symmetric :class:HypersphericalMixture.
Parameters
meas_noise : AbstractDistribution
Measurement noise centred at [0, …, 0, 1].
z : array, shape (dim,)
Measurement on the hemisphere.
get_point_estimate()
Compute a point estimate from the dominant scatter-matrix eigenvector.
Returns
p : array, shape (dim,)
Point estimate on the upper hemisphere.
sys_noise_to_transition_density(d_sys, no_grid_points)
staticmethod
Build a :class:SdHalfCondSdHalfGridDistribution from a noise distribution.
Parameters
d_sys : AbstractDistribution
Supported: :class:HyperhemisphericalWatsonDistribution,
:class:WatsonDistribution, symmetric :class:HypersphericalMixture.
no_grid_points : int
Number of grid points on the hemisphere.
Returns
SdHalfCondSdHalfGridDistribution
HyperhemisphericalParticleFilter
Bases: AbstractParticleFilter, HyperhemisphericalFilterMixin
__init__(n_particles, dim)
Constructor
Parameters: n_particles (int > 0): Number of particles to use; dim (int > 0): Dimension
set_state(new_state)
Sets the current system state
Parameters: new_state (HyperhemisphericalDiracDistribution): New state
HypersphericalDummyFilter
Bases: AbstractDummyFilter, HypersphericalFilterMixin
Hyperspherical dummy filter initialized with a uniform distribution.
This filter does nothing on predictions and updates, always returning samples from the initial uniform distribution as point estimates.
__init__(dim)
Initialize HypersphericalDummyFilter.
Parameters: dim (int >= 2): Manifold dimension of the hypersphere (e.g. 2 for S^2).
get_point_estimate()
HypersphericalParticleFilter
Bases: AbstractParticleFilter, HypersphericalFilterMixin
filter_state
property
writable
__init__(n_particles, dim)
predict_identity(noise_distribution)
update_identity(meas_noise, measurement, shift_instead_of_add=True)
update_nonlinear(likelihood, z=None)
get_estimate_mean()
HypersphericalUKF
Bases: AbstractFilter, HypersphericalFilterMixin
Unscented Kalman filter on the unit hypersphere S^(d-1).
The state is represented as a d-dimensional :class:GaussianDistribution
whose mean is kept on the unit hypersphere via normalization after each
prediction/update step.
Parameters
dim:
Embedding-space dimension (e.g. 2 for S^1, 3 for S^2).
alpha, beta, kappa:
Sigma-point spread parameters for :class:MerweScaledSigmaPoints.
filter_state
property
writable
__init__(dim=2, alpha=0.001, beta=2.0, kappa=0.0)
predict_nonlinear(f, gauss_sys)
Predict assuming a nonlinear system model: x(k+1) = normalize(f(normalize(x(k)))) + w(k)
Parameters
f:
Function from S^(d-1) to S^(d-1).
gauss_sys:
Distribution of additive system noise (mean is ignored).
predict_nonlinear_arbitrary_noise(f, noise_samples, noise_weights)
Predict assuming nonlinear system model with arbitrary noise: x(k+1) = f(x(k), v_k)
Parameters
f:
Function f(x, v) -> x_new where x is a unit vector in R^d.
noise_samples:
Array of shape (noise_dim, n_noise) with noise samples (columns).
noise_weights:
Array of length n_noise with positive weights.
predict_identity(gauss_sys)
Predict with identity system model: x(k+1) = x(k) + w(k)
Parameters
gauss_sys:
Distribution of additive system noise (mean is ignored).
update_nonlinear(f, gauss_meas, z)
Update assuming a nonlinear measurement model: z(k) = f(normalize(x(k))) + v_k
Parameters
f:
Measurement function from S^(d-1) to R^m.
gauss_meas:
Distribution of additive measurement noise (mean is ignored).
z:
Measurement vector of shape (m,) or scalar.
update_identity(gauss_meas, z)
Update with identity measurement model: z(k) = x(k) + v_k
Parameters
gauss_meas:
Distribution of additive measurement noise (mean is ignored).
z:
Measurement vector on S^(d-1).
get_point_estimate()
Return the mean of the current state estimate (unit vector).
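The normalize-after-predict idea above can be sketched in a few lines. This is illustrative only: `normalize`, `predict_mean_on_sphere`, and `rot90` are our own names, and the sigma-point spread is omitted entirely.

```python
import math

def normalize(v):
    # Project a vector in R^d back onto the unit hypersphere
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def predict_mean_on_sphere(mean, f):
    # x(k+1) = normalize(f(normalize(x(k)))), ignoring the sigma-point spread
    return normalize(f(normalize(mean)))

# Example: a 90-degree rotation in the x-y plane keeps the state on S^2
rot90 = lambda v: [-v[1], v[0], v[2]]
m = predict_mean_on_sphere([2.0, 0.0, 0.0], rot90)  # input is renormalized first
```

The renormalization is what keeps the Gaussian mean a valid point of S^(d-1) after each step.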
HypertoroidalDummyFilter
Bases: AbstractDummyFilter, HypertoroidalFilterMixin
Hypertoroidal dummy filter initialized with a uniform distribution.
This filter does nothing on predictions and updates, always returning samples from the initial uniform distribution as point estimates.
__init__(dim)
Initialize HypertoroidalDummyFilter.
Parameters: dim (int >= 1): Manifold dimension of the hypertorus (e.g. 1 for T^1).
get_point_estimate()
HypertoroidalFourierFilter
Bases: AbstractFilter, HypertoroidalFilterMixin
Filter based on Fourier series on the hypertorus.
References: - Florian Pfaff, Gerhard Kurz, Uwe D. Hanebeck, "Multivariate Angular Filtering Using Fourier Series", Journal of Advances in Information Fusion, December 2016.
filter_state
property
writable
Return the current filter state.
__init__(n_coefficients, transformation='sqrt')
Constructor.
Parameters
n_coefficients : int or tuple of int
Number of Fourier coefficients per dimension. The length of the tuple determines the dimensionality of the distribution.
transformation : str, optional
Transformation to use ('sqrt' or 'identity'). Default is 'sqrt'.
predict_identity(d_sys)
Predicts assuming identity system model, i.e., x(k+1) = x(k) + w(k) mod 2*pi, where w(k) is additive noise given by d_sys.
Parameters
d_sys : AbstractHypertoroidalDistribution
Distribution of additive noise.
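For intuition, this identity prediction is a circular convolution of the prior density with the noise density, which the Fourier filter performs coefficient-wise. The grid-based sketch below shows the same operation on a discretized circle; the helper names are ours, not library code.

```python
import math

N = 64
grid = [2 * math.pi * k / N for k in range(N)]

def von_mises_shaped(mu, kappa):
    # von Mises-shaped density, normalized on the grid (stand-in for the prior)
    vals = [math.exp(kappa * math.cos(t - mu)) for t in grid]
    s = sum(vals)
    return [v / s for v in vals]

def circular_convolve(p, q):
    # Prediction for x(k+1) = x(k) + w(k) mod 2*pi on the grid
    return [sum(p[j] * q[(k - j) % N] for j in range(N)) for k in range(N)]

prior = von_mises_shaped(1.0, 4.0)
noise = von_mises_shaped(0.0, 8.0)
predicted = circular_convolve(prior, noise)
```

In Fourier coefficients the same convolution reduces to a product per coefficient, which is why the Fourier filter can predict without ever touching a grid.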
update_identity(d_meas, z)
Updates assuming identity measurement model, i.e., z(k) = x(k) + v(k) mod 2*pi, where v(k) is additive noise given by d_meas.
Parameters
d_meas : AbstractHypertoroidalDistribution
Distribution of additive noise.
z : array_like, shape (dim,)
Measurement in [0, 2*pi)^dim.
get_f_trans_as_hfd(f, noise_distribution)
Build a HypertoroidalFourierDistribution representing the transition density f(x_{k+1} | x_k) from a deterministic system function f and an additive noise distribution.
Parameters
f : callable
Deterministic system function. Takes dim arrays (x_k values)
and returns either a tuple of dim arrays or a single array
(for dim == 1). Must support vectorized evaluation on N-D grids.
noise_distribution : AbstractHypertoroidalDistribution
Additive noise distribution.
Returns
HypertoroidalFourierDistribution
2*dim-dimensional transition density.
predict_nonlinear(f, noise_distribution, truncate_joint_sqrt=True)
Predicts assuming a nonlinear system model, i.e., x(k+1) = f(x(k)) + w(k) mod 2*pi.
Parameters
f : callable
System function. See get_f_trans_as_hfd for calling convention.
noise_distribution : AbstractHypertoroidalDistribution
Additive process noise distribution.
truncate_joint_sqrt : bool, optional
Whether to truncate the intermediate joint sqrt representation
(only relevant for sqrt transformation). Default is True.
predict_nonlinear_via_transition_density(f_trans, truncate_joint_sqrt=True)
Predicts using a probabilistic transition density.
Parameters
f_trans : HypertoroidalFourierDistribution or callable
Transition density f(x_{k+1} | x_k).
* If a ``HypertoroidalFourierDistribution``: the first ``dim``
dimensions must be for x_{k+1} and the remaining ``dim``
dimensions for x_k. Its transformation must match that of
the current filter state.
* If a callable: a function of 2*dim arguments (first dim for
x_{k+1}, last dim for x_k); it is converted internally.
truncate_joint_sqrt : bool, optional
Whether to truncate the intermediate joint sqrt coefficient tensor. Default is True.
update_nonlinear(likelihood, z=None)
Updates using an arbitrary likelihood function and measurement.
Parameters
likelihood : HypertoroidalFourierDistribution or callable
* If a HypertoroidalFourierDistribution: used directly for the
Bayes update (multiplication). z must be None.
* If a callable likelihood(z, x): the likelihood function
f(z | x), where both z and x are dim x n_pts
arrays. z must be provided.
z : array_like, shape (dim,), optional
Measurement. Must be provided when likelihood is a callable.
HypertoroidalParticleFilter
Bases: AbstractParticleFilter, HypertoroidalFilterMixin
__init__(n_particles, dim)
predict_nonlinear(f, noise_distribution=None, function_is_vectorized=True, shift_instead_of_add=True)
predict_nonlinear_nonadditive(f, samples, weights)
InteractingMultipleModelFilter
Bases: AbstractFilter, EuclideanFilterMixin
Linear-Gaussian interacting multiple model (IMM) filter.
The filter is built from a bank of single-model filters whose states are assumed
to be representable by :class:~pyrecest.distributions.GaussianDistribution.
Each subfilter is expected to expose filter_state together with the methods
predict_identity / predict_linear / update_identity /
update_linear. Nonlinear prediction and update are also supported when the
corresponding subfilter methods exist, but nonlinear updates require externally
supplied model likelihoods because PyRecEst's current nonlinear filters do not
yet expose a uniform predictive-likelihood interface.
The transition matrix is interpreted row-wise,
transition_matrix[i, j] = p(m_k=j | m_{k-1}=i).
filter_bank = [(copy.deepcopy(curr_filter)) for curr_filter in filter_bank]
instance-attribute
transition_matrix = self._prepare_transition_matrix(transition_matrix, self.n_models)
instance-attribute
mode_probabilities = self._prepare_mode_probabilities(mode_probabilities, self.n_models)
instance-attribute
latest_mixing_probabilities = None
instance-attribute
latest_model_likelihoods = None
instance-attribute
latest_log_model_likelihoods = None
instance-attribute
n_models
property
Number of interacting models.
dim
property
State dimension.
model_probabilities
property
writable
Alias for :attr:mode_probabilities.
filter_state
property
writable
Current Gaussian-mixture state of the IMM.
Returns a Gaussian mixture whose components are the current subfilter states and whose weights are the current mode probabilities. Components with zero probability are omitted to avoid downstream issues with zero-weight mixtures.
combined_filter_state
property
Moment-matched single-Gaussian approximation of the IMM state.
most_likely_model_index
property
Index of the most likely current model.
__init__(filter_bank, transition_matrix, mode_probabilities=None)
interact()
Perform the IMM interaction (mixing) step.
The current model probabilities are propagated through the transition matrix, and a mixed Gaussian prior is computed for each destination model.
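The mixing step can be sketched as follows (an illustrative helper, not the class method), using the row-wise transition convention transition_matrix[i][j] = p(m_k=j | m_{k-1}=i):

```python
def imm_mixing(transition_matrix, mode_probabilities):
    n = len(mode_probabilities)
    # Predicted mode probabilities c_j = sum_i T[i][j] * mu[i]
    c = [sum(transition_matrix[i][j] * mode_probabilities[i] for i in range(n))
         for j in range(n)]
    # Mixing weights mu_{i|j} used to form the mixed Gaussian prior of model j
    mixing = [[transition_matrix[i][j] * mode_probabilities[i] / c[j]
               for j in range(n)] for i in range(n)]
    return c, mixing

c, mixing = imm_mixing([[0.9, 0.1], [0.2, 0.8]], [0.5, 0.5])
```

Each destination model j then receives a moment-matched Gaussian built from the subfilter states weighted by column j of the mixing weights.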
predict_identity(sys_noise_covs, sys_inputs=None)
Predict each model with an identity system model.
sys_noise_covs and sys_inputs can either be shared across all models
or be provided as lists/tuples with one entry per model.
predict_linear(system_matrices, sys_noise_covs, sys_inputs=None)
Predict each model with a linear system model.
system_matrices, sys_noise_covs, and sys_inputs can either be
shared across all models or be provided as lists/tuples with one entry per
model.
predict_nonlinear(transition_functions, sys_noise_covs, dts=None, fx_args=None)
Predict each model with a nonlinear transition function.
Parameters can be shared across all models or supplied per model. fx_args
may be None, a single dictionary shared across all models, or a list/tuple
of dictionaries.
update_identity(measurement, meas_noises)
Update each model with an identity measurement model.
meas_noises can either be shared across all models or be provided as a
list/tuple with one entry per model.
update_linear(measurement, measurement_matrices, meas_noises)
Update each model with a linear measurement model.
measurement_matrices and meas_noises can either be shared across all
models or be provided as lists/tuples with one entry per model.
update_nonlinear(measurement, measurement_functions, meas_noises, likelihoods=None, log_likelihoods=None, hx_args=None)
Update each model with a nonlinear measurement function.
Because current nonlinear filters in PyRecEst do not expose a uniform predictive-likelihood interface, nonlinear IMM updates require externally supplied model likelihoods (or log-likelihoods).
update_mode_probabilities(likelihoods=None, log_likelihoods=None)
Update model probabilities from external per-model likelihoods.
get_point_estimate()
Return the IMM mean estimate.
JointProbabilisticDataAssociationFilter
Bases: AbstractNearestNeighborTracker
Joint probabilistic data association for linear-Gaussian multitarget tracking.
The implementation enumerates all feasible joint association events exactly. This is appropriate for a modest number of targets and measurements and keeps the code compact and close to the conventions already used by the current tracker classes in PyRecEst.
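The feasibility constraint behind this enumeration (each target takes a distinct measurement index or a missed detection, and no measurement is shared) can be sketched as below. This only counts events; the actual filter additionally weights each event by likelihoods, detection probability, and clutter density.

```python
from itertools import product

def feasible_joint_events(n_targets, n_meas):
    # -1 encodes a missed detection, matching the convention described below
    events = []
    for assignment in product(range(-1, n_meas), repeat=n_targets):
        used = [a for a in assignment if a >= 0]
        if len(used) == len(set(used)):  # no measurement assigned twice
            events.append(assignment)
    return events

events = feasible_joint_events(2, 2)  # 2 targets, 2 measurements
```

The event count grows combinatorially, which is why exact enumeration is only appropriate for a modest number of targets and measurements.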
latest_association_probabilities = None
instance-attribute
latest_map_association = None
instance-attribute
filter_bank = []
instance-attribute
filter_state = initial_prior
instance-attribute
__init__(initial_prior=None, association_param=None, log_prior_estimates=True, log_posterior_estimates=True)
find_association_probabilities(measurements, measurement_matrix, cov_mats_meas, warn_on_no_meas_for_track=True)
Compute marginal association probabilities.
Returns:
tuple[numpy.ndarray, numpy.ndarray]
The first entry contains the marginal probabilities of shape
(n_targets, n_meas + 1), where column 0 corresponds to a
missed detection. The second entry is the most likely joint event,
encoded as measurement indices and -1 for missed detections.
find_association(measurements, measurement_matrix, cov_mats_meas, warn_on_no_meas_for_track=True)
update_linear(measurements, measurement_matrix, covMatsMeas, pairwise_cost_matrix=None)
KalmanFilter
Bases: AbstractFilter, EuclideanFilterMixin
Kalman filter for linear Gaussian Euclidean state-space models.
The filter state is stored as a :class:GaussianDistribution with mean
vector shape (n,) and covariance matrix shape (n, n). Prediction
uses x_k = F x_{k-1} + u + w and updates use
z_k = H x_k + v.
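A one-dimensional sketch of these model conventions (not the library implementation; variable names follow the docstrings, with scalar F, H, Q, R):

```python
def kf_predict(x, P, F, Q, u=0.0):
    # x_k = F x_{k-1} + u + w,  w ~ N(0, Q)
    return F * x + u, F * P * F + Q

def kf_update(x, P, z, H, R):
    # z_k = H x_k + v,  v ~ N(0, R)
    S = H * P * H + R            # innovation covariance
    K = P * H / S                # Kalman gain
    x_new = x + K * (z - H * x)  # posterior mean
    P_new = (1.0 - K * H) * P    # posterior covariance
    return x_new, P_new

x, P = kf_predict(0.0, 1.0, 1.0, 1.0)  # identity transition, Q = 1
x, P = kf_update(x, P, 1.0, 1.0, 2.0)  # measurement z = 1, R = 2
```

The class below stores the same (mean, covariance) pair as a GaussianDistribution and generalizes these two steps to vector states.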
dim
property
Return the dimension n of the Euclidean state vector.
filter_state
property
writable
Return a Gaussian copy of the current filter state.
__init__(initial_state)
Create a Kalman filter from a Gaussian state.
Parameters
initial_state : GaussianDistribution or tuple
Initial state distribution. Tuples must contain (mean,
covariance) with mean shape (n,) or scalar shape for a
one-dimensional state, and covariance shape (n, n).
predict_identity(sys_noise_cov, sys_input=None)
Predict one step with an identity transition matrix.
Parameters
sys_noise_cov : array-like, shape (n, n)
Additive process-noise covariance.
sys_input : array-like, shape (n,), optional
Additive deterministic input applied after the identity transition.
predict_linear(system_matrix, sys_noise_cov, sys_input=None)
Predict one step with a linear Gaussian system model.
Parameters
system_matrix : array-like, shape (n, n)
State-transition matrix F.
sys_noise_cov : array-like, shape (n, n)
Additive process-noise covariance Q.
sys_input : array-like, shape (n,), optional
Additive deterministic input u.
predict_model(transition_model)
Predict one step with a linear Gaussian transition model object.
The model object is consumed structurally. It must expose a
system_matrix attribute and either system_noise_cov or
sys_noise_cov. If present, sys_input or system_input is
forwarded as the deterministic transition input.
Parameters
transition_model : object
Linear Gaussian transition model object compatible with
:meth:predict_linear.
update_identity(meas_noise, measurement, *, return_diagnostics=False, scale=1.0, action='updated')
Update with a measurement matrix equal to the identity.
Parameters
meas_noise : array-like, shape (n, n)
Measurement-noise covariance.
measurement : array-like, shape (n,)
Measurement vector.
return_diagnostics : bool, optional
If true, return update diagnostics after updating the filter state.
scale : float, optional
Multiplicative measurement-noise scale used for the update.
action : str, optional
Caller-defined diagnostic label for the update action.
innovation_linear(measurement, measurement_matrix, meas_noise)
Return innovation and innovation covariance for a linear measurement.
normalized_innovation_squared_linear(measurement, measurement_matrix, meas_noise)
Return the normalized innovation squared for a linear measurement.
update_linear(measurement, measurement_matrix, meas_noise, *, return_diagnostics=False, scale=1.0, action='updated')
Update the state with a linear Gaussian measurement model.
Parameters
measurement : array-like, shape (m,)
Measurement vector z.
measurement_matrix : array-like, shape (m, n)
Measurement matrix H mapping state vectors to measurement
vectors.
meas_noise : array-like, shape (m, m)
Measurement-noise covariance R.
return_diagnostics : bool, optional
If true, return a diagnostics dictionary after updating the filter
state. The dictionary contains nis, residual, scale,
and action.
scale : float, optional
Multiplicative measurement-noise scale used for the update.
action : str, optional
Caller-defined diagnostic label for the update action.
update_linear_robust(measurement, measurement_matrix, meas_noise, *, robust_update='student-t', gate_threshold=None, student_t_dof=4.0, huber_threshold=2.0, inflation_alpha=1.0, return_diagnostics=False)
Robustly update the state with a linear measurement model.
The Gaussian measurement covariance is adaptively inflated according to
the normalized innovation squared. Supported modes are
"student-t", "huber", "nis-inflate", and None/
"none". With robust_update=None and gate_threshold set,
measurements above the gate are rejected and the prior state is kept.
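A heavily simplified scalar sketch of NIS-driven inflation; the threshold and the specific inflation rule here are our assumptions for illustration, not the library's exact equations.

```python
def inflate_meas_noise(residual, S_prior, R, nis_threshold=9.0):
    # Normalized innovation squared for a scalar measurement
    nis = residual * residual / S_prior
    if nis <= nis_threshold:
        return R                     # consistent measurement: keep R
    # Inflate R proportionally so outliers are down-weighted, not rejected
    return R * (nis / nis_threshold)

R_ok = inflate_meas_noise(1.0, 4.0, 2.0)    # NIS = 0.25, below threshold
R_big = inflate_meas_noise(12.0, 4.0, 2.0)  # NIS = 36, inflated
```

Down-weighting via inflation (rather than hard gating) keeps some information from borderline measurements while limiting their influence on the posterior.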
update_model_robust(measurement_model, measurement, **kwargs)
Robustly update with a structural linear Gaussian model object.
update_model(measurement_model, measurement, *, return_diagnostics=False, scale=1.0, action='updated')
Update the state with a linear Gaussian measurement model object.
The model object is consumed structurally. It must expose a
measurement_matrix attribute and either meas_noise or
measurement_noise_cov.
Parameters
measurement_model : object
Linear Gaussian measurement model object compatible with
:meth:update_linear.
measurement : array-like, shape (m,)
Measurement vector z.
return_diagnostics : bool, optional
If true, return update diagnostics after updating the filter state.
scale : float, optional
Multiplicative measurement-noise scale used for the update.
action : str, optional
Caller-defined diagnostic label for the update action.
get_point_estimate()
Return the posterior mean vector with shape (n,).
KernelSMEFilter
Bases: AbstractMultitargetTracker
Using the Kernel Symmetric Measurement Equation (SME) approach for multitarget tracking. Measurements are assumed to be (meas_dim, n_meas) arrays because this simplifies algebraic operations in the implementation.
x = None
instance-attribute
C = None
instance-attribute
n_targets = 0
instance-attribute
filter_state
property
writable
dim
property
__init__(initial_priors=None, log_estimates=True)
get_point_estimate(flatten_vector=False)
get_number_of_targets()
predict_linear(system_matrix, sys_noise, inputs=None)
update_linear(measurements, measurement_matrix, cov_mat_meas, false_alarm_rate=0, clutter_cov=None, lambda_multimeas=1, enable_gating=False, gating_threshold=None)
Update the filter with new measurements using the linear measurement model.
Args:
measurements (numpy.ndarray): Array of shape (dim_meas, n_meas) containing the measurements.
measurement_matrix (numpy.ndarray): Array of shape (dim_meas, dim_state) containing the measurement matrix.
cov_mat_meas (numpy.ndarray): Array of shape (dim_meas, dim_meas) containing the measurement noise covariance matrix.
false_alarm_rate (float): Scalar false alarm rate. Default is 0, which disables clutter.
clutter_cov (numpy.ndarray): Array of shape (dim_meas, dim_meas) containing the clutter covariance matrix. Default is None, which disables clutter.
lambda_multimeas (float): Scaling factor for multiple measurements per target. Default is 1, which means multimeasurements are not used.
enable_gating (bool): If True, gating removes unlikely measurements before the update. Default is False.
gating_threshold (float): Gating threshold. Default is None, which sets the threshold to chi2inv(0.99, dim_meas).
Raises:
AssertionError: If enable_gating=True and gating_threshold is None.
gen_test_points(measurements, kernel_width)
staticmethod
calc_pseudo_meas(testPoints, measurements, kernel_width)
staticmethod
calc_moments(x_prior, C_prior, measurement_matrix, covMatMeas, testPoints, kernel_width, n_targets, falseAlarmRate=0, clutterCov=None, lambdaMultimeas=1)
staticmethod
Compute mu_s, Sigma_s, Sigma_xs according to the Kernel SME multi-detection + clutter equations in the paper.
x_prior is stacked [x_1; ...; x_N], C_prior full covariance. lambdaMultimeas = lambda^l (Poisson mean number of detections per target). falseAlarmRate = lambda^c (Poisson mean number of clutter points per scan).
LinBoundedParticleFilter
LinPeriodicParticleFilter
ManifoldExponentialMovingAverage
Bases: AbstractFilter
Exponential moving average on a manifold.
The estimate is updated by moving from the current state toward each new sample in the tangent space at the current estimate:
x_new = phi(x, alpha * phi_inv(x, sample)).
Parameters
initial_state:
Initial manifold element. If None, the first update initializes the
estimate directly from the first sample.
alpha:
Weight of the new sample. Must be in [0, 1].
phi:
Retraction with signature phi(state, tangent_vector) -> state.
phi_inv:
Inverse retraction with signature
phi_inv(state_ref, state) -> tangent_vector.
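A circular instance of this update rule, with `phi` as wrapped addition and `phi_inv` as the wrapped angular difference (the wrapping convention here is our own choice):

```python
import math

def phi(x, v):
    # Retraction on the circle: move by tangent increment v, wrap to [0, 2*pi)
    return (x + v) % (2 * math.pi)

def phi_inv(x_ref, x):
    # Inverse retraction: signed angular difference wrapped to (-pi, pi]
    d = (x - x_ref) % (2 * math.pi)
    return d - 2 * math.pi if d > math.pi else d

def ema_update(x, sample, alpha):
    # x_new = phi(x, alpha * phi_inv(x, sample))
    return phi(x, alpha * phi_inv(x, sample))

# Averaging across the 0/2*pi seam moves the estimate the short way around
x = ema_update(0.1, 2 * math.pi - 0.1, 0.5)
```

The tangent-space formulation is what makes the average wrap-aware: a naive arithmetic mean of 0.1 and 2*pi - 0.1 would land near pi instead of near 0.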
phi = phi
instance-attribute
phi_inv = phi_inv
instance-attribute
alpha
property
writable
Weight assigned to each new sample.
filter_state
property
writable
Return the current manifold estimate.
__init__(initial_state, alpha, phi, phi_inv)
update(sample)
Update the moving average with a new manifold-valued sample.
get_point_estimate()
Return the current manifold estimate.
AbstractFilterManifoldMixin
filter_state
abstractmethod
property
Contract: Any class inheriting this Mixin must provide 'filter_state'.
get_point_estimate()
Get the point estimate.
This method is responsible for getting the point estimate.
Returns:
The mean direction of the filter state.
AbstractHypersphereSubsetFilter
Bases: AbstractFilterManifoldMixin, ABC
CircularFilterMixin
Bases: HypertoroidalFilterMixin, ABC
EuclideanFilterMixin
Bases: AbstractFilterManifoldMixin, ABC
get_point_estimate()
HypercylindricalFilterMixin
Bases: LinPeriodicFilterMixin, ABC
get_point_estimate()
HyperhemisphericalFilterMixin
Bases: AbstractHypersphereSubsetFilter, ABC
HypersphericalFilterMixin
Bases: AbstractHypersphereSubsetFilter, ABC
HypertoroidalFilterMixin
Bases: AbstractFilterManifoldMixin, ABC
LinBoundedFilterMixin
Bases: AbstractFilterManifoldMixin, ABC
LinPeriodicFilterMixin
Bases: LinBoundedFilterMixin, ABC
SE2FilterMixin
Bases: HypercylindricalFilterMixin, ABC
ToroidalFilterMixin
Bases: HypertoroidalFilterMixin, ABC
MEMEKFStarTracker
Bases: MEMEKFTracker
Moment-corrected MEM-EKF* tracker for one 2-D elliptical object.
Compared with :class:MEMEKFTracker, this variant follows the MEM-EKF*
moment approximation. It includes shape-parameter uncertainty in the
measurement covariance and uses the covariance of the quadratic
pseudo-measurement [dx^2, dy^2, dx * dy].
MEMEKFTracker
Bases: AbstractExtendedObjectTracker
Multiplicative-error-model EKF for one 2-D elliptical extended object.
The shape state is [orientation, semi_axis_1, semi_axis_2]. The
measurement model is z = H x + S(p) h + v, where S(p) rotates and
scales the unit multiplicative error h into the ellipse and v is
additive measurement noise.
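The scaling matrix S(p) in this model is conventionally a rotation times a diagonal of the semi-axes; a sketch with p = [orientation, semi_axis_1, semi_axis_2] (treat as illustrative, not the tracker's code):

```python
import math

def scaling_rotation(p):
    # S(p) = R(theta) @ diag(a1, a2): rotate and scale the unit error h
    theta, a1, a2 = p
    c, s = math.cos(theta), math.sin(theta)
    return [[c * a1, -s * a2],
            [s * a1,  c * a2]]

def apply(S, h):
    return [S[0][0] * h[0] + S[0][1] * h[1],
            S[1][0] * h[0] + S[1][1] * h[1]]

# Axis-aligned ellipse: h = (1, 0) maps to the tip of the first semi-axis
S = scaling_rotation([0.0, 3.0, 1.0])
tip = apply(S, [1.0, 0.0])
```

With h drawn from a zero-mean multiplicative noise distribution, S(p) h spreads measurement sources over the ellipse interior.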
measurement_dim = 2
instance-attribute
kinematic_state = array(kinematic_state)
instance-attribute
covariance = self._as_covariance_matrix(covariance, self.kinematic_state.shape[0], 'covariance')
instance-attribute
shape_state = array(shape_state)
instance-attribute
shape_covariance = self._as_covariance_matrix(shape_covariance, 3, 'shape_covariance')
instance-attribute
multiplicative_noise_cov = self._as_covariance_matrix(multiplicative_noise_cov, self.measurement_dim, 'multiplicative_noise_cov')
instance-attribute
measurement_matrix = None
instance-attribute
covariance_regularization = float(covariance_regularization)
instance-attribute
extent
property
__init__(kinematic_state, covariance, shape_state, shape_covariance, measurement_matrix=None, multiplicative_noise_cov=None, covariance_regularization=0.0, log_prior_estimates=False, log_posterior_estimates=False, log_prior_extents=False, log_posterior_extents=False)
get_point_estimate()
get_point_estimate_kinematics()
get_point_estimate_shape()
get_point_estimate_extent(flatten_matrix=False)
predict_linear(system_matrix, sys_noise=None, inputs=None, shape_system_matrix=None, shape_sys_noise=None)
predict(*args, **kwargs)
Alias for :meth:predict_linear to match existing EOT tracker APIs.
update(measurements, meas_mat=None, meas_noise_cov=None, multiplicative_noise_cov=None)
Sequentially update from one or more 2-D target-originated measurements.
get_contour_points(n, scaling_factor=1.0)
MEMSOEKFTracker
Bases: MEMEKFTracker
Second-order MEM-EKF tracker for one 2-D elliptical extended object.
The tracker uses the same multiplicative-error-model state convention as
:class:MEMEKFTracker, but performs the measurement update with the SOEKF
pseudo-measurement [dx, dy, dx^2, dx * dy, dy^2]. The state is locally
shifted to the predicted measurement before each single-measurement update,
matching the robust centering used by the reference MEM-SOEKF equations.
finite_difference_step = float(finite_difference_step)
instance-attribute
__init__(*args, finite_difference_step=1e-05, **kwargs)
BernoulliComponent
A Bernoulli component used by :class:MultiBernoulliTracker.
Parameters
existence_probability : float
Probability that the component exists.
single_target_state
Either a single-target filter or a state distribution from which a
:class:KalmanFilter can be instantiated.
label : hashable, optional
Persistent track label. If omitted, :class:MultiBernoulliTracker
assigns a monotonically increasing integer label automatically when the
component becomes active.
existence_probability = float(existence_probability)
instance-attribute
label = copy.deepcopy(label)
instance-attribute
single_target_filter = copy.deepcopy(single_target_state)
instance-attribute
dim
property
Return the state dimension of the component.
filter_state
property
Return the underlying single-target filter state.
__init__(existence_probability, single_target_state, label=None)
get_point_estimate()
Return the single-target point estimate.
MultiBernoulliTracker
Bases: AbstractMultitargetTracker
Approximate multi-Bernoulli tracker for linear/Gaussian models.
The exact multi-Bernoulli update generally yields a multi-Bernoulli mixture. To keep the implementation lightweight and close to the rest of PyRecEst, this tracker retains a single multi-Bernoulli posterior via a best-assignment approximation similar in spirit to the existing nearest-neighbor tracker.
Measurements are represented as arrays with shape (m, num_measurements).
The measurement matrix has shape (m, n), where n is the single-target
state dimension and m is the measurement dimension.
tracker_param = tracker_param
instance-attribute
birth_components = []
instance-attribute
bernoulli_components = []
instance-attribute
filter_state
property
writable
Return a deep copy of the active Bernoulli components.
dim
property
Return the single-target state dimension n.
__init__(initial_prior=None, tracker_param=None, birth_components=None, log_prior_estimates=True, log_posterior_estimates=True)
Create a multi-Bernoulli tracker.
Parameters
initial_prior : iterable, optional
Initial Bernoulli components. Each item can be a
:class:BernoulliComponent, (existence_probability, state),
(existence_probability, state, label), or a single-target state
accepted by :class:KalmanFilter.
tracker_param : dict, optional
Tracker parameters. Supported keys include survival and detection
probabilities, clutter intensity, gating settings, pruning and
capping limits, and optional birth settings.
birth_components : iterable, optional
Bernoulli components appended during prediction.
log_prior_estimates : bool, optional
If true, store prior estimates after prediction.
log_posterior_estimates : bool, optional
If true, store posterior estimates after update.
get_number_of_components()
Return the number of Bernoulli components.
get_existence_probabilities()
Return the existence probabilities of all Bernoulli components.
get_cardinality_distribution()
Return the cardinality PMF implied by the Bernoulli components.
get_expected_number_of_targets()
Return the expected cardinality of the multi-Bernoulli posterior.
get_number_of_targets()
Return the MAP cardinality estimate.
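The cardinality PMF of independent Bernoulli components is the convolution of the individual Bernoulli(r_i) distributions; a sketch with our own helper name:

```python
def cardinality_pmf(existence_probabilities):
    pmf = [1.0]  # zero components: cardinality 0 with probability 1
    for r in existence_probabilities:
        new = [0.0] * (len(pmf) + 1)
        for k, p in enumerate(pmf):
            new[k] += p * (1.0 - r)  # this component does not exist
            new[k + 1] += p * r      # this component exists
        pmf = new
    return pmf

pmf = cardinality_pmf([0.9, 0.5])
map_cardinality = pmf.index(max(pmf))
```

The MAP cardinality estimate used by get_number_of_targets is the argmax of this PMF, and the expected cardinality is simply the sum of the existence probabilities.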
get_component_labels()
Return the labels of all active Bernoulli components.
get_labeled_components(copy_components=True)
Return active Bernoulli components keyed by track label.
Parameters
copy_components : bool, optional
If true, return deep copies. If false, return references to the tracker-owned components.
get_component_by_label(label, copy_component=True)
Return a Bernoulli component by track label.
Parameters
label : hashable
Track label to retrieve.
copy_component : bool, optional
If true, return a deep copy. If false, return the tracker-owned component.
get_track_labels(number_of_targets=None)
Return the labels of the extracted target states.
Parameters
number_of_targets : int, optional
Number of most likely target states to extract. If omitted, use the MAP cardinality estimate.
get_point_estimate(flatten_vector=False, number_of_targets=None)
Return extracted target states.
The state extraction follows a common multi-Bernoulli convention: the MAP cardinality is used, and the states of the Bernoulli components with the highest existence probabilities are returned.
Parameters
flatten_vector : bool, optional
If true, flatten the output into a one-dimensional vector.
number_of_targets : int, optional
Number of most likely target states to extract. If omitted, use the MAP cardinality estimate.
Returns
array-like, shape (n, number_of_targets)
Extracted point estimates. If flatten_vector is true, the result
is flattened.
get_labeled_point_estimate(flatten_vector=False, number_of_targets=None)
Return extracted target states together with persistent labels.
Returns
tuple
(labels, point_estimates) where labels is a list of persistent
track labels and point estimates has shape (n, number_of_targets)
unless flatten_vector is true.
prune(pruning_threshold=None)
Remove Bernoulli components with low existence probability.
cap(maximum_number_of_components=None)
Keep only the Bernoulli components with the highest existence probability.
predict_linear(system_matrices, sys_noises, inputs=None, birth_components=None)
Predict all Bernoulli components with a linear/Gaussian model.
Parameters
system_matrices : array-like, shape (n, n) or (n, n, num_components)
Shared or per-component state-transition matrices.
sys_noises : array-like or GaussianDistribution
Shared process-noise covariance with shape (n, n),
per-component covariances with shape (n, n, num_components), or
a zero-mean :class:GaussianDistribution.
inputs : array-like, optional
Shared input with shape (n,) or per-component inputs with shape
(n, num_components).
birth_components : iterable, optional
Components appended after prediction instead of the tracker's stored
birth components.
find_association(measurements, measurement_matrix, cov_mats_meas)
Find the best measurement-to-Bernoulli association.
Parameters
measurements : array-like, shape (m,) or (m, num_measurements)
Measurement vector or measurement matrix.
measurement_matrix : array-like, shape (m, n)
Linear map from state vectors to measurement vectors.
cov_mats_meas : array-like, shape (m, m) or (m, m, num_measurements)
Shared or per-measurement covariance matrices.
update_linear(measurements, measurement_matrix, cov_mats_meas)
Update the tracker with linear/Gaussian measurements.
Parameters
measurements : array-like, shape (m,) or (m, num_measurements)
Measurement vector or matrix whose columns are individual measurements.
measurement_matrix : array-like, shape (m, n)
Linear map from state vectors to measurement vectors.
cov_mats_meas : array-like, shape (m, m) or (m, m, num_measurements)
Shared measurement-noise covariance or per-measurement covariance matrices.
FixedLagBuffer
Chronologically ordered buffer with optional fixed-lag trimming.
Parameters
max_lag : float, optional
Maximum retained lag relative to the largest timestamp in the buffer.
maxlen : int, optional
Maximum number of retained entries after lag trimming.
copy_values : bool, optional
If true, values are deep-copied when inserted and returned.
max_lag = None if max_lag is None else float(max_lag)
instance-attribute
maxlen = None if maxlen is None else int(maxlen)
instance-attribute
copy_values = bool(copy_values)
instance-attribute
items
property
Return buffered items in chronological order.
latest_time
property
Return the largest buffered timestamp, or None if empty.
cutoff_time
property
Return the current fixed-lag cutoff, or None if unavailable.
__init__(max_lag=None, maxlen=None, *, copy_values=True)
__len__()
append(time, value)
Append value at time and return its timestamped record.
clear()
Remove all buffered entries.
is_out_of_sequence(time)
Return true if time precedes the current latest timestamp.
is_within_lag(time)
Return true if time is acceptable under the fixed-lag window.
latest_at_or_before(time)
Return the latest record with timestamp at or before time.
items_after(time)
Return all records with timestamp strictly greater than time.
items_at_or_after(time)
Return all records with timestamp greater than or equal to time.
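The trimming rule can be sketched in a few lines of plain Python. This is a simplified stand-in for FixedLagBuffer that keeps only the sorted-insert and lag-cutoff logic, not the full class:

```python
import bisect

# Minimal sketch of fixed-lag trimming: entries stay sorted by time, and
# everything older than latest_time - max_lag is dropped after each append.

class MiniFixedLagBuffer:
    def __init__(self, max_lag=None):
        self.max_lag = max_lag
        self._times = []
        self._values = []

    def append(self, time, value):
        i = bisect.bisect_right(self._times, time)  # keep chronological order
        self._times.insert(i, time)
        self._values.insert(i, value)
        if self.max_lag is not None:
            cutoff = self._times[-1] - self.max_lag
            while self._times and self._times[0] < cutoff:
                self._times.pop(0)
                self._values.pop(0)

    def is_out_of_sequence(self, time):
        return bool(self._times) and time < self._times[-1]

    @property
    def items(self):
        return list(zip(self._times, self._values))

buf = MiniFixedLagBuffer(max_lag=2.0)
for t in (0.0, 1.0, 0.5, 3.0):   # 0.5 arrives out of sequence
    buf.append(t, f"z{t}")
# After t=3.0 the cutoff is 1.0, so the entries at 0.0 and 0.5 are trimmed.
```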
MeasurementRecord
dataclass
Timestamped measurement plus optional model and metadata.
time
instance-attribute
measurement
instance-attribute
measurement_model = None
class-attribute
instance-attribute
metadata = None
class-attribute
instance-attribute
sequence = 0
class-attribute
instance-attribute
__init__(time, measurement, measurement_model=None, metadata=None, sequence=0)
MeasurementTimeBuffer
Small helper for tracking measurement timestamps and OOSM status.
latest_time
property
cutoff_time
property
measurements
property
Return buffered measurements in chronological order.
append = add
class-attribute
instance-attribute
__init__(max_lag=None, maxlen=None, *, copy_values=True)
__len__()
add(time, measurement, measurement_model=None, **metadata)
Store a measurement and return the resulting record.
is_out_of_sequence(time)
is_within_lag(time)
clear()
OutOfSequenceKalmanUpdater
Bases: _EventReplayMixin
Fixed-lag OOSM processor for :class:KalmanFilter.
__init__(kalman_filter, initial_time=0.0, max_lag=None)
predict_linear(time, system_matrix, sys_noise_cov, sys_input=None)
Record/apply a timestamped linear-Gaussian prediction.
predict_model(time, transition_model)
Record/apply a timestamped structural transition-model prediction.
update_linear(time, measurement, measurement_matrix, meas_noise, *, return_diagnostics=False, scale=1.0, action='updated')
Record/apply a timestamped linear-Gaussian measurement update.
update_linear_robust(time, measurement, measurement_matrix, meas_noise, *, robust_update='student-t', gate_threshold=None, student_t_dof=4.0, huber_threshold=2.0, inflation_alpha=1.0, return_diagnostics=False)
Record/apply a timestamped robust linear-Gaussian update.
update_model(time, measurement_model, measurement, *, return_diagnostics=False, scale=1.0, action='updated')
Record/apply a timestamped structural measurement-model update.
update_model_robust(time, measurement_model, measurement, **kwargs)
Record/apply a timestamped robust structural measurement-model update.
OutOfSequenceParticleUpdater
Bases: _EventReplayMixin
Fixed-lag OOSM processor for particle filters.
Replaying stochastic transition models draws new process noise. For bitwise reproducibility, use deterministic transitions or caller-controlled random seeds.
update_with_likelihood = update_nonlinear_using_likelihood
class-attribute
instance-attribute
__init__(particle_filter, initial_time=0.0, max_lag=None)
predict_model(time, transition_model)
Record/apply a timestamped particle transition-model prediction.
predict_nonlinear(time, f, noise_distribution=None, function_is_vectorized=True, shift_instead_of_add=None)
Record/apply a timestamped nonlinear particle prediction.
update_model(time, measurement_model, measurement=None)
Record/apply a timestamped particle measurement-model update.
update_nonlinear_using_likelihood(time, likelihood, measurement=None)
Record/apply a timestamped likelihood-based particle update.
OutOfSequenceResult
dataclass
Result returned by an out-of-sequence replay helper.
time
instance-attribute
final_time
instance-attribute
out_of_sequence
instance-attribute
replayed_event_count
instance-attribute
accepted = True
class-attribute
instance-attribute
diagnostics = None
class-attribute
instance-attribute
filter_state = None
class-attribute
instance-attribute
__init__(time, final_time, out_of_sequence, replayed_event_count, accepted=True, diagnostics=None, filter_state=None)
TimestampedItem
dataclass
A value stored with an ordered scalar timestamp.
time
instance-attribute
value
instance-attribute
sequence = 0
class-attribute
instance-attribute
__init__(time, value, sequence=0)
PartitionedSO3ProductParticleFilter
Bases: SO3ProductParticleFilter
Particle filter for SO(3)^K with independent partition weights.
The filter approximates the product-state posterior by a product over a
user-supplied partition of the K SO(3) components. Each block keeps its
own particle weights and may be resampled independently. This preserves
correlations inside each block while avoiding the degeneracy of a single
global weight vector in high-dimensional product spaces.
The inherited weights property remains available as a normalized average
of the block weights for API compatibility. Point estimates, block ESS, and
block resampling use the block-specific weights.
partition = self._validate_partition(partition, self.num_rotations)
instance-attribute
block_weights
property
Return normalized block weights with shape (n_blocks, n_particles).
__init__(n_particles, num_rotations, partition=None, initial_particles=None, weights=None, block_weights=None)
set_block_weights(block_weights)
Replace the block weights and refresh the compatibility weights.
set_particles(particles, weights=None, block_weights=None)
Replace particles and optionally global or block weights.
component_weights(component_idx)
Return the weight vector used for one SO(3) component.
block_effective_sample_size()
Return one effective sample size per partition block.
effective_sample_size()
Return the mean block effective sample size.
mean()
Return the component-wise chordal mean using block-specific weights.
mode()
Return a block-wise modal product particle.
Since the posterior is represented as a product over blocks, the returned point may be a hybrid assembled from different source particles.
get_point_estimate()
Return the component-wise SO(3) mean.
resample_block_systematic(block_index)
Systematically resample one partition block and reset its weights.
resample_blocks_systematic(block_indices=None)
Systematically resample selected blocks and reset their weights.
update_with_block_likelihoods(likelihood, measurement=None, resample=True, ess_threshold=None)
Update block weights from nonnegative block likelihoods.
The likelihood must evaluate to an array shaped
(n_blocks, n_particles). Each row updates the corresponding block's
weights independently.
update_with_block_log_likelihoods(log_likelihood, measurement=None, resample=True, ess_threshold=None)
Update block weights from block log-likelihoods.
The log-likelihood must evaluate to an array shaped
(n_blocks, n_particles). Each row updates the corresponding block's
weights independently using log-sum-exp normalization.
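The per-row log-sum-exp normalization can be sketched for a single block. This helper is hypothetical; it shows the numerically stable weight update applied to each row of the (n_blocks, n_particles) array:

```python
import math

# Sketch of one block's log-likelihood weight update: multiply weights by
# likelihoods in log space, then normalize with the log-sum-exp trick.

def update_block_weights(log_weights, log_likelihoods):
    """Return normalized weights proportional to w_i * L_i."""
    unnorm = [lw + ll for lw, ll in zip(log_weights, log_likelihoods)]
    m = max(unnorm)                        # subtract the max for stability
    total = m + math.log(sum(math.exp(u - m) for u in unnorm))
    return [math.exp(u - total) for u in unnorm]

log_w = [math.log(0.25)] * 4               # uniform prior weights
new_w = update_block_weights(log_w, [-1000.0, 0.0, 0.0, -1000.0])
# The two well-supported particles split the posterior weight evenly.
```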
update_with_component_likelihoods(component_likelihoods, *, resample=True, ess_threshold=None)
Update from per-component likelihoods shaped (n_particles, K).
update_with_component_log_likelihoods(component_log_likelihoods, *, resample=True, ess_threshold=None)
Update from per-component log-likelihoods shaped (n_particles, K).
update_with_geodesic_log_likelihood(measurement, noise_std=None, *, component_noise_std=None, mask=None, confidence=None, max_noise_std=None, confidence_exponent=1.0, outlier_prob=0.0, resample=True, ess_threshold=None)
Update partition weights with masked component geodesic log-likelihoods.
update_with_geodesic_likelihood(measurement, noise_std, *, component_noise_std=None, mask=None, confidence=None, max_noise_std=None, confidence_exponent=1.0, outlier_prob=0.0, resample=True, ess_threshold=None)
Update with masked geodesic likelihoods per partition block.
This preserves the existing likelihood-space API while delegating to the log-likelihood implementation for numerical stability.
PiecewiseConstantFilter
Bases: AbstractFilter, CircularFilterMixin
A filter based on a piecewise constant distribution on the circle.
The state is represented as a PiecewiseConstantDistribution over L equal intervals of [0, 2*pi).
References
Gerhard Kurz, Florian Pfaff, Uwe D. Hanebeck, Discrete Recursive Bayesian Filtering on Intervals and the Unit Circle, Proceedings of the 2016 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI 2016), Baden-Baden, Germany, September 2016.
filter_state
property
writable
Expose the parent property so we can attach a setter to it.
__init__(n)
Initialize the filter with a uniform distribution over n intervals.
Parameters
n : int
Number of discretization intervals.
predict(sys_matrix)
Perform prediction step based on a transition matrix.
Parameters
sys_matrix : array_like, shape (L, L)
System/transition matrix. Entry (j, i) gives the probability of transitioning from interval i to interval j.
update(meas_matrix, z)
Perform measurement update based on a measurement matrix.
Parameters
meas_matrix : array_like, shape (Lw, L)
Measurement matrix. Row z_row gives the likelihoods for each state interval when the measurement falls in measurement interval z_row.
z : scalar
Measurement in [0, 2*pi).
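The underlying recursion is a discrete Bayes filter on a length-L probability vector: prediction is a matrix-vector product with the transition matrix, and the update is an elementwise multiplication by the likelihoods followed by renormalization. A minimal sketch (not the library code):

```python
# Sketch of the discrete predict/update recursion on a length-L
# probability vector over the circle's intervals.

def predict(p, sys_matrix):
    """p_next[j] = sum_i sys_matrix[j][i] * p[i]."""
    L = len(p)
    return [sum(sys_matrix[j][i] * p[i] for i in range(L)) for j in range(L)]

def update(p, likelihood_row):
    """Multiply elementwise by the likelihoods and renormalize."""
    post = [pi * li for pi, li in zip(p, likelihood_row)]
    s = sum(post)
    return [x / s for x in post]

# Rotate all mass by one interval, then observe interval 1.
shift = [[0, 0, 0, 1], [1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]]
p = predict([1.0, 0.0, 0.0, 0.0], shift)   # mass moves to interval 1
p = update(p, [0.1, 0.8, 0.1, 0.0])
```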
update_likelihood(likelihood, z)
Perform measurement update using a likelihood function.
Parameters
likelihood : callable
Function likelihood(z, x) returning f(z | x), where z is the
measurement and x is the state value. Maps Z x [0, 2*pi) ->
[0, infinity).
z : arbitrary
Measurement.
get_point_estimate()
Return the mean direction of the filter state.
calculate_system_matrix_numerically(L, a, noise_distribution)
staticmethod
Obtain the system matrix by 2-D numerical integration from a system function.
Parameters
L : int
Number of discretization intervals.
a : callable
System function x_{k+1} = a(x_k, w_k). Must accept scalar
arguments and return a scalar.
noise_distribution : AbstractCircularDistribution
Distribution of the process noise, defined on [0, 2*pi).
Returns
A : ndarray, shape (L, L)
System transition matrix. Entry (j, i) is the probability of transitioning from state interval i to state interval j.
calculate_measurement_matrix_numerically(L, l_meas, h, noise_distribution)
staticmethod
Obtain the measurement matrix by 2-D numerical integration from a measurement function.
Parameters
L : int
Number of discretization intervals for the state.
l_meas : int
Number of discretization intervals for the measurement.
h : callable
Measurement function z_k = h(x_k, v_k). Must accept scalar
arguments and return a scalar.
noise_distribution : AbstractCircularDistribution
Distribution of the measurement noise, defined on [0, 2*pi).
Returns
H : ndarray, shape (l_meas, L)
Measurement matrix. Entry (i, j) is the probability that the measurement falls in measurement interval i given that the state is in state interval j.
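A crude Monte Carlo stand-in for the 2-D numerical integration illustrates how the entries of H arise: evaluate h at each state-interval midpoint under sampled noise and histogram the results over the measurement intervals. This is a sketch, not the library's quadrature:

```python
import math

# Entry (i, j) approximates the probability that h(x, v) lands in
# measurement interval i when x sits at the midpoint of state interval j,
# with v drawn from the given noise samples. (The library integrates over
# whole intervals; midpoints keep the sketch short.)

def measurement_matrix(L, l_meas, h, noise_samples):
    two_pi = 2.0 * math.pi
    H = [[0.0] * L for _ in range(l_meas)]
    for j in range(L):
        x = (j + 0.5) * two_pi / L             # state-interval midpoint
        for v in noise_samples:
            z = h(x, v) % two_pi
            i = int(z / (two_pi / l_meas)) % l_meas
            H[i][j] += 1.0 / len(noise_samples)
    return H

# With identity dynamics and zero noise, each column puts all mass on
# the matching measurement interval.
H = measurement_matrix(4, 4, lambda x, v: x + v, [0.0])
```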
RandomMatrixTracker
Bases: AbstractExtendedObjectTracker
kinematic_state = kinematic_state
instance-attribute
covariance = covariance
instance-attribute
extent = extent
instance-attribute
alpha = 0
instance-attribute
kinematic_state_to_pos_matrix = kinematic_state_to_pos_matrix
instance-attribute
__init__(kinematic_state, covariance, extent, kinematic_state_to_pos_matrix=None, log_prior_estimates=False, log_posterior_estimates=False, log_prior_extent=False, log_posterior_extent=False)
get_point_estimate()
get_point_estimate_kinematics()
get_point_estimate_extent(flatten_matrix=False)
predict(dt, Cw, tau, system_matrix)
update(measurements, meas_mat, meas_noise_cov)
plot_point_estimate(scaling_factor=1, color=(0, 0.447, 0.741))
get_contour_points(n)
SE2UKF
Bases: AbstractFilter, SE2FilterMixin
Unscented Kalman Filter for planar rigid-body motion on SE(2).
The state is represented as a :class:~pyrecest.distributions.GaussianDistribution
over the 4-D dual-quaternion embedding of SE(2). The first two
entries of the mean encode the rotation (and must satisfy
||mu[0:2]|| == 1); the last two entries encode the translation.
Reference: A Stochastic Filter for Planar Rigid-Body Motions, Igor Gilitschenski, Gerhard Kurz, and Uwe D. Hanebeck, IEEE MFI 2015.
filter_state
property
writable
__init__()
predict_identity(gauss_sys)
Predict with a left-multiplicative noise model on SE(2).
The motion model is::
x_{t+1} = v [⊕] x_t
where v is the system noise. The mean of gauss_sys
encodes the (dual-quaternion) noise mean; the noise is assumed
zero-mean in the manifold sense.
Parameters
gauss_sys : GaussianDistribution
System noise distribution. Must have a 4-D mean (first two entries normalised) and a 4×4 covariance.
update_identity(gauss_meas, z)
Incorporate a dual-quaternion measurement.
The measurement model is::
z = x [⊕] v
where v is the measurement noise.
Parameters
gauss_meas : GaussianDistribution
Measurement noise distribution. Must have a 4-D mean (first two entries normalised) and a 4×4 covariance.
z : array_like, shape (4,)
Measurement in dual-quaternion representation.
get_point_estimate()
Return the mean of the current state estimate.
SO3ProductParticleFilter
Bases: HyperhemisphereCartProdParticleFilter
Particle filter for states on SO(3)^K.
Particles are exposed as scalar-last unit quaternions with shape
(n_particles, num_rotations, 4). Internally, the filter stores them in
the generic hyperhemisphere Cartesian-product particle filter with
dim_hemisphere=3.
num_rotations = int(num_rotations)
instance-attribute
n_particles
property
particles
property
Return particles with shape (n_particles, num_rotations, 4).
weights
property
Return normalized particle weights.
__init__(n_particles, num_rotations, initial_particles=None, weights=None)
confidence_to_noise_std(confidence, noise_std, max_noise_std, *, confidence_exponent=1.0, mask=None)
staticmethod
Map detector confidence values in [0, 1] to SO(3) noise scales.
The mapping is
sigma(c)^2 = noise_std^2 + (1 - c)^confidence_exponent *
(max_noise_std^2 - noise_std^2).
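The mapping can be transcribed directly as a free function (a sketch mirroring the formula above, not the library's staticmethod):

```python
import math

# sigma(c)^2 = noise_std^2
#            + (1 - c)^confidence_exponent * (max_noise_std^2 - noise_std^2)

def confidence_to_noise_std(confidence, noise_std, max_noise_std,
                            confidence_exponent=1.0):
    var = noise_std**2 + (1.0 - confidence) ** confidence_exponent * (
        max_noise_std**2 - noise_std**2
    )
    return math.sqrt(var)

# Full confidence keeps the base noise scale; zero confidence saturates
# at max_noise_std; intermediate values interpolate in variance.
sigma = confidence_to_noise_std(0.5, 0.1, 0.5)
```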
set_particles(particles, weights=None)
Replace particles and optionally weights.
mean()
Return the component-wise chordal mean product rotation.
mode()
Return the highest-weight product particle.
get_point_estimate()
Return the component-wise SO(3) mean.
effective_sample_size()
Return the particle effective sample size.
resample_systematic()
Systematically resample particles and reset weights to uniform.
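Systematic resampling is the standard single-draw scheme: one uniform offset shared by n evenly spaced pointers into the cumulative weights. A sketch with an optional fixed offset u0 for reproducibility (not the library's implementation):

```python
import random

# Textbook systematic resampling: one uniform draw in [0, 1/n), then n
# evenly spaced pointers walked along the cumulative weight function.

def systematic_resample(weights, u0=None, rng=random):
    n = len(weights)
    if u0 is None:
        u0 = rng.random() / n          # single draw shared by all pointers
    positions = [u0 + i / n for i in range(n)]
    indices, cumulative, j = [], weights[0], 0
    for p in positions:
        while p > cumulative:
            j += 1
            cumulative += weights[j]
        indices.append(j)
    return indices

# A particle carrying half the weight occupies about half the slots.
idx = systematic_resample([0.5, 0.25, 0.25], u0=0.1)
```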
predict_with_tangent_delta(tangent_delta, tangent_noise_covariance=None)
Apply tangent-space deltas and optional tangent Gaussian noise.
predict_identity(noise_distribution=None)
Predict with identity dynamics and optional tangent Gaussian noise.
predict_nonlinear(f, noise_distribution=None, function_is_vectorized=True, shift_instead_of_add=True)
Apply a nonlinear transition on product particles.
update_with_likelihood(likelihood, measurement=None, resample=True, ess_threshold=None)
Update weights from a likelihood evaluated on product particles.
update_with_log_likelihood(log_likelihood, measurement=None, resample=True, ess_threshold=None)
Update weights from log-likelihoods evaluated on product particles.
component_geodesic_log_likelihood(measurement, noise_std=None, *, component_noise_std=None, mask=None, confidence=None, max_noise_std=None, confidence_exponent=1.0, outlier_prob=0.0)
Return per-component masked geodesic log-likelihoods.
The returned array has shape (n_particles, num_rotations). Masks
deactivate components, confidence values in [0, 1] scale each
component's log-likelihood contribution, component_noise_std supplies
heteroskedastic per-component noise, and outlier_prob adds an
unnormalized constant likelihood floor for robustness.
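For a single component, the geodesic log-likelihood described above can be sketched for scalar-last unit quaternions: the geodesic angle on SO(3) is theta = 2 arccos(|⟨q, m⟩|), scored under a zero-mean Gaussian with the given noise scale, with an optional constant floor for outlier robustness. The library's exact normalization and floor handling may differ:

```python
import math

# Sketch: geodesic angle between two unit quaternions (scalar-last),
# Gaussian log-score, and an unnormalized outlier likelihood floor.

def geodesic_log_likelihood(q, m, noise_std, outlier_prob=0.0):
    dot = abs(sum(a * b for a, b in zip(q, m)))   # |<q, m>| handles the double cover
    theta = 2.0 * math.acos(min(1.0, dot))        # clamp for numerical safety
    log_gauss = -0.5 * (theta / noise_std) ** 2
    if outlier_prob > 0.0:
        return math.log(math.exp(log_gauss) + outlier_prob)
    return log_gauss

identity = (0.0, 0.0, 0.0, 1.0)       # scalar-last identity rotation
half_turn_z = (0.0, 0.0, 1.0, 0.0)    # 180 degrees about z
# A matching rotation scores 0; a half turn is heavily penalized.
```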
geodesic_log_likelihood(measurement, noise_std=None, *, component_noise_std=None, mask=None, confidence=None, max_noise_std=None, confidence_exponent=1.0, outlier_prob=0.0)
Return one masked geodesic log-likelihood per product particle.
update_with_geodesic_log_likelihood(measurement, noise_std=None, *, component_noise_std=None, mask=None, confidence=None, max_noise_std=None, confidence_exponent=1.0, outlier_prob=0.0, resample=True, ess_threshold=None)
Update with a masked, confidence-aware geodesic log-likelihood.
update_with_geodesic_likelihood(measurement, noise_std, *, component_noise_std=None, mask=None, confidence=None, max_noise_std=None, confidence_exponent=1.0, outlier_prob=0.0, resample=True, ess_threshold=None)
Update with a masked geodesic likelihood on SO(3)^K.
This method preserves the existing likelihood-space API and delegates to the log-likelihood implementation for numerical stability.
from_covariance_diagonal(n_particles, mean, covariance_diagonal)
staticmethod
Create a filter by sampling tangent noise around mean.
SphericalHarmonicsEOTTracker
Bases: AbstractExtendedObjectTracker
3-D star-convex EOT tracker with spherical-harmonic extent coefficients.
The state is [cx, cy, cz, c_00, c_1,-1, c_1,0, c_1,1, ...]. The
coefficients parameterize an unnormalized radial extent function
r(u) = sum_lm c_lm Y_lm(u) rather than a probability density. This
matches the spherical-harmonics tracker in the ICRA 2017 MATLAB code.
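For intuition, the radial extent truncated at degree 1 can be evaluated by hand with the packed coefficient order documented above, [c_00, c_1,-1, c_1,0, c_1,1]. The real-SH constants below use the common orthonormal convention, which may differ in sign or normalization from the library's:

```python
import math

# r(u) = sum_lm c_lm Y_lm(u), truncated at degree 1, for a Cartesian
# unit direction u = (x, y, z). Real-SH constants (orthonormal convention,
# assumed here): Y_00 = 1/(2 sqrt(pi)), Y_1m = sqrt(3/(4 pi)) * {y, z, x}.

def radius_degree1(coefficients, direction):
    x, y, z = direction
    c00, c1m1, c10, c11 = coefficients
    y00 = 0.5 * math.sqrt(1.0 / math.pi)
    k1 = math.sqrt(3.0 / (4.0 * math.pi))
    return c00 * y00 + c1m1 * k1 * y + c10 * k1 * z + c11 * k1 * x

# A sphere of radius 1 needs only c_00 = 1 / Y_00 = 2 * sqrt(pi);
# the degree-1 terms shift the shape along y, z, and x respectively.
r = radius_degree1((2.0 * math.sqrt(math.pi), 0.0, 0.0, 0.0), (0.0, 0.0, 1.0))
```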
order = int(order)
instance-attribute
n_coefficients = (self.order + 1) ** 2
instance-attribute
state_dim = 3 + self.n_coefficients
instance-attribute
coefficients = self._as_vector(coefficients, self.n_coefficients, 'coefficients')
instance-attribute
center = self._as_vector(center, 3, 'center')
instance-attribute
covariance = self._as_square_matrix(covariance, self.state_dim, 'covariance')
instance-attribute
ukf_alpha = float(ukf_alpha)
instance-attribute
ukf_beta = float(ukf_beta)
instance-attribute
ukf_kappa = float(ukf_kappa)
instance-attribute
covariance_regularization = float(covariance_regularization)
instance-attribute
latest_innovation_covariance = None
instance-attribute
latest_predicted_measurement = None
instance-attribute
__init__(order, coefficients=None, center=None, covariance=None, coefficient_covariance=0.02, kinematic_covariance=0.3, initial_radius=1.0, ukf_alpha=1.0, ukf_beta=2.0, ukf_kappa=0.0, covariance_regularization=1e-09, log_prior_estimates=False, log_posterior_estimates=False, log_prior_extents=False, log_posterior_extents=False)
coefficients_to_matrix(coefficients)
staticmethod
Convert packed real SH coefficients to PyRecEst's coefficient matrix.
matrix_to_coefficients(coeff_mat)
staticmethod
Pack PyRecEst's real SH coefficient matrix degree by degree.
rotate_coefficients(coefficients, alpha, beta=0.0, gamma=0.0)
staticmethod
Rotate packed real SH coefficients by ZYZ Euler angles in radians.
evaluate_radius_from_coefficients(coefficients, directions)
staticmethod
Evaluate the raw radial SH extent for Cartesian unit directions.
evaluate_radius(directions)
Evaluate the current radial extent for Cartesian directions.
surface_points_for_directions(directions, center=None, coefficients=None)
Return object surface points for Cartesian rays from center.
measurement_function(state, measurements)
MATLAB-equivalent stacked point measurement equation.
get_point_estimate()
get_point_estimate_kinematics()
get_point_estimate_extent(flatten_matrix=False)
get_extents_on_grid(n=100)
get_contour_points(n=100)
predict_identity(sys_noise=None)
predict_linear(system_matrix, sys_noise=None, inputs=None)
predict_nonlinear(transition_function, sys_noise=None)
predict_rotation(alpha, beta=0.0, gamma=0.0, sys_noise=None)
predict(*args, **kwargs)
update(measurements, meas_noise_cov)
Update from one or more 3-D point measurements.
The measurement equation is the one used in the MATLAB
SphericalHarmonicsAdditiveMeasmodel: each observed point defines a
bearing from the hypothesized center, and the predicted measurement is
the surface point at the current SH radius along that bearing.
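The per-point predicted measurement described above can be sketched directly; radius_fn is a hypothetical stand-in for the SH radius evaluation:

```python
import math

# Sketch of the bearing-based measurement equation: the observed point
# defines a unit bearing u from the hypothesized center, and the predicted
# measurement is the surface point center + r(u) * u.

def predicted_measurement(center, radius_fn, measured_point):
    d = [m - c for m, c in zip(measured_point, center)]
    norm = math.sqrt(sum(x * x for x in d))
    u = [x / norm for x in d]                 # bearing from the center
    r = radius_fn(u)                          # SH radius along that bearing
    return [c + r * ui for c, ui in zip(center, u)]

# For a spherical extent of radius 2, any measured point maps to the
# surface point 2 units from the center along its bearing.
z_hat = predicted_measurement((0.0, 0.0, 0.0), lambda u: 2.0, (0.0, 0.0, 5.0))
```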
StateSpaceSubdivisionFilter
Bases: AbstractFilter, HypercylindricalFilterMixin
Filter for state spaces that are a Cartesian product of a periodic/bounded manifold (represented by a grid distribution) and a linear space (represented by per-grid-point Gaussians).
The filter state is a :class:StateSpaceSubdivisionGaussianDistribution.
This is the Python port of StateSpaceSubdivisionFilter from libDirectional.
filter_state
property
writable
__init__(initial_state)
predict_linear(transition_density=None, covariance_matrices=None, system_matrices=None, linear_input_vectors=None)
Perform the prediction step.
Parameters
transition_density : AbstractConditionalDistribution or None
Conditional grid distribution f(next | current) for the
periodic/bounded part, where grid_values[i, j] = f(next=i | current=j).
If None, the periodic transition is assumed to be a Dirac
(identity), and only the linear part is updated.
covariance_matrices : array or None
Process-noise covariance(s) for the linear part.
Shape (lin_dim, lin_dim) for a single matrix applied to all
areas, or (lin_dim, lin_dim, n_areas) for per-area matrices
(indexed by the prior area in Case 3, or the current area in
Case 2). None means no additive noise.
system_matrices : array or None
System matrix/matrices for the linear part.
Shape (lin_dim, lin_dim) for a single matrix, or
(lin_dim, lin_dim, n_areas) for per-area matrices (indexed
by the prior area). None means the identity matrix.
linear_input_vectors : array or None
Deterministic input vector(s) for the linear part.
Shape (lin_dim,) for a single vector or
(lin_dim, n_areas) for per-area vectors (indexed by the
prior area). None means zero input.
update(likelihood_periodic_grid=None, likelihoods_linear=None)
Perform the measurement update step.
Parameters
likelihood_periodic_grid : array or AbstractDistribution or None
Likelihood values on the periodic grid. If an
AbstractDistribution is given, it is evaluated at the grid
points. Must have the same shape as filter_state.gd.grid_values
or be None (uniform likelihood over the bounded domain).
likelihoods_linear : list of GaussianDistribution or None
Gaussian likelihood(s) for the linear part. Either a list with
one element (applied to all areas) or a list with as many elements
as there are grid points. None means uniform likelihood.
get_estimate()
Return the current filter state.
get_point_estimate()
Return the hybrid mean (periodic mean + linear mean).
ToroidalParticleFilter
Bases: HypertoroidalParticleFilter, ToroidalFilterMixin
__init__(n_particles)
ToroidalWrappedNormalFilter
Bases: AbstractFilter, ToroidalFilterMixin
Filter based on the bivariate wrapped normal distribution.
References
Kurz, G., Gilitschenski, I., Dolgov, M., & Hanebeck, U. D. (2014). Bivariate Angular Estimation Under Consideration of Dependencies Using Directional Statistics. Proceedings of the 53rd IEEE Conference on Decision and Control.
Kurz, G., Pfaff, F., & Hanebeck, U. D. (2017). Nonlinear Toroidal Filtering Based on Bivariate Wrapped Normal Distributions. Proceedings of the 20th International Conference on Information Fusion.
__init__()
predict_identity(twn_sys)
AssociationResult
dataclass
Output of a data-association step.
All indices refer to the active-track list passed to the associator and
the measurement list passed into :meth:TrackManager.step.
matches = field(default_factory=list)
class-attribute
instance-attribute
unmatched_track_indices = None
class-attribute
instance-attribute
unmatched_measurement_indices = None
class-attribute
instance-attribute
cost_matrix = None
class-attribute
instance-attribute
__init__(matches=list(), unmatched_track_indices=None, unmatched_measurement_indices=None, cost_matrix=None)
Track
dataclass
Container for one managed track.
track_id
instance-attribute
single_target_filter
instance-attribute
status = TrackStatus.TENTATIVE
class-attribute
instance-attribute
hits = 1
class-attribute
instance-attribute
misses = 0
class-attribute
instance-attribute
age = 1
class-attribute
instance-attribute
first_step = 0
class-attribute
instance-attribute
last_step = 0
class-attribute
instance-attribute
metadata = field(default_factory=dict)
class-attribute
instance-attribute
event_history = field(default_factory=list)
class-attribute
instance-attribute
history
property
writable
Backward-compatible alias for event_history.
dim
property
Return the state dimension of the underlying filter.
filter_state
property
Return the current filter state.
is_alive
property
Return whether the track is still active.
is_confirmed
property
Return whether the track is confirmed.
get_point_estimate()
Return the current point estimate of the track.
__init__(track_id, single_target_filter, status=TrackStatus.TENTATIVE, hits=1, misses=0, age=1, first_step=0, last_step=0, metadata=dict(), event_history=list())
TrackManager
Bases: AbstractMultitargetTracker
Explicit lifecycle manager around a bank of single-target filters.
The manager does not assume a specific measurement modality or cost model. Instead it delegates problem-specific logic to small user-supplied hooks:
predictor(track, **kwargs)
Advances the underlying filter state.
associator(tracks, measurements, **kwargs) -> AssociationResult
Returns the chosen associations.
updater(track, measurement, measurement_index=None, **kwargs)
Updates a matched track using one measurement.
initiator(measurement, measurement_index=None, **kwargs) -> filter
Creates a new single-target filter from an unmatched measurement.
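The division of labor among the four hooks can be illustrated with self-contained toy versions operating on plain dicts. A real TrackManager passes Track objects, wraps the association in AssociationResult, and handles confirmation/deletion; the hook names below match the manager's, but everything else (scalar state, gate, cost model) is ours:

```python
# Toy hooks: scalar state with a drift term, nearest-neighbor gating,
# averaging update, and initiation from unmatched measurements.

GATE = 1.0

def predictor(track):
    track["x"] += track["v"]                     # advance the state

def associator(tracks, measurements):
    matches, unmatched_meas, used = [], [], set()
    for j, z in enumerate(measurements):
        best = min(
            (i for i in range(len(tracks)) if i not in used),
            key=lambda i: abs(tracks[i]["x"] - z),
            default=None,
        )
        if best is not None and abs(tracks[best]["x"] - z) <= GATE:
            matches.append((best, j))
            used.add(best)
        else:
            unmatched_meas.append(j)
    return matches, unmatched_meas

def updater(track, measurement):
    track["x"] = 0.5 * track["x"] + 0.5 * measurement   # crude average

def initiator(measurement):
    return {"x": measurement, "v": 1.0}

# One lifecycle step per scan: predict, associate, update, initiate.
tracks = [initiator(0.0)]
for scan in ([1.1], [2.0, 10.0]):
    for t in tracks:
        predictor(t)
    matches, unmatched = associator(tracks, scan)
    for i, j in matches:
        updater(tracks[i], scan[j])
    tracks += [initiator(scan[j]) for j in unmatched]
# The second scan's far-away measurement (10.0) spawns a second track.
```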
predictor = predictor
instance-attribute
updater = updater
instance-attribute
initiator = initiator
instance-attribute
associator = associator
instance-attribute
n_init = int(n_init)
instance-attribute
max_misses = int(max_misses)
instance-attribute
allow_births = bool(allow_births)
instance-attribute
confirm_condition = confirm_condition
instance-attribute
delete_condition = delete_condition
instance-attribute
track_metadata_initializer = track_metadata_initializer
instance-attribute
extract_confirmed_only = bool(extract_confirmed_only)
instance-attribute
keep_history = bool(keep_history)
instance-attribute
tracks = []
instance-attribute
dim
property
Return the state dimension of the first active track.
filter_state
property
writable
Return copies of the filter states of the extracted tracks.
__init__(predictor=None, updater=None, initiator=None, associator=None, n_init=2, max_misses=1, allow_births=True, confirm_condition=None, delete_condition=None, track_metadata_initializer=None, extract_confirmed_only=True, keep_history=True, log_prior_estimates=True, log_posterior_estimates=True)
get_tracks(confirmed_only=None, include_deleted=False)
Return managed tracks matching the requested visibility flags.
get_number_of_targets(confirmed_only=None)
Return the number of extracted tracks.
get_point_estimate(flatten_vector=False, confirmed_only=None)
Return stacked point estimates of the extracted tracks.
initialize_from_states(filters_or_states, step=0, confirmed=True, metadata_list=None)
Create tracks directly from filters or filter states.
initialize_from_measurements(measurements, step=0, confirmed=False, **initiation_kwargs)
Create tracks from measurements using self.initiator.
add_track(filter_or_state, step=0, status=TrackStatus.TENTATIVE, metadata=None, history_event='born')
Add a new track and return its track id.
purge_deleted_tracks()
Physically remove deleted tracks and return the number removed.
step(measurements, step=None, predict_kwargs=None, association_kwargs=None, update_kwargs=None, initiation_kwargs=None)
Run one complete lifecycle step.
clear_history(name=None)
TrackManagerStepResult
dataclass
Summary of one :meth:TrackManager.step call.
step
instance-attribute
matches = field(default_factory=list)
class-attribute
instance-attribute
missed_track_ids = field(default_factory=list)
class-attribute
instance-attribute
born_track_ids = field(default_factory=list)
class-attribute
instance-attribute
confirmed_track_ids = field(default_factory=list)
class-attribute
instance-attribute
deleted_track_ids = field(default_factory=list)
class-attribute
instance-attribute
unmatched_measurement_indices = field(default_factory=list)
class-attribute
instance-attribute
association = None
class-attribute
instance-attribute
__init__(step, matches=list(), missed_track_ids=list(), born_track_ids=list(), confirmed_track_ids=list(), deleted_track_ids=list(), unmatched_measurement_indices=list(), association=None)
TrackStatus
Bases: str, Enum
Lifecycle status of a track.
TENTATIVE = 'tentative'
class-attribute
instance-attribute
CONFIRMED = 'confirmed'
class-attribute
instance-attribute
DELETED = 'deleted'
class-attribute
instance-attribute
UKFOnManifolds
Bases: AbstractFilter
Unscented Kalman Filter on (parallelizable) Manifolds.
Implements the UKF-M algorithm that works for states living on smooth manifolds. The uncertainty is represented as a covariance matrix in the tangent space at the current state estimate.
The state can be any Python object (e.g. a numpy array, a rotation matrix,
a tuple representing a Lie group element). All manifold-specific
operations are provided by the user via the phi, phi_inv
callables.
Parameters
f:
Propagation (process) function with signature
f(state, omega, noise, dt) -> new_state.
noise is a 1-D numpy array of length q (noise dimension).
h:
Observation function with signature h(state) -> y where y is
a 1-D numpy array of length l.
phi:
Retraction (exponential-like map) with signature
phi(state, xi) -> new_state. xi is a 1-D numpy array in
the tangent space (length d).
phi_inv:
Inverse retraction with signature
phi_inv(state_ref, state) -> xi. Returns a 1-D numpy array.
Q:
Process noise covariance matrix, shape (q, q).
R:
Measurement noise covariance matrix, shape (l, l).
alpha:
Sigma-point spread parameters. Either a scalar (same value used for
all three weight sets) or a length-3 array-like
[alpha_d, alpha_q, alpha_u] where:
* ``alpha_d`` — propagation w.r.t. state uncertainty,
* ``alpha_q`` — propagation w.r.t. noise,
* ``alpha_u`` — update.
Typical value: ``1e-3``.
state0:
Initial state estimate (manifold element).
P0:
Initial covariance matrix, shape (d, d), where d is the
dimension of the tangent space / uncertainty.
Examples
A simple Euclidean example (identity manifold, phi = state + xi):
from pyrecest.backend import array, eye, zeros
from pyrecest.filters import UKFOnManifolds
f = lambda s, omega, w, dt: s + omega * dt + w
h = lambda s: s
phi = lambda s, xi: s + xi
phi_inv = lambda s_ref, s: s - s_ref
Q = eye(2) * 0.1
R = eye(2) * 0.5
s0 = zeros(2)
P0 = eye(2)
ukf = UKFOnManifolds(f, h, phi, phi_inv, Q, R, 1e-3, s0, P0)
ukf.predict(omega=zeros(2), dt=1.0)
ukf.update(y=array([1.0, 0.5]))
TOL = 1e-09
class-attribute
instance-attribute
f = f
instance-attribute
h = h
instance-attribute
phi = phi
instance-attribute
phi_inv = phi_inv
instance-attribute
Q = Q
instance-attribute
R = R
instance-attribute
cholQ = linalg.cholesky(Q).T
instance-attribute
d = P0.shape[0]
instance-attribute
q = Q.shape[0]
instance-attribute
meas_dim = R.shape[0]
instance-attribute
filter_state
property
writable
Return the current filter state as a (state, P) tuple.
predict_nonlinear = predict
class-attribute
instance-attribute
__init__(f, h, phi, phi_inv, Q, R, alpha, state0, P0)
predict(omega=None, dt=1.0)
Propagate the filter state.
Parameters
omega:
Control / input passed to f. Set to None if f does
not use it (the filter passes it through).
dt:
Integration step (seconds).
update(y)
Update the filter with a new measurement.
Parameters
y:
1-D measurement vector, shape (l,).
get_point_estimate()
Return the current state estimate (manifold element).
UnscentedKalmanFilter
Bases: AbstractFilter, EuclideanFilterMixin
filter_state
property
writable
__init__(initial_state, dt=1.0, fx=lambda x, dt: x, hx=lambda x: x, points=None)
predict_nonlinear(fx, sys_noise_cov, dt=None, **fx_args)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| fx | | Function with signature fx(x, dt, **fx_args) | required |
| sys_noise_cov | | Process noise matrix Q | required |
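The prediction step rests on the unscented transform: sigma points are drawn from the current Gaussian, propagated through fx, and recombined into a predicted mean and covariance. A minimal numpy sketch of that transform (an illustrative re-implementation using the standard scaled sigma-point weights, not pyrecest's internal code):

```python
import numpy as np

def unscented_transform(mean, cov, fx, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through fx via scaled sigma points."""
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)          # columns scale the spread
    sigmas = np.vstack([mean, mean + S.T, mean - S.T])  # 2n + 1 points
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
    ys = np.array([fx(s) for s in sigmas])
    mean_y = wm @ ys
    diff = ys - mean_y
    cov_y = (wc[:, None] * diff).T @ diff
    return mean_y, cov_y
```

For a linear fx the transform is exact: doubling the state doubles the mean and quadruples the covariance.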
update_nonlinear(measurement, hx, cov_mat_meas, **hx_args)
predict_model(model, dt=None, **fx_args)
Run a prediction step using an additive-noise transition model.
This is an adapter around :meth:predict_nonlinear; it does not change
the UKF algorithm or deprecate the existing function/covariance API.
Parameters
model:
Reusable transition model containing the deterministic transition
function and additive process-noise covariance.
dt:
Optional time step overriding model.dt.
fx_args:
Optional transition-function keyword arguments overriding the
model's default function_args for this prediction.
update_model(model, measurement, **hx_args)
Run an update step using an additive-noise measurement model.
This is an adapter around :meth:update_nonlinear; it preserves the
existing direct nonlinear update API while allowing model-object reuse.
predict_identity(sys_noise_cov, dt=None)
predict_linear(system_matrix, sys_noise_cov, sys_input=None, dt=None)
update_identity(meas, meas_cov)
update_linear(measurement, measurement_matrix, cov_mat_meas)
get_point_estimate()
VonMisesFilter
Bases: AbstractFilter, CircularFilterMixin
A filter based on the Von Mises distribution.
References:
- M. Azmani, S. Reboul, J.-B. Choquel, and M. Benjelloun, "A recursive fusion filter for angular data," in 2009 IEEE International Conference on Robotics and Biomimetics (ROBIO), Dec. 2009, pp. 882-887.
- G. Kurz, I. Gilitschenski, U. D. Hanebeck, "Recursive Bayesian Filtering in Circular State Spaces," arXiv preprint, Systems and Control (cs.SY), January 2015.
__init__()
Constructor
predict_identity(vmSys)
Predicts assuming identity system model, i.e., x(k+1) = x(k) + w(k) mod 2*pi, where w(k) is additive noise given by vmSys.
Parameters: vmSys (VMDistribution) : distribution of additive noise
update_identity(vmMeas, z=0.0)
Updates assuming identity measurement model, i.e., z(k) = x(k) + v(k) mod 2*pi, where v(k) is additive noise given by vmMeas.
Parameters:
vmMeas (VMDistribution) : distribution of additive noise
z : measurement in [0, 2*pi)
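Both identity steps exploit the closure of the von Mises family under pointwise multiplication: the product of two VM densities is, after renormalization, again VM, with parameters obtained by adding the complex natural parameters kappa * e^{i*mu}. A sketch of that fusion rule (illustrative only, not pyrecest's implementation):

```python
import numpy as np

def vm_multiply(mu1, kappa1, mu2, kappa2):
    """Fuse VM(mu1, kappa1) * VM(mu2, kappa2) up to normalization.

    The product is proportional to VM(mu, kappa) with
    kappa * exp(1j * mu) = kappa1 * exp(1j * mu1) + kappa2 * exp(1j * mu2).
    """
    c = kappa1 * np.exp(1j * mu1) + kappa2 * np.exp(1j * mu2)
    return np.angle(c) % (2.0 * np.pi), np.abs(c)
```

Fusing two estimates with identical mean direction simply adds the concentrations; disagreeing means pull the fused mean between them and reduce the resulting concentration.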
VonMisesFisherFilter
Bases: AbstractFilter, HypersphericalFilterMixin
Filter based on the von Mises-Fisher distribution.
References
Kurz, G., Gilitschenski, I., & Hanebeck, U. D. (2016). Unscented von Mises-Fisher Filtering. IEEE Signal Processing Letters.
filter_state
property
writable
__init__()
set_state(state)
Set the filter state.
get_estimate_mean()
Return the mean direction of the current filter state.
predict_identity(sys_noise)
State prediction via multiplication. Provide a zonal density for the system noise. Support for a rotation Q could be added.
update_identity(meas_noise, z)
State update via multiplication. Provide a zonal density for the measurement noise. Support for a rotation Q could be added.
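The same multiplicative closure holds on the hypersphere: the product of two von Mises-Fisher densities is, up to normalization, again vMF, with natural parameter kappa1 * mu1 + kappa2 * mu2. A sketch of that rule (illustrative, not the library source; it assumes the natural parameters do not cancel to zero):

```python
import numpy as np

def vmf_multiply(mu1, kappa1, mu2, kappa2):
    """Fuse VMF(mu1, kappa1) * VMF(mu2, kappa2) on the unit sphere."""
    v = kappa1 * np.asarray(mu1, float) + kappa2 * np.asarray(mu2, float)
    kappa = np.linalg.norm(v)  # zero only if the two terms exactly cancel
    return v / kappa, kappa
```

As in the circular case, agreeing mean directions add their concentrations.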
WrappedNormalFilter
Bases: AbstractFilter, CircularFilterMixin
Filter based on the wrapped normal distribution.
References
Kurz, G., Gilitschenski, I., & Hanebeck, U. D. (2013). Recursive Nonlinear Filtering for Angular Data Based on Circular Distributions. Proceedings of the 2013 American Control Conference.
Kurz, G., Gilitschenski, I., & Hanebeck, U. D. (2015). Recursive Bayesian Filtering in Circular State Spaces. arXiv preprint.
__init__(wn=None)
Initialize the filter.
predict_identity(wn_sys)
Predicts using an identity system model.
update_identity(wn_meas, z)
update_nonlinear_particle(likelihood, z)
update_nonlinear_progressive(likelihood, z, tau=None)
student_t_covariance_scale(normalized_innovation_squared, measurement_dim, dof=4.0, min_scale=1.0)
Return Student-t measurement-covariance scaling from innovation NIS.
This helper implements the scale-mixture weight used for an approximate
Student-t Kalman measurement update. For normalized innovation squared
nis, measurement dimension d, and degrees of freedom nu, the
Student-t IRLS/EM weight is
w = (nu + d) / (nu + nis).
A Gaussian update can therefore be made heavy-tailed by replacing the
measurement covariance R with R * scale, where scale = 1 / w.
The default min_scale=1 prevents inliers from becoming more confident
than the supplied Gaussian measurement model.
Parameters
normalized_innovation_squared : scalar or array-like
Squared Mahalanobis innovation, i.e. NIS.
measurement_dim : int
Dimension d of the measurement vector. Must be positive.
dof : float, optional
Student-t degrees of freedom nu. Must be greater than two.
min_scale : float, optional
Lower bound on the returned covariance scale. Must be nonnegative.
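The helper reduces to a couple of lines; a self-contained sketch of the documented rule (matching the formulas above rather than the library source):

```python
import numpy as np

def student_t_scale(nis, d, dof=4.0, min_scale=1.0):
    """Covariance scale for an approximate Student-t measurement update.

    IRLS/EM weight: w = (dof + d) / (dof + nis); the Gaussian R is
    replaced by R * scale with scale = 1 / w, floored at min_scale.
    """
    w = (dof + d) / (dof + np.asarray(nis, float))
    return np.maximum(min_scale, 1.0 / w)
```

An inlier (small NIS) leaves R unchanged because of the floor, while a large NIS inflates R roughly in proportion to the innovation, de-weighting the outlier.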
association_result_from_hypotheses(hypotheses, *, num_tracks=None, num_measurements=None, missing_cost=np.inf, unassigned_track_cost=np.inf, unassigned_measurement_cost=None)
Solve GNN assignment from hypotheses and return AssociationResult.
build_linear_gaussian_hypothesis_associator(measurement_matrix, meas_noise, *, gates=None, missing_cost=np.inf, unassigned_track_cost=np.inf, unassigned_measurement_cost=None, measurement_axis='auto')
Create a TrackManager-compatible linear-Gaussian associator.
filter_hypotheses(hypotheses, gates=None, *, accepted_only=True)
Apply one or more gates and optionally drop rejected hypotheses.
gate_hypotheses(hypotheses, gate, *, reject_reason=None)
Apply one gate to hypotheses and preserve rejected diagnostics.
hypotheses_to_cost_matrix(hypotheses, num_tracks=None, num_measurements=None, *, missing_cost=np.inf, rejected_cost=None, include_rejected=False)
Convert hypotheses to a dense assignment cost matrix.
hypotheses_to_log_likelihood_matrix(hypotheses, num_tracks=None, num_measurements=None, *, missing_value=-np.inf, include_rejected=False)
Convert hypotheses to a dense log-likelihood matrix.
hypotheses_to_probability_matrix(hypotheses, num_tracks=None, num_measurements=None, *, missing_value=0.0, include_rejected=False)
Convert hypotheses to a dense probability-like matrix.
hypothesis_cost(hypothesis, *, missing_cost=np.inf)
Return a scalar minimization cost for a hypothesis.
infer_hypothesis_shape(hypotheses, num_tracks=None, num_measurements=None)
Infer cost-matrix shape from hypotheses unless explicit sizes are given.
linear_gaussian_association_hypotheses(tracks, measurements, measurement_matrix, meas_noise, gates=None, *, measurement_axis='auto', include_rejected=False, metadata_builder=None)
Build Gaussian innovation hypotheses for tracks and measurements.
missed_detection_hypothesis(track_index, *, cost=None, log_likelihood=None, probability=None, reason='missed_detection', metadata=None)
Create a missed-detection hypothesis for one track.
retrodict_linear_gaussian(mean, covariance, system_matrix, sys_input=None, sys_noise_cov=None, *, remove_process_noise=False)
Retrodict a linear-Gaussian state through a square transition matrix.
For x_next = F x_prev + u + w, this computes the Gaussian on
x_prev implied by the supplied Gaussian on x_next. By default the
covariance is transformed as inv(F) P_next inv(F).T. If
remove_process_noise is true, sys_noise_cov is subtracted first.
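In numpy the retrodiction is a direct inversion of the forward model; a sketch under the formulas above (illustrative, not the library source):

```python
import numpy as np

def retrodict(mean_next, cov_next, F, sys_input=None, Q=None,
              remove_process_noise=False):
    """Map a Gaussian on x_next back through x_next = F x_prev + u + w."""
    if remove_process_noise and Q is not None:
        cov_next = cov_next - Q  # caller must keep the result positive definite
    F_inv = np.linalg.inv(F)
    m = mean_next if sys_input is None else mean_next - sys_input
    return F_inv @ m, F_inv @ cov_next @ F_inv.T
```

Note that subtracting the process noise can make the covariance indefinite when Q is large relative to the supplied covariance, which is why it is opt-in.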
retrodict_linear_gaussian_state(state, system_matrix, sys_input=None, sys_noise_cov=None, *, remove_process_noise=False)
Return a :class:GaussianDistribution retrodicted one linear step.
quaternion_grid_transition_density(grid, orientation_increment, kappa)
Alias for :func:so3_right_multiplication_grid_transition.
so3_right_multiplication_grid_transition(grid, orientation_increment, kappa)
Build a soft grid transition for right-multiplicative SO(3) dynamics.
The returned conditional density represents
q_next = q_current * delta_q
on a scalar-last unit-quaternion grid. Columns condition on the current
grid point and rows correspond to the next grid point, i.e.
grid_values[i, j] = f(grid[i] | grid[j]). This matches
:meth:HyperhemisphericalGridFilter.predict_nonlinear_via_transition_density.
Parameters
grid : array_like or object with get_grid()
Quaternion grid of shape (n_grid, 4). Quaternions are interpreted
as scalar-last SO(3) representatives and canonicalized to the upper
S3 hemisphere.
orientation_increment : array_like
Either a tangent-vector increment of shape (3,) at the identity or
a scalar-last quaternion increment of shape (4,).
kappa : float
Positive concentration parameter. Larger values place more mass on the
grid point nearest to q_current * delta_q.
Returns
SdHalfCondSdHalfGridDistribution
Normalized conditional density on the same canonicalized quaternion grid.
Notes
The unnormalized score is proportional to
exp(kappa * |<q_next, q_current * delta_q>|**2).
The columns are normalized by the hyperhemispherical grid quadrature rule
used by :class:HyperhemisphericalGridFilter, so
mean(grid_values[:, j]) * manifold_size == 1 for every column.
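The unnormalized transition matrix follows directly from the score above. The sketch below uses the standard Hamilton product in scalar-last layout and omits the grid-quadrature normalization (illustrative only, not the library implementation):

```python
import numpy as np

def quat_mul_scalar_last(q1, q2):
    # Hamilton product with components ordered (x, y, z, w)
    x1, y1, z1, w1 = q1
    x2, y2, z2, w2 = q2
    return np.array([
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 + y1 * w2 + z1 * x2 - x1 * z2,
        w1 * z2 + z1 * w2 + x1 * y2 - y1 * x2,
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
    ])

def unnormalized_transition(grid, delta_q, kappa):
    # grid_values[i, j] ∝ f(grid[i] | grid[j]); columns condition on q_current
    moved = np.array([quat_mul_scalar_last(q, delta_q) for q in grid])
    inner = grid @ moved.T           # inner[i, j] = <grid[i], grid[j] * delta_q>
    return np.exp(kappa * inner**2)  # squaring makes the score antipodally symmetric
```

With the identity increment, the score is maximal on the diagonal: each grid point is most likely to stay where it is.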
build_global_nearest_neighbor_associator(cost_matrix_builder, unassigned_track_cost, unassigned_measurement_cost=None, invalid_cost=1000000000000.0, dummy_dummy_cost=0.0)
Create an associator from a cost-matrix builder.
The resulting associator has the signature expected by
:class:TrackManager.
build_kalman_measurement_initiator(initial_covariance, measurement_getter=None, measurement_to_state_mapping=None)
Create a Kalman-filter initiator from measurements.
build_linear_gaussian_predictor(system_matrix, sys_noise_cov, sys_input=None)
Create a linear/Gaussian prediction hook for filters supporting it.
build_linear_gaussian_updater(measurement_matrix, measurement_covariance, measurement_getter=None)
Create a linear/Gaussian update hook for filters supporting it.
solve_global_nearest_neighbor(cost_matrix, unassigned_track_cost, unassigned_measurement_cost=None, invalid_cost=1000000000000.0, dummy_dummy_cost=0.0)
Solve a global-nearest-neighbor assignment from a cost matrix.
Parameters
cost_matrix:
(n_tracks, n_measurements) matrix of assignment costs.
unassigned_track_cost:
Scalar or length-n_tracks iterable specifying the cost of leaving a
track unmatched.
unassigned_measurement_cost:
Scalar or length-n_measurements iterable specifying the cost of
leaving a measurement unmatched. If omitted, unassigned_track_cost
is reused.
invalid_cost:
Replacement for non-finite costs.
dummy_dummy_cost:
Cost placed in the dummy-dummy block. The default 0.0 matches the
common rectangular-assignment interpretation. Some legacy trackers use a
non-zero dummy-dummy cost to reproduce their historic gating semantics.
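The standard way to solve this problem is to augment the cost matrix with dummy rows and columns and run a rectangular assignment. A numpy/scipy sketch of that construction for scalar miss costs (illustrative, not the pyrecest source):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def solve_gnn(cost, track_miss, meas_miss=None, invalid=1e12, dummy_dummy=0.0):
    """Return (track, measurement) pairs from a GNN assignment."""
    n, m = cost.shape
    if meas_miss is None:
        meas_miss = track_miss
    big = np.full((n + m, m + n), float(invalid))
    big[:n, :m] = np.where(np.isfinite(cost), cost, invalid)
    big[:n, m:][np.diag_indices(n)] = track_miss   # track i -> its own dummy
    big[n:, :m][np.diag_indices(m)] = meas_miss    # measurement j -> its dummy
    big[n:, m:] = dummy_dummy                      # dummy-dummy block
    rows, cols = linear_sum_assignment(big)
    return [(int(i), int(j)) for i, j in zip(rows, cols) if i < n and j < m]
```

Assignments that would cost more than leaving both sides unmatched are routed through the dummy block and dropped from the returned pairs.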