MultiProFitSourceFitter¶
- class lsst.meas.extensions.multiprofit.fit_coadd_multiband.MultiProFitSourceFitter(wcs: ~lsst.afw.geom.SkyWcs, errors_expected: dict[str, Exception] | None = None, add_missing_errors: bool = True, *, modeller: ~lsst.multiprofit.modeller.Modeller = <factory>)¶
Bases:
CatalogSourceFitterABC
A MultiProFit source fitter.
- Parameters:
- wcs
A WCS solution that applies to all exposures.
- errors_expected
A dictionary of exceptions that are expected to sometimes be raised during processing (e.g. for missing data) keyed by the name of the flag column used to record the failure.
- add_missing_errors
Whether to add all of the standard MultiProFit errors with default column names to errors_expected, if not already present.
- **kwargs
Keyword arguments to pass to the superclass constructor.
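Constructing the fitter itself requires an LSST SkyWcs, so the following is a plain-Python sketch of how the errors_expected mapping is intended to behave: expected exception types map to flag column names, and an expected failure sets the flag rather than aborting the run. All names here are illustrative, not the actual MultiProFit defaults.

```python
# Hypothetical flag mapping: exception type -> flag column name.
# These column names are made up for illustration.
errors_expected = {
    RuntimeError: "mpf_unknown_flag",
    KeyError: "mpf_missing_data_flag",
}

def fit_with_flags(row_ids, fit_one):
    """Mimic the fitter's error handling: call fit_one per row, catch
    expected exceptions, and record the matching flag column name."""
    flags = {rid: None for rid in row_ids}
    for rid in row_ids:
        try:
            fit_one(rid)
        except tuple(errors_expected) as err:
            flags[rid] = errors_expected[type(err)]
    return flags
```

An unexpected exception type would still propagate, which is the point of declaring the expected ones up front.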
Attributes Summary
- model_config: Configuration for the model; should be a dictionary conforming to pydantic.ConfigDict.
- model_extra: Get extra fields set during validation.
- model_fields
- model_fields_set: Returns the set of fields that have been explicitly set on this model instance.
Methods Summary
- construct([_fields_set])
- copy(*[, include, exclude, update, deep]): Returns a copy of the model.
- copy_centroid_errors(columns_cenx_err_copy, ...): Copy centroid errors from an input catalog.
- dict(*[, include, exclude, by_alias, ...])
- fit(catalog_multi, catexps[, config_data, ...]): Fit PSF-convolved source models with MultiProFit.
- from_orm(obj)
- get_channels(catexps)
- get_model(idx_row, catalog_multi, catexps[, ...]): Reconstruct the model for a single row of a fit catalog.
- get_model_radec(source, cen_x, cen_y)
- initialize_model(model, source, catexps[, ...]): Initialize a Model for a single source row.
- json(*[, include, exclude, by_alias, ...])
- make_CatalogExposurePsfs(catexp, config)
- model_construct([_fields_set]): Creates a new instance of the Model class with validated data.
- model_copy(*[, update, deep]): Returns a copy of the model.
- model_dump(*[, mode, include, exclude, ...]): Generate a dictionary representation of the model.
- model_dump_json(*[, indent, include, ...]): Generate a JSON representation of the model.
- model_json_schema([by_alias, ref_template, ...]): Generates a JSON schema for a model class.
- model_parametrized_name(params): Compute the class name for parametrizations of generic classes.
- model_post_init(_BaseModel__context): Override this method to perform additional initialization after __init__ and model_construct.
- model_rebuild(*[, force, raise_errors, ...]): Try to rebuild the pydantic-core schema for the model.
- model_validate(obj, *[, strict, ...]): Validate a pydantic model instance.
- model_validate_json(json_data, *[, strict, ...]): Validate the given JSON data against the Pydantic model.
- model_validate_strings(obj, *[, strict, context]): Validate the given object with string data against the Pydantic model.
- parse_file(path, *[, content_type, ...])
- parse_obj(obj)
- parse_raw(b, *[, content_type, encoding, ...])
- schema([by_alias, ref_template])
- schema_json(*[, by_alias, ref_template])
- update_forward_refs(**localns)
- validate(value)
- validate_fit_inputs(catalog_multi, catexps): Validate inputs to self.fit.
Attributes Documentation
- model_config: ClassVar[pydantic.ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'forbid', 'frozen': True}¶
Configuration for the model; should be a dictionary conforming to pydantic.ConfigDict.
- model_extra¶
Get extra fields set during validation.
- Returns:
A dictionary of extra fields, or None if config.extra is not set to "allow".
- model_fields: ClassVar[dict[str, FieldInfo]] = {'errors_expected': FieldInfo(annotation=dict[Type[Exception], str], required=False, default_factory=dict, title='A dictionary of Exceptions with the name of the flag column key to fill if raised.'), 'modeller': FieldInfo(annotation=Modeller, required=False, default_factory=Modeller, title='A Modeller instance to use for fitting.'), 'wcs': FieldInfo(annotation=SkyWcs, required=True, title='The WCS object to use to convert pixel coordinates to RA/dec')}¶
- model_fields_set¶
Returns the set of fields that have been explicitly set on this model instance.
- Returns:
- A set of strings representing the fields that have been set,
i.e. that were not filled from defaults.
Methods Documentation
- copy(*, include: AbstractSetIntStr | MappingIntStrAny | None = None, exclude: AbstractSetIntStr | MappingIntStrAny | None = None, update: Dict[str, Any] | None = None, deep: bool = False) Self ¶
Returns a copy of the model.
- !!! warning "Deprecated"
This method is now deprecated; use model_copy instead.
If you need include or exclude, use:
```python
data = self.model_dump(include=include, exclude=exclude, round_trip=True)
data = {**data, **(update or {})}
copied = self.model_validate(data)
```
- Args:
include: Optional set or mapping specifying which fields to include in the copied model.
exclude: Optional set or mapping specifying which fields to exclude in the copied model.
update: Optional dictionary of field-value pairs to override field values in the copied model.
deep: If True, the values of fields that are Pydantic models will be deep-copied.
- Returns:
A copy of the model with included, excluded and updated fields as specified.
- copy_centroid_errors(columns_cenx_err_copy: tuple[str], columns_ceny_err_copy: tuple[str], results: Table, catalog_multi: Sequence, catexps: list[lsst.multiprofit.fitting.fit_source.CatalogExposureSourcesABC], config_data: CatalogSourceFitterConfigData)¶
Copy centroid errors from an input catalog.
This method exists to support fitting models with fixed centroids derived from an input catalog. Implementers can simply copy an existing column into the results catalog or use the data as needed; however, there is no reasonable default implementation.
- Parameters:
- columns_cenx_err_copy
X-axis result centroid columns to copy errors for.
- columns_ceny_err_copy
Y-axis result centroid columns to copy errors for.
- results
The table of fit results to copy errors into.
- catalog_multi
The input multiband catalog.
- catexps
The input data.
- config_data
The fitter config and data.
- Raises:
- NotImplementedError
Raised if columns need to be copied but no implementation is available.
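Because there is no reasonable default implementation, a subclass might implement the copy as a straight column transfer. The sketch below uses plain dicts in place of astropy Tables, with hypothetical column names.

```python
def copy_centroid_errors_sketch(columns_err_copy, results, catalog_in):
    """Toy version of a copy_centroid_errors override: copy each named
    centroid-error column from the input catalog into the results table.
    Plain dicts stand in for astropy Tables; column names are made up."""
    for col in columns_err_copy:
        if col not in catalog_in:
            # Mirror the documented behavior: fail loudly when a column
            # needs to be copied but no source for it is available.
            raise NotImplementedError(f"no input column for {col!r}")
        results[col] = list(catalog_in[col])
    return results
```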
- dict(*, include: Set[int] | Set[str] | Mapping[int, Set[int] | Set[str] | Mapping[int, IncEx | bool] | Mapping[str, IncEx | bool] | bool] | Mapping[str, Set[int] | Set[str] | Mapping[int, IncEx | bool] | Mapping[str, IncEx | bool] | bool] | None = None, exclude: Set[int] | Set[str] | Mapping[int, Set[int] | Set[str] | Mapping[int, IncEx | bool] | Mapping[str, IncEx | bool] | bool] | Mapping[str, Set[int] | Set[str] | Mapping[int, IncEx | bool] | Mapping[str, IncEx | bool] | bool] | None = None, by_alias: bool = False, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) Dict[str, Any] ¶
- fit(catalog_multi: Sequence, catexps: list[lsst.multiprofit.fitting.fit_source.CatalogExposureSourcesABC], config_data: CatalogSourceFitterConfigData | None = None, logger: Logger | None = None, **kwargs: Any) Table ¶
Fit PSF-convolved source models with MultiProFit.
Each source has a single PSF-convolved model fit, given PSF model parameters from a catalog, and a combination of initial source model parameters and a deconvolved source image from the CatalogExposureSources.
- Parameters:
- catalog_multi
A multi-band source catalog to fit a model to.
- catexps
A list of (source and psf) catalog-exposure pairs.
- config_data
Configuration settings and data for fitting and output.
- logger
The logger. Defaults to calling _getlogger.
- **kwargs
Additional keyword arguments to pass to self.modeller.
- Returns:
- catalog : astropy.Table
A table with fit parameters for the PSF model at the location of each source.
- get_channels(catexps: list[lsst.multiprofit.fitting.fit_source.CatalogExposureSourcesABC]) dict[str, lsst.gauss2d.fit._gauss2d_fit.Channel] ¶
- get_model(idx_row: int, catalog_multi: Sequence, catexps: list[lsst.multiprofit.fitting.fit_source.CatalogExposureSourcesABC], config_data: CatalogSourceFitterConfigData | None = None, results: Table | None = None, **kwargs: Any) ModelD ¶
Reconstruct the model for a single row of a fit catalog.
- Parameters:
- idx_row
The index of the row in the catalog.
- catalog_multi
The multi-band catalog originally used for initialization.
- catexps
The catalog-exposure pairs to reconstruct the model for.
- config_data
The configuration used to generate sources. Default-initialized if None.
- results
The corresponding best-fit parameter catalog to initialize parameter values from. If None, the model params will be set by self.initialize_model, as they would be when calling self.fit.
- **kwargs
Additional keyword arguments to pass to initialize_model. Not used during fitting.
- Returns:
- model
The reconstructed model.
- initialize_model(model: ModelD, source: Mapping[str, Any], catexps: list[lsst.multiprofit.fitting.fit_source.CatalogExposureSourcesABC], values_init: Mapping[ParameterD, float] | None = None, centroid_pixel_offset: float = 0, **kwargs)¶
Initialize a Model for a single source row.
- Parameters:
- model
The model object to initialize.
- source
A mapping with fields expected to be populated in the corresponding source catalog for initialization.
- catexps
A list of (source and psf) catalog-exposure pairs.
- values_init
Initial parameter values from the model configuration.
- centroid_pixel_offset
The value of the offset required to convert pixel centroids from MultiProFit coordinates to catalog coordinates.
- **kwargs
Additional keyword arguments that cannot be required for fitting.
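The centroid_pixel_offset parameter amounts to a constant shift per axis. A minimal sketch follows; the offset value used here is illustrative only, since the real value depends on the pixel-origin conventions of the catalog and of MultiProFit.

```python
def to_catalog_centroid(cen_x, cen_y, centroid_pixel_offset=0.0):
    """Convert a pixel centroid from one convention to another by
    applying a constant per-axis offset (sketch; value assumed)."""
    return cen_x + centroid_pixel_offset, cen_y + centroid_pixel_offset
```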
- json(*, include: Set[int] | Set[str] | Mapping[int, Set[int] | Set[str] | Mapping[int, IncEx | bool] | Mapping[str, IncEx | bool] | bool] | Mapping[str, Set[int] | Set[str] | Mapping[int, IncEx | bool] | Mapping[str, IncEx | bool] | bool] | None = None, exclude: Set[int] | Set[str] | Mapping[int, Set[int] | Set[str] | Mapping[int, IncEx | bool] | Mapping[str, IncEx | bool] | bool] | Mapping[str, Set[int] | Set[str] | Mapping[int, IncEx | bool] | Mapping[str, IncEx | bool] | bool] | None = None, by_alias: bool = False, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Callable[[Any], Any] | None = PydanticUndefined, models_as_dict: bool = PydanticUndefined, **dumps_kwargs: Any) str ¶
- make_CatalogExposurePsfs(catexp: CatalogExposureInputs, config: MultiProFitSourceConfig) CatalogExposurePsfs ¶
- classmethod model_construct(_fields_set: set[str] | None = None, **values: Any) Self ¶
Creates a new instance of the Model class with validated data.
Creates a new model setting __dict__ and __pydantic_fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed.
- !!! note
model_construct() generally respects the model_config.extra setting on the provided model. That is, if model_config.extra == 'allow', then all extra passed values are added to the model instance's __dict__ and __pydantic_extra__ fields. If model_config.extra == 'ignore' (the default), then all extra passed values are ignored. Because no validation is performed with a call to model_construct(), having model_config.extra == 'forbid' does not result in an error if extra values are passed, but they will be ignored.
- Args:
_fields_set: A set of field names that were originally explicitly set during instantiation. If provided, this is directly used for the model_fields_set attribute. Otherwise, the field names from the values argument will be used.
values: Trusted or pre-validated data dictionary.
- Returns:
A new instance of the Model class with validated data.
- model_copy(*, update: Mapping[str, Any] | None = None, deep: bool = False) Self ¶
Usage docs: https://docs.pydantic.dev/2.10/concepts/serialization/#model_copy
Returns a copy of the model.
- Args:
- update: Values to change/add in the new model. Note: the data is not validated before creating the new model. You should trust this data.
deep: Set to True to make a deep copy of the model.
- Returns:
New model instance.
- model_dump(*, mode: Literal['json', 'python'] | str = 'python', include: Set[int] | Set[str] | Mapping[int, Set[int] | Set[str] | Mapping[int, IncEx | bool] | Mapping[str, IncEx | bool] | bool] | Mapping[str, Set[int] | Set[str] | Mapping[int, IncEx | bool] | Mapping[str, IncEx | bool] | bool] | None = None, exclude: Set[int] | Set[str] | Mapping[int, Set[int] | Set[str] | Mapping[int, IncEx | bool] | Mapping[str, IncEx | bool] | bool] | Mapping[str, Set[int] | Set[str] | Mapping[int, IncEx | bool] | Mapping[str, IncEx | bool] | bool] | None = None, context: Any | None = None, by_alias: bool = False, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, round_trip: bool = False, warnings: bool | Literal['none', 'warn', 'error'] = True, serialize_as_any: bool = False) dict[str, Any] ¶
Usage docs: https://docs.pydantic.dev/2.10/concepts/serialization/#modelmodel_dump
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
- Args:
- mode: The mode in which to_python should run. If mode is 'json', the output will only contain JSON serializable types. If mode is 'python', the output may contain non-JSON-serializable Python objects.
include: A set of fields to include in the output.
exclude: A set of fields to exclude from the output.
context: Additional context to pass to the serializer.
by_alias: Whether to use the field's alias in the dictionary key if defined.
exclude_unset: Whether to exclude fields that have not been explicitly set.
exclude_defaults: Whether to exclude fields that are set to their default value.
exclude_none: Whether to exclude fields that have a value of None.
round_trip: If True, dumped values should be valid as input for non-idempotent types such as Json[T].
warnings: How to handle serialization errors. False/"none" ignores them, True/"warn" logs errors, "error" raises a PydanticSerializationError.
serialize_as_any: Whether to serialize fields with duck-typing serialization behavior.
- Returns:
A dictionary representation of the model.
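Since the fitter is a Pydantic model, model_dump behaves as on any BaseModel. A toy model (a stand-in, not the real fitter class) illustrates the include and exclude_defaults options, assuming Pydantic v2 is installed:

```python
from pydantic import BaseModel

class ToyFitter(BaseModel):
    """Stand-in model used only to demonstrate model_dump options."""
    name: str = "mpf"
    deep: bool = False
    n_iter: int = 100

cfg = ToyFitter(n_iter=50)
full = cfg.model_dump()                          # every field
subset = cfg.model_dump(include={"name", "n_iter"})
changed = cfg.model_dump(exclude_defaults=True)  # only non-default fields
```

Here `changed` contains only `n_iter`, since it is the one field that was explicitly set to a non-default value.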
- model_dump_json(*, indent: int | None = None, include: Set[int] | Set[str] | Mapping[int, Set[int] | Set[str] | Mapping[int, IncEx | bool] | Mapping[str, IncEx | bool] | bool] | Mapping[str, Set[int] | Set[str] | Mapping[int, IncEx | bool] | Mapping[str, IncEx | bool] | bool] | None = None, exclude: Set[int] | Set[str] | Mapping[int, Set[int] | Set[str] | Mapping[int, IncEx | bool] | Mapping[str, IncEx | bool] | bool] | Mapping[str, Set[int] | Set[str] | Mapping[int, IncEx | bool] | Mapping[str, IncEx | bool] | bool] | None = None, context: Any | None = None, by_alias: bool = False, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, round_trip: bool = False, warnings: bool | Literal['none', 'warn', 'error'] = True, serialize_as_any: bool = False) str ¶
Usage docs: https://docs.pydantic.dev/2.10/concepts/serialization/#modelmodel_dump_json
Generates a JSON representation of the model using Pydantic's to_json method.
- Args:
indent: Indentation to use in the JSON output. If None is passed, the output will be compact.
include: Field(s) to include in the JSON output.
exclude: Field(s) to exclude from the JSON output.
context: Additional context to pass to the serializer.
by_alias: Whether to serialize using field aliases.
exclude_unset: Whether to exclude fields that have not been explicitly set.
exclude_defaults: Whether to exclude fields that are set to their default value.
exclude_none: Whether to exclude fields that have a value of None.
round_trip: If True, dumped values should be valid as input for non-idempotent types such as Json[T].
warnings: How to handle serialization errors. False/"none" ignores them, True/"warn" logs errors, "error" raises a PydanticSerializationError.
serialize_as_any: Whether to serialize fields with duck-typing serialization behavior.
- Returns:
A JSON string representation of the model.
- classmethod model_json_schema(by_alias: bool = True, ref_template: str = '#/$defs/{model}', schema_generator: type[pydantic.json_schema.GenerateJsonSchema] = <class 'pydantic.json_schema.GenerateJsonSchema'>, mode: ~typing.Literal['validation', 'serialization'] = 'validation') dict[str, Any] ¶
Generates a JSON schema for a model class.
- Args:
by_alias: Whether to use attribute aliases or not.
ref_template: The reference template.
schema_generator: To override the logic used to generate the JSON schema, as a subclass of GenerateJsonSchema with your desired modifications.
mode: The mode in which to generate the schema.
- Returns:
The JSON schema for the given model class.
- classmethod model_parametrized_name(params: tuple[type[Any], ...]) str ¶
Compute the class name for parametrizations of generic classes.
This method can be overridden to achieve a custom naming scheme for generic BaseModels.
- Args:
- params: Tuple of types of the class. Given a generic class Model with 2 type variables and a concrete model Model[str, int], the value (str, int) would be passed to params.
- Returns:
String representing the new class where params are passed to cls as type variables.
- Raises:
TypeError: Raised when trying to generate concrete names for non-generic models.
- model_post_init(_BaseModel__context: Any) None ¶
Override this method to perform additional initialization after __init__ and model_construct. This is useful if you want to do some validation that requires the entire model to be initialized.
- classmethod model_rebuild(*, force: bool = False, raise_errors: bool = True, _parent_namespace_depth: int = 2, _types_namespace: MappingNamespace | None = None) bool | None ¶
Try to rebuild the pydantic-core schema for the model.
This may be necessary when one of the annotations is a ForwardRef which could not be resolved during the initial attempt to build the schema, and automatic rebuilding fails.
- Args:
force: Whether to force the rebuilding of the model schema, defaults to False.
raise_errors: Whether to raise errors, defaults to True.
_parent_namespace_depth: The depth level of the parent namespace, defaults to 2.
_types_namespace: The types namespace, defaults to None.
- Returns:
Returns None if the schema is already "complete" and rebuilding was not required. If rebuilding _was_ required, returns True if rebuilding was successful, otherwise False.
- classmethod model_validate(obj: Any, *, strict: bool | None = None, from_attributes: bool | None = None, context: Any | None = None) Self ¶
Validate a pydantic model instance.
- Args:
obj: The object to validate.
strict: Whether to enforce types strictly.
from_attributes: Whether to extract data from object attributes.
context: Additional context to pass to the validator.
- Raises:
ValidationError: If the object could not be validated.
- Returns:
The validated model instance.
- classmethod model_validate_json(json_data: str | bytes | bytearray, *, strict: bool | None = None, context: Any | None = None) Self ¶
Usage docs: https://docs.pydantic.dev/2.10/concepts/json/#json-parsing
Validate the given JSON data against the Pydantic model.
- Args:
json_data: The JSON data to validate.
strict: Whether to enforce types strictly.
context: Extra variables to pass to the validator.
- Returns:
The validated Pydantic model.
- Raises:
ValidationError: If json_data is not a JSON string or the object could not be validated.
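Again using a toy BaseModel (Pydantic v2 assumed), model_validate_json parses and validates in one step, raising ValidationError on bad input:

```python
from pydantic import BaseModel, ValidationError

class ToyRow(BaseModel):
    """Stand-in model for demonstrating JSON validation."""
    idx: int
    flux: float

# Valid JSON with the right types: parsed into a model instance.
row = ToyRow.model_validate_json('{"idx": 3, "flux": 1.5}')

# Wrong type (and a missing field): ValidationError is raised.
try:
    ToyRow.model_validate_json('{"idx": "not an int"}')
    failed = False
except ValidationError:
    failed = True
```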
- classmethod model_validate_strings(obj: Any, *, strict: bool | None = None, context: Any | None = None) Self ¶
Validate the given object with string data against the Pydantic model.
- Args:
obj: The object containing string data to validate.
strict: Whether to enforce types strictly.
context: Extra variables to pass to the validator.
- Returns:
The validated Pydantic model.
- classmethod parse_file(path: str | Path, *, content_type: str | None = None, encoding: str = 'utf8', proto: DeprecatedParseProtocol | None = None, allow_pickle: bool = False) Self ¶
- classmethod parse_raw(b: str | bytes, *, content_type: str | None = None, encoding: str = 'utf8', proto: DeprecatedParseProtocol | None = None, allow_pickle: bool = False) Self ¶
- classmethod schema_json(*, by_alias: bool = True, ref_template: str = '#/$defs/{model}', **dumps_kwargs: Any) str ¶
- validate_fit_inputs(catalog_multi: Sequence, catexps: list[lsst.meas.extensions.multiprofit.fit_coadd_multiband.CatalogExposurePsfs], config_data: CatalogSourceFitterConfigData = None, logger: Logger = None, **kwargs: Any) None ¶
Validate inputs to self.fit.
This method is called before any fitting is done. It may be used for any purpose, including checking that the inputs are a particular subclass of the base classes.
- Parameters:
- catalog_multi
A multi-band source catalog to fit a model to.
- catexps
A list of (source and psf) catalog-exposure pairs.
- config_data
Configuration settings and data for fitting and output.
- logger
The logger. Defaults to calling _getlogger.
- **kwargs
Additional keyword arguments to pass to self.modeller.
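A minimal sketch of what a validate_fit_inputs override might check, using duck typing on hypothetical attribute names (the real method may instead check for specific subclasses such as CatalogExposurePsfs):

```python
def validate_fit_inputs_sketch(catalog_multi, catexps):
    """Toy input validation in the spirit of validate_fit_inputs:
    verify the catalog is non-empty and that each catalog-exposure
    pair exposes the interface the fit loop would need.
    Attribute names here are illustrative."""
    if len(catalog_multi) == 0:
        raise ValueError("catalog_multi is empty")
    for i, catexp in enumerate(catexps):
        if not hasattr(catexp, "get_catalog"):
            raise TypeError(f"catexps[{i}] has no get_catalog()")
```

Raising early here means fit() can assume well-formed inputs for every row.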