litlogger¶
Classes
- Logger that enables remote experiment tracking, logging, and artifact management on lightning.ai.
- Fabric/PyTorch Lightning logger that enables remote experiment tracking, logging, and artifact management on lightning.ai.
- class lightning.pytorch.loggers.litlogger.LitLogger(root_dir=None, name=None, teamspace=None, metadata=None, store_step=True, log_model=False, save_logs=True, checkpoint_name=None)[source]¶
Bases: Logger
Logger that enables remote experiment tracking, logging, and artifact management on lightning.ai.
Initialize the LitLogger.
- Parameters:
  - root_dir¶ (Union[str, Path, None]) – Folder where logs and metadata are stored (default: ./lightning_logs).
  - name¶ (Optional[str]) – Name of your experiment (defaults to a generated name).
  - teamspace¶ (Optional[str]) – Teamspace name where charts and artifacts will appear.
  - metadata¶ (Optional[dict[str, str]]) – Extra metadata to associate with the experiment as tags.
  - log_model¶ (bool) – If True, automatically log model checkpoints as artifacts.
  - save_logs¶ (bool) – If True, capture and upload terminal logs.
  - checkpoint_name¶ (Optional[str]) – Override the base name for logged checkpoints.
Example:
    from lightning.pytorch import Trainer
    from lightning.pytorch.demos.boring_classes import BoringModel, BoringDataModule
    from lightning.pytorch.loggers.litlogger import LitLogger


    class LoggingModel(BoringModel):
        def training_step(self, batch, batch_idx: int):
            loss = self.step(batch)
            # logging the computed loss
            self.log("train_loss", loss)
            return {"loss": loss}


    trainer = Trainer(
        max_epochs=10,
        enable_model_summary=False,
        logger=LitLogger("./lightning_logs", name="boring_model"),
    )
    model = LoggingModel()
    data_module = BoringDataModule()
    trainer.fit(model, data_module)
    trainer.test(model, data_module)
- after_save_checkpoint(checkpoint_callback)[source]¶
Called after a checkpoint is saved.
Logs checkpoints as artifacts if enabled.
- Return type:
None
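A minimal sketch of how this hook typically comes into play, assuming log_model=True so that checkpoints written by a ModelCheckpoint callback are uploaded as artifacts; the callback settings below are illustrative only:

    from lightning.pytorch import Trainer
    from lightning.pytorch.callbacks import ModelCheckpoint
    from lightning.pytorch.loggers.litlogger import LitLogger

    # log_model=True asks the logger to upload saved checkpoints as artifacts;
    # the Trainer invokes after_save_checkpoint() each time ModelCheckpoint saves.
    logger = LitLogger(name="ckpt-demo", log_model=True)
    checkpoint_cb = ModelCheckpoint(monitor="train_loss", save_top_k=1)
    trainer = Trainer(max_epochs=5, logger=logger, callbacks=[checkpoint_cb])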
- get_model(staging_dir=None, verbose=False, version=None)[source]¶
Download and load a model object using litmodels.
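A usage sketch, assuming logger is an already-initialized LitLogger whose experiment has a model logged via log_model; the staging directory is an arbitrary choice:

    # logger is an existing LitLogger instance (see the constructor example above);
    # with version=None (the default) the latest logged model version is requested
    model = logger.get_model(staging_dir="/tmp/lit_staging", verbose=True)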
- get_model_artifact(path, verbose=False, version=None)[source]¶
Download a model artifact file or directory from cloud storage using litmodels.
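A hedged sketch; the artifact name below is hypothetical and must match something previously uploaded (for example via log_model_artifact), and the download destination follows litmodels' conventions:

    # logger is an existing LitLogger instance; the artifact name is illustrative
    downloaded = logger.get_model_artifact("my-model.ckpt", verbose=True)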
- log_file(path)[source]¶
Log a file as an artifact to the Lightning platform.
The file will be logged in the Teamspace drive, under a folder identified by the experiment name.
- Example:

    logger = LitLogger(…)
    logger.log_file('config.yaml')
- log_metrics(metrics, step=None)[source]¶
Records metrics. This method logs metrics as soon as it receives them.
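Inside a LightningModule you would normally go through self.log(...); calling the logger directly looks like the sketch below, where the metric names and step value are arbitrary:

    # logger is an existing LitLogger instance
    logger.log_metrics({"train_loss": 0.42, "lr": 1e-3}, step=100)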
- log_model(model, staging_dir=None, verbose=False, version=None, metadata=None)[source]¶
Save and upload a model object to cloud storage.
- Parameters:
  - model¶ (Any) – The model object to save and upload (e.g., torch.nn.Module).
  - staging_dir¶ (Optional[str]) – Optional local directory for staging the model before upload.
  - verbose¶ (bool) – Whether to show a progress bar during upload.
  - version¶ (Optional[str]) – Optional version string for the model.
  - metadata¶ (Optional[dict[str, Any]]) – Optional metadata dictionary to store with the model.
- Return type:
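A short sketch, assuming any saveable model object (such as a torch.nn.Module) is accepted; the version string and metadata values are placeholders:

    import torch.nn as nn

    # logger is an existing LitLogger instance
    net = nn.Linear(8, 2)
    logger.log_model(net, version="baseline-v1", metadata={"framework": "torch"})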
- log_model_artifact(path, verbose=False, version=None)[source]¶
Upload a model file or directory to cloud storage using litmodels.
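A hedged example; the checkpoint path is illustrative and should point at a file or directory that already exists on disk:

    # logger is an existing LitLogger instance
    logger.log_model_artifact("lightning_logs/checkpoints/last.ckpt", verbose=True)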
- property experiment: Optional[litlogger.Experiment]¶
Returns the underlying litlogger Experiment object.
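Because the property may return None before the experiment is set up, a guard is a reasonable pattern; note that the Experiment API itself is defined by the litlogger package, not by this wrapper, so the sketch only inspects the object:

    exp = logger.experiment
    if exp is not None:
        # anything done here uses litlogger's Experiment API directly
        print(type(exp))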
- property log_dir: str¶
The directory for this run’s tensorboard checkpoint.
By default, it is named 'version_${self.version}', but it can be overridden by passing a string value for the constructor's version parameter instead of None or an int.
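For example, the resolved directory can be inspected or handed to other components; this assumes logger is a LitLogger wired up as in the constructor example above:

    # the last path component is "version_<n>" by default, as described above
    print(logger.log_dir)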