LitLogger¶
LitLogger enables seamless experiment tracking, logging, and artifact management on the Lightning.ai platform.
It integrates with your Fabric training loop to automatically log metrics, hyperparameters, and model checkpoints to the Lightning Experiments dashboard. View your experiments at lightning.ai, follow real-time charts, compare runs, and share results with your team.
Set Up LitLogger¶
First, install the litlogger package:
pip install litlogger
That’s it! LitLogger automatically detects your Lightning.ai credentials when running in a Lightning Studio or when logged in via the CLI.
Track Metrics¶
To start tracking metrics in your training loop, import LitLogger, create an instance, and pass it to Fabric:
from lightning.fabric import Fabric
from lightning.pytorch.loggers import LitLogger
# 1. Configure the logger
logger = LitLogger(name="my-experiment")
# 2. Pass it to Fabric
fabric = Fabric(loggers=logger)
Next, add fabric.log() calls in your code:
value = 0.5 # Python scalar or tensor scalar
fabric.log("some_value", value)
To log multiple metrics at once, use log_dict():
loss, acc, other = 0.1, 0.95, 0.5
values = {"loss": loss, "acc": acc, "other": other}
fabric.log_dict(values)
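Putting it together, metric calls usually live inside the training loop. The following is a minimal, illustrative sketch that repeats the setup from above so it is self-contained; the toy model, SGD optimizer, random data, and the step keyword argument (used here to set the x-axis position, as with Fabric's built-in loggers) are assumptions for illustration, not part of the LitLogger description above.

# Minimal, illustrative training loop (toy model and random data;
# the `step` argument is assumed to position points on the chart's x-axis)
import torch
from lightning.fabric import Fabric
from lightning.pytorch.loggers import LitLogger

logger = LitLogger(name="my-experiment")
fabric = Fabric(loggers=logger)
fabric.launch()

model = torch.nn.Linear(32, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
model, optimizer = fabric.setup(model, optimizer)

for step in range(100):
    inputs = torch.randn(8, 32, device=fabric.device)
    targets = torch.randint(0, 2, (8,), device=fabric.device)

    optimizer.zero_grad()
    outputs = model(inputs)
    loss = torch.nn.functional.cross_entropy(outputs, targets)
    fabric.backward(loss)
    optimizer.step()

    # Log scalar metrics for this step
    acc = (outputs.argmax(dim=1) == targets).float().mean()
    fabric.log_dict({"train_loss": loss, "train_acc": acc}, step=step)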
Log Hyperparameters¶
Log your model’s hyperparameters to keep track of your experiment configuration:
from lightning.pytorch.loggers import LitLogger
logger = LitLogger(name="my-experiment")
logger.log_hyperparams({
    "learning_rate": 0.001,
    "batch_size": 32,
    "model": "resnet50",
})
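If your script already parses its configuration from the command line, you can forward the whole thing in one call. A small sketch, assuming your script uses argparse; the flags below are illustrative, and log_hyperparams accepts a plain dict as shown above:

# Illustrative: forward a parsed CLI configuration to the experiment
import argparse
from lightning.pytorch.loggers import LitLogger

parser = argparse.ArgumentParser()
parser.add_argument("--learning_rate", type=float, default=0.001)
parser.add_argument("--batch_size", type=int, default=32)
parser.add_argument("--model", type=str, default="resnet50")
args = parser.parse_args()

logger = LitLogger(name="my-experiment")
logger.log_hyperparams(vars(args))  # Namespace -> plain dict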
You can also pass metadata directly when creating the logger:
from lightning.pytorch.loggers import LitLogger
logger = LitLogger(
    name="my-experiment",
    metadata={"learning_rate": "0.001", "batch_size": "32"},
)
Log Checkpoints¶
Enable automatic checkpoint logging with the log_model parameter:
from lightning.pytorch.loggers import LitLogger
logger = LitLogger(name="my-experiment", log_model=True)
Checkpoints will be automatically uploaded to the Lightning platform when saved.
You can also manually log model artifacts:
# Log a model checkpoint file
logger.log_model_artifact("/path/to/checkpoint.ckpt")
# Log a model object directly
logger.log_model(model)
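For example, you could pair this with Fabric's standard checkpointing. The sketch below continues the training-loop example above; the checkpoint path and the keys in the state dict are illustrative choices, not requirements.

# Illustrative: save a checkpoint with Fabric, then upload it manually
state = {"model": model, "optimizer": optimizer, "step": step}
fabric.save("latest.ckpt", state)

# Attach the saved file to the experiment
logger.log_model_artifact("latest.ckpt")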
Log Files¶
Log any file as an artifact:
# Log a configuration file
logger.log_file("config.yaml")
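For instance, you could write the run configuration to disk at startup and attach it to the experiment. The file name and JSON format below are illustrative; a config.yaml as shown above works the same way.

# Illustrative: dump the run configuration and attach it to the experiment
import json

config = {"learning_rate": 0.001, "batch_size": 32, "model": "resnet50"}
with open("config.json", "w") as f:
    json.dump(config, f, indent=2)

logger.log_file("config.json")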
Capture Terminal Output¶
Enable terminal log capture to save your script’s output:
from lightning.pytorch.loggers import LitLogger
logger = LitLogger(name="my-experiment", save_logs=True)
Your terminal output will be captured and available in the Lightning Experiments dashboard.
View Your Experiments¶
After running your training script, view your experiments at lightning.ai. The dashboard provides:
- Real-time metric charts
- Hyperparameter comparison
- Artifact management
- Team collaboration features
Access your experiment URL programmatically:
print(logger.url)