| title | labels | bodyText |
|---|---|---|
Error when Implementing training_epoch_end() for GAN example | [
"bug",
"help wanted"
] | 🐛 Bug
To Reproduce
Steps to reproduce the behavior:
Go to GAN example
Install latest version of pytorch-lightning (0.9.0)
pip install pytorch-lightning==0.9.0
Implement training_epoch_end() method for GAN class
def training_epoch_end(self, outputs):
return outputs
Run training code cell
gan_model = GAN(hpara... |
TPU available: true when there are no TPUs | [
"bug",
"accelerator: tpu"
] | 🐛 Bug
I am using a DGX machine (and so, no TPUs), but on initiating Trainer, it logs TPU available: True. This ends up returning Missing XLA configuration when I run my script.
To Reproduce
Code sample
Simply running the following lines on my machine:
>> trainer = pl.Trainer(gpus=[0]) ... |
[Bug] Batch Size mismatch | [
"help wanted",
"question",
"won't fix",
"waiting on author"
] | 🐛 Bug
A batch size of 16 is used in one of the steps instead of the specified batch size.
To Reproduce
Steps to reproduce the behavior:
View Colab notebook
Code sample
Minimum code reproduction: Colab Notebook
Expected behavior
Environment
Please copy and paste the output from our
environment collection script
(... |
Horovod with native 16 precision not working | [
"bug",
"help wanted"
] | 🐛 Bug
To Reproduce
Steps to reproduce the behavior:
using precision=16 with distributed_backend=horovod
Traceback (most recent call last):
File "/workspace/main_lightning.py", line 500, in <module>
main(hyperparams)
File "/workspace/main_lightning.py", line 492, in main
trainer.fit(model)
File "/usr/l... |
Metatags are saved over and over in TensorBoardLogger when logging metrics | [
"feature",
"help wanted",
"good first issue"
] | Is it okay that the metatags are saved every time metrics are logged in TensorBoardLogger? I find that it slows down training a bit.
I found a related discussion.
This is the part I'm talking about:
pytorch-lightning/pytorch_lightning/loggers/tensorboard.py
Lines 201 to 211
in
... |
Retrieve exact number of training steps in `configure_optimizers` | [
"question"
] | ❓ Questions and Help
What is your question?
Some schedulers may require the total number of training steps. How can I retrieve it?
Code
def configure_optimizers(self) -> (torch.optim.Optimizer, torch.optim.lr_scheduler.LambdaLR):
steps = self.trainer.total_training_steps
What have you tried?
I tried computing them ... |
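A minimal sketch of one way to estimate this inside configure_optimizers (not from the original issue); it assumes a single training dataloader of known length, an integer accumulate_grad_batches, and that the Trainer attributes shown are already set:

```python
import math

import pytorch_lightning as pl
import torch
from torch.optim.lr_scheduler import LambdaLR


class MyModule(pl.LightningModule):
    # ... __init__ / forward / training_step / train_dataloader omitted for brevity ...

    def configure_optimizers(self):
        optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)

        # Estimate total optimizer steps from the dataloader length, gradient
        # accumulation, and the number of epochs the Trainer is configured to run.
        batches_per_epoch = len(self.train_dataloader())
        accumulate = max(1, self.trainer.accumulate_grad_batches)
        total_steps = math.ceil(batches_per_epoch / accumulate) * self.trainer.max_epochs

        # Linear decay over the estimated number of steps (illustrative schedule).
        scheduler = LambdaLR(optimizer, lr_lambda=lambda step: max(0.0, 1.0 - step / total_steps))
        return [optimizer], [{"scheduler": scheduler, "interval": "step"}]
```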
Add logger for Azure Machine Learning | [
"feature",
"help wanted"
] | 🚀 Feature
We already have PyTorch Lightning loggers for Comet, MLFlow, and Neptune. It would be great if we also had one for Azure Machine Learning (AML).
Motivation
A data scientist hoping to use PyTorch Lightning in AML currently has to build their own "logger adapter" to get logs from their training runs to show up... |
Logging train metrics every N steps with TrainResult | [
"question"
] | What is your question?
Is there a way to log metrics every N training steps? Currently, I am returning a TrainResult object at the end of training_step with on_step=True (see code below). However, this setup is still logging data every 50 steps (see image).
Code
def training_step(self, inputs, batch_idx):
a... |
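A hedged guess at the cause (not stated in the truncated question): in 0.9 the Trainer writes logger rows every row_log_interval steps, which defaults to 50. A sketch of lowering it:

```python
import pytorch_lightning as pl

# Log rows every 10 training steps instead of the default 50 (value is illustrative).
trainer = pl.Trainer(row_log_interval=10)
```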
Unable to load checkpoint; __init__'s missing positional arguments | [
"bug",
"help wanted"
] | 🐛 Bug
To Reproduce
Steps to reproduce the behavior:
Create a very simple module with named arguments (of type inspect.Parameter.POSITIONAL_OR_KEYWORD), train it with a custom model checkpoint, and then try to reload it.
Code sample
import pytorch_lightning as pl
import torch
class Foo(pl.LightningModule):
def __... |
continue training | [
"question"
] | ❓ Questions and Help
What is your question?
I have 10 training datasets. I want to train a model sequentially on these datasets, i.e. train on the first dataset, then train the same model on the second dataset, and so on.
How to do this?
model = ClassificationModel()
for dataset in training_datasets:
trainer ... |
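A minimal sketch of one possible pattern for this (not from the original question), reusing ClassificationModel and training_datasets from the snippet and assuming each dataset is a torch Dataset:

```python
import pytorch_lightning as pl
from torch.utils.data import DataLoader

model = ClassificationModel()  # module from the question above

for dataset in training_datasets:  # assumed to be a list of torch Datasets
    train_loader = DataLoader(dataset, batch_size=32, shuffle=True)
    # Fresh Trainer per stage; the same model object keeps its weights between stages.
    trainer = pl.Trainer(max_epochs=5, gpus=1)
    trainer.fit(model, train_dataloader=train_loader)
```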
ONNX model does not save on GPU | [
"bug",
"help wanted",
"priority: 0"
] | 🐛 Bug
Attempting to export to ONNX after training a model on GPU throws an error if the input_sample or example_input_array is not a CUDA tensor.
To Reproduce
Steps to reproduce the behavior:
Train a model on GPU
Try to export to ONNX when self.example_input_array = torch.zeros(1, 1, 500, 500) or input_sample = torch... |
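A hedged sketch of a possible workaround (not part of the original report): move the example input onto the model's device before exporting.

```python
import torch

# `model` is the trained LightningModule from the report; to_onnx was added in 0.9.0.
input_sample = torch.zeros(1, 1, 500, 500)
device = next(model.parameters()).device  # works even if the module exposes no .device property
model.to_onnx("model.onnx", input_sample.to(device))
```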
Rename train_dataloader or update docs for trainer.fit()? | [
"feature",
"help wanted",
"won't fix",
"discussion"
] | 🚀 Feature
Either rename the train_dataloader arg to something else, like train_data_source,
or
update the docs to make it clear that trainer.fit() and other methods (e.g. trainer.lr_find()) can accept a LightningDataModule instead of a DataLoader.
Motivation
The new datamodule allows users to decouple data and mod... |
Data DistributedSampler Error when using Multi-GPU setting (with ddp). | [
"bug",
"help wanted"
] | 🐛 Bug
Hi,
I have converted my pure PyTorch code into pytorch-lightning code; however, the PL code crashes when using a multi-GPU setting, while the code runs successfully when I set gpus=1.
My task is a binary classification task, and the error happens in AUC-ROC-score computing using sklearn:
File "/usr/lo... |
Have an example of showing explicitly how to calculate metrics in DDP | [
"feature",
"help wanted",
"example"
] | 🚀 Feature
Given the new updates in 0.9.0, it is desirable to have an example showing exactly and explicitly how to calculate metrics in DDP. The metrics of interest are those that require all the labels and predictions for an entire epoch, such as F1 score or average precision.
Motivation
As a big fan of this proje... |
Validation Step for Epoch Clarified | [
"feature",
"help wanted",
"won't fix"
] | 🚀 Feature
This simple change would add the num_validation_sanity_steps flag to also include fullEpoch in addition to the synonymous -1 flag, to add clarity to the end-user.
Motivation
I grew frustrated with trying to find a way to easily run an entire validation epoch before using training loops in pytorch-lightning.... |
RuntimeError: Cannot replicate if number of devices (1) is different from 8 Exception in device=TPU:2: Cannot replicate if number of devices (1) is different from 8 | [
"bug",
"help wanted"
] | I am getting the above (title) error on Kaggle when I am doing
trainer.fit(model, dm)
where trainer = Trainer(tpu_cores = 8, logger = wandblogger, max_epochs = 20)
Following is the trace:
File "/opt/conda/lib/python3.7/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 330, in _mp_start_fn
_start_fn(ind... |
log_softmax doesn't return correct dtype within training_step in 16-bit precision | [
"bug",
"help wanted"
] | 🐛 Bug
Calling log_softmax (either torch.nn.functional.log_softmax or torch.Tensor.log_softmax, as functional calls the Tensor version) from the training_step of a LightningModule returns dtype float32, even when float16 was given.
This only happens with precision=16, precision=32 returns the correct type.
I suspect th... |
Add support for training on IPU's | [
"feature",
"help wanted"
] | 🚀 Feature
Graphcore IPUs are a new breed of processors for training machine learning models.
PyTorch support for training on Graphcore IPUs is available in preview. Add support for training on IPUs.
Reference: https://docs.graphcore.ai/projects/poptorch-user-guide/en/latest/
Motivation
The IPU benchmarks seem prett... |
Early Stopping + result dictionary + no validation not working. | [
"bug",
"help wanted"
] | 🐛 Bug
The case where the user does not use validation and returns a dictionary (instead of a TrainResult) during training does not work in combination with early stopping.
The test case which should check this is here:
pytorch-lightning/tests/callbacks/test_early_stopping.py
Lines 136 ... |
validation_epoch_end not logging validation_step EvalResult values | [
"bug",
"help wanted",
"design"
] | 🐛 Bug
When overwriting validation_epoch_end the EvalResult values from validation_step are not logged.
For my experiments I keep track of several metrics that I only log to TensorBoard at the end of each validation epoch. For most metrics I can specify EvalResult().log(on_epoch=True), but one of the metrics I can only... |
Logging accuracy with batch accumulation | [
"question"
] | I wanted to ask how PyTorch Lightning handles accuracy (and maybe even loss) logging when we have something like pl.Trainer(accumulate_grad_batches=ACCUMULATIONS).
My training looks like this:
def training_step(self, batch, batch_idx):
x, y = batch
y_hat = self(x)
loss = F.cross_entropy(y_hat, y, weig... |
Sync metrics between all GPUs before logging when using DDP | [
"bug",
"duplicate",
"help wanted",
"distributed"
] | Issue: When using DDP, tensorboard logger only logs for GPU0, which results in wrong overall metric shown in tensorboard. |
Log epoch as step when on_epoch=True and on_step=False | [
"feature",
"help wanted"
] | 🚀 Feature
When using the new structured Result API, it is no longer possible to force PL to report the epoch as the current step to loggers instead of the global step (or elapsed number of training steps).
Motivation
This leads to confusing output when viewing results in a tool that uses step count by default (e.... |
Test all metrics against sklearn (with many input trials) | [
"feature",
"help wanted",
"good first issue",
"ci"
] | Hand-chosen values are not enough, we need to test with a large batch of inputs where possible.
Something in this style, maybe with a fixed seed:
def test_auroc_versus_sklearn():
for i in range(100):
target = torch.randint(0, 2, size=(10, ))
pred = torch.randint(0, 2, size=(10,))
score_sk = ... |
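A hedged sketch of what one such randomized comparison could look like (not the project's actual test); it assumes pytorch_lightning.metrics.functional.auroc and sklearn.metrics.roc_auc_score are available:

```python
import numpy as np
import torch
from sklearn.metrics import roc_auc_score

from pytorch_lightning.metrics.functional import auroc


def test_auroc_versus_sklearn():
    torch.manual_seed(0)  # fixed seed so the random trials are reproducible
    for _ in range(100):
        target = torch.randint(0, 2, size=(10,))
        target[0], target[1] = 0, 1          # make sure both classes appear, so AUROC is defined
        pred = torch.rand(10)                # scores/probabilities rather than hard labels
        score_sk = roc_auc_score(target.numpy(), pred.numpy())
        score_pl = auroc(pred, target)
        assert np.allclose(score_pl.item(), score_sk, atol=1e-5)
```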
auto_scale_batch_size not working with datamodule | [
"bug",
"help wanted"
] | 🐛 Bug
The Trainer expects the LightningModule to have self.batch_size (see scale_batch_size() in training_tricks.py). However, if one is using the new LightningDataModule, that should be the class with self.batch_size defined.
To Reproduce
assert hasattr(lightning_data_module, "batch_size")
trainer = Trainer(auto_scal... |
Error using Custom DistributedSampler | [
"bug",
"help wanted"
] | 🐛 Bug
When using a custom DistributedSampler, I get the following error while requesting the train dataloader:
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/accelerators/tpu_backend.py", line 81, in train
self.tpu_train_in_process(self.trainer.tpu_id, model, self.trainer, self.mp_queue)
File "/... |
Monitor metric not found for Learning Schedulers when using Result() | [
"bug",
"duplicate",
"help wanted",
"priority: 0"
] | 🐛 Bug
If you are using Result() (TrainResult() and EvalResult()) you cannot use a Learning Scheduler that monitors a metric as it will not find the metrics logged/stored by the Result() class. The available metrics for me that are listed below in the error are not the ones that exist in my TrainResult() and EvalResult... |
Metric Aggregation | [
"feature",
"help wanted"
] | 🚀 Feature
To offer a better metric aggregation I discussed a potential way to go with @SkafteNicki .
We agreed on the following:
add a new aggregated-property that aggregates the metric over batches.
default ddp-sync to broadcasting + aggregation
add a reset function to metrics to reset the internal persistent aggre... |
Enable passing result from *_step Model Hook to corresponding *_batch_end Callback | [
"feature",
"help wanted",
"won't fix",
"design"
] | 🚀 Feature
Give users the option to pass a result at the end of *_step directly to the corresponding *_batch_end Callback.
Example:
validation_step outputs prediction and probabilities. Pass these to validation_batch_end user-defined Callback for advanced logging.
Motivation
This will remove the need for some boilerpla... |
Adding input sanitation for distributed backend and related trainer flags | [
"feature",
"help wanted",
"won't fix"
] | 🚀 Feature
An error should be thrown or a warning raised if the passed distributed_backend flag doesn't match one of the expected types (e.g. ddp, ddp-spawn, etc.). This may be applicable to other trainer flags too.
Motivation
This is really minor, but I just spent an embarrassing amount of time trying to figure out why my d... |
Cap batch size by number of training samples when using auto_scale_batch_size | [
"bug",
"help wanted"
] | 🐛 Bug
The batch size finder sets an unrealistically high batch size if all samples of the training dataset fit into one batch.
...
Batch size 8388608 succeeded, trying batch size 16777216
Batch size 16777216 succeeded, trying batch size 33554432
Batch size 33554432 succeeded, trying batch size 67108864
Finished batch ... |
Issue with multi gpu training | [
"question"
] | I get the following error on training with multi gpus:
File "training_on_each_split.py", line 271, in <module>
trainer.fit(model)
File "/home/nvarshn2/.conda/envs/pytorch_lightning/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 997, in fit
results = self.dp_train(model)
File "/hom... |
Incorrect aggregation when logging predictions into EvalResult with distributed_backend DP | [
"bug",
"help wanted",
"strategy: dp"
] | 🐛 Bug
It is possible to log some data in EvalResult like this:
result = pl.EvalResult(checkpoint_on=val_loss)
result.y_hat = y_hat.float()
Suppose I log some value with dimensions (32, 10) - batch size 32 and 10 classes. And I have N samples in total.
If distributed_backend is None, in validation_epoch_end outputs.y_... |
How to free up the CUDA memory | [
"question",
"won't fix"
] | I just wanted to build a model to see how pytorch-lightning works. I am working in a Jupyter notebook and I stopped the cell in the middle of training. I wanted to free up the CUDA memory and couldn't find a proper way to do that without restarting the kernel. Here is what I tried:
del model # model is a pl.Light... |
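For reference, a hedged sketch of the usual cleanup sequence in a notebook (not from the original question); it only helps if no other references to the model or trainer are still alive:

```python
import gc

import torch

del model      # drop the Python references that keep the CUDA tensors alive
del trainer
gc.collect()               # let Python actually reclaim the objects
torch.cuda.empty_cache()   # release cached blocks back to the driver
```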
Logging non-tensor scalar with result breaks subsequent epoch aggregation | [
"bug",
"help wanted",
"good first issue"
] | 🐛 Bug
Logging non-tensor scalar with result breaks subsequent epoch/tbptt aggregation
(on both 0.9 and master)
-- Process 1 terminated with the following error:
Traceback (most recent call last):
File "/opt/conda/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 20, in _wrap
fn(i, *args)
File "... |
on_fit_start not triggering (master) | [
"bug",
"help wanted",
"good first issue",
"refactor"
] | 🐛 Bug
on_fit_start is not being triggered on master as of f46318e
To Reproduce
Steps to reproduce the behavior:
Install master
Run template with on_fit_start added (included below)
(identical behavior for single gpu, ddp_spawn and ddp)
Code sample
"""
Runs a model on a single node across multiple gpus.
"""
import os... |
No predefined evaluation metric for pedestrian detection | [
"won't fix"
] | 🚀 Feature
Motivation
Pitch
Alternatives
Additional context |
Using multiple dataloaders at training time | [
"question",
"won't fix"
] | I try to train on two dataloaders: one attached to a dataset where each __getitem__ call fetches a predefined batch of varying length (so the batch_size I pass to the dataloader object is 1), and one where I sample randomly from a set of sequences, so each __getitem__ call fetches one sample.
I'm looking... |
TrainResult erroring with deepcopy on epoch end | [
"bug",
"accelerator: tpu"
] | When detaching the Result object from the graph, the code first calls copy.deepcopy(), which throws an error
*** RuntimeError: Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead.
as the Result object still contains the graphs. It seems like it needs to perform some sort of recursive deta... |
On the relationship between Result and Callback monitor | [
"bug",
"help wanted",
"priority: 0",
"discussion",
"design"
] | 💬 Discussion
Since I started using Lightning, I have noticed many users, as well as myself, having trouble with the way metrics are saved for callbacks by the Result class.
The current design of having only {early_stop_on,checkpoint_on} forces different callbacks to use the same monitor. I believe that wanting differe... |
Debugging docs uses the same argument for both cases of overfit_batches | [
"docs"
] | In this example of how to use overfit_batches, both options shown use the fractional argument 0.1. The method can take either a fraction of the total batches or a fixed number of batches. The argument passed in the second example should be changed to reflect that. |
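For reference, a short sketch of the distinction the issue wants the docs to show (values are illustrative):

```python
import pytorch_lightning as pl

# fraction: overfit on 10% of the training batches
trainer = pl.Trainer(overfit_batches=0.1)

# integer: overfit on exactly 10 training batches
trainer = pl.Trainer(overfit_batches=10)
```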
Unexpected key(s) in state_dict Error when calling Trainer.test | [
"bug",
"help wanted",
"waiting on author",
"checkpointing"
] | Dear all,
I have a trainer
import torch
from torch.optim.lr_scheduler import ReduceLROnPlateau
from pytorch_lightning import LightningModule
from torch.nn import functional as F
from pytorch_lightning.metrics.functional import accuracy, f1_score, auroc
class TraversabilityModule(LightningModule):
def __init__(se... |
MulticlassAUROC: Implement a multi-class version of the AUROC metric | [
"feature",
"help wanted",
"good first issue"
] | 🚀 Feature
Create the metric MulticlassAUROC to allow for the AUROC metric to be used in multi-class problem settings. Or,
Expand the AUROC metric to support multi-class data, which would also directly solve this AUROC bug that instead gives a random value when used in multi-class problems: #3303
Motivation
AUROC is ... |
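Until such a metric exists, a hedged sketch of computing a multi-class AUROC by delegating to scikit-learn's one-vs-rest implementation (an assumption, not the proposed Lightning API):

```python
import torch
from sklearn.metrics import roc_auc_score


def multiclass_auroc(probs: torch.Tensor, target: torch.Tensor) -> float:
    """One-vs-rest macro AUROC for (N, C) class probabilities and (N,) integer targets."""
    return roc_auc_score(
        target.cpu().numpy(),
        probs.cpu().numpy(),
        multi_class="ovr",
        average="macro",
    )
```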
Add a callback to support opacus | [
"feature",
"help wanted",
"won't fix",
"callback"
] | Opacus is a library that enables training PyTorch models with differential privacy. It supports training with minimal code changes required on the client, has little impact on training performance and allows the client to online track the privacy budget expended at any given moment.
https://github.com/pytorch/opacus |
Init tensors using type_as for multi gpu training error | [] | I get the following error when using type_as for multi-device training:
TypeError: type_as() got an unexpected keyword argument 'device'
Code to reproduce:
!pip install torchvision
!pip install pytorch-lightning==0.8.3 --upgrade
class MNISTModel(pl.LightningModule):
def __init__(self):
super(MNISTModel, s... |
Example code does not run | [
"bug",
"help wanted"
] | 🐛 Bug
Official example code, only modifying # of GPUs, does not run.
To Reproduce
Steps to reproduce the behavior:
import os
import torch
from torch.nn import functional as F
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
from torchvision import transforms
import pytorch_lightning as p... |
import pytorch_lightning [1] 2770657 illegal hardware instruction (core dumped) python3 | [
"bug",
"help wanted"
] | 🐛 Bug
Can't import pytorch_lightning.
I've seen this before with TensorFlow on computers with older CPUs and had to build the package from source. Is this the case here as well? PyTorch seems to run just fine on this CPU; does PyTorch Lightning not support it?
To Reproduce
Python 3.8.2 (default, Jul 16 2020, 14:00:26)... |
TypeError: cannot pickle 'socket' object | [
"question"
] | ❓ Questions and Help
Before asking:
search the issues.
search the docs.
What is your question?
When I try to pass a socket.socket parameter to the model, trainer.fit(model) raises an error:
File "/disks/disk1/damon/remote_src/YPlatformServer/src/handler.py", line 79, in main
trainer.fit(model)
File "/disks... |
multi-gpu training is slow in lightning | [
"help wanted",
"distributed"
] | 🐛 Bug
I have migrated my code from PyTorch to Lightning. However, I noticed the iteration time is almost double in Lightning. I have tried with one server and two servers (each with 4 V100 GPUs). It is always slower in Lightning. I am using the ddp distributed backend. However, I have noticed the same with Horovod as well... |
Lightning internal refactors | [
"feature",
"priority: 0",
"waiting on author",
"refactor"
] | 🐛 Bug
To Reproduce
Steps to reproduce the behavior:
Go to '...'
Run '....'
Scroll down to '....'
See error
Code sample
Expected behavior
Environment
Please copy and paste the output from our
environment collection script
(or fill out the checklist below manually).
You can get the script and run it with:
wget htt... |
log_gpu_memory should use a GPUStatsMonitor | [
"feature",
"help wanted",
"good first issue",
"won't fix",
"refactor"
] | 🐛 Bug
Trainer.log_gpu_memory should use a GPUStatsMonitor callback
The only extra feature of log_gpu_memory is the ability to print the min and max memory used. We could add that to GPUStatsMonitor if deemed necessary.
cc @rohitgr7
#2932 (comment) |
CPU training is broken for more than one process | [
"bug",
"help wanted",
"working as intended"
] | 🐛 Bug
open demo notebook on Colab
change MNIST trainer line to trainer = pl.Trainer(num_processes=1, progress_bar_refresh_rate=20). Observe that this works
change MNIST line to trainer = pl.Trainer(num_processes=2, progress_bar_refresh_rate=20). Observe that training does not commence. |
EarlyStopping not working / wrong keys in log | [
"bug",
"help wanted"
] | 🐛 Bug
I'm trying to implement EarlyStopping when the validation loss stops decreasing. I add the callback as follows:
def validation_step(self, batch, batch_idx):
x, y = batch
y_hat = self(x)
loss = F.l1_loss(y_hat, y)
result = pl.EvalResult(checkpoint_on=loss)
result.log("val_loss", loss, sync_dist=True)
... |
How to determine the device from within a `LightningDataModule` | [
"question"
] | ❓ Questions and Help
What is your question?
Is there a recommended way (or is it even at all possible) to determine which device is used from within the new LightningDataModule?
I ask the question because right now I decide how to set pin_memory when initializing the dataloaders based on the device. Prior to 0.9, the _... |
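A hedged sketch of one common workaround (not an official API): decide pin_memory from CUDA availability instead of from the module's device, since the datamodule is built before devices are assigned.

```python
import pytorch_lightning as pl
import torch
from torch.utils.data import DataLoader, TensorDataset


class MyDataModule(pl.LightningDataModule):
    def setup(self, stage=None):
        # Placeholder dataset; the real one comes from the user's pipeline.
        self.train_dataset = TensorDataset(torch.randn(64, 3), torch.randint(0, 2, (64,)))

    def train_dataloader(self):
        # Pin host memory only when a CUDA device will actually consume the batches.
        return DataLoader(
            self.train_dataset,
            batch_size=32,
            pin_memory=torch.cuda.is_available(),
        )
```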
ValueError: All dicts must have the same number of keys on model evaluation output. | [
"bug",
"help wanted",
"strategy: dp"
] | Any ideas to debug this issue?
It is happening to me in many different models, after I refactored the Result logging in the training_step, validation_step, and test_step methods, changing the old dictionary-based return to the new Result scheme, training on two GPUs at the same time.
The error doesn't pop up if I use distributed... |
Distributed training on the Sun Grid Engine (SGE) | [
"won't fix"
] | Has anyone ever managed to get PyTorch Lightning's distributed training to work on the Sun Grid Engine (or also called Oracle Grid Engine)? PyTorch Lightning only supports distributed training on multiple nodes with SLURM.
In SLURM you can also easily set the number of nodes in the submission script, I believe the clos... |
No error message when distributed_backend = "invalid choice", Trainer runs on CPU | [
"bug",
"help wanted"
] | 🐛 Bug
I'm trying to implement and run a new BERT-based model as always, using the gpus option, but strangely my model is still running on the CPU. I know this from 1. the training is too slow, 2. print(self.device) -> "cpu.", 3. the logs (right below). I never encountered this before, so I'm confused. I'm using pytorch-lightning=0... |
Inconsistent behavior between `validation_epoch_end` and `training_epoch_end` | [
"bug",
"help wanted",
"priority: 0"
] | Inconsistent behavior between validation_epoch_end and training_epoch_end
Assigning a variable to the result in training_step (result.y = y) does not work the same way as in validation. Take a look at the code in the section below to make it clearer.
For context, I'm trying to calculate AUC after both training and validat... |
val_dataloader is called twice in each worker | [
"help wanted",
"won't fix",
"docs"
] | 🐛 Bug
I'm trying a LightningDataModule class to manage the data.
Using horovod backend, if that matters.
I've noticed that each rank calls train_dataloader once, but val_dataloader twice somehow.
To Reproduce
run Lightning with a DataModule and Horovod, and add a debug print showing when val_dataloader is called
soemt... |
Trainer was signaled to stop but required minimum epochs (100) or minimum steps (None) has not been met. Training will continue... | [
"feature",
"help wanted",
"won't fix"
] | When I initialise the trainer as follows:
trainer = pl.Trainer.from_argparse_args(args, early_stop_callback=True, min_epochs=100, logger=mlflow, gpus=[0])
I cannot halt training early with Ctrl-C, I get the message
Trainer was signaled to stop but required minimum epochs (100) or minimum steps (None) has not been met. ... |
TypeError: 'generator' object is not callable | [
"help wanted",
"question"
] | I'm getting the exception TypeError: 'generator' object is not callable when I train with multiple GPUs
I'm not sure where it's coming from, my datasets are subclasses of torchtext.data.Dataset and the data loaders are torchtext.data.BucketIterator.
What's the easiest way of identifying what's causing the exception?
... |
How to log epoch's accuracy using Result object | [
"question",
"won't fix"
] | What is your question?
Hi,
Before version 0.9.0, I used to log the predictions and targets at each step, and then in the "_epoch_end" method I aggregated the predictions and targets and used these aggregations as input to the Accuracy metric to calculate the epoch's accuracy.
Is there a way to use the Result object in... |
mlflow training loss not reported until end of run | [
"bug",
"help wanted"
] | I think I'm logging correctly, this is my training_step
result = pl.TrainResult(loss)
result.log('loss/train', loss)
return result
and validation_step
result = pl.EvalResult(loss)
result.log('loss/validation', loss)
return result
The validation loss is updated in mlflow each epoch, however the... |
Gradient Flow Summary | [
"question",
"won't fix"
] | ❓ Questions and Help
Before asking:
search the issues. done
search the docs. done
What is your question?
I want to add a summary where I can track the gradient flow in my model.
The out-of-the-box gradient tracking is not sufficient for me because I need a more customized behavior.
I wanted to do it using a cal... |
`before_batch_transfer` and `after_batch_transfer` hooks in LightningDataModule | [
"feature",
"data handling",
"design"
] | 🚀 Feature
Can we have two additional hooks in LightningDataModule?
Something like before_batch_transfer_to_device, after_batch_transfer_to_device, although these can be renamed to something else.
Motivation
before_batch_transfer_to_device: This can help apply transformations or augmentations to a batch before it is t... |
Specify mlflow<1.11 | [
"feature",
"help wanted"
] | 🚀 Feature
Specify mlflow<1.11 in the environment.yml file.
Motivation
As I pointed out in this issue in the mlflow repo, the most recent version of that package creates an mlruns directory at the location of any script that imports it. Importing PL also results in this directory being created. This is problematic sinc... |
Iterations completing out of order (possibly) in ddp with torchelastic? | [
"bug",
"help wanted",
"waiting on author",
"distributed"
] | This might be a bug or might be expected. I'm running PyTorch Lightning with torchelastic and DDP. I'm noticing the iterations are being dumped out of order (below, iteration 632 precedes iteration 574). This could be due to delays in parallel writing... or perhaps just issues in logging. Is this expected behavior?
V... |
Models defined by data | [
"docs"
] | Is this correct (https://pytorch-lightning.readthedocs.io/_/downloads/en/latest/pdf/ pg17)
The first code example shows prepare_data() and setup() being called on the dataloader, but then it is not passed to fit()?
The second box has no explanation - I assume it's an alternative to the first code example? |
Loss display during epoch | [
"question",
"won't fix"
] | During training, I'm using a custom loss function to train my model. However, the loss is displayed as 0.000, but when I log the same value as a different variable it shows 4.73e-5 (a value in exponential format).
Epoch 80: 10%|███ | 100/1013 [01:33<14:11, 1.07it/s, loss=0.... |
Early stopping does not work with structured result returned in training_epoch_end | [
"bug",
"help wanted"
] | 🐛 Bug
Situation: A structured result is returned in training_epoch_end and early_stop_on was set.
Illustrative example:
def training_epoch_end(self, outputs):
avg_loss = outputs['minimize'].mean().squeeze()
result = pl.TrainResult(early_stop_on=avg_loss)
return result
Expected Behaviour: e... |
turning off GPU usage monitoring | [] | I'm using pl with Neptune logger, and get the following error every few secs:
NVMLError: Not Supported - GPU usage metrics may not be reported
Is there a quick way to turn off the GPU usage monitoring?
Thanks,
Yair |
DDP doesn't work properly with CUDA_VISIBLE_DEVICES | [
"bug",
"help wanted",
"priority: 0",
"distributed"
] | 🐛 Bug
DDP trainer doesn't work properly when CUDA_VISIBLE_DEVICES is set.
To Reproduce
Steps to reproduce the behavior:
Set CUDA_VISIBLE_DEVICES=1,2
Run DDP trainer with 2 GPUs
The main process will use available_gpus[self.trainer.local_rank] that is equal to 1
The second process will use GPU process_idx that is again ... |
How to disable printouts about GPU/TPU | [
"question"
] | How do I disable these printouts? |
Cometml Logger epoch is not set. | [
"feature",
"help wanted",
"logger"
] | 🐛 Bug
While logging with Comet ML there is an argument to set the epoch: https://www.comet.ml/docs/python-sdk/Experiment/#experimentlog_metrics
The info is available in the metrics dict, but instead of passing it as an arg, it is passed as a metrics value. I will supply a PR in a moment |
TypeError: can't pickle _thread.lock objects - Error while logging model into mlflow in multi gpu scenario | [
"question",
"won't fix"
] | ❓ Questions and Help
What is your question?
Trying to log the model into MLflow using mlflow.pytorch.log_model at train end. Getting the above error only in the multi-GPU scenario.
Code
mnist script file -
import pytorch_lightning as pl
import torch
from argparse import ArgumentParser
#from mlflow.pytorch.pytorch_autolog impor... |
Very slow training on SLURM cluster | [
"bug",
"help wanted",
"priority: 0"
] | 🐛 Bug
When switching from my local machine (an old and slow 960m laptop GPU) to a SLURM cluster with Titan X GPUs, I see a significant drop in performance from 4.71it/s to 7s/it (!) - so basically training becomes unusably slow.
To Reproduce
Take a model and submit it as a SLURM job. Happy to provide further details if... |
Model summarize displayed twice before training starts | [
"bug",
"help wanted",
"good first issue"
] | 🐛 Bug
Model summarize() is called in two spots, which results in duplication in the logs:
It's called once in the training loop: https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pytorch_lightning/trainer/training_loop.py#L160
And I'm unsure how it's called again
The first time is with the logger.info... |
Missing details in the document for the random seed | [
"docs"
] | 📚 Documentation
In the documentation, Lightning very thoughtfully reminds readers to set the random seed when using distributed data parallel. However, it does not mention where we should set it. Based on my experience with PyTorch, the seed can be set after calling the torch.distributed.init_process_g... |
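For context, a hedged sketch of where the seed is usually set in a Lightning script (the placement this issue asks the docs to spell out; MyModel is hypothetical):

```python
import pytorch_lightning as pl

pl.seed_everything(42)      # set the seed once, at the top of the main script
model = MyModel()           # hypothetical LightningModule
trainer = pl.Trainer(gpus=2, distributed_backend="ddp")
trainer.fit(model)
```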
Enabling batch transforms | [
"feature",
"help wanted",
"won't fix",
"docs"
] | 🚀 Feature
Hi! The feature is quite simple: a straightforward way to write batch-level transforms without having to put them directly in training_step.
Motivation
Motivation comes from the fact that in a project of mine, I want to change the color space of my input images directly on the batch rather than on ... |
Logging Intervals and Evaluation Intervals | [
"feature",
"help wanted"
] | 🚀 Feature
Logging and validation intervals should be performed using the global step.
Motivation
Currently, the row_log_interval and val_check_interval do not work as intended. If my batch size is large or my training set size is small, the number of batches per epoch is also small. Row log interval and validation che... |
How optimizer frequencies works | [
"question"
] | ❓ Questions and Help
What is your question?
I don't understand how optimizer frequencies work
#1269
Code
When I tried to work with them as a list of optimizers, I got this error
It arises after training_step()
What's your environment?
OS: [Win]
Packaging [conda]
Version [0.9.0]
P.S.
If you have a pytorch-lightning ... |
Turn off torch profilers for faster training | [
"feature",
"help wanted",
"won't fix"
] | 🚀 Feature
Motivation
PyTorch by default does not disable the autograd and other profilers while training; there are a lot of them listed in one of the talks here.
To enhance speed while training, it is recommended to turn them off manually.
Pitch
Since this feature should be on by default as it gives a good stack trace ... |
wandb logger.experiment does not seem to be the same as run object in wandb API | [
"question"
] | ❓ Questions and Help
What is your question?
I use PyTorch Lightning + the wandb logger. I do not know how to extract the history of training (training losses, validation losses, ...) from PyTorch Lightning or from the logger.
What have you tried?
In the docs, the logger has a property (experiment) returning a wandb run object
ht... |
Add an example of how to subclass base DDP to docs | [
"docs",
"priority: 0"
] | In some cases you want to customize DDP. We need to add that to our docs. |
Add missing callback hook for optimizer dicts | [
"feature",
"won't fix",
"priority: 0",
"design"
] | Need a callback hook to consolidate the state of optimizer dicts before saving the final state_dict during model checkpointing. |
Looking for an example with custom metric for early_stop | [
"question"
] | ❓ Questions and Help
What is your question?
Hi!
I was checking the docs and early_stop_callback and the only example I could find was https://pytorch-lightning.readthedocs.io/en/0.4.9/examples/Examples/#trainer-example. Is there any sample code where this is used? I'm looking for a Lightning model that returns a custom ... |
DDP TensorMetric and NumpyMetric exception | [
"bug",
"help wanted",
"priority: 0"
] | 🐛 Bug
When trying to train with metrics inherited from TensorMetric or NumpyMetric, an exception occurs
pytorch 1.6.0
pytorch-lightning == 0.9.0
CUDA_VISIBLE_DEVICES: [0,1,2]
Traceback (most recent call last):
File "/test.py", line 38, in <module>
main()
File "/test.py", line 34, in main
trainer.test(pl_mo... |
RPC support | [
"feature",
"help wanted",
"won't fix"
] | 🚀 Feature
PyTorch 1.6 adds RPC. When will that be added to pytorch-lightning? |
Allow auto-reduce of Result even if _epoch_end is implemented | [
"duplicate",
"feature",
"help wanted",
"design"
] | 🚀 Feature
Currently, if Result is used and *_epoch_end is implemented, the auto-reduce of the Result object is skipped.
The code for the validation loop is linked below. The same logic applies to the train and test loops.
pytorch-lightning/pytorch_lightning/trainer/evaluation_loop.py
Lines 20... |
Logging to progress bar doesn't work when calling trainer.test() | [
"bug",
"help wanted",
"working as intended"
] | 🐛 Bug
The metric logged by EvalResult.log() doesn't show up in the progress bar when setting prog_bar=True.
Code sample
import time
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import TensorDataset, DataLoader
import pytorch_lightning as pl
class MyModel(pl.LightningModule)... |
Enable training purely based on number of iterations instead of epochs | [
"feature",
"help wanted",
"good first issue",
"won't fix",
"design"
] | 🚀 Feature
Enable training purely based on number of iterations instead of epochs
Motivation
This can be useful for certain training runs. Without this feature, the user must set an unreachably high value for max_epochs and set max_steps to the desired iteration count. With this setup, the trainer will break from the t... |
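A short sketch of the workaround the issue describes (values illustrative): set max_steps to the desired iteration count and max_epochs high enough that it never triggers first.

```python
import pytorch_lightning as pl

trainer = pl.Trainer(
    max_steps=10_000,   # stop after this many optimizer steps
    max_epochs=1_000,   # effectively unreachable, so max_steps decides when training ends
)
```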
in `child_modules` the forward pass example should use the encoder subnet | [
"docs"
] | 📚 Documentation
In my understanding, the forward pass of the example autoencoder should return the latent representation extracted by the encoder.
If my assumption is correct, then the forward, training_step, and _shared_eval functions should be corrected to reflect that. In this case, I'd be happy to create and submi... |
Retire legacy code in torchtext | [
"feature",
"let's do it!",
"design",
"priority: 1"
] | ❓ Questions and Help
As the maintainer of the torchtext library, we plan to retire torchtext.data.batch.Batch and torchtext.data.example.Example in the next release (by the end of October). Those two components will still be available in torchtext.legacy. To handle the compatibility issue, should we send a PR and fix t... |
add support for 1.7 nightly | [
"feature",
"let's do it!"
] | 🚀 Feature
@ydcjeff created a CI job with Conda to test against the 1.7 nightly, but in #3074 some tests were failing for 1.7, so let's move that to separate work
Motivation
be continuously compatible even with new PT versions when they come
ToDo
uncomment CI and enable 1.7 testing
fix test so 1.7 passes |
Support automatic torchscript checkpointing. | [
"feature",
"help wanted",
"won't fix",
"let's do it!"
] | 🚀 Feature
Support exporting to TorchScript natively in the ModelCheckpoint callback. This would be a natural follow-up to #3080.
Motivation
It's common for users to rely on checkpoints (eg, usually last, but sometimes other intermediate models especially if using model ensembles or picking the best model the last k mo... |
Generator API for train_step with multiple optimizers | [
"feature",
"help wanted",
"won't fix"
] | 🚀 Feature
Add a generator API version of LightningModule.train_step() when dealing with multiple losses/optimizers.
Motivation
When dealing with multiple optimizers, the current API passes an additional optimizer_idx argument to train_step(). This is cumbersome with optional steps and shared variables. For example, this... |
GPUs requested but none are available | [
"question",
"won't fix",
"waiting on author"
] | My server has 8 GPUs, but when I use the Trainer class and set gpus=-1, I get the runtime error GPUs requested but none are available. Using torch to check the GPUs, the GPU count is 8 and cuda.is_available() is True. Can anyone tell me what's wrong? |
on_save_checkpoint callbacks runs in rank zero only | [
"bug",
"help wanted",
"discussion",
"design"
] | 🐛 Bug
If any callback implements on_save_checkpoint, then that function runs only in the rank zero worker. I think this is suboptimal as you might want to do some communication across workers before saving state.
The lineage of calls here is:
model checkpoint callback's on_validation_end is decorated with rank_zero_o... |
WandbLogger does not seem to support model and parameter tracking | [] | I am unsure whether this issue can be fixed from PyTorch Lightning's side or whether the issue lies with Wandb. The problem is that the WandbLogger.watch() function
pytorch-lightning/pytorch_lightning/loggers/wandb.py
Lines 136 to 137
in
c46de8a
... |
Why training with lightning is 1.5x slower than vanilla pytorch | [
"question"
] | I am assuming this is due to user error but cannot figure it out
Running pytorch-lightning==0.9.0 on Linux
Converted PyTorch code for training a BERT transformer model to PyTorch Lightning.
Vanilla code trains ~1.5x faster than lightning code. (4.8 it/s vs 2.7 it/s)
Running both code bases on 1x GPU, both with 32 bit pre... |