Dataset schema: title (string, lengths 5–164), labels (list), bodyText (string, lengths 0–46.7k)
fix failing examples
[ "bug", "help wanted", "example", "ci", "strategy: dp", "priority: 1" ]
πŸ› Bug We have uncommented PL examples in #4551 but two of them turned to be failing and shall be fixed: FAILED pl_examples/test_examples.py::test_examples_dp_image_classifier[\n--max_epochs 1 --batch_size 32 --limit_train_batches 2 --limit_val_batches 2 --gpus 2 --distributed_backend dp --precision 16 ] FAILED pl_exam...
PL shouldn't override PYTHONWARNINGS
[ "bug", "help wanted", "let's do it!", "3rd party" ]
πŸ› Bug This is bad: pytorch-lightning/pytorch_lightning/trainer/trainer.py Line 71 in c208ac6 os.environ['PYTHONWARNINGS'] = 'ignore:semaphore_tracker:UserWarning' What a user to do if they need to use PYTH...
Move dev debugger directly in logging
[ "ci", "refactor" ]
@edenlightning commented on Mon Nov 16 2020
Where are the parameters passed to?
[ "question", "won't fix" ]
❓ Questions and Help What is your question? I am curious about where the parameters are passed to. In the official implementation of SimCLR, I cannot find any operations that use the initialized parameters: batch_size: int, num_samples: int, warmup_epochs: int = 10, lr: float = 1e-4, opt_weight_decay: float = 1e-6, loss_tem...
How to monitor more than one quantity?
[ "question" ]
What do I do if I want to monitor more than one quantity?
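One common pattern (a minimal sketch, assuming both "val_loss" and "val_acc" are logged via self.log in the LightningModule; names are illustrative) is to attach one callback per monitored quantity:

```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint, EarlyStopping

# each callback watches its own quantity
checkpoint = ModelCheckpoint(monitor="val_loss", mode="min")
early_stop = EarlyStopping(monitor="val_acc", mode="max", patience=5)

trainer = Trainer(checkpoint_callback=checkpoint, callbacks=[early_stop])
```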
Accuracy calculation issue related to "+="
[]
pytorch-lightning/pytorch_lightning/metrics/classification/accuracy.py Line 98 in c208ac6 self.correct += torch.sum(preds == target) I encountered a RuntimeError: Trying to pass too many CPU scalars to CUDA kernel! error wh...
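A minimal sketch of the device mismatch that can produce this error (a hypothetical repro, and it needs a CUDA machine to run): an in-place `+=` between a CPU scalar state and a CUDA tensor fails, while moving the state to the same device first works.

```python
import torch

correct = torch.tensor(0)                 # metric state living on the CPU
preds = torch.zeros(4, device="cuda")
target = torch.zeros(4, device="cuda")

# correct += torch.sum(preds == target)   # can raise the reported RuntimeError
correct = correct.to(preds.device)        # move the state to the preds device
correct += torch.sum(preds == target)     # in-place add now stays on one device
```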
GAN Domain Template: Typo in the description of Adam's beta 2
[ "docs" ]
Hi, I think Adam's beta 2 parameter was mistakenly described as the first-order momentum of the gradient, whereas it should be the second-order momentum. In the current domain template: parser.add_argument("--b2", type=float, default=0.999, help="Adam: decay of first order momentum of gradient") Whereas it sh...
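The corrected line would presumably read as follows (a sketch; only the help string changes):

```python
import argparse

parser = argparse.ArgumentParser()
# beta2 controls the decay of the *second*-order moment estimate in Adam
parser.add_argument("--b2", type=float, default=0.999,
                    help="Adam: decay of second order momentum of gradient")
```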
Give users more control on paths used by ModelCheckpoint callback
[ "duplicate" ]
When initializing a ModelCheckpoint callback like this: model_checkpoint = ModelCheckpoint(monitor="val_loss", filename="{epoch}-{val_loss:.3f}") we get checkpoint files with names that look like this: epoch=1-val_loss=0.436.ckpt This looks really nice but the use of this name=value pattern is not necessarily what the...
Fix Lightning examples
[ "duplicate", "example" ]
DP/DDP is failing for all https://github.com/PyTorchLightning/pytorch-lightning/tree/master/pl_examples
Gather function in Lightning module with gradients
[ "feature", "help wanted" ]
πŸš€ Feature Add a gather function with gradients in LightningModule which can be implemented with the help of: https://github.com/PyTorchLightning/pytorch-lightning-bolts/blob/master/pl_bolts/models/self_supervised/simclr/simclr_module.py#L25 Motivation The default all_gather in pytorch breaks the gradient graph as it d...
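The linked SimCLR module implements this with an autograd Function; a minimal sketch of that pattern (assuming torch.distributed is initialized and every rank contributes a tensor of the same shape):

```python
import torch
import torch.distributed as dist

class AllGatherWithGrad(torch.autograd.Function):
    """all_gather that keeps the autograd graph intact."""

    @staticmethod
    def forward(ctx, tensor):
        ctx.batch_size = tensor.shape[0]
        gathered = [torch.zeros_like(tensor) for _ in range(dist.get_world_size())]
        dist.all_gather(gathered, tensor)
        return torch.cat(gathered, dim=0)

    @staticmethod
    def backward(ctx, grad_output):
        # sum gradients across ranks, then return this rank's slice
        grad_input = grad_output.clone()
        dist.all_reduce(grad_input, op=dist.ReduceOp.SUM)
        rank = dist.get_rank()
        return grad_input[rank * ctx.batch_size:(rank + 1) * ctx.batch_size]

# usage: features_all = AllGatherWithGrad.apply(features)
```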
I’m new to this
[]
❓ Questions and Help Before asking: Try to find answers to your questions in the Lightning Forum! Search for similar issues. Search the docs. What is your question? Code What have you tried? What's your environment? OS: [e.g. iOS, Linux, Win] Packaging [e.g. pip, conda] Version [e.g. 0.5.2.1]
Wrong test accuracy due to redundant data in DDP mode
[ "duplicate", "feature", "help wanted" ]
If I have 499 videos in the test set, in DDP mode it will load 512 videos for testing, apparently by copying videos to match the batch size. But this causes a wrong test accuracy. For now I have to save each video's predictions and calculate the accuracy on my own. Is there any way to solve this problem?
Python malloc error with iPython
[ "bug", "help wanted", "3rd party" ]
πŸ› Bug When in iPython, doing: from pytorch_lightning import Trainer or import pytorch_lightning gives: In [1]: from pytorch_lightning import Trainer python(82101,0x10e7f1dc0) malloc: can't allocate region :*** mach_vm_map(size=18446744071734263808, flags: 100) failed (error code=3) python(82101,0x10e7f1dc0) malloc: **...
Web crawler bug
[]
πŸ› Bug Please reproduce using the BoringModel and post here To Reproduce Expected behavior Environment Note: Bugs with code are solved faster ! Colab Notebook should be made public ! IDE: Please, use our python bug_report_model.py template. Colab Notebook: Please copy and paste the output from our environment c...
Add model sharding/gradient checkpointing from FairScale
[]
FairScale integration
[ "feature" ]
Getting an error when trying to use torch.load on a model trained using DDP
[ "question", "checkpointing" ]
❓ Questions and Help What is your question? I'm trying to save my model with torch.save() with the trainer and logger detached and then load it with torch.load(). If I train the model using DDP on 2 P4 gpus, I get the following error when I try to load it: RuntimeError: [enforce fail at inline_container.cc:222] . file ...
[Metrics] Add Image Gradient
[ "feature", "help wanted" ]
πŸš€ Feature Implement Image-Gradients for PT Lightning. Motivation Recently I was working on a vanilla PT implementation of the DenseDepth paper. They happen to use a DepthLoss as one of their loss functions. Incidentally, DepthLoss is based on calculating Image gradients between two images, native implementation of w...
Close issues where author did not respond
[ "feature", "help wanted", "won't fix" ]
Can we automate with GH actions? https://github.com/probot/no-response @Borda what do you think about this?
pl not respecting total_val_batches
[ "bug", "help wanted" ]
It took me a while to hunt this down. If I provide: total_train_batches = 10, total_val_batches = 3, val_check_batch = 1, how do we get to total_batches = 40 in tqdm? Eventually I found the culprit to be this: https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pytorch_lightning/callbacks/progress.py#L326-L330 ...
Edit Profile Β· User Settings Β· GitLab
[]
I shared "Edit Profile · User Settings · GitLab", come take a look! @Mi Browser | https://gitlab.com/-/profile
Add useful links to our metrics docs
[ "docs" ]
Add wikipedia/equations links to metrics docs, or any other resources on how metrics are calculated.
How to 101
[ "question" ]
❓ Questions and Help Before asking: Try to find answers to your questions in the Lightning Forum! Search for similar issues. Search the docs. What is your question? Code What have you tried? What's your environment? OS: [e.g. iOS, Linux, Win] Packaging [e.g. pip, conda] Version [e.g. 0.5.2.1]
validation_epoch_end with DDP
[ "question", "won't fix", "waiting on author" ]
What is your question? I am trying to implement a metric which needs access to whole data. So instead of updating the metric in *_step() methods, I am trying to collect the outputs in the *_epoch_end() methods. However, the outputs contain only the output of the partition of the data each device gets. Basically if the...
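A hedged workaround sketch: gather the per-rank outputs before computing the metric (assumes the process group is initialized and each rank's stacked outputs have identical shape):

```python
import torch
import torch.distributed as dist

def gather_all_outputs(local_outputs: torch.Tensor) -> torch.Tensor:
    """Concatenate validation outputs from every DDP rank."""
    gathered = [torch.zeros_like(local_outputs) for _ in range(dist.get_world_size())]
    dist.all_gather(gathered, local_outputs)
    return torch.cat(gathered, dim=0)
```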
Potential bug in metric when updated with a slice of tensor in DDP
[ "bug", "help wanted", "distributed", "priority: 1" ]
πŸ› Bug when a metric is updated with a slice of tensor as one of the inputs (either pred or target) with multiple GPU with DDP, it throws out an error: RuntimeError: Tensors must be non-overlapping and dense Once the slice of the tensor is clone and detach, then it works. Please reproduce using [the BoringModel and p...
Wrong AUROC when running training_step in DDP mode
[ "bug", "help wanted", "working as intended" ]
πŸ› Bug I run a binary classification model and compute the auc as a performance indicator. The first I ran the code on 1 single GPU and it worked well, but the second time I tried to using 4 GPUs with DDP backend, the AUC became very weird, it seemed to just sum all the AUCs of the 4 GPUs. I use pl.metrics.AUROC() to ...
`lr_finder` fails when model defines some layers in `setup`.
[ "bug", "help wanted", "won't fix", "trainer: tune", "priority: 1" ]
πŸ› Bug In lr_find, trainer.save_checkpoint(...)Β  is called before trainer has a chance to call model.setup(stage='fit') and datamodule.setup(stage='fit'). Therefore, weight restoration will fail later if extra layers are defined in setup. Please reproduce using the BoringModel and post here https://colab.research.googl...
Lower parity tests
[ "feature", "help wanted", "good first issue", "ci" ]
πŸ› Bug We have observed that the trainer reached the initial threshold for parity Note that we are still almost the same fast, but we may find some weeks spots Please reproduce using see GPU tests in the benchmark folder Additional context actual version 1.0.7 has avg diff 0.84s
Precision/Recall/Fbeta error out for floating tensor input of preds but OK for long tensor of preds
[ "help wanted", "won't fix", "working as intended", "priority: 1" ]
πŸ› Bug The current implementation of Precision/Recall/Fbeta used _input_format function to format input shapes and types. There appears to be a bug how this function deals with essential the same input of preds but different data types (long vs float) Please reproduce using the BoringModel and post here To Reproduce ...
transform module with explicit transformations coupled with torch.transforms
[ "feature", "help wanted" ]
πŸš€ Feature Thinking of filing a PR for a transform module which consists of in-house transformations that are not present in torch.transforms. Motivation GridMask, mosaic, mixup, cutmix, etc.: these augmentations are missing in most frameworks, and they are de facto standard for Kaggle and general computer vision solutions. Reference...
Apply import formatter `isort`
[ "feature", "help wanted", "good first issue", "ci", "refactor", "priority: 2" ]
πŸš€ Refactoring As isort has been added to CI in #4242, we now need to apply the formatter step by step, i.e. one submodule per PR (recommended in #4242 (comment) by @Borda) Steps For each PR: choose one submodule from the list below and apply isort to it; remove the corresponding line in pyproject.toml; make sure isort --chec...
Metrics are not reset when using self.log()
[ "help wanted", "ci" ]
πŸ› Bug Metrics are not reset when using self.log() unless user explicitly calling self.metric.compute() See MSE metric example in the colab notebook linked below. Printing internal states on epoch end shows that the metric states are not reset. Calling self.metric.compute() explicitly resolve the issue (uncomment line ...
Lightning Model module , get_model_size method
[ "feature", "help wanted", "won't fix", "discussion", "design" ]
πŸš€ Feature A get_model_size method in the model module which returns the model size in megabytes based on precision. Motivation Just thinking out loud.. Additional context We can add this to the model summary too # ------------ # model # ------------ model = LitClassifier(args.hidden_dim, args.learning_rate) >> print(model.size) >> 1...
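A minimal sketch of such a helper (the name model_size_mb is hypothetical); element_size() makes it precision-aware, 4 bytes per fp32 parameter and 2 per fp16:

```python
import torch

def model_size_mb(model: torch.nn.Module) -> float:
    """Parameter memory of a model in megabytes."""
    total_bytes = sum(p.numel() * p.element_size() for p in model.parameters())
    return total_bytes / 1e6
```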
Repeated .fit() calls ignore max_steps iteration bound
[ "bug", "help wanted", "good first issue", "priority: 1" ]
πŸ› Bug Hello! While trying to convert my code to PL (I'm starting to become a big fan!) I came across some unexpected behavior: In an iteration-based training setup repeated calls of trainer.fit() result in ignoring the iteration bound set by the max_steps argument. The trainer will finish the entire epoch, even thoug...
Learning Rate schedulers do not follow the optimizer frequencies.
[ "bug", "help wanted", "good first issue", "priority: 1" ]
πŸ› Bug I want to use two optimizers sequentially with different LR schedulers. For the first one, I want to use OneCycleLR and for the second optimizer I do not want to use a LR scheduler. The problem with OneCycleLR is that you need to specify the exact number of steps. I have the setup like this def configure_optimiz...
How to get default checkpoint dir
[ "question" ]
In test_epoch_end, I want to save some files to the default checkpoint dir. How can I access this path?
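A hedged sketch: once training has started, the resolved directory is available on the checkpoint callback attached to the trainer (the file name is illustrative):

```python
import os

def test_epoch_end(self, outputs):
    ckpt_dir = self.trainer.checkpoint_callback.dirpath
    with open(os.path.join(ckpt_dir, "test_outputs.txt"), "w") as f:
        f.write(str(outputs))
```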
"MisconfigurationException: PyTorch XLA not installed." Even though PyTorch XLA is installed
[ "bug", "help wanted", "accelerator: tpu", "waiting on author", "3rd party" ]
πŸ› Bug I'm using exactly the same command to install pytorch xla as shown in the doc: curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py python pytorch-xla-env-setup.py --version nightly --apt-packages libomp5 libopenblas-dev However, I still got this err...
How to use pytorch-lightning to train a detectron2 model?
[ "question" ]
The trainers in pytorch-lightning are really cool, but how can I use them to train a detectron2 model? Is there any example that I can follow?
Questions about GPUStatsMonitor callback and GPU utilization
[ "question", "won't fix" ]
I'm logging the GPU activity using the default values. I'm getting the following plots on my tensorboard: A couple of questions: Is the utilization really zero at specific time points (jigsaw pattern) or is it just plotting zeros in between epochs? If my utilization is really zero at specific points, how can I make sure thi...
Use custom batch dataloader with DDP
[ "question", "won't fix" ]
Hello, In our project, we use a custom data loader like so: class BatchDataLoader(torch.utils.data.dataloader.DataLoader): def __init__(self, ds, batch_size, shuffle, num_workers=0): inner_sampler = RandomSampler(ds) if shuffle else SequentialSampler(ds) sampler = BatchSampler(inner_sampler, batch_s...
DDP accelerator should have 'trainer=None' as default
[ "bug", "help wanted", "design" ]
Add 'trainer=None' as a default for the DDP accelerator's first argument (otherwise I cannot pass an instance to the Trainer instantiation) https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pytorch_lightning/accelerators/ddp_accelerator.py#L49 class DDPAccelerator(Accelerator): def __init__(self, trainer, cluster_environme...
`lr_finder` fails when called after training for 1 or more epochs
[ "bug", "help wanted", "trainer: tune", "priority: 1" ]
πŸ› Bug Calling lr_finder on the model after trainer.fit() has been called will fail with: LR finder stopped early due to diverging loss. Failed to compute suggesting for `lr`. There might not be enough points. , even when the default value of min_lr=1e-08 has been changed to 1e-30. Please reproduce using the BoringMo...
Metric names when using multiple dataloader for validation
[ "question" ]
Hi guys, I am using multiple dataloaders for validation. This works great so far, but I have some questions regarding the logged metrics: As far as I understand, lightning will automatically assign a metric suffix (/dataloader_idx_X). Is there any way for me to control that behaviour? Wandb assumes groups with that sla...
Training loss not convergence
[ "bug", "help wanted" ]
πŸ› Bug Running the example code below, at my pc , the loss not change: Epoch 29: 43%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‹ | 800/1875 [00:04<00:05, 189.48it/s, loss=1.483, v_num=87] as u see, even at epoch29, the loss is around 1.5. code: `import os import torch from torch.nn import functional as ...
DDP Logdir Multiple Runs Bug
[ "bug", "help wanted", "good first issue", "priority: 0" ]
πŸ› Bug Using accelerator="ddp" and running a pl Experiment mulitple times each time creating a new Trainer, the log version number is handled in a wrong way. Instead of creating folder "version_2" after running "version_1" a folder with name "2" is created. To fix this, after the training, os.environ.pop("PL_EXP_VERSI...
Auto-scale batch size triggers "on_train_end"
[ "bug", "help wanted", "trainer: tune", "priority: 1" ]
When running Trainer with auto_scale_batch_size set to true, the on_train_end function is called at the start, but never at the end, because there is a check to not run teardown multiple times.
How to write log info to txt file
[ "question", "logging" ]
How do I save each epoch's self.log info to a txt file while keeping the tensorboard logs? I want to log some metric value for each epoch, plus my debug text info, such as self.log('This is a text info')
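A hedged sketch of one approach: keep self.log for TensorBoard and attach a plain Python logging FileHandler for the text file (names are illustrative):

```python
import logging

text_log = logging.getLogger("experiment_text_log")
text_log.setLevel(logging.INFO)
text_log.addHandler(logging.FileHandler("train_log.txt"))

# inside the LightningModule:
#   self.log("val_acc", acc)                                    # TensorBoard
#   text_log.info(f"epoch {self.current_epoch}: val_acc={acc}") # txt file
```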
Tensorboard log_graph does not seem to do anything
[ "bug", "help wanted" ]
πŸ› Bug While exploring Tensorboard's logging features I experimented with the log_graph function, that was added after #2915 (issue) and #3003 (PR). According to the docs the function has the following signature log_graph(model, input_array=None) So I tried to call it doing, inside my PL module: ... def __init__(self...
[Doc] Callback example raising an exception
[ "bug", "help wanted" ]
With pytorch-lightning 1.0.7, I followed the doc example as follows: import os import torch from torch import nn import torch.nn.functional as F from torchvision.datasets import MNIST from torchvision import transforms from torch.utils.data import DataLoader import pytorch_lightning as pl from torch.utils.data import random_s...
metrics.classification.ConfusionMatrix() not available
[ "help wanted", "question" ]
πŸ› Bug The ConfusionMatrix class called from metrics.classification.ConfusionMatrix is not working. When trying to import class it fails. To fix issue the init.py file of the metrics module needs to include a comma following the ConfusionMatrix line like so from pytorch_lightning.metrics.classification import ( ...
What is the module attribute in the training_loop.py?
[ "question", "waiting on author" ]
I want to train my model using pytorch_lightning, but when I pass my model into the trainer.fit(mymodel) function, an error gets thrown. It says "my model" has no attribute 'module'. I searched through the pytorch_lightning files and found it in pytorch_lightning/trainer/training_loop.py on lines 123 - 125. There is actu...
[LoggerCollection.log_hyperparams] [no optional argument metrics]
[ "bug", "help wanted", "priority: 0", "logger" ]
πŸ› Bug Hello ! :) The following method TensorBoardLogger.log_hyperparams has an optional argument metrics yet LoggerCollection.log_hyperparams doesn't, yielding an error when relying on several loggers including TensorBoardLogger and using this optional argument. Is that behaviour expected? Shouldn't all loggers have...
test loading legacy checkpoints
[ "feature", "help wanted", "ci", "checkpointing", "priority: 1" ]
πŸš€ Feature Create persistent storage with checkpoints from all legacy versions, and try to load them and continue training.
conflicts of warm-up and lr scheduler
[ "feature", "help wanted", "won't fix" ]
In the docs, warm-up is done as below: # learning rate warm-up def optimizer_step(self, current_epoch, batch_nb, optimizer, optimizer_idx, second_order_closure=None, on_tpu=False, using_native_amp=False, using_lbfgs=False): # warm up lr if self.trainer.global_step < 500: lr_scale = min(1., float(self.traine...
WandbLogger does not mark uploaded model as 'artifact'
[ "bug", "help wanted", "won't fix", "logger", "3rd party", "priority: 1" ]
πŸ› Bug I'm using WandbLogger with the latest pytorch-lightning==1.0.8. It seems like trained checkpoint is treated as mere file not a model artifact, even I turned on log_model=True. It's much convenient to use model artifact from other script so I hope that is done by pytorch-lightning automatically. Environment * CUD...
How to resume training
[ "question" ]
❓ Questions and Help What is your question? How to resume training What have you tried? The following: trainer = pl.Trainer(gpus=1, default_root_dir=save_dir) saves the checkpoints but does not resume from the last checkpoint (it starts a new version) The following code starts the training from scratch (but I read that...
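A hedged sketch of the 1.0-era mechanism: the resume_from_checkpoint Trainer argument restores weights, epoch, and optimizer state (the path is illustrative; save_dir and model come from the question's snippet):

```python
import pytorch_lightning as pl

trainer = pl.Trainer(gpus=1, default_root_dir=save_dir,
                     resume_from_checkpoint="path/to/last.ckpt")
trainer.fit(model)
```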
DDP+ manual optimization support
[ "bug", "duplicate", "priority: 0", "distributed" ]
Gradients aren't being synchronised if using manual_backward
Implementing a metric is getting complicated
[ "bug", "help wanted" ]
In PL 0.9.0 I had the Metric implementation below: class MRRMetric(TensorMetric): def similarities(self, x1, x2): """ Calculates the cosine similarity matrix for every pair (i, j), where i is an embedding from x1 and j is another embedding from x2. :param x1: a tensor with shape ...
default_root_path vs default_root_dir
[ "docs" ]
I noticed that the argument to Trainer is default_root_dir, but the docs sometimes show default_root_path. I did a grep on the codebase for default_root_path, and here are the results: ./notebooks/05-trainer-flags-overview.ipynb:2155: "trainer = pl.Trainer(default_root_path=os.getcwd())\n", ./pytorch_lightning/train...
Learning rate scheduling interval on LambdaLR scheduler cannot be set to "step".
[ "bug", "help wanted", "priority: 0" ]
πŸ› Bug Hello. I am trying to use LambdaLR learning rate scheduler with lr updates at every step with my custom function. However, I have found that even with scheduler={'interval': 'step'}, the update occurs at each epoch, not each training step. def configure_optimizers(self): def func(step: int, max_steps=max_ste...
The correct way to resume experiment with logger
[ "question" ]
From what I understand, checkpoints don't save the id of the logger run, right? Then what is the correct way to resume training from a checkpoint and resume the correct logger run at the same time? For example, I'm using the Weights & Biases logger, where you can pass an id argument which resumes the run experiment: WandbLogger(id=run_id) H...
How to log more than one metrics in logger?
[ "question" ]
I want to log two metrics. What should I do? self.log('my_loss', loss, on_step=True, on_epoch=True, prog_bar=True, logger=True) This can only log one metric.
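A hedged sketch: self.log can simply be called once per metric, or self.log_dict can record several values in one call (shared_step is a hypothetical helper):

```python
def training_step(self, batch, batch_idx):
    loss, acc = self.shared_step(batch)
    self.log_dict({"my_loss": loss, "my_acc": acc},
                  on_step=True, on_epoch=True, prog_bar=True, logger=True)
    return loss
```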
manual_optimization does not work with ddp
[ "bug", "help wanted", "priority: 0" ]
πŸ› Bug Can't run ddp with manual optimization. Fails on the second batch with a error: RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons: 1) Use of a module parameter outside the forwardfunction. Please make sure model parameters are not shared across multip...
how to properly skip samples that cause inf/nan gradients/loss
[ "feature", "question", "won't fix" ]
tl;dr does the approach in the code snippet below look ok, or is there a better alternative for automatically skipping few "bad" samples in the data that cause inf/nan gradients/loss? (is it a good practice altogether?) details sometimes, there is a small percentage (but annoyingly large in absolute value) of "dirty" s...
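A hedged sketch of one common alternative (compute_loss is a hypothetical helper): returning None from training_step makes Lightning skip the optimizer step for that batch.

```python
import torch

def training_step(self, batch, batch_idx):
    loss = self.compute_loss(batch)
    if not torch.isfinite(loss):
        return None            # skip the "bad" batch entirely
    return loss
```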
`validation_epoch_end` might need current epoch
[ "feature", "help wanted" ]
Hi, I would like to save some files with the current epoch as a filename suffix in validation_epoch_end. However, it seems that it's not a parameter of validation_epoch_end, and I can only get it from pl.Trainer. Is there another way, instead of just adding an additional parameter to validation_epoch_end? Thanks!
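A minimal sketch: current_epoch is already available as a property of the LightningModule itself, so no extra parameter is needed (the file name is illustrative):

```python
def validation_epoch_end(self, outputs):
    path = f"val_outputs_epoch{self.current_epoch}.txt"
    with open(path, "w") as f:
        f.write(str(outputs))
```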
manual_optimization does not work with dp
[ "bug", "help wanted", "waiting on author", "priority: 1" ]
πŸ› Bug We can run dp backend with manual optimization, but the gradients seem to be messed up hence the model can't learn anything. To Reproduce Change optimization to manual in basic gan bolt, then change the backend to dp. Set batch_size = 2, compare experiments on 1 GPU vs 2 GPUs When using 1 GPU everything is fine...
Custom Checkpoint file extension
[ "feature", "good first issue", "checkpointing" ]
πŸš€ Feature At the moment, we have hardcoded .ckpt as the file extension for any checkpoint. pytorch-lightning/pytorch_lightning/callbacks/model_checkpoint.py Line 429 in db69d16 ckpt_name = f"{filename}.ckpt" Proposed ...
Precision and Recall over validation step
[ "question", "working as intended" ]
When Precision and Recall are directly computed, I get the following result: import torch from pytorch_lightning.metrics import Precision from pytorch_lightning.metrics import Recall y = torch.tensor([0, 0, 2, 2, 1, 1, 1, 2, 0, 0]) y_hat = torch.tensor([1, 1, 2, 1, 1, 1, 1, 1, 2, 1]) precision = Precision(num_classes...
Loss format from .3f to .3g in the training loop
[ "feature", "help wanted", "design" ]
πŸš€ Feature I propose to change the default format of loss during training (in the tqdm bar) from .3f to .3g Motivation When using pytorch-lightning with losses that are quite close to zero, the tqdm information during training becomes non informative, because the loss is always 0.000. For example: Epoch 41: 76%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ...
Abstract matplotlib figure logging from individual logger API
[ "feature", "help wanted", "won't fix", "logger" ]
πŸš€ Feature Implement default log_figure for all implemented loggers Motivation Currently, logging figures relies on calling the API of the individual logger directly. This is not really convenient for a variety of reasons: It is cumbersome to change loggers It is cumbersome to disable logging (e.g. for debugging) --> ...
wandb logger problem with on_step log on validation
[ "bug", "help wanted", "question", "3rd party", "priority: 1" ]
πŸ› Bug When logging on the validation_step with on_step=True and on_epoch=False the following happens: wandb warnings are generated to alert about a step numbering problem (probably confusing the validation step number which seems cyclical with the overall step which is always increasing) wandb charts for training ...
Where is stored logs of the NeptuneLogger when I use offline mode for a logger?
[ "question", "won't fix", "logger" ]
I use the Neptune Logger with the offline mode in my pipeline, but I can't find where the log files are located, and I can't find any NeptuneLogger parameters to set the storage dir and so on for offline mode. Can someone help with it? Thanks in advance!
Metrics for object detection with bounding boxes
[ "feature", "help wanted" ]
πŸš€ Feature We could add a whole bunch of metrics for object detection (bounding boxes) by just writing a plugin that transforms the bounding boxes to multi-class label tensor representation. Motivation & Pitch Metrics for bounding box detection are basically just classification metrics (see here for example) - the only...
Add multi-task support to EarlyStopping
[ "feature", "help wanted", "won't fix" ]
πŸš€ Feature Make EarlyStopping watch multiple values and only stop when all of them no longer improve. Motivation I have been training a multi-task model with multiple outputs. Each output is validated and logged by the model. As of today, early stopping can only watch one of them. Hence, the user has to choose which ta...
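A hedged sketch of such a callback (entirely hypothetical, not an existing PL API): it tracks several monitored values and only requests a stop once none of them has improved for `patience` validation rounds.

```python
from pytorch_lightning.callbacks import Callback

class MultiMetricEarlyStopping(Callback):
    def __init__(self, monitors, patience=3, mode="min"):
        self.monitors = monitors
        self.patience = patience
        self.sign = 1.0 if mode == "min" else -1.0
        self.best = {m: float("inf") for m in monitors}
        self.wait = 0

    def on_validation_end(self, trainer, pl_module):
        improved = False
        for m in self.monitors:
            current = self.sign * float(trainer.callback_metrics[m])
            if current < self.best[m]:
                self.best[m] = current
                improved = True
        self.wait = 0 if improved else self.wait + 1
        if self.wait >= self.patience:
            trainer.should_stop = True  # stop only when *all* metrics stalled
```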
Use lightning with dgl
[ "feature", "help wanted", "won't fix", "3rd party" ]
def train_dataloader(self): return DataLoader(self.train_dataset, batch_size=self.hparams.batch_size, shuffle=True, num_workers=4, collate_fn=self.batcher) I use collate_fn to batch dgl graphs, but when I tr...
`LightningModule.log(..., on_epoch=True)` logs with `global_step` instead of `current_epoch`
[ "feature", "help wanted", "logging" ]
πŸ› Bug When logging using: self.log(f"some_metric", value, on_step=False, on_epoch=True) in ie. training_step, the data is logged to the tensorboard with X axis in steps instead of epochs: Expected behavior is for the x axis to be in epochs: To Reproduce (I'll try to work on reproduction example once I find some free...
Duplicate epochs when calling .fit() twice
[ "bug", "help wanted", "priority: 0", "breaking change" ]
πŸ› Bug To Reproduce def test_bug(tmpdir): epochs = [] class TestModel(BoringModel): def on_epoch_end(self): epochs.append(self.current_epoch) trainer = Trainer( max_epochs=2, limit_train_batches=1, limit_val_batches=1, default_root_dir=tmpdir, ...
Accuracy metric for preds at half precision is zero with pl=1.0.8
[ "bug", "help wanted", "priority: 0" ]
πŸ› Bug The accuracy metric is wrong if preds are given with half precision. See example. To Reproduce import torch from pytorch_lightning.metrics import Accuracy acc = Accuracy(threshold=0.5) target = torch.Tensor([1, 1, 0, 0]) preds = torch.Tensor([0.7, 0.4, 0.8, 0.4]) print(acc(preds, target)) -> 0.5 print(acc(p...
fix typing in PL codebase - multiple PRs
[ "feature", "help wanted", "good first issue", "refactor", "priority: 1" ]
πŸš€ Feature Add/fix typing in the whole PL codebase; this will yield multiple PRs, as we prefer to split the work into smaller pieces. Additional context In #5021 we introduced a new ignore list, so after it is merged, take out a single ignored item and fix the typing for that part. Each item + its fix = one PR Sections pytorch...
Behaviour when limit_train_batches is float is inconsistent
[ "bug", "help wanted", "priority: 1" ]
πŸ› Bug When passing limit_train_batches as a float to the trainer, the total number of steps during training is inconsistent and is dependent on accumulate_grad_batches, and independent of drop_last. Please see the tests in the notebooks for more examples. Please reproduce using the BoringModel and post here To Reprod...
The right place for an "essential" callback
[ "question" ]
❓ Questions and Help What is your question? I am currently using an ordinal loss formulation that cuts up a real-valued output space into regions using cutpoints. Each region is linked to a discrete ordered label. After the backward (optimizer step), the model requires cutpoints to be re-arranged in an ascending order....
Memory allocated on gpu:0 when using torch.cuda.empty_cache()
[ "bug", "help wanted" ]
πŸ› Bug Pytorch lightning calls torch.cuda.empty_cache() at times, e.g. at the end of the training loop. When the trainer is set to run on GPUs other than gpu:0, it still allocates memory on gpu:0 when running torch.cuda.empty_cache(). Apparently this is the initial device context, but it can be avoided. For example, wi...
Make all grad_norm logs in a single section
[ "feature", "help wanted", "won't fix" ]
πŸš€ Feature Make all grad_norm logs in a single section Motivation Current tensorboard logs of the gradient norm look like: e.g. for each parameter, there is a separate section that contains a single plot. It takes too much space, especially in the case of really deep networks or when you're experimenting with different archit...
Get correct metrics for an epoch with DDP
[ "question" ]
❓ Questions and Help What is your question? Before #2528 is fixed to allow calculating metrics for an entire epoch (for example, average precision) with DDP, I am curious if there is a workaround at the moment, given the current metrics is calculated on each GPU before syncing data from each GPU? Take average precision...
Auto-scaling batch-size not compatible with half precision training
[ "bug", "help wanted", "priority: 0" ]
Using precision=16 and auto_scale_batch_size=True yields a 'NoneType' object has no attribute 'state_dict' error. Expected behavior The largest batch size should be found when using 16-bit precision. Environment PyTorch Version (e.g., 1.0): 1.6 OS (e.g., Linux): Windows 10 How you installed PyTorch (conda, pip, source): conda...
Incorrect default cuda device when using single gpu other than cuda:0
[ "bug", "help wanted", "priority: 0" ]
πŸ› Bug The default cuda is not set properly to the trainer.root_gpu in single-GPU mode. The tensors created with device='cuda' will be placed on the incorrect gpu, and the dataloader will acquire memory on the incorrect gpu when pin_memory=True. Maybe we'll need to add torch.cuda.set_device(self.trainer.root_gpu) to ...
Epoch counting is one-off in multiple instances
[ "bug", "help wanted", "priority: 0" ]
πŸ› Bug Two issues occur: The final epoch does not save a checkpoint during training. Resuming from a checkpoint N will start the epochs at N+2. Expected behavior Final checkpoint should save a .ckpt file, as usual. Should resume from epoch N+1. Environment * CUDA: - GPU: - Tesla V100-DGXS-16GB - Tesla V100-DGX...
Incorrect Precision/Recall/F1 score compared to sklearn
[ "bug", "help wanted", "priority: 0" ]
πŸ› Bug To Reproduce Steps to reproduce the behavior: Copy the code Run the code from top to bottom Compare print results See Difference between sklearn and Lightning Code import torch import numpy as np import pytorch_lightning as pl from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_sco...
RuntimeError: No `loss` value in the dictionary returned from `model.training_step()` with pytorch lightning
[ "question" ]
❓ Questions and Help What is your question? I am trying to run a custom dataset using Pytorch lightning but am not able to do so due to the following error. The input is an array with shape (n, m). Can anyone tell me what I am doing wrong? Traceback (most recent call last): File "TestNet.py", line 285, in <modul...
MLFlowLogger throws a JSONDecodeError
[ "bug", "help wanted" ]
πŸ› Bug To Reproduce Steps to reproduce the behavior: Code sample from pytorch_lightning import Trainer from pytorch_lightning.loggers import MLFlowLogger mlflow_logger = MLFlowLogger(experiment_name="test-experiment", tracking_uri="URI_HERE") t = Trainer(logger=mlflow_logger) t.logger.experiment_id throws a JSONDecod...
How to scheduler.step() after every batch
[ "question" ]
I want to make use of schedulers like torch.optim.lr_scheduler.CyclicLR & torch.optim.lr_scheduler.OneCycleLR https://pytorch.org/docs/stable/optim.html These schedulers require scheduler.step() to be called after each batch. How do I achieve this in PyTorch Lightning?
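A hedged sketch: returning the scheduler with "interval": "step" asks Lightning to call scheduler.step() after every batch (total_steps is illustrative):

```python
import torch

def configure_optimizers(self):
    optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
    scheduler = torch.optim.lr_scheduler.OneCycleLR(
        optimizer, max_lr=1e-2, total_steps=10_000)
    return [optimizer], [{"scheduler": scheduler, "interval": "step"}]
```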
load_from_checkpoint() doesn't work when a LightningModule inherits from typing.Generic
[ "bug", "help wanted" ]
πŸ› Bug When a LightningModule with saved hyperparameters inherits from typing.Generic, hyperparameters saved in the checkpoint file are not loaded automatically, causing an error. When load_from_checkpoint() calls inspect.signature() to gather the list of arguments of the LightningModule that inherits from typing.Gener...
Rename class metrics PrecisionRecall to PrecisionRecallCurve and update examples
[ "feature", "help wanted" ]
πŸš€ Feature Rename the class metric PrecisionRecall to PrecisionRecallCurve Motivation the class metric PrecisionRecall is based on the functional metric precision_recall_curve and therefore should have PrecisionRecallCurve as its proper name instead of the current name PrecisionRecall. In addition, there is need to upd...
Supporting TPU pods
[ "feature", "help wanted" ]
More context: https://cloud.google.com/tpu/docs/training-on-tpu-pods
Callbacks are not being called, gradient norm not getting logged,how to debug these scenarios?
[ "question", "won't fix" ]
❓ Questions and Help I have defined a callback for the on_epoch_end event. Also, I have defined track_grad_norm = 1, which I think is also kind of a callback. When I used --fast_dev_run or --limit_train_batches and --limit_val_batches, the callback (including the gradient norm plot) got executed properly. When I run without...
Support a to_torchscript function on the LightningModule
[ "feature", "help wanted", "let's do it!", "design" ]
πŸš€ Feature Support a conversion function to PyTorch JIT similar to what's available for ONNX. Motivation TorchScript is a way to create serializable and optimizable models from PyTorch code. Any TorchScript program can be saved from a Python process and loaded in a process where there is no Python dependency. TorchScri...
Checkpoint/saving a model in the middle of an epoch
[ "question", "won't fix" ]
❓ Questions and Help Hi all, I have a network that trains slowly on a large dataset (something like 1 week per epoch). In my previous pure-Pytorch version, I saved a checkpoint of the model every hour along the way without doing any sort of additional validation. I just want to make sure I didn't lose my progress. is t...
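A hedged sketch of a wall-clock-based checkpoint callback (entirely hypothetical; the built-in ModelCheckpoint of that era only saved at validation boundaries):

```python
import time
from pytorch_lightning.callbacks import Callback

class PeriodicCheckpoint(Callback):
    def __init__(self, dirpath, every_seconds=3600):
        self.dirpath = dirpath
        self.every_seconds = every_seconds
        self.last_save = time.monotonic()

    def on_batch_end(self, trainer, pl_module):
        # save a mid-epoch checkpoint once per wall-clock interval
        if time.monotonic() - self.last_save >= self.every_seconds:
            trainer.save_checkpoint(
                f"{self.dirpath}/step{trainer.global_step}.ckpt")
            self.last_save = time.monotonic()
```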
No log is recorded in debug mode, but manual operation works
[ "bug", "help wanted" ]
πŸ› Bug just as title Code sample import torch from torch.nn import functional as F from torch import nn from pytorch_lightning.core.lightning import LightningModule from pytorch_lightning.core.step_result import TrainResult from pytorch_lightning import loggers from torch.utils.data import DataLoader, random_split from...
TrainResult wrong parameters on docs
[ "help wanted", "docs" ]
While checking the code against the docs I noticed that the docs mention 3 positional parameters on the pl.TrainResult class, but the code signature has 4. Also, the getting-started tutorial instantiates this class with the first positional parameter without explaining what it is used for. This last inconsistency is key becaus...
IoU metric returns 0 score for classes not present in prediction or target
[ "bug", "help wanted" ]
πŸ› Bug The iou metric implementation always returns a score of 0 for a class that is not present in either the prediction or the target. This can lead to a deflated score even for perfectly-predicted examples. Case 1: one example of an affected case is multi-class semantic segmentation of an image that does not contai...