validation loops run the partial dataset with horovod
[ "bug", "help wanted" ]
Hello, It seems to be the same issue as #1161. When I use horovod, validation_step and validation_epoch_end are called multiple times. Thank you.
auto_lr_find=True doesn't work with early_stop_callback
[ "bug", "help wanted" ]
πŸ› Bug When I use auto_lr_find=True with early_stop_callback I find errors like this. Traceback (most recent call last): File "gpu_template.py", line 92, in <module> main(hyperparams) File "gpu_template.py", line 53, in main trainer.fit(model) File "/home/hirune/anaconda3/envs/PANDA/lib/python3.7/site-pa...
Return the evaluation result of Trainer.test
[ "feature", "help wanted" ]
🚀 Feature This enhancement request is to let Trainer.test return the dictionary of test metrics. Motivation Currently, Trainer.test returns nothing. The user would have to open Tensorboard to see the test result or write a custom logger. This enhancement offers simplicity for beginners. In some scena...
[Examples] The UNet model has some bugs
[ "bug", "help wanted", "example" ]
πŸ› Bug The UNet model definition has some bugs pertaining to bilinear interpolation. Code sample pytorch-lightning/pl_examples/models/unet.py Lines 35 to 37 in 2950f66 for _ in range(num_layers - 1): ...
MNIST multi-GPU: You requested GPUs: [6, 7] But your machine only has: [0, 1]
[ "bug", "help wanted" ]
import os import torch from torch.nn import functional as F from torch.utils.data import DataLoader from torchvision.datasets import MNIST from torchvision import transforms import pytorch_lightning as pl class CoolSystem(pl.LightningModule): def __init__(self, hparams=None): super().__init__() ...
Enable Deepsource checks in CI/CD
[ "feature", "help wanted", "won't fix", "ci" ]
In addition to flake8, mypy and other checks, I use https://deepsource.io/ for my projects. It is free and from time to time it finds non-trivial bugs. When I use it for projects with Pytorch-lightning and several validation loaders it throws me an error in python style and syntax. To overcome this I need to add things...
Error running on ddp (can't pickle local object 'SummaryTopic') with comet logger
[ "bug", "help wanted" ]
I have the following problem running on ddp mode with cometlogger. When I detach the logger from the trainer (i.e. deleting logger=comet_logger) the code runs. Exception has occurred: AttributeError Can't pickle local object 'SummaryTopic.__init__.<locals>.default' File "/path/multiprocessing/reduction.py", line 60, in...
Model checkpoint and restore via GCS on Ai platform
[ "question" ]
❓ Questions and Help What is your question? I am training models on Google's ai platform and every checkpointing or restoration is done via Google Cloud Storage (not using pytorch-lightning for that yet). I noticed that pytorch-lightning uses torch.save, torch.load and io operations to work with saving and loading, whi...
example of doing simple inference with lightning
[ "question", "won't fix" ]
I have an existing model where I load some pre-trained weights and then do inference (one image at a time) in pytorch. I am trying to basically convert it to a pytorch lightning module and am confused about a few things. So currently, my __init__ method for the model looks like this: self._load_config_file(cfg_file) # ...
MNIST ddp: You requested GPUs: [6, 7] But your machine only has: [0, 1]
[ "bug", "help wanted" ]
import os import torch from torch.nn import functional as F from torch.utils.data import DataLoader from torchvision.datasets import MNIST from torchvision import transforms import pytorch_lightning as pl class CoolSystem(pl.LightningModule): def __init__(self, hparams=None): super().__init__() # get hyperpara...
Can lightning auto-find unused GPUs
[ "question" ]
I have 2 GPUs in a machine; PL always picks the same GPU to use. Can it automatically find the unused GPU and run on it?
Caffe2 error when importing pytorch-lightning
[ "help wanted", "question" ]
πŸ› Bug Get Caffe2 Error when import the pytorch-lightning. To Reproduce Steps to reproduce the behavior: import pytorch_lightning WARNING:root:This caffe2 python run does not have GPU support. Will run in CPU only mode. WARNING:root:Debug message: /opt/caffe2/build/lib/libcaffe2.so: undefined symbol: dnnLRNCreateForwar...
Trainer.from_argparse_args with additional kwargs causes model to not be saved
[ "bug", "help wanted", "priority: 0" ]
πŸ› Bug When using Trainer.from_argparse_args() to initialize the trainer, there will be some specific arguments that we would like to keep constant and not send as part of hparams. If the extra arguments turn out to be an object, such as a TensorBoardLogger or a ModelCheckpoint object, the model will not be saved becau...
Progress bar / log dict items added to outputs in training_epoch_end
[ "bug", "help wanted" ]
πŸ› Bug When running with training_step returning something along this: return { 'loss':x, 'some_value_for_epoch_end': y, 'progress_bar':{'v1':z.mean()}} you get 'v1' as part of the outputs in epoch end. This is unexpected imo. Also in case it is: return { 'loss':x, 'some_value_for_...
Unnecessary information from apex always pops up
[ "bug", "help wanted" ]
πŸ› Bug To Reproduce import pytorch_lightning as pl trainer = pl.Trainer( gpus=2, progress_bar_refresh_rate=0, precision=16, num_sanity_val_steps=0, profiler=True, gradient_clip_val=hparams.gradient_clip_val, ) output: Selected optimization level O1: Insert automatic casts around Pytorch funct...
Is `hparams` really a good practice?
[ "help wanted", "question", "discussion" ]
❓ Questions and Help I am a bit confused about good practices in PyTorchLightning, having in mind hparams in particular. I will provide some of my thoughts about this topic. The docs says: # YES model = LitModel(hparams) trainer = Trainer.from_argparse_args(hparams, early_stopping_callback=...) # NO # model = LitMode...
Track model configuration in addition to hyperparameters
[ "feature", "help wanted", "won't fix" ]
🚀 Feature I would like to separate the current hparams (for example used to log models) into two groups: Hyperparameters: those hparams which can be optimized without changing the model's meaning and which may change the performance, for example the number of samples generated, the batch size or the learning rate. Confi...
loading hparams as tags_csv results in Dict not Namespace being passed to __init__
[ "bug", "help wanted", "won't fix" ]
πŸ› Bug Doctrstring indicates: """ However, if your checkpoint weights don't have the hyperparameters saved, use this method to pass in a .csv file with the hparams you'd like to use. These will be converted into a :class:~argparse.Namespace and passed into your :class:LightningModule for use. """ This do...
Using `configargparse.ArgumentParser`?
[ "duplicate", "question" ]
❓ Questions and Help What is your question? Is it possible to use somehow configargparse.ArgumentParser in PyTorchLightning? I tried the following code, but neither patience nor any of the Trainer's parameters are changed to values from YAML file. Do I really need to create custom Trainer and override add_argparse_args...
How to use pytorch-lightning to run GAN?
[ "question" ]
I want to implement a GAN with pytorch-lightning, but I did not find a demo.
What is the current state of per iteration (not per epoch) schedulers support?
[ "question" ]
❓ Questions and Help What is your question? I am trying to use a scheduler that should call step() function on every iteration rather than every epoch (i.e., 1-cycle like scheduler). However, I am not sure what is the best way to implement it with the most recent version of pytorch-lightning. I've seen some discussions...
Trainer.test's dataloader argument can't replace pre-defined dataloader
[ "bug", "help wanted" ]
πŸ› Bug Trainer.test function supports test_dataloader argument. But if test dataloader defined before in trainer module, changing test dataloader with giving an argument in Trainer.test isn't working. To Reproduce ... do some train stuff with trainer trainer.test(model, dataloader1) # run dataloader 1 trainer.test(mod...
How to persist a pytorch lightning module that depends on external data?
[ "question", "won't fix" ]
❓ Questions and Help What is your question? Hi! We're using pytorch lightning to train language models and transformers from scratch. This includes training tokenizers and applying them text data, resulting in binarized data. The way we've structured the process is to train a tokenizer, apply it to text data (coupling ...
Wandb Group Argument
[ "feature", "help wanted", "logger" ]
πŸš€ Feature Wandb currently supports an argument group to its init function that is not currently exposed by the PL api. I was hoping that we could expose this argument to the user to pass into the wandb init call. Motivation This would allow users to group their experiments on the wandb platform and keep them ordered t...
native amp does not work when rewriting the optimizer_step function
[ "bug", "help wanted" ]
πŸ› Bug If one rewrites optimizer_step like suggested in the docs (ie. https://pytorch-lightning.readthedocs.io/en/latest/optimizers.html#step-optimizers-at-arbitrary-intervals), the native AMP stops working. The issue happens due to the default optimizer_step performing a few native-amp-specific actions (see https://gi...
`model.test()` can fail for `ddp` because `args` in `evaluation_forward` are malformed
[ "bug", "help wanted", "good first issue", "priority: 0" ]
πŸ› Bug model.test() can fail while training via dp because TrainerEvaluationLoopMixin.evaluation_forward doesn't handle an edge case. To Reproduce Attempt to model.test() any lightning model in dp mode (I believe it fails in any of the modes at pytorch-lightning/pytorch_lightning/trainer/evaluation_loop....
[Model saving and loading] possibility to save additional content next to the checkpoint.ckpt file
[ "feature", "help wanted", "won't fix" ]
πŸš€ Feature I want to save my variables that I need to init my model in a config file in the same folder as the checkpoint is saved. Therefore, my feature request is to make the variable filepath of save_checkpoint(self, filepath) in trainer_io.py (line 247) available in the function def on_save_checkpoint(checkpoint). ...
How can I retrieve metrics from training and testing
[ "question", "won't fix" ]
❓ Questions and Help How can I extract the metrics returned from training_step, training_epoch_end, validation_step, validation_epoch_end, test_step, test_epoch_end after a train() or a test() run? I'd like to return some dictionary (e.g. {'loss': ..., 'log': ..., 'param a: 'a', 'param b': 'b', 'param c': {...}}) from ...
Checkpoint: OverflowError: cannot serialize a string larger than 4GiB
[ "help wanted", "question", "won't fix" ]
πŸ› Bug Model checkpointing fails with the error: OverflowError: cannot serialize a string larger than 4GiB and halts training PyTorch Version (e.g., 1.0): 1.5 OS (e.g., Linux): Linux How you installed PyTorch (conda, pip, source): conda Build command you used (if compiling from source): Python version: 3.7 CUDA/cuDNN...
Allow not to freeze other optimizers' parameters
[ "feature", "help wanted", "good first issue" ]
πŸš€ Feature When using multiple optimizers, provide a flag which can control whether we should freeze other parameters or not (which is a 'FIX' in here) Motivation When implementing BicycleGAN with lightning, I'm having great trouble because it requires something like this: opt_e.zero_grad() opt_g.zero_grad() loss_eg....
Learning rate scheduler's epoch off by one when resuming from checkpoint
[ "bug", "duplicate", "help wanted" ]
πŸ› Bug Currently lr_scheduler's state is updated after the checkpoint callback, so what is being saved here is last epoch's state. Note: I think this has the same fix as #1464, but I'm posting it here because (1) I got rekt by this again, (2) in case it's not the same bug, and (3) #1464 is not fixed. To Reproduce Steps...
Tensorboard log_hyperparams(params, metrics) seems not to have effect
[ "bug", "help wanted", "priority: 0" ]
πŸ› Bug Calling self.logger.log_hyperparams(hparams_dicts, metrics_dicts) in test_epoch_end doesn't have the desired effect. It should show the entries in the Hparams section with hparams and metrics specified but it shows nothing instead. Looking at the code it seems to be caused by self.hparams and the pre-logging of ...
Test demo error
[ "bug", "help wanted" ]
Traceback (most recent call last): File "/home/zgz/github_code/pytorch-lightning/tests/base/models.py", line 15, in <module> from test_tube import HyperOptArgumentParser ModuleNotFoundError: No module named 'test_tube' During handling of the above exception, another exception occurred: Traceback (most recent ca...
Need a grammar check in README.md file.
[ "bug", "help wanted" ]
There are some grammar issues in README.md file. Can I work on this issue?
Add a basic logger which does not require ports
[ "feature", "help wanted", "good first issue" ]
🚀 Feature It would be great to have a basic logger that does not require opening ports. Motivation At this time, all loggers supported by Lightning require opening ports. There are some environments where I don't have total control and I cannot open ports. It would be great to implement a logger that could dump the me...
LightningModule with yacs CfgNode in __init__ will fail with some training settings
[ "question" ]
What is your question? My custom LightningModule has a yacs.config.CfgNode parameter in its __init__: from yacs.config import CfgNode class MyModule(pl.LightningModule): def __init__(self, cfg: CfgNode): super(MyModule, self).__init__() self.cfg = cfg ... When I use auto_lr_find=True in th...
How to save the model after certain steps instead of epoch?
[ "question", "won't fix" ]
❓ Questions and Help Before asking: search the issues. search the docs. What is your question? I am trying to train a NN model on a super-big tabular data(about half billion), and I am wondering if I can save the data every certain steps(a million for example) in an epoch instead of every epoch because it indeed sp...
Use .comet.config file for CometLogger
[ "feature", "help wanted", "won't fix" ]
πŸš€ Feature When creating a CometML experiment normally, the API key will be read from the file ~/.comet.config or from an environment variable if it isn't passed in directly. It would be nice if the CometLogger supported these uses as well. Motivation Putting the API key in code is certainly a bad practice, and it's a ...
add torchelastic docs
[ "docs" ]
Setting precision=16 (using apex) causes early stopping to break
[ "bug", "help wanted" ]
πŸ› Bug The current early stopping monitor initilize by comparing if the function monitor_op is equal to torch.lt. self.best = torch_inf if self.monitor_op == torch.lt else -torch_inf pytorch-lightning/pytorch_lightning/callbacks/early_stopping.py Line 110 in 12138ce ...
Automatic batch-size scaling is missing properties
[ "bug", "help wanted" ]
File "envs/demo2/lib/python3.7/site-packages/pytorch_lightning/trainer/training_tricks.py", line 267, in _run_power_scaling trainer.fit(model) File "envs/demo2/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 839, in fit self.single_gpu_train(model) File "/lib/python3.7/site-packages/...
Allow boolean flags to work without passing True
[ "bug", "help wanted", "priority: 0" ]
We tried to fix this but it's still broken. This fails when adding args to argparse automatically: --auto_lr_find. Instead we have to do --auto_lr_find True, which is not great.
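A common workaround (a sketch of one option, not necessarily the fix Lightning eventually shipped) is to register boolean flags with a string-to-bool converter plus nargs="?" and const=True, so a bare --auto_lr_find means True while an explicit value still parses:

```python
import argparse

def str2bool(value):
    # Accept common spellings of true/false so "--auto_lr_find True" keeps working.
    if isinstance(value, bool):
        return value
    if value.lower() in ("yes", "true", "t", "1"):
        return True
    if value.lower() in ("no", "false", "f", "0"):
        return False
    raise argparse.ArgumentTypeError(f"boolean value expected, got {value!r}")

parser = argparse.ArgumentParser()
# nargs="?" with const=True lets a bare "--auto_lr_find" mean True,
# while an explicit value is still routed through str2bool.
parser.add_argument("--auto_lr_find", type=str2bool, nargs="?", const=True, default=False)

print(parser.parse_args(["--auto_lr_find"]).auto_lr_find)           # True
print(parser.parse_args(["--auto_lr_find", "False"]).auto_lr_find)  # False
print(parser.parse_args([]).auto_lr_find)                           # False
```

Note that the naive type=bool does not work here, because bool("False") is truthy for any non-empty string.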
DDP breaks LR finder
[ "bug", "help wanted" ]
πŸ› Bug DDP breaks LR finder To Reproduce finder = trainer.lr_find(model) print(finder.suggestion()) Traceback (most recent call last): File "./training.py", line 107, in <module> main(hparam_trial) File "./training.py", line 97, in main finder = trainer.lr_find(model) File "/opt/conda/lib/python3.6/site-...
Issue with Colab TPU: module 'torch_xla.core.xla_model' has no attribute 'rendezvous'
[ "help wanted", "question" ]
Whenever I set up my colab runtime with the code provided in PyTorch-Lightning Documentation, Release 0.7.6rc1 on page 190: TPU support, I get the following exception: INFO:lightning:training on 8 TPU cores INFO:lightning:INIT TPU local core: 0, global rank: 0 INFO:lightning:INIT TPU local core: 2, global rank: 2 Excepti...
Summing multiple losses with single machine ddp
[ "bug" ]
Hi, I'm summing together multiple different losses using ddp on a single machine, 2 gpus. I've been struggling to reduce my loss to zero as a sanity check on a subset of my images. Is there something I should be calling to synchronise loss across gpus? I've done this with MNIST no worries. My model output is a dictiona...
Question about custom backward
[ "question", "won't fix" ]
❓ Questions and Help Before asking: search the issues. search the docs. What is your question? I want to write my own .backward. In the specification of method there are 4 arguments: self, use_amp, loss and optimizer: def backward(self, use_amp, loss, optimizer) But to do my backward I need additional tensors from t...
Can not use Trainer.test() if train and val dataloaders are not defined
[ "bug", "help wanted" ]
πŸ› Bug When the model does not define train_dataloader and no val_dataloader, we can not use trainer.test(model, test_dataloaders=test_dl). The configuration checks fail with a MisconfigurationException. Code sample model = ... # a model with no `train_dataloader`, `val_dataloader` defined test_dl = ... # a dataloade...
distributed training crashes with dp (list comprehension issue from torch?)
[ "bug", "help wanted", "won't fix" ]
πŸ› Bug I ran a distributed GPU template and get an error with data parallel and scatter_gather from torch nn parallel in particular. To Reproduce Steps to reproduce the behavior: install packages git clone from master run basic example gpu job with distributed Validation sanity check: 0it [00:00, ?it/s]Traceback (most...
Replace val_loss with monitor_metric
[ "feature", "help wanted", "won't fix" ]
Validation epoch end has a special key called 'val_loss' which enables early stopping and checkpoint. However, a better name is likely monitor_metric def validation_epoch_end(self, outputs): return {'monitor_metric': whatever_thing_i_want} This makes it clear to the user that they can pass in anything (epo...
load_from_checkpoint(): hparam_overrides only works with hparams_file
[ "bug", "help wanted", "priority: 0" ]
πŸ› Bug Using the hparam_overrides argument for load_from_checkpoint() without the hparams_file argument gives an UnboundLocalError. To Reproduce Code sample Do: MyLightningModule.load_from_checkpoint(ckpt_path, hparam_overrides={'param': 0}) and the result will be UnboundLocalError: local variable 'hparams' referenced ...
Lightweight Hyperparameter Datastructure
[ "feature", "help wanted" ]
πŸš€ Feature A simple and flexible way to store hyperparameters in a dict/Namespace-like object. Motivation Pitch An object that behaves like this: # just like Namespace hparams = Hyperparameters(x=1, y=2) # or from a dict hparams = Hyperparameters({"x": 1, "y": 2}) # it could support nesting hparams = Hyperparameter...
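A minimal plain-Python sketch of the pitched object (the class name Hyperparameters and its exact behavior are taken from the pitch above, not from an existing Lightning API):

```python
class Hyperparameters:
    """A minimal dict/Namespace hybrid: attribute access, dict init, nesting."""

    def __init__(self, mapping=None, **kwargs):
        data = dict(mapping or {}, **kwargs)
        for key, value in data.items():
            if isinstance(value, dict):
                value = Hyperparameters(value)  # recurse so nested dicts get attribute access too
            setattr(self, key, value)

    def __getitem__(self, key):        # dict-style access
        return getattr(self, key)

    def __contains__(self, key):
        return key in self.__dict__

    def __repr__(self):
        return f"Hyperparameters({self.__dict__})"

# just like Namespace
hparams = Hyperparameters(x=1, y=2)
print(hparams.x)       # 1
# or from a dict, with nesting
hparams = Hyperparameters({"x": 1, "opt": {"lr": 0.01}})
print(hparams.opt.lr)  # 0.01
print(hparams["x"])    # 1
```

A fuller version would also need serialization to/from YAML or argparse for checkpointing, which this sketch leaves out.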
Training time estimation when max_epochs is given
[ "feature", "help wanted", "won't fix" ]
πŸš€ Feature Time estimation of training until max_epochs are reached. Motivation Right now I don't see how long I am training for already and how long it is going to take to train e.g. 500 epochs. I usually have a tqdm progress bar for number of epochs to get an estimate how long my training will run maximally. Pitch If...
prepare_data called multiple times per node for slurm and elastic training
[ "bug", "help wanted" ]
πŸ› Bug Slurm and elastic training create the training processes per node outside of the lightning context. This means that when the fit function calls prepare_data, the assumption that it's only being called on proc 0 is broken and it gets called for each process. This is an issue computational reasons (e.g. downloadin...
Accumulate Metrics for K Batches
[ "question" ]
What is your question? I would like to aggregate metrics for k minibatches before logging to Tensorboard. How can I accomplish this? For example, I would like to average my loss and accuracy for 10 minibatches and report that value to TensorBoard, not just report the statistics on the 10th minibatch. What have you trie...
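One framework-agnostic way to do this is a small accumulator that buffers the last k values and emits their mean on every k-th update; the class below is an illustrative sketch, not a Lightning API:

```python
class MetricAccumulator:
    """Averages a scalar metric over a window of k batches, emitting once per window."""

    def __init__(self, k):
        self.k = k
        self.buffer = []

    def update(self, value):
        """Add one batch's metric; return the window average every k-th call, else None."""
        self.buffer.append(value)
        if len(self.buffer) == self.k:
            average = sum(self.buffer) / self.k
            self.buffer.clear()
            return average
        return None

acc = MetricAccumulator(k=3)
emitted = [acc.update(v) for v in [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]]
print(emitted)  # [None, None, 2.0, None, None, 5.0]
```

In a training_step you would call update(loss.item()) and, whenever it returns a number, put that value into the 'log' dict instead of the raw per-batch loss.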
lightning_mnist_tpu crashes
[ "bug", "accelerator: tpu", "working as intended" ]
πŸ› Bug To Reproduce Run https://colab.research.google.com/drive/1-_LKx4HwAxl5M6xPJmqAAu444LTDQoa3 with VERSION = "1.5" There were other error when trying to run with other versions from ["1.5" , "20200325", "nightly"] neither of them seem to work properly, there's always some sort of bug that pops up. at this one mome...
hparam_overrides not working
[ "bug", "help wanted" ]
πŸ› Bug Error when using hparam_overrides which was recently introduced by: #1797. This is the log: Traceback (most recent call last): File "main.py", line 211, in <module> model = RelationEmbeddingModelLit.load_from_checkpoint( File "/home/ubuntu/anaconda3/envs/py38/lib/python3.8/site-packages/pytorch_lightning...
ddp_cpu crashing on SLURM cluster because of save_spawn_weights()
[ "bug", "help wanted" ]
πŸ› Bug I'm seeing a seemingly similar problem as issues #1335 and #1637 on current master when using ddp_cpu on my universities SLURM cluster. It's failing at a certain epoch (not the same for every job, but in the same range) for all jobs in a job array, on all nodes. But it's not happening when load_spawn_weights() ...
Add stochastic weight averaging
[ "feature", "help wanted" ]
Looks like we need to keep two copies of the model. Let $m_r$ denote the root model and $m_c$ the current model. Then at the end of each epoch $n$, we update the weights of $m_r$ using a weighted average (presumably the standard SWA running mean): $m_r \leftarrow \frac{n \cdot m_r + m_c}{n + 1}$. Anyone interested in implementing? maybe enable as a callback? not sure this needs a flag? @PyTorchLightning/core-...
Bug in Early stopping and `check_val_every_n_epoch`
[ "bug", "help wanted" ]
πŸ› Bug To Reproduce Steps to reproduce the behavior: Install PyTorch and PyTorch-Lightning Cloning master branch Edit the example by removing these lines In the same place, add early_stop = pl.callbacks.EarlyStopping(monitor="val_loss", patience=3, mode="min") trainer = pl.Trainer( max_epochs=hparams.epochs, ...
When using dp mode, only torch.Tensor can be used as the return value of the *_step function.
[ "help wanted", "won't fix" ]
πŸ› Bug To Reproduce Steps to reproduce the behavior: Go to ./pl_examples/basic_examples Run python gpu_template.py --gpus 2 --distributed_backend dp See error Code sample Error is below. Validation sanity check: 0it [00:00, ?it/s]Traceback (most recent call last): File "/root/workdir/pytorch-lightning/pl_examples...
How to log performance metrics in the validation step/loop?
[ "question", "won't fix" ]
❓ How to log performance metrics in the validation step/loop? Before asking: search the issues. Done search the docs. Done What is your question? I would like to log the performance metrics of each validation batch using TestTubeLogger (I am open to using alternative loggers) as well as the loss function of each...
How to save raw predictions?
[ "question", "won't fix" ]
I have it hard coded in my logging callback not to print them, but trainer.test still prints them. I'm wondering the canonical way to save them with wandb_logger or another callback. (It's an NLP summarization task). Thx!
Option to run only the sanity check without training in 'run_pretrain_routine'
[ "feature", "help wanted", "won't fix" ]
🚀 Feature Motivation When pytest-ing a pytorch-lightning based project, the training phase is not essential in some cases (like testing only inference), so adding an argument for running the sanity check without the training phase could be reasonable. Pitch run_pretrain_routine(self, model) => run_pretrain_routine(self, model, r...
Trainer.parse_argparser does not yield sensible default for default_root_dir
[ "bug", "help wanted" ]
πŸ› Bug Using Trainer.parse_argparser returns True for default_root_dir, however, a string is expected. To Reproduce Steps to reproduce the behavior: >>> from pytorch_lightning import Trainer >>> from argparse import ArgumentParser, Namespace >>> parser = ArgumentParser(add_help=False) >>> parser = Trainer.add_argparse_...
Allow passing dict object as batch
[ "feature", "help wanted" ]
🚀 Feature Allow the batch variable to be a (nested) dict object (in training_step(), etc.) Motivation I was porting my code from my own implementation of a Trainer to Pytorch Lightning, as I believe this is an asset for reproducibility and clarity. However I stumbled across a problem: it is not possible for the variab...
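For illustration, a collate-style helper that merges a list of (possibly nested) dict samples into one batched dict; a real DataLoader collate_fn would stack tensors at the leaves, but plain lists keep this sketch dependency-free:

```python
def collate_dicts(samples):
    """Collate a list of (possibly nested) dict samples into one dict of lists.

    Recurses on dict values so arbitrarily nested dicts batch correctly;
    non-dict leaves are simply gathered into a list.
    """
    first = samples[0]
    if isinstance(first, dict):
        return {key: collate_dicts([s[key] for s in samples]) for key in first}
    return list(samples)

batch = collate_dicts([
    {"image": 1, "meta": {"id": "a"}},
    {"image": 2, "meta": {"id": "b"}},
])
print(batch)  # {'image': [1, 2], 'meta': {'id': ['a', 'b']}}
```

Passed as collate_fn to a DataLoader (with the leaf case swapped for tensor stacking), training_step would then receive one dict whose leaves are batched.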
Images not being logged after using auto_lr_find
[ "help wanted", "won't fix" ]
πŸ› Bug When I use the auto LR finder, the images that I logged are no longer being displayed in Tensorboard. The logging works normally when setting it to False or when not specifying it. To Reproduce Steps to reproduce the behavior: Set auto_lr_find = True for the trainer Log image during test/validation step as sel...
Multi GPU training (ddp) gets very slow when using list of tensors in Dataset
[ "bug", "help wanted" ]
πŸ› Bug We are migrating to PyTorch Lightning from a custom implementation using Torchbearer before. Our dataset stores a list of PyTorch tensors in memory, because the tensors are all of different dimensions. When migrating to PyTorch Lightning from a custom implementation, this seems to slow our training down in the m...
val_dataloader participate in training?
[ "question", "won't fix" ]
❓ Questions and Help It's very strange. I just use trainer.fit(net, train_dataloader=data.train_loader, val_dataloaders=data.val_loader) trainer.test(test_dataloaders=data.test_loader) to train, validate and test. I am sure train_loader, val_loader and test_loader are from independent dataset. But the test result turn...
Trainer.from_argparse_args fails on unknown kwargs
[ "help wanted" ]
πŸ› Bug Since **kwargs was removed from Trainer's init in #1820, initializing Trainer objects fails if you have any non Trainer specific arguments in your parser. If this is the expected behavior, the docs should be updated to reflect the workaround I mention below, as a few of them would currently fail. To Reproduce T...
How to access training and validation losses from callbacks?
[ "question" ]
For example, if my validation_epoch_end in the trainer returns {'avg_loss':loss, 'log':logs}, how to get the loss value from a callback method like:def on_validation_end(trainer, pl_module)?
Setting of PYTHONHASHSEED has no effect
[ "duplicate" ]
Problem pytorch-lightning/pytorch_lightning/trainer/seed.py Line 32 in caa9c67 os.environ["PYTHONHASHSEED"] = str(seed) It is not possible to change PYTHONHASHSEED inside the current program. To see this, r...
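The report can be verified in plain Python: hash randomization is fixed when the interpreter starts, so PYTHONHASHSEED only matters if it is in the environment of a fresh process. A small demo, independent of Lightning:

```python
import os
import subprocess
import sys

def hash_in_subprocess(seed):
    # The env var must be present when the interpreter STARTS;
    # that is why setting it from inside a running program has no effect.
    env = dict(os.environ, PYTHONHASHSEED=str(seed))
    out = subprocess.run(
        [sys.executable, "-c", "print(hash('lightning'))"],
        env=env, capture_output=True, text=True, check=True,
    )
    return int(out.stdout)

# With the variable set before startup, string hashing is reproducible:
assert hash_in_subprocess(0) == hash_in_subprocess(0)

# Setting it inside the current process changes nothing for this process:
before = hash("lightning")
os.environ["PYTHONHASHSEED"] = "0"
assert hash("lightning") == before  # the seed was fixed at startup
print("hash seed is fixed at interpreter startup")
```

So a seed_everything-style helper can only make the value visible to child processes it spawns; it cannot reseed the process it runs in.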
How to return a final val loss in trainer?
[ "feature", "question", "priority: 0" ]
What is your question? Most optimisation packages, i.e. Ray Tune / Hyperopt, require the train loop to return a final accuracy for the optimiser to decide what to try next. How do I do this with the Trainer module for Pytorch Lightning? What's your environment? OS: Linux Packaging pip Version 0.7.6
Seeming disparity in calling `on_load_checkpoint` in hpc load/save
[ "question", "won't fix" ]
I noticed that before calling model.on_hpc_save(checkpoint) on L464, .hpc_save() gives the model a chance to add a few things by calling .dump_checkpoint() which later calls model.on_save_checkpoint(checkpoint) on L369. However .hpc_load() calls only model.on_hpc_load(checkpoint) at training_io.py#L502, and does not se...
Support torch.jit.script on LightningModules
[ "feature", "help wanted", "won't fix" ]
πŸš€ Feature There are a number of advantages to converting a model with TorchScript (e.g. static optimizations, better saving / loading, especially into non-Python environments for deployment). However, no LightningModules can be converted using torch.jit.script. Here's a simple example with the error produced (note tha...
Images take a lot of GPU space
[ "question" ]
Right now my dataset is about 15 gigs of png images, with each image taking about 0.15Mb. I'm working on a V100 GPU with 32G of memory. My model takes only about 1G on a GPU, however, I'm not able to use big batches and the maximum that I can use is 32 images per batch. Is this a normal situation? Or there is a possibi...
Separate *_percent_check for each *_dataloader
[ "feature", "help wanted", "won't fix", "discussion" ]
πŸš€ Feature Can we have *_percent_check to be a list too where len(*_percent_check) == len(*_dataloaders)? In case if it is int then it will be same for all the dataloaders passed. Don't know how this can be useful in any case, just a thought. Motivation Pitch For each val_dataloader or test_dataloader we can have an...
ModelCheckpointCallback.on_validation_end does not listen to val_check_interval?
[ "help wanted" ]
Evidence: I have a unit test where I train for 2 epochs with val_check_interval=0.5 and I only get two model checkpoints. Set breakpoints and indeed it is only hit twice. Would expect 4 times.
Simple NCCL example with TCP/IP backend
[ "question", "won't fix" ]
Simple NCCL with TCP/IP example What is your question? I can't find out how to "add all_reduce" so lightning would be usable in a very simple NCCL TCP/IP environment. I have a very simple pytorch program that does exactly that; I can't see how I could integrate this with lightning (even though it might be that there is...
DDP is incompatible with large datasets
[ "help wanted" ]
I'm trying to stream a large data file instead of loading it so it doesn't have to be pickled for multi-processing. However, open-file objects give a TypeError: cannot serialize '_io.TextIOWrapper' object error, so I have to open it within a subprocess instead-- but train_dataloader and val_dataloader methods get calle...
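One pattern that sidesteps pickling open file handles is lazy opening: index the file once in __init__, pickle only the path and line offsets, and open the handle on first access inside each worker process. A dependency-free sketch (a real version would subclass torch.utils.data.Dataset):

```python
import os
import pickle
import tempfile

class LazyLineDataset:
    """Pickles only the path and line offsets; the file handle is opened
    lazily on first access, so it never crosses a process boundary."""

    def __init__(self, path):
        self.path = path
        self._file = None
        self._offsets = []
        with open(path, "rb") as f:  # index line start offsets once, then close
            offset = f.tell()
            line = f.readline()
            while line:
                self._offsets.append(offset)
                offset = f.tell()
                line = f.readline()

    def __len__(self):
        return len(self._offsets)

    def __getitem__(self, index):
        if self._file is None:  # deferred open: happens inside the worker
            self._file = open(self.path, "rb")
        self._file.seek(self._offsets[index])
        return self._file.readline().decode().rstrip("\n")

    def __getstate__(self):  # drop the unpicklable handle when pickling
        state = self.__dict__.copy()
        state["_file"] = None
        return state

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as tmp:
    tmp.write("alpha\nbeta\ngamma\n")

ds = LazyLineDataset(tmp.name)
clone = pickle.loads(pickle.dumps(ds))  # would raise TypeError with an open handle
result = (len(ds), ds[1], clone[2])
print(result)  # (3, 'beta', 'gamma')
for d in (ds, clone):
    if d._file:
        d._file.close()
os.unlink(tmp.name)
```

With num_workers > 0 each DataLoader worker would open its own handle on first __getitem__, so nothing unpicklable ever needs to be serialized.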
Lr_finder based on the validation loss rather than training loss
[ "question", "won't fix" ]
What is your question? For the default LR Range Test in PyTorch lightning, i.e., "lr_finder", is the reported loss curve based on training loss, test loss, or generalization loss? For me, it would be more reasonable to select the learning rate based on the test loss rather than training loss. I noticed that there is a ...
Crash trying to construct module_arguments when module is created in a method
[ "help wanted" ]
Last line here crashes:
import pytorch_lightning as pl

class Module(pl.LightningModule):
    def forward(self):
        return 0

def test_outside():
    a = Module()
    print(a.module_arguments)

class A:
    def test(self):
        a = Module()
        print(a.module_arguments)

    def test2(self):
        test_out...
What the NeptuneLogger records as params or properties
[ "question", "won't fix", "logger" ]
❓ Questions and Help What is your question? Currently, NeptuneLogger seems to be logging property what is passed as a hparams. pytorch-lightning/pytorch_lightning/loggers/neptune.py Line 233 in c3cf33d def log_hyperparams(sel...
Dynamically change optimizer frequency
[ "question" ]
❓ Questions and Help What is your question? I have a WGAN and the ratio between iterations on the discriminator and on the generator is fixed at 5:1. I accomplished this by passing the frequency parameter in the configure_optimizers method res_1 = { 'optimizer': optimizer_d, 'frequency': 5, ...
How to store dataset in shared memory for ddp?
[ "feature", "help wanted" ]
How can I share memory across my processes in ddp? I'm getting OOM errors with 2 gpus and a 6gb dataset. My script would also load faster if it wasn't pickling the dataset and copying to other processes.
Stopping the code along with a graceful shutdown.
[ "question" ]
Is there a way to stop the training when some criteria are satisfied? Something along these lines:
class myCallback(Callback):
    def __init__(self):
        ...
    def on_epoch_end(self, trainer, pl_module):
        if criteria:
            model.stop_training = True  # stops the training; need help here
...
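The mechanism usually suggested is cooperative: the callback sets a stop flag on the trainer and the training loop checks it after each epoch (in recent pytorch-lightning versions this is trainer.should_stop = True, though the exact attribute may vary by version). A toy, framework-free sketch of that flow:

```python
class Callback:
    def on_epoch_end(self, trainer, model):
        pass

class StopOnThreshold(Callback):
    """Sets a flag on the trainer; the loop honors it after the epoch ends."""

    def __init__(self, threshold):
        self.threshold = threshold

    def on_epoch_end(self, trainer, model):
        if trainer.last_loss is not None and trainer.last_loss < self.threshold:
            trainer.should_stop = True  # cooperative stop request

class ToyTrainer:
    def __init__(self, callbacks, max_epochs):
        self.callbacks = callbacks
        self.max_epochs = max_epochs
        self.should_stop = False
        self.last_loss = None
        self.epochs_run = 0

    def fit(self, model, losses):
        for epoch in range(self.max_epochs):
            self.last_loss = losses[epoch]  # stand-in for a real training epoch
            self.epochs_run += 1
            for cb in self.callbacks:
                cb.on_epoch_end(self, model)
            if self.should_stop:  # graceful: finish the epoch, then exit
                break

trainer = ToyTrainer([StopOnThreshold(0.1)], max_epochs=10)
trainer.fit(model=None, losses=[0.5, 0.3, 0.05, 0.02, 0.01, 1, 1, 1, 1, 1])
print(trainer.epochs_run)  # 3
```

Checking the flag between epochs (rather than raising mid-batch) is what makes the shutdown graceful: the current epoch completes and any end-of-epoch hooks still run.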
Add Docs Demo'ing Test-Set Loaders With Trainer.Test()
[ "docs" ]
πŸ“š Documentation I was wondering about the best-practice way to specify a new test loader when using trainer.test(), similar to how we can for trainer.fit(). The code was already written; the docs just needed to be updated to match it.
Transfer learning
[ "feature", "help wanted", "won't fix", "discussion" ]
The problem The standard flow for transfer learning is as follows: Start from a pretrained model and create a new head Freeze all layers but the head Train Unfreeze all (or some) layers Train (with differential learning rates) Let's call each "freeze/unfreeze -> train" step a phase. What can change between phases? T...
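The freeze/unfreeze-per-phase flow above can be sketched without any framework code. The `Param` stub and `set_phase` helper below are hypothetical names standing in for `torch.nn.Parameter` and optimizer param groups:

```python
class Param:
    """Stand-in for a torch parameter with a requires_grad flag."""
    def __init__(self, name):
        self.name = name
        self.requires_grad = True

def set_phase(layers, unfrozen, lrs):
    """Freeze everything, unfreeze the named layers, and return the
    optimizer param groups (with differential lrs) for this phase."""
    groups = []
    for layer_name, params in layers.items():
        trainable = layer_name in unfrozen
        for p in params:
            p.requires_grad = trainable
        if trainable:
            groups.append({"params": params, "lr": lrs[layer_name]})
    return groups

layers = {"backbone": [Param("w1"), Param("w2")], "head": [Param("w3")]}

# Phase 1: train only the newly created head
phase1 = set_phase(layers, unfrozen={"head"}, lrs={"head": 1e-3})
# Phase 2: unfreeze all, with a smaller (differential) lr for the backbone
phase2 = set_phase(layers, unfrozen={"backbone", "head"},
                   lrs={"backbone": 1e-5, "head": 1e-3})
print(len(phase1), len(phase2))  # 1 2
```

In real PyTorch the returned groups would be fed to an optimizer constructor, and each phase would rebuild the optimizer with its own groups.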
Log a warning/ raise an error when lightning replaces an existing Sampler
[ "feature", "help wanted", "good first issue" ]
πŸš€ Feature First of all, thanks for this awesome project! Really enjoying using it! Feature Request: Log a warning or raise a MisconfigurationException when lightning replaces an existing sampler with DistributedSampler. Even though this behaviour is documented, it's not intuitive. Also, if someone has defined a sample...
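The requested behaviour amounts to a small check at dataloader-setup time: if the user attached a non-default sampler that distributed training would silently replace, emit a warning. An illustrative sketch — the sampler classes and helper below are hypothetical, though `replace_sampler_ddp` is the real Trainer flag:

```python
import warnings

class SequentialSampler:  # stands in for torch's default sampler
    pass

class CustomSampler:      # a user-provided sampler
    pass

def replace_sampler_with_warning(sampler, distributed):
    """Return the sampler to actually use, warning if a custom one is dropped."""
    if not distributed:
        return sampler
    if not isinstance(sampler, SequentialSampler):
        warnings.warn(
            "Your custom sampler will be replaced by DistributedSampler; "
            "pass replace_sampler_ddp=False to keep it.", UserWarning)
    return "DistributedSampler"  # placeholder for the replacement object

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    replace_sampler_with_warning(CustomSampler(), distributed=True)
print(len(caught))  # 1
```

A `MisconfigurationException` variant would simply `raise` instead of `warnings.warn`, at the cost of breaking existing code that relies on the silent replacement.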
TestTube version attribute does not perform as documented
[ "bug", "help wanted", "good first issue" ]
πŸ› Bug The docs for TestTube logger state: If version is not specified the logger inspects the save for existing versions, then automatically assigns the next available version. However, this does not happen. To Reproduce >>> from pytorch_lightning.loggers import TestTubeLogger >>> logger = TestTubeLogger('tb_logs', ...
Is it ok to translate the Docs into Chinese?
[ "feature", "help wanted", "docs" ]
πŸš€ Feature Motivation Let more Chinese deep learning enthusiasts get to know PyTorch Lightning.
Error using Hydra
[ "bug", "question" ]
Hi all, Thanks a lot for the awesome library. I'm trying to use Hydra with pytorch-lightning, on the latest release of pytorch-lightning. However, I get the following error after my training step:

    ...
      File "/home/.local/lib/python3.8/site-packages/pytorch_lightning/callbacks/model_checkpoint.py", line 241, in on...
Proposal for help
[]
Hi @williamFalcon ! I saw your project and I am very pleased by the idea. I would like to help you write production-level code. Please let me know how I can help!
Thanks for sharing!
[]
Good repo! Thanks for sharing!
Adding visualization module
[]
Would you consider adding visualization capabilities? For example, a TensorBoard utility to visualize validation curves, scalar changes, etc.
AttributeError: 'Experiment' object has no attribute 'get_meta_copy'
[]
https://github.com/williamFalcon/pytorch-lightning/blob/b8c7baa8acce9e363c33d2580eb1abcca322a211/pytorch_lightning/models/trainer.py#L419 I encountered this problem when I used ddp.
Allow optimizers to alternate at arbitrary intervals
[ "feature", "help wanted" ]
For GANs or similar approaches, we may want optimizer A to step every batch while optimizer B might step every k batches. This feature will enable this behavior. Approach still needs to be scoped out. Open to suggestions here.
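One way to scope this: each optimizer carries its own interval, and on every batch the loop steps exactly those optimizers whose interval divides the batch count. A sketch of that check (names are mine, not the eventual API):

```python
def steps_for_batch(batch_idx, intervals):
    """Return the indices of the optimizers that should step on this batch.
    intervals[i] = k means optimizer i steps every k batches."""
    return [i for i, k in enumerate(intervals) if (batch_idx + 1) % k == 0]

# Optimizer 0 (e.g. discriminator) every batch, optimizer 1 every 3 batches
schedule = [steps_for_batch(b, [1, 3]) for b in range(6)]
print(schedule)  # [[0], [0], [0, 1], [0], [0], [0, 1]]
```

Unlike the consecutive-frequency scheme (run A for m batches, then B for n), this interleaves the optimizers, which may matter for adversarial training dynamics; both behaviours could plausibly hang off the same config.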
AttributeError: 'TTNamespace' object has no attribute 'drop_prob'
[]
Got the following error while running the demo examples:

    python single_gpu_node_template.py --gpus "0,1,2,3"

    Traceback (most recent call last):
      File "single_gpu_node_template.py", line 112, in <module>
        main(hyperparams)
      File "single_gpu_node_template.py", line 33, in main
        model = LightningTemplateModel(hparams)
      File "/home/dg...
Evaluate reduce removal from validation_step
[ "feature", "help wanted" ]
It's possible that the reduce function applied after validation_step makes it hard to save outputs such as videos, audio, etc. We need to evaluate whether it makes sense to remove it.
Consider: ability to set seed
[]
I dunno if this is in scope (feel free to close if not), but when experimenting, setting a fixed seed is handy since you can remove one source of randomness (Karpathy's recipe even includes it as an important beginning step). Basically, being able to set the seeds for the random, numpy, torch, and other common modules ...
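A seed-everything helper along the lines the issue asks for (Lightning later shipped `pl.seed_everything`); the numpy/torch seeding is guarded with try/except so this sketch runs even where those libraries are absent:

```python
import os
import random

def seed_everything(seed: int) -> None:
    """Seed the common sources of randomness; skip libraries that are absent."""
    random.seed(seed)
    os.environ["PYTHONHASHSEED"] = str(seed)
    try:
        import numpy as np
        np.random.seed(seed)
    except ImportError:
        pass
    try:
        import torch
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
    except ImportError:
        pass

seed_everything(42)
a = random.random()
seed_everything(42)
b = random.random()
print(a == b)  # True: the same seed reproduces the same draw
```

Note that `PYTHONHASHSEED` only takes effect for interpreters launched *after* it is set, so full determinism also requires setting it in the environment before Python starts.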
Typo in module's overview
[]
Hi, Thanks for developing this module. There is a small typo in the Lightning module's overview.