| title | labels | bodyText |
|---|---|---|
Type error: 'int' object is not callable when pytorch_lightning is upgraded to version 0.8.1 or above. However, in version 0.7.1 it works normally | [
"question",
"won't fix"
] | Type error: 'int' object is not callable when pytorch_lightning is upgraded to version 0.8.1 or above. However, in version 0.7.1 it works normally
Traceback (most recent call last):
File "/home/zwx/pointNet_family/pointnet.pytorch-master/utils/mytrain.py", line 210, in
triner.fit(model)
File "/home/zwx/anaconda3/envs/P... |
LightningModule and Pytorch Models should really be decoupled. | [
"feature",
"help wanted",
"discussion"
] | Not a feature per se, but a design suggestion for the LightningModule to discuss.
I think a lot of people already use PL in such a way that they build the model in other classes. For example, defining Generator and Discriminator classes and only initializing them within the LightningModule __init__ method. The motivation is to ... |
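A minimal sketch of the pattern this issue describes, with plain nn.Modules built outside Lightning and only composed inside the LightningModule (the Generator/GAN names here are illustrative, not from the issue):

```python
import torch.nn as nn
import pytorch_lightning as pl

class Generator(nn.Module):
    """Plain PyTorch module, defined and testable outside Lightning."""
    def __init__(self, latent_dim: int = 100):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 784), nn.Tanh())

    def forward(self, z):
        return self.net(z)

class GAN(pl.LightningModule):
    def __init__(self):
        super().__init__()
        # the LightningModule only composes pre-built models
        self.generator = Generator()
```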
Incorrect "Saving latest checkpoint" warning | [
"bug",
"help wanted",
"checkpointing"
] | 🐛 Bug
"Saving latest checkpoint..." warning appears regardless of whether a ModelCheckpoint exists or save_last is set to True
pytorch-lightning/pytorch_lightning/trainer/training_loop.py
Lines 167 to 169
in
a71d62d
# Save lat... |
Out of memory when trainer.save_checkpoint("example.ckpt") | [
"bug",
"help wanted",
"checkpointing"
] | The sbatch session crashes and I get the following error when I include trainer.save_checkpoint("example.ckpt") in my code.
/var/spool/slurmd/job220424/slurm_script: line 15: 39865 Killed python roco_train_mlm_lightning.py --run_name debug --precision 16 --mlm_prob 0.15
slurmstepd: error: Detected 3 o... |
Trainer: Separate framework options from backend options | [
"feature",
"help wanted",
"won't fix"
] | 🚀 Feature
Stop mixing framework and backend options in the Trainer's constructor.
Motivation
I find it confusing both as a user and a backend implementer because it's not obvious which options affect which backend.
Pitch
The backend options could be passed as a separate, specific object:
trainer = Trainer( default_... |
distributed training: ModelCheckpoint is receiving bad data | [
"bug",
"help wanted",
"checkpointing"
] | You can reproduce in 4 minutes on 0.9.0.
I tried master and got an unrelated wandb error and gave up trying to reproduce there.
You must be on a machine with multiple GPUs.
git clone git@github.com:huggingface/transformers.git
cd transformers
pip install -e .
pip install -e .[examples] # installs pytorch-lightning==0.8... |
Infinite hang when running `Trainer.test` after `Trainer.fit` with DDP | [
"bug",
"duplicate",
"help wanted",
"working as intended"
] | 🐛 Bug
If I run Trainer.test after running Trainer.fit with distributed_backend='ddp' then the system hangs.
To Reproduce
Steps to reproduce the behavior:
Run the following script
# main.py
import os
from argparse import ArgumentParser
from pl_examples.models.lightning_template import LightningTemplateModel
from pytorc... |
LightningDataModule seems to do some dataloader operations on CPU, which was not the case with LightningModule loader methods | [
"bug",
"help wanted",
"won't fix",
"data handling",
"priority: 2"
] | 🐛 Bug
While using LightningDataModule as lit_model(datamodule=datamodule), the model waits for some time using 1 CPU core before beginning training, and periodically stops training (every 50 train steps): GPU utilization goes to 0% and 1 CPU core is in use. This behaviour continues until training finishes.
To Reproduce
Steps to r... |
When training in GPU the model does not decrease the loss, in CPU it does | [
"bug",
"help wanted"
] | 🐛 Bug
When a toy model is trained on GPU the error rate does not seem to go down, but if I use the CPU it does. I am using version 0.9.
To Reproduce
Steps to reproduce the behavior:
Based on the following model
mean_1 = [0, 0]
cov_1 = [[1, 0], [0, 100]]
mean_2 = [5,-7]
cov_2 = [[16, 70], [1000, 0.1]]
class ToyDatas... |
Log validation metrics before training | [
"question",
"won't fix"
] | ❓ Questions and Help
Is there an easy way to run a full evaluation on the validation set before starting training? I would like this as a kind of benchmark to see where I'm starting from and whether the network learns anything at all.
While #1715 allows running the sanity check on the complete validation set, this does ... |
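One hedged workaround, assuming the num_sanity_val_steps=-1 behavior from #1715 is available in your version: run the whole validation set as the sanity check before the first epoch (as the issue notes, the results may not end up logged).

```python
import pytorch_lightning as pl

# -1 runs the sanity check over the complete validation set before training
trainer = pl.Trainer(num_sanity_val_steps=-1)
trainer.fit(model)
```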
#3598 does not allow monitoring tensors logged via `TrainResult` | [
"bug",
"help wanted"
] | 🐛 Bug
Code sample
@pytest.mark.parametrize("monitor", ["tr_foo", "tr_bar", "va_foo", "va_bar"])
def test(tmpdir, monitor):
model = DeterministicModel()
def training_step(batch, batch_idx):
acc = model.step(batch, batch_idx)
result = TrainResult(minimize=acc)
result.log("tr_foo", torch.... |
automatically copy state-dict when using ddp | [
"feature",
"help wanted"
] | 🚀 Feature
Copy model state-dict from rank 0 process to other processes when using ddp
Motivation
This would mean that the user does not need to worry about initializing models with the same weights
Alternatives
Alternatively lightning could at least check if the weights are the same and if not warn the user / throw a... |
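A sketch of what such a sync could look like with raw torch.distributed; sync_initial_weights is a hypothetical helper for illustration, not a Lightning API:

```python
import torch.distributed as dist

def sync_initial_weights(model):
    # copy rank 0's parameters into every other process
    for param in model.parameters():
        dist.broadcast(param.data, src=0)
```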
Unexpected key(s) in state_dict Error when calling `load_from_checkpoint` | [
"question"
] | ❓ Questions and Help
What is your question?
Unexpected key(s) in state_dict Error when calling load_from_checkpoint
Code
class smallCNN(pl.LightningModule):
def __init__(self, out_class) -> None:
super().__init__()
self.net_1 = nn.Sequential(nn.Conv2d(3, out_channels=8,kernel_size=3,stride=1,padding... |
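A common workaround for mismatched keys, assuming the checkpoint really belongs to a compatible architecture: recent PL versions accept strict=False in load_from_checkpoint and forward it to load_state_dict, so missing/unexpected keys are skipped instead of raising (the out_class value below is a placeholder):

```python
# out_class must match the value used at training time
model = smallCNN.load_from_checkpoint("path/to/checkpoint.ckpt", out_class=10, strict=False)
```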
DDP is not working for me... | [
"help wanted",
"working as intended"
] | 🐛 Bug
I run mnist example code in this repo.
When I executed mnist.py, I could see one GPU's usage at 100% and the process never finished.
After changing ddp to ddp_spawn, it works.
But I want to use ddp!
To Reproduce
Steps to reproduce the behavior:
Run mnist.py.
Environment
script : mnist.py.
You can get the script a... |
Support checkpointing for Sub-Epoch period | [
"feature",
"help wanted"
] | Question
When setting period to a fractional value, checkpointing doesn't trigger correctly. Additionally I think period should default to val_check_interval, if it doesn't already.
To Reproduce
Steps to reproduce the behavior:
Run any model and set checkpoint to run at a fractional value. Only the first checkpoint wil... |
Incorrect progress bar during validation | [
"help wanted",
"working as intended"
] | 🐛 Bug
When running for multiple epochs, the progress bar doesn't look right.
During training the progress bar looks fine (the percentage increases and rewrites over); in the example below this goes on fine until 67% of the epoch. However, during validation, instead of switching to "Validating", the "training" progr... |
How to use a custom Callback with Trainer.from_argparse_args | [
"question"
] | ❓ Questions and Help
Before asking:
search the issues.
search the docs.
What is your question?
I'd like to specify a custom Callback while passing argparse parameters using Trainer.from_argparse_args
Code
For example, I've tried something like this with no success:
trainer = Trainer(callbacks=[CustomCallback()]).from_a... |
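One approach that should work, since from_argparse_args accepts extra keyword arguments that are merged with (and take precedence over) the parsed Trainer args; CustomCallback is the user's own class, defined elsewhere:

```python
from argparse import ArgumentParser
import pytorch_lightning as pl

parser = ArgumentParser()
parser = pl.Trainer.add_argparse_args(parser)
args = parser.parse_args()

# extra kwargs override the parsed argparse values
trainer = pl.Trainer.from_argparse_args(args, callbacks=[CustomCallback()])
```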
Creation of many data module instances incurs RecursionError | [
"bug",
"help wanted"
] | 🐛 Bug
Thank you for a nice framework!
When I repeated hundreds of experiments, each time with a new instance of a single LightningDataModule class, RecursionError was raised. I also found that creating data modules and calling setup() were enough to reproduce the issue.
To Reproduce
Please look at the following code ... |
TrainResult.log doesn't work as log_dict | [
"bug",
"help wanted"
] | 🐛 Bug
When using TrainResult as the return of LightningModule.training_step(), TrainResult.log() cannot add metrics to Trainer.logged_metrics, and this makes Checkpoint.format_checkpoint_name() not work well.
To Reproduce
class Resnet18(pl.LightningModule):
def __init__(self, input_dim=40, numclass=1211, learning_ra... |
EvalResult.write() should use self.logger | [
"feature",
"help wanted",
"won't fix"
] | 🚀 Feature
I quite like using EvalResult.write() to generate a report from the test_step. However, I feel this should be integrated with self.logger
Motivation
By using self.logger for all logging, consistency is maintained - files end up in the same location and it's easy to enable / disable logging. In particular, I'... |
switch from LBFGS to ADAM optimizer during the training loop | [
"question"
] | Is it possible to show how we should write the "configure_optimizers" and "training_step" functions for the following code?
The purpose of the code is to switch the optimizer from LBFGS to Adam when the loss_SUM<0.3
optimizer = optim.LBFGS(model.parameters(), lr=0.003)
Use_Adam_optim_FirstTime=True
Use_LBFGS_optim=True
f... |
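A minimal sketch of one way to do the switch, assuming a Lightning version with manual optimization; compute_loss is a placeholder for the user's loss computation, and the 0.3 threshold comes from the issue's loss_SUM condition:

```python
import torch.optim as optim
import pytorch_lightning as pl

class SwitchingModule(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.automatic_optimization = False  # we call the optimizers ourselves
        self.use_adam = False

    def configure_optimizers(self):
        lbfgs = optim.LBFGS(self.parameters(), lr=0.003)
        adam = optim.Adam(self.parameters(), lr=1e-3)
        return [lbfgs, adam]

    def training_step(self, batch, batch_idx):
        lbfgs, adam = self.optimizers()
        if self.use_adam:
            loss = self.compute_loss(batch)
            adam.zero_grad()
            self.manual_backward(loss)
            adam.step()
        else:
            def closure():
                loss = self.compute_loss(batch)
                lbfgs.zero_grad()
                self.manual_backward(loss)
                return loss

            loss = lbfgs.step(closure)  # LBFGS re-evaluates via the closure
            if loss < 0.3:  # switch condition from the issue
                self.use_adam = True
        self.log("loss", loss)
```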
How to use LBFGS in PyTorch Lightning | [
"question"
] | How to use LBFGS in Lightning? The loss in the following code does not change and it seems that LBFGS is not being used correctly:
def configure_optimizers(self):
optimizer = optim.LBFGS(self.parameters(), lr=self.hparams.lr_LBFGS)
return optimizer
def training_step(self,train_batch,batch_idx):
x,t=... |
Accuracy metric returns tuple of length num_gpus on ddp in 0.9.1rc4 | [
"bug",
"help wanted"
] | code:
vid_acc = self.accuracy_video(video_labels_hat, video_labels)
print(len(vid_acc), vid_acc)
monitor = 0-vid_acc
In 0.9.1rc3, vid_acc is a tensor, but in rc4 it changes to a tuple. I want to use -vid_acc as the monitor, and I think it should be a tensor.
Using rc4, in macOS CPU mode, it's a ... |
remove value field from Result objects 'meta' key | [
"feature",
"help wanted",
"won't fix"
] | 🚀 Feature
remove value field from Result objects 'meta' key
Motivation
The value field in the Result obj's meta data is
a duplicate of the raw data in the Result obj,
not being used,
not updated in the gathered results presented to the user in [training,validation,test]_epoch_end.
Especially this last point can l... |
Fix exception chaining | [
"feature",
"help wanted"
] | I recently went over PyTorch and Detectron2, suggesting a fix in the way that Python 3's exception chaining is used.
As described in detail in this article, exception chaining (PEP 3134) can be used to make exceptions more user-friendly, and in that case, the syntax raise new_exception from old_exception needs to be us... |
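For illustration, the PEP 3134 pattern the article recommends; the message text and import here are made up, not taken from the suggested patches:

```python
try:
    from apex import amp
except ImportError as err:
    # "from err" keeps the original traceback attached to the new exception
    raise ImportError("Requested 16-bit precision but apex is not installed.") from err
```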
Support launching ddp job as module python -m ... | [
"feature",
"help wanted",
"good first issue",
"distributed"
] | 🚀 Feature
Motivation
Some users wish to launch their training program as a module with python -m some.module.py
Pitch
We should evaluate whether this is possible for ddp and support this option when possible.
We need to strip the -m argument and append it to the command with which we launch the child processes.
Altern... |
Rename row_log_interval and log_save_interval | [
"feature",
"priority: 0"
] | row_log_interval -> log_every_n_steps
log_save_interval -> flush_logs_every_n_steps |
Missing attribute "training_step_output_for_epoch_end" | [
"bug",
"help wanted"
] | I used the documentation way of stopping the training (https://pytorch-lightning.readthedocs.io/en/latest/early_stopping.html#enable-early-stopping-using-callbacks-on-epoch-end).
If the on_batch_start method returns -1 at the very beginning of an epoch, the titled AttributeError exception is raised.
The problem is in training_loop.py... |
How to set default EarlyStopping patience? | [
"question"
] | Is it possible to set the default EarlyStopping patience without creating a custom early stopping callback?
Instead of writing:
trainer = pl.Trainer(early_stop_callback=EarlyStopping(patience=XXX))
I'd like to overwrite the default patience directly and then use EvalResult(early_stop_on=...). |
Load args from .env files or some type of config file. | [
"feature",
"help wanted",
"won't fix"
] | 🚀 Feature / Motivation
I often run my training scripts on different systems with different setups. It would be nice to be able to read args from a configuration file. For example, if on one node I have a default directory somewhere other than root, I just put it in the config file and the trainer loads the args from the file... |
Checkpoints based on validation_step or validation_epoch_end | [
"question",
"won't fix"
] | Somewhere I found an example for
def validation_step(self, batch, batch_idx):
....
return {'val_loss': loss, ....}
def validation_epoch_end(self, batch):
avg_val_loss = torch.tensor([ x['val_loss'] for x in batch] ).mean()
.....
return {'val_loss': avg_val_loss,....... |
How to store test_step outputs to file? | [
"question",
"won't fix"
Is there an approach to save all the outputs from test_step to one file?
def test_step(self, batch, batch_idx):
x1, x2 = batch["x1"], batch["x2"]
r1, r2 = self(x1, x2)
test_loss = self.loss_fn(predict)
test_mrr = self.mrr(r1, r2)
return {'test_loss': test_loss, 't... |
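One simple approach, sketched under the 0.x API where test_epoch_end receives the list of test_step outputs; the filename is arbitrary:

```python
import torch

def test_epoch_end(self, outputs):
    # `outputs` is the list of dicts returned by every test_step
    torch.save(outputs, "test_outputs.pt")
    avg_loss = torch.stack([o["test_loss"] for o in outputs]).mean()
    return {"avg_test_loss": avg_loss}
```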
Change the tensorboard run-names to | [
"question"
] | What is your question?
What is the easiest and most pythonic way to change the Tensorboard run names from "version_{n}" to something like "{hostname}{time}{lr}_{batch_size}" etc.? Do I have to manually create a tensorboard logger and send it to the trainer? What happens with the checkpoint folder and stuff?
What's your... |
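Creating the logger manually and passing it to the Trainer is the usual way; TensorBoardLogger's version argument controls the run-folder name, and by default the checkpoint directory should follow the logger's path. A sketch, where lr and batch_size are placeholders:

```python
import socket
import time
import pytorch_lightning as pl
from pytorch_lightning.loggers import TensorBoardLogger

version = f"{socket.gethostname()}_{time.strftime('%Y%m%d-%H%M%S')}_lr{lr}_bs{batch_size}"
logger = TensorBoardLogger(save_dir="lightning_logs", name="my_model", version=version)
trainer = pl.Trainer(logger=logger)
```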
AccuracyMetric automatically does ReduceOp.SUM in test_epoch_end | [
"bug"
] | my code:
def test_step(self, batch, batch_idx):
...
# self.accuracy_video = Accuracy()
vid_acc = self.accuracy_video(video_labels_hat, video_labels)
print("test_step, ", vid_acc)
return {'test_loss': loss, "test_pacc": part_acc, "test_vacc": vid_acc}
def test_epoch_en... |
Argparse usability issues | [
"won't fix",
"discussion"
] | 🚀 Feature
Improve the usability of the argparse functionality by allowing the use of extensions of argparse.
Motivation
Currently in pytorch-lightning argparse is used as parser = ArgumentParser(parents=[parent_parser]), which prevents many behaviors that users might want. The most basic example: someone implements a train... |
Hydra Hyperparameter Optimization | [
"bug",
"help wanted"
] | 🐛 Bug
After updating PL from 0.8.x to 0.9.0, I started to face the following error when passing a configuration file via hydra:
Running in fast_dev_run mode: will run a full train, val and test loop using a single batch
[2020-09-29 17:24:44,547][lightning][INFO] - Running in fast_dev_run mode: will run a full train, val... |
Avoid storing a list of outputs to compute aggregated metrics at the end of the epoch. | [
"question"
] | Hi,
to my understanding, the current way of logging an aggregated metric at the end of an epoch requires implicitly to store the outputs of all steps in the epoch. There are tasks like semantic segmentation requiring to accumulate a confusion matrix over steps, e.g. in order to compute the mIoU metric. However, storing... |
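A sketch of the accumulate-in-place alternative for the segmentation case described, using hook names from the 0.x/1.x API: only a num_classes × num_classes buffer is kept instead of per-step outputs.

```python
import torch
import pytorch_lightning as pl

class SegModel(pl.LightningModule):
    def __init__(self, num_classes: int):
        super().__init__()
        self.num_classes = num_classes
        # running confusion matrix instead of a list of step outputs
        self.register_buffer("confmat", torch.zeros(num_classes, num_classes, dtype=torch.long))

    def validation_step(self, batch, batch_idx):
        x, target = batch
        preds = self(x).argmax(dim=1)
        idx = target.view(-1) * self.num_classes + preds.view(-1)
        self.confmat += torch.bincount(idx, minlength=self.num_classes ** 2).view(
            self.num_classes, self.num_classes
        )

    def validation_epoch_end(self, outputs):
        tp = self.confmat.diag().float()
        union = self.confmat.sum(0) + self.confmat.sum(1) - self.confmat.diag()
        iou = tp / union.clamp(min=1).float()
        self.log("val_miou", iou.mean())
        self.confmat.zero_()
```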
test_step hangs after one iteration when on multiple GPUs | [
"bug",
"help wanted",
"distributed"
] | 🐛 Bug
When running the same code on a computer with 1 gpu, test_step runs as normal and logs what it should.
However on a node with 4 GPUs, it hangs after 1 iteration!
Code sample
images, masks = batch["image"], batch["mask"]
if images.shape[1] != self.hparams.n_channels:
raise AssertionError(
... |
Support best model checkpoint path even if save_top_k=-1 | [
"feature",
"help wanted"
] | 🚀 Feature
Support best model checkpoint path even if save_top_k=-1
Motivation
For the model checkpoint callback, the callback could still track the best checkpoint path even if save_top_k=-1. The only case where we couldn't track the best checkpoint is if the monitor metric isn't specified. What do you think?
Pitch
Up... |
RuntimeError: Input and hidden tensors are not at the same device, found | [
"bug",
"help wanted"
] | 🐛 Bug
I train an LSTM for character-level text generation. At first I initialize the hidden and cell states with zeros using torch.zeros. Unfortunately these tensors are assigned to the CPU by default, so I get the following error while training
RuntimeError: Input and hidden tensors are not at the same device, found input tensor at ... |
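The usual fix is to create the states on the same device as the input (or use self.device inside a LightningModule). A sketch; the attribute names are assumptions:

```python
import torch

def forward(self, x, hidden=None):
    if hidden is None:
        # allocate initial states on the same device as the incoming batch
        h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size, device=x.device)
        c0 = torch.zeros_like(h0)
        hidden = (h0, c0)
    return self.lstm(x, hidden)
```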
How to use more than one optimizer at each step (jointly train multiple modules within one model)? | [
"question",
"won't fix"
] | ❓ Questions and Help
What is your question?
I have a model which consists of two blocks, let's call them first_module and second_module.
Code (simplified)
Training Step
def training_step(self, batch, batch_idx, optimizer_idx):
out = self.first_module(batch)
out = self.second_module(out)
loss = criterion(ou... |
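If both blocks should be updated jointly on every step, a single optimizer over both parameter sets avoids the multi-optimizer machinery entirely. A sketch:

```python
import itertools
import torch.optim as optim

def configure_optimizers(self):
    # one optimizer stepping both sub-modules together
    params = itertools.chain(self.first_module.parameters(), self.second_module.parameters())
    return optim.Adam(params, lr=1e-3)
```

Since both blocks are registered sub-modules, returning optim.Adam(self.parameters(), lr=1e-3) would be equivalent here.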
Current batch loss and mean reduced loss | [
"question"
] | Over training_step and validation_step I am logging the losses (train_loss and val_loss) and metrics (train_mrr and val_mrr), both in the logger and in the progress bar:
def training_step(self, batch, batch_idx):
x1, x2 = batch["x1"], batch["x2"]
r1, r2 = self(x1, x2)
train_loss = self.loss_... |
type object got multiple values for keyword argument 'loss' | [
"bug",
"help wanted"
] | 🐛 Bug
The error appears when TrainResult has the minimize param set and a loss log added at the same time with prog_bar=True
Code sample
def training_step(self, batch, batch_idx):
loss = self(batch)
result = pl.TrainResult(minimize=loss)
result.log("loss", loss, prog_bar=True)
return result
... |
Tensorboard: logs either don't appear or have prepended 'epoch_' names | [
"question"
] | I have two kinds of problems with Tensorboard.
Either logs don't appear when I create them inside training_step.
Code:
def training_step(self, batch, batch_idx):
type = "train"
loss, acc, y_true, y_pred, name = self.step(batch)
result = pl.TrainResult(minimize=loss)
result.log(type + ... |
on_step logging not working as expected/described | [
"docs"
] | 🐛 Bug
When training a model with the MLFlowLogger, on_step logging in training_step() does not appear to log metrics as frequently as expected. See complete example below.
To Reproduce
Code sample
Here is a complete working example that generates the described behavior. This example is derived from the code in Light... |
Handling AttributeErrors while cleaning params namespace while setting up fit | [
"bug",
"help wanted"
] | 🐛 Bug
is_picklable in parsing.py does not handle the AttributeError thrown by pickle.dumps() - specifically, the following:
AttributeError: Can't pickle local object 'ArgumentParser.__init__.<locals>.identity'
To Reproduce
Here's a stack trace:
Traceback (most recent call last):
File "/home/chirag/miniconda3/envs/ml/... |
auto_scale_batch_size doesn't use 'binsearch' | [
"bug",
"docs"
I tried the following and it's still using power:
#####################
# 1. Init Model
#####################
model = LitAutoEncoder()
#####################
# 2. Init Trainer
#####################
trainer = pl.Trainer(auto_scale_batch_size='binsearch')
#####################
# 3. Tune
#####################
trainer.f... |
Checkpointing and Early Stopping fail to work correctly when increasing number of train batches (in some cases) | [
"bug",
"help wanted",
"priority: 0"
] | 🐛 Bug
(Preface: I created a complete minimal example for this bug report that unfortunately didn't end up reproducing the behavior, but I still think it might be useful to mention nevertheless.)
The symptom is that when I leave everything else the same but increase the number of my training batches from 1000 to... |
User Deprecation Warning thrown even if user does not override `validation_epoch_end` | [
"bug",
"help wanted"
] | 🐛 Bug
From #3789 additional context. If the user does not override validation_epoch_end a warning is still thrown reading: UserWarning: The validation_epoch_end should not return anything as of 9.1.to log, use self.log(...) or self.write(...) directly in the LightningModule
I tracked this down to this snippet:
... |
Fix docs for auto_lr_find | [
"docs",
"priority: 0"
] | This is the correct way to run
trainer = pl.Trainer(auto_lr_find=True)
lr_finder = trainer.tuner.lr_find(model) # Run learning rate finder
fig = lr_finder.plot(suggest=True) # Plot
fig.show()
model.hparams.learning_rate = lr_finder.suggestion() |
Access metrics in custom callbacks | [
"question"
] | ❓ Questions and Help
I have found it useful/helpful to sometimes access metrics in custom callbacks. In v0.9.0 this works using something like this:
def training_step(self, batch, batch_idx):
return {"loss": self._step(batch)}
def validation_step(self, batch, batch_idx):
return {"val_loss": self._step(batch)}
... |
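A minimal sketch of the callback side: the latest values logged or returned by the steps are collected in trainer.callback_metrics, which a custom callback can read.

```python
from pytorch_lightning.callbacks import Callback

class MetricPrinter(Callback):
    def on_validation_end(self, trainer, pl_module):
        # latest logged/returned metrics are collected here
        val_loss = trainer.callback_metrics.get("val_loss")
        print("val_loss:", val_loss)
```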
ModelCheckpoint not picking up metrics logged from lightning module | [
"bug",
"help wanted"
] | 🐛 Bug
The Model Checkpoint raises a misconfiguration error because metrics logged from validation epoch end are mysteriously unavailable to the callback
To Reproduce
from typing import Optional
import torch
from pytorch_lightning import Trainer, LightningModule
from pytorch_lightning.callbacks import ModelCheckpoint
... |
Calling module.log(...) within a callback fails | [
"bug",
"feature"
] | 🐛 Bug
Calling pl_module.log(...) within a Callback fails, even though this is recommended by the documentation here: https://pytorch-lightning.readthedocs.io/en/latest/loggers.html#logging-from-a-callback
Error
File "my_callback_file.py", line XX, in on_validation_epoch_end
pl_module.log_dict(my_metrics_dict)
... |
PyTorch Lightning throws error when used on TPU | [
"help wanted",
"waiting on author"
] | I'm having this error just after the validation sanity check
GPU available: False, used: False
TPU available: True, using: 8 TPU cores
training on 8 TPU cores
INIT TPU local core: 0, global rank: 0 with XLA_USE_BF16=None
/usr/local/lib/python3.6/dist-packages/pytorch_lightning/utilities/distributed.py:37: UserWarning: ... |
Deprecate EvalModelTemplate in favor of BoringModel and another simple model that actually learns | [
"feature",
"help wanted",
"good first issue",
"ci",
"design"
] | 🚀 Feature
correct the actual EvalModelTemplate to use the new API unless it is testing other purposes or a deprecated API
Motivation
better testing of the actual API |
use docker image for GH action testing | [
"feature",
"help wanted",
"good first issue",
"ci"
] | 🚀 Feature
Check options to use a docker image to run Conda testing with our base images
https://stackoverflow.com/questions/57549439/how-do-i-use-docker-with-github-actions
Motivation
setting up Conda for each run takes about 8 min
Incorrect batch size tracking in training and validation steps | [
"bug",
"help wanted"
] | 🐛 Bug
Batch sizes are tracked both in training and evaluation loops to reduce the Train/Eval results on epoch end.
In both cases len(batch) is used to find the current batch_size, which is incorrect; for example, the MNIST loader will return 2 since batch = (batch_data, batch_target).
Training loop:
pytorch-lig... |
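To make the failure mode concrete, a minimal illustration (not the library code):

```python
images, targets = next(iter(train_loader))  # a typical (data, target) batch
batch = (images, targets)

len(batch)      # -> 2: the number of elements in the tuple
images.size(0)  # -> the actual batch size
```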
NCCL error when using ddp with 2 gpus | [
"bug",
"priority: 0",
"distributed"
] | 🐛 Bug
I try to run PyTorch Lightning using ddp with 2 GPUs. Running with one GPU works fine. Using fp16 or not results in the same error. See the stacktrace at the end of the post. I also tried ddp2 and dp, but both of those fail with a different error.
To Reproduce
Not sure. Let me know what I can do ... |
Unusual printing statements after 90% epoch completion | [
"bug",
"help wanted"
I've encountered these unusual print statements while training.
It seems that this printing starts when the epoch is 90% complete, and both loss and train_loss are the same until 100% completion.
This behaviour is the same on TPUs as well as on GPUs.
2020-10-05 11:32:55.426605: I tensorflow/stream_executor/platform/default/dso_loade... |
Consider making the docs default to the latest stable version instead of the latest | [
"docs"
] | 📚 Documentation
Hi,
I just started using PyTorch Lightning and got a bit confused by the fact that pytorch-lightning.readthedocs.io defaults to the latest version (including release candidates), while running pip install pytorch-lightning (without specifying a version) will (correctly) default to the latest stable ver... |
Enable .write and .write_dict from LM | [
"feature"
] | Enable .write and .write_dict from LM |
Convert step_ and epoch_ prefixes to postfix | [
"feature"
] | Convert step_ and epoch_ prefixes to postfix |
enable passing in connectors | [
"feature",
"won't fix"
] | apex, slurm, etc can all be configured via connectors
Trainer(connectors=[...])
Alternatively, call them plug-ins
Trainer(plugins=[...]) |
enable test loop in fast_dev_run | [
"feature",
"won't fix"
] | check the test step during fast_dev_run |
merge new metrics API | [
"feature",
"priority: 0"
] | |
LightningModule's to_disk should use fsspec to write results | [
"feature",
"help wanted"
] | 🚀 Feature
use fsspec here to support more storage backends besides local disk:
pytorch-lightning/pytorch_lightning/trainer/supporters.py
Lines 138 to 165
in
cea5f1f
def to_disk(self):
... |
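For illustration, what an fsspec-backed write could look like (a sketch, not the proposed patch):

```python
import fsspec
import torch

def to_disk(outputs, path: str):
    # fsspec resolves local paths as well as s3://, gs://, hdfs://, ...
    with fsspec.open(path, "wb") as f:
        torch.save(outputs, f)
```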
[Tensorboard] Storing arrays, lists and more complicated structures | [
"question"
Quick question.
Is there any way to store arrays, lists, or more complicated structures in Tensorboard, beyond just scalars, images, grids, etc.? Or will I need to implement a simple text-file saving method on my own? |
Accessing logger's data at the end of a training | [
"question"
I'd like to know whether it is possible to access all the logs that were created during the training process, at its end. I'd like to do something with the data. How do I access the logger's data?
Is it even possible with the code that exists or should I create a new data structure in my class to store it along with the logger's actions?
I'... |
Multi-GPU training: learning rate is all zero in Tensorboard | [
"bug",
"help wanted",
"3rd party"
] | 🐛 Bug
I used LearningRateLogger to log the learning rate, but in Tensorboard the learning rate is all zero.
To Reproduce
Steps to reproduce the behavior:
install 0.10.0rc1
set gpus in Trainer to more than 1
use lr_logger = LearningRateLogger(logging_interval='step') to log learning rate
Code sample
pseudocode
class BartFine... |
copy badges to release package | [
"feature",
"help wanted",
"good first issue"
] | 🚀 Feature
Parse the Readme and replace generated badges with downloaded ones.
Process in setup.py:
1. parse all badges from online CI and save them as png (svg is problematic, does not work for all platforms)
2. replace badges in Readme with the downloaded ones
Motivation
PyPI page does not work well with generated badges an... |
UserWarning for testing_epoch_end in 0.9.1rc4 | [
"bug",
"help wanted"
] | Just informing about the user warning I was shown:
YYY\anaconda3\envs\pt_cpu\lib\site-packages\pytorch_lightning\utilities\distributed.py:37: UserWarning: The testing_epoch_end should not return anything as of 9.1.to log, use self.log(...) or self.write(...) directly in the LightningModule warnings.warn(*args, **kw... |
Broken links in README.md. | [
"docs"
] | 📚 Documentation
Hello,
There are broken links in README.md in the Dueling-DQN and Reinforce sections. |
Plotting multiple metrics in a single graph | [
"feature",
"help wanted"
] | 🚀 Feature
Can we have multiple metrics plotted on the same graph in Tensorboard logging done by lightning?
That is, plotting the dictionary values returned in log_dict on the same graph.
Motivation
Pitch
Alternatives
Additional context |
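With the TensorBoardLogger this is already reachable through the raw SummaryWriter. A sketch; train_loss and val_loss are placeholders:

```python
# self.logger.experiment is the underlying SummaryWriter
self.logger.experiment.add_scalars(
    "loss", {"train": train_loss, "val": val_loss}, global_step=self.global_step
)
```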
limit builds for Docs | [
"feature",
"ci"
] | 🚀 Feature
limit builds for PRs which are strictly related to docs, so skip:
Conda & Dockers & Full testing [GH actions]
TPU testing [CircleCI]
GPU testing [Drone CI]
Motivation
lower the resources requirements
Additional context
Btw, if you need to skip GPU testing, use the magic word in the commit or PR name: [CI SKIP]
https://doc... |
non intuitive batch_size in ddp | [
"bug",
"help wanted"
] | Is there a way in PyTorchLightning to set your desired batch size, say 512 and then have the effective batch size per processor (which is normally batch_size*num_gpus) be computed automatically? Right now your effective batch size scales with the number of gpus so these calculations must be computed outside of pytorch... |
mlflow logger complains about missing run_id | [
"bug",
"help wanted"
] | 🐛 Bug
When using the MLflow logger, the log_param() function requires run_id
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-23-d048545e1854> in <module>
9 trainer.fit(model=experiment,
10 ... |
How to break a single large input among different GPUs? | [
"question",
"won't fix"
] | Please check out more details here.
OS: [Ubuntu 18.04]
Packaging [pip]
PyTorch Version [e.g. 1.6] |
Metrics return unexpected results in 0.10.0rc1 | [
"bug",
"help wanted"
] | 🐛 Bug
There is a chance I don't understand well how it works, but the sklearn, functional, and tensor metrics all seem not to behave as expected, specifically precision and recall
To Reproduce
I used a small dummy example for y_true and y_pred for a 2 class classification problem
Code sample
import pytorch_lightning.metri... |
How trainer figures out number of batches per epoch. | [
"question",
"won't fix"
@JorDikk and I recently found out that Trainer figures out the total number of batches per epoch through the Sampler __len__ and not the Dataset __len__.
While in most cases the size of the sampler corresponds to the total number of indices in the dataset (train and val),
we were using a hierarchical dataset, where each i... |
The use of save_hyperparameters() is currently confusing (due to name and docs) | [
"feature",
"docs",
"discussion",
"design"
] | 📚 Documentation & function name change
The following documentation page is relevant here: https://pytorch-lightning.readthedocs.io/en/stable/weights_loading.html
The use of self.save_hyperparameters() is currently confusing for the following 3 reasons:
The role of this function is unclear. In the documentation this f... |
what's the default checkpoint monitor in 0.10.0? | [
"question"
] | what's the default checkpoint monitor in 0.10.0? loss or val_loss returned in validation_step? |
Add Aim logger | [
"help wanted",
"won't fix",
"working as intended",
"logger"
] | 🚀 Feature
Implement AimLogger to integrate with Aim.
Motivation
Gor from Aim here. I am helping build Aim, an open source project that helps to easily track and explore 100s of AI experiments in minutes. I figured it would be good for both parties to integrate Aim with PL.
Solution/Pitch
It appears the following need... |
Mismatch between docstring and code regarding when `on_load_checkpoint` hook is called | [
"bug",
"help wanted",
"docs"
] | 🐛 Bug
The docstring of the on_load_checkpoint hook says that it is called before trying to load_state_dict:
pytorch-lightning/pytorch_lightning/core/saving.py
Lines 203 to 206
in
cea5f1f
def on_load_checkpoint(self, checkpoint... |
A model interpretability feature - visualize losses and data samples | [
"feature",
"help wanted",
"won't fix"
] | 🚀 Feature
An interpretability feature that allows you to log top model losses and visualize examples with the losses.
Motivation
To better understand the workings of a trained model, it can be useful to analyze the examples on which your losses are doing well/badly. It would work particularly well with data that's interp... |
tensorboard two value every step | [
"bug",
"help wanted"
] | 🐛 Bug
two loss values logged every step
To Reproduce
https://colab.research.google.com/drive/1d7a3fwzZOQobFk58QXEqmzziQf1-GJYS?usp=sharing
Code sample
https://colab.research.google.com/drive/1d7a3fwzZOQobFk58QXEqmzziQf1-GJYS?usp=sharing
import os
import torch
from torch.utils.data import Dataset
from pytorch_lightning impor... |
on_train_epoch_end and on_epoch_end are out of order | [
"bug",
"help wanted"
] | 🐛 Bug
Consider the following order in which the LightningModule hooks are called from #2816 (I have confirmed that in PytorchLightning version 0.10 this is still an issue):
on_epoch_start
on_train_epoch_start
on_validation_start
on_validation_epoch_start
on_validation_epoch_end
on_validation_end
on_epoch_end
on_train_... |
Can't reproduce logistic regression example | [
"bug",
"help wanted"
] | 🐛 Bug
I am unable to run the logistic regression example. At training, I get an error which ends in:
/usr/local/lib/python3.6/dist-packages/pl_bolts/models/regression/logistic_regression.py in validation_step(self, batch, batch_idx)
81 x = x.view(x.size(0), -1)
82 y_hat = self(x)
---> 83 ... |
Broken link in Documentation | [
"bug",
"docs"
] | 📚 Documentation
The Module Index link at the bottom of the main page of the Lightning documentation is broken. This seems to be because the make html command does not create a py-modindex.html file (not sure why).
If the Module Index page is not required, a solution is to remove * :ref: modindex from the index.rst file... |
log_save_interval doesn't have the intended effect | [
"help wanted",
"docs"
] | 🐛 Bug
I'm using the MLFlowLogger class for logging, and initially, I noticed my training loop slowed down immensely when changing the tracking URI from my local file system to a remote mlflow server (which makes sense). To fix this, I saw in the pytorch lightning docs that log_save_interval can be used to change the f... |
Bug in GAN example | [
"help wanted"
] | Bug in https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pl_examples/domain_templates/generative_adversarial_net.py
When I run python generative_adversarial_net.py
I get
Traceback (most recent call last):
File "generative_adversarial_net.py", line 218, in <module>
main(hparams)
File "generative... |
RAM not correctly released when training a pl module multiple times | [
"help wanted",
"won't fix"
] | 🐛 Bug
When I use the pl.Trainer multiple times (for instance when doing cross-validation), it seems that the RAM is not completely released, as memory usage increases over runs in a strange way.
To Reproduce
Steps to reproduce the behavior:
Define a pl module and a pl.trainer inside a function, let's call i... |
specifying the tpu_core speeds up TPU training | [
"feature",
"help wanted"
] | 🐛 Bug
I am getting a huge time difference between training a model on a specific tpu core tpu_cores=[1] and training a model on just 1 tpu core tpu_cores=1. What's the reason for that? Aren't both the conditions the same with just the difference that I am assigning a specific tpu_core in the first case and assigning ... |
Data parallel (dp) distributes the loss computation across devices separately, unlike pytorch | [
"help wanted"
] | [please remove] |
Error using TrainsLogger with Trainer in 'ddp' | [
"help wanted",
"won't fix"
] | 🐛 Bug
Got the following error when using TrainsLogger with 'ddp' backend during run_pretrain_routine.
Doesn't happen with 'dp' backend
The attribute self._metrics_to_agg still exists up to the point where spawn._wrap is called
-- Process 0 terminated with the following error:
Traceback (most recent call last):
File... |
Broken link | [
"help wanted",
"good first issue",
"docs"
] | In the logger documentation, where it says "Read more in the Experiment Logging use case", the link is broken. |
Support DictConfig | [
"bug",
"feature",
"help wanted",
"priority: 0"
] | We need to add DictConfig support for OmegaConf @Borda to the auto hparam save |
DDP Trainer's `test` method -> TypeError: can't pickle SwigPyObject objects | [
"help wanted"
] | I call
my code (roughly)
module = pl.Module(...)
trainer = pl.Trainer(module, distributed_backend='ddp', n_gpu=2,...)
trainer.fit() # works fine uses all GPUs
trainer.test(model) # code works only with n_gpu=1 or n_gpu=0.
Traceback:
trainer.test(model)
../miniconda3/envs/nb/lib/python3.7/site-packages/pytorch_light... |
Docs are missing the anchor links | [
"help wanted",
"good first issue",
"docs"
] | 📚 Documentation
As pointed out by @oplatek the docs suddenly miss the anchor button that allows one to generate a link that points to a particular resource within a page.
This was working before, but now there is a 404 when accessing some assets (js, css, ...)
EDIT: seems not related to the 404 seen in the JS console. |
Early Stopping stops too early when using SLURM | [
"help wanted"
] | ๐ Bug
I have a really strange bug where the Early Stopping callback seems to fire too early, but only when using my uni's Slurm cluster. When I train the same model on my laptop locally this does not happen. Sadly I can't run the code directly on the login node to see if it happens on all of their systems or only when Sl... |
Trainer should run the test loop with the best weights when ModelCheckpoint is used | [
"feature",
"help wanted"
] | 🚀 Feature
Motivation
I noticed that even when ModelCheckpoint is used, Trainer by default runs the test loop with the last weights, not the best weights saved by ModelCheckpoint. I believe the sensible default here is to run the test loop with the best weights saved by ModelCheckpoint.
Pitch
Now that ModelCheckpoin... |
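Until such a default exists, one explicit way to get this behavior, assuming a ModelCheckpoint instance was passed to the Trainer (MyModel is a placeholder for the user's LightningModule):

```python
# reload the best checkpoint tracked by the callback, then test it
best_model = MyModel.load_from_checkpoint(checkpoint_callback.best_model_path)
trainer.test(best_model)
```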