| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
ets-labs/python-dependency-injector
|
asyncio
| 146
|
Change name of version variable to make it follow PEP8
|
According to PEP 8, `VERSION` should be renamed to `__version__`.
|
closed
|
2016-12-04T11:15:18Z
|
2016-12-04T11:26:38Z
|
https://github.com/ets-labs/python-dependency-injector/issues/146
|
[
"enhancement",
"docs"
] |
rmk135
| 0
|
python-restx/flask-restx
|
flask
| 340
|
Would like to have major versions pinned of important packages to avoid breaking changes
|
**Is your feature request related to a problem? Please describe.**
The problem is that we had version 0.3.0 of this library in our Python microservices and it suddenly broke without any changes on our side. This has been fixed in the 0.4.0 release, but it would be better if we could adhere to software best practices (semantic versioning) and make a new *minor* release when changing Flask or werkzeug major versions.
**Describe the solution you'd like**
I would like all packages in install.pip to have their major version pinned.
**Describe alternatives you've considered**
Not available.
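For illustration, a pinned install.pip might look like this sketch (the package names are from the issue, but the version bounds are ours, not the project's actual requirements):

```
# hypothetical install.pip with major versions pinned
flask>=1.1,<2.0
werkzeug>=1.0,<2.0
```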
|
open
|
2021-06-15T13:55:34Z
|
2021-06-15T13:55:34Z
|
https://github.com/python-restx/flask-restx/issues/340
|
[
"enhancement"
] |
gro1m
| 0
|
ymcui/Chinese-LLaMA-Alpaca
|
nlp
| 172
|
🔥 Could you provide a tutorial and code for training on custom data?
|
* Thanks to the author for sharing. Going from the original llama to a model that supports Chinese is clearly a fairly complex process, and a detailed training tutorial would be very welcome.
* Beyond that, it would be even better to have a tutorial on further training on top of your Chinese model with domain-specific data (e.g. medical or legal), to obtain a more specialized "dedicated" model for that domain.
|
closed
|
2023-04-18T03:27:21Z
|
2023-05-10T00:52:45Z
|
https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/172
|
[] |
yfq512
| 6
|
PaddlePaddle/PaddleHub
|
nlp
| 1,569
|
Error when installing paddle-hub
|
```
Traceback (most recent call last):
File "/usr/local/python3.7.0/bin/hub", line 5, in <module>
from paddlehub.commands.utils import execute
File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddlehub/__init__.py", line 35, in <module>
from paddlehub.utils.paddlex import download, ResourceNotFoundError
File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddlehub/utils/paddlex.py", line 19, in <module>
from paddlehub.server.server import module_server, CacheUpdater
File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddlehub/server/__init__.py", line 17, in <module>
from paddlehub.server.git_source import GitSource
File "/usr/local/python3.7.0/lib/python3.7/site-packages/paddlehub/server/git_source.py", line 27, in <module>
import git
File "/usr/local/python3.7.0/lib/python3.7/site-packages/git/__init__.py", line 42, in <module>
from git.config import GitConfigParser # @NoMove @IgnorePep8
File "/usr/local/python3.7.0/lib/python3.7/site-packages/git/config.py", line 48, in <module>
from typing import OrderedDict
ImportError: cannot import name 'OrderedDict' from 'typing' (/usr/local/python3.7.0/lib/python3.7/typing.py)
```
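The failing import exists only from Python 3.7.2 onward; the interpreter here is 3.7.0. A hedged compatibility sketch (our workaround idea, not an official PaddleHub or GitPython fix):

```python
# typing.OrderedDict was added in Python 3.7.2; on 3.7.0/3.7.1 fall back to
# the runtime class in collections, which behaves the same at runtime.
try:
    from typing import OrderedDict
except ImportError:  # Python < 3.7.2
    from collections import OrderedDict

d = OrderedDict([("a", 1), ("b", 2)])
```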
|
open
|
2021-08-12T11:19:03Z
|
2021-08-13T02:55:26Z
|
https://github.com/PaddlePaddle/PaddleHub/issues/1569
|
[
"installation"
] |
huangcao008
| 1
|
nvbn/thefuck
|
python
| 728
|
`fuck` should suggest specific heroku apps when there are multiple
|
Here's an example of current behavior, as of https://github.com/nvbn/thefuck/commit/8fb5ddefb60c59f8bd90a72698c1a384f1433c5e:
```
$ heroku pg
▸ Multiple apps in git remotes
▸ Usage: --remote heroku-dev
▸ or: --app myapp-dev
▸ Your local git repository has more than 1 app referenced in git remotes.
▸ Because of this, we can't determine which app you want to run this command against.
▸ Specify the app you want with --app or --remote.
▸ Heroku remotes in repo:
▸ myapp (heroku)
▸ myapp-dev (heroku-dev)
▸
▸ https://devcenter.heroku.com/articles/multiple-environments
$ fuck
heroku open [enter/↑/↓/ctrl+c]
```
Here's an example of behavior that would be more useful:
```
$ heroku pg
▸ Multiple apps in git remotes
▸ Usage: --remote heroku-dev
▸ or: --app myapp-dev
▸ Your local git repository has more than 1 app referenced in git remotes.
▸ Because of this, we can't determine which app you want to run this command against.
▸ Specify the app you want with --app or --remote.
▸ Heroku remotes in repo:
▸ myapp (heroku)
▸ myapp-dev (heroku-dev)
▸
▸ https://devcenter.heroku.com/articles/multiple-environments
$ fuck
heroku pg --app myapp-dev [enter/↑/↓/ctrl+c]
```
It would be even better if all of the apps were suggested separately:
```
heroku pg --app myapp-dev
heroku pg --app myapp
```
These can be derived from the initial error message, in the lines following ` ▸ Heroku remotes in repo:`
---
I'll try to make time to submit a patch, but I wanted to go ahead and create an issue for it.
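A rough sketch of how such a rule might parse the apps out of the error text (`match`/`get_new_command` follow thefuck's usual rule interface; `parse_heroku_apps` and the exact regex are our assumptions, not an existing rule):

```python
import re

def parse_heroku_apps(output):
    # Keep only the lines after "Heroku remotes in repo:" and pull out the
    # app name that precedes each parenthesized remote name.
    tail = output.split('Heroku remotes in repo:')[1]
    return re.findall(r'(\S+)\s+\(', tail)

def match(command):
    return command.script.startswith('heroku') and \
        'Multiple apps in git remotes' in command.output

def get_new_command(command):
    # Suggest one corrected command per app, as proposed above.
    return ['{} --app {}'.format(command.script, app)
            for app in parse_heroku_apps(command.output)]
```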
|
closed
|
2017-11-06T23:27:41Z
|
2017-11-09T23:42:24Z
|
https://github.com/nvbn/thefuck/issues/728
|
[] |
josephfrazier
| 0
|
scikit-learn/scikit-learn
|
python
| 30,921
|
Persistent UserWarning about KMeans Memory Leak on Windows Despite Applying Suggested Fixes
|
### Describe the bug
Issue Description
When running code involving GaussianMixture (or KMeans), a UserWarning about a known memory leak on Windows with MKL is raised, even after implementing the suggested workaround (OMP_NUM_THREADS=1 or 2). The warning persists across multiple environments and configurations, indicating the issue may require further investigation.
Warning Message:
```
C:\ProgramData\anaconda3\Lib\site-packages\sklearn\cluster\_kmeans.py:1429: UserWarning: KMeans is known to have a memory leak on Windows with MKL, when there are less chunks than available threads. You can avoid it by setting the environment variable OMP_NUM_THREADS=2.
warnings.warn(
```
Steps to Reproduce
1. Code example:
```python
import os
os.environ["OMP_NUM_THREADS"] = "1" # Also tested with "2"
os.environ["MKL_NUM_THREADS"] = "1"
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture
# Generate synthetic 3D data
X, _ = make_blobs(n_samples=300, n_features=3, centers=3, random_state=42)
# Train GMM model
gmm = GaussianMixture(n_components=3, random_state=42)
gmm.fit(X) # Warning triggered here
```
## Environment:
OS: Windows 11
Python: 3.10.12
scikit-learn: 1.3.2
numpy: 1.26.0 (linked to MKL via Anaconda)
Installation Method: Anaconda (conda install scikit-learn).
## Expected vs. Actual Behavior
Expected: Setting OMP_NUM_THREADS should suppress the warning and resolve the memory leak.
Actual: The warning persists despite environment variable configurations, reinstalls, and thread-limiting methods.
## Attempted Fixes
Set OMP_NUM_THREADS=1 or 2 in code and system environment variables.
Limited threads via threadpoolctl:
code:
```python
from threadpoolctl import threadpool_limits
with threadpool_limits(limits=1, user_api='blas'):
gmm.fit(X)
```
Reinstalled numpy and scipy with OpenBLAS instead of MKL.
Tested in fresh conda environments.
Updated all packages to latest versions.
None of these resolved the warning.
Additional Context:
The warning appears even when using GaussianMixture, which indirectly relies on KMeans-related code.
The issue is specific to Windows + MKL. No warnings on Linux/Mac.
Full error log: [Attach log if available].
Questions for Maintainers:
Is there a deeper configuration or bug causing this warning to persist?
Are there alternative workarounds for Windows users?
Is this issue being tracked in ongoing development?
Thank you for your time and support!
Let me know if further details are needed.
### Steps/Code to Reproduce
```python
import os
os.environ["OMP_NUM_THREADS"] = "1" # Also tested with "2"
os.environ["MKL_NUM_THREADS"] = "1"
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture
# Generate synthetic 3D data
X, _ = make_blobs(n_samples=300, n_features=3, centers=3, random_state=42)
# Train GMM model
gmm = GaussianMixture(n_components=3, random_state=42)
gmm.fit(X) # Warning triggered here
```
### Expected Results
No `UserWarning` should be raised once `OMP_NUM_THREADS` is set.
### Actual Results
```
C:\ProgramData\anaconda3\Lib\site-packages\sklearn\cluster\_kmeans.py:1429: UserWarning: KMeans is known to have a memory leak on Windows with MKL, when there are less chunks than available threads. You can avoid it by setting the environment variable OMP_NUM_THREADS=2.
warnings.warn(
```
### Versions
```shell
scikit-learn: 1.3.2
numpy: 1.26.0 (linked to MKL via Anaconda)
```
|
open
|
2025-03-01T19:34:29Z
|
2025-03-04T16:07:02Z
|
https://github.com/scikit-learn/scikit-learn/issues/30921
|
[
"Bug",
"Needs Info"
] |
rahimHub
| 1
|
graphdeco-inria/gaussian-splatting
|
computer-vision
| 540
|
A perhaps great, perhaps stupid idea: more than one idea to extract a mesh from GS.
|
1. Add regularization terms that force: a) points to align with the surface, b) every point to flatten. (See SuGaR: https://arxiv.org/abs/2311.12775)
2. Learn a mapping function: mesh = f_gs_to_mesh(gs_points_convergent, [cameras_in_trainset, target_images_in_trainset]).
In this direction, I suppose we can learn the regular pattern (intuitively, there should be one), which would be better than a hard-coded algorithm (probably difficult) that does the mapping.
Great masters, any comments?
|
open
|
2023-12-11T06:44:19Z
|
2024-01-08T10:48:00Z
|
https://github.com/graphdeco-inria/gaussian-splatting/issues/540
|
[] |
yuedajiong
| 2
|
aio-libs-abandoned/aioredis-py
|
asyncio
| 571
|
Mixins return different types of asyncio objects
|
It has come to my attention that the redis mixins do not all return the same type of asyncio objects. Take the following code example:
```python
import asyncio
import aioredis
redis = await aioredis.create_redis_pool(('localhost', 6379), loop=asyncio.get_running_loop())
fut1 = asyncio.create_task(redis.set('thing', 'hi'))
fut2 = asyncio.create_task(redis.get('anotherThing'))
results = await asyncio.gather(fut1, fut2)
```
Running this will result in the following error on the `redis.get` line (before running the gather)
```
Future exception was never retrieved
future: <Future finished exception=RuntimeError('this is unexpected')>
RuntimeError: this is unexpected
```
I was a tad confused by this, until I ran this:
```python
print(type(redis.set('thing', 'hi')))
print(type(redis.get('anotherThing')))
```
Which results in:
```
<class 'coroutine'>
<class '_asyncio.Future'>
```
I understand that for the above use-case, I could easily use a `multi_exec`, but it seems very strange that some mixins return coroutines, while others return literal futures. I would expect all the mixins to be similar. This irregularity makes the use case of creating, then gathering futures, like above, extremely difficult. In order to accomplish something like this, you would have to keep track of which mixins return coroutines, and which return futures in order to accomplish this, which makes no sense.
All mixins should be standardized to return either a coroutine (preferable), or a Future, not a combination of the two.
|
closed
|
2019-04-09T01:27:22Z
|
2021-03-18T23:55:31Z
|
https://github.com/aio-libs-abandoned/aioredis-py/issues/571
|
[
"resolved-via-latest"
] |
cheeseandcereal
| 0
|
RobertCraigie/prisma-client-py
|
pydantic
| 273
|
Add support for PyPy
|
- [ ] Run tests using PyPy
- [ ] Add PyPI classifiers mentioning support
|
open
|
2022-02-05T17:33:37Z
|
2022-02-05T17:36:32Z
|
https://github.com/RobertCraigie/prisma-client-py/issues/273
|
[
"kind/improvement",
"level/intermediate",
"priority/high",
"topic: interpreter"
] |
RobertCraigie
| 0
|
tensorly/tensorly
|
numpy
| 264
|
Normalization in CP algorithms
|
#### Describe the bug
In tensorly 0.5.1 installed from the Anaconda channel, non-negative PARAFAC with normalization returns NaNs as a result when run on GPU using PyTorch 1.8.1 as the backend. Non-negative PARAFAC without normalization flag works fine on GPU.
#### Steps or Code to Reproduce
```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import non_negative_parafac
tl.set_backend('pytorch')
p1_g = np.array([1,0,2,0], dtype=float)
p1_c = np.array([1,1,0,0], dtype=float)
p2_g = np.array([0,1,2,0], dtype=float)
p2_c = np.array([0,1,1,0], dtype=float)
p3_g = np.array([1,0,0,2], dtype=float)
p3_c = np.array([0,0,1,1], dtype=float)
t = np.outer(p1_c,p1_g)
t = np.append(t,np.outer(p2_c,p2_g),axis=0)
t = np.append(t,np.outer(p3_c,p3_g),axis=0)
t = t.reshape(3,4,4)
t = tl.tensor(t, device='cuda:0')
factors = non_negative_parafac(t, rank=3, normalize_factors=True)
print(factors[1])
factors = non_negative_parafac(t, rank=3)
print(factors[1])
```
#### Expected behavior
PARAFAC should give meaningful results with and without the normalization flag when run on GPU with PyTorch as the backend.
#### Actual result
```
[tensor([[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan]], device='cuda:0'), tensor([[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan]], device='cuda:0'), tensor([[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan],
[nan, nan, nan]], device='cuda:0')]
[tensor([[1.9754e-02, 0.0000e+00, 2.0000e+00],
[2.9770e+00, 0.0000e+00, 4.4415e-04],
[0.0000e+00, 2.1693e+00, 0.0000e+00]], device='cuda:0'), tensor([[3.7141e-05, 2.0000e-12, 1.1050e+00],
[7.6445e-01, 2.0000e-12, 1.0991e+00],
[7.6456e-01, 1.0000e+00, 0.0000e+00],
[4.0791e-13, 1.0000e+00, 2.4172e-12]], device='cuda:0'), tensor([[1.4972e-05, 4.6098e-01, 4.5371e-01],
[4.3936e-01, 1.0625e-12, 0.0000e+00],
[8.7864e-01, 1.0625e-12, 9.0441e-01],
[2.2771e-14, 9.2195e-01, 1.0292e-12]], device='cuda:0')]
```
#### Versions
Linux-5.4.0-72-generic-x86_64-with-debian-bullseye-sid
Python 3.6.13 |Anaconda, Inc.| (default, Feb 23 2021, 21:15:04)
NumPy 1.19.2
SciPy 1.3.1
TensorLy 0.5.1
CUDA 11.0
PyTorch 1.8.1
|
open
|
2021-04-29T04:04:43Z
|
2022-07-11T13:55:38Z
|
https://github.com/tensorly/tensorly/issues/264
|
[] |
rusillini
| 8
|
2noise/ChatTTS
|
python
| 771
|
RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)`
|
At first the GPU was not being used; I solved that with the help of this:
> https://github.com/jianchang512/ChatTTS-ui/issues/35
My CUDA is 11.x, so I followed these steps:
> pip uninstall -y torch torchaudio
> # if CUDA is 11.x, run this
> pip install torch torchaudio --index-url https://download.pytorch.org/whl/cu118
> # if it is 12.x, run this
> pip install torch torchaudio --index-url https://download.pytorch.org/whl/cu121
----------------------------
Now a new problem has appeared:
> RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)`
It seems to be because the GPU memory is too small. Posts online say to reduce batch_size, but I don't know how to adjust it. Does anyone know how to fix this?
|
closed
|
2024-10-07T06:22:30Z
|
2024-10-09T15:07:01Z
|
https://github.com/2noise/ChatTTS/issues/771
|
[
"wontfix"
] |
benojan
| 2
|
aleju/imgaug
|
deep-learning
| 76
|
StochasticParameter should support __radd__, __rpow__ etc.
|
Currently only `__add__`, `__pow__` etc. are supported, which means expressions where the first operand is a number (such as `3/ia.Normal(0,1)`) don't work. This is especially a problem for expressions such as `2**ia.Uniform(-1,1)`. These can only be written as `ia.Deterministic(2)**ia.Uniform(-1,1)`, which is much less straightforward. This could be solved by implementing `__rpow__`, `__rdiv__` etc. on `StochasticParameter`.
Basically something like this could be added into StochasticParameter:
```python
@staticmethod
def deterministic_if_single_number(x):
    if ia.is_single_number(x):
        return Deterministic(x)
    return x

def __radd__(self, other):
    return self.deterministic_if_single_number(other) + self

def __rsub__(self, other):
    return self.deterministic_if_single_number(other) - self

def __rmul__(self, other):
    return self.deterministic_if_single_number(other) * self

def __rpow__(self, other):
    return self.deterministic_if_single_number(other) ** self

def __rdiv__(self, other):
    return self.deterministic_if_single_number(other) / self

def __rtruediv__(self, other):
    return self.deterministic_if_single_number(other) / self
```
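A self-contained toy version of the same pattern (`Param` stands in for `StochasticParameter`, and `_wrap` for the `deterministic_if_single_number` helper) shows why the reflected methods make `2 ** param` work:

```python
import numbers

class Param:
    """Toy stand-in for StochasticParameter with reflected operators."""
    def __init__(self, value):
        self.value = value

    @staticmethod
    def _wrap(x):
        # Promote plain numbers to Param, mirroring Deterministic(x).
        return Param(x) if isinstance(x, numbers.Number) else x

    def __pow__(self, other):
        return Param(self.value ** Param._wrap(other).value)

    def __rpow__(self, other):
        # Invoked for `2 ** param`, after int.__pow__ returns NotImplemented.
        return Param._wrap(other) ** self

print((2 ** Param(3)).value)  # 8
```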
|
closed
|
2017-11-15T12:09:32Z
|
2021-03-18T20:27:14Z
|
https://github.com/aleju/imgaug/issues/76
|
[] |
isarandi
| 1
|
tox-dev/tox
|
automation
| 2,442
|
Use external package builder with --installpkg
|
Hey,
When using `tox==4.0.0b2` with one of our projects that has an [external package env](https://github.com/snowflakedb/snowflake-sqlalchemy/blob/5d17bfb3dbfb1a9b29d1156c1da538ecf61847e9/tox.ini#L25), it appears that we cannot supply a wheel file through `--installpkg` if we have already built one.
```console
$ tox -rvv -e py39 --installpkg dist/snowflake_sqlalchemy-1.3.4-py2.py3-none-any.whl
ROOT: 330 E HandledError| .pkg_external is already defined as a virtualenv-cmd-builder, cannot be virtualenv-pep-517 too [tox/run.py:21]
```
You can repro the issue with the following commands:
```console
cd /tmp
git clone https://github.com/snowflakedb/snowflake-sqlalchemy
cd snowflake-sqlalchemy
virtualenv -p python3.9 venv
. venv/bin/activate
pip install build
pip install --pre tox
python -m build -w -o dist
tox -rvv -e py39 --installpkg dist/snowflake_sqlalchemy-1.3.4-py2.py3-none-any.whl
```
Thanks for the help!
|
closed
|
2022-06-17T17:45:45Z
|
2023-06-17T01:18:12Z
|
https://github.com/tox-dev/tox/issues/2442
|
[
"bug:normal",
"help:wanted",
"tox4"
] |
sfc-gh-mkeller
| 14
|
JaidedAI/EasyOCR
|
deep-learning
| 830
|
ModuleNotFoundError: No module named 'easyocr/DBNet'
|
Hi everyone :)
First I want to thank you for this great library! Great job! :)
I wanted to try the new detector DBNet, but I currently get an error:
```
import easyocr
reader = easyocr.Reader(["en"], detect_network="dbnet18")
```
which produces the following exception:
```
Traceback (most recent call last):
File "C:\Users\b30768\PycharmProjects\ocr-service\sandbox.py", line 10, in <module>
reader = easyocr.Reader(["en"], detect_network="dbnet18")
File "C:\Users\b30768\PycharmProjects\ocr-service\venv\lib\site-packages\easyocr\easyocr.py", line 210, in __init__
self.detector = self.initDetector(detector_path)
File "C:\Users\b30768\PycharmProjects\ocr-service\venv\lib\site-packages\easyocr\easyocr.py", line 272, in initDetector
return self.get_detector(detector_path, self.device, self.quantize, cudnn_benchmark=self.cudnn_benchmark)
File "C:\Users\b30768\PycharmProjects\ocr-service\venv\lib\site-packages\easyocr\detection_db.py", line 128, in get_detector
dbnet.construct_model(dbnet.configs['resnet18']['model'])
File "C:\Users\b30768\PycharmProjects\ocr-service\venv\lib\site-packages\easyocr\DBNet\DBNet.py", line 168, in construct_model
self.model = Configurable.construct_class_from_config(config).structure.builder.build(self.device)
File "C:\Users\b30768\PycharmProjects\ocr-service\venv\lib\site-packages\easyocr\DBNet\model\constructor.py", line 40, in construct_class_from_config
cls = Configurable.extract_class_from_args(args)
File "C:\Users\b30768\PycharmProjects\ocr-service\venv\lib\site-packages\easyocr\DBNet\model\constructor.py", line 47, in extract_class_from_args
module = importlib.import_module(package)
File "C:\Users\b30768\AppData\Local\Programs\Python\Python39\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 972, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 972, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 984, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'easyocr/DBNet'
```
I tried several things but can't get it to work. I think there is a missing `__init__.py` or an import, but I can't figure it out.
Could someone try to reproduce it?
|
closed
|
2022-08-25T08:14:53Z
|
2022-09-05T06:57:10Z
|
https://github.com/JaidedAI/EasyOCR/issues/830
|
[] |
gizmo84
| 8
|
tradingstrategy-ai/web3-ethereum-defi
|
pytest
| 15
|
Get revert reason of any tx and especially for failed trades
|
Uniswap trade analyzer should be able to tell why the trade failed
- Too much slippage
- Some internal Uniswap error
- (There should be no other reasons if the tokens are not scam tokens)
As a bonus, the trade analyzer should be able to tell if the trade was reverted because of slippage. I'm not sure how we can pick this up from the transaction receipt; I believe it is going to be quite hard. The "real" EVM nodes do not store the revert reason for a very long time (a few hundred blocks), and one might need to replay the transaction.
https://snakecharmers.ethereum.org/web3py-revert-reason-parsing/
This would be a research task: do some little trades with Ganache and see what kind of data we can get out of JSON-RPC for the revert reason, if any. And then do the transaction replay trick.
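For the replay trick, the data an `eth_call` returns for a standard revert is the `Error(string)` selector `0x08c379a0` followed by an ABI-encoded string; decoding it is mechanical. A hedged sketch (the function name and the sample reason string in the test are ours, not the library's):

```python
ERROR_SELECTOR = bytes.fromhex("08c379a0")  # keccak("Error(string)")[:4]

def decode_revert_reason(data: bytes) -> str:
    """Decode the standard Error(string) revert payload, if present."""
    if not data.startswith(ERROR_SELECTOR):
        return "<no standard revert reason>"
    payload = data[4:]
    # ABI layout: 32-byte offset, 32-byte string length, then UTF-8 bytes.
    length = int.from_bytes(payload[32:64], "big")
    return payload[64:64 + length].decode("utf-8")
```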
|
closed
|
2022-03-21T07:43:29Z
|
2022-03-25T23:03:17Z
|
https://github.com/tradingstrategy-ai/web3-ethereum-defi/issues/15
|
[
"priority: P2"
] |
miohtama
| 3
|
explosion/spaCy
|
data-science
| 13,343
|
Sharding Warning
|
When I run a deployed GPT 3.5 model from Azure, I get this warning:
"UserWarning: Task supports sharding, but model does not provide context length. Data won't be sharded, prompt might exceed the model's context length. Set context length in your config"
What is the way to set the context_length? I am not able to find it properly documented anywhere.
## How to reproduce the behaviour
Config:
```
[nlp]
lang = "en"
pipeline = ["llm"]
batch_size = 128
[components]
[components.llm]
factory = "llm"
[components.llm.task]
@llm_tasks = "spacy.TextCat.v2"
labels = ["COMPLIMENT", "INSULT"]
[components.llm.model]
@llm_models = "spacy.Azure.v1"
model_type = "chat"
deployment_name = "gpt-35"
name = "gpt-35"
config = {"temperature": 0.0}
base_url = "https://****.openai.azure.com/"
```
Code:
```
from spacy_llm.util import assemble
nlp = assemble("config.cfg")
doc = nlp("You look beautiful!")
print(doc.cats)
```
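For what it's worth, recent spacy-llm versions accept a `context_length` entry on the model block; if that applies to `spacy.Azure.v1` here, the model section of the config above could presumably be extended like this (the 4096 value is an assumption about the deployment, not a documented default):

```
[components.llm.model]
@llm_models = "spacy.Azure.v1"
model_type = "chat"
deployment_name = "gpt-35"
name = "gpt-35"
config = {"temperature": 0.0}
base_url = "https://****.openai.azure.com/"
context_length = 4096
```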
## Your Environment
- **spaCy version:** 3.7.4
- **Platform:** macOS-14.3.1-arm64-arm-64bit
- **Python version:** 3.9.6
|
closed
|
2024-02-22T04:33:38Z
|
2024-02-22T20:54:37Z
|
https://github.com/explosion/spaCy/issues/13343
|
[
"feat/llm"
] |
AbinashSankaran
| 1
|
google-deepmind/sonnet
|
tensorflow
| 232
|
Is there a convenient way to print output shape of every layer?
|
|
closed
|
2022-02-19T12:53:04Z
|
2022-02-20T08:05:34Z
|
https://github.com/google-deepmind/sonnet/issues/232
|
[] |
RlChen0
| 2
|
opengeos/streamlit-geospatial
|
streamlit
| 124
|
Earth Engine Authenticate
|
I cloned the repository and tried to host it as a Streamlit app, and everything works fine except the Earth Engine authentication.
<img width="398" alt="image" src="https://github.com/giswqs/geemap-streamlit/assets/120152624/0785e120-ffe7-4ad5-b452-9c8c4ed03712">
The screen always gets stuck here, even though I have included the secret key via the Streamlit app secrets.
To make sure the secret key can be detected, on the timelapse page I also put in a line of code as follows:
`os.environ["EARTHENGINE_TOKEN"] == st.secrets["EARTHENGINE_TOKEN"]`
But nothing changes. I hope you can help me look into this issue; my app link is here: https://sr2wzfkohavxdtmxglzrih.streamlit.app/
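One detail worth double-checking in the snippet above: `==` compares and discards the result, while a single `=` actually assigns the token into the environment. A minimal sketch (the fake token and the `secrets` dict are stand-ins for `st.secrets`):

```python
import os

# Stand-in for st.secrets; the token value is fabricated for illustration.
secrets = {"EARTHENGINE_TOKEN": "fake-token-for-illustration"}

# Single '=' assigns; '==' (as in the snippet above) only compares and
# changes nothing.
os.environ["EARTHENGINE_TOKEN"] = secrets["EARTHENGINE_TOKEN"]
```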
|
closed
|
2023-08-03T06:02:26Z
|
2024-01-29T13:16:19Z
|
https://github.com/opengeos/streamlit-geospatial/issues/124
|
[] |
keanteng
| 1
|
koxudaxi/datamodel-code-generator
|
fastapi
| 1,952
|
Support scientific notation (YeX)
|
**Describe the bug**
Scientific notation in json converts to str in model:
```python
class Model(BaseModel):
hello: str
```
**To Reproduce**
Example json:
```json
{
"hello": 1e-9
}
```
Used commandline:
```
$ datamodel-codegen --input-file-type json --input gen/test-meta.json --output gen/test_scientific_notation.py
```
**Expected behavior**
Expected float.
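As a supporting check, Python's own JSON parser already reads scientific notation as a float, which is why `float` is the natural type to infer here:

```python
import json

# json.loads parses 1e-9 as a float, not a string.
doc = json.loads('{"hello": 1e-9}')
print(type(doc["hello"]))  # <class 'float'>
```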
**Version:**
- OS: macOS
- Python version: 3.11.9
- datamodel-code-generator version: 0.25.6
|
open
|
2024-05-09T14:30:06Z
|
2024-05-09T14:30:06Z
|
https://github.com/koxudaxi/datamodel-code-generator/issues/1952
|
[] |
swelborn
| 0
|
tflearn/tflearn
|
data-science
| 969
|
SequenceGenerator has inconsistent interface between fit() and generate()
|
Hi, I'm trying to generate sequences from input word sequences (not character sequences). I use TF's VocabularyProcessor to produce a word -> integer mapping.
SequenceGenerator.fit() expects one-hot encoded inputs, so I initially apply my vocabulary to the input sequence and then one-hot encode that to generate the inputs X and Y.
On the other hand, SequenceGenerator.generate() expects an enumerable seed and applies the vocabulary supplied to the constructor to transform the input sequence and produce the one-hot vectors internally.
These two methods should match one another in terms of input data types. Either ask fit() to perform the one-hot encoding using the input vocabulary, or require the seed input to be one-hot encoded.
In my particular case I'm forced to know exactly how TF's VocabularyProcessor splits and normalizes strings in order to pass a valid seed to generate(), and in addition at construction time I have to export the private vocabulary from VocabularyProcessor.
I can supply a code sample if I'm not being completely clear. :-)
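The mismatch can be sketched with a toy vocabulary (names and shapes are illustrative, not tflearn's API):

```python
vocab = {"the": 0, "cat": 1, "sat": 2}

def one_hot(ids, depth):
    # Row i is the one-hot vector for token id ids[i].
    return [[1.0 if j == i else 0.0 for j in range(depth)] for i in ids]

# fit() wants one-hot arrays built by the caller...
X = one_hot([vocab[w] for w in ["the", "cat"]], depth=len(vocab))
# ...while generate() wants the raw token sequence and encodes internally.
seed = ["the", "cat"]
```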
|
open
|
2017-12-01T04:57:03Z
|
2017-12-01T04:57:03Z
|
https://github.com/tflearn/tflearn/issues/969
|
[] |
simra
| 0
|
ploomber/ploomber
|
jupyter
| 219
|
Create pkg with utility functions for sample projects
|
See functions in parametrized/nb.md
|
closed
|
2020-08-11T06:06:44Z
|
2020-08-12T01:32:56Z
|
https://github.com/ploomber/ploomber/issues/219
|
[] |
edublancas
| 1
|
flairNLP/flair
|
pytorch
| 2,890
|
Loading a model creates a warning; it's annoying.
|
In version `flair == 0.11.3` and `huggingface_hub == 0.8.1` loading a HuggingFace model creates a warning. To reproduce:
```
>>> from flair.models import MultiTagger
>>> models = ["flair/chunk-english"]
>>> MultiTagger.load(models)
/home/ubuntu/.local/lib/python3.9/site-packages/huggingface_hub/file_download.py:560: FutureWarning: `cached_download` is the legacy way to download files from the HF hub, please consider upgrading to `hf_hub_download`
warnings.warn(
...
```
This is because `flair.models.sequence_tagger_model` caches models using `from huggingface_hub import hf_hub_download, hf_hub_url`. This is not recommended as of `huggingface_hub == 0.8.1`; see https://github.com/huggingface/huggingface_hub/releases. The warning is annoying.
|
closed
|
2022-08-06T12:50:50Z
|
2022-08-06T16:25:40Z
|
https://github.com/flairNLP/flair/issues/2890
|
[] |
tbachlechner
| 4
|
holoviz/panel
|
jupyter
| 7,264
|
Bokeh: BokehJS was loaded multiple times but one version failed to initialize.
|
Hi team, thanks for your hard work. If possible, can we put a high priority on this fix? It's quite damaging to user experience.
#### ALL software version info
(this library, plus any other relevant software, e.g. bokeh, python, notebook, OS, browser, etc should be added within the dropdown below.)
<details>
<summary>Software Version Info</summary>
```plaintext
acryl-datahub==0.10.5.5
aiohappyeyeballs==2.4.0
aiohttp==3.10.5
aiosignal==1.3.1
alembic==1.13.2
ansi2html==1.9.2
anyio==4.4.0
argon2-cffi==23.1.0
argon2-cffi-bindings==21.2.0
arrow==1.3.0
asttokens==2.4.1
async-generator==1.10
async-lru==2.0.4
attrs==24.2.0
autograd==1.7.0
autograd-gamma==0.5.0
avro==1.10.2
avro-gen3==0.7.10
awscli==1.33.27
babel==2.16.0
backports.tarfile==1.2.0
beautifulsoup4==4.12.3
black==24.8.0
bleach==6.1.0
blinker==1.8.2
bokeh==3.4.2
bokehtools==0.46.2
boto3==1.34.76
botocore==1.34.145
bouncer-client==0.4.1
cached-property==1.5.2
certifi==2024.7.4
certipy==0.1.3
cffi==1.17.0
charset-normalizer==3.3.2
click==8.1.7
click-default-group==1.2.4
click-spinner==0.1.10
cloudpickle==3.0.0
colorama==0.4.6
colorcet==3.0.1
comm==0.2.2
contourpy==1.3.0
cryptography==43.0.0
cycler==0.12.1
dash==2.17.1
dash-core-components==2.0.0
dash-html-components==2.0.0
dash-table==5.0.0
dask==2024.8.1
datashader==0.16.3
datatank-client==2.1.10.post12049
dataworks-common==2.1.10.post12049
debugpy==1.8.5
decorator==5.1.1
defusedxml==0.7.1
Deprecated==1.2.14
directives-client==0.4.4
docker==7.1.0
docutils==0.16
entrypoints==0.4
executing==2.0.1
expandvars==0.12.0
fastjsonschema==2.20.0
Flask==3.0.3
fonttools==4.53.1
formulaic==1.0.2
fqdn==1.5.1
frozenlist==1.4.1
fsspec==2024.6.1
future==1.0.0
gitdb==4.0.11
GitPython==3.1.43
greenlet==3.0.3
h11==0.14.0
holoviews==1.19.0
httpcore==1.0.5
httpx==0.27.2
humanfriendly==10.0
hvplot==0.10.0
idna==3.8
ijson==3.3.0
importlib-metadata==4.13.0
interface-meta==1.3.0
ipykernel==6.29.5
ipython==8.18.0
ipython-genutils==0.2.0
ipywidgets==8.1.5
isoduration==20.11.0
isort==5.13.2
itsdangerous==2.2.0
jaraco.classes==3.4.0
jaraco.context==6.0.1
jaraco.functools==4.0.2
jedi==0.19.1
jeepney==0.8.0
Jinja2==3.1.4
jira==3.2.0
jmespath==1.0.1
json5==0.9.25
jsonpointer==3.0.0
jsonref==1.1.0
jsonschema==4.17.3
jsonschema-specifications==2023.12.1
jupyter==1.0.0
jupyter-console==6.6.3
jupyter-dash==0.4.2
jupyter-events==0.10.0
jupyter-lsp==2.2.5
jupyter-resource-usage==1.1.0
jupyter-server-mathjax==0.2.6
jupyter-telemetry==0.1.0
jupyter_bokeh==4.0.5
jupyter_client==8.6.2
jupyter_core==5.7.2
jupyter_server==2.14.2
jupyter_server_proxy==4.3.0
jupyter_server_terminals==0.5.3
jupyterhub==4.1.4
jupyterlab==4.2.5
jupyterlab-vim==4.1.3
jupyterlab_code_formatter==3.0.2
jupyterlab_git==0.50.1
jupyterlab_pygments==0.3.0
jupyterlab_server==2.27.3
jupyterlab_templates==0.5.2
jupyterlab_widgets==3.0.13
keyring==25.3.0
kiwisolver==1.4.5
lckr_jupyterlab_variableinspector==3.2.1
lifelines==0.29.0
linkify-it-py==2.0.3
llvmlite==0.43.0
locket==1.0.0
Mako==1.3.5
Markdown==3.3.7
markdown-it-py==3.0.0
MarkupSafe==2.1.5
matplotlib==3.9.2
matplotlib-inline==0.1.7
mdit-py-plugins==0.4.1
mdurl==0.1.2
mistune==3.0.2
mixpanel==4.10.1
more-itertools==10.4.0
multidict==6.0.5
multipledispatch==1.0.0
mypy-extensions==1.0.0
nbclassic==1.1.0
nbclient==0.10.0
nbconvert==7.16.4
nbdime==4.0.1
nbformat==5.10.4
nbgitpuller==1.2.1
nest-asyncio==1.6.0
notebook==7.2.2
notebook_shim==0.2.4
numba==0.60.0
numpy==1.26.4
oauthlib==3.2.2
overrides==7.7.0
packaging==24.1
pamela==1.2.0
pandas==2.1.4
pandocfilters==1.5.1
panel==1.4.4
param==2.1.1
parso==0.8.4
partd==1.4.2
pathspec==0.12.1
pexpect==4.9.0
pillow==10.4.0
platformdirs==4.2.2
plotly==5.23.0
progressbar2==4.5.0
prometheus_client==0.20.0
prompt-toolkit==3.0.38
psutil==5.9.8
psycopg2-binary==2.9.9
ptyprocess==0.7.0
pure_eval==0.2.3
pyarrow==15.0.2
pyasn1==0.6.0
pycparser==2.22
pyct==0.5.0
pydantic==1.10.18
Pygments==2.18.0
PyHive==0.7.0
PyJWT==2.9.0
pymssql==2.3.0
PyMySQL==1.1.1
pyodbc==5.1.0
pyOpenSSL==24.2.1
pyparsing==3.1.4
pyrsistent==0.20.0
pyspork==2.24.0
python-dateutil==2.9.0.post0
python-json-logger==2.0.7
python-utils==3.8.2
pytz==2024.1
pyviz_comms==3.0.3
PyYAML==6.0.1
pyzmq==26.2.0
qtconsole==5.5.2
QtPy==2.4.1
ratelimiter==1.2.0.post0
redis==3.5.3
referencing==0.35.1
requests==2.32.3
requests-file==2.1.0
requests-oauthlib==2.0.0
requests-toolbelt==1.0.0
retrying==1.3.4
rfc3339-validator==0.1.4
rfc3986-validator==0.1.1
rpds-py==0.20.0
rsa==4.7.2
ruamel.yaml==0.17.40
ruamel.yaml.clib==0.2.8
ruff==0.6.2
s3transfer==0.10.2
scipy==1.13.0
SecretStorage==3.3.3
Send2Trash==1.8.3
sentry-sdk==2.13.0
simpervisor==1.0.0
six==1.16.0
smmap==5.0.1
sniffio==1.3.1
soupsieve==2.6
SQLAlchemy==1.4.52
sqlparse==0.4.4
stack-data==0.6.3
structlog==22.1.0
tabulate==0.9.0
tenacity==9.0.0
termcolor==2.4.0
terminado==0.18.1
tesladex-client==0.9.0
tinycss2==1.3.0
toml==0.10.2
toolz==0.12.1
tornado==6.4.1
tqdm==4.66.4
traitlets==5.14.3
types-python-dateutil==2.9.0.20240821
typing-inspect==0.9.0
typing_extensions==4.5.0
tzdata==2024.1
tzlocal==5.2
uc-micro-py==1.0.3
uri-template==1.3.0
urllib3==1.26.19
wcwidth==0.2.13
webcolors==24.8.0
webencodings==0.5.1
websocket-client==1.8.0
Werkzeug==3.0.4
widgetsnbextension==4.0.13
wrapt==1.16.0
xarray==2024.7.0
xyzservices==2024.6.0
yapf==0.32.0
yarl==1.9.4
zipp==3.20.1
```
</details>
#### Description of expected behavior and the observed behavior
I should be able to use panel in 2 notebooks simultaneously, but if I save my changes and reload the page, the error will show.
#### Complete, minimal, self-contained example code that reproduces the issue
Steps to reproduce:
1. create 2 notebooks with the following content
```python
# notebook 1
import panel as pn
pn.extension()
pn.Column('hi')
```
```python
# notebook 2 (open in another jupyterlab tab)
import panel as pn
pn.extension()
pn.Column('hi')
```
2. Run both notebooks
3. Save both notebooks
4. Reload your page
5. Try to run either of the notebooks and you'll see the error.
#### Stack traceback and/or browser JavaScript console output
(Ignore the `set_log_level` error. I think it's unrelated.)

|
closed
|
2024-09-12T16:03:33Z
|
2024-09-13T17:34:46Z
|
https://github.com/holoviz/panel/issues/7264
|
[] |
tomascsantos
| 4
|
xzkostyan/clickhouse-sqlalchemy
|
sqlalchemy
| 106
|
Is it possible to make a join across tables from different schemas(databases)?
|
I'm new to your dialect (and the ORM as well).
let's say there are two tables from two databases
`first_db_uri = 'clickhouse://default:@localhost/test_1'`
`second_db_uri = 'clickhouse://default:@localhost/test_2'`
As documented, two separate engines need to be created:
`first_engine = create_engine(first_db_uri)`
`second_engine = create_engine(second_db_uri)`
and sessions:
`first_sessions = make_session(first_engine)`
`second_sessions = make_session(second_engine)`
Let's suppose we have two queries:
`q_1 = first_session.query()`
`q_2 = second_sessions.query()`
How can I make a join between them if they correspond to different sessions/engines?
Thank you in advance.
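For what it's worth, since ClickHouse itself can reference tables from different databases in a single query, one workaround is to use a single engine/session and qualify each table with its database via SQLAlchemy's `schema` argument. A minimal sketch (table and column names are hypothetical, and this is untested against a live server):

```python
from sqlalchemy import MetaData, Table, Column, Integer, select

metadata = MetaData()
# Qualify each table with its database ("schema" in SQLAlchemy terms),
# so one engine/session can see both.
t1 = Table('events', metadata, Column('user_id', Integer), schema='test_1')
t2 = Table('users', metadata, Column('user_id', Integer), schema='test_2')

# Build a cross-database join; execute it through a single
# clickhouse://... engine instead of two separate ones.
query = select(t1.c.user_id).select_from(
    t1.join(t2, t1.c.user_id == t2.c.user_id)
)
print(query)
```

The compiled SQL qualifies both tables (`test_1.events`, `test_2.users`), so the connection's default database no longer matters for the join.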
|
closed
|
2020-10-21T09:09:52Z
|
2020-12-14T18:31:36Z
|
https://github.com/xzkostyan/clickhouse-sqlalchemy/issues/106
|
[] |
justtoreply
| 3
|
ymcui/Chinese-LLaMA-Alpaca
|
nlp
| 475
|
The tokenizer adds an empty-string token 29871 when encoding
|
### Detailed problem description
I found the following confusing behavior when using the tokenizer.
```
from transformers import LlamaTokenizer
tokenizer = LlamaTokenizer.from_pretrained('models/chinese-llama-7b/')
>>> tokenizer.encode('\n', add_special_tokens=False)
[29871, 13]
>>> tokenizer.decode([13])
'\n'
>>> tokenizer.decode([29871])
''
>>> tokenizer.decode([29871, 13])
'\n'
>>> tokenizer.decode([29871, 13]) == tokenizer.decode([13])
True
>>> tokenizer.encode('哈哈哈', add_special_tokens=False)
[29871, 37296]
>>> tokenizer.encode('嘿嘿嘿', add_special_tokens=False)
[29871, 45040, 46816]
```
Here the tokenizer adds an empty-string token `29871` before the newline, yet when decoding, the single token `13` alone already decodes to a newline.
Also, for direct Chinese input a seemingly meaningless `29871` is often added as well, so I'm confused about why this `29871` token is inserted.
```
>>> tokenizer.encode('\n ', add_special_tokens=False)
[29871, 13, 29871]
```
Additionally, a newline + space combination encodes to `[29871, 13, 29871]`, yet decoding just `[13, 29871]` already yields the newline + space.
In short, this is quite confusing; is there an explanation for it?
### Screenshots or logs
### Required checks (of the first three items, keep only the one you are asking about)
- [x] **Base model**: LLaMA
- [x] **Operating system**: Linux
- [x] **Issue type**: Other
- [x] (Required) Since the related dependencies are updated frequently, please make sure you have followed the relevant steps in the [Wiki](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki)
- [x] (Required) I have read the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki/常见问题) and searched the issues, and found no similar problem or solution
- [x] (Required) Third-party plugin issues: e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), [LlamaChat](https://github.com/alexrozanski/LlamaChat); it is also recommended to look for solutions in the corresponding projects
|
closed
|
2023-05-31T08:29:03Z
|
2023-05-31T09:03:19Z
|
https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/475
|
[] |
chenhk-chn
| 5
|
aleju/imgaug
|
machine-learning
| 704
|
bad return order of augment
|
bad return order of augment
|
open
|
2020-07-29T12:09:51Z
|
2020-09-18T03:42:35Z
|
https://github.com/aleju/imgaug/issues/704
|
[] |
vanpersie32
| 1
|
roboflow/supervision
|
pytorch
| 1,190
|
the update of supervision
|
closed
|
2024-05-13T03:47:03Z
|
2024-05-13T10:43:23Z
|
https://github.com/roboflow/supervision/issues/1190
|
[
"question"
] |
OliverLam7725
| 3
|
|
pyqtgraph/pyqtgraph
|
numpy
| 2,695
|
100x Increased speed for `getArrayRegion()`
|
As I explained in the [discussion](https://github.com/pyqtgraph/pyqtgraph/discussions/2690) I recently opened and self-answered, I found a way to obtain the ROI array region ~100x faster.
The `getArrayRegion()` method is quite slow. For small ROI sizes, it's not noticeable. As you increase the size of the ROI, it becomes almost exponentially more noticeable.
Below is a speed comparison of the process of obtaining the ROI region array using `getArrayRegion()` and the process I implemented. The ROI bounding rectangle has a size of 3888 x 5184:

For a ROI bounding rectangle of size 1000 x 1000:

The approach I implemented uses OpenCV's `bitwise_and()` and a modified version of the `renderShapeMask()` method of the ROI class
---
I just noticed I made a typo in the prints, it should read `getArrayRegion()`, not `getArraySlices()`.
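The masking idea described above can be sketched with NumPy alone (using a boolean mask in place of OpenCV's `bitwise_and()` for illustration; the function and array names here are hypothetical, not pyqtgraph's actual API):

```python
import numpy as np

def get_array_region_fast(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Return the image restricted to the ROI: pixels outside the mask are zeroed.

    `mask` is a boolean array of the ROI bounding-rectangle shape, e.g. the
    output of a renderShapeMask()-style rasterization of the ROI outline.
    """
    region = np.zeros_like(image)
    region[mask] = image[mask]  # copy only the pixels covered by the ROI
    return region

# Tiny demo: a 4x4 image with a 2x2 ROI in the top-left corner.
img = np.arange(16, dtype=np.uint8).reshape(4, 4)
roi_mask = np.zeros((4, 4), dtype=bool)
roi_mask[:2, :2] = True
out = get_array_region_fast(img, roi_mask)
```

Because this is a single vectorized masked copy over the bounding rectangle, it avoids the per-pixel transform work that makes `getArrayRegion()` slow for large ROIs.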
|
open
|
2023-04-17T08:11:20Z
|
2023-04-17T09:14:06Z
|
https://github.com/pyqtgraph/pyqtgraph/issues/2695
|
[] |
ghylander
| 0
|
MilesCranmer/PySR
|
scikit-learn
| 666
|
AttributeError: module 'pysr' has no attribute 'Problem'
|
### Discussed in https://github.com/MilesCranmer/PySR/discussions/665
<div type='discussions-op-text'>
<sup>Originally posted by **wkharold** July 10, 2024</sup>
I'm just getting started with PySR. Walking through the Toy Examples with Code. When I do
```python
from pysr import *
```
I get the error in the title:
```
AttributeError: module 'pysr' has no attribute 'Problem'
```
I am running IPython in an Apptainer container built from the `continuumio/miniconda3` Docker image.
When I switch to
```python
import pysr
```
and preface things with `pysr.` as required everything works as expected.</div>
|
closed
|
2024-07-11T01:41:11Z
|
2024-07-15T14:38:05Z
|
https://github.com/MilesCranmer/PySR/issues/666
|
[] |
MilesCranmer
| 1
|
huggingface/datasets
|
nlp
| 7,444
|
Excessive warnings when resuming an IterableDataset+buffered shuffle+DDP.
|
### Describe the bug
I have a large dataset that I shared into 1024 shards and save on the disk during pre-processing. During training, I load the dataset using load_from_disk() and convert it into an iterable dataset, shuffle it and split the shards to different DDP nodes using the recommended method.
However, when the training is resumed mid-epoch, I get thousands of identical warning messages:
```
Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples.
```
### Steps to reproduce the bug
1. Run a multi-node training job using the following python script and interrupt the training after a few seconds to save a mid-epoch checkpoint.
```python
#!/usr/bin/env python
import os
import time
from typing import Dict, List
import torch
import lightning as pl
from torch.utils.data import DataLoader
from datasets import Dataset
from datasets.distributed import split_dataset_by_node
import datasets
from transformers import AutoTokenizer
from more_itertools import flatten, chunked
from torchdata.stateful_dataloader import StatefulDataLoader
from lightning.pytorch.callbacks.on_exception_checkpoint import (
OnExceptionCheckpoint,
)
datasets.logging.set_verbosity_debug()
def dummy_generator():
    # Generate 60 examples: integers from 0 to 59
    # 64 sequences of different lengths
dataset = [
list(range(3, 10)),
list(range(10, 15)),
list(range(15, 21)),
list(range(21, 27)),
list(range(27, 31)),
list(range(31, 36)),
list(range(36, 45)),
list(range(45, 50)),
]
for i in range(8):
for j, ids in enumerate(dataset):
yield {"token_ids": [idx + i * 50 for idx in ids]}
def group_texts(
examples: Dict[str, List[List[int]]],
block_size: int,
eos_token_id: int,
bos_token_id: int,
pad_token_id: int,
) -> Dict[str, List[List[int]]]:
real_block_size = block_size - 2 # make space for bos and eos
    # collapse the sequences into a single list of tokens and then create blocks of real_block_size
input_ids = []
attention_mask = []
for block in chunked(flatten(examples["token_ids"]), real_block_size):
s = [bos_token_id] + list(block) + [eos_token_id]
ls = len(s)
attn = [True] * ls
s += [pad_token_id] * (block_size - ls)
attn += [False] * (block_size - ls)
input_ids.append(s)
attention_mask.append(attn)
return {"input_ids": input_ids, "attention_mask": attention_mask}
def collate_fn(batch):
return {
"input_ids": torch.tensor(
[item["input_ids"] for item in batch], dtype=torch.long
),
"attention_mask": torch.tensor(
[item["attention_mask"] for item in batch], dtype=torch.long
),
}
class DummyModule(pl.LightningModule):
def __init__(self):
super().__init__()
# A dummy linear layer (not used for actual computation)
self.layer = torch.nn.Linear(1, 1)
self.ds = None
self.prepare_data_per_node = False
def on_train_start(self):
# This hook is called once training begins on each process.
print(f"[Rank {self.global_rank}] Training started.", flush=True)
self.data_file = open(f"data_{self.global_rank}.txt", "w")
def on_train_end(self):
self.data_file.close()
def training_step(self, batch, batch_idx):
# Print batch information to verify data loading.
time.sleep(5)
# print("batch", batch, flush=True)
print(
f"\n[Rank {self.global_rank}] Training step, epoch {self.trainer.current_epoch}, batch {batch_idx}: {batch['input_ids']}",
flush=True,
)
self.data_file.write(
f"[Rank {self.global_rank}] Training step, epoch {self.trainer.current_epoch}, batch {batch_idx}: {batch['input_ids']}\n"
)
# Compute a dummy loss (here, simply a constant tensor)
loss = torch.tensor(0.0, requires_grad=True)
return loss
def on_train_epoch_start(self):
epoch = self.trainer.current_epoch
print(
f"[Rank {self.global_rank}] Training epoch {epoch} started.",
flush=True,
)
self.data_file.write(
f"[Rank {self.global_rank}] Training epoch {epoch} started.\n"
)
def configure_optimizers(self):
# Return a dummy optimizer.
return torch.optim.SGD(self.parameters(), lr=0.001)
class DM(pl.LightningDataModule):
def __init__(self):
super().__init__()
self.ds = None
self.prepare_data_per_node = False
def set_epoch(self, epoch: int):
self.ds.set_epoch(epoch)
def prepare_data(self):
# download the dataset
dataset = Dataset.from_generator(dummy_generator)
# save the dataset
dataset.save_to_disk("dataset", num_shards=4)
def setup(self, stage: str):
# load the dataset
ds = datasets.load_from_disk("dataset").to_iterable_dataset(
num_shards=4
)
ds = ds.map(
group_texts,
batched=True,
batch_size=5,
fn_kwargs={
"block_size": 5,
"eos_token_id": 1,
"bos_token_id": 0,
"pad_token_id": 2,
},
remove_columns=["token_ids"],
).shuffle(seed=42, buffer_size=8)
ds = split_dataset_by_node(
ds,
rank=self.trainer.global_rank,
world_size=self.trainer.world_size,
)
self.ds = ds
def train_dataloader(self):
print(
f"[Rank {self.trainer.global_rank}] Preparing train_dataloader...",
flush=True,
)
rank = self.trainer.global_rank
print(
f"[Rank {rank}] Global rank: {self.trainer.global_rank}",
flush=True,
)
world_size = self.trainer.world_size
print(f"[Rank {rank}] World size: {world_size}", flush=True)
return StatefulDataLoader(
self.ds,
batch_size=2,
num_workers=2,
collate_fn=collate_fn,
drop_last=True,
persistent_workers=True,
)
if __name__ == "__main__":
print("Starting Lightning training", flush=True)
# Optionally, print some SLURM environment info for debugging.
print(f"SLURM_NNODES: {os.environ.get('SLURM_NNODES', '1')}", flush=True)
# Determine the number of nodes from SLURM (defaulting to 1 if not set)
num_nodes = int(os.environ.get("SLURM_NNODES", "1"))
model = DummyModule()
dm = DM()
on_exception = OnExceptionCheckpoint(
dirpath="checkpoints",
filename="on_exception",
)
# Configure the Trainer to use distributed data parallel (DDP).
trainer = pl.Trainer(
accelerator="gpu" if torch.cuda.is_available() else "cpu",
devices=1,
strategy=(
"ddp" if num_nodes > 1 else "auto"
), # Use DDP strategy for multi-node training.
num_nodes=num_nodes,
max_epochs=2,
logger=False,
enable_checkpointing=True,
num_sanity_val_steps=0,
enable_progress_bar=False,
callbacks=[on_exception],
)
# resume (uncomment to resume)
# trainer.fit(model, datamodule=dm, ckpt_path="checkpoints/on_exception.ckpt")
# train
trainer.fit(model, datamodule=dm)
```
```bash
#!/bin/bash
#SBATCH --job-name=pl_ddp_test
#SBATCH --nodes=2 # Adjust number of nodes as needed
#SBATCH --ntasks-per-node=1 # One GPU (process) per node
#SBATCH --cpus-per-task=3 # At least as many dataloader workers as required
#SBATCH --gres=gpu:1 # Request one GPU per node
#SBATCH --time=00:10:00 # Job runtime (adjust as needed)
#SBATCH --partition=gpu-preempt # Partition or queue name
#SBATCH -o script.out
# Disable Python output buffering.
export PYTHONUNBUFFERED=1
echo "SLURM job starting on $(date)"
echo "Running on nodes: $SLURM_NODELIST"
echo "Current directory: $(pwd)"
ls -l
# Launch the script using srun so that each process starts the Lightning module.
srun script.py
```
2. Uncomment the "resume" line (second to last) and comment the original `trainer.fit` call (last line).
It will produce the following log.
```
[Rank 0] Preparing train_dataloader...
[Rank 0] Global rank: 0
[Rank 0] World size: 2
[Rank 1] Preparing train_dataloader...
[Rank 1] Global rank: 1
[Rank 1] World size: 2
Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples.
Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples.
Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples.
Assigning 2 shards (or data sources) of the dataset to each node.
Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples.
Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples.
node#0 dataloader worker#1, ': Starting to iterate over 1/2 shards.
Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples.
node#0 dataloader worker#0, ': Starting to iterate over 1/2 shards.
Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples.
Set __getitem__(key) output type to arrow for no columns (when key is int or slice) and don't output other (un-formatted) columns.
Set __getitem__(key) output type to arrow for no columns (when key is int or slice) and don't output other (un-formatted) columns.
node#0 dataloader worker#1, ': Finished iterating over 1/1 shards.
node#0 dataloader worker#0, ': Finished iterating over 1/1 shards.
Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples.
[Rank 0] Training started.
[Rank 0] Training epoch 0 started.
[Rank 0] Training epoch 1 started.
Assigning 2 shards (or data sources) of the dataset to each node.
Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples.
Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples.
node#0 dataloader worker#1, ': Starting to iterate over 1/2 shards.
node#0 dataloader worker#0, ': Starting to iterate over 1/2 shards.
Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples.
Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples.
Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples.
Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples.
node#1 dataloader worker#1, ': Starting to iterate over 1/2 shards.
Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples.
node#1 dataloader worker#0, ': Starting to iterate over 1/2 shards.
Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples.
Set __getitem__(key) output type to arrow for no columns (when key is int or slice) and don't output other (un-formatted) columns.
Set __getitem__(key) output type to arrow for no columns (when key is int or slice) and don't output other (un-formatted) columns.
node#0 dataloader worker#1, ': Finished iterating over 1/1 shards.
node#0 dataloader worker#0, ': Finished iterating over 1/1 shards.
`Trainer.fit` stopped: `max_epochs=2` reached.
Set __getitem__(key) output type to arrow for no columns (when key is int or slice) and don't output other (un-formatted) columns.
Set __getitem__(key) output type to arrow for no columns (when key is int or slice) and don't output other (un-formatted) columns.
node#1 dataloader worker#1, ': Finished iterating over 1/1 shards.
node#1 dataloader worker#0, ': Finished iterating over 1/1 shards.
[Rank 1] Training started.
[Rank 1] Training epoch 0 started.
[Rank 1] Training epoch 1 started.
Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples.
Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples.
node#1 dataloader worker#1, ': Starting to iterate over 1/2 shards.
node#1 dataloader worker#0, ': Starting to iterate over 1/2 shards.
Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples.
Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples.
Set __getitem__(key) output type to arrow for no columns (when key is int or slice) and don't output other (un-formatted) columns.
Set __getitem__(key) output type to arrow for no columns (when key is int or slice) and don't output other (un-formatted) columns.
node#1 dataloader worker#0, ': Finished iterating over 1/1 shards.
node#1 dataloader worker#1, ': Finished iterating over 1/1 shards.
```
I'm also attaching the relevant state_dict to make sure that the state is being checkpointed as expected.
```
{'_iterator_finished': True,
'_snapshot': {'_last_yielded_worker_id': 1,
'_main_snapshot': {'_IterableDataset_len_called': None,
'_base_seed': 3992758080362545099,
'_index_sampler_state': {'samples_yielded': 64},
'_num_workers': 2,
'_sampler_iter_state': None,
'_sampler_iter_yielded': 32,
'_shared_seed': None},
'_snapshot_step': 32,
'_worker_snapshots': {'worker_0': {'dataset_state': {'ex_iterable': {'shard_example_idx': 0,
'shard_idx': 1},
'num_examples_since_previous_state': 0,
'previous_state': {'shard_example_idx': 0,
'shard_idx': 1},
'previous_state_example_idx': 33},
'fetcher_state': {'dataset_iter_state': None,
'fetcher_ended': False},
'worker_id': 0},
'worker_1': {'dataset_state': {'ex_iterable': {'shard_example_idx': 0,
'shard_idx': 1},
'num_examples_since_previous_state': 0,
'previous_state': {'shard_example_idx': 0,
'shard_idx': 1},
'previous_state_example_idx': 33},
'fetcher_state': {'dataset_iter_state': None,
'fetcher_ended': False},
'worker_id': 1}}},
'_steps_since_snapshot': 0}
```
### Expected behavior
Since I'm following all the recommended steps, I don't expect to see any warning when resuming. Am I doing something wrong? Also, can someone explain why I'm seeing 20 identical messages in the log in this reproduction setting? I'm trying to understand why I see thousands of these messages with the actual dataset.
One more surprising thing I noticed in the logs is the change in the number of shards per worker. In the following messages, the denominator changes from 2 to 1.
```
node#1 dataloader worker#1, ': Starting to iterate over 1/2 shards.
...
node#1 dataloader worker#1, ': Finished iterating over 1/1 shards.
```
### Environment info
python: 3.11.10
datasets: 3.3.2
lightning: 2.3.1
|
open
|
2025-03-11T16:34:39Z
|
2025-03-11T16:36:01Z
|
https://github.com/huggingface/datasets/issues/7444
|
[] |
dhruvdcoder
| 0
|
predict-idlab/plotly-resampler
|
plotly
| 270
|
[BUG] Hover data does not match the displayed resampled point in scatter plot
|
**Describe the bug** :crayon:
When using `register_plotly_resampler`, hover data specified through `hover_data` argument to `px.scatter` sometimes shows up with incorrect values when hovering over a point with the mouse. The correct value is displayed when not using `register_plotly_resampler`.
**Reproducing the bug** :mag:
```python
import plotly.express as px
from plotly_resampler import register_plotly_resampler
import pandas as pd
import numpy as np
labels = list(range(0, 3))
np.random.seed(0xdeadbeef)
x = np.random.normal(size=100000)
y = np.random.normal(size=100000)
label = np.random.randint(low=labels[0], high=labels[-1]+1, size=100000).astype(str)
description = np.random.randint(low=3, high=5, size=100000)
df = pd.DataFrame.from_dict({"x": x, "y": y, "label": label, "description": description})
x_label = 'x'
y_label = 'y'
label_label = "label"
df = df.sort_values(by=[x_label])
print("Highlighted point on screenshot:")
print(df[np.isclose(df[x_label], 3.864907)])
# Without resampler, shows correct hover data
fig = px.scatter(df, x=x_label, y=y_label, color=label_label, title=f"Without resampler", hover_data=["description"])
fig.show()
# With resampler, shows incorrect hover data
register_plotly_resampler(mode="auto", default_n_shown_samples=10000)
fig2 = px.scatter(df, x=x_label, y=y_label, color=label_label, title=f"With resampler", hover_data=["description"])
fig2.show()
```
Printing the highlighted point in the screenshots:
```
Highlighted point on screenshot:
x y label description
51820 3.864907 0.705485 0 3
```
**Expected behavior** :wrench:
I expect the hover data to be correct for the displayed resampled point.
**Screenshots** :camera_flash:
Without resampler - shows correct hover data

With resampler - shows incorrect hover data

**Environment information**:
- OS: Ubuntu 20.04
- Python environment:
- Python version: 3.8.10
- plotly-resampler environment: No virtual environments, installed though pip on host environment. Figures are displayed in Firefox.
- plotly-resampler version: 0.9.1
- plotly version: 5.18.0
- pandas version: 2.0.3
- numpy version: 1.24.4
|
closed
|
2023-11-08T11:35:07Z
|
2023-11-20T13:12:02Z
|
https://github.com/predict-idlab/plotly-resampler/issues/270
|
[
"bug"
] |
aasmune
| 1
|
KevinMusgrave/pytorch-metric-learning
|
computer-vision
| 27
|
How to use a miner with CrossBatchMemory?
|
Hi,
I was interested in combining the `MaximumLossMiner` with the `CrossBatchMemory`, but can't seem to get it to work. Here is what I try:
```python
import torch
from pytorch_metric_learning.losses import CrossBatchMemory, NTXentLoss
from pytorch_metric_learning.miners import MaximumLossMiner
batch_size = 16
embedding_dim = 128
# Generate a dummy batch
anchor_embeddings = torch.randn(batch_size, embedding_dim)
positive_embeddings = torch.randn(batch_size, embedding_dim)
embeddings = torch.cat((anchor_embeddings, positive_embeddings),)
indices = torch.arange(0, anchor_embeddings.size(0))
labels = torch.cat((indices, indices))
# Create loss and miner
_loss = NTXentLoss(0.1)
_miner = MaximumLossMiner(_loss, output_batch_size=16)
loss = CrossBatchMemory(_loss, 128, 1024, _miner)
```
but trying to propagate through the loss
```
loss(embeddings, labels)
```
triggers the following error
```
ValueError Traceback (most recent call last)
<ipython-input-13-6d656e3bd79f> in <module>
----> 1 loss(embeddings, labels)
~/miniconda3/envs/t2t/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
530 result = self._slow_forward(*input, **kwargs)
531 else:
--> 532 result = self.forward(*input, **kwargs)
533 for hook in self._forward_hooks.values():
534 hook_result = hook(self, input, result)
~/miniconda3/envs/t2t/lib/python3.7/site-packages/pytorch_metric_learning/losses/cross_batch_memory.py in forward(self, embeddings, labels, input_indices_tuple)
29 combined_embeddings = torch.cat([embeddings, E_mem], dim=0)
30 combined_labels = torch.cat([labels, L_mem], dim=0)
---> 31 loss = self.loss(combined_embeddings, combined_labels, indices_tuple)
32 self.add_to_memory(embeddings, labels, batch_size)
33 return loss
~/miniconda3/envs/t2t/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
530 result = self._slow_forward(*input, **kwargs)
531 else:
--> 532 result = self.forward(*input, **kwargs)
533 for hook in self._forward_hooks.values():
534 hook_result = hook(self, input, result)
~/miniconda3/envs/t2t/lib/python3.7/site-packages/pytorch_metric_learning/losses/base_metric_loss_function.py in forward(self, embeddings, labels, indices_tuple)
51 self.avg_embedding_norm = torch.mean(self.embedding_norms)
52
---> 53 loss = self.compute_loss(embeddings, labels, indices_tuple)
54 if loss == 0:
55 loss = torch.sum(embeddings*0)
~/miniconda3/envs/t2t/lib/python3.7/site-packages/pytorch_metric_learning/losses/ntxent_loss.py in compute_loss(self, embeddings, labels, indices_tuple)
16 cosine_similarity = cosine_similarity / self.temperature
17
---> 18 a1, p, a2, n = lmu.convert_to_pairs(indices_tuple, labels)
19
20 if len(a1) > 0 and len(a2) > 0:
~/miniconda3/envs/t2t/lib/python3.7/site-packages/pytorch_metric_learning/utils/loss_and_miner_utils.py in convert_to_pairs(indices_tuple, labels)
98 return indices_tuple
99 else:
--> 100 a, p, n = indices_tuple
101 return a, p, a, n
102
ValueError: too many values to unpack (expected 3)
```
Any ideas? I figure I am just doing something silly / misusing one of these classes. Thanks in advance!
|
closed
|
2020-03-20T04:17:11Z
|
2020-03-23T21:38:53Z
|
https://github.com/KevinMusgrave/pytorch-metric-learning/issues/27
|
[
"bug"
] |
JohnGiorgi
| 5
|
onnx/onnx
|
scikit-learn
| 5,870
|
If it is not continuous inference, the running speed will be slower, how to improve?
|
The following code is used to test ONNX inference speed on GPU.
The first run takes very long, which is reasonable. The next 10 runs are continuous inference and the speed is normal, ~10 ms.
But for the last 10 runs, where a 2 s interval is inserted between inferences, the time rises to ~100 ms per run; it seems CUDA needs some time to re-initialize.
How can this be solved? Intermittent inference is what we usually encounter in real situations, and such long inference times are unacceptable.
code:
```
import time

import numpy as np
import onnxruntime as ort

# onnx_path, provider, input_shape and np_type are defined elsewhere
sess = ort.InferenceSession(onnx_path, providers=[provider])
input = np.random.rand(*input_shape).astype(np_type)
epoch = 20
times = []
for i in range(epoch):
    start_time = time.time()
    output = sess.run(["output"], {"input": input})[0]
    end_time = time.time()
    elapsed_time = (end_time - start_time) * 1000
    print(f"ONNX Runtime: {provider}. Running time: {elapsed_time:.0f} ms")
    times.append(elapsed_time)
    print(i)
    if i > 10:
        time.sleep(2)
```
result:
ONNX Runtime: CUDAExecutionProvider. Running time: 9465 ms
0
ONNX Runtime: CUDAExecutionProvider. Running time: 10 ms
1
ONNX Runtime: CUDAExecutionProvider. Running time: 9 ms
2
ONNX Runtime: CUDAExecutionProvider. Running time: 9 ms
3
ONNX Runtime: CUDAExecutionProvider. Running time: 9 ms
4
ONNX Runtime: CUDAExecutionProvider. Running time: 10 ms
5
ONNX Runtime: CUDAExecutionProvider. Running time: 10 ms
6
ONNX Runtime: CUDAExecutionProvider. Running time: 9 ms
7
ONNX Runtime: CUDAExecutionProvider. Running time: 9 ms
8
ONNX Runtime: CUDAExecutionProvider. Running time: 10 ms
9
ONNX Runtime: CUDAExecutionProvider. Running time: 11 ms
10
ONNX Runtime: CUDAExecutionProvider. Running time: 10 ms
11
ONNX Runtime: CUDAExecutionProvider. Running time: 30 ms
12
ONNX Runtime: CUDAExecutionProvider. Running time: 72 ms
13
ONNX Runtime: CUDAExecutionProvider. Running time: 123 ms
14
ONNX Runtime: CUDAExecutionProvider. Running time: 123 ms
15
ONNX Runtime: CUDAExecutionProvider. Running time: 124 ms
16
ONNX Runtime: CUDAExecutionProvider. Running time: 124 ms
17
ONNX Runtime: CUDAExecutionProvider. Running time: 119 ms
18
ONNX Runtime: CUDAExecutionProvider. Running time: 123 ms
|
closed
|
2024-01-20T10:11:55Z
|
2024-01-22T18:21:27Z
|
https://github.com/onnx/onnx/issues/5870
|
[
"question"
] |
hurri2000
| 1
|
apache/airflow
|
automation
| 47,576
|
AIP-38 | Merge Backfill and Trigger Dag Run modals
|
### Body
Problem:
- Create backfill is too hidden away, missing dag run conf and ultimately, is just a version of triggering a run
Solution:
- Add an option to toggle between a regular dag run trigger or a backfill trigger in the same modal. Kind of like how we have materialize asset and create asset event in the same modal.
<img width="927" alt="Image" src="https://github.com/user-attachments/assets/10cb715c-4ceb-467c-8626-00e17d55e86c" />
### Committer
- [x] I acknowledge that I am a maintainer/committer of the Apache Airflow project.
|
open
|
2025-03-10T15:40:03Z
|
2025-03-10T15:40:14Z
|
https://github.com/apache/airflow/issues/47576
|
[
"area:UI",
"area:backfill"
] |
bbovenzi
| 0
|
pydantic/pydantic
|
pydantic
| 10,494
|
(🐞) false positive serialize error with union of `list`s
|
### Initial Checks
- [X] I confirm that I'm using Pydantic V2
### Description
```
pydantic_core._pydantic_core.PydanticSerializationError: Pydantic serializer warnings:
PydanticSerializationUnexpectedValue: Expected `bool` but got `int` with value `2` - serialized value may not be as expected
PydanticSerializationUnexpectedValue: Expected `str` but got `int` with value `2` - serialized value may not be as expected
```
### Example Code
```Python
from pydantic import TypeAdapter, BaseModel
TypeAdapter(list[bool | str] | list[int]).dump_python([2], warnings="error")
```
### Python, Pydantic & OS Version
```Text
pydantic version: 2.9.2
pydantic-core version: 2.23.4
pydantic-core build: profile=release pgo=false
install path: C:\Users\AMONGUS\projects\test\.venv\Lib\site-packages\pydantic
python version: 3.12.2 (tags/v3.12.2:6abddd9, Feb 6 2024, 21:26:36) [MSC v.1937 64 bit (AMD64)]
platform: Windows-10-10.0.19045-SP0
related packages: mypy-1.11.2 typing_extensions-4.12.2
commit: unknown
```
|
closed
|
2024-09-26T03:01:25Z
|
2024-11-15T13:02:11Z
|
https://github.com/pydantic/pydantic/issues/10494
|
[
"bug V2"
] |
KotlinIsland
| 11
|
aleju/imgaug
|
deep-learning
| 64
|
Rotation by 90 degrees results in a black border
|
Running `iaa.Affine(rotate=90)` results in a black border for a square image.

|
closed
|
2017-09-22T08:59:23Z
|
2017-09-22T20:10:37Z
|
https://github.com/aleju/imgaug/issues/64
|
[] |
BAILOOL
| 1
|
microsoft/hummingbird
|
scikit-learn
| 702
|
PyTorch SGDClassifier `predict` result does not match Sklearn model
|
# Bug report
When I convert a `SGDClassifier` model initialised with loss function `modified huber`, found:
```python3
import numpy as np
from sklearn.linear_model import SGDClassifier
import hummingbird.ml
# prepare data
np.random.seed(0)
train_x = np.random.rand(200, 50)
train_y = np.random.randint(10, size=200)
test_x = np.random.rand(100, 50)
# convert
model = SGDClassifier(loss='modified_huber')
model.fit(train_x, train_y)
hb_model = hummingbird.ml.convert(model, 'torch')
```
```python3
# expected result
model.predict(test_x)
array([2, 6, 2, 8, 2, 2, 9, 8, 7, 8, 8, 9, 2, 7, 2, 4, 2, 4, 9, 8, 7, 8,
2, 9, 2, 9, 6, 8, 8, 0, 2, 9, 9, 2, 8, 9, 4, 8, 0, 7, 9, 5, 7, 9,
7, 0, 2, 8, 2, 3, 6, 2, 8, 9, 2, 8, 9, 2, 2, 7, 8, 8, 9, 2, 8, 6,
4, 9, 0, 8, 9, 7, 9, 2, 6, 2, 8, 8, 4, 9, 2, 8, 9, 6, 4, 2, 9, 8,
7, 2, 7, 9, 8, 3, 9, 1, 8, 2, 2, 9])
# In fact
hb_model.predict(test_x)
array([2, 1, 2, 6, 0, 2, 6, 0, 2, 8, 8, 9, 2, 0, 2, 0, 2, 2, 2, 8, 2, 7,
2, 9, 2, 9, 0, 8, 0, 0, 2, 7, 9, 2, 8, 2, 2, 8, 0, 1, 1, 5, 7, 9,
7, 0, 2, 8, 2, 0, 0, 2, 8, 2, 2, 2, 6, 1, 2, 0, 8, 8, 2, 2, 8, 2,
2, 9, 0, 2, 7, 7, 9, 0, 0, 2, 4, 8, 4, 9, 2, 8, 9, 6, 4, 2, 0, 0,
2, 2, 7, 6, 8, 3, 2, 1, 7, 2, 2, 2])
```
The result does not match.
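If it helps narrow things down: for `modified_huber`, scikit-learn maps one-vs-rest decision values to probabilities by clipping and normalizing, and small numerical differences in the converted model can flip the argmax. A rough sketch of that mapping (my reading of scikit-learn's behavior, for illustration — not Hummingbird's actual code):

```python
def modified_huber_proba(decision_scores):
    """Approximate scikit-learn's modified_huber probability mapping:
    clip raw one-vs-rest decision values to [-1, 1], shift into [0, 1],
    then normalize the row to sum to 1."""
    shifted = [(min(1.0, max(-1.0, d)) + 1.0) / 2.0 for d in decision_scores]
    total = sum(shifted)
    if total == 0.0:
        # degenerate row: fall back to a uniform distribution
        return [1.0 / len(shifted)] * len(shifted)
    return [s / total for s in shifted]

print(modified_huber_proba([2.0, -2.0]))  # → [1.0, 0.0]
```

Values near the ±1 clipping boundary are exactly where tiny float discrepancies between the two implementations could change which class wins.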
# Environment
- Hummingbird version tested on: 0.4.8
- numpy version: 1.24.3
- Operating System: MacOS Ventura 13.3.1 (a)
|
closed
|
2023-05-15T14:23:08Z
|
2023-05-15T23:02:08Z
|
https://github.com/microsoft/hummingbird/issues/702
|
[] |
ytwei3
| 0
|
jupyter-incubator/sparkmagic
|
jupyter
| 503
|
Unable to start Spark or PySpark kernels - No module named winkerberos
|
On creating a new notebook with either the Spark or PySpark kernel on a CentOS 7 machine, this error appears and the kernel never starts:
```[I 11:16:51.162 NotebookApp] Kernel started: 82b4f257-29f9-46bb-82db-4057701ff819
/export/apps/python/2.7/bin/python: No module named winkerberos
[I 11:16:54.163 NotebookApp] KernelRestarter: restarting kernel (1/5), new random ports
/export/apps/python/2.7/bin/python: No module named winkerberos
[I 11:16:57.171 NotebookApp] KernelRestarter: restarting kernel (2/5), new random ports
/export/apps/python/2.7/bin/python: No module named winkerberos
[I 11:17:00.181 NotebookApp] KernelRestarter: restarting kernel (3/5), new random ports
/export/apps/python/2.7/bin/python: No module named winkerberos
[I 11:17:03.189 NotebookApp] KernelRestarter: restarting kernel (4/5), new random ports
/export/apps/python/2.7/bin/python: No module named winkerberos
[W 11:17:06.198 NotebookApp] KernelRestarter: restart failed
[W 11:17:06.199 NotebookApp] Kernel 82b4f257-29f9-46bb-82db-4057701ff819 died, removing from map.
```
I'm not sure if `winkerberos` can be installed on a Linux machine, though it was mentioned as an optional step in the README.
|
open
|
2019-01-19T11:21:25Z
|
2020-06-01T21:12:09Z
|
https://github.com/jupyter-incubator/sparkmagic/issues/503
|
[
"kind:bug",
"awaiting-submitter-response"
] |
Nithanaroy
| 9
|
cvat-ai/cvat
|
tensorflow
| 8,707
|
how to set a shorchut key for AI interactor?
|
### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Is your feature request related to a problem? Please describe.
_No response_
### Describe the solution you'd like
The AI interactor is very useful for image segmentation, but I did not find any shortcut key for this function. I have to re-select it after annotating each instance. How can I set up a hotkey for this?
### Describe alternatives you've considered
_No response_
### Additional context
_No response_
|
closed
|
2024-11-15T07:58:36Z
|
2024-11-18T08:25:36Z
|
https://github.com/cvat-ai/cvat/issues/8707
|
[
"enhancement"
] |
ywangwxd
| 1
|
pallets/flask
|
python
| 4,428
|
Flask 2 binds to default host address when --host=0.0.0.0 is used
|
## Description
Flask 2.0.x ignores the `--host` parameter when it is set to "0.0.0.0", binding to its own host address instead.
## How to reproduce
Given the following server
```python3
from flask import Flask
# Create the http server
app = Flask(__name__)
@app.route("/")
def hello():
return "Hello World!"
```
Executing Flask 2.0.0, 2.0.1 or 2.0.2
```shell
# flask run --host=0.0.0.0 --port=7000
* Environment: development
* Debug mode: on
* Running on all addresses.
WARNING: This is a development server. Do not use it in a production deployment.
* Running on http://10.235.32.9:7000/ (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
* Debugger PIN: 104-550-245
```
Executing Flask 1.1.4
```shell
root@ngd-k3s-example-6bwtc:/app# flask run --host=0.0.0.0 --port=7000
* Environment: development
* Debug mode: on
* Running on http://0.0.0.0:7000/ (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
* Debugger PIN: 215-924-045
```
## Expected behavior
With the parameters used, I expected Flask 2.0.x to also bind and listen on http://0.0.0.0:7000/.
## Environments:
(1)
- Python version: Python 3.8.10
- Flask version: 2.0.1
- OS: Ubuntu 20.04.3 LTS
(2)
- Python version: Python 3.10.2
- Flask version: 2.0.0, 2.0.1 or 2.0.2
- OS: Debian GNU/Linux 11 (bullseye)
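For what it's worth, a socket bound to `0.0.0.0` listens on all interfaces regardless of what address a startup banner prints, so this may be only a log-message change in Flask 2 ("Running on all addresses." plus a convenience LAN URL). A quick stdlib check, independent of Flask:

```python
import socket

# Bind a throwaway socket to all interfaces on an ephemeral port and
# inspect the address the OS reports back.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("0.0.0.0", 0))
host, port = s.getsockname()
print(host)        # → 0.0.0.0
print(port > 0)    # → True
s.close()
```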
|
closed
|
2022-01-21T19:00:34Z
|
2022-02-05T00:03:37Z
|
https://github.com/pallets/flask/issues/4428
|
[] |
diegofps
| 2
|
wkentaro/labelme
|
deep-learning
| 529
|
json_to_dataset.py
|
When I run `json_to_dataset.py home/xxx.json`, it fails with the following error:
[WARNING] This script is aimed to demonstrate how to convert the JSON file to a single image dataset, and not to handle multiple JSON files to generate a real-use dataset. (json_to_dataset.py:16)
Traceback (most recent call last):
File "c:\users\xxx\anaconda3\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "c:\users\xxx\anaconda3\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\xxx\Anaconda3\Scripts\labelme_json_to_dataset.exe\__main__.py", line 9, in <module>
File "c:\users\xxx\anaconda3\lib\site-packages\labelme\cli\json_to_dataset.py", line 59, in main
label=lbl, img=img, label_names=label_names, loc='rb'
File "c:\users\xxxanaconda3\lib\site-packages\imgviz\label.py", line 113, in label2rgb
legend, aabb1, aabb2, fill=(255, 255, 255))
File "c:\users\xxx\anaconda3\lib\site-packages\imgviz\draw.py", line 181, in rectangle
xy=(x1, y1, x2, y2), fill=fill, outline=outline, width=width
TypeError: rectangle() got an unexpected keyword argument 'width'
|
closed
|
2019-12-27T15:45:11Z
|
2022-09-05T03:00:09Z
|
https://github.com/wkentaro/labelme/issues/529
|
[] |
qvduoduo1997
| 7
|
fastapi/sqlmodel
|
sqlalchemy
| 383
|
Not able to read data via relationship back-populates
|
### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
class TripInput(SQLModel):
start: int
end: int
description: str
class TripOutput(TripInput):
id: int
class Trip(TripInput, table=True):
id: Optional[int] = Field(default=None, primary_key = True)
car_id: int = Field(foreign_key="car.id")
car: "Car" = Relationship(back_populates="trips")
class CarInput(SQLModel):
size: str
fuel: str = "electric"
doors: int
transmission: str = "auto"
class Car(CarInput, table=True):
id: Optional[int] = Field(primary_key = True, default=None)
trips: list[Trip] = Relationship(back_populates="car")
class CarOutput(CarInput):
id: int
trips: list[TripOutput] = []
@app.get('/api/cars/{id}', response_model=CarOutput)
def car_by_id(id: int, session: Session = Depends(get_session)) -> Car:
car = session.get(Car, id)
if car:
query = select(Trip).where(Trip.car_id == id)
triplist = session.exec(query).all()
car.trips.append(triplist)
return car
else:
raise HTTPException(status_code=400, detail=f"No car with id {id} found")
start  end  description  id  car_id
0      100  manali trip  1   1
300    600  Leh          2   1
0      20   Mall Visit   3   2

size  fuel    doors  transmission  id
m     hybrid  4      auto          1
l     hybrid  5      auto          2
```
### Description
I've created a carsharing app, which has car and trip classes. Both have a back populates relationship on them with the other object. I am using SQLite, and database already has a Car table with 2 entries, and trip table with 3 entries as shown in the example code.
I want to return the car when a GET call is made with car_id as input.
But I get an internal server error as below:
car.trips.append(triplist)
AttributeError: 'Car' object has no attribute 'trips'
I've read the documentation and am still not able to resolve this. Appreciate any help.
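Separately from the attribute error, note that `car.trips.append(triplist)` would nest the whole query result as a single element even once `trips` resolves; `extend` is what adds items one by one. A plain-Python illustration:

```python
trips = []
triplist = ["trip_1", "trip_2"]

trips.append(triplist)   # nests the list as ONE element
print(trips)             # → [['trip_1', 'trip_2']]

trips = []
trips.extend(triplist)   # adds each element individually
print(trips)             # → ['trip_1', 'trip_2']
```

With a working `Relationship(back_populates=...)`, the manual query and append should be unnecessary anyway — accessing `car.trips` loads the related rows.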
[code.zip](https://github.com/tiangolo/sqlmodel/files/9162474/code.zip)
### Operating System
Windows
### Operating System Details
_No response_
### SQLModel Version
0.0.6
### Python Version
3.9.4
### Additional Context
_No response_
|
open
|
2022-07-21T20:10:49Z
|
2022-07-25T20:40:13Z
|
https://github.com/fastapi/sqlmodel/issues/383
|
[
"question"
] |
syncopatedGlitch
| 3
|
babysor/MockingBird
|
pytorch
| 849
|
pip install -r requirements.txt 出错
|
The `pip install -r requirements.txt` step fails; all earlier steps worked fine. Everything was done inside anaconda.
Preparing metadata (pyproject.toml) ... error
error: subprocess-exited-with-error
× Preparing metadata (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [275 lines of output]
...
TypeError: CCompiler_spawn() got an unexpected keyword argument 'env'
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
|
open
|
2023-03-13T14:31:21Z
|
2023-05-25T11:24:06Z
|
https://github.com/babysor/MockingBird/issues/849
|
[] |
erelief
| 6
|
PaddlePaddle/PaddleNLP
|
nlp
| 9,908
|
[Bug]: Error during the evaluation stage after training
|
### Software environment
```Markdown
- paddlepaddle:
- paddlepaddle-gpu: 3.0.0rc1
- paddlenlp: 3.0.0b3
```
### Duplicate check
- [x] I have searched the existing issues
### Error description
```Markdown
Data I constructed myself:
python doccano.py --negative_ratio 5 --doccano_file ./data/doccano_ext.jsonl --task_type ext --save_dir ./data --splits 0.8 0.1 0.1 --schema_lang en
Training command:
python finetune.py --device gpu --logging_steps 10 --save_steps 100 --eval_steps 100 --seed 42 --model_name_or_path uie-m-large --output_dir $finetuned_model --train_path data/train.txt --dev_path data/dev.txt --max_seq_length 512 --per_device_eval_batch_size 8 --per_device_train_batch_size 8 --num_train_epochs 20 --learning_rate 1e-5 --label_names "start_positions" "end_positions" --do_train --do_eval --do_export --export_model_dir $finetuned_model --overwrite_output_dir --disable_tqdm True --metric_for_best_model eval_f1 --load_best_model_at_end True --save_total_limit 1
Error:
[2025-02-20 06:44:11,305] [ INFO] - ***** Running Evaluation *****
[2025-02-20 06:44:11,305] [ INFO] - Num examples = 170
[2025-02-20 06:44:11,305] [ INFO] - Total prediction steps = 22
[2025-02-20 06:44:11,306] [ INFO] - Pre device batch size = 8
[2025-02-20 06:44:11,306] [ INFO] - Total Batch size = 8
[[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
...
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]]
label_ids contains 1
<class 'numpy.ndarray'>
(170, 512)
Traceback (most recent call last):
File "/root/autodl-tmp/PaddleNLP/slm/model_zoo/uie/finetune.py", line 269, in <module>
main()
File "/root/autodl-tmp/PaddleNLP/slm/model_zoo/uie/finetune.py", line 200, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/root/miniconda3/envs/uie/lib/python3.9/site-packages/paddlenlp/trainer/trainer.py", line 872, in train
return self._inner_training_loop(
File "/root/miniconda3/envs/uie/lib/python3.9/site-packages/paddlenlp/trainer/trainer.py", line 1231, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, epoch, ignore_keys_for_eval, inputs=inputs)
File "/root/miniconda3/envs/uie/lib/python3.9/site-packages/paddlenlp/trainer/trainer.py", line 1521, in _maybe_log_save_evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/root/miniconda3/envs/uie/lib/python3.9/site-packages/paddlenlp/trainer/trainer.py", line 2989, in evaluate
output = self.evaluation_loop(
File "/root/miniconda3/envs/uie/lib/python3.9/site-packages/paddlenlp/trainer/trainer.py", line 3208, in evaluation_loop
metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=batch_labels))
File "/root/autodl-tmp/PaddleNLP/slm/model_zoo/uie/finetune.py", line 168, in compute_metrics
start_ids, end_ids = p.label_ids
ValueError: too many values to unpack (expected 2)
```
### Steps to reproduce & code
Unpacking `p.label_ids` raises the error.
|
open
|
2025-02-19T23:07:52Z
|
2025-03-17T08:35:11Z
|
https://github.com/PaddlePaddle/PaddleNLP/issues/9908
|
[
"bug"
] |
jqtian123
| 1
|
yt-dlp/yt-dlp
|
python
| 12,266
|
Fixing Reverse Playlist Download Start Issue in YouTube-DL Options
|
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [x] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [x] I'm asking a question and **not** reporting a bug or requesting a feature
- [x] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates
- [x] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Please make sure the question is worded well enough to be understood
I encountered an issue while setting custom download options for `youtube-dl`. Below is the configuration I used in the `_get_ydl_options()` method:
```python
def _get_ydl_options(self, playlist_start: int, reverse_download: bool = False) -> Dict:
"""Get the options for youtube-dl."""
options = {
"format": f"(bestvideo[height<={self.max_resolution}]+bestvideo[height<=720][vcodec^=avc1]+bestaudio/best)",
"outtmpl": "%(autonumber)02d_%(title)s.%(ext)s",
"autonumber_start": playlist_start,
"playliststart": playlist_start,
"restrictfilenames": False,
"nooverwrites": True,
"writedescription": True,
"writeinfojson": False,
"writeannotations": True,
"writethumbnail": False,
"writesubtitles": False,
"writeautomaticsub": False,
"ignoreerrors": True,
"rm_cache_dir": True,
}
if reverse_download:
options["playlistreverse"] = True
return options
```
The issue arises when I set `playlistreverse=True`. Even after specifying a starting point (`playlist_start`), the download begins from the first video in the reversed order (which is the last video in the regular order), ignoring my specified start point.
**Expected Behavior:**
Start downloading from the specified starting point even in reverse.
**Actual Behavior:**
Downloads begin from the last video instead.
**Question:**
How can I modify the options or approach to ensure the playlist starts at the desired point in reverse mode?
Any advice or recommendations would be greatly appreciated!
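As I understand it, `playliststart` is applied before `playlistreverse`, so the slice is taken in the original order and only then reversed. The selection I expected could be sketched like this (plain Python, just to state the intent):

```python
def reverse_from(items, start):
    """Return `items` in reverse order, beginning at 1-based position
    `start` of the REVERSED sequence (what I expected yt-dlp to do)."""
    return list(reversed(items))[start - 1:]

playlist = ["v1", "v2", "v3", "v4", "v5"]
print(reverse_from(playlist, 2))  # → ['v4', 'v3', 'v2', 'v1']
```

Newer yt-dlp also supports reversed slices via `playlist_items` (e.g. `"::-1"`), which may express this directly — worth checking against your version.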
### Provide verbose output that clearly demonstrates the problem
- [x] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [x] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [x] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
```
|
closed
|
2025-02-03T10:35:08Z
|
2025-02-08T04:07:50Z
|
https://github.com/yt-dlp/yt-dlp/issues/12266
|
[
"question"
] |
MammadTavakoli
| 2
|
PokeAPI/pokeapi
|
graphql
| 505
|
Update Pokémon Dream World sprites
|
The sprites of *Pokémon Dream World* in the repo are very outdated. You can find an updated database in **Bulbapedia**:
[Pokémon Dream World](https://archives.bulbagarden.net/wiki/Category:Pok%C3%A9mon_Dream_World_artwork)
[Pokémon Dream World Items](https://archives.bulbagarden.net/wiki/Category:Pok%C3%A9mon_Dream_World_items)
Sadly **Bulbapedia** does not have a *Download All* button, so the sprites must be downloaded one by one. For the record, I do not recommend downloading only the ones you don't have, because they have apparently resized all items (they are smaller and have a transparent margin), so if you mix them the layout will be quite inconsistent.
|
open
|
2020-06-29T22:36:07Z
|
2020-07-06T16:19:34Z
|
https://github.com/PokeAPI/pokeapi/issues/505
|
[] |
LateusBetelgeuse
| 6
|
huggingface/transformers
|
tensorflow
| 36,683
|
AttributeError: 'Gemma3Config' object has no attribute 'vocab_size'
|
### System Info
v4.50.0.dev0
### Who can help?
@ArthurZucker
@LysandreJik
@xenova
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I am trying to run the new Gemma3 model, using version '4.50.0.dev0'.
When loading the model I get the error: 'Gemma3Config' object has no attribute 'vocab_size'.
Looking into this it seems Gemma3Config has `vocab_size` nested in a "text_config" attribute.
I am trying to load the model as `AutoModelForCausalLM`; running it with `Gemma3ForConditionalGeneration` does not raise this issue. Am I wrong in assuming I can run Gemma 3 as `AutoModelForCausalLM`?
### Expected behavior
Loading the model as AutoModelForCausalLM.from_pretrained without issue.
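A workaround while loading paths catch up could be to read `vocab_size` with a fallback to the nested `text_config` (hypothetical helper, sketched with plain objects; vocab sizes are illustrative):

```python
from types import SimpleNamespace

def get_vocab_size(config):
    """Read vocab_size from the config itself, falling back to the
    nested text_config used by multimodal configs such as Gemma3's."""
    if hasattr(config, "vocab_size"):
        return config.vocab_size
    return config.text_config.vocab_size

flat = SimpleNamespace(vocab_size=32000)
nested = SimpleNamespace(text_config=SimpleNamespace(vocab_size=262144))
print(get_vocab_size(flat))    # → 32000
print(get_vocab_size(nested))  # → 262144
```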
|
closed
|
2025-03-12T18:11:39Z
|
2025-03-22T12:08:48Z
|
https://github.com/huggingface/transformers/issues/36683
|
[
"bug"
] |
jumelet
| 16
|
neuml/txtai
|
nlp
| 23
|
Add batch indexing for transformer indices
|
Currently, sentence-transformer-based indices index documents one at a time. Calls to sentence-transformers should be batched together to decrease indexing time.
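The batching itself is straightforward; a minimal sketch of grouping documents before each encode call (hypothetical helper, not txtai's internal API):

```python
def batched(items, size):
    """Yield successive fixed-size batches so embedding calls can be
    grouped instead of issued one document at a time."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

docs = ["d1", "d2", "d3", "d4", "d5"]
print(list(batched(docs, 2)))  # → [['d1', 'd2'], ['d3', 'd4'], ['d5']]
```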
|
closed
|
2020-09-10T21:28:43Z
|
2021-05-13T15:03:09Z
|
https://github.com/neuml/txtai/issues/23
|
[] |
davidmezzetti
| 0
|
Lightning-AI/pytorch-lightning
|
machine-learning
| 20,027
|
[Fabric Lightning] Named barriers
|
### Description & Motivation
To prevent ranks from losing alignment due to user error, it would be beneficial to have named barriers in Lightning, allowing nodes to move forward only if the same barrier name is met.
### Pitch
For example:
```
if fabric.global_rank == 0:
fabric.barrier("rank_0")
else:
fabric.barrier("not_rank_0")
```
will fail in this case, and upon timeout each rank will raise an error with the barrier at which it is held up.
This is as opposed to a potential user error where, due to incorrect logic, the various ranks take different paths and reach some other barrier, which in turn enables the whole flow to continue.
An issue that will likely repeat itself is with `fabric.save`. It is not obvious to new users (who don't dig into the documentation) that this should be called on all nodes, as it implements its own internal barrier call.
A typical mistake would be to construct
```
if fabric.global_rank == 0:
fabric.save(...)
fabric.barrier()
do_training_stuff
fabric.barrier()
```
In this case, rank 0 will start to lag behind as it performs an additional barrier call.
If `fabric.save` would implement `fabric.barrier("save")` then the above program would exit printing that there is an alignment issue.
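The idea could be prototyped outside Lightning with a plain threading barrier (a hypothetical sketch, not Fabric's API):

```python
import threading

class NamedBarrier:
    """Hypothetical sketch of the pitch: a barrier that records the name
    each party arrives with and raises if the names disagree, surfacing
    rank misalignment early."""

    def __init__(self, parties):
        self._names = []
        self._lock = threading.Lock()
        self._barrier = threading.Barrier(parties)

    def wait(self, name):
        with self._lock:
            self._names.append(name)
        # Every party appends its name before blocking here, so once the
        # barrier releases, the full list of names is visible to all.
        self._barrier.wait()
        with self._lock:
            if len(set(self._names)) > 1:
                raise RuntimeError(f"barrier name mismatch: {self._names}")
```

A real implementation would also need a timeout and per-use reset, but with this, the `fabric.save` misuse above would show up as a name mismatch instead of a silent hang.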
### Alternatives
_No response_
### Additional context
https://github.com/Lightning-AI/pytorch-lightning/issues/19780
cc @borda @awaelchli
|
open
|
2024-06-28T11:14:00Z
|
2024-06-28T12:25:44Z
|
https://github.com/Lightning-AI/pytorch-lightning/issues/20027
|
[
"feature",
"help wanted",
"distributed"
] |
tesslerc
| 1
|
huggingface/datasets
|
pandas
| 7,306
|
Creating new dataset from list loses information. (Audio Information Lost - either Datatype or Values).
|
### Describe the bug
When creating a dataset from a list of datapoints, information about the individual items is lost.
Specifically, when creating a dataset from a list of datapoints (from another dataset), either the datatype is lost or the values are lost. See the examples below.
---
e.g.:
**When running this code:**
```python
from datasets import load_dataset, Dataset
commonvoice_data = load_dataset("mozilla-foundation/common_voice_17_0", "it", split="test", streaming=True)
datapoint = next(iter(commonvoice_data))
out = [datapoint]
new_data = Dataset.from_list(out) #this loses datatype information
new_data2= Dataset.from_list(out,features=commonvoice_data.features) #this loses value information
```
**We get the following**:
---
1. `datapoint`: (the original datapoint)
```
'audio': {'path': 'it_test_0/common_voice_it_23606167.mp3', 'array': array([0.00000000e+00, 0.00000000e+00, 0.00000000e+00, ...,
2.21619011e-05, 2.72628222e-05, 0.00000000e+00]), 'sampling_rate': 48000}
```
Original Dataset Features:
```
>>> commonvoice_data.features
'audio': Audio(sampling_rate=48000, mono=True, decode=True, id=None)
```
- Here we see that the column "audio" has the proper values (both `path` and `array`) and the correct datatype (Audio).
----
2. new_data[0]:
```
# Cannot be printed (as it prints the entire array).
```
New Dataset 1 Features:
```
>>> new_data.features
'audio': {'array': Sequence(feature=Value(dtype='float64', id=None), length=-1, id=None), 'path': Value(dtype='string', id=None), 'sampling_rate': Value(dtype='int64', id=None)}
```
- Here we see that the column "audio" has the correct values, but is no longer the Audio datatype.
---
3. new_data2[0]:
```
'audio': {'path': None, 'array': array([0., 0., 0., ..., 0., 0., 0.]), 'sampling_rate': 48000},
```
New Dataset 2 Features:
```
>>> new_data2.features
'audio': Audio(sampling_rate=48000, mono=True, decode=True, id=None),
```
- Here we see that the column "audio" has the correct datatype, but all the array & path values were lost!
### Steps to reproduce the bug
## Run:
```python
from datasets import load_dataset, Dataset
commonvoice_data = load_dataset("mozilla-foundation/common_voice_17_0", "it", split="test", streaming=True)
datapoint = next(iter(commonvoice_data))
out = [datapoint]
new_data = Dataset.from_list(out) #this loses datatype information
new_data2= Dataset.from_list(out,features=commonvoice_data.features) #this loses value information
```
### Expected behavior
## Expected:
```datapoint == new_data[0]```
AND
```datapoint == new_data2[0]```
### Environment info
- `datasets` version: 3.1.0
- Platform: Linux-6.2.0-37-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.26.2
- PyArrow version: 15.0.2
- Pandas version: 2.2.2
- `fsspec` version: 2024.3.1
|
open
|
2024-12-05T09:07:53Z
|
2024-12-05T09:09:38Z
|
https://github.com/huggingface/datasets/issues/7306
|
[] |
ai-nikolai
| 0
|
onnx/onnx
|
pytorch
| 5,868
|
A way to convert NCHW -> NHWC model in onnx
|
# Ask a Question
### Question
Is there a way to change the input memory format from NCHW to NHWC in ONNX? I have a PyTorch model, and used `x.to(memory_format=torch.channels_last)` for inputs and the model itself during training. However, when converting to ONNX, the shape is back to 1,3,256,256.
I'm using this code for conversion:
```
example_input = torch.randn(1, 3, 256, 256)
example_input = example_input.to(device, memory_format=torch.channels_last)
torch.onnx.export(model, example_input, 'path/to/save/', input_names=["input"], output_names=["predictions"], export_params=True)
```
I'm aware that my `example_input` here is NCHW because PyTorch's conv2d requires input in this format, but the use of `torch.channels_last` supposedly applies to conv2d on GPU.
My end goal is conversion from PyTorch -> ONNX -> TensorRT, all maintaining this NHWC input. Any recommendations would be appreciated!
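For reference, the layout change is just a permutation of axes, (N, C, H, W) → (N, H, W, C); a plain-Python sketch on nested lists (in a real graph this would be a `Transpose` node with `perm=[0, 2, 3, 1]`):

```python
def nchw_to_nhwc(t):
    """Permute a nested-list tensor from (N, C, H, W) to (N, H, W, C)."""
    n_, c_, h_, w_ = len(t), len(t[0]), len(t[0][0]), len(t[0][0][0])
    return [[[[t[n][c][h][w] for c in range(c_)]
              for w in range(w_)]
             for h in range(h_)]
            for n in range(n_)]

x = [[[[1]], [[2]], [[3]]]]   # shape (1, 3, 1, 1)
print(nchw_to_nhwc(x))        # → [[[[1, 2, 3]]]], shape (1, 1, 1, 3)
```

Note that `torch.channels_last` only changes the in-memory stride order, not the logical shape, which is why the exported ONNX input still reads (1, 3, 256, 256).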
|
open
|
2024-01-19T16:09:32Z
|
2025-01-25T06:42:28Z
|
https://github.com/onnx/onnx/issues/5868
|
[
"question",
"stale"
] |
nathanjacobiOXOS
| 1
|
nvbn/thefuck
|
python
| 638
|
When pushing to or pulling from a git repository that needs merging, open mergetool.
|
Feature request :)
Often I push or pull from a git repository and get the error that it needs merging. Typing fuck currently just prompts me to enter my credentials again, then gives me the same error.
|
open
|
2017-04-28T05:00:15Z
|
2017-04-28T05:00:15Z
|
https://github.com/nvbn/thefuck/issues/638
|
[] |
copycatchiller
| 0
|
plotly/plotly.py
|
plotly
| 4,465
|
go.Histogram2dContour does not normalize when using shared coloraxis and coloring='fill'
|
Setup:
`plotly==5.18.0`
Bug:
When adding two `go.Histogram2dContour` traces to a subplot, the colorscale is not normalized when using a shared coloraxis and setting `coloring='fill'`.
It works when setting `coloring='heatmap'`.
Dataframes used:
```
df_a = pd.DataFrame({
'a':[0,0,1,1],
'b':[0,1,0,1],
'c':[0,1,2,3],
})
df_b = pd.DataFrame({
'a':[0,0,1,1],
'b':[0,1,0,1],
'c':[10,20,30,40],
})
```
Example `coloring='fill'`:

Example `coloring='heatmap'`:

Code:
```
fig = make_subplots(rows=1,cols=2)
fig.add_trace(
go.Histogram2dContour(
x=df_a['a'].to_list(),
y=df_a['b'].to_list(),
z=df_a['c'].to_list(),
# contours=dict(coloring="heatmap"),
contours=dict(coloring="fill"),
coloraxis="coloraxis",
showscale=True,
showlegend=False,
histfunc='max',
),
row=1,
col=1
)
fig.add_trace(
go.Histogram2dContour(
x=df_b['a'].to_list(),
y=df_b['b'].to_list(),
z=df_b['c'].to_list(),
# contours=dict(coloring="heatmap"),
contours=dict(coloring="fill"),
coloraxis="coloraxis",
showscale=True,
showlegend=False,
histfunc='max',
),
row=1,
col=2
)
fig.show()
```
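If it's useful for triage: with a shared coloraxis, both traces should be mapped through one common (min, max) range; a minimal sketch of the normalization that appears to be skipped with `coloring='fill'` (plain Python, just to state the expected behavior):

```python
def normalize(values, vmin, vmax):
    """Map values into [0, 1] against a shared color range, which is
    what a shared coloraxis should do across both subplots."""
    span = vmax - vmin
    return [(v - vmin) / span for v in values]

# df_a spans 0..3 and df_b spans 10..40, so the shared range is (0, 40):
print(normalize([10, 40], 0, 40))  # → [0.25, 1.0]
```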
|
open
|
2023-12-17T16:49:59Z
|
2024-08-12T13:41:14Z
|
https://github.com/plotly/plotly.py/issues/4465
|
[
"bug",
"sev-2",
"P3"
] |
PhilippHa3
| 1
|
predict-idlab/plotly-resampler
|
data-visualization
| 249
|
[BUG] FigureResampler, FigureWidgetResampler, and register_plotly_resampler do not work for box traces
|
My original figure is about 20 MB. I tried the three methods in a Dash callback to return the figure to the web frontend, but it seems the figure is not compressed.
Code as follows:
```python
@callback(
    Output("dd-figure-container", "children"),
    Input("demo-dropdown", "value"),
    State("sessionStore", "data"),
)
def update_output(value, sessionID):
    data = query_data(sessionID)
    fig = plot_box(data, value)
    return dcc.Graph(figure=fig)

register_plotly_resampler(mode='auto')

def plot_box(data, select_column):
    fig = FigureWidgetResampler(go.Figure())
    fig.add_trace(
        go.Box(
            x=data[select_column],
            y=data.iloc[:, 1],
            boxpoints="outliers",
        )
    )
    return fig
```
The data is the same.
|
closed
|
2023-07-26T02:52:14Z
|
2023-07-28T08:21:40Z
|
https://github.com/predict-idlab/plotly-resampler/issues/249
|
[
"bug"
] |
joshua-xia
| 4
|
marshmallow-code/flask-smorest
|
rest-api
| 157
|
TypeError: use_args() got an unexpected keyword argument 'location'
|
The error is very similar to #117, so I'm wondering if there's something outdated on my side. Anyway, this is the trace for one of the failing tests:
```
self = <tests.test_pagination.TestPagination object at 0x803b8c750>, app = <Flask 'API Test'>, schemas = Model(DocSchema=<class 'tests.conftest.schemas.<locals>.DocSchema'>, DocEtagSchema=<class 'tests.conftest.schemas.<locals>.DocEtagSchema'>, QueryArgsSchema=<class 'tests.conftest.schemas.<locals>.QueryArgsSchema'>)
def test_pagination_parameters_and_query_string_args(self, app, schemas):
api = Api(app)
blp = Blueprint('test', __name__, url_prefix='/test')
@blp.route('/')
@blp.arguments(schemas.QueryArgsSchema, location="query")
@blp.response()
> @blp.paginate(Page)
def func(query_args):
tests/test_pagination.py:350:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
func = <function TestPagination.test_pagination_parameters_and_query_string_args.<locals>.func at 0x80438a7a0>
def decorator(func):
@wraps(func)
def wrapper(*f_args, **f_kwargs):
return func(*f_args, **f_kwargs)
# Add parameter to parameters list in doc info in function object
# The deepcopy avoids modifying the wrapped function doc
wrapper._apidoc = deepcopy(getattr(wrapper, '_apidoc', {}))
docs = wrapper._apidoc.setdefault('arguments', {})
docs.setdefault('parameters', []).append(parameters)
docs.setdefault('responses', {})[
error_status_code
] = http.HTTPStatus(error_status_code).name
# Call use_args (from webargs) to inject params in function
return self.ARGUMENTS_PARSER.use_args(
> schema, location=location, **kwargs)(wrapper)
E TypeError: use_args() got an unexpected keyword argument 'location'
```
OS is FreeBSD and Python is 3.7.7.
|
closed
|
2020-06-07T21:13:18Z
|
2020-06-07T21:19:56Z
|
https://github.com/marshmallow-code/flask-smorest/issues/157
|
[] |
mekanix
| 1
|
microsoft/qlib
|
machine-learning
| 952
|
is it possible to add inheritance graph or UML to Qlib docs?
|
## 📖 Documentation
<!-- Please specify whether it's tutorial part or API reference part, and describe it.-->
Hi, for the API documentation, is it possible to add an inheritance graph or UML to the docs? It would be very helpful for anyone who wants to write additional Ops or other classes.
Thanks!
|
closed
|
2022-03-05T14:01:38Z
|
2022-06-14T09:02:03Z
|
https://github.com/microsoft/qlib/issues/952
|
[
"stale"
] |
wan9c9
| 4
|
ranaroussi/yfinance
|
pandas
| 1,463
|
Add retry in yf.download to avoid random data not found
|
**Describe the problem**
we typically save all tickers in a list and then download them with yf:
```python
tickers = pd.read_excel('tickers.xlsx').to_list()
stock_data_close = pd.DataFrame(yf.download(tickers, start=start_date, end=end_date)['Adj Close'])
stock_data_open = pd.DataFrame(yf.download(tickers, start=start_date, end=end_date)['Open'])
stock_data_high = pd.DataFrame(yf.download(tickers, start=start_date, end=end_date)['High'])
stock_data_low = pd.DataFrame(yf.download(tickers, start=start_date, end=end_date)['Low'])
```
but sometimes tickers randomly fail to download, like this (it is not always the same ones):
[*********************100%***********************] 1630 of 1630 completed
27 Failed downloads:
- OXY: No data found for this date range, symbol may be delisted
- HD: No data found for this date range, symbol may be delisted
- GATX: No data found for this date range, symbol may be delisted
- RACE: No data found for this date range, symbol may be delisted
- FIS: No data found for this date range, symbol may be delisted
- BTU: No data found for this date range, symbol may be delisted
- ALV: No data found for this date range, symbol may be delisted
- TW: No data found for this date range, symbol may be delisted
- VSH: No data found for this date range, symbol may be delisted
- WPP: No data found for this date range, symbol may be delisted
**Describe the solution**
Add multiple retries for a specific ticker if it fails to download:
stock_data_low = pd.DataFrame(yf.download(tickers, start=start_date, end=end_date, **retry=5**)['Low'])
Meaning: for any failed download of a specific ticker, retry up to 5 times.
**Additional context**
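Until something like this is built in, a retry wrapper can be applied per ticker; a generic sketch (hypothetical helper — `download_fn` stands for whatever fetches one ticker):

```python
import time

def with_retry(download_fn, attempts=5, delay=1.0):
    """Call `download_fn` until it returns a non-empty result or the
    retry budget is exhausted; returns the last result either way."""
    result = None
    for i in range(attempts):
        result = download_fn()
        if result:
            return result
        if i < attempts - 1:
            time.sleep(delay)
    return result

# Example: a flaky fetch that fails twice before succeeding.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    return "data" if calls["n"] >= 3 else None

print(with_retry(flaky, attempts=5, delay=0))  # → data
print(calls["n"])                              # → 3
```

A non-zero `delay` (or exponential backoff) matters in practice, since the failures look like transient rate limiting rather than genuine delistings.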
|
closed
|
2023-03-24T02:35:40Z
|
2023-09-09T17:28:48Z
|
https://github.com/ranaroussi/yfinance/issues/1463
|
[
"enhancement",
"Yahoo spam"
] |
XJTLUmedia
| 6
|
junyanz/pytorch-CycleGAN-and-pix2pix
|
computer-vision
| 1,068
|
How to add loss function only for G_A
|
Hi! I'm confused: do G_A and G_B use the same loss function? If so, why do we end up with two different generators?
Also, how can I add a loss function only for G_A? I couldn't find a similar question in the Issues.
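For context: G_A and G_B share the same *form* of loss but are optimized on different data flows (different domains and discriminators), which is why two distinct generators emerge. A plain-Python sketch (not the repo's actual training code) of adding an extra term to G_A's side only before summing the combined objective:

```python
# Conceptual sketch in plain Python, not the repo's actual code: the four
# standard loss components are summed, and an optional extra term is applied
# to G_A's component only, leaving G_B untouched.
def combined_generator_loss(loss_gan_a, loss_gan_b, loss_cycle_a, loss_cycle_b,
                            extra_loss_a=0.0, extra_weight=1.0):
    loss_g_a = loss_gan_a + extra_weight * extra_loss_a  # extra term hits G_A only
    loss_g_b = loss_gan_b                                # G_B is untouched
    return loss_g_a + loss_g_b + loss_cycle_a + loss_cycle_b

print(combined_generator_loss(1.0, 1.0, 0.5, 0.5, extra_loss_a=0.25))  # 3.25
```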
|
closed
|
2020-06-12T11:57:04Z
|
2020-06-13T00:46:17Z
|
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1068
|
[] |
Iarkii
| 2
|
kizniche/Mycodo
|
automation
| 768
|
Fantastic project, but I'm stuck.... Need help, or is it a bug?
|
### How does one select multiple inputs for the Math:Redundancy feature?
### Versions:
- Mycodo Version: 8.4.0
- Raspberry Pi Version: 3B
- Raspbian OS Version: Raspbian GNU/Linux 10 (buster)
### Reproducibility
New fresh install of Mycodo
Setup 2 DHT sensors or 2 DS18B20 sensors
Try to add 2 of these inputs into a Math:Redundancy controller
### Expected behavior
By holding Ctrl and selecting more than one input, multiple inputs should be shown in the "Order of Use" list. Unfortunately I can't seem to get the list to populate. I tried saving after selecting multiple inputs, tried pressing Enter, tried dragging. What am I missing? I can't seem to select multiple inputs and then order them.
### Screenshots

### Additional context
Not sure if this is a bug, or if I'm just missing doing something!
|
closed
|
2020-05-03T19:17:50Z
|
2020-05-05T10:04:11Z
|
https://github.com/kizniche/Mycodo/issues/768
|
[] |
matthall69
| 5
|
huggingface/text-generation-inference
|
nlp
| 2,185
|
Phi-3 mini 128k produces gibberish if context >4k tokens
|
### System Info
GPU: RTX4090
Run 2.1.0 with docker like:
`docker run -it --rm --gpus all --ipc=host -p 8080:80 -v /home/jp/.cache/data:/data ghcr.io/huggingface/text-generation-inference:2.1.0 --model-id microsoft/Phi-3-mini-128k-instruct --max-batch-prefill-tokens=8192 --max-total-tokens=8192 --max-input-tokens=8191 --trust-remote-code --revision bb5bf1e4001277a606e11debca0ef80323e5f824 --sharded false`
### Information
- [X] Docker
- [ ] The CLI directly
### Tasks
- [X] An officially supported command
- [ ] My own modifications
### Reproduction
Running Phi-3 128k (the old revision, as the new one fails - see #2172), I get good results as long as the total context (input tokens + output tokens) is below 4096.
As soon as Input + Output tokens > 4096, Phi-3 outputs just gibberish, e.g.
`,,..,,,,,,,,,,,,,,,,ß,,.s,ß,gen,gen,,,,s,,,,,,,,,,,,,,,,,,,,,,,,,,,o,,,,,,,,,,,,,,,,,,,,,,-hn,.,,,,,,,,,,und,,,,,,,,,,,,,,,,,,,,,,,s,,gen...,`
I think there has to be some bug in the rotary embedding implementation, see also #2060 and #2055 .
### Expected behavior
Inference works for longer contexts.
|
open
|
2024-07-04T08:37:20Z
|
2025-01-29T14:59:13Z
|
https://github.com/huggingface/text-generation-inference/issues/2185
|
[] |
jphme
| 5
|
huggingface/datasets
|
tensorflow
| 6,865
|
Example on Semantic segmentation contains bug
|
### Describe the bug
https://huggingface.co/docs/datasets/en/semantic_segmentation shows wrong example with torchvision transforms.
Specifically, as one can see in screenshot below, the object boundaries have weird colors.
<img width="689" alt="image" src="https://github.com/huggingface/datasets/assets/4803565/59aa0e2c-2e3e-415b-9d42-2314044c5aee">
Original example with `albumentations` is correct
<img width="705" alt="image" src="https://github.com/huggingface/datasets/assets/4803565/27dbd725-cea5-4e48-ba59-7050c3ce17b3">
That is because `torchvision.transforms.Resize` interpolates everything bilinearly, which is wrong when applied to segmentation labels - you just cannot mix the two. Overall, `torchvision.transforms` is designed for classification only and cannot be applied to images and masks together, unless you write two separate branches of augmentations.
The correct way would be to use `v2` version of transforms and convert the segmentation labels to https://pytorch.org/vision/main/generated/torchvision.tv_tensors.Mask.html#torchvision.tv_tensors.Mask object
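The failure mode is easy to see with plain numbers: bilinear resizing averages neighbouring label IDs at a boundary, producing values that are not valid classes, while nearest-neighbour always returns one of the original labels (a toy stdlib illustration, not torchvision code):

```python
def bilinear_sample(a, b, t):
    """Value bilinear interpolation produces between two pixels (t in [0, 1])."""
    return (1 - t) * a + t * b

def nearest_sample(a, b, t):
    """Nearest-neighbour picks whichever original pixel is closer."""
    return a if t < 0.5 else b

left, right = 0, 21  # two valid class IDs, e.g. "background" and "person"
print(bilinear_sample(left, right, 0.5))  # 10.5 -> not a valid class ID (the "weird colors")
print(nearest_sample(left, right, 0.5))   # 21   -> still one of the original labels
```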
### Steps to reproduce the bug
Go to the website.
<img width="689" alt="image" src="https://github.com/huggingface/datasets/assets/4803565/ea1276d0-d69a-48cf-b9c2-cd61217815ef">
https://huggingface.co/docs/datasets/en/semantic_segmentation
### Expected behavior
Results, similar to `albumentation`. Or remove the torch vision part altogether. Or use `kornia` instead.
### Environment info
Irrelevant
|
open
|
2024-05-03T09:40:12Z
|
2024-05-03T09:40:12Z
|
https://github.com/huggingface/datasets/issues/6865
|
[] |
ducha-aiki
| 0
|
scrapy/scrapy
|
web-scraping
| 6,643
|
Add support for async HTTP cache storages
|
https://stackoverflow.com/questions/79396472/how-to-extend-scrapy-with-custom-http-cache-which-needs-to-perform-asynchronous
It should be relatively easy to make it possible to have `HTTPCACHE_STORAGE` storages whose methods are asynchronous, because they are used only in `scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware`, which can be changed to have asynchronous methods. There is even an old PR that does this, #2799, though nowadays we maybe want `async def` methods in the async storage interface instead of returning Deferreds.
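A minimal sketch of what such an async storage could look like (method names mirror the existing sync storage interface; the `async def` variants are an assumption about the future interface, not an existing Scrapy API):

```python
import asyncio

class AsyncCacheStorage:
    """Hypothetical async-capable cache storage backed by an in-memory dict."""
    def __init__(self):
        self._store = {}

    async def retrieve_response(self, spider, request):
        await asyncio.sleep(0)  # stand-in for an awaited backend call
        return self._store.get(request)

    async def store_response(self, spider, request, response):
        await asyncio.sleep(0)
        self._store[request] = response

async def main():
    storage = AsyncCacheStorage()
    await storage.store_response("spider", "GET /example", "cached-body")
    return await storage.retrieve_response("spider", "GET /example")

print(asyncio.run(main()))  # cached-body
```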
|
open
|
2025-01-31T15:03:52Z
|
2025-02-11T20:18:29Z
|
https://github.com/scrapy/scrapy/issues/6643
|
[
"enhancement",
"asyncio"
] |
wRAR
| 0
|
deepspeedai/DeepSpeed
|
deep-learning
| 6,576
|
no-torch CI test failure
|
The Nightly CI for https://github.com/microsoft/DeepSpeed/actions/runs/11062306053 failed.
|
closed
|
2024-09-27T00:25:51Z
|
2024-09-27T20:32:50Z
|
https://github.com/deepspeedai/DeepSpeed/issues/6576
|
[
"ci-failure"
] |
github-actions[bot]
| 0
|
quokkaproject/quokka
|
flask
| 211
|
In hopes of getting rid of #204, I'm doing a complete reinstall and now getting "ImportError: No module named PyRSS2Gen"
|
In hopes of getting rid of #204, I'm doing a complete reinstall and am now getting "ImportError: No module named PyRSS2Gen".
But I have the module installed. Can I get a hand? Thanks.
```
Traceback (most recent call last):
  File "./wsgi.py", line 8, in <module>
    application = DispatcherMiddleware(create_app(), {
  File "./quokka/__init__.py", line 48, in create_app
    from .ext import configure_extensions
  File "./quokka/ext/__init__.py", line 13, in <module>
    from . import (generic, babel, blueprints, error_handlers, context_processors,
  File "./quokka/ext/views.py", line 6, in <module>
    from quokka.core.views import ContentDetail, ContentList, TagList
  File "./quokka/core/views.py", line 6, in <module>
    import PyRSS2Gen as pyrss
ImportError: No module named PyRSS2Gen
Tue Jun 2 15:19:07 2015 - unable to load app 0 (mountpoint='') (callable not found or import error)
(venv)root@sid:/var/log/uwsgi/app# pip install PyRSS2Gen
Requirement already satisfied (use --upgrade to upgrade): PyRSS2Gen in /usr/locl/lib/python2.7/dist-packages
(venv)root@sid:/var/log/uwsgi/app#
```
|
closed
|
2015-06-02T19:23:03Z
|
2015-07-16T02:56:10Z
|
https://github.com/quokkaproject/quokka/issues/211
|
[] |
eurabilis
| 11
|
iperov/DeepFaceLab
|
machine-learning
| 5,284
|
GPU->CPU Memcpy failed
|
Extracting faces...
Caching GPU kernels...
Running on GeForce RTX 2060
1%|5 | 61/8453 [00:22<52:35, 2.66it/s]2021-03-01 00:16:03.405765: F tensorflow/core/common_runtime/gpu/gpu_util.cc:291] GPU->CPU Memcpy failed
GeForce RTX 2060 doesnt response, terminating it.
1%|5 | 61/8453 [02:22<5:27:46, 2.34s/it]
-------------------------
Images found: 8453
Faces detected: 0
-------------------------
What should I do?
|
open
|
2021-02-28T17:12:12Z
|
2023-06-08T22:30:39Z
|
https://github.com/iperov/DeepFaceLab/issues/5284
|
[] |
MyY2T2
| 1
|
plotly/dash
|
dash
| 3,126
|
How to create a Loading component triggered by an external component
|
Hello,
I want to create a Loading component in a specific location on my page that is triggered by content in another Div not wrapped by the Loading component. Is that possible?
|
closed
|
2025-01-19T19:04:14Z
|
2025-01-23T20:25:19Z
|
https://github.com/plotly/dash/issues/3126
|
[] |
marfago
| 2
|
jina-ai/serve
|
machine-learning
| 6,073
|
How to set GET method?
|

This is the default POST method, but how can I make it a GET method?
|
closed
|
2023-10-07T07:52:48Z
|
2024-06-30T09:09:22Z
|
https://github.com/jina-ai/serve/issues/6073
|
[] |
xuhaoguang
| 11
|
pydantic/pydantic-settings
|
pydantic
| 142
|
ImportError: python-dotenv is not installed, run `pip install pydantic[dotenv]`
|
### Initial Checks
- [X] I confirm that I'm using Pydantic V2
### Description
Is the command `pip install pydantic[dotenv]` deprecated?
I used `BaseSettings` and just followed the hint from the error message (ImportError: python-dotenv is not installed, run `pip install pydantic[dotenv]`), and then got
`WARNING: pydantic 2.1.1 does not provide the extra 'dotenv'`
### Example Code
_No response_
### Python, Pydantic & OS Version
```Text
pydantic version: 2.1.1
pydantic-core version: 2.4.0
python version: 3.10.12 (main, Jun 11 2023, 05:26:28)
```
Selected Assignee: @samuelcolvin
Selected Assignee: @hramezani
|
closed
|
2023-08-16T09:35:45Z
|
2023-08-19T15:57:04Z
|
https://github.com/pydantic/pydantic-settings/issues/142
|
[
"unconfirmed"
] |
horw
| 2
|
microsoft/qlib
|
machine-learning
| 1,289
|
Could the built-in processors be categorized?
|
Hello, could the built-in processors listed below be categorized: which are shared processors, which are learn processors, and which are infer processors?
Could you also point out which ones process features and which ones process labels?
DropnaProcessor: processor that drops N/A features.
DropnaLabel: processor that drops N/A labels.
TanhProcess: processor that uses tanh to process noise data. For features or labels?
ProcessInf: processor that handles infinity values, which will be replaced by the mean of the column. For features or labels?
Fillna: processor that handles N/A values, which will fill the N/A value by 0 or another given number. For features or labels?
MinMaxNorm: processor that applies min-max normalization. For features or labels?
ZscoreNorm: processor that applies z-score normalization. For features or labels?
RobustZScoreNorm: processor that applies robust z-score normalization. For features or labels?
CSZScoreNorm: processor that applies cross sectional z-score normalization. For features or labels?
CSRankNorm: processor that applies cross sectional rank normalization. For features or labels?
CSZFillna: processor that fills N/A values in a cross sectional way by the mean of the column. For features or labels?
|
closed
|
2022-09-13T09:03:03Z
|
2023-01-01T03:06:50Z
|
https://github.com/microsoft/qlib/issues/1289
|
[
"question",
"stale"
] |
quantcn
| 7
|
JaidedAI/EasyOCR
|
machine-learning
| 910
|
get low accuracy on icdar2013 recognition test set.
|
Hi, it's a great project.
But I found that the default English recognition model tested on the ICDAR2013 test set gets poor accuracy:
{"totalWords": 1095, "detWords": 1095, "crwN": 768.0, "crwupN": 805.0, "ted": 911.0, "tedL": 197.8682213211625, "crw": 0.7013698630136986, "tedup": 739.0, "tedupL": 167.13990935535054, "crwup": 0.7351598173515982}
I also modified part of the code to make sure detection is not executed, so this result covers only the recognition stage.
So I'm wondering what your training data is, and whether my test is correct.
|
open
|
2022-12-20T09:14:56Z
|
2022-12-20T09:14:56Z
|
https://github.com/JaidedAI/EasyOCR/issues/910
|
[] |
zhangsiqiGit
| 0
|
tfranzel/drf-spectacular
|
rest-api
| 1,346
|
Can't define only part of example data?
|
I have a Building serializer that nests a UtilityBill serializer. In the UtiltyBillSerializer, I want to provide some reasonable example data for a `bill_start_date` and `bill_end_date` to suggest what a real bill would have for the start and then end of a month.
I see this is possible with the `extend_schema_serializer` decorator. (Unfortunately, I think I can't define it for just the two fields, so I have to provide manual examples for every field in the serializer.) For example...
```
@extend_schema_serializer(
examples=[
OpenApiExample(
"Valid utility bill example 1",
value={
"fuel_type": "ELECTRIC_GRID",
"bill_start_date": "2024-12-01",
"bill_end_date": "2024-12-31",
"consumption": 45000,
"unit": "KWH",
"cost": 5400,
},
request_only=True,
),
],
)
class UtilityBillDetailSerializer(serializers.ModelSerializer):
...
```
That works for docs created for this endpoint, but when this serializer is nested in the BuildingSerializer, the generated example doesn't include this modification. It uses the default '2024-12-06' for both fields, which doesn't really make sense for the use case of a utility bill.
<img width="341" alt="image" src="https://github.com/user-attachments/assets/a20bca83-2f1f-487e-bddf-15dd6a3f1808">
...and it doesn't seem to be possible to use `@extend_schema_serializer` on my BuildingSerializer to provide example data for *just* the utility_bills field.
So it seems I have to provide a custom example for the **entire** BuildingSerializer, which is kind of brittle since that means manual updates to the example anytime anything in the serializer (or child serializers) changes.
Is there any way to define example data that flows through to each use of a field or serializer, even if nested?
(Thanks for this really helpful library!)
|
closed
|
2024-12-06T21:20:38Z
|
2024-12-09T04:57:05Z
|
https://github.com/tfranzel/drf-spectacular/issues/1346
|
[] |
danielmcquillen
| 6
|
autogluon/autogluon
|
scikit-learn
| 4,622
|
GPU Not Used Despite Configuration – Add Clarifying Log Message
|
When attempting to use AutoGluon with a specific GPU, I observed that the GPU is sometimes not utilized as expected. Instead, a message appears indicating that no GPUs are available, despite them being properly configured and detected by the system.
For example, when running `predictor.fit(train_data=data, presets='best_quality', num_gpus=1)`, the output may show:
`"Specified total num_gpus: 1, but only 0 are available. Will use 0 instead."`
To force GPU usage, setting `ag_args_ensemble={"fold_fitting_strategy": "parallel_local"}` works, but in some cases, this can be less efficient than using CPUs alone.
This issue was initially discussed here: https://github.com/autogluon/autogluon/discussions/4600
Suggestion
It would be helpful if AutoGluon included a log message explaining why the GPU is not used in cases like these, especially when the system automatically switches to sequential_local mode (CPU only) without notifying the user. This would prevent confusion and help users understand when and why the GPU may not be the optimal choice.
Possible Improvement
Add a log message or warning that indicates when the GPU will not be used due to efficiency considerations, such as when using parallel processing with CPUs may be faster. Additionally, consider recommending adjustments to the ag_args_ensemble parameter based on the user's specific setup.
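A sketch of the kind of warning this could produce (plain `logging`; the function name, wording, and condition are illustrative assumptions, not AutoGluon's actual code):

```python
import logging

logger = logging.getLogger("autogluon.sketch")

def explain_gpu_decision(num_gpus_requested, fold_fitting_strategy):
    """Return (and log) a human-readable reason when requested GPUs go unused."""
    if num_gpus_requested > 0 and fold_fitting_strategy == "sequential_local":
        msg = (
            f"num_gpus={num_gpus_requested} was requested, but "
            "fold_fitting_strategy='sequential_local' fits folds on CPU; "
            "parallel CPU fitting may be faster than the GPU here. Pass "
            "ag_args_ensemble={'fold_fitting_strategy': 'parallel_local'} "
            "to force GPU use."
        )
        logger.warning(msg)
        return msg
    return None

print(explain_gpu_decision(1, "sequential_local") is not None)  # True
print(explain_gpu_decision(1, "parallel_local"))                # None
```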
|
closed
|
2024-11-07T11:29:08Z
|
2024-11-07T13:32:23Z
|
https://github.com/autogluon/autogluon/issues/4622
|
[
"bug: unconfirmed",
"Needs Triage"
] |
celestinoxp
| 0
|
biolab/orange3
|
data-visualization
| 5,993
|
Create Instance: Enable nan
|
**What's your use case?**
Sometimes there's no data available for a given attribute. Or you'd like to demonstrate something with missing values.
**What's your proposed solution?**
Add a checkbox to Create Instance to the right of the current view. Off by default. If enabled, the value of the attribute is set to nan.
**Are there any alternative solutions?**
?
|
closed
|
2022-05-25T11:27:04Z
|
2022-06-10T10:33:30Z
|
https://github.com/biolab/orange3/issues/5993
|
[
"meal"
] |
ajdapretnar
| 0
|
litestar-org/litestar
|
pydantic
| 3,887
|
Bug: 405: Method Not Allowed when using Websockets with Litestar and Nginx Unit
|
### Description
I believe there may be an issue with how Litestar handles Websocket connections incoming from a client app hosted with Nginx Unit.
This problem does not happen with uvicorn, only Nginx Unit.
From my typescript react app I initiate the websocket connection:
```typescript
const ws = new WebSocket(
`${import.meta.env.VITE_WEBSOCKET_URL}/test?token=myauthtoken`,
);
```
and receive this error from litestar:
```
/site-packages/litestar/_asgi/routing_trie/traversal.py", line 174, in parse_path_to_route
raise MethodNotAllowedException() from e
litestar.exceptions.http_exceptions.MethodNotAllowedException: 405: Method Not Allowed
```
I traced the issue back to this function in that same file:
```python
def parse_node_handlers(
node: RouteTrieNode,
method: Method | None,
) -> ASGIHandlerTuple:
"""Retrieve the handler tuple from the node.
Args:
node: The trie node to parse.
method: The scope's method.
Raises:
KeyError: If no matching method is found.
Returns:
An ASGI Handler tuple.
"""
if node.is_asgi:
return node.asgi_handlers["asgi"]
if method:
return node.asgi_handlers[method]
return node.asgi_handlers["websocket"]
```
When I watch the `method` parameter on the incoming websocket connection while using Uvicorn, `method` is `None` and everything works as expected.
When using Nginx Unit, `method` is "GET", so Litestar tries to handle it like an HTTP connection rather than a websocket one and you get the above error.
If I then modify
```python
if method:
```
to
```python
if method and not node.asgi_handlers.get("websocket"):
```
I get past the "method not allowed" error but then I get
```
/site-packages/litestar/middleware/_internal/exceptions/middleware.py", line 232, in handle_websocket_exception
await send(event)
RuntimeError: WebSocket connect not received
```
I then took a look at the function from the first error:
```python
def parse_path_to_route(
method: Method | None,
mount_paths_regex: Pattern | None,
mount_routes: dict[str, RouteTrieNode],
path: str,
plain_routes: set[str],
root_node: RouteTrieNode,
) -> tuple[ASGIApp, RouteHandlerType, str, dict[str, Any], str]:
"""Given a scope object, retrieve the asgi_handlers and is_mount boolean values from correct trie node.
Args:
method: The scope's method, if any.
root_node: The root trie node.
path: The path to resolve scope instance.
plain_routes: The set of plain routes.
mount_routes: Mapping of mount routes to trie nodes.
mount_paths_regex: A compiled regex to match the mount routes.
Raises:
MethodNotAllowedException: if no matching method is found.
NotFoundException: If no correlating node is found or if path params can not be parsed into values according to the node definition.
Returns:
A tuple containing the stack of middlewares and the route handler that is wrapped by it.
"""
try:
if path in plain_routes:
asgi_app, handler = parse_node_handlers(node=root_node.children[path], method=method)
return asgi_app, handler, path, {}, path
if mount_paths_regex and (match := mount_paths_regex.match(path)):
mount_path = path[: match.end()]
mount_node = mount_routes[mount_path]
remaining_path = path[match.end() :]
# since we allow regular handlers under static paths, we must validate that the request does not match
# any such handler.
children = (
normalize_path(sub_route)
for sub_route in mount_node.children or []
if sub_route != mount_path and isinstance(sub_route, str)
)
if not any(remaining_path.startswith(f"{sub_route}/") for sub_route in children):
asgi_app, handler = parse_node_handlers(node=mount_node, method=method)
remaining_path = remaining_path or "/"
if not mount_node.is_static:
remaining_path = remaining_path if remaining_path.endswith("/") else f"{remaining_path}/"
return asgi_app, handler, remaining_path, {}, root_node.path_template
node, path_parameters, path = traverse_route_map(
root_node=root_node,
path=path,
)
asgi_app, handler = parse_node_handlers(node=node, method=method)
key = method or ("asgi" if node.is_asgi else "websocket")
parsed_path_parameters = parse_path_params(node.path_parameters[key], tuple(path_parameters))
return (
asgi_app,
handler,
path,
parsed_path_parameters,
node.path_template,
)
except KeyError as e:
raise MethodNotAllowedException() from e
except ValueError as e:
raise NotFoundException() from e
```
I then modified
```python
key = method or ("asgi" if node.is_asgi else "websocket")
```
to
```python
key = method if method and not node.asgi_handlers.get("websocket") else ("asgi" if node.is_asgi else "websocket")
```
and now everything works as expected.
The reason I believe this may be a bug is the way `parse_node_handlers` determines whether the connection is a websocket.
When I check the [websocket spec](https://datatracker.ietf.org/doc/html/rfc6455), page 17, point 2 it says
>"The method of the request MUST be **GET**, and the HTTP version MUST be at least 1.1."
So I **think** the method coming through as "GET" on the websocket connection from Nginx Unit is normal behavior and the method being "None" from Uvicorn is abnormal.
Unfortunately, Litestar currently seems to rely on `method` being `None` to handle the websocket connection, which breaks websockets for servers following that spec.
### URL to code causing the issue
_No response_
### MCVE
_No response_
### Steps to reproduce
_No response_
### Screenshots
_No response_
### Logs
```bash
/site-packages/litestar/_asgi/routing_trie/traversal.py", line 174, in parse_path_to_route
raise MethodNotAllowedException() from e
litestar.exceptions.http_exceptions.MethodNotAllowedException: 405: Method Not Allowed
/site-packages/litestar/middleware/_internal/exceptions/middleware.py", line 232, in handle_websocket_exception
await send(event)
RuntimeError: WebSocket connect not received
```
### Litestar Version
2.13.0
### Platform
- [X] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above)
|
closed
|
2024-12-05T18:56:13Z
|
2025-03-20T15:55:02Z
|
https://github.com/litestar-org/litestar/issues/3887
|
[
"Upstream"
] |
FixFlare
| 12
|
microsoft/MMdnn
|
tensorflow
| 218
|
Convert Resnet100 from MxNet to Caffe
|
Platform (like ubuntu 16.04/win10): ubuntu 16.04
Python version: 2.7
Source framework with version (like Tensorflow 1.4.1 with GPU):MxNet 1.2.0
Destination framework with version (like CNTK 2.3 with GPU):
Pre-trained model path (webpath or webdisk path):
Running scripts:
I ran this command to convert my mxnet model to IR model:
python -m mmdnn.conversion._script.convertToIR -f mxnet -n /home/sll/zl/resnet-100-symbol.json -w /home/sll/zl/resnet-100-0197.params -d resnet100 --inputShape 3 112 112
This is the error trace I got.
[10:01:07] src/nnvm/legacy_json_util.cc:209: Loading symbol saved by previous version v1.1.0. Attempting to upgrade...
[10:01:07] src/nnvm/legacy_json_util.cc:217: Symbol successfully upgraded!
Warning: MXNet Parser has not supported operator one_hot with name one_hot0.
Traceback (most recent call last):
File "/usr/lib/python2.7/runpy.py", line 174, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/home/sll/.local/lib/python2.7/site-packages/mmdnn/conversion/_script/convertToIR.py", line 186, in <module>
_main()
File "/home/sll/.local/lib/python2.7/site-packages/mmdnn/conversion/_script/convertToIR.py", line 181, in _main
ret = _convert(args)
File "/home/sll/.local/lib/python2.7/site-packages/mmdnn/conversion/_script/convertToIR.py", line 99, in _convert
parser.run(args.dstPath)
File "/home/sll/.local/lib/python2.7/site-packages/mmdnn/conversion/common/DataStructure/parser.py", line 22, in run
self.gen_IR()
File "/home/sll/.local/lib/python2.7/site-packages/mmdnn/conversion/mxnet/mxnet_parser.py", line 265, in gen_IR
self.rename_UNKNOWN(current_node)
File "/home/sll/.local/lib/python2.7/site-packages/mmdnn/conversion/mxnet/mxnet_parser.py", line 377, in rename_UNKNOWN
raise NotImplementedError()
NotImplementedError
|
closed
|
2018-05-31T02:01:47Z
|
2018-08-07T11:47:31Z
|
https://github.com/microsoft/MMdnn/issues/218
|
[] |
321zhangli123
| 3
|
aiortc/aiortc
|
asyncio
| 1,154
|
Streaming video from camera to browser and recording it
|
I want to stream the video from a local camera to a remote browser using h264 encoding. This I got working.
Now I want to simultanously record this stream as mp4 file.
I create the camera track:
```
cap = cv2.VideoCapture(0)
track = CameraStreamTrack(cap)
```
where CameraStreamTrack is a subclass of MediaStreamTrack:
```
class CameraStreamTrack(MediaStreamTrack):
kind = "video"
def __init__(self, source):
super().__init__() # Initialize the base class
self.source = source
self.frame_count = 0
async def recv(self):
# Read frame from the webcam (OpenCV)
ret, frame = self.source.read()
if not ret:
raise Exception("Error reading frame from source")
# Convert the frame from BGR (OpenCV default) to RGB
frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
# Wrap the RGB frame in a VideoFrame
video_frame = VideoFrame.from_ndarray(frame_rgb, format="rgb24")
# Calculate pts based on the frame rate (30 fps assumed here)
self.frame_count = (self.frame_count + 1) % (30 * 60 * 60 * 24) # Reset every 24 hours
video_frame.pts = self.frame_count * 90000 // 30 # 90000 is a common time_base in video encoding
# Set the time_base for the frame
video_frame.time_base = fractions.Fraction(1, 90000)
return video_frame
```
then I add this track to the peer connection:
`pc.addTrack(track)`
As I said, this works fine and I can see the h264-encoded stream in the browser.
But now I want to record the video, and I thought that MediaRecorder is the correct way to do it. Since I found no examples (there are only examples for recording video tracks that originate from the browser, not from a local camera), I tried it with:
```
recorder = MediaRecorder(filename)
recorder.addTrack(track)
await recorder.start()
```
where track is the locally created track from the camera.
When I start the recording, the video stream in the browser freezes and the recording file is created, but stays at 0kb.
The console output is:
```
[libx264 @ 000002678fffda80] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 000002678fffda80] profile High, level 3.0, 4:2:0, 8-bit
[libx264 @ 000002678fffda80] 264 - core 164 - H.264/MPEG-4 AVC codec - Copyleft 2003-2024 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=7 lookahead_threads=7 sliced_threads=1 slices=7 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=abr mbtree=1 bitrate=1024 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
```
PS C:\dev\VideoMonitor>
What am I missing?
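(For context: a likely cause is that a single track can only feed one consumer, so once the recorder starts pulling frames via `recv()`, the peer connection is starved; aiortc ships `MediaRelay` in `aiortc.contrib.media` to duplicate a track for multiple consumers. The underlying fan-out idea, sketched with stdlib asyncio queues rather than aiortc itself:)

```python
import asyncio

class FrameRelay:
    """Duplicate each produced frame to every subscriber's own queue."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self):
        q = asyncio.Queue()
        self.subscribers.append(q)
        return q

    async def publish(self, frame):
        for q in self.subscribers:
            await q.put(frame)

async def main():
    relay = FrameRelay()
    to_peer = relay.subscribe()      # e.g. the WebRTC peer connection
    to_recorder = relay.subscribe()  # e.g. the MediaRecorder
    await relay.publish("frame-0")
    # Both consumers receive the same frame instead of competing for it.
    return await to_peer.get(), await to_recorder.get()

print(asyncio.run(main()))  # ('frame-0', 'frame-0')
```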
|
closed
|
2024-09-09T15:39:31Z
|
2024-10-16T10:54:57Z
|
https://github.com/aiortc/aiortc/issues/1154
|
[] |
jochennaumann
| 0
|
vaexio/vaex
|
data-science
| 1,511
|
[BUG-REPORT] RobustScaler is currently broken
|
**Description**
`vaex.ml.RobustScaler` is broken
**Software information**
- Vaex version (`import vaex; vaex.__version__)`:
{'vaex-core': '4.3.0.post1',
'vaex-viz': '0.5.0',
'vaex-hdf5': '0.8.0',
'vaex-jupyter': '0.6.0',
'vaex-ml': '0.12.0'}
- Vaex was installed via: pip (poetry)
- OS: Ubuntu 20.04.2 LTS
**Additional information**

|
closed
|
2021-08-13T09:01:14Z
|
2021-10-10T00:01:08Z
|
https://github.com/vaexio/vaex/issues/1511
|
[] |
danielgafni
| 2
|
python-restx/flask-restx
|
api
| 193
|
Why was the changelog removed from the repository and the documentation?
|
**Ask a question**
Why was the changelog removed from the repository and the documentation?
**Additional context**
Hi, I was looking at moving a project from flask_restplus==0.10.1 to restx but had a hard time figuring out what had changed. The docs say that it is mostly compatible with restplus, but that is specific to a version. I think relabeling the tags was a bad idea and could be reverted. Also, having the changelog in the docs is not only very common but also a great way to figure out the risks when trying to move from restplus to restx.
|
open
|
2020-08-07T16:17:54Z
|
2020-09-02T18:48:40Z
|
https://github.com/python-restx/flask-restx/issues/193
|
[
"question"
] |
avilaton
| 3
|
PrefectHQ/prefect
|
data-science
| 17,433
|
Work queues status "Not Ready" but Worker is "online"
|
### Bug summary
When I upgraded self-hosted Prefect from 3.0.2 to 3.2.5 the Work Queues status is stuck in "Not Ready".
The Worker is online and submitting jobs as expected.
### Version info
```Text
Version: 3.2.5
API version: 0.8.4
Python version: 3.9.20
Git commit: 168280f7
Built: Wed, Feb 19, 2025 3:25 AM
OS/Arch: linux/aarch64
Profile: ephemeral
Server type: ephemeral
Pydantic version: 2.10.6
Server:
Database: AWS RDS Postgres
Postgres version: 16.4
Integrations:
prefect-aws: 0.5.5
```
### Additional context
For an additional test, I deleted and recreated the work pool. The worker started as normal and was able to submit jobs successfully. But the queues are still in Not Ready state.
|
closed
|
2025-03-10T13:49:41Z
|
2025-03-11T14:16:34Z
|
https://github.com/PrefectHQ/prefect/issues/17433
|
[
"bug"
] |
Pballer
| 3
|
microsoft/JARVIS
|
pytorch
| 140
|
Can an NVIDIA 4070 Ti run this model?
|
Is it only the VRAM that matters? Can a 4070 Ti run JARVIS? Has anyone tried it?
|
open
|
2023-04-13T04:50:08Z
|
2023-04-18T06:41:50Z
|
https://github.com/microsoft/JARVIS/issues/140
|
[] |
Ryan2009
| 1
|
mlfoundations/open_clip
|
computer-vision
| 575
|
Help on strange error:
|
hello,
I am running the following example from the web:

```python
import torch
import open_clip
from PIL import Image

model, _, transform = open_clip.create_model_and_transforms(
    model_name="coca_ViT-L-14",
    pretrained="mscoco_finetuned_laion2B-s13B-b90k",
)

im = Image.open("cat.jpg").convert("RGB")
im = transform(im).unsqueeze(0)

with torch.no_grad():
    generated = model.generate(im)

print(open_clip.decode(generated[0]).split("<end_of_text>")[0].replace("<start_of_text>", ""))
```

I get the following error:

```
in _generate_beamsearch
    raise ValueError(
ValueError: Batch dimension of `input_ids` should be 18, but is 6.
```

What should I do?
Thanks
|
closed
|
2023-07-21T01:20:59Z
|
2023-09-15T23:24:26Z
|
https://github.com/mlfoundations/open_clip/issues/575
|
[] |
shersoni610
| 6
|
netbox-community/netbox
|
django
| 18,990
|
Comment/description for image attachments
|
### NetBox version
v4.2.5
### Feature type
Data model extension
### Proposed functionality
Other objects support a free-floating comment/description field, however image attachments do not.
We found this upon importing images from RackTables, which does have a comment field for attachments.
### Use case
This would be useful to:
- have a text reference about what the image is about before opening it
- have a text caption about what is displayed on the image for visually impaired users
Using the `Name` field for this purpose not ideal, as that one alters the display of the image file name as well.
### Database changes
It requires an additional field.
### External dependencies
_No response_
|
open
|
2025-03-24T14:07:39Z
|
2025-03-24T14:07:39Z
|
https://github.com/netbox-community/netbox/issues/18990
|
[
"type: feature",
"status: needs triage"
] |
tacerus
| 0
|
junyanz/pytorch-CycleGAN-and-pix2pix
|
deep-learning
| 1,029
|
Template dataset.py
|
Hello,
I want to implement CycleGAN. I have preprocessed my DICOM images to get the same size and slice thickness and saved them as .mf2 files. After that I extracted image patches, as I want to pass patches as input to my model. I stored the patches as tensors in .pt files.
So currently the shape is torch.Size([7, 4, 4, 64, 64, 64]) for both the domain A and domain B datasets.
My question is: how should I load these tensors now? Should I save them to disk and then write a dataset loader?
Where should I implement this?
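If the patches stay inside one saved tensor of shape [7, 4, 4, 64, 64, 64], a dataset's `__getitem__` just needs to map a flat sample index onto the leading patch-grid dimensions so each 64³ patch is served as one item. A plain-Python sketch of that index mapping (the 7×4×4 grid is taken from the shape above; the actual file loading is left out):

```python
# Map a flat sample index (0..111) onto the leading patch-grid dimensions
# (7, 4, 4) of a saved tensor, last dimension varying fastest.
shape = (7, 4, 4)

def flat_to_multi(index, shape):
    coords = []
    for dim in reversed(shape):
        index, r = divmod(index, dim)
        coords.append(r)
    return tuple(reversed(coords))

print(flat_to_multi(0, shape))    # (0, 0, 0) - first patch
print(flat_to_multi(111, shape))  # (6, 3, 3) - last of 7*4*4 = 112 patches
```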
Thanks in advance
|
open
|
2020-05-17T17:00:12Z
|
2020-05-18T03:23:25Z
|
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1029
|
[] |
SurbhiKhushu
| 1
|
qubvel-org/segmentation_models.pytorch
|
computer-vision
| 535
|
TypeError: make_dilated() missing 1 required positional argument: 'dilation_list'
|
when I use DeepLabV3Plus with vgg16 or Xception as the encoder, I get this error:
```
TypeError                                 Traceback (most recent call last)
<ipython-input-11-0da076f0c3cd> in <module>()
     35         classes=3,
     36         activation=None,
---> 37         in_channels=3,
     38     )

segmentation_models_pytorch/deeplabv3/model.py in __init__(self, encoder_name, encoder_depth, encoder_weights, encoder_output_stride, decoder_channels, decoder_atrous_rates, in_channels, classes, activation, upsampling, aux_params)
    144             depth=encoder_depth,
    145             weights=encoder_weights,
--> 146             output_stride=encoder_output_stride,
    147         )
    148

segmentation_models_pytorch/encoders/__init__.py in get_encoder(name, in_channels, depth, weights, output_stride, **kwargs)
     78     encoder.set_in_channels(in_channels, pretrained=weights is not None)
     79     if output_stride != 32:
---> 80         encoder.make_dilated(output_stride)
     81
     82     return encoder

TypeError: make_dilated() missing 1 required positional argument: 'dilation_list'
```
I don't know how to solve it or what causes the problem.
|
closed
|
2022-01-08T08:07:36Z
|
2022-06-27T07:55:28Z
|
https://github.com/qubvel-org/segmentation_models.pytorch/issues/535
|
[] |
OneDirection67
| 3
|
iperov/DeepFaceLab
|
deep-learning
| 534
|
Training won't start
|
I have completed Steps 1 to 5, and now I want to start training by running "train H64", but training won't start. The same output appears with the other training setups.
Running the trainer, this is all that happened:
Loading model...
Model first run.
Enable autobackup? (y/n ?:help skip:n) : n
Write preview history? (y/n ?:help skip:n) : n
Target iteration (skip:unlimited/default) : default
0
Batch_size (?:help skip:0) : 20
Flip faces randomly? (y/n ?:help skip:y) : y
Use lightweight autoencoder? (y/n, ?:help skip:n) : n
Use pixel loss? (y/n, ?:help skip: n/default ) : default
Using plaidml.keras.backend backend.
INFO:plaidml:Opening device "opencl_amd_gfx900.0"
|
closed
|
2019-12-28T05:05:54Z
|
2020-03-28T05:42:15Z
|
https://github.com/iperov/DeepFaceLab/issues/534
|
[] |
Jofloku
| 0
|
pywinauto/pywinauto
|
automation
| 396
|
pywintypes.error : (0, 'SetCursorPos', 'No error message is available') on using Regex
|
Hi,
I am using Python 2.7 (64-bit) with pywinauto 0.6.3 (64-bit) on Windows 2012 R2 (64-bit) to automate a 64-bit application.
I am getting the following error when we use a regex for `title_re`:
```
<class 'pywintypes.error'>: (0, 'SetCursorPos', 'No error message is available')
```
Here is my Code snippet:
```python
app = application.Application().connect(title_re='.*Application Designer.*')
app.window_(title_re = 'Application Designer.*').MenuSelect('File->Delete')
```
However, when we connect to our application by giving the complete window name via `title`, we do not face the above-mentioned issue.
Below code works fine:
```python
app = application.Application().connect(title='Application Designer - Untitled')
app.window_(title_re = 'Application Designer.*').MenuSelect('File->Delete')
```
Please let me know how to fix this issue if I need to use regex in my pywinauto code.
Thanks,
Jagadeesh
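One way to see why the two `connect` calls behave differently is to check what the regex can match: `.*Application Designer.*` may match several top-level windows (including hidden or tool windows), while the exact title matches only one. A quick stdlib illustration — the window titles below are made up, not taken from the reporter's system:

```python
import re

# Hypothetical window titles as pywinauto might enumerate them.
titles = [
    "Application Designer - Untitled",
    "Application Designer Help",
    "PeopleSoft Application Designer Login",
]

pattern = re.compile('.*Application Designer.*')
matches = [t for t in titles if pattern.match(t)]
print(matches)  # all three titles match the loose regex

exact = [t for t in titles if t == "Application Designer - Untitled"]
print(exact)    # only the intended window
```

If the regex picks up an unintended (e.g. invisible) window, operations like moving the cursor to it can fail with `SetCursorPos` errors.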
|
closed
|
2017-08-01T18:07:16Z
|
2018-11-10T08:44:11Z
|
https://github.com/pywinauto/pywinauto/issues/396
|
[
"bug"
] |
jagadeesh1983
| 8
|
microsoft/JARVIS
|
pytorch
| 33
|
KeyError: 'choices'
|
I am getting this error when using the CLI:
```
Traceback (most recent call last):
  File "awesome_chat.py", line 970, in <module>
    cli()
  File "awesome_chat.py", line 941, in cli
    answer = chat_huggingface(messages)
  File "awesome_chat.py", line 850, in chat_huggingface
    task_str = parse_task(context, input).strip()
  File "awesome_chat.py", line 297, in parse_task
    return send_request(data)
  File "awesome_chat.py", line 167, in send_request
    return response.json()["choices"][0]["text"].strip()
KeyError: 'choices'
```
(jarvis)
My prompt was this:
[ User ]: based on this image, assets\example_image.JPG , please tell me what you see.
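The `KeyError` means the LLM endpoint returned a payload without a `choices` field — usually an error object instead of a completion. A defensive parse sketch (the `error` field name in the failure branch is an assumption; check what your endpoint actually returns):

```python
def extract_text(payload):
    """Return the completion text, or raise with the server's error message."""
    if "choices" not in payload:
        # Surface whatever the server sent instead of a bare KeyError.
        raise RuntimeError(f"API error: {payload.get('error', payload)}")
    return payload["choices"][0]["text"].strip()

ok = {"choices": [{"text": "  hello  "}]}
print(extract_text(ok))  # hello

bad = {"error": {"message": "invalid api key"}}
try:
    extract_text(bad)
except RuntimeError as e:
    print(e)
```

Seeing the server's actual error (quota, auth, model name) usually makes the fix obvious.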
|
closed
|
2023-04-05T02:24:42Z
|
2023-04-05T20:34:59Z
|
https://github.com/microsoft/JARVIS/issues/33
|
[] |
MatthewCaponi
| 1
|
cvat-ai/cvat
|
pytorch
| 8,463
|
In the server, after modifying the CVAT backend code, how can I make it take effect?
|
### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
In the server, after modifying the CVAT backend code, how can I make it take effect?
### Expected Behavior
In the server, after modifying the CVAT backend code, how can I make it take effect?
### Possible Solution
In the server, after modifying the CVAT backend code, how can I make it take effect?
### Context
In the server, after modifying the CVAT backend code, how can I make it take effect?
### Environment
_No response_
|
closed
|
2024-09-22T02:53:43Z
|
2024-09-24T07:57:36Z
|
https://github.com/cvat-ai/cvat/issues/8463
|
[
"question"
] |
zhanghaihangandliuli
| 2
|
NVIDIA/pix2pixHD
|
computer-vision
| 225
|
Pretrained 1024x512 model
|
Could you please provide the pretrained model of 1024x512 resolution?
|
open
|
2020-10-04T04:16:07Z
|
2020-10-04T04:18:58Z
|
https://github.com/NVIDIA/pix2pixHD/issues/225
|
[] |
wang-zm18
| 1
|
miguelgrinberg/Flask-Migrate
|
flask
| 210
|
Click+flask Migrate command issue
|
I'm a beginner in Python and Flask, so this is probably wrong. I managed to get click working with the app factory pattern without any global variables.
I don't know if this is the right way to do things, but it somehow works.
The output is blank, so I cannot figure out what's going on.
Usage: python clitools.py --config testing run_app
```
class FlaskApp():
    def __init__(self, config_name='development', *args, **kwargs):
        self.app = create_app(config_name)
        migrate = Migrate(self.app, db)
        # manager = Manager(self.app)
        from feedback.models.user_databse import Users
        # super(FlaskGroup, self).__init__(*args, **kwargs)


@click.group()
@click.option('--config', default='development')
@click.pass_context
def cli(ctx, config):
    if not config:
        config = click.prompt("config")
    ctx.obj = FlaskApp(config_name=config)
    # pass


@cli.command('run_app', help="Run the flask app")
@click.pass_context
def run_app(ctx):
    ctx.obj.app.run()


@cli.command('create_db', help="Creates the database")
@click.pass_context
def create_db(ctx):
    # with ctx.obj.app.app_context():
    with ctx.obj.app.app_context():
        db.create_all()


@cli.command('drop_db', help="Drops the database")
@click.pass_context
def drop_db(ctx):
    if click.confirm("Are you sure"):
        click.echo("Done")
        with ctx.obj.app.app_context():
            # db.drop_all()
            db.drop_all()


@cli.command('unittest', help="Runs unit tests")
def unit_test():
    import unittest
    test_directory = 'test'
    suite = unittest.TestLoader().discover(test_directory)
    unittest.TextTestRunner(verbosity=2).run(suite)


# MigrateCommand returns FakeCommand()
@cli.command('migratecommand')
@click.pass_context
def migratedb(ctx):
    # migrate = Migrate(ctx.obj.app, db)
    mcommand = Manager(usage="Perform db migrations")
    return mcommand


if __name__ == "__main__":
    cli()
```
I cannot figure out how to expose the Flask-Migrate commands through this click group.
|
closed
|
2018-06-09T18:36:00Z
|
2019-01-13T22:20:32Z
|
https://github.com/miguelgrinberg/Flask-Migrate/issues/210
|
[
"question",
"auto-closed"
] |
afro-coder
| 3
|
jupyter-incubator/sparkmagic
|
jupyter
| 818
|
Jupyterlab 4.0.2 python 3.10
|
**Describe the bug**
%%sql magic function not working
```TypeError: required field "type_ignores" missing from Module```
**To Reproduce**
```
%%sql
show databases;
```
**Expected behavior**
Works as it did on Python 3.6
**Screenshots**

**Versions:**
- sparkmagic 0.20.5
- Livy (if you know it) custom build for 3.3.0 spark
- Spark 3.3.0
- pyspark 3.3.0
- ipython 8.14.0
- jupyterlab 4.0.2
- python 3.10
I have tried downgrading the ipython package to 7.33.0, but it did not help.
Any idea what the reason is?
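The `required field "type_ignores" missing from Module` error is characteristic of code that builds an `ast.Module` node by hand: Python 3.8 added a mandatory `type_ignores` field that must be supplied before compiling. A minimal stdlib reproduction of the fix (this illustrates the root cause only; sparkmagic's own internals would need a patched release):

```python
import ast

stmt = ast.parse("x = 1 + 1").body[0]

# Code written for Python < 3.8 often did ast.Module([stmt]); on 3.8+ the
# Module node gained a mandatory type_ignores field, and compiling a node
# built without it raises: required field "type_ignores" missing from Module.
mod = ast.Module(body=[stmt], type_ignores=[])  # the 3.8+-compatible form
ast.fix_missing_locations(mod)

ns = {}
exec(compile(mod, "<ast>", "exec"), ns)
print(ns["x"])  # 2
```

So downgrading ipython alone won't help if the offending `ast.Module(...)` call lives in a dependency built for older Pythons.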
|
open
|
2023-06-22T14:31:09Z
|
2023-09-15T07:16:08Z
|
https://github.com/jupyter-incubator/sparkmagic/issues/818
|
[] |
Armadik
| 1
|
postmanlabs/httpbin
|
api
| 168
|
Add newline to end of page
|
I love httpbin, but one thing that bothers me is that the pages do not end with a linebreak. Some clients throw warnings, some might not even parse the final line.
There are some [good arguments](https://stackoverflow.com/questions/729692/why-should-files-end-with-a-newline) why text files should end with an EOL. From a JSON perspective it doesn't matter; all whitespace is optional. Would it be possible to add this?
```r
Warning message:
In readLines(curl("http://httpbin.org/get")) :
incomplete final line found on 'http://httpbin.org/get'
```
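On the server side this amounts to appending a single `\n` to each rendered body when one is missing; a tiny helper sketch (not httpbin's actual code, just the requested behavior):

```python
def with_trailing_newline(body: str) -> str:
    """Ensure a text response ends with exactly one trailing newline."""
    return body if body.endswith("\n") else body + "\n"

print(repr(with_trailing_newline('{"url": "http://httpbin.org/get"}')))
print(repr(with_trailing_newline("already terminated\n")))  # unchanged
```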
|
closed
|
2014-11-19T07:57:15Z
|
2018-04-26T17:51:03Z
|
https://github.com/postmanlabs/httpbin/issues/168
|
[] |
jeroen
| 5
|
scikit-image/scikit-image
|
computer-vision
| 7,305
|
SimpleITK and PIL tests are testing with tifffile
|
### Description:
If `plugin=None` (default) and a path ending in `.tif` are passed to `skimage.io.imread` the plugin order is ignored because of these lines https://github.com/scikit-image/scikit-image/blob/f4c1b34ac968d9fda332d7d9a63c83499aaac1f6/skimage/io/_io.py#L54-L56
This looks like unintended behavior at a first glance. Because at least [one test](https://github.com/scikit-image/scikit-image/blob/f4c1b34ac968d9fda332d7d9a63c83499aaac1f6/skimage/io/tests/test_pil.py#L168-L172) in `test_pil.py` is "secretly" using tifffile's imread. The autofixture calling and setting the plugin order via `use_plugin("pil")` has no effect here.
Forcing the correct plugin with `plugin="pil"` in [this line](https://github.com/scikit-image/scikit-image/blob/f4c1b34ac968d9fda332d7d9a63c83499aaac1f6/skimage/io/tests/test_pil.py#L170) will make the test fail because of the wrong dtype `>u2` (big-endian) where apparently `<u2` (little-endian) is expected? I'm not sure if this is the original intention of the test...
<details><summary>Test output</summary>
<p>
```python
def test_imread_uint16_big_endian():
expected = np.load(fetch('data/chessboard_GRAY_U8.npy'))
img = imread(fetch('data/chessboard_GRAY_U16B.tif'), plugin="pil")
> assert img.dtype == np.uint16
E AssertionError: assert dtype('>u2') == <class 'numpy.uint16'>
E + where dtype('>u2') = array([[255, 255, 255, ..., 0, 0, 0],\n [255, 255, 255, ..., 0, 0, 0],\n [255, 255, 255, ..., 0, 0, 0],\n ...,\n [ 0, 0, 0, ..., 255, 255, 255],\n [ 0, 0, 0, ..., 255, 255, 255],\n [ 0, 0, 0, ..., 255, 255, 255]], dtype='>u2').dtype
E + and <class 'numpy.uint16'> = np.uint16
skimage/io/tests/test_pil.py:171: AssertionError
```
</p>
</details>
Putting a `raise` in tifffile's imread shows that the following tests use the wrong plugin:
```
FAILED test_pil.py::test_imread_separate_channels[False] - RuntimeError
FAILED test_pil.py::test_imread_separate_channels[True] - RuntimeError
FAILED test_pil.py::test_imread_multipage_rgb_tif - RuntimeError
FAILED test_pil.py::test_imread_uint16 - RuntimeError
FAILED test_pil.py::test_imread_uint16_big_endian - RuntimeError
FAILED test_simpleitk.py::test_imread_uint16 - RuntimeError
FAILED test_simpleitk.py::test_imread_uint16_big_endian - RuntimeError
```
|
open
|
2024-01-22T19:25:42Z
|
2024-07-21T02:29:38Z
|
https://github.com/scikit-image/scikit-image/issues/7305
|
[
":sleeping: Dormant",
":bug: Bug"
] |
lagru
| 1
|
roboflow/supervision
|
deep-learning
| 1,558
|
too many values to unpack (expected 5)
|
### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar bug report.
### Bug
I was using Supervision for inference of a YOLOv8 model on sample video data. When I try to run the program using the code below, it shows me this error:
```
for _, _, confidence, class_id, _
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: too many values to unpack (expected 5)
```
I have gone through the updated code in `Detections/core.py` but haven't found exactly why this error comes up.

I also tried downgrading `supervision` to version `0.3.0`, but that created more conflicts, as the Python and PyTorch versions clash as well.
### Environment
- supervision==0.23.0
- opencv-python==4.10.0.84
- ultralytics==8.3.1
- python==3.12.0
### Minimal Reproducible Example
```python
import supervision as sv
import numpy as np
from ultralytics import YOLO

video_path = r'det/data/football.mp4'
model = YOLO("yolov8s.pt")
video_info = sv.VideoInfo.from_video_path(video_path)

def process_frame(frame: np.ndarray, _) -> np.ndarray:
    results = model(frame, imgsz=1280)[0]
    detections = sv.Detections.from_ultralytics(results)
    box_annotator = sv.BoundingBoxAnnotator()
    labels = [
        f"{model.names[class_id]} {confidence:0.2f}"
        for _, _, confidence, class_id, _
        in detections
    ]
    frame = box_annotator.annotate(scene=frame, detections=detections, labels=labels)
    return frame

sv.process_video(source_path=video_path, target_path="result.mp4", callback=process_frame)
```
### Additional
_No response_
### Are you willing to submit a PR?
- [X] Yes I'd like to help by submitting a PR!
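Since iterating a `Detections` object yields a different number of items across supervision versions (newer releases yield more than five), a version-proof approach is to read the `confidence` and `class_id` arrays directly instead of tuple-unpacking. A sketch with plain lists standing in for the numpy arrays (`names` mimics `model.names`; the values are made up):

```python
# Stand-ins for detections.confidence / detections.class_id arrays
# and the model's id-to-name mapping.
confidences = [0.91, 0.62, 0.78]
class_ids = [0, 32, 0]
names = {0: "person", 32: "sports ball"}

labels = [
    f"{names[class_id]} {confidence:0.2f}"
    for confidence, class_id in zip(confidences, class_ids)
]
print(labels)  # ['person 0.91', 'sports ball 0.62', 'person 0.78']
```

In the issue's code, that means `zip(detections.confidence, detections.class_id)` in place of unpacking five items per detection.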
|
closed
|
2024-09-30T18:40:27Z
|
2024-10-01T16:58:16Z
|
https://github.com/roboflow/supervision/issues/1558
|
[
"bug"
] |
0xD4rky
| 4
|
tensorpack/tensorpack
|
tensorflow
| 1,208
|
FasterRCNN training, optimizer shape error.
|
Hello all!
First of all thank you for this project.
Brief: I converted my custom dataset into a COCO-like format and got an optimizer shape error.
Also, every time I start the train.py script, the shape is different.
About the dataset:
* Images in the dataset have the same shape.
* Segmentation is missing (replaced it in the JSON with an empty 2D list)
* iscrowd: 0
Could someone please suggest what might cause such an error?
PS: meanwhile I'm downloading COCO 2014 to rule out a dataset issue
### 1. What you did:
I converted my custom dataset to COCO-like and started training.
I run train.py w/o parameters
Modified config according to custom dataset (path, classes)
### 2. What you observed:
~~~~
/home/vstupakov/miniconda3/envs/gen3_gpu/bin/python /home/vstupakov/Projects/Gen3/gen3/detection/tensorflow_models/faster_rcnn_tensorpack/train.py
[0523 13:31:05 @logger.py:90] Argv: /home/vstupakov/Projects/Gen3/gen3/detection/tensorflow_models/faster_rcnn_tensorpack/train.py
[0523 13:31:05 @train.py:66] Environment Information:
-------------------- -------------------------------------------------------------------
/home/vstupakov/miniconda3/envs/gen3_gpu/bin/python /home/vstupakov/Projects/Gen3/gen3/detection/tensorflow_models/faster_rcnn_tensorpack/train.py
[0523 13:56:41 @logger.py:90] Argv: /home/vstupakov/Projects/Gen3/gen3/detection/tensorflow_models/faster_rcnn_tensorpack/train.py
[0523 13:56:41 @train.py:66] Environment Information:
-------------------- -------------------------------------------------------------------
sys.platform linux
Python 3.6.8 |Anaconda, Inc.| (default, Dec 30 2018, 01:22:34) [GCC 7.3.0]
Tensorpack v0.9.4-68-g2981c5d4-dirty
Numpy 1.16.3
TensorFlow 1.13.1/b'unknown'
TF Compiler Version 5.4.0
TF CUDA support True
TF MKL support False
Nvidia Driver /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.418.56
CUDA /home/vstupakov/miniconda3/envs/gen3_gpu/lib/libcudart.so.10.0.130
CUDNN /home/vstupakov/miniconda3/envs/gen3_gpu/lib/libcudnn.so.7.3.1
NCCL
CUDA_VISIBLE_DEVICES None
GPU 0 Quadro GV100
Free RAM 23.33/31.26 GB
CPU Count 12
cv2 4.1.0
msgpack 0.6.1
python-prctl True
-------------------- -------------------------------------------------------------------
[0523 13:56:41 @config.py:282] Config: ------------------------------------------
{'BACKBONE': {'FREEZE_AFFINE': False,
'FREEZE_AT': 2,
'NORM': 'FreezeBN',
'RESNET_NUM_BLOCKS': [3, 4, 6, 3],
'STRIDE_1X1': False,
'TF_PAD_MODE': False,
'WEIGHTS': ''},
'CASCADE': {'BBOX_REG_WEIGHTS': [[10.0, 10.0, 5.0, 5.0], [20.0, 20.0, 10.0, 10.0],
[30.0, 30.0, 15.0, 15.0]],
'IOUS': [0.5, 0.6, 0.7]},
'DATA': {'ABSOLUTE_COORD': True,
'BASEDIR': '/tmp/converter_test',
'CLASS_NAMES': ['BG', '1', '2', '3'],
'NUM_CATEGORY': 3,
'NUM_WORKERS': 0,
'TRAIN': ('cocolike_train',),
'VAL': ('cocolike_val',)},
'FPN': {'ANCHOR_STRIDES': (4, 8, 16, 32, 64),
'CASCADE': False,
'FRCNN_CONV_HEAD_DIM': 256,
'FRCNN_FC_HEAD_DIM': 1024,
'FRCNN_HEAD_FUNC': 'fastrcnn_2fc_head',
'MRCNN_HEAD_FUNC': 'maskrcnn_up4conv_head',
'NORM': 'None',
'NUM_CHANNEL': 256,
'PROPOSAL_MODE': 'Level',
'RESOLUTION_REQUIREMENT': 32},
'FRCNN': {'BATCH_PER_IM': 512,
'BBOX_REG_WEIGHTS': [10.0, 10.0, 5.0, 5.0],
'FG_RATIO': 0.25,
'FG_THRESH': 0.5},
'MODE_FPN': False,
'MODE_MASK': False,
'MRCNN': {'HEAD_DIM': 256},
'PREPROC': {'MAX_SIZE': 1333,
'PIXEL_MEAN': [123.675, 116.28, 103.53],
'PIXEL_STD': [58.395, 57.12, 57.375],
'TEST_SHORT_EDGE_SIZE': 800,
'TRAIN_SHORT_EDGE_SIZE': [800, 800]},
'RPN': {'ANCHOR_RATIOS': (0.5, 1.0, 2.0),
'ANCHOR_SIZES': (4, 8, 16, 32, 64),
'ANCHOR_STRIDE': 16,
'BATCH_PER_IM': 256,
'CROWD_OVERLAP_THRESH': 9.99,
'FG_RATIO': 0.5,
'HEAD_DIM': 1024,
'MIN_SIZE': 0,
'NEGATIVE_ANCHOR_THRESH': 0.3,
'NUM_ANCHOR': 15,
'POSITIVE_ANCHOR_THRESH': 0.7,
'PROPOSAL_NMS_THRESH': 0.7,
'TEST_PER_LEVEL_NMS_TOPK': 1000,
'TEST_POST_NMS_TOPK': 1000,
'TEST_PRE_NMS_TOPK': 6000,
'TRAIN_PER_LEVEL_NMS_TOPK': 2000,
'TRAIN_POST_NMS_TOPK': 2000,
'TRAIN_PRE_NMS_TOPK': 12000},
'TEST': {'FRCNN_NMS_THRESH': 0.5,
'RESULTS_PER_IM': 100,
'RESULT_SCORE_THRESH': 0.05,
'RESULT_SCORE_THRESH_VIS': 0.5},
'TRAIN': {'BASE_LR': 0.01,
'EVAL_PERIOD': 25,
'LR_SCHEDULE': [240000, 320000, 360000],
'NUM_GPUS': 1,
'STARTING_EPOCH': 1,
'STEPS_PER_EPOCH': 500,
'WARMUP': 1000,
'WARMUP_INIT_LR': 0.0033000000000000004,
'WEIGHT_DECAY': 0.0001},
'TRAINER': 'replicated'}
[0523 13:56:41 @train.py:83] Warm Up Schedule (steps, value): [(0, 0.0033000000000000004), (1000, 0.01)]
[0523 13:56:41 @train.py:84] LR Schedule (epochs, value): [(2, 0.01), (3840.0, 0.001), (5120.0, 0.00010000000000000002)]
loading annotations into memory...
Done (t=0.75s)
creating index...
index created!
[0523 13:56:42 @coco.py:71] Instances loaded from /tmp/converter_test/annotations/instances_train.json.
99%|█████████▊| 848/860 [00:02<00:00, 252.31it/s][0523 13:56:45 @timer.py:50] Load Groundtruth Boxes for train finished, time:2.7646 sec.
100%|██████████| 860/860 [00:02<00:00, 312.21it/s]
[0523 13:56:45 @data.py:54] Ground-Truth Boxes:
| class | #box |
|:--------|-------:|
| 1 | 68741 |
| 2 | 7047 |
| 3 | 0 |
| total | 75788 |
[0523 13:56:45 @data.py:383] Filtered 0 images which contain no non-crowd groudtruth boxes. Total #images for training: 860
[0523 13:56:45 @train.py:88] Total passes of the training set is: 3348.8
[0523 13:56:45 @prof.py:49] WRN [GPUUtilizationTracker] Both devices and CUDA_VISIBLE_DEVICES are None! Will monitor all 1 visible GPUs!
[0523 13:56:45 @input_source.py:223] Setting up the queue 'QueueInput/input_queue' for CPU prefetching ...
WARNING:tensorflow:From /home/vstupakov/miniconda3/envs/gen3_gpu/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
[0523 13:56:45 @training.py:110] Building graph for training tower 0 on device /gpu:0 ...
[0523 13:56:45 @registry.py:135] conv0 input: [1, 3, None, None]
[0523 13:56:45 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:45 @registry.py:143] conv0 output: [1, 64, None, None]
[0523 13:56:45 @registry.py:135] pool0 input: [1, 64, None, None]
[0523 13:56:45 @registry.py:143] pool0 output: [1, 64, None, None]
[0523 13:56:45 @registry.py:135] group0/block0/conv1 input: [1, 64, None, None]
[0523 13:56:45 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:45 @registry.py:143] group0/block0/conv1 output: [1, 64, None, None]
[0523 13:56:45 @registry.py:135] group0/block0/conv2 input: [1, 64, None, None]
[0523 13:56:45 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:45 @registry.py:143] group0/block0/conv2 output: [1, 64, None, None]
[0523 13:56:45 @registry.py:135] group0/block0/conv3 input: [1, 64, None, None]
[0523 13:56:45 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:45 @registry.py:143] group0/block0/conv3 output: [1, 256, None, None]
[0523 13:56:45 @registry.py:135] group0/block0/convshortcut input: [1, 64, None, None]
[0523 13:56:45 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:45 @registry.py:143] group0/block0/convshortcut output: [1, 256, None, None]
[0523 13:56:45 @registry.py:135] group0/block1/conv1 input: [1, 256, None, None]
[0523 13:56:45 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:45 @registry.py:143] group0/block1/conv1 output: [1, 64, None, None]
[0523 13:56:45 @registry.py:135] group0/block1/conv2 input: [1, 64, None, None]
[0523 13:56:45 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:45 @registry.py:143] group0/block1/conv2 output: [1, 64, None, None]
[0523 13:56:45 @registry.py:135] group0/block1/conv3 input: [1, 64, None, None]
[0523 13:56:45 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:45 @registry.py:143] group0/block1/conv3 output: [1, 256, None, None]
[0523 13:56:45 @registry.py:135] group0/block2/conv1 input: [1, 256, None, None]
[0523 13:56:45 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:45 @registry.py:143] group0/block2/conv1 output: [1, 64, None, None]
[0523 13:56:45 @registry.py:135] group0/block2/conv2 input: [1, 64, None, None]
[0523 13:56:45 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:45 @registry.py:143] group0/block2/conv2 output: [1, 64, None, None]
[0523 13:56:45 @registry.py:135] group0/block2/conv3 input: [1, 64, None, None]
[0523 13:56:45 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:45 @registry.py:143] group0/block2/conv3 output: [1, 256, None, None]
[0523 13:56:45 @registry.py:135] group1/block0/conv1 input: [1, 256, None, None]
[0523 13:56:45 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:45 @registry.py:143] group1/block0/conv1 output: [1, 128, None, None]
[0523 13:56:45 @registry.py:135] group1/block0/conv2 input: [1, 128, None, None]
[0523 13:56:45 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:45 @registry.py:143] group1/block0/conv2 output: [1, 128, None, None]
[0523 13:56:45 @registry.py:135] group1/block0/conv3 input: [1, 128, None, None]
[0523 13:56:45 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:45 @registry.py:143] group1/block0/conv3 output: [1, 512, None, None]
[0523 13:56:45 @registry.py:135] group1/block0/convshortcut input: [1, 256, None, None]
[0523 13:56:45 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:45 @registry.py:143] group1/block0/convshortcut output: [1, 512, None, None]
[0523 13:56:45 @registry.py:135] group1/block1/conv1 input: [1, 512, None, None]
[0523 13:56:45 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:45 @registry.py:143] group1/block1/conv1 output: [1, 128, None, None]
[0523 13:56:45 @registry.py:135] group1/block1/conv2 input: [1, 128, None, None]
[0523 13:56:45 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:45 @registry.py:143] group1/block1/conv2 output: [1, 128, None, None]
[0523 13:56:45 @registry.py:135] group1/block1/conv3 input: [1, 128, None, None]
[0523 13:56:45 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:45 @registry.py:143] group1/block1/conv3 output: [1, 512, None, None]
[0523 13:56:45 @registry.py:135] group1/block2/conv1 input: [1, 512, None, None]
[0523 13:56:45 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:45 @registry.py:143] group1/block2/conv1 output: [1, 128, None, None]
[0523 13:56:45 @registry.py:135] group1/block2/conv2 input: [1, 128, None, None]
[0523 13:56:45 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:45 @registry.py:143] group1/block2/conv2 output: [1, 128, None, None]
[0523 13:56:45 @registry.py:135] group1/block2/conv3 input: [1, 128, None, None]
[0523 13:56:45 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:45 @registry.py:143] group1/block2/conv3 output: [1, 512, None, None]
[0523 13:56:45 @registry.py:135] group1/block3/conv1 input: [1, 512, None, None]
[0523 13:56:45 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:46 @registry.py:143] group1/block3/conv1 output: [1, 128, None, None]
[0523 13:56:46 @registry.py:135] group1/block3/conv2 input: [1, 128, None, None]
[0523 13:56:46 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:46 @registry.py:143] group1/block3/conv2 output: [1, 128, None, None]
[0523 13:56:46 @registry.py:135] group1/block3/conv3 input: [1, 128, None, None]
[0523 13:56:46 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:46 @registry.py:143] group1/block3/conv3 output: [1, 512, None, None]
[0523 13:56:46 @registry.py:135] group2/block0/conv1 input: [1, 512, None, None]
[0523 13:56:46 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:46 @registry.py:143] group2/block0/conv1 output: [1, 256, None, None]
[0523 13:56:46 @registry.py:135] group2/block0/conv2 input: [1, 256, None, None]
[0523 13:56:46 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:46 @registry.py:143] group2/block0/conv2 output: [1, 256, None, None]
[0523 13:56:46 @registry.py:135] group2/block0/conv3 input: [1, 256, None, None]
[0523 13:56:46 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:46 @registry.py:143] group2/block0/conv3 output: [1, 1024, None, None]
[0523 13:56:46 @registry.py:135] group2/block0/convshortcut input: [1, 512, None, None]
[0523 13:56:46 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:46 @registry.py:143] group2/block0/convshortcut output: [1, 1024, None, None]
[0523 13:56:46 @registry.py:135] group2/block1/conv1 input: [1, 1024, None, None]
[0523 13:56:46 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:46 @registry.py:143] group2/block1/conv1 output: [1, 256, None, None]
[0523 13:56:46 @registry.py:135] group2/block1/conv2 input: [1, 256, None, None]
[0523 13:56:46 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:46 @registry.py:143] group2/block1/conv2 output: [1, 256, None, None]
[0523 13:56:46 @registry.py:135] group2/block1/conv3 input: [1, 256, None, None]
[0523 13:56:46 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:46 @registry.py:143] group2/block1/conv3 output: [1, 1024, None, None]
[0523 13:56:46 @registry.py:135] group2/block2/conv1 input: [1, 1024, None, None]
[0523 13:56:46 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:46 @registry.py:143] group2/block2/conv1 output: [1, 256, None, None]
[0523 13:56:46 @registry.py:135] group2/block2/conv2 input: [1, 256, None, None]
[0523 13:56:46 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:46 @registry.py:143] group2/block2/conv2 output: [1, 256, None, None]
[0523 13:56:46 @registry.py:135] group2/block2/conv3 input: [1, 256, None, None]
[0523 13:56:46 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:46 @registry.py:143] group2/block2/conv3 output: [1, 1024, None, None]
[0523 13:56:46 @registry.py:135] group2/block3/conv1 input: [1, 1024, None, None]
[0523 13:56:46 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:46 @registry.py:143] group2/block3/conv1 output: [1, 256, None, None]
[0523 13:56:46 @registry.py:135] group2/block3/conv2 input: [1, 256, None, None]
[0523 13:56:46 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:46 @registry.py:143] group2/block3/conv2 output: [1, 256, None, None]
[0523 13:56:46 @registry.py:135] group2/block3/conv3 input: [1, 256, None, None]
[0523 13:56:46 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:46 @registry.py:143] group2/block3/conv3 output: [1, 1024, None, None]
[0523 13:56:46 @registry.py:135] group2/block4/conv1 input: [1, 1024, None, None]
[0523 13:56:46 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:46 @registry.py:143] group2/block4/conv1 output: [1, 256, None, None]
[0523 13:56:46 @registry.py:135] group2/block4/conv2 input: [1, 256, None, None]
[0523 13:56:46 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:46 @registry.py:143] group2/block4/conv2 output: [1, 256, None, None]
[0523 13:56:46 @registry.py:135] group2/block4/conv3 input: [1, 256, None, None]
[0523 13:56:46 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:46 @registry.py:143] group2/block4/conv3 output: [1, 1024, None, None]
[0523 13:56:46 @registry.py:135] group2/block5/conv1 input: [1, 1024, None, None]
[0523 13:56:46 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:46 @registry.py:143] group2/block5/conv1 output: [1, 256, None, None]
[0523 13:56:46 @registry.py:135] group2/block5/conv2 input: [1, 256, None, None]
[0523 13:56:46 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:46 @registry.py:143] group2/block5/conv2 output: [1, 256, None, None]
[0523 13:56:46 @registry.py:135] group2/block5/conv3 input: [1, 256, None, None]
[0523 13:56:46 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:46 @registry.py:143] group2/block5/conv3 output: [1, 1024, None, None]
[0523 13:56:46 @registry.py:135] rpn input: [1, 1024, None, None]
[0523 13:56:46 @registry.py:135] rpn/conv0 input: [1, 1024, None, None]
[0523 13:56:46 @registry.py:143] rpn/conv0 output: [1, 1024, None, None]
[0523 13:56:46 @registry.py:135] rpn/class input: [1, 1024, None, None]
[0523 13:56:46 @registry.py:143] rpn/class output: [1, 15, None, None]
[0523 13:56:46 @registry.py:135] rpn/box input: [1, 1024, None, None]
[0523 13:56:46 @registry.py:143] rpn/box output: [1, 60, None, None]
[0523 13:56:46 @registry.py:143] rpn output: [None, None, 15],[None, None, 15, 4]
WARNING:tensorflow:From /home/vstupakov/miniconda3/envs/gen3_gpu/lib/python3.6/site-packages/tensorflow/python/ops/losses/losses_impl.py:448: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
[0523 13:56:47 @registry.py:135] group3/block0/conv1 input: [None, 1024, 14, 14]
[0523 13:56:47 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:47 @registry.py:143] group3/block0/conv1 output: [None, 512, 14, 14]
[0523 13:56:47 @registry.py:135] group3/block0/conv2 input: [None, 512, 15, 15]
[0523 13:56:47 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:47 @registry.py:143] group3/block0/conv2 output: [None, 512, 7, 7]
[0523 13:56:47 @registry.py:135] group3/block0/conv3 input: [None, 512, 7, 7]
[0523 13:56:47 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:47 @registry.py:143] group3/block0/conv3 output: [None, 2048, 7, 7]
[0523 13:56:47 @registry.py:135] group3/block0/convshortcut input: [None, 1024, 13, 13]
[0523 13:56:47 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:47 @registry.py:143] group3/block0/convshortcut output: [None, 2048, 7, 7]
[0523 13:56:47 @registry.py:135] group3/block1/conv1 input: [None, 2048, 7, 7]
[0523 13:56:47 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:47 @registry.py:143] group3/block1/conv1 output: [None, 512, 7, 7]
[0523 13:56:47 @registry.py:135] group3/block1/conv2 input: [None, 512, 7, 7]
[0523 13:56:47 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:47 @registry.py:143] group3/block1/conv2 output: [None, 512, 7, 7]
[0523 13:56:47 @registry.py:135] group3/block1/conv3 input: [None, 512, 7, 7]
[0523 13:56:47 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:47 @registry.py:143] group3/block1/conv3 output: [None, 2048, 7, 7]
[0523 13:56:47 @registry.py:135] group3/block2/conv1 input: [None, 2048, 7, 7]
[0523 13:56:47 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:47 @registry.py:143] group3/block2/conv1 output: [None, 512, 7, 7]
[0523 13:56:47 @registry.py:135] group3/block2/conv2 input: [None, 512, 7, 7]
[0523 13:56:47 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:47 @registry.py:143] group3/block2/conv2 output: [None, 512, 7, 7]
[0523 13:56:47 @registry.py:135] group3/block2/conv3 input: [None, 512, 7, 7]
[0523 13:56:47 @batch_norm.py:213] WRN [BatchNorm] Using moving_mean/moving_variance in training.
[0523 13:56:47 @registry.py:143] group3/block2/conv3 output: [None, 2048, 7, 7]
[0523 13:56:47 @registry.py:135] gap input: [None, 2048, 7, 7]
[0523 13:56:47 @registry.py:143] gap output: [None, 2048]
[0523 13:56:47 @registry.py:135] fastrcnn input: [None, 2048]
[0523 13:56:47 @registry.py:135] fastrcnn/class input: [None, 2048]
[0523 13:56:47 @registry.py:143] fastrcnn/class output: [None, 4]
[0523 13:56:47 @registry.py:135] fastrcnn/box input: [None, 2048]
[0523 13:56:47 @registry.py:143] fastrcnn/box output: [None, 16]
[0523 13:56:47 @registry.py:143] fastrcnn output: [None, 4],[None, 4, 4]
[0523 13:56:47 @regularize.py:97] regularize_cost() found 47 variables to regularize.
[0523 13:56:47 @regularize.py:21] The following tensors will be regularized: group1/block0/conv1/W:0, group1/block0/conv2/W:0, group1/block0/conv3/W:0, group1/block0/convshortcut/W:0, group1/block1/conv1/W:0, group1/block1/conv2/W:0, group1/block1/conv3/W:0, group1/block2/conv1/W:0, group1/block2/conv2/W:0, group1/block2/conv3/W:0, group1/block3/conv1/W:0, group1/block3/conv2/W:0, group1/block3/conv3/W:0, group2/block0/conv1/W:0, group2/block0/conv2/W:0, group2/block0/conv3/W:0, group2/block0/convshortcut/W:0, group2/block1/conv1/W:0, group2/block1/conv2/W:0, group2/block1/conv3/W:0, group2/block2/conv1/W:0, group2/block2/conv2/W:0, group2/block2/conv3/W:0, group2/block3/conv1/W:0, group2/block3/conv2/W:0, group2/block3/conv3/W:0, group2/block4/conv1/W:0, group2/block4/conv2/W:0, group2/block4/conv3/W:0, group2/block5/conv1/W:0, group2/block5/conv2/W:0, group2/block5/conv3/W:0, rpn/conv0/W:0, rpn/class/W:0, rpn/box/W:0, group3/block0/conv1/W:0, group3/block0/conv2/W:0, group3/block0/conv3/W:0, group3/block0/convshortcut/W:0, group3/block1/conv1/W:0, group3/block1/conv2/W:0, group3/block1/conv3/W:0, group3/block2/conv1/W:0, group3/block2/conv2/W:0, group3/block2/conv3/W:0, fastrcnn/class/W:0, fastrcnn/box/W:0
WARNING:tensorflow:From /home/vstupakov/miniconda3/envs/gen3_gpu/lib/python3.6/site-packages/tensorflow/python/ops/array_grad.py:425: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
/home/vstupakov/miniconda3/envs/gen3_gpu/lib/python3.6/site-packages/tensorflow/python/ops/gradients_impl.py:110: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
"Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
[0523 13:56:50 @training.py:348] 'sync_variables_from_main_tower' includes 0 operations.
[0523 13:56:50 @model_utils.py:67] List of Trainable Variables:
name shape #elements
------------------------------------- ------------------ -----------
group1/block0/conv1/W:0 [1, 1, 256, 128] 32768
group1/block0/conv1/bn/gamma:0 [128] 128
group1/block0/conv1/bn/beta:0 [128] 128
group1/block0/conv2/W:0 [3, 3, 128, 128] 147456
group1/block0/conv2/bn/gamma:0 [128] 128
group1/block0/conv2/bn/beta:0 [128] 128
group1/block0/conv3/W:0 [1, 1, 128, 512] 65536
group1/block0/conv3/bn/gamma:0 [512] 512
group1/block0/conv3/bn/beta:0 [512] 512
group1/block0/convshortcut/W:0 [1, 1, 256, 512] 131072
group1/block0/convshortcut/bn/gamma:0 [512] 512
group1/block0/convshortcut/bn/beta:0 [512] 512
group1/block1/conv1/W:0 [1, 1, 512, 128] 65536
group1/block1/conv1/bn/gamma:0 [128] 128
group1/block1/conv1/bn/beta:0 [128] 128
group1/block1/conv2/W:0 [3, 3, 128, 128] 147456
group1/block1/conv2/bn/gamma:0 [128] 128
group1/block1/conv2/bn/beta:0 [128] 128
group1/block1/conv3/W:0 [1, 1, 128, 512] 65536
group1/block1/conv3/bn/gamma:0 [512] 512
group1/block1/conv3/bn/beta:0 [512] 512
group1/block2/conv1/W:0 [1, 1, 512, 128] 65536
group1/block2/conv1/bn/gamma:0 [128] 128
group1/block2/conv1/bn/beta:0 [128] 128
group1/block2/conv2/W:0 [3, 3, 128, 128] 147456
group1/block2/conv2/bn/gamma:0 [128] 128
group1/block2/conv2/bn/beta:0 [128] 128
group1/block2/conv3/W:0 [1, 1, 128, 512] 65536
group1/block2/conv3/bn/gamma:0 [512] 512
group1/block2/conv3/bn/beta:0 [512] 512
group1/block3/conv1/W:0 [1, 1, 512, 128] 65536
group1/block3/conv1/bn/gamma:0 [128] 128
group1/block3/conv1/bn/beta:0 [128] 128
group1/block3/conv2/W:0 [3, 3, 128, 128] 147456
group1/block3/conv2/bn/gamma:0 [128] 128
group1/block3/conv2/bn/beta:0 [128] 128
group1/block3/conv3/W:0 [1, 1, 128, 512] 65536
group1/block3/conv3/bn/gamma:0 [512] 512
group1/block3/conv3/bn/beta:0 [512] 512
group2/block0/conv1/W:0 [1, 1, 512, 256] 131072
group2/block0/conv1/bn/gamma:0 [256] 256
group2/block0/conv1/bn/beta:0 [256] 256
group2/block0/conv2/W:0 [3, 3, 256, 256] 589824
group2/block0/conv2/bn/gamma:0 [256] 256
group2/block0/conv2/bn/beta:0 [256] 256
group2/block0/conv3/W:0 [1, 1, 256, 1024] 262144
group2/block0/conv3/bn/gamma:0 [1024] 1024
group2/block0/conv3/bn/beta:0 [1024] 1024
group2/block0/convshortcut/W:0 [1, 1, 512, 1024] 524288
group2/block0/convshortcut/bn/gamma:0 [1024] 1024
group2/block0/convshortcut/bn/beta:0 [1024] 1024
group2/block1/conv1/W:0 [1, 1, 1024, 256] 262144
group2/block1/conv1/bn/gamma:0 [256] 256
group2/block1/conv1/bn/beta:0 [256] 256
group2/block1/conv2/W:0 [3, 3, 256, 256] 589824
group2/block1/conv2/bn/gamma:0 [256] 256
group2/block1/conv2/bn/beta:0 [256] 256
group2/block1/conv3/W:0 [1, 1, 256, 1024] 262144
group2/block1/conv3/bn/gamma:0 [1024] 1024
group2/block1/conv3/bn/beta:0 [1024] 1024
group2/block2/conv1/W:0 [1, 1, 1024, 256] 262144
group2/block2/conv1/bn/gamma:0 [256] 256
group2/block2/conv1/bn/beta:0 [256] 256
group2/block2/conv2/W:0 [3, 3, 256, 256] 589824
group2/block2/conv2/bn/gamma:0 [256] 256
group2/block2/conv2/bn/beta:0 [256] 256
group2/block2/conv3/W:0 [1, 1, 256, 1024] 262144
group2/block2/conv3/bn/gamma:0 [1024] 1024
group2/block2/conv3/bn/beta:0 [1024] 1024
group2/block3/conv1/W:0 [1, 1, 1024, 256] 262144
group2/block3/conv1/bn/gamma:0 [256] 256
group2/block3/conv1/bn/beta:0 [256] 256
group2/block3/conv2/W:0 [3, 3, 256, 256] 589824
group2/block3/conv2/bn/gamma:0 [256] 256
group2/block3/conv2/bn/beta:0 [256] 256
group2/block3/conv3/W:0 [1, 1, 256, 1024] 262144
group2/block3/conv3/bn/gamma:0 [1024] 1024
group2/block3/conv3/bn/beta:0 [1024] 1024
group2/block4/conv1/W:0 [1, 1, 1024, 256] 262144
group2/block4/conv1/bn/gamma:0 [256] 256
group2/block4/conv1/bn/beta:0 [256] 256
group2/block4/conv2/W:0 [3, 3, 256, 256] 589824
group2/block4/conv2/bn/gamma:0 [256] 256
group2/block4/conv2/bn/beta:0 [256] 256
group2/block4/conv3/W:0 [1, 1, 256, 1024] 262144
group2/block4/conv3/bn/gamma:0 [1024] 1024
group2/block4/conv3/bn/beta:0 [1024] 1024
group2/block5/conv1/W:0 [1, 1, 1024, 256] 262144
group2/block5/conv1/bn/gamma:0 [256] 256
group2/block5/conv1/bn/beta:0 [256] 256
group2/block5/conv2/W:0 [3, 3, 256, 256] 589824
group2/block5/conv2/bn/gamma:0 [256] 256
group2/block5/conv2/bn/beta:0 [256] 256
group2/block5/conv3/W:0 [1, 1, 256, 1024] 262144
group2/block5/conv3/bn/gamma:0 [1024] 1024
group2/block5/conv3/bn/beta:0 [1024] 1024
rpn/conv0/W:0 [3, 3, 1024, 1024] 9437184
rpn/conv0/b:0 [1024] 1024
rpn/class/W:0 [1, 1, 1024, 15] 15360
rpn/class/b:0 [15] 15
rpn/box/W:0 [1, 1, 1024, 60] 61440
rpn/box/b:0 [60] 60
group3/block0/conv1/W:0 [1, 1, 1024, 512] 524288
group3/block0/conv1/bn/gamma:0 [512] 512
group3/block0/conv1/bn/beta:0 [512] 512
group3/block0/conv2/W:0 [3, 3, 512, 512] 2359296
group3/block0/conv2/bn/gamma:0 [512] 512
group3/block0/conv2/bn/beta:0 [512] 512
group3/block0/conv3/W:0 [1, 1, 512, 2048] 1048576
group3/block0/conv3/bn/gamma:0 [2048] 2048
group3/block0/conv3/bn/beta:0 [2048] 2048
group3/block0/convshortcut/W:0 [1, 1, 1024, 2048] 2097152
group3/block0/convshortcut/bn/gamma:0 [2048] 2048
group3/block0/convshortcut/bn/beta:0 [2048] 2048
group3/block1/conv1/W:0 [1, 1, 2048, 512] 1048576
group3/block1/conv1/bn/gamma:0 [512] 512
group3/block1/conv1/bn/beta:0 [512] 512
group3/block1/conv2/W:0 [3, 3, 512, 512] 2359296
group3/block1/conv2/bn/gamma:0 [512] 512
group3/block1/conv2/bn/beta:0 [512] 512
group3/block1/conv3/W:0 [1, 1, 512, 2048] 1048576
group3/block1/conv3/bn/gamma:0 [2048] 2048
group3/block1/conv3/bn/beta:0 [2048] 2048
group3/block2/conv1/W:0 [1, 1, 2048, 512] 1048576
group3/block2/conv1/bn/gamma:0 [512] 512
group3/block2/conv1/bn/beta:0 [512] 512
group3/block2/conv2/W:0 [3, 3, 512, 512] 2359296
group3/block2/conv2/bn/gamma:0 [512] 512
group3/block2/conv2/bn/beta:0 [512] 512
group3/block2/conv3/W:0 [1, 1, 512, 2048] 1048576
group3/block2/conv3/bn/gamma:0 [2048] 2048
group3/block2/conv3/bn/beta:0 [2048] 2048
fastrcnn/class/W:0 [2048, 4] 8192
fastrcnn/class/b:0 [4] 4
fastrcnn/box/W:0 [2048, 16] 32768
fastrcnn/box/b:0 [16] 16
Number of trainable variables: 136
Number of parameters (elements): 32838751
Storage space needed for all trainable variables: 125.27MB
[0523 13:56:50 @base.py:209] Setup callbacks graph ...
[0523 13:56:51 @prof.py:270] [HostMemoryTracker] Free RAM in setup_graph() is 23.17 GB.
[0523 13:56:51 @tower.py:140] Building graph for predict tower 'tower-pred-0' on device /gpu:0 ...
[0523 13:56:52 @collection.py:152] Size of these collections were changed in tower-pred-0: (tf.GraphKeys.MODEL_VARIABLES: 183->238)
loading annotations into memory...
Done (t=0.06s)
creating index...
index created!
[0523 13:56:52 @coco.py:71] Instances loaded from /tmp/converter_test/annotations/instances_val.json.
0%| | 0/95 [00:00<?, ?it/s][0523 13:56:52 @timer.py:50] Load Groundtruth Boxes for val finished, time:0.0067 sec.
100%|██████████| 95/95 [00:00<00:00, 18311.53it/s]
[0523 13:56:52 @data.py:409] Found 95 images for inference.
loading annotations into memory...
Done (t=0.15s)
creating index...
0%| | 0/95 [00:00<?, ?it/s]index created!
[0523 13:56:52 @coco.py:71] Instances loaded from /tmp/converter_test/annotations/instances_val.json.
100%|██████████| 95/95 [00:00<00:00, 20896.73it/s]
[0523 13:56:52 @timer.py:50] Load Groundtruth Boxes for val finished, time:0.0049 sec.
[0523 13:56:52 @data.py:409] Found 95 images for inference.
[0523 13:56:52 @summary.py:46] [MovingAverageSummary] 23 operations in collection 'MOVING_SUMMARY_OPS' will be run with session hooks.
[0523 13:56:52 @summary.py:93] Summarizing collection 'summaries' of size 25.
2019-05-23 13:56:53.375630: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
[0523 13:56:53 @base.py:230] Creating the session ...
2019-05-23 13:56:53.397168: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3696000000 Hz
2019-05-23 13:56:53.397940: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x55c00fcadf10 executing computations on platform Host. Devices:
2019-05-23 13:56:53.397966: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): <undefined>, <undefined>
2019-05-23 13:56:53.465983: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:998] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-05-23 13:56:53.466837: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x55c016a85e70 executing computations on platform CUDA. Devices:
2019-05-23 13:56:53.466850: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): Quadro GV100, Compute Capability 7.0
2019-05-23 13:56:53.467506: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties:
name: Quadro GV100 major: 7 minor: 0 memoryClockRate(GHz): 1.627
pciBusID: 0000:01:00.0
totalMemory: 31.72GiB freeMemory: 31.41GiB
2019-05-23 13:56:53.467516: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-05-23 13:56:53.468483: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-05-23 13:56:53.468491: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0
2019-05-23 13:56:53.468494: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N
2019-05-23 13:56:53.469093: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 32153 MB memory) -> physical GPU (device: 0, name: Quadro GV100, pci bus id: 0000:01:00.0, compute capability: 7.0)
[0523 13:56:55 @base.py:236] Initializing the session ...
[0523 13:56:55 @base.py:243] Graph Finalized.
[0523 13:56:55 @concurrency.py:37] Starting EnqueueThread QueueInput/input_queue ...
[0523 13:56:55 @graph.py:73] Running Op sync_variables/sync_variables_from_main_tower ...
[0523 13:56:56 @param.py:158] [HyperParamSetter] At global_step=0, learning_rate is set to 0.003300
0%| |0/500[00:00<?,?it/s][0523 13:56:56 @prof.py:273] [HostMemoryTracker] Free RAM in before_train() is 22.17 GB.
[0523 13:56:56 @eval.py:222] [EvalCallback] Will evaluate every 25 epochs
[0523 13:56:56 @base.py:275] Start Epoch 1 ...
2019-05-23 13:57:01.069220: I tensorflow/stream_executor/dso_loader.cc:152] successfully opened CUDA library libcublas.so.10.0 locally
Traceback (most recent call last):
File "/home/vstupakov/miniconda3/envs/gen3_gpu/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1334, in _do_call
return fn(*args)
File "/home/vstupakov/miniconda3/envs/gen3_gpu/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1319, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "/home/vstupakov/miniconda3/envs/gen3_gpu/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1407, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shapes: [128,1,2] vs. [47,1,2]
[[{{node tower0/encode_bbox_target_1/truediv_1}}]]
[[{{node apply_gradients/AccumGradOptimizer/cond/Momentum/update_group1/block0/conv1/W/ApplyMomentum/Switch_2}}]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/vstupakov/Projects/Gen3/gen3/detection/tensorflow_models/faster_rcnn_tensorpack/train.py", line 137, in <module>
launch_train_with_config(traincfg, trainer)
File "/home/vstupakov/miniconda3/envs/gen3_gpu/lib/python3.6/site-packages/tensorpack/train/interface.py", line 101, in launch_train_with_config
extra_callbacks=config.extra_callbacks)
File "/home/vstupakov/miniconda3/envs/gen3_gpu/lib/python3.6/site-packages/tensorpack/train/base.py", line 344, in train_with_defaults
steps_per_epoch, starting_epoch, max_epoch)
File "/home/vstupakov/miniconda3/envs/gen3_gpu/lib/python3.6/site-packages/tensorpack/train/base.py", line 316, in train
self.main_loop(steps_per_epoch, starting_epoch, max_epoch)
File "/home/vstupakov/miniconda3/envs/gen3_gpu/lib/python3.6/site-packages/tensorpack/utils/argtools.py", line 176, in wrapper
return func(*args, **kwargs)
File "/home/vstupakov/miniconda3/envs/gen3_gpu/lib/python3.6/site-packages/tensorpack/train/base.py", line 281, in main_loop
self.run_step() # implemented by subclass
File "/home/vstupakov/miniconda3/envs/gen3_gpu/lib/python3.6/site-packages/tensorpack/train/base.py", line 181, in run_step
self.hooked_sess.run(self.train_op)
File "/home/vstupakov/miniconda3/envs/gen3_gpu/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 676, in run
run_metadata=run_metadata)
File "/home/vstupakov/miniconda3/envs/gen3_gpu/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1171, in run
run_metadata=run_metadata)
File "/home/vstupakov/miniconda3/envs/gen3_gpu/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1270, in run
raise six.reraise(*original_exc_info)
File "/home/vstupakov/miniconda3/envs/gen3_gpu/lib/python3.6/site-packages/six.py", line 693, in reraise
raise value
File "/home/vstupakov/miniconda3/envs/gen3_gpu/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1255, in run
return self._sess.run(*args, **kwargs)
File "/home/vstupakov/miniconda3/envs/gen3_gpu/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1327, in run
run_metadata=run_metadata)
File "/home/vstupakov/miniconda3/envs/gen3_gpu/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1091, in run
return self._sess.run(*args, **kwargs)
File "/home/vstupakov/miniconda3/envs/gen3_gpu/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 929, in run
run_metadata_ptr)
File "/home/vstupakov/miniconda3/envs/gen3_gpu/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1152, in _run
feed_dict_tensor, options, run_metadata)
File "/home/vstupakov/miniconda3/envs/gen3_gpu/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1328, in _do_run
run_metadata)
File "/home/vstupakov/miniconda3/envs/gen3_gpu/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1348, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shapes: [128,1,2] vs. [47,1,2]
[[node tower0/encode_bbox_target_1/truediv_1 (defined at /home/vstupakov/Projects/Gen3/gen3/detection/tensorflow_models/faster_rcnn_tensorpack/modeling/model_box.py:77) ]]
[[node apply_gradients/AccumGradOptimizer/cond/Momentum/update_group1/block0/conv1/W/ApplyMomentum/Switch_2 (defined at /home/vstupakov/miniconda3/envs/gen3_gpu/lib/python3.6/site-packages/tensorpack/tfutils/optimizer.py:210) ]]
Caused by op 'tower0/encode_bbox_target_1/truediv_1', defined at:
File "/home/vstupakov/Projects/Gen3/gen3/detection/tensorflow_models/faster_rcnn_tensorpack/train.py", line 137, in <module>
launch_train_with_config(traincfg, trainer)
File "/home/vstupakov/miniconda3/envs/gen3_gpu/lib/python3.6/site-packages/tensorpack/train/interface.py", line 91, in launch_train_with_config
model.build_graph, model.get_optimizer)
File "/home/vstupakov/miniconda3/envs/gen3_gpu/lib/python3.6/site-packages/tensorpack/utils/argtools.py", line 176, in wrapper
return func(*args, **kwargs)
File "/home/vstupakov/miniconda3/envs/gen3_gpu/lib/python3.6/site-packages/tensorpack/train/tower.py", line 224, in setup_graph
train_callbacks = self._setup_graph(input, get_cost_fn, get_opt_fn)
File "/home/vstupakov/miniconda3/envs/gen3_gpu/lib/python3.6/site-packages/tensorpack/train/trainers.py", line 189, in _setup_graph
grad_list = self._builder.call_for_each_tower(tower_fn)
File "/home/vstupakov/miniconda3/envs/gen3_gpu/lib/python3.6/site-packages/tensorpack/graph_builder/training.py", line 226, in call_for_each_tower
use_vs=[False] + [True] * (len(self.towers) - 1))
File "/home/vstupakov/miniconda3/envs/gen3_gpu/lib/python3.6/site-packages/tensorpack/graph_builder/training.py", line 122, in build_on_towers
return DataParallelBuilder.call_for_each_tower(*args, **kwargs)
File "/home/vstupakov/miniconda3/envs/gen3_gpu/lib/python3.6/site-packages/tensorpack/graph_builder/training.py", line 117, in call_for_each_tower
ret.append(func())
File "/home/vstupakov/miniconda3/envs/gen3_gpu/lib/python3.6/site-packages/tensorpack/train/tower.py", line 280, in get_grad_fn
return compute_grad_from_inputs(*inputs)
File "/home/vstupakov/miniconda3/envs/gen3_gpu/lib/python3.6/site-packages/tensorpack/train/tower.py", line 255, in compute_grad_from_inputs
cost = get_cost_fn(*inputs)
File "/home/vstupakov/miniconda3/envs/gen3_gpu/lib/python3.6/site-packages/tensorpack/tfutils/tower.py", line 290, in __call__
output = self._tower_fn(*args)
File "/home/vstupakov/Projects/Gen3/gen3/detection/tensorflow_models/faster_rcnn_tensorpack/modeling/generalized_rcnn.py", line 73, in build_graph
head_losses = self.roi_heads(image, features, proposals, targets)
File "/home/vstupakov/Projects/Gen3/gen3/detection/tensorflow_models/faster_rcnn_tensorpack/modeling/generalized_rcnn.py", line 152, in roi_heads
all_losses = fastrcnn_head.losses()
File "/home/vstupakov/miniconda3/envs/gen3_gpu/lib/python3.6/site-packages/tensorpack/utils/argtools.py", line 200, in wrapper
value = func(*args, **kwargs)
File "/home/vstupakov/Projects/Gen3/gen3/detection/tensorflow_models/faster_rcnn_tensorpack/modeling/model_frcnn.py", line 367, in losses
self.proposals.fg_boxes()) * self.bbox_regression_weights
File "/home/vstupakov/miniconda3/envs/gen3_gpu/lib/python3.6/site-packages/tensorpack/tfutils/scope_utils.py", line 94, in wrapper
return func(*args, **kwargs)
File "/home/vstupakov/Projects/Gen3/gen3/detection/tensorflow_models/faster_rcnn_tensorpack/modeling/model_box.py", line 77, in encode_bbox_target
twth = tf.log(wbhb / waha) # may contain -inf for invalid boxes
File "/home/vstupakov/miniconda3/envs/gen3_gpu/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py", line 812, in binary_op_wrapper
return func(x, y, name=name)
File "/home/vstupakov/miniconda3/envs/gen3_gpu/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py", line 920, in _truediv_python3
return gen_math_ops.real_div(x, y, name=name)
File "/home/vstupakov/miniconda3/envs/gen3_gpu/lib/python3.6/site-packages/tensorflow/python/ops/gen_math_ops.py", line 6897, in real_div
"RealDiv", x=x, y=y, name=name)
File "/home/vstupakov/miniconda3/envs/gen3_gpu/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 788, in _apply_op_helper
op_def=op_def)
File "/home/vstupakov/miniconda3/envs/gen3_gpu/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "/home/vstupakov/miniconda3/envs/gen3_gpu/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3300, in create_op
op_def=op_def)
File "/home/vstupakov/miniconda3/envs/gen3_gpu/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1801, in __init__
self._traceback = tf_stack.extract_stack()
InvalidArgumentError (see above for traceback): Incompatible shapes: [128,1,2] vs. [47,1,2]
[[node tower0/encode_bbox_target_1/truediv_1 (defined at /home/vstupakov/Projects/Gen3/gen3/detection/tensorflow_models/faster_rcnn_tensorpack/modeling/model_box.py:77) ]]
[[node apply_gradients/AccumGradOptimizer/cond/Momentum/update_group1/block0/conv1/W/ApplyMomentum/Switch_2 (defined at /home/vstupakov/miniconda3/envs/gen3_gpu/lib/python3.6/site-packages/tensorpack/tfutils/optimizer.py:210) ]]
Process finished with exit code 1
~~~~
(2) **Other observations, if any:**
For example, CPU/GPU utilization, output images, tensorboard curves, if relevant to your issue.
### 3. Your environment:
````
-------------------- -------------------------------------------------------------------
sys.platform linux
Python 3.6.8 |Anaconda, Inc.| (default, Dec 30 2018, 01:22:34) [GCC 7.3.0]
Tensorpack v0.9.4-68-g2981c5d4-dirty
Numpy 1.16.3
TensorFlow 1.13.1/b'unknown'
TF Compiler Version 5.4.0
TF CUDA support True
TF MKL support False
Nvidia Driver /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.418.56
CUDA /home/vstupakov/miniconda3/envs/gen3_gpu/lib/libcudart.so.10.0.130
CUDNN /home/vstupakov/miniconda3/envs/gen3_gpu/lib/libcudnn.so.7.3.1
NCCL
CUDA_VISIBLE_DEVICES None
GPU 0 Quadro GV100
Free RAM 23.44/31.26 GB
CPU Count 12
cv2 4.1.0
msgpack 0.6.1
python-prctl True
-------------------- -------------------------------------------------------------------
````
|
closed
|
2019-05-23T10:58:01Z
|
2019-05-23T16:47:01Z
|
https://github.com/tensorpack/tensorpack/issues/1208
|
[
"examples"
] |
Red-Eyed
| 2
|
wkentaro/labelme
|
computer-vision
| 861
|
Add a way to load a predefined list of labels for a project [Feature]
|
Never mind, I found the --labels command line argument does what I want
|
closed
|
2021-04-28T02:59:57Z
|
2022-06-25T04:43:50Z
|
https://github.com/wkentaro/labelme/issues/861
|
[] |
hqm
| 1
|
feature-engine/feature_engine
|
scikit-learn
| 274
|
[Feat]: Log1p transformer
|
**Is your feature request related to a problem? Please describe.**
When you want to apply a logarithmic transformation and the data contains zero values, calculating `log(1 + x)` is more convenient.
**Describe the solution you'd like**
This could be another argument in [LogTransformer](https://github.com/solegalli/feature_engine/blob/master/feature_engine/transformation/log.py#L144) that checks for zeros and apply `log(1 + x)`
```python
from feature_engine import transformation as vt
tf = vt.LogTransformer(variables = ['LotArea', 'GrLivArea'], check_zeros=True)
```
under the hood basically applies
```python
if check_zeros:
log_1p = True if (X[self.variables_] == 0).any().any() else False
```
or it can be a separate transformer
```python
from feature_engine import transformation as vt
tf = vt.Log1PTransformer(variables = ['LotArea', 'GrLivArea'], check_zeros=True)
```
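Outside of feature_engine, the proposed fallback can be sketched in plain NumPy (a minimal illustration only; `safe_log` and the `check_zeros` flag are assumed names for this sketch, not part of the library's API):

```python
import numpy as np

def safe_log(x, check_zeros=True):
    """Apply log(x), falling back to log(1 + x) when zeros are present."""
    x = np.asarray(x, dtype=float)
    if check_zeros and (x == 0).any():
        return np.log1p(x)  # log1p(0) == 0, so zeros stay finite
    return np.log(x)
```

A transformer built on this check would keep the current behaviour for strictly positive columns and only switch to `log(1 + x)` when a zero appears.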
|
closed
|
2021-06-22T21:55:47Z
|
2021-07-13T14:02:59Z
|
https://github.com/feature-engine/feature_engine/issues/274
|
[] |
TremaMiguel
| 7
|
matplotlib/matplotlib
|
data-visualization
| 28,928
|
[ENH]: Is there a way to plot a long format data in matplotlib without pivoting the table ?
|
### Problem
I am currently reading the book Hands On Data Analysis using Pandas by Stefie Molin. There are two data formats, wide and long. The author uses matplotlib to plot the wide-format data and the seaborn package for the long-format data. I searched the web and this seems to be the custom. I also tried asking GPT; I can plot the long-format data in matplotlib, but it seems that I have to pivot the table first. Is there a way around it?
Wide Data Frame Sample
date | TMAX | TMIN | TOBS
-- | -- | -- | --
2018-10-28 | 8.3 | 5.0 | 7.2
2018-10-04 | 22.8 | 11.7 | 11.7
2018-10-20 | 15.0 | -0.6 | 10.6
2018-10-24 | 16.7 | 4.4 | 6.7
2018-10-23 | 15.6 | -1.1 | 10.0
Long Data Frame Sample
date | datatype | value
-- | -- | --
2018-10-01 | TMAX | 21.1
2018-10-01 | TMIN | 8.9
2018-10-01 | TOBS | 13.9
2018-10-02 | TMAX | 23.9
2018-10-02 | TMIN | 13.9
2018-10-02 | TOBS | 17.2
Long Data Frame after pivoting

plot command for wide df

```python
ax = wide_df.plot(
    x='date',
    y=['TMAX', 'TMIN', 'TOBS'],
    figsize=(15, 5),
    title='Temperature in NYC in October 2018'
)
```

plot command for long df after pivot

```python
ax = long_df.pivot(index='date', columns='datatype', values='value').plot()
```

plot command for long_df with seaborn

```python
ax = sns.lineplot(data=long_df, x='date', y='value', hue='datatype')
```
Why isn't there a hue parameter or something similar in matplotlib for a long data format?
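For what it's worth, plain matplotlib can handle the long format without pivoting by looping over `groupby` groups — one `plot` call per group plays the role of seaborn's `hue`. A minimal sketch using the small long-format sample from this issue (the headless `Agg` backend is selected only so the sketch runs anywhere):

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend; only needed for this sketch
import matplotlib.pyplot as plt

long_df = pd.DataFrame({
    "date": ["2018-10-01"] * 3 + ["2018-10-02"] * 3,
    "datatype": ["TMAX", "TMIN", "TOBS"] * 2,
    "value": [21.1, 8.9, 13.9, 23.9, 13.9, 17.2],
})

fig, ax = plt.subplots()
for name, group in long_df.groupby("datatype"):
    # one line per datatype, analogous to seaborn's hue="datatype"
    ax.plot(group["date"], group["value"], label=name)
ax.legend()
```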
### Proposed solution
_No response_
|
closed
|
2024-10-03T07:40:28Z
|
2024-10-03T13:41:41Z
|
https://github.com/matplotlib/matplotlib/issues/28928
|
[
"New feature"
] |
infinity-void6
| 3
|
apachecn/ailearning
|
python
| 383
|
There is a problem in the logistic regression example code
|
## Ascent or descent?
In https://github.com/apachecn/MachineLearning/blob/master/src/py2.x/ML/5.Logistic/logistic.py there are two functions:
Plain gradient ascent
https://github.com/apachecn/MachineLearning/blob/f727eda8150e26cea0425b0bb17b0badb67d5b01/src/py2.x/ML/5.Logistic/logistic.py#L54
Stochastic gradient descent
https://github.com/apachecn/MachineLearning/blob/f727eda8150e26cea0425b0bb17b0badb67d5b01/src/py2.x/ML/5.Logistic/logistic.py#L100
The only major difference between these two functions is that the former updates w with the full dataset while the latter updates w with a single sample.
So where is the difference between ascent and descent here? Is one of the comments wrong?
If possible, I think the formulas for the ascent and descent methods should each be explained; that would also make it easier to understand whether the code below is ascent or descent:
```
weights = weights + alpha * dataMatrix.transpose() * error
```
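As a quick numeric check, the quoted update can be written out as explicit gradient *ascent* on the logistic log-likelihood, `w <- w + alpha * X^T (y - sigmoid(Xw))` — the same form as `weights + alpha * dataMatrix.transpose() * error`; negating the step would be gradient descent on the negative log-likelihood (the toy data below is made up for illustration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_ascent(X, y, alpha=0.1, iters=500):
    """Gradient ascent on the logistic log-likelihood.

    The update w += alpha * X.T @ (y - sigmoid(X @ w)) matches
    `weights + alpha * dataMatrix.transpose() * error` in the issue.
    """
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        error = y - sigmoid(X @ w)  # ascend: move w toward higher likelihood
        w = w + alpha * X.T @ error
    return w
```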
|
closed
|
2018-05-29T02:41:54Z
|
2018-10-07T13:49:23Z
|
https://github.com/apachecn/ailearning/issues/383
|
[] |
eromoe
| 0
|
taverntesting/tavern
|
pytest
| 297
|
How to monitor the individual name ( sub- test case) in a given test case
|
In the example below, the test "RESTAPI Testing" has 3 names (sub-test cases), i.e. POST, GET and PUT.
Currently `py.test test_test.tavern.yaml -vv` only says whether "RESTAPI Testing" as a whole passed or failed.
How can I monitor the individual name (sub-test case) level scenarios, i.e. what pytest option displays
whether each "name" in the test case passed or failed?
For example, if POST and GET pass and PUT fails, how can we display that only `- name: "Update data for base class - PUT"` failed?
============================================
```yaml
test_name: "RESTAPI Testing"
stages:
- name: "inserting data for base class - POST"
request:
headers:
content-type: application/json
json:
createdDate: 14/2/2019
method: POST
url: "http://135.250.34.170:8080/api/wIRELEsses"
response:
body:
createdDate: 14/2/2019
save:
body:
link_var: _links.wIRELESS.href
status_code: 201
- name: "Get data for base class - GET"
request:
method: GET
url: "{link_var}"
response:
status_code: 200
- name: "Update data for base class - PUT"
request:
headers:
content-type: application/json
json:
mode: "true"
method: PUT
url: "{link_var}"
response:
body:
mode: "true"
status_code: 200
```
|
closed
|
2019-03-04T09:10:14Z
|
2019-03-10T10:53:37Z
|
https://github.com/taverntesting/tavern/issues/297
|
[] |
prasadkkl
| 2
|