Dataset schema (one row per GitHub issue):
- repo_name: string (length 9-75)
- topic: string (30 classes)
- issue_number: int64 (1-203k)
- title: string (length 1-976)
- body: string (length 0-254k)
- state: string (2 classes)
- created_at: string (length 20)
- updated_at: string (length 20)
- url: string (length 38-105)
- labels: list (length 0-9)
- user_login: string (length 1-39)
- comments_count: int64 (0-452)
inducer/pudb
|
pytest
| 333
|
No module named 'IPython.utils.warn'
|
I tried to use `%debug` in IPython, with this result:
```
In [3]: %debug
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-3-bc99e4ec804d> in <module>
----> 1 get_ipython().run_line_magic('debug', '')
/usr/local/lib/python3.6/dist-packages/IPython/core/interactiveshell.py in run_line_magic(self, magic_name, line, _stack_depth)
2305 kwargs['local_ns'] = sys._getframe(stack_depth).f_locals
2306 with self.builtin_trap:
-> 2307 result = fn(*args, **kwargs)
2308 return result
2309
</usr/local/lib/python3.6/dist-packages/decorator.py:decorator-gen-57> in debug(self, line, cell)
/usr/local/lib/python3.6/dist-packages/IPython/core/magic.py in <lambda>(f, *a, **k)
185 # but it's overkill for just that one bit of state.
186 def magic_deco(arg):
--> 187 call = lambda f, *a, **k: f(*a, **k)
188
189 if callable(arg):
/usr/local/lib/python3.6/dist-packages/IPython/core/magics/execution.py in debug(self, line, cell)
459
460 if not (args.breakpoint or args.statement or cell):
--> 461 self._debug_post_mortem()
462 else:
463 code = "\n".join(args.statement)
/usr/local/lib/python3.6/dist-packages/IPython/core/magics/execution.py in _debug_post_mortem(self)
467
468 def _debug_post_mortem(self):
--> 469 self.shell.debugger(force=True)
470
471 def _debug_exec(self, code, breakpoint):
~/.local/lib/python3.6/site-packages/pudb/ipython.py in debugger(self, force)
42 def debugger(self, force=False):
43 """Call the PuDB debugger."""
---> 44 from IPython.utils.warn import error
45 if not (force or self.call_pdb):
46 return
ModuleNotFoundError: No module named 'IPython.utils.warn'
```
Versions: Python 3.6, PuDB 2018.1, IPython 7.4.0.
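For context: `IPython.utils.warn` was removed in newer IPython releases, so a guarded import is the usual fix. A minimal sketch (assuming `logging.error` is an acceptable stand-in; this may differ from pudb's actual patch):
```python
# pudb/ipython.py imports `error` from a module newer IPython no longer ships;
# guarding the import keeps both old and new IPython working.
try:
    from IPython.utils.warn import error  # older IPython
except ImportError:
    from logging import error  # assumption: stdlib logging as a stand-in
```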
|
closed
|
2019-04-06T08:16:03Z
|
2019-04-19T00:13:25Z
|
https://github.com/inducer/pudb/issues/333
|
[] |
alexeyr
| 1
|
tflearn/tflearn
|
tensorflow
| 546
|
Python3 UnicodeDecodeError in oxflower17.load_data()
|
In python3, running the code:
```
>>> import tflearn.datasets.oxflower17 as oxflower17
>>> X, Y = oxflower17.load_data(one_hot=True, resize_pics=(227, 227))
```
Gives the error:
> File "/usr/local/lib/python3.4/dist-packages/tflearn/data_utils.py", line 339, in build_image_dataset_from_dir
> X, Y = pickle.load(open(dataset_file, 'rb'))
> **UnicodeDecodeError**: 'ascii' codec can't decode byte 0xf1 in position 0: ordinal not in range(128)
According to the solution suggested in [this reported issue](https://github.com/tflearn/tflearn/issues/57), I used:
`pickle.load(open(dataset_file, 'rb'), encoding='latin1')`
which solved the problem.
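For context, a short sketch of why `latin1` works here (the path is a placeholder):
```python
import pickle

dataset_file = '17flowers.pkl'  # placeholder for the cached dataset path

# Python 2 pickles carry 8-bit str payloads; under Python 3 they must be decoded.
# 'latin1' maps every byte 0x00-0xff to a code point, so no byte can raise
# UnicodeDecodeError (the default 'ascii' chokes on 0xf1, as in the report).
with open(dataset_file, 'rb') as f:
    X, Y = pickle.load(f, encoding='latin1')
```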
|
open
|
2017-01-04T21:21:55Z
|
2017-01-04T21:21:55Z
|
https://github.com/tflearn/tflearn/issues/546
|
[] |
rotemmairon
| 0
|
ResidentMario/missingno
|
data-visualization
| 37
|
min() arg is an empty sequence
|
I have the following code, which has worked well up until today:
```python
# imports implied by the snippet (msno/plt/PdfPages come from missingno and matplotlib)
import matplotlib.pyplot as plt
import missingno as msno
from matplotlib.backends.backend_pdf import PdfPages

with PdfPages('Missing Data Report.pdf') as pdf:
    for segment in SegDict_H1.keys():
        matrix_fig = msno.matrix(SegDict_H1[segment], fontsize=12, inline=False)
        matrix_fig.text(0, 1.5, '{0} Segment Missing Data Matrix'.format(segment), style='italic',
                        bbox={'facecolor': 'blue', 'alpha': .25, 'pad': 10}, fontsize=25)
        pdf.savefig(bbox_inches='tight', pad_inches=0.25)
        plt.clf()
        plt.close('all')
```
Executing this code provided me with a multipage .pdf file of a missing data matrix for each DataFrame in my Python dictionary. Just today, however, this code is no longer working properly and I am getting errors that I do not know how to interpret.
|
closed
|
2017-09-27T15:30:07Z
|
2017-10-11T15:52:38Z
|
https://github.com/ResidentMario/missingno/issues/37
|
[] |
crooks3184
| 1
|
postmanlabs/httpbin
|
api
| 207
|
Incorrect URI scheme reported
|
As of some point within the last week, the HTTPS version of httpbin has started to report an incorrect URL, with HTTPS swapped for HTTP, in its response JSON.
At a guess, this is the result of some reverse proxying? (use X-Forwarded-Proto?)
See the example below, where `https://httpbin.org/post` becomes `http://httpbin.org/post`.
```
$ curl -v -X POST "https://httpbin.org/post"
* Adding handle: conn: 0x6f7078
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x6f7078) send_pipe: 1, recv_pipe: 0
* About to connect() to httpbin.org port 443 (#0)
* Trying 54.175.222.246...
* Connected to httpbin.org (54.175.222.246) port 443 (#0)
* successfully set certificate verify locations:
* CAfile: C:\Users\zool1112\AppData\Local\Atlassian\SourceTree\git_local\bin\curl-ca-bundle.crt
CApath: none
* SSLv3, TLS handshake, Client hello (1):
* SSLv3, TLS handshake, Server hello (2):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS handshake, Server key exchange (12):
* SSLv3, TLS handshake, Server finished (14):
* SSLv3, TLS handshake, Client key exchange (16):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSL connection using DHE-RSA-AES256-SHA
* Server certificate:
* subject: OU=Domain Control Validated; OU=EssentialSSL Wildcard; CN=*.httpbin.org
* start date: 2015-01-16 00:00:00 GMT
* expire date: 2016-01-16 23:59:59 GMT
* subjectAltName: httpbin.org matched
* issuer: C=GB; ST=Greater Manchester; L=Salford; O=COMODO CA Limited; CN=COMODO RSA Domain Validation Secure Server CA
* SSL certificate verify ok.
> POST /post HTTP/1.1
> User-Agent: curl/7.30.0
> Host: httpbin.org
> Accept: */*
>
< HTTP/1.1 200 OK
* Server nginx is not blacklisted
< Server: nginx
< Date: Mon, 26 Jan 2015 10:30:03 GMT
< Content-Type: application/json
< Content-Length: 280
< Connection: keep-alive
< Access-Control-Allow-Origin: *
< Access-Control-Allow-Credentials: true
<
{
"args": {},
"data": "",
"files": {},
"form": {},
"headers": {
"Accept": "*/*",
"Host": "httpbin.org",
"User-Agent": "curl/7.30.0",
"X-Forwarded-Ssl": "on"
},
"json": null,
"origin": "129.67.24.143",
"url": "http://httpbin.org/post"
}
* Connection #0 to host httpbin.org left intact
```
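For reference, the usual Flask-side fix for scheme detection behind a TLS-terminating proxy is to trust `X-Forwarded-Proto`; a sketch (not httpbin's actual code, using the modern Werkzeug module path):
```python
from flask import Flask, jsonify, request
from werkzeug.middleware.proxy_fix import ProxyFix

app = Flask(__name__)
# Trust one proxy hop for the scheme, so request.scheme follows X-Forwarded-Proto.
app.wsgi_app = ProxyFix(app.wsgi_app, x_proto=1)

@app.route("/post", methods=["POST"])
def post():
    # With ProxyFix applied, the echoed URL keeps its https:// prefix.
    return jsonify(url=request.url)
```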
|
closed
|
2015-01-26T10:42:29Z
|
2018-04-26T17:51:05Z
|
https://github.com/postmanlabs/httpbin/issues/207
|
[] |
laurence-hudson-tessella
| 4
|
modelscope/modelscope
|
nlp
| 294
|
No module named 'kantts'
|
```python
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

p = pipeline('text-to-speech', 'damo/speech_ptts_autolabel_16k')
```
```
Traceback (most recent call last):
File "/root/miniconda3/envs/modelscope/lib/python3.10/site-packages/modelscope/utils/import_utils.py", line 433, in _get_module
return importlib.import_module('.' + module_name, self.__name__)
File "/root/miniconda3/envs/modelscope/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/root/miniconda3/envs/modelscope/lib/python3.10/site-packages/modelscope/models/audio/tts/sambert_hifi.py", line 27, in <module>
from .voice import Voice
File "/root/miniconda3/envs/modelscope/lib/python3.10/site-packages/modelscope/models/audio/tts/voice.py", line 13, in <module>
from kantts.datasets.dataset import get_am_datasets, get_voc_datasets
ModuleNotFoundError: No module named 'kantts'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/root/miniconda3/envs/modelscope/lib/python3.10/site-packages/modelscope/pipelines/builder.py", line 140, in pipeline
return build_pipeline(cfg, task_name=task)
File "/root/miniconda3/envs/modelscope/lib/python3.10/site-packages/modelscope/pipelines/builder.py", line 56, in build_pipeline
return build_from_cfg(
File "/root/miniconda3/envs/modelscope/lib/python3.10/site-packages/modelscope/utils/registry.py", line 184, in build_from_cfg
LazyImportModule.import_module(sig)
File "/root/miniconda3/envs/modelscope/lib/python3.10/site-packages/modelscope/utils/import_utils.py", line 457, in import_module
importlib.import_module(module_name)
File "/root/miniconda3/envs/modelscope/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/root/miniconda3/envs/modelscope/lib/python3.10/site-packages/modelscope/pipelines/audio/text_to_speech_pipeline.py", line 9, in <module>
from modelscope.models.audio.tts import SambertHifigan
File "<frozen importlib._bootstrap>", line 1075, in _handle_fromlist
File "/root/miniconda3/envs/modelscope/lib/python3.10/site-packages/modelscope/utils/import_utils.py", line 416, in __getattr__
module = self._get_module(self._class_to_module[name])
File "/root/miniconda3/envs/modelscope/lib/python3.10/site-packages/modelscope/utils/import_utils.py", line 435, in _get_module
raise RuntimeError(
RuntimeError: Failed to import modelscope.models.audio.tts.sambert_hifi because of the following error (look up to see its traceback):
No module named 'kantts'
```
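Assuming the package is simply absent from the base install, installing it directly (`pip install kantts`) should let the lazy import succeed. This is a guess based on the `ModuleNotFoundError`, not official guidance.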
|
closed
|
2023-05-10T09:58:17Z
|
2023-07-29T01:50:06Z
|
https://github.com/modelscope/modelscope/issues/294
|
[] |
fuxishuyuan
| 2
|
mwaskom/seaborn
|
data-science
| 2,883
|
Scatterplot y-axis inverts when column has None (object type)
|
Similar to https://stackoverflow.com/q/65206970/10447904:
(searched through GH issues with terms "None y axis inverted" as well as throwing in a "None"/"object" term in there, before opening this issue)
The simplest reproducing example I could find was:
```python
df = pd.DataFrame({'x': [0.0, 0.0, 0.3], 'y': [0.0, None, 0.5]}, dtype='object')
sns.scatterplot(x='x', y='y', data=df) # y-axis inverted
```

^y axis upside down
Comparing with plt.scatter (expected, no inverting of y-axis):
```python
plt.scatter(x=df['x'], y=df['y'])
```

According to the stackoverflow post, the problem stems from the **y-axis containing Nones** (and of dtype `object`).
In my original dataframe, only y is of dtype `object` (because of the `None`s; I should have had it as `float64`), but for this reproducing example I had to cast both columns to `object` (in the first line) to get it to trigger. I am confident I can find a reproducing example where x is `float64` and y is `object`, if necessary.
Is this expected behaviour, given that matplotlib doesn't flip the y-axis?
In this case I forgot to cast my column to `float64` (or to use `np.nan`s), but I figured `None`s would be okay (more or less equivalent) at the time :)
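For reference, a minimal sketch of the workaround implied above: restore the numeric dtype so the missing value becomes NaN rather than a `None` object:
```python
import pandas as pd
import seaborn as sns

df = pd.DataFrame({'x': [0.0, 0.0, 0.3], 'y': [0.0, None, 0.5]}, dtype='object')
df = df.astype('float64')  # None -> NaN; columns become numeric again
sns.scatterplot(x='x', y='y', data=df)  # y-axis keeps its normal orientation
```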
|
closed
|
2022-06-28T19:52:48Z
|
2022-07-05T22:33:56Z
|
https://github.com/mwaskom/seaborn/issues/2883
|
[] |
loodvn
| 2
|
pallets/flask
|
python
| 5,372
|
I'm seeing "GET /socket.io/?EIO=3&transport=polling&t=OozzqO3 HTTP/1.1" 404 errors running a simple flask app
|
I created a "Hello World" flask app (which can be found here: https://github.com/dhylands/flask-hello/tree/main).
If I create a virtual environment and install flask and then do `flask run` I get the following output:
```
* Debug mode: off
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
* Running on http://127.0.0.1:5000
Press CTRL+C to quit
127.0.0.1 - - [30/Dec/2023 19:42:28] "GET /socket.io/?EIO=3&transport=polling&t=OozzqO3 HTTP/1.1" 404 -
127.0.0.1 - - [30/Dec/2023 19:42:33] "GET /socket.io/?EIO=3&transport=polling&t=OozzrcF HTTP/1.1" 404 -
127.0.0.1 - - [30/Dec/2023 19:42:33] "GET /socket.io/?EIO=3&transport=polling&t=OozzrcI HTTP/1.1" 404 -
127.0.0.1 - - [30/Dec/2023 19:42:38] "GET /socket.io/?EIO=3&transport=polling&t=OozzsqT HTTP/1.1" 404 -
```
<!--
Describe the expected behavior that should have happened but didn't.
-->
I'd expect to not see errors/warnings about socket.io
Environment:
- Pop_OS! (aka ubuntu) 22.04 LTS
- Python version: Python 3.10.12
- Flask version: 3.0.0
pip freeze reports:
```
blinker==1.7.0
click==8.1.7
Flask==3.0.0
itsdangerous==2.1.2
Jinja2==3.1.2
MarkupSafe==2.1.3
Werkzeug==3.0.1
```
|
closed
|
2023-12-31T03:47:40Z
|
2024-01-15T00:07:20Z
|
https://github.com/pallets/flask/issues/5372
|
[] |
dhylands
| 2
|
pydata/xarray
|
numpy
| 9,209
|
Some complex aggregations broken with `numbagg` installed
|
### What happened?
Some common aggregations (at least `min`, `max`, `var`, `std`) are broken with complex dtypes if `numbagg` is installed with the default `skipna=True`. Looks like this is the case since #8624.
### What did you expect to happen?
We should either route to `bottleneck` or `numpy` [here](https://github.com/pydata/xarray/blob/main/xarray/core/nputils.py#L182) if these aggregations aren't supported in `numbagg`, or get them working with `numbagg`.
### Minimal Complete Verifiable Example
```Python
import xarray as xr
import numpy as np
da = xr.DataArray(np.ones((2,), dtype=np.complex_), dims=["x"])
da.min(skipna=False) # works
da.min() # fails
```
### MVCE confirmation
- [X] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
- [X] Complete example — the example is self-contained, including all data and the text of any traceback.
- [X] Verifiable example — the example copy & pastes into an IPython prompt or [Binder notebook](https://mybinder.org/v2/gh/pydata/xarray/main?urlpath=lab/tree/doc/examples/blank_template.ipynb), returning the result.
- [X] New issue — a search of GitHub Issues suggests this is not a duplicate.
- [X] Recent environment — the issue occurs with the latest version of xarray and its dependencies.
### Relevant log output
```Python
TypeError: ufunc '__numbagg_transformed_func' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
```
### Anything else we need to know?
cc @max-sixty
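A sketch of the dtype guard the expectation above describes (not xarray's actual code):
```python
import numpy as np

def can_use_numbagg(values: np.ndarray) -> bool:
    # numbagg's compiled reductions reject complex input, so route complex
    # arrays to the numpy/bottleneck path before dispatching.
    return not np.issubdtype(values.dtype, np.complexfloating)
```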
### Environment
<details>
xarray==2024.1.1
numbagg==0.8.1
</details>
|
closed
|
2024-07-05T17:59:12Z
|
2024-07-07T04:25:14Z
|
https://github.com/pydata/xarray/issues/9209
|
[
"bug",
"needs triage"
] |
slevang
| 1
|
lanpa/tensorboardX
|
numpy
| 444
|
Flush_secs not working.
|
**Describe the bug**
SummaryWriter's flush_secs doesn't work as expected. If you set flush_secs=5, then it still outputs every 120 secs.
The reason is that SummaryWriter doesn't pass max_queue and flush_secs to FileWriter.
**Minimal runnable code to reproduce the behavior**
```
from tensorboardX import SummaryWriter
writer = SummaryWriter(flush_secs = 10)
writer.add_scalar('data/test',5)
...
```
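A sketch of the fix the description implies (simplified; the `FileWriter` import path and signature are from my reading of tensorboardX around this release and should be treated as assumptions):
```python
from tensorboardX.writer import FileWriter  # assumption about the module path

class PatchedSummaryWriter:
    def __init__(self, logdir=None, max_queue=10, flush_secs=120):
        self.logdir = logdir
        self._max_queue = max_queue
        self._flush_secs = flush_secs

    def _get_file_writer(self):
        # previously FileWriter was built without these kwargs, so its own
        # 120 s default always applied regardless of what the user passed
        return FileWriter(self.logdir, max_queue=self._max_queue,
                          flush_secs=self._flush_secs)
```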
|
closed
|
2019-06-11T01:36:06Z
|
2019-06-18T18:14:11Z
|
https://github.com/lanpa/tensorboardX/issues/444
|
[] |
geelin
| 0
|
yt-dlp/yt-dlp
|
python
| 12,587
|
[BanBye.com] 404 error while video is available in browser
|
### Checklist
- [x] I'm reporting that yt-dlp is broken on a **supported** site
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [x] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [x] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766), [the FAQ](https://github.com/yt-dlp/yt-dlp/wiki/FAQ), and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=is%3Aissue%20-label%3Aspam%20%20) for similar issues **including closed ones**. DO NOT post duplicates
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
the Netherlands
### Provide a description that is worded well enough to be understood
Example video:
`https://banbye.com/watch/v_4iDAJIT4SfA7`
See also earlier issue #8584, resolved by #10332.
### Provide verbose output that clearly demonstrates the problem
- [x] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [x] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
yt-dlp -f hls-300 --no-config -v https://banbye.com/watch/v_4iDAJIT4SfA7
[debug] Command-line config: ['-f', 'hls-300', '--no-config', '-v', 'https://banbye.com/watch/v_4iDAJIT4SfA7']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version master@2025.03.07.211453 from yt-dlp/yt-dlp-master-builds (linux_exe)
[debug] Python 3.11.11 (CPython x86_64 64bit) - Linux-5.15.0-134-generic-x86_64-with (OpenSSL 3.1.8 11 Feb 2025)
[debug] exe versions: ffmpeg N-118655-g696ea1c223-20250306 (fdk,setts), ffprobe N-118655-g696ea1c223-20250306, rtmpdump 2.4
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2025.01.31, curl_cffi-0.7.1, mutagen-1.47.0, requests-2.32.3, secretstorage-3.3.3, sqlite3-3.44.2, urllib3-2.3.0, websockets-15.0.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Plugin directories: none
[debug] Loaded 1844 extractors
[BanBye] Extracting URL: https://banbye.com/watch/v_4iDAJIT4SfA7
[BanBye] v_4iDAJIT4SfA7: Downloading JSON metadata
[BanBye] v_4iDAJIT4SfA7: Downloading JSON metadata
[BanBye] v_4iDAJIT4SfA7: Downloading m3u8 information
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec, channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id
[info] v_4iDAJIT4SfA7: Downloading 1 format(s): hls-300
[debug] Invoking hlsnative downloader on "https://tc2g.banbye.com/edge/video/v_4iDAJIT4SfA7/v/index_360/index.m3u8"
[hlsnative] Downloading m3u8 manifest
ERROR: unable to download video data: HTTP Error 404: Not Found
Traceback (most recent call last):
File "yt_dlp/YoutubeDL.py", line 3506, in process_info
File "yt_dlp/YoutubeDL.py", line 3226, in dl
File "yt_dlp/downloader/common.py", line 468, in download
File "yt_dlp/downloader/hls.py", line 82, in real_download
File "yt_dlp/YoutubeDL.py", line 4184, in urlopen
File "yt_dlp/networking/common.py", line 117, in send
File "yt_dlp/networking/_helper.py", line 208, in wrapper
File "yt_dlp/networking/common.py", line 359, in send
File "yt_dlp/networking/_requests.py", line 367, in _send
yt_dlp.networking.exceptions.HTTPError: HTTP Error 404: Not Found
```
|
open
|
2025-03-12T19:14:37Z
|
2025-03-13T06:33:52Z
|
https://github.com/yt-dlp/yt-dlp/issues/12587
|
[
"question"
] |
nicolaasjan
| 4
|
horovod/horovod
|
deep-learning
| 3,842
|
Problem with using MPI for training-related communications on Horovod + Ray
|
**Environment:**
1. Framework: PyTorch
2. Framework version: 1.5.0
3. Horovod version: 0.23.0
4. MPI version: MVAPICH2 2.3.7/mpi4py 3.1.4
5. CUDA version: 11.2
6. NCCL version: N/A
7. Python version: 3.7.15
8. Spark / PySpark version: N/A
9. Ray version: 2.2.0
10. OS and version: Centos 7
11. GCC version: 9.4.0
12. CMake version: 3.22.2
**Checklist:**
1. Did you search issues to find if somebody asked this question before?
Issue #3055 is sort of related
2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)?
N/A
3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)?
N/A
4. Did you check if you question is answered in the [troubleshooting guide](https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)?
N/A
**Bug report:**
Please describe erroneous behavior you're observing and steps to reproduce it.
I'm trying to run horovod + ray in such a way that I can leverage MPI for training-related communication. In the next lines, I will go through the build command I used, the error I got, and finally the program I'm running (which is really `pytorch_synthetic_benchmark.py` with ray). I'm running the application on a ray cluster of three nodes: a head node and two worker nodes.
-- The build command I used,
`HOROVOD_WITH_PYTORCH=1 HOROVOD_GPU_OPERATIONS=MPI HOROVOD_WITH_GLOO=1 HOROVOD_WITHOUT_TENSORFLOW=1 HOROVOD_WITH_MPI=1 CC=$(which mpicc) CXX=$(which mpicxx) pip install -e .`
-- command I'm running,
`mpirun_rsh --np 2 --hostfile hostfile python pytorch_synthetic_benchmark.py`
```
cat hostfile
gpu01
gpu02
gpu03
```
-- The output of the command,
```
2023-02-07 21:53:23,535 INFO worker.py:1352 -- Connecting to existing Ray cluster at address: 10.1.1.1:6379...
2023-02-07 21:53:23,543 INFO worker.py:1538 -- Connected to Ray cluster.
2023-02-07 21:53:24,353 INFO worker.py:1352 -- Connecting to existing Ray cluster at address: 10.1.1.1:6379...
2023-02-07 21:53:24,365 INFO worker.py:1538 -- Connected to Ray cluster.
Traceback (most recent call last):
File "pytorch_synthetic_benchmark.py", line 139, in <module>
executor.start()
File "/home/alattar.2/quentin-horvod-tar/deepinstrospect/horovod/ray/runner.py", line 320, in start
return self._maybe_call_ray(self.adapter.start, **kwargs_)
File "/home/alattar.2/quentin-horvod-tar/deepinstrospect/horovod/ray/runner.py", line 419, in _maybe_call_ray
return driver_func(**kwargs)
File "/home/alattar.2/quentin-horvod-tar/deepinstrospect/horovod/ray/runner.py", line 563, in start
node_workers=node_workers)
File "/home/alattar.2/quentin-horvod-tar/deepinstrospect/horovod/ray/utils.py", line 72, in detect_nics
settings)
File "/home/alattar.2/quentin-horvod-tar/deepinstrospect/horovod/ray/driver_service.py", line 59, in _driver_fn
return _run_probe(driver, settings, num_hosts)
File "/home/alattar.2/quentin-horvod-tar/deepinstrospect/horovod/runner/driver/driver_service.py", line 126, in _run_probe
driver.wait_for_initial_registration(settings.start_timeout)
File "/home/alattar.2/quentin-horvod-tar/deepinstrospect/horovod/runner/common/service/driver_service.py", line 166, in wait_for_initial_registration
timeout.check_time_out_for('tasks to start')
File "/home/alattar.2/quentin-horvod-tar/deepinstrospect/horovod/runner/common/util/timeout.py", line 37, in check_time_out_for
self._timeout
Exception: Timed out waiting for tasks to start. Please check connectivity between servers. You may need to increase the --start-timeout parameter if you have too many servers. Timeout after 30 seconds.
2023-02-07 21:53:54,964 ERROR worker.py:401 -- Unhandled error (suppress with 'RAY_IGNORE_UNHANDLED_ERRORS=1'): ray::BaseHorovodWorker.execute() (pid=24246, ip=10.1.1.2, repr=<horovod.ray.worker.BaseHorovodWorker object at 0x2b9471081ad0>)
File "/home/alattar.2/quentin-horvod-tar/deepinstrospect/horovod/ray/driver_service.py", line 11, in execute_task_fn
_task_fn(index, num_hosts, driver_addresses, settings)
File "/home/alattar.2/quentin-horvod-tar/deepinstrospect/horovod/runner/task_fn.py", line 31, in _task_fn
task.wait_for_initial_registration(settings.start_timeout)
File "/home/alattar.2/quentin-horvod-tar/deepinstrospect/horovod/runner/common/service/task_service.py", line 253, in wait_for_initial_registration
timeout.check_time_out_for('tasks to start')
File "/home/alattar.2/quentin-horvod-tar/deepinstrospect/horovod/runner/common/util/timeout.py", line 37, in check_time_out_for
self._timeout
Exception: Timed out waiting for tasks to start. Please check connectivity between servers. You may need to increase the --start-timeout parameter if you have too many servers. Timeout after 30 seconds.
[gpu01.cluster:mpispawn_0][child_handler] MPI process (rank: 0, pid: 25620) exited with status 1
```
The file I'm running,
```python
import argparse
#import torch.backends.cudnn as cudnn
import torch.nn.functional as F
import torch.optim as optim
import torch.utils.data.distributed
from torchvision import models
import horovod.torch as hvd
import timeit
import numpy as np

# Benchmark settings
parser = argparse.ArgumentParser(description='PyTorch Synthetic Benchmark',
                                 formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument('--fp16-allreduce', action='store_true', default=False,
                    help='use fp16 compression during allreduce')
parser.add_argument('--model', type=str, default='resnet50',
                    help='model to benchmark')
parser.add_argument('--batch-size', type=int, default=1,
                    help='input batch size')
parser.add_argument('--num-warmup-batches', type=int, default=1,
                    help='number of warm-up batches that don\'t count towards benchmark')
parser.add_argument('--num-batches-per-iter', type=int, default=1,
                    help='number of batches per benchmark iteration')
parser.add_argument('--num-iters', type=int, default=1,
                    help='number of benchmark iterations')
parser.add_argument('--no-cuda', action='store_true', default=False,
                    help='disables CUDA training')
parser.add_argument('--use-adasum', action='store_true', default=False,
                    help='use adasum algorithm to do reduction')
args = parser.parse_args()
args.cuda = not args.no_cuda and torch.cuda.is_available()


def benchmark_step(optimizer, model, data, target):
    optimizer.zero_grad()
    output = model(data)
    loss = F.cross_entropy(output, target)
    loss.backward()
    optimizer.step()


def log(s, nl=True):
    if hvd.rank() != 0:
        return
    print(s, end='\n' if nl else '')


def start_bench():
    hvd.init()
    if args.cuda:
        # Horovod: pin GPU to local rank.
        torch.cuda.set_device(hvd.local_rank())
        #cudnn.benchmark = False

    # Set up standard model.
    model = getattr(models, args.model)()

    # By default, Adasum doesn't need scaling up learning rate.
    lr_scaler = hvd.size() if not args.use_adasum else 1

    optimizer = optim.SGD(model.parameters(), lr=0.01 * lr_scaler)

    if args.cuda:
        # Move model to GPU.
        model.cuda()
        # If using GPU Adasum allreduce, scale learning rate by local_size.
        if args.use_adasum and hvd.nccl_built():
            lr_scaler = hvd.local_size()

    # Horovod: (optional) compression algorithm.
    compression = hvd.Compression.fp16 if args.fp16_allreduce else hvd.Compression.none

    # Horovod: wrap optimizer with DistributedOptimizer.
    optimizer = hvd.DistributedOptimizer(optimizer,
                                         named_parameters=model.named_parameters(),
                                         compression=compression,
                                         op=hvd.Adasum if args.use_adasum else hvd.Average)

    # Horovod: broadcast parameters & optimizer state.
    hvd.broadcast_parameters(model.state_dict(), root_rank=0)
    hvd.broadcast_optimizer_state(optimizer, root_rank=0)

    # Set up fixed fake data
    data = torch.randn(args.batch_size, 3, 224, 224)
    target = torch.LongTensor(args.batch_size).random_() % 1000
    if args.cuda:
        data, target = data.cuda(), target.cuda()

    log('Model: %s' % args.model)
    log('Batch size: %d' % args.batch_size)
    device = 'GPU' if args.cuda else 'CPU'
    log('Number of %ss: %d' % (device, hvd.size()))

    # Warm-up
    log('Running warmup...')
    timeit.timeit(lambda: benchmark_step(optimizer, model, data, target), number=args.num_warmup_batches)

    # Benchmark
    log('Running benchmark...')
    img_secs = []
    for x in range(args.num_iters):
        time = timeit.timeit(lambda: benchmark_step(optimizer, model, data, target), number=args.num_batches_per_iter)
        img_sec = args.batch_size * args.num_batches_per_iter / time
        log('Iter #%d: %.1f img/sec per %s' % (x, img_sec, device))
        img_secs.append(img_sec)

    # Results
    img_sec_mean = np.mean(img_secs)
    img_sec_conf = 1.96 * np.std(img_secs)
    log('Img/sec per %s: %.1f +-%.1f' % (device, img_sec_mean, img_sec_conf))
    log('Total img/sec on %d %s(s): %.1f +-%.1f' %
        (hvd.size(), device, hvd.size() * img_sec_mean, hvd.size() * img_sec_conf))


if __name__ == '__main__':
    from horovod.ray import RayExecutor
    import ray

    ray.init()
    num_hosts = 3
    num_workers_per_host = 1
    settings = RayExecutor.create_settings(ssh_identity_file="./hostfile")
    executor = RayExecutor(
        settings,
        #num_hosts=num_hosts,
        #num_workers_per_host=num_workers_per_host,
        gpus_per_worker=1,
        num_workers=2,
        use_gpu=True)
    executor.start()
    executor.run(start_bench)
    executor.shutdown()
```
I'm not sure where I'm going wrong here. Would highly appreciate it if you can help identify anything obvious that I may be missing here.
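In case it helps: the timeout in the traceback is the settings `start_timeout` that the error message's `--start-timeout` knob refers to, so raising it should look roughly like this (a sketch; I'm assuming `timeout_s` is the right kwarg for `create_settings`):
```python
from horovod.ray import RayExecutor

# assumption: timeout_s maps to the 30 s start_timeout in the traceback
settings = RayExecutor.create_settings(timeout_s=120, ssh_identity_file="./hostfile")
```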
|
closed
|
2023-02-08T02:46:57Z
|
2023-02-22T20:23:05Z
|
https://github.com/horovod/horovod/issues/3842
|
[
"bug",
"ray"
] |
KinanAlAttar
| 0
|
miguelgrinberg/Flask-SocketIO
|
flask
| 1,042
|
Messages are being queued and being sent only when the server returns a 200 response.
|
I am working on a chat platform and I am using flask_socketio for that.
I am able to receive the messages on the chat interface just fine, but when there is more than one message, instead of sending them one by one it sends all three messages in one go.
After carefully looking into the issue it seems like the messages are being queued and only being sent once a 200 response is sent from the server.
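A guess at the usual remedy, sketched below: run under an async server (e.g. eventlet) so each `emit` can be flushed as it happens rather than being batched with the long-polling response. The handler names here are invented:
```python
# pip install eventlet
from flask import Flask
from flask_socketio import SocketIO, emit

app = Flask(__name__)
socketio = SocketIO(app, async_mode="eventlet")

@socketio.on("message")
def handle_message(msg):
    emit("message", msg, broadcast=True)  # delivered as it happens, not batched

if __name__ == "__main__":
    socketio.run(app)
```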
|
closed
|
2019-08-16T10:26:43Z
|
2019-10-17T09:59:45Z
|
https://github.com/miguelgrinberg/Flask-SocketIO/issues/1042
|
[
"question"
] |
sajil13
| 1
|
netbox-community/netbox
|
django
| 18,893
|
Same device types with different part numbers
|
### NetBox version
v4.2.3
### Feature type
Data model extension
### Proposed functionality
We have some servers with the same model (S2600WFT) and different part numbers (K91296-001 and K77844-002). I cannot add different device types for them because device types are identified by model; NetBox considers them the same and does not allow adding the latter once the former has been added.
### Use case
Users will be able to add same server models with different part numbers as different device types.
### Database changes
_No response_
### External dependencies
_No response_
|
closed
|
2025-03-13T12:11:58Z
|
2025-03-20T19:27:08Z
|
https://github.com/netbox-community/netbox/issues/18893
|
[
"type: feature"
] |
wlnx
| 1
|
SYSTRAN/faster-whisper
|
deep-learning
| 1,128
|
no_speech_prob always returns 0.0.
|
I'm using large-v3, and when I convert the audio to a numpy array and pass it to the model for transcription, the no_speech_prob returned is 0.0 every time, but with large-v2 there is a correct return. I can't fix this. Here's my sample code:
```python
# imports implied by the snippet
import io
import time

import librosa
import numpy as np
import soundfile
from faster_whisper import WhisperModel

def transcribe_audio(self, audio_numpy):
    try:
        model = WhisperModel("large-v3", device="cuda", compute_type="float16", local_files_only=False)
        result, info = model.transcribe(
            audio_numpy,
            initial_prompt="",
            language="en",
            task="transcribe",
            vad_filter=self.vad,
            vad_parameters={"threshold": 0.5}
        )
        all_segments = list(result)
        print(all_segments)
    except Exception as e:
        print(f"An error occurred during transcription: {e}")

def send_audio_file(self, audio_file):
    print("do me.....")
    with open(audio_file, 'rb') as f:
        audio_data = f.read()
    audio_data = self.removewavhead(audio_data)
    for i in range(0, len(audio_data), 32000):
        chunk = audio_data[i:i + 32000]
        sf = soundfile.SoundFile(io.BytesIO(chunk), channels=2, endian="LITTLE", samplerate=8000, subtype="PCM_16", format="RAW")
        resampled_audio, _ = librosa.load(sf, sr=16000, dtype=np.float32)
        self.transcribe_audio(resampled_audio)
        time.sleep(0.1)
```
|
open
|
2024-11-12T06:21:42Z
|
2024-11-26T14:13:40Z
|
https://github.com/SYSTRAN/faster-whisper/issues/1128
|
[] |
giaoyyds
| 1
|
voila-dashboards/voila
|
jupyter
| 989
|
Wrong branch for latest version of documentation on `readthedocs`
|
Latest version of [Voila RTD](https://voila.readthedocs.io/en/latest/) is still pointing to `master` instead of `main`
|
closed
|
2021-10-05T08:26:00Z
|
2021-10-05T08:37:16Z
|
https://github.com/voila-dashboards/voila/issues/989
|
[
"documentation"
] |
trungleduc
| 2
|
horovod/horovod
|
deep-learning
| 3,994
|
tensorflow hvd.DistributedOptimizer bug
|
**Environment:**
1. Framework: (TensorFlow)
2. Framework version:2.12
3. Horovod version: master
There is a bug in hvd.DistributedOptimizer: setting the groups parameter reproduces the error.
opt = hvd.DistributedOptimizer(opt, op=hvd.Sum, groups=4)
https://github.com/horovod/horovod/blob/master/horovod/tensorflow/__init__.py#L322
**tensors** should be replaced with **indexed_slices_list**: tensors can contain both Tensor and IndexedSlices objects, and only IndexedSlices has a dense_shape attribute.
```
new_indexed_slices = [tf.IndexedSlices(x, i, dense_shape=t.dense_shape) for x,i,t in zip(new_values, new_indices, tensors)]
->
new_indexed_slices = [tf.IndexedSlices(x, i, dense_shape=t.dense_shape) for x,i,t in zip(new_values, new_indices, indexed_slices_list)]
```
```
[0]<stderr>: File "/opt/apps/local/lib64/python3/dist-packages/horovod/tensorflow/__init__.py", line 764, in compute_gradients
[0]<stderr>: avg_grads = _filtered_reduce_grads(grads, vars)
[0]<stderr>: File "/opt/apps/local/lib64/python3/dist-packages/horovod/tensorflow/__init__.py", line 729, in _filtered_reduce_grads
[0]<stderr>: rg = self._allreduce_grads(rg, rv)
[0]<stderr>: File "/opt/apps/local/lib64/python3/dist-packages/horovod/tensorflow/__init__.py", line 601, in allreduce_grads
[0]<stderr>: ignore_name_scope=use_generic_names)
[0]<stderr>: File "/opt/apps/local/lib64/python3/dist-packages/horovod/tensorflow/__init__.py", line 411, in _grouped_allreduce_cond
[0]<stderr>: allreduce_fn, id_fn)
[0]<stderr>: File "/opt/apps/local/lib64/python3/dist-packages/tensorflow/python/util/traceback_utils.py", line 153, in error_handler
[0]<stderr>: raise e.with_traceback(filtered_tb) from None
[0]<stderr>: File "/opt/apps/local/lib64/python3/dist-packages/horovod/tensorflow/__init__.py", line 401, in allreduce_fn
[0]<stderr>: return grouped_allreduce(tensors, *args, process_set=process_set, **kwargs)
[0]<stderr>: File "/opt/apps/local/lib64/python3/dist-packages/horovod/tensorflow/__init__.py", line 324, in grouped_allreduce
[0]<stderr>: dense_shape=t.dense_shape) for x,i,t in zip(new_values, new_indices, tensors)]
[0]<stderr>: File "/opt/apps/local/lib64/python3/dist-packages/horovod/tensorflow/__init__.py", line 324, in <listcomp>
[0]<stderr>: dense_shape=t.dense_shape) for x,i,t in zip(new_values, new_indices, tensors)]
[0]<stderr>:AttributeError: 'Tensor' object has no attribute 'dense_shape'
```
|
open
|
2023-10-12T09:26:51Z
|
2023-10-12T09:26:51Z
|
https://github.com/horovod/horovod/issues/3994
|
[
"bug"
] |
Chenjingliang1
| 0
|
pywinauto/pywinauto
|
automation
| 1,426
|
QUESTION: Interfacing with 3 types of controls
|
Hi,
I am a novice in automation and still learning your software. At the moment I want to automate some music mastering software, specifically the window where the file export happens. I identify 3 elements, or controls. The **first** one is the filename field, where you have to click on the text box and then enter the file name as a text string. The **second** one is a dropdown selection where you choose the audio format. The **third** one is the export path, which lets you click a button to choose the directory path; it then shows the directory path in text above, which looks uneditable, so the button appears to be the only way to interact with it at the frontend (unless there is a backend way to hack the string text instead).

When I run print_ctrl_ids() I get the following infos:

The ones in red I think I will need for the first 2 elements. The ones in blue i'm not sure which ones to use for the third element. My code is very simple, I call application start and application connect and only use type_keys() and send_keys() functions so far, which are very basic. I have not found good help so far in how to interface with the child_window based on the 3 type of controls.
I see in your documentation under https://pywinauto.readthedocs.io/en/latest/controls_overview.html that for the "Edit" control type there are many functions, so I tried the select() function, thinking it could click or select the edit box, but nothing happens.
`instance1 = app.ProjectName.child_window(title="Filename Edit Box", control_type="Edit").select()`
How to interact with these 3 control types?
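A hedged sketch, not verified against your app: the UIA-backend wrappers for the three control types. The titles ("Filename Edit Box", "Choose...") and the window name are placeholders taken from the description:
```python
from pywinauto.application import Application

app = Application(backend="uia").connect(title_re=".*ProjectName.*")
dlg = app.window(title_re=".*ProjectName.*")

# 1. Edit control: write the filename directly instead of clicking + typing.
dlg.child_window(title="Filename Edit Box", control_type="Edit").set_edit_text("my_master.wav")

# 2. ComboBox control: pick the audio format from the dropdown.
dlg.child_window(control_type="ComboBox").select("WAV")

# 3. Button control: the read-only Text above it only displays the chosen path,
#    so clicking the button and driving the directory dialog is the frontend route.
dlg.child_window(title="Choose...", control_type="Button").click_input()
```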
|
open
|
2025-01-23T10:02:47Z
|
2025-01-23T10:02:47Z
|
https://github.com/pywinauto/pywinauto/issues/1426
|
[] |
QuantumCrazy
| 0
|
Lightning-AI/pytorch-lightning
|
data-science
| 20,654
|
Fabric run CLI cannot launch python module
|
### Description & Motivation
In the Fabric run CLI, the script argument always assumes the input is an existing file path. However, users might need to launch a python module.
https://github.com/Lightning-AI/pytorch-lightning/blob/08266a9673cddf2d6ca39dc8de04d04bdf10cc7f/src/lightning/fabric/cli.py#L63
For example, the CLI expects
```bash
fabric run src/run.py \
--strategy=ddp \
--devices=2 \
--accelerator=cuda
```
However, this would fail
```bash
fabric run \
--strategy=ddp \
--devices=2 \
--accelerator=cuda \
-m src.run
```
In huggingface accelerate, they provide an extra argument for defining a module:
https://github.com/huggingface/accelerate/blob/4b6be8991059f39a8df8893333d11c54bc51fc60/src/accelerate/commands/launch.py#L358
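For what it's worth, the underlying mechanics are simple; a sketch of how a launcher could support both forms (a hypothetical helper, not Fabric's code):
```python
import runpy

def launch(target: str, as_module: bool = False) -> None:
    # mirrors `python script.py` vs `python -m pkg.module`
    if as_module:
        runpy.run_module(target, run_name="__main__")  # e.g. "src.run"
    else:
        runpy.run_path(target, run_name="__main__")    # e.g. "src/run.py"
```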
### Pitch
_No response_
### Alternatives
_No response_
### Additional context
_No response_
cc @lantiga @borda
|
open
|
2025-03-18T20:40:12Z
|
2025-03-18T20:40:32Z
|
https://github.com/Lightning-AI/pytorch-lightning/issues/20654
|
[
"feature",
"needs triage"
] |
jhliu17
| 0
|
plotly/dash
|
dash
| 2,515
|
[BUG] html : "In the callback for output(s):\n plot-update.id\nOutput 0 (plot-update.id) is already in use.\nTo resolve this, set `allow_duplicate=True` on\nduplicate outputs, or combine the outputs into\none callback function, distinguishing the trigger\nby using `dash.callback_context` if necessary." message : "Duplicate callback outputs"
|
When I run my app in a docker container in production, I get this error in the browser console and the app is frozen:
sh_renderer.v2_9_3m1682621169.min.js:2 {message: 'Duplicate callback outputs', html: 'In the callback for output(s):\n plot-update.id\nOu…er\nby using `dash.callback_context` if necessary.'}
Qo @ dash_renderer.v2_9_3m1682621169.min.js:2
(anonymous) @ dash_renderer.v2_9_3m1682621169.min.js:2
(anonymous) @ dash_renderer.v2_9_3m1682621169.min.js:2
(anonymous) @ dash_renderer.v2_9_3m1682621169.min.js:2
p @ dash_renderer.v2_9_3m1682621169.min.js:2
(anonymous) @ dash_renderer.v2_9_3m1682621169.min.js:2
(anonymous) @ dash_renderer.v2_9_3m1682621169.min.js:2
(anonymous) @ dash_renderer.v2_9_3m1682621169.min.js:2
(anonymous) @ dash_renderer.v2_9_3m1682621169.min.js:2
(anonymous) @ dash_renderer.v2_9_3m1682621169.min.js:2
(anonymous) @ dash_renderer.v2_9_3m1682621169.min.js:2
(anonymous) @ dash_renderer.v2_9_3m1682621169.min.js:2
to @ dash_renderer.v2_9_3m1682621169.min.js:2
ks @ dash_renderer.v2_9_3m1682621169.min.js:2
Bh @ react-dom@16.v2_9_3m1682621169.14.0.min.js:126
Dj @ react-dom@16.v2_9_3m1682621169.14.0.min.js:162
unstable_runWithPriority @ react@16.v2_9_3m1682621169.14.0.min.js:25
Da @ react-dom@16.v2_9_3m1682621169.14.0.min.js:60
xb @ react-dom@16.v2_9_3m1682621169.14.0.min.js:162
(anonymous) @ react-dom@16.v2_9_3m1682621169.14.0.min.js:162
U @ react@16.v2_9_3m1682621169.14.0.min.js:16
B.port1.onmessage @ react@16.v2_9_3m1682621169.14.0.min.js:24
However, when I run my app on development and staging, I do not get this error. Any idea why that would be happening in different environments?
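For reference, a sketch of the resolution the error text itself proposes (the component ids and layout here are invented):
```python
from dash import Dash, Input, Output, html

app = Dash(__name__)
app.layout = html.Div([html.Button("go", id="btn"), html.Div(id="plot-update")])

@app.callback(
    Output("plot-update", "children", allow_duplicate=True),
    Input("btn", "n_clicks"),
    prevent_initial_call=True,  # required whenever allow_duplicate=True is used
)
def update(n_clicks):
    return f"clicked {n_clicks}"
```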
Production env:
```
Package Version
------------------------- -----------
alembic 1.10.4
asttokens 2.2.1
attrs 23.1.0
backcall 0.2.0
beautifulsoup4 4.12.2
blosc2 2.0.0
brotlipy 0.7.0
cachelib 0.9.0
cffi 1.15.1
click 8.1.3
cloudpickle 2.2.1
colorlover 0.3.0
comm 0.1.3
contourpy 1.0.7
cycler 0.11.0
Cython 0.29.34
dash 2.9.3
dash-bootstrap-components 1.4.1
dash-core-components 2.0.0
dash-extensions 0.1.3
dash-html-components 2.0.0
dash-table 5.0.0
dash-tabulator 0.4.2
dash-uploader 0.7.0a1
dask 2023.4.0
debugpy 1.6.7
decorator 5.1.1
dill 0.3.6
diskcache 5.6.1
dnspython 2.3.0
EditorConfig 0.12.3
email-validator 2.0.0.post2
entrypoints 0.4
et-xmlfile 1.1.0
executing 1.2.0
Flask 2.2.4
Flask-Caching 1.10.1
Flask-Login 0.6.2
Flask-Migrate 4.0.4
Flask-SQLAlchemy 3.0.3
Flask-WTF 1.1.1
fonttools 4.39.3
fsspec 2023.4.0
greenlet 2.0.2
h5py 3.8.0
hdf5plugin 4.1.1
idna 3.4
importlib-metadata 6.6.0
ipyfilechooser 0.6.0
ipykernel 6.22.0
ipython 8.12.0
ipywidgets 8.0.6
itsdangerous 2.1.2
jedi 0.18.2
Jinja2 3.1.2
joblib 1.2.0
jsbeautifier 1.14.7
jsonschema 4.17.3
jupyter_client 8.2.0
jupyter_core 5.3.0
jupyterlab-widgets 3.0.7
kiwisolver 1.4.4
locket 1.0.0
lxml 4.9.2
Mako 1.2.4
MarkupSafe 2.1.2
matplotlib 3.7.1
matplotlib-inline 0.1.6
molmass 2023.4.10
more-itertools 8.14.0
ms-mint 0.2.3
ms-mint-app 0.2.3.2
msgpack 1.0.5
multiprocess 0.70.14
nest-asyncio 1.5.6
numexpr 2.8.4
numpy 1.24.3
openpyxl 3.1.2
packaging 21.3
pandas 2.0.1
parso 0.8.3
partd 1.4.0
pexpect 4.8.0
pickleshare 0.7.5
Pillow 9.5.0
pip 23.1.2
platformdirs 3.5.0
plotly 5.14.1
prompt-toolkit 3.0.38
psutil 5.9.5
ptyprocess 0.7.0
pure-eval 0.2.2
py-cpuinfo 9.0.0
pyarrow 11.0.0
pycparser 2.21
Pygments 2.15.1
pymzml 2.5.2
pyparsing 3.0.9
pyrsistent 0.19.3
pyteomics 4.6
python-dateutil 2.8.2
pytz 2023.3
PyYAML 6.0
pyzmq 25.0.2
regex 2023.3.23
scikit-learn 1.2.2
scipy 1.10.1
seaborn 0.12.2
setuptools 65.5.1
six 1.16.0
soupsieve 2.4.1
SQLAlchemy 2.0.11
stack-data 0.6.2
tables 3.8.0
tenacity 8.2.2
threadpoolctl 3.1.0
toolz 0.12.0
tornado 6.3.1
tqdm 4.65.0
traitlets 5.9.0
typing_extensions 4.5.0
tzdata 2023.3
urllib3 2.0.0
waitress 2.1.2
wcwidth 0.2.6
Werkzeug 2.2.3
wget 3.2
wheel 0.40.0
widgetsnbextension 4.0.7
WTForms 3.0.1
XlsxWriter 3.1.0
zipp 3.15.0
```
Staging env:
```
Package Version
------------------------- ------------------------
alembic 1.10.4
asttokens 2.2.1
attrs 23.1.0
backcall 0.2.0
beautifulsoup4 4.12.2
blosc2 2.0.0
brotlipy 0.7.0
cachelib 0.9.0
cffi 1.15.1
click 8.1.3
cloudpickle 2.2.1
colorlover 0.3.0
comm 0.1.3
contourpy 1.0.7
cycler 0.11.0
Cython 0.29.34
dash 2.9.3
dash-bootstrap-components 1.4.1
dash-core-components 2.0.0
dash-extensions 0.1.3
dash-html-components 2.0.0
dash-table 5.0.0
dash-tabulator 0.4.2
dash-uploader 0.7.0a1
dask 2023.4.0
debugpy 1.6.7
decorator 5.1.1
dill 0.3.6
diskcache 5.6.1
dnspython 2.3.0
EditorConfig 0.12.3
email-validator 2.0.0.post2
entrypoints 0.4
et-xmlfile 1.1.0
executing 1.2.0
Flask 2.2.4
Flask-Caching 1.10.1
Flask-Login 0.6.2
Flask-Migrate 4.0.4
Flask-SQLAlchemy 3.0.3
Flask-WTF 1.1.1
fonttools 4.39.3
fsspec 2023.4.0
greenlet 2.0.2
h5py 3.8.0
hdf5plugin 4.1.1
idna 3.4
importlib-metadata 6.6.0
ipyfilechooser 0.6.0
ipykernel 6.22.0
ipython 8.12.0
ipywidgets 8.0.6
itsdangerous 2.1.2
jedi 0.18.2
Jinja2 3.1.2
joblib 1.2.0
jsbeautifier 1.14.7
jsonschema 4.17.3
jupyter_client 8.2.0
jupyter_core 5.3.0
jupyterlab-widgets 3.0.7
kiwisolver 1.4.4
locket 1.0.0
lxml 4.9.2
Mako 1.2.4
MarkupSafe 2.1.2
matplotlib 3.7.1
matplotlib-inline 0.1.6
molmass 2023.4.10
more-itertools 8.14.0
ms-mint 0.2.3
ms-mint-app 0.2.3.1+0.gf86c0d7.dirty
msgpack 1.0.5
multiprocess 0.70.14
nest-asyncio 1.5.6
numexpr 2.8.4
numpy 1.24.3
openpyxl 3.1.2
packaging 21.3
pandas 2.0.1
parso 0.8.3
partd 1.4.0
pexpect 4.8.0
pickleshare 0.7.5
Pillow 9.5.0
pip 23.1.2
platformdirs 3.5.0
plotly 5.14.1
prompt-toolkit 3.0.38
psutil 5.9.5
ptyprocess 0.7.0
pure-eval 0.2.2
py-cpuinfo 9.0.0
pyarrow 11.0.0
pycparser 2.21
Pygments 2.15.1
pymzml 2.5.2
pyparsing 3.0.9
pyrsistent 0.19.3
pyteomics 4.6
python-dateutil 2.8.2
pytz 2023.3
PyYAML 6.0
pyzmq 25.0.2
regex 2023.3.23
scikit-learn 1.2.2
scipy 1.10.1
seaborn 0.12.2
setuptools 67.7.2
six 1.16.0
soupsieve 2.4.1
SQLAlchemy 2.0.11
stack-data 0.6.2
tables 3.8.0
tenacity 8.2.2
threadpoolctl 3.1.0
toolz 0.12.0
tornado 6.3.1
tqdm 4.65.0
traitlets 5.9.0
typing_extensions 4.5.0
tzdata 2023.3
urllib3 2.0.0
waitress 2.1.2
wcwidth 0.2.6
Werkzeug 2.2.3
wget 3.2
wheel 0.40.0
widgetsnbextension 4.0.7
WTForms 3.0.1
XlsxWriter 3.1.0
zipp 3.15.0
```
|
closed
|
2023-04-27T19:15:32Z
|
2024-05-23T10:11:03Z
|
https://github.com/plotly/dash/issues/2515
|
[] |
sorenwacker
| 1
|
iperov/DeepFaceLab
|
machine-learning
| 683
|
OOM error with training
|
Tried to run training H128. Got an error message saying:
```
Error: OOM when allocating tensor with shape [3,3,512,2048] and type float on /job:localhost
[[{{node training/adam/mul_73}} Mul[T=DT_float]
```
Also `[[{{node loss/model_2_loss_1/Mean 3/_597}}]]`.
At the bottom, the output gave the hint: if you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
After that, it just sits there doing nothing else.
|
closed
|
2020-03-30T11:39:24Z
|
2020-03-30T11:44:41Z
|
https://github.com/iperov/DeepFaceLab/issues/683
|
[] |
coastiestevie
| 1
|
CorentinJ/Real-Time-Voice-Cloning
|
deep-learning
| 1,150
|
Dear Friends, I am facing segmentation fault
|
Dear Friends, I am facing segmentation fault
`
Segmentation fault (core dumped)`
while using
`$ python demo_toolbox.py`
Nothing happens other than this fault: no window popup or any other error. I tried several python versions and also torch versions, with the same result.
I am using Centos 7. Earlier everything was fine. I would appreciate any help or a hint as to what the reason could be.
__Originally posted by @Tortoise17 in https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1143__
__Originally posted by @ImanuillKant1 in https://github.com/HTTPS-PhoenixEnterprise-com/HTTPS-PhoenixEnterprise-com/issues/3__
|
closed
|
2022-12-19T12:12:44Z
|
2024-12-22T13:21:01Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1150
|
[] |
ImanuillKant1
| 1
|
pytorch/vision
|
machine-learning
| 8,103
|
Scheduled workflow failed
|
Oh no, something went wrong in the scheduled workflow tests/download.
Please look into it:
https://github.com/pytorch/vision/actions/runs/6795955113
Feel free to close this if this was just a one-off error.
cc @pmeier
|
closed
|
2023-11-08T09:03:43Z
|
2023-11-09T11:13:56Z
|
https://github.com/pytorch/vision/issues/8103
|
[
"bug",
"module: datasets"
] |
github-actions[bot]
| 0
|
python-gino/gino
|
asyncio
| 476
|
Cascade delete?
|
### Description
Suppose you have a set of models like the following
```py
db = Gino()

class User(db.Model):
    __tablename__ = "user"

    id = db.Column(db.Numeric(), primary_key=True)
    username = db.Column(db.Unicode())

class Mod(db.Model):
    __tablename__ = "mod"

    id = db.Column(db.Numeric(), primary_key=True)
    title = db.Column(db.Unicode())

class UserMod(db.Model):
    __tablename__ = "user_mod"

    user_id = db.Column(None, db.ForeignKey("user.id"))
    mod_id = db.Column(None, db.ForeignKey("mod.id"))
```
Trying to delete a user that has a relation in `UserMod` like
```py
await User.delete.where(User.id == user_id).gino.status()
# or
user = await User.get(user_id)
await user.delete()
```
results in an error message similar to this

Is there any way to do a cascade delete (deleting other rows linked via foreign keys)?
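For reference, a sketch of a database-level cascade, reusing the `db` object from the snippet above; `ondelete` here is plain SQLAlchemy DDL that gino columns pass through, so the database itself removes the link rows:
```python
class UserMod(db.Model):
    __tablename__ = "user_mod"

    # ON DELETE CASCADE: deleting a user or mod row deletes matching user_mod rows
    user_id = db.Column(None, db.ForeignKey("user.id", ondelete="CASCADE"))
    mod_id = db.Column(None, db.ForeignKey("mod.id", ondelete="CASCADE"))
```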
|
closed
|
2019-05-01T12:36:49Z
|
2019-05-11T03:28:37Z
|
https://github.com/python-gino/gino/issues/476
|
[
"question"
] |
Ovyerus
| 2
|
WZMIAOMIAO/deep-learning-for-image-processing
|
pytorch
| 700
|
yolov3-spp multi-GPU training problem: it keeps reporting that there is no `rank` in the system. Does this program need to run on Linux?
|
**System information**
* Have I written custom code:
* OS Platform(e.g., window10 or Linux Ubuntu 16.04):
* Python version:
* Deep learning framework and version(e.g., Tensorflow2.1 or Pytorch1.3):
* Use GPU or not:
* CUDA/cuDNN version(if you use GPU):
* The network you trained(e.g., Resnet34 network):
**Describe the current behavior**
**Error info / logs**
|
open
|
2022-11-28T07:39:58Z
|
2022-12-05T04:55:11Z
|
https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/700
|
[] |
Lexiaotian
| 2
|
littlecodersh/ItChat
|
api
| 227
|
How can I get the WeChat account (id) of users in a group chat?
|
Hi everyone, how can I get the WeChat account (id) of users in a group chat?
|
closed
|
2017-02-11T10:42:56Z
|
2017-02-12T03:27:48Z
|
https://github.com/littlecodersh/ItChat/issues/227
|
[
"question"
] |
wwj718
| 2
|
NullArray/AutoSploit
|
automation
| 461
|
Unhandled Exception (4f5600abe)
|
Autosploit version: `3.0`
OS information: `Linux-4.4.0-142-generic-x86_64-with-Ubuntu-16.04-xenial`
Running context: `autosploit.py`
Error message: `global name 'Except' is not defined`
Error traceback:
```
Traceback (most recent call):
File "/home/screamwre/wk/tool/AutoSploit-master/autosploit/main.py", line 113, in main
loaded_exploits = load_exploits(EXPLOIT_FILES_PATH)
File "/home/screamwre/wk/tool/AutoSploit-master/lib/jsonize.py", line 61, in load_exploits
except Except:
NameError: global name 'Except' is not defined
```
Metasploit launched: `False`
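For reference, the quoted `NameError` is a one-token bug in `lib/jsonize.py`; a minimal sketch of the corrected handler (the surrounding code here is a stand-in):
```python
def load_exploit_file():
    raise ValueError("stand-in for the real JSON loading")

try:
    load_exploit_file()
except Exception:  # was `except Except:`, which is not a defined name
    pass
```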
|
closed
|
2019-02-13T02:37:58Z
|
2019-03-03T03:31:06Z
|
https://github.com/NullArray/AutoSploit/issues/461
|
[] |
AutosploitReporter
| 0
|
xuebinqin/U-2-Net
|
computer-vision
| 41
|
How to change Input/Output image dimension from 320x320 to 640x640
|
Dear Nathan,
I hope you are doing well. Your results are really stunning; thank you for sharing the project. It would be deeply appreciated if you could kindly answer the following question for me.
I want to change the model input image and output prediction size from 320x320 to 640x640. Can you please guide me on how I can get this done?
Thanks a lot
Kind Regards
Kamal Kanta Maity
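For reference, a sketch of where the 320 lives in the repo's training/test scripts, assuming the repo-local `data_loader` module and its `RescaleT`/`ToTensorLab` transforms (treat the exact names as assumptions); U-2-Net itself is fully convolutional, so no architecture change should be needed:
```python
from torchvision import transforms
from data_loader import RescaleT, ToTensorLab  # repo-local module

# was: transforms.Compose([RescaleT(320), ToTensorLab(flag=0)])
transform_640 = transforms.Compose([RescaleT(640), ToTensorLab(flag=0)])
```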
|
open
|
2020-06-27T13:47:03Z
|
2022-04-26T18:23:08Z
|
https://github.com/xuebinqin/U-2-Net/issues/41
|
[] |
kamalkantamaity
| 11
|
n0kovo/fb_friend_list_scraper
|
web-scraping
| 5
|
Timed out waiting for friends count to load
|

- Is it not able to read friends from a private profile?
AND/OR
- Do i need to install some firefox plugins?
|
closed
|
2022-08-22T09:41:45Z
|
2022-12-15T15:47:11Z
|
https://github.com/n0kovo/fb_friend_list_scraper/issues/5
|
[] |
jayjupdhig
| 5
|
wger-project/wger
|
django
| 1,742
|
Unable to edit exercise
|
For some exercises that I created I am not able to edit them anymore.
## Steps to Reproduce
<!-- Please include as many steps to reproduce so that we can replicate the problem. -->
1. Go to an exercise page
2. Select Edit
**Expected results:** <!-- what did you expect to see? -->
The exercise editor form with previously input values.
**Actual results:** <!-- what did you see? -->
A blank page only with the surrounding layout (page navigation etc.).
Four errors are logged in the browser:
<details>
<summary>Browser console errors</summary>
```
TypeError: n.data.find(...) is undefined
getOptionLabel https://wger.mydomain.com/static/CACHE/js/output.1cfcbf2246c6.js:1
// ...
TypeError: e is undefined
// ...
TypeError: e is undefined
// ...
TypeError: n.data.find(...) is undefined
getOptionLabel https://wger.mydomain.com/static/CACHE/js/output.1cfcbf2246c6.js:1
// ...
```
</details>
The container does not log very much
<details>
<summary>Container logs</summary>
```
wger_nginx | 172.18.0.6 - - [26/Jul/2024:12:52:55 +0000] "GET /api/v2/check-permission/?permission=exercises.delete_exercisevideo HTTP/1.1" 200 15 "https://wger.mydomain.com/en/exercise/1220/view-base/lateral-raise-machine" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:128.0) Gecko/20100101 Firefox/128.0" "91.3.121.228"
wger_nginx | 172.18.0.6 - - [26/Jul/2024:12:52:55 +0000] "GET /api/v2/check-permission/?permission=exercises.delete_exerciseimage HTTP/1.1" 200 15 "https://wger.mydomain.com/en/exercise/1220/view-base/lateral-raise-machine" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:128.0) Gecko/20100101 Firefox/128.0" "91.3.121.228"
```
</details>
This may be related to exercises that are not saved correctly during creation, see #1738
wger version 2.3.0a2
|
closed
|
2024-07-26T13:05:47Z
|
2024-12-12T19:38:15Z
|
https://github.com/wger-project/wger/issues/1742
|
[] |
seyfeb
| 2
|
pallets/flask
|
python
| 5,537
|
`pyright` type checking seems to be unused. `mypy` still being used.
|
We added `pyright` in 3.0.3 (see: https://github.com/pallets/flask/pull/5457), but `mypy` is still being used for type checking https://github.com/pallets/flask/blob/main/tox.ini#L29-L32, and `pyright` is not.
If this was an intentional change of type checkers, then I would expect `mypy` to be completely removed and `pyright` to be the command run in `tox typing`.
It should be noted, however, that running `pyright` yields 90 errors (71 tests/19 src). Is there an upgrade path or was this an oversight?
Environment:
- Python version:
- 3.12
- Flask version:
- 3.0.3
|
closed
|
2024-07-31T04:29:38Z
|
2024-11-08T00:07:41Z
|
https://github.com/pallets/flask/issues/5537
|
[] |
pygeek
| 1
|
marshmallow-code/apispec
|
rest-api
| 557
|
Components are not referenced when combined with oneOf, anyOf, allOf, not
|
When combining multiple schemas with `oneOf`, `anyOf`, `allOf` or `not` that were previously added to components, they will not get referenced. I.e., running
```python
from pprint import pprint
from apispec import APISpec

spec = APISpec(
    title="Gisty",
    version="1.0.0",
    openapi_version="3.0.2",
    info=dict(description="A minimal gist API"),
)

spec.components.schema(
    "Gist",
    {
        "properties": {
            "id": {"type": "integer", "format": "int64"},
            "name": {"type": "string"},
        }
    },
)

spec.components.schema(
    "AnotherGist",
    {
        "properties": {
            "id": {"type": "integer", "format": "int64"},
            "name": {"type": "string"},
            "age": {"type": "integer", "format": "int32"},
        }
    },
)

spec.path(
    path="/gist/{gist_id}",
    operations=dict(
        get=dict(
            responses={
                "200": {
                    "content": {
                        "application/json": {
                            "schema": {
                                "oneOf": [
                                    "Gist", "AnotherGist"
                                ]
                            }
                        }
                    }
                }
            }
        )
    ),
)

if __name__ == "__main__":
    pprint(spec.to_dict())
```
I get
```
{'components': {'schemas': {'AnotherGist': {'properties': {'age': {'format': 'int32',
'type': 'integer'},
'id': {'format': 'int64',
'type': 'integer'},
'name': {'type': 'string'}}},
'Gist': {'properties': {'id': {'format': 'int64',
'type': 'integer'},
'name': {'type': 'string'}}}}},
'info': {'description': 'A minimal gist API',
'title': 'Gisty',
'version': '1.0.0'},
'openapi': '3.0.2',
'paths': OrderedDict([('/gist/{gist_id}',
{'get': {'responses': OrderedDict([('200',
{'content': {'application/json': {'schema': {'oneOf': ['Gist',
'AnotherGist']}}}})])}})])}
```
whereas I expected that components would be referenced automatically, i.e.
```
{'components': {'schemas': {'AnotherGist': {'properties': {'age': {'format': 'int32',
'type': 'integer'},
'id': {'format': 'int64',
'type': 'integer'},
'name': {'type': 'string'}}},
'Gist': {'properties': {'id': {'format': 'int64',
'type': 'integer'},
'name': {'type': 'string'}}}}},
'info': {'description': 'A minimal gist API',
'title': 'Gisty',
'version': '1.0.0'},
'openapi': '3.0.2',
'paths': OrderedDict([('/gist/{gist_id}',
{'get': {'responses': OrderedDict([('200',
{'content': {'application/json': {'schema': {'oneOf': [{'$ref': '#/components/schemas/Gist'},
{'$ref': '#/components/schemas/AnotherGist'}]}}}})])}})])}
```
The expected result can be generated with a workaround using `spec.get_ref` as mentioned in #550
Analog results for `anyOf`, `allOf` and `not`.
(version 3.3.0 on python 3.7)
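For anyone landing here, a sketch of that workaround: write the `$ref` objects explicitly instead of passing bare schema names:
```python
schema = {
    "oneOf": [
        {"$ref": "#/components/schemas/Gist"},
        {"$ref": "#/components/schemas/AnotherGist"},
    ]
}
```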
|
closed
|
2020-04-25T11:41:17Z
|
2021-07-26T15:03:37Z
|
https://github.com/marshmallow-code/apispec/issues/557
|
[] |
lfiedler
| 10
|
mitmproxy/pdoc
|
api
| 429
|
one of the pytest doctests in a docstring not rendered correctly
|
#### Problem Description
One of the doctests in the example below isn't rendered correctly. The behavior depends on indentation and on the --docformat command argument; see the examples below.
#### Steps to reproduce the behavior:
test script doctest_google.py:
```python
def bla():
    """Print bla.

    Examples:
        >>> bla()
        blu

        >>> bla()
        bla
    """
    print("bla")


def bli():
    """Print bli.

    Examples:
        >>> bli()
        bli

        >>> bli()
        blo
    """
    print("bli")


bla()
bli()
```
Apply:
```bash
pdoc doctest_google.py
```
or
```bash
pdoc doctest_google.py --docformat restructuredtext
```
And get this:

Or apply
```bash
pdoc doctest_google.py --docformat google
```
and get this:

#### System Information
pdoc: 12.0.2
Python: 3.9.13
|
closed
|
2022-08-10T14:11:28Z
|
2022-08-23T17:04:05Z
|
https://github.com/mitmproxy/pdoc/issues/429
|
[
"bug",
"upstream"
] |
tmeyier
| 2
|
mitmproxy/mitmproxy
|
python
| 6,645
|
HTTP Request with Client Certificate doesn't work with IIS server (but works properly with other vendors)
|
#### Problem Description
Generate a client certificate and use it to connect to an IIS server: the connection gets stuck (and eventually fails).
The same client certificate works properly with a **NodeJS** server and an **Apache** server.
#### Steps to reproduce the behavior:
1. Setup IIS Server
2. Generate Client Certificate
3. Try to record...
#### System Information
Mitmproxy: 10.2.2 binary
Python: 3.12.1
OpenSSL: OpenSSL 3.1.4 24 Oct 2023
Platform: Windows-11-10.0.22631-SP0
|
open
|
2024-02-04T20:13:23Z
|
2024-02-04T20:13:23Z
|
https://github.com/mitmproxy/mitmproxy/issues/6645
|
[
"kind/triage"
] |
Shnitzelil
| 0
|
zappa/Zappa
|
flask
| 994
|
Maybe it would be more productive to make joint efforts with serverless framework and move Zappa features into plugins for it?
|
There is a plugin with which it's possible to achieve 90% of Zappa's functionality:
https://github.com/logandk/serverless-wsgi (it's based on Zappa, btw)
Docker containers with automatic build and deploy to ECR have been supported for several months.
There are a lot of third-party plugins.
Zappa is missing some critical features like websockets, parallel warmups, etc.
On the other side, serverless is missing something simple to use like the @task decorator.
|
closed
|
2021-06-25T11:48:58Z
|
2022-07-16T04:30:36Z
|
https://github.com/zappa/Zappa/issues/994
|
[] |
ArtikUA
| 3
|
huggingface/datasets
|
tensorflow
| 7,440
|
IterableDataset raises FileNotFoundError instead of retrying
|
### Describe the bug
In https://github.com/huggingface/datasets/issues/6843 it was noted that the streaming feature of `datasets` is highly susceptible to outages and doesn't back off for long (or even *at all*).
I was training a model while streaming SlimPajama and training crashed with a `FileNotFoundError`. I can only assume that this was due to a momentary outage considering the file in question, `train/chunk9/example_train_3889.jsonl.zst`, [exists like all other files in SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B/blob/main/train/chunk9/example_train_3889.jsonl.zst).
```python
...
File "/miniconda3/envs/draft/lib/python3.11/site-packages/datasets/iterable_dataset.py", line 2226, in __iter__
for key, example in ex_iterable:
File "/miniconda3/envs/draft/lib/python3.11/site-packages/datasets/iterable_dataset.py", line 1499, in __iter__
for x in self.ex_iterable:
File "/miniconda3/envs/draft/lib/python3.11/site-packages/datasets/iterable_dataset.py", line 1067, in __iter__
yield from self._iter()
File "/miniconda3/envs/draft/lib/python3.11/site-packages/datasets/iterable_dataset.py", line 1231, in _iter
for key, transformed_example in iter_outputs():
File "/miniconda3/envs/draft/lib/python3.11/site-packages/datasets/iterable_dataset.py", line 1207, in iter_outputs
for i, key_example in inputs_iterator:
File "/miniconda3/envs/draft/lib/python3.11/site-packages/datasets/iterable_dataset.py", line 1111, in iter_inputs
for key, example in iterator:
File "/miniconda3/envs/draft/lib/python3.11/site-packages/datasets/iterable_dataset.py", line 371, in __iter__
for key, pa_table in self.generate_tables_fn(**gen_kwags):
File "/miniconda3/envs/draft/lib/python3.11/site-packages/datasets/packaged_modules/json/json.py", line 99, in _generate_tables
for file_idx, file in enumerate(itertools.chain.from_iterable(files)):
File "/miniconda3/envs/draft/lib/python3.11/site-packages/datasets/utils/track.py", line 50, in __iter__
for x in self.generator(*self.args):
File "/miniconda3/envs/draft/lib/python3.11/site-packages/datasets/utils/file_utils.py", line 1378, in _iter_from_urlpaths
raise FileNotFoundError(urlpath)
FileNotFoundError: zstd://example_train_3889.jsonl::hf://datasets/cerebras/SlimPajama-627B@2d0accdd58c5d5511943ca1f5ff0e3eb5e293543/train/chunk9/example_train_3889.jsonl.zst
```
That final `raise` is at the bottom of the following snippet:
https://github.com/huggingface/datasets/blob/f693f4e93aabafa878470c80fd42ddb10ec550d6/src/datasets/utils/file_utils.py#L1354-L1379
So clearly, something choked up in `xisfile`.
### Steps to reproduce the bug
This happens when streaming a dataset and iterating over it. In my case, that iteration is done in Trainer's `inner_training_loop`, but this is not relevant to the iterator.
```python
File "/miniconda3/envs/draft/lib/python3.11/site-packages/accelerate/data_loader.py", line 835, in __iter__
next_batch, next_batch_info = self._fetch_batches(main_iterator)
```
### Expected behavior
This bug and the linked issue have one thing in common: *when streaming fails to retrieve an example, the entire program gives up and crashes*. As users, we cannot even protect ourselves from this: when we are iterating over a dataset, we can't make `datasets` skip over a bad example or wait a little longer to retry the iteration, because when a Python generator/iterator raises an error, it loses all its context.
In other words: if you have something that looks like `for b in a: for c in b: for d in c:`, errors in the innermost loop can only be caught by a `try ... except` in `c.__iter__()`. There should be such exception handling in `datasets` and it should have a **configurable exponential back-off**: first wait and retry after 1 minute, then 2 minutes, then 4 minutes, then 8 minutes, ... and after a given amount of retries, **skip the bad example**, and **only after** skipping a given amount of examples, give up and crash. This was requested in https://github.com/huggingface/datasets/issues/6843 too, since currently there is only linear backoff *and* it is clearly not applied to `xisfile`.
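As a rough illustration (a sketch only, with hypothetical names, not `datasets` API), the requested retry logic could look like this:
```python
import time

def iter_with_backoff(make_iter, max_retries=5, base_delay=60):
    """Hypothetical sketch: retry the inner iteration with exponential
    back-off instead of crashing on the first FileNotFoundError."""
    attempt = 0
    while True:
        try:
            yield from make_iter()
            return  # iteration finished cleanly
        except FileNotFoundError:
            if attempt >= max_retries:
                raise  # give up only after the configured retries
            time.sleep(base_delay * 2 ** attempt)  # 1 min, 2 min, 4 min, ...
            attempt += 1
```
Note that restarting `make_iter()` begins again from the start; true mid-stream resumption (and skipping individual bad examples) needs support inside `datasets` itself, at the point where the linked `raise` happens.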
### Environment info
- `datasets` version: 3.3.2 *(the latest version)*
- Platform: Linux-4.18.0-513.24.1.el8_9.x86_64-x86_64-with-glibc2.28
- Python version: 3.11.7
- `huggingface_hub` version: 0.26.5
- PyArrow version: 15.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2024.10.0
|
open
|
2025-03-07T19:14:18Z
|
2025-03-22T21:48:02Z
|
https://github.com/huggingface/datasets/issues/7440
|
[] |
bauwenst
| 5
|
numba/numba
|
numpy
| 9,721
|
@jit(target_backend='cpu') select loop mode for huge for loop
|
can i use it like:
```
# loop_mode = 'cuda' or 'cpu'
def looper(loop_mode):
......
@jit(target_backend=loop_mode)
for i in range (......
.....
```
I mean, can I use it like
`@jit(target_backend='cpu')` or
`@jit(target_backend=loop_mode)`
to build a library around a huge for loop?
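For reference, a sketch of the usual pattern: `@jit` decorates a whole function (it cannot decorate a bare `for` loop), so the huge loop goes inside one:
```python
from numba import jit

@jit(nopython=True)  # compile the whole function; the loop lives inside it
def looper(n):
    total = 0
    for i in range(n):
        total += i
    return total

print(looper(10_000_000))
```
Selecting CPU vs. GPU at runtime is normally done by choosing which compiled function to call (e.g. a `numba.cuda` kernel vs. a CPU-jitted function), rather than by passing a mode string into a single decorator.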
|
closed
|
2024-09-10T06:37:22Z
|
2024-10-25T01:58:58Z
|
https://github.com/numba/numba/issues/9721
|
[
"question",
"no action required",
"stale"
] |
ganbaaelmer
| 5
|
Lightning-AI/pytorch-lightning
|
pytorch
| 20,358
|
load data sequence is confusing
|
### Bug description
I understand the data-consuming sequence in Lightning is:
1. sanity check: call val_dataloader
2. training: call train_dataloader
3. validate: call val_dataloader

From the above sequence I understood that an epoch's cycle starts at val_dataloader and ends at train_dataloader, and that the 3rd step (validate) reuses the val data from the 1st call to val_dataloader.

But if you check trainer.current_epoch: assume current_epoch is 1 at the sanity-check val_dataloader; it then increases to 2 at train_dataloader. In this case it seems an epoch's cycle starts at train_dataloader and ends at val_dataloader.

This makes it confusing to write val_dataloader code when loading data dynamically. With infinite epochs there is no problem, but at the last epoch (and I don't know it is the last one), should I accept that val_data is None, or should I try to load it as if a next cycle were starting?

I think the sanity-check logic and the validate logic should be merged into one data setup that is used twice for different purposes. Calling val_dataloader twice and train_dataloader once also makes data loading difficult to manage.
### What version are you seeing the problem on?
v2.4
### How to reproduce the bug
_No response_
### Error messages and logs
```
# Error messages and logs here please
```
### Environment
<details>
<summary>Current environment</summary>
```
#- PyTorch Lightning Version (e.g., 2.4.0):
#- PyTorch Version (e.g., 2.4):
#- Python version (e.g., 3.12):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source):
```
</details>
### More info
_No response_
|
open
|
2024-10-22T14:08:12Z
|
2024-10-22T14:37:08Z
|
https://github.com/Lightning-AI/pytorch-lightning/issues/20358
|
[
"bug",
"needs triage",
"ver: 2.4.x"
] |
workhours
| 2
|
comfyanonymous/ComfyUI
|
pytorch
| 7,025
|
[Announcement] The frontend will no longer be shipped in the main ComfyUI repo, it will be a separate pip package instead.
|
Due to the frontend being a [separate project](https://github.com/Comfy-Org/ComfyUI_frontend) it doesn't make sense to continue shipping the frontend as part of the main repo.
To make our lives easier and prevent the main ComfyUI repo from becoming too large in size over time we have decided to start shipping the frontend as a separate pip package instead.
https://github.com/comfyanonymous/ComfyUI/pull/7021
After this change is merged if you use the desktop build or the standalone package and update using the update/update_comfyui.bat script or the ComfyUI manager everything should continue working as usual.
If you update manually using "git pull" you will have to do the extra step of updating the frontend using: `pip install -r requirements.txt`
If you have any comments or concerns about this change, let us know.
|
open
|
2025-02-28T23:46:10Z
|
2025-03-22T23:36:38Z
|
https://github.com/comfyanonymous/ComfyUI/issues/7025
|
[
"Important"
] |
comfyanonymous
| 44
|
onnx/onnx
|
scikit-learn
| 6,298
|
Deprecation / Update Policy for onnx dependencies / Release Process?
|
Hello,
since ONNX has developed into a quasi-standard in some areas, I think it could make sense
to write down how ONNX relates to other projects and what dependencies there are.
I want to avoid the expression _best practices_ and would therefore call it _good practices_ or _guidelines_.
When do we update/upgrade dependencies, e.g. _python_, _numpy_, _pybind_, _protobuf_ (and increase the version in requirements-min.txt)?
For Python versions, for example, I had the impression (I haven't checked now) that we remove a version at its EOL with the next ONNX release?
Could something like this be interesting for us, or are we already oriented in this direction?
https://scientific-python.org/specs/spec-0000/
I don't want to introduce a new formal system, I just want to understand the current practice and I think it would make sense to write this down.
|
open
|
2024-08-16T13:45:49Z
|
2025-03-10T05:56:00Z
|
https://github.com/onnx/onnx/issues/6298
|
[
"topic: documentation"
] |
andife
| 15
|
fastapi/fastapi
|
asyncio
| 12,239
|
Sponsor Badge CSS overflow issue on the docs
|
### Discussed in https://github.com/fastapi/fastapi/discussions/12218
<sup>Originally posted by **nat236919** September 19, 2024</sup>
### First Check
- [X] I added a very descriptive title here.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I searched the FastAPI documentation, with the integrated search.
- [X] I already searched in Google "How to X in FastAPI" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to FastAPI but to [Pydantic](https://github.com/pydantic/pydantic).
- [X] I already checked if it is not related to FastAPI but to [Swagger UI](https://github.com/swagger-api/swagger-ui).
- [X] I already checked if it is not related to FastAPI but to [ReDoc](https://github.com/Redocly/redoc).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```css
announce-wrapper .sponsor-badge {
    display: block;
    position: absolute;
    top: -10px;
    right: 0;
    font-size: 0.5rem;
    color: #999;
    background-color: #666;
    border-radius: 10px;
    padding: 0 10px;
    z-index: 10;
}
```
### Description

Due to its absolute position and overflow setting, the text is trailing down creating an expected scroll. I think we can simply solve the issue by removing **position: absolute;**
### Operating System
Windows
### Operating System Details
_No response_
### FastAPI Version
NA
### Pydantic Version
NA
### Python Version
NA
### Additional Context
_No response_
|
open
|
2024-09-20T19:37:07Z
|
2025-03-24T10:52:16Z
|
https://github.com/fastapi/fastapi/issues/12239
|
[
"question"
] |
Kludex
| 1
|
microsoft/MMdnn
|
tensorflow
| 599
|
error: Tensorflow to caffe "caffe_emitter:emit_Constant" doesn't handle one dim tensor.
|
# Enviroment
Platform (like ubuntu 16.04/win10): ubuntu 16.04
Python version: python3.5
Source framework with version (like Tensorflow 1.4.1 with GPU): Tensorflow 1.13
Destination framework with version (like CNTK 2.3 with GPU): caffe
Pre-trained model path (webpath or webdisk path):
Running scripts:
```bash
mmconvert\
-sf tensorflow\
-in ../facenet_0330/facenet_0330-eval-ckpt.meta\
-iw ../facenet_0330/facenet_0330-eval-ckpt\
--dstNodeName InceptionResnetV1/Bottleneck/BatchNorm/Reshape_1\
-df caffe\
-om tf_facenet_0330_se_inception_resnetv1
```
# Problem
Trying to convert a se_inception_resnetv1 from TensorFlow to Caffe (using master mmdnn), I got an IndexError like this:
```
Traceback (most recent call last):
File "/usr/local/bin/mmconvert", line 11, in <module>
sys.exit(_main())
File "/usr/local/lib/python3.5/dist-packages/mmdnn/conversion/_script/convert.py", line 108, in _main
ret = IRToCode._convert(code_args)
File "/usr/local/lib/python3.5/dist-packages/mmdnn/conversion/_script/IRToCode.py", line 61, in _convert
emitter.run(args.dstModelPath, args.dstWeightPath, args.phase)
File "/usr/local/lib/python3.5/dist-packages/mmdnn/conversion/caffe/caffe_emitter.py", line 144, in run
super(CaffeEmitter, self).run(dstNetworkPath, dstWeightPath, phase)
File "/usr/local/lib/python3.5/dist-packages/mmdnn/conversion/common/DataStructure/emitter.py", line 22, in run
self.save_code(dstNetworkPath, phase)
File "/usr/local/lib/python3.5/dist-packages/mmdnn/conversion/common/DataStructure/emitter.py", line 59, in save_code
code = self.gen_code(phase)
File "/usr/local/lib/python3.5/dist-packages/mmdnn/conversion/caffe/caffe_emitter.py", line 132, in gen_code
func(current_node)
File "/usr/local/lib/python3.5/dist-packages/mmdnn/conversion/caffe/caffe_emitter.py", line 434, in emit_Constant
shape[1],
IndexError: list index out of range
```
and got such shape info by adding some print code:
```
nodename: InceptionResnetV1/Logits/Flatten/flatten/Reshape/shape/1
<class 'graph_pb2.TensorShape'> dim {
size: 2
}
<class 'list'> [2]
```
Any idea about this?
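For what it's worth, a hypothetical guard in `emit_Constant` (variable names illustrative) would avoid the crash for one-dimensional constants like the `Reshape/shape` tensor above:
```python
# shape is e.g. [2] for the 1-D Reshape/shape constant above.
dims = list(shape)
channels = dims[1] if len(dims) > 1 else 1  # fall back when no second dim exists
```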
|
closed
|
2019-03-01T07:43:03Z
|
2019-03-05T03:30:03Z
|
https://github.com/microsoft/MMdnn/issues/599
|
[] |
Alnlll
| 3
|
miguelgrinberg/flasky
|
flask
| 160
|
how to initialize the "roles" table to insert the data?
|
I have encountered the following error:
"application not registered on db instance and no application bound to current context"
when I call Role.insert_roles() to initialize the "roles" table. Can you help me understand how to do this the right way? @miguelgrinberg
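For reference, the usual fix (a sketch assuming the standard flasky layout with an application factory) is to call it inside an application context:
```python
from app import create_app
from app.models import Role

app = create_app('development')
with app.app_context():
    Role.insert_roles()
```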
|
closed
|
2016-06-15T08:40:26Z
|
2017-03-17T19:01:54Z
|
https://github.com/miguelgrinberg/flasky/issues/160
|
[
"question"
] |
onlyanyz
| 2
|
deeppavlov/DeepPavlov
|
nlp
| 799
|
Remove KerasWrapper in favor of raw KerasModel
|
For now we have two basic classes: [KerasModel](https://github.com/deepmipt/DeepPavlov/blob/613d265f7371ba05365a7d44485066293c169674/deeppavlov/core/models/keras_model.py#L34-L36) and [KerasWrapper](https://github.com/deepmipt/DeepPavlov/blob/613d265f7371ba05365a7d44485066293c169674/deeppavlov/core/models/keras_model.py#L99-L102). `KerasWrapper` is used only once: for the [MorphoTagger](https://github.com/deepmipt/DeepPavlov/blob/613d265f7371ba05365a7d44485066293c169674/deeppavlov/models/morpho_tagger/network.py#L323-L329) class.
One should remove the `KerasWrapper` class and rewrite `MorphoTagger` to be inherited from `KerasModel`.
|
closed
|
2019-04-15T11:43:52Z
|
2019-07-09T07:37:13Z
|
https://github.com/deeppavlov/DeepPavlov/issues/799
|
[] |
yoptar
| 2
|
ray-project/ray
|
data-science
| 51,514
|
[Autoscaler] Add Support for BatchingNodeProvider in Autoscaler Config Option
|
### Description
[KubeRay](https://docs.ray.io/en/latest/cluster/kubernetes/user-guides/configuring-autoscaling.html#overview) currently uses the BatchingNodeProvider to manage clusters externally (using the KubeRay operator), which enables users to interact with external cluster management systems. However, to support custom providers with the BatchingNodeProvider, users must implement a module and integrate it as an external type provider, which leads to inconvenience.
On the other hand, [LocalNodeProvider](https://github.com/ray-project/ray/tree/master/python/ray/autoscaler/_private/local) offers the CoordinatorSenderNodeProvider to manage clusters externally through a coordinator server, [but the local type provider currently does not support updates for clusters](https://github.com/ray-project/ray/issues/39565).
To simplify custom cluster management, adding the BatchingNodeProvider and BatchingSenderNodeProvider would be highly beneficial. This would significantly assist users who wish to customize and use their own providers for managing clusters (on-premises or multi cloud environments).
For example, the following configuration could be used to add the BatchingNodeProvider to the provider type:
```yaml
provider:
type: batch
coordinator_address: "127.0.0.1:8000"
```
This would allow users to easily configure external cluster management with the BatchingNodeProvider, enhancing the flexibility and usability of the system.
### Use case
https://github.com/ray-project/ray/blob/8773682e49876627b9b4e10e2d2f4f32d961c0c9/python/ray/autoscaler/_private/providers.py#L184-L197
If the 'batch' type is additionally supported in the provider configuration, users will be able to manage the creation and deletion of cluster nodes externally in the coordinator server.
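For illustration, a rough sketch of what the registration might look like (hypothetical; the actual registry shape in `providers.py` may differ):
```python
# Sketch: map the new "batch" provider type to the BatchingNodeProvider
# that KubeRay already uses.
def _import_batch(provider_config):
    from ray.autoscaler.batching_node_provider import BatchingNodeProvider
    return BatchingNodeProvider

_NODE_PROVIDERS["batch"] = _import_batch
```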
|
open
|
2025-03-19T06:51:24Z
|
2025-03-19T22:23:54Z
|
https://github.com/ray-project/ray/issues/51514
|
[
"enhancement",
"P2",
"core"
] |
nadongjun
| 0
|
pyqtgraph/pyqtgraph
|
numpy
| 2,663
|
AA
|
```python
import numpy as np
import pyqtgraph.opengl as gl
from pyqtgraph.Qt import QtCore, QtWidgets

# create a 3D scatter plot (QApplication lives in QtWidgets, not QtGui)
app = QtWidgets.QApplication([])
w = gl.GLViewWidget()
w.opts['distance'] = 20
w.show()
w.setWindowTitle('3D Scatter Plot')

pos = np.random.random((1000, 3))
color = np.random.random((1000, 4))
size = np.random.randint(10, 30, size=1000)
scatter = gl.GLScatterPlotItem(pos=pos, color=color, size=size, pxMode=False)
w.addItem(scatter)

# define a function to handle mouse clicks
# NOTE: the three calls below do not exist in pyqtgraph's OpenGL API
# (GLViewWidget has no camera() or scene(), and GLScatterPlotItem has no
# pointsAt()), which is presumably why this snippet fails.
def mouse_clicked(event):
    if event.button() == QtCore.Qt.LeftButton:
        pos = event.pos()
        point = w.camera().unproject(pos)   # no such method on GLViewWidget
        indices = scatter.pointsAt(pos)     # no such method on GLScatterPlotItem
        if len(indices) > 0:
            print('Selected points:', indices)

# connect the mouse clicked signal to the function
w.scene().sigMouseClicked.connect(mouse_clicked)  # GLViewWidget has no scene()

# start the event loop
app.exec()
```
|
closed
|
2023-03-25T20:42:53Z
|
2023-05-24T00:05:53Z
|
https://github.com/pyqtgraph/pyqtgraph/issues/2663
|
[] |
ISMAILJAOUA11
| 2
|
python-visualization/folium
|
data-visualization
| 1,196
|
Archiving documents containing folium maps
|
#### Problem description
I need to archive a Jupyter Notebook containing folium maps. Because technology changes and map tiles might move in 10 or 20 years, I would need to somehow save all the content into one file, which could be either XML, PDF or TIFF (or a mix of these). And I need to do this on a large scale, that means it should be some kind of batch processing doing the job.
#### Solution found so far
Some browsers allow headless printing, but the folium maps can easily be distorted and messed up inside Jupyter Notebooks. Furthermore, some map tiles are not fully loaded at the time of printing.
#### Question
Has anybody encountered a similar question? How can this be solved? I do not want to change the source code of the HTML/CSS/JS on my side as this might not be a stable and reliable solution.
#### Further sources
I tried to find answers at:
- https://superuser.com/questions/1463819/print-fully-loaded-website-as-pdf-with-chrome-in-headless-mode
- https://bugs.chromium.org/p/chromium/issues/detail?id=986740#c3
but there has been no good answer so far.
|
closed
|
2019-08-07T07:58:41Z
|
2019-08-12T11:44:02Z
|
https://github.com/python-visualization/folium/issues/1196
|
[] |
1kastner
| 7
|
JaidedAI/EasyOCR
|
machine-learning
| 1,266
|
Request for Updated Pre-trained Models and Guidance on Fine-tuning EasyOCR
|
Hello,
I want to fine-tune EasyOCR for French (`easyocr.Reader(['fr'])`), and I followed the instructions provided in [this note](https://github.com/JaidedAI/EasyOCR/blob/master/custom_model.md) by @rkcosmos and [this article](https://pub.towardsai.net/how-to-fine-tune-easyocr-to-achieve-better-ocr-performance-1540f5076428). However, I encountered a problem: the note suggests downloading the OCR pre-trained model from [this Google Drive link](https://drive.google.com/drive/folders/15WPsuPJDCzhp2SvYZLRj8mAlT3zmoAMW), but the latest models available there were uploaded in 2020. Given that the last updates to EasyOCR were made 10 months ago, these models are outdated and do not perform as well as the latest EasyOCR version.
Additionally, I need to improve the French version specifically, but there is no option to specify which language to train. I tried to obtain a .pth file for the latest version of EasyOCR but wasn't sure how to proceed.
Could you please guide me on how to get the latest pre-trained model for EasyOCR, and how to fine-tune it specifically for French?
Thank you.
|
open
|
2024-06-10T08:44:30Z
|
2024-06-10T08:57:36Z
|
https://github.com/JaidedAI/EasyOCR/issues/1266
|
[] |
Meriem-DAHMANI
| 0
|
flairNLP/flair
|
pytorch
| 3,377
|
[Question]: Difference Between `train` and `fine_tune` Methods in `ModelTrainer`
|
### Question
I noticed the [documentation](https://github.com/flairNLP/flair/blob/master/docs/tutorial/tutorial-training/train-vs-fine-tune.md) on the difference between the ModelTrainer's train() and fine_tune() methods is empty. Based on this [example](https://github.com/flairNLP/flair/blob/master/docs/tutorial/tutorial-training/how-to-train-sequence-tagger.md), it seems like train() is for FLAIR embeddings, and fine_tune() is for Transformer embeddings. Is this the only distinction, or are there other differences? Any insights would be much appreciated!
|
closed
|
2023-11-27T15:58:18Z
|
2023-11-28T10:17:46Z
|
https://github.com/flairNLP/flair/issues/3377
|
[
"question"
] |
AAnirudh07
| 2
|
google-deepmind/sonnet
|
tensorflow
| 198
|
VQ-VAE training example(v2) returned NAN loss
|
Dear Team Deepmind,
I am really grateful that you shared a vqvae_example with sonnet2. However, when running it, I currently encounter a problem of NAN vqvae loss from the beginning. The outcome is:
100 train loss: nan recon_error: 1.010 perplexity: 1.031 vqvae loss: nan
and so on.
The plot of the training set is fine, but the reconstruction is pure grey. I tried vq_use_ema = False or True and got the same results.
I have slightly modified your code by replacing downloading and data loading with the previous version(https://github.com/deepmind/sonnet/blob/master/sonnet/examples/vqvae_example.ipynb) using a local directory. Also, I'm using TensorFlow version 2.2.0 Sonnet version 2.0.0. My code didn't return any error, just NAN loss.
I wonder if you could kindly help me with this problem.
Thanks a lot!
Sincerely,
Harold
My code:
```python
import os
import subprocess
import tempfile

import matplotlib.pyplot as plt
import numpy as np
import tensorflow.compat.v2 as tf
import tensorflow_datasets as tfds
import tree

try:
    import sonnet.v2 as snt
    tf.enable_v2_behavior()
except ImportError:
    import sonnet as snt

from six.moves import cPickle
from six.moves import urllib
from six.moves import xrange

# for plt display
os.system('export DISPLAY=:0')

print("TensorFlow version {}".format(tf.__version__))
print("Sonnet version {}".format(snt.__version__))

local_data_dir = '/home/harold/Documents/VQ-VAE'

'''
# Downloading cifar10
cifar10 = tfds.as_numpy(tfds.load("cifar10:3.0.2", split="train+test", batch_size=-1))
cifar10.pop("id", None)
cifar10.pop("label")
tree.map_structure(lambda x: f'{x.dtype.name}{list(x.shape)}', cifar10)
'''

# Data loading
'''
train_data_dict = tree.map_structure(lambda x: x[:40000], cifar10)
valid_data_dict = tree.map_structure(lambda x: x[40000:50000], cifar10)
test_data_dict = tree.map_structure(lambda x: x[50000:], cifar10)

def cast_and_normalise_images(data_dict):
    """Convert images to floating point with the range [-0.5, 0.5]"""
    images = data_dict['image']
    data_dict['image'] = (tf.cast(images, tf.float32) / 255.0) - 0.5
    return data_dict

train_data_variance = np.var(train_data_dict['image'] / 255.0)
print('train data variance: %s' % train_data_variance)
'''

def unpickle(filename):
    with open(filename, 'rb') as fo:
        return cPickle.load(fo, encoding='latin1')

def reshape_flattened_image_batch(flat_image_batch):
    return flat_image_batch.reshape(-1, 3, 32, 32).transpose([0, 2, 3, 1])  # convert from NCHW to NHWC

def combine_batches(batch_list):
    images = np.vstack([reshape_flattened_image_batch(batch['data'])
                        for batch in batch_list])
    labels = np.vstack([np.array(batch['labels']) for batch in batch_list]).reshape(-1, 1)
    return {'images': images, 'labels': labels}

train_data_dict = combine_batches([
    unpickle(os.path.join(local_data_dir,
                          'cifar-10-batches-py/data_batch_%d' % i))
    for i in range(1, 5)
])

valid_data_dict = combine_batches([
    unpickle(os.path.join(local_data_dir,
                          'cifar-10-batches-py/data_batch_5'))])

test_data_dict = combine_batches([
    unpickle(os.path.join(local_data_dir, 'cifar-10-batches-py/test_batch'))])

def cast_and_normalise_images(data_dict):
    """Convert images to floating point with the range [-0.5, 0.5]"""
    images = data_dict['images']
    data_dict['images'] = (tf.cast(images, tf.float32) / 255.0) - 0.5
    return data_dict

train_data_variance = np.var(train_data_dict['images'] / 255.0)
print('train data variance: %s' % train_data_variance)

# Encoder & Decoder Architecture
class ResidualStack(snt.Module):
    def __init__(self, num_hiddens, num_residual_layers, num_residual_hiddens,
                 name=None):
        super(ResidualStack, self).__init__(name=name)
        self._num_hiddens = num_hiddens
        self._num_residual_layers = num_residual_layers
        self._num_residual_hiddens = num_residual_hiddens

        self._layers = []
        for i in range(num_residual_layers):
            conv3 = snt.Conv2D(
                output_channels=num_residual_hiddens,
                kernel_shape=(3, 3),
                stride=(1, 1),
                name="res3x3_%d" % i)
            conv1 = snt.Conv2D(
                output_channels=num_hiddens,
                kernel_shape=(1, 1),
                stride=(1, 1),
                name="res1x1_%d" % i)
            self._layers.append((conv3, conv1))

    def __call__(self, inputs):
        h = inputs
        for conv3, conv1 in self._layers:
            conv3_out = conv3(tf.nn.relu(h))
            conv1_out = conv1(tf.nn.relu(conv3_out))
            h += conv1_out
        return tf.nn.relu(h)  # Resnet V1 style

class Encoder(snt.Module):
    def __init__(self, num_hiddens, num_residual_layers, num_residual_hiddens,
                 name=None):
        super(Encoder, self).__init__(name=name)
        self._num_hiddens = num_hiddens
        self._num_residual_layers = num_residual_layers
        self._num_residual_hiddens = num_residual_hiddens

        self._enc_1 = snt.Conv2D(
            output_channels=self._num_hiddens // 2,
            kernel_shape=(4, 4),
            stride=(2, 2),
            name="enc_1")
        self._enc_2 = snt.Conv2D(
            output_channels=self._num_hiddens,
            kernel_shape=(4, 4),
            stride=(2, 2),
            name="enc_2")
        self._enc_3 = snt.Conv2D(
            output_channels=self._num_hiddens,
            kernel_shape=(3, 3),
            stride=(1, 1),
            name="enc_3")
        self._residual_stack = ResidualStack(
            self._num_hiddens,
            self._num_residual_layers,
            self._num_residual_hiddens)

    def __call__(self, x):
        h = tf.nn.relu(self._enc_1(x))
        h = tf.nn.relu(self._enc_2(h))
        h = tf.nn.relu(self._enc_3(h))
        return self._residual_stack(h)

class Decoder(snt.Module):
    def __init__(self, num_hiddens, num_residual_layers, num_residual_hiddens,
                 name=None):
        super(Decoder, self).__init__(name=name)
        self._num_hiddens = num_hiddens
        self._num_residual_layers = num_residual_layers
        self._num_residual_hiddens = num_residual_hiddens

        self._dec_1 = snt.Conv2D(
            output_channels=self._num_hiddens,
            kernel_shape=(3, 3),
            stride=(1, 1),
            name="dec_1")
        self._residual_stack = ResidualStack(
            self._num_hiddens,
            self._num_residual_layers,
            self._num_residual_hiddens)
        self._dec_2 = snt.Conv2DTranspose(
            output_channels=self._num_hiddens // 2,
            output_shape=None,
            kernel_shape=(4, 4),
            stride=(2, 2),
            name="dec_2")
        self._dec_3 = snt.Conv2DTranspose(
            output_channels=3,
            output_shape=None,
            kernel_shape=(4, 4),
            stride=(2, 2),
            name="dec_3")

    def __call__(self, x):
        h = self._dec_1(x)
        h = self._residual_stack(h)
        h = tf.nn.relu(self._dec_2(h))
        x_recon = self._dec_3(h)
        return x_recon

class VQVAEModel(snt.Module):
    def __init__(self, encoder, decoder, vqvae, pre_vq_conv1,
                 data_variance, name=None):
        super(VQVAEModel, self).__init__(name=name)
        self._encoder = encoder
        self._decoder = decoder
        self._vqvae = vqvae
        self._pre_vq_conv1 = pre_vq_conv1
        self._data_variance = data_variance

    def __call__(self, inputs, is_training):
        z = self._pre_vq_conv1(self._encoder(inputs))
        vq_output = self._vqvae(z, is_training=is_training)
        x_recon = self._decoder(vq_output['quantize'])
        recon_error = tf.reduce_mean((x_recon - inputs) ** 2) / self._data_variance
        loss = recon_error + vq_output['loss']
        return {
            'z': z,
            'x_recon': x_recon,
            'loss': loss,
            'recon_error': recon_error,
            'vq_output': vq_output,
        }

# Build Model and train
# %%time

# Set hyper-parameters.
batch_size = 32
image_size = 32

# 100k steps should take < 30 minutes on a modern (>= 2017) GPU.
# 10k steps gives reasonable accuracy with VQVAE on Cifar10.
num_training_updates = 10000

num_hiddens = 128
num_residual_hiddens = 32
num_residual_layers = 2
# These hyper-parameters define the size of the model (number of parameters and layers).
# The hyper-parameters in the paper were (For ImageNet):
# batch_size = 128
# image_size = 128
# num_hiddens = 128
# num_residual_hiddens = 32
# num_residual_layers = 2

# This value is not that important, usually 64 works.
# This will not change the capacity in the information-bottleneck.
embedding_dim = 64

# The higher this value, the higher the capacity in the information bottleneck.
num_embeddings = 512

# commitment_cost should be set appropriately. It's often useful to try a couple
# of values. It mostly depends on the scale of the reconstruction cost
# (log p(x|z)). So if the reconstruction cost is 100x higher, the
# commitment_cost should also be multiplied with the same amount.
commitment_cost = 0.25

# Use EMA updates for the codebook (instead of the Adam optimizer).
# This typically converges faster, and makes the model less dependent on choice
# of the optimizer. In the VQ-VAE paper EMA updates were not used (but was
# developed afterwards). See Appendix of the paper for more details.
vq_use_ema = False

# This is only used for EMA updates.
decay = 0.99

learning_rate = 3e-4

# # Data Loading.
train_dataset = (
    tf.data.Dataset.from_tensor_slices(train_data_dict)
    .map(cast_and_normalise_images)
    .shuffle(10000)
    .repeat(-1)  # repeat indefinitely
    .batch(batch_size, drop_remainder=True)
    .prefetch(-1))

valid_dataset = (
    tf.data.Dataset.from_tensor_slices(valid_data_dict)
    .map(cast_and_normalise_images)
    .repeat(1)  # 1 epoch
    .batch(batch_size)
    .prefetch(-1))

'''
train_batch = next(iter(train_dataset))

def convert_batch_to_image_grid(image_batch):
    reshaped = (image_batch.reshape(4, 8, 32, 32, 3)
                .transpose(0, 2, 1, 3, 4)
                .reshape(4 * 32, 8 * 32, 3))
    return reshaped + 0.5

f = plt.figure(figsize=(16, 8))
ax = f.add_subplot(2, 2, 1)
ax.imshow(convert_batch_to_image_grid(train_batch['images'].numpy()),
          interpolation='nearest')
ax.set_title('training data originals')
plt.axis('off')
plt.show()
'''

# # Build modules.
encoder = Encoder(num_hiddens, num_residual_layers, num_residual_hiddens)
decoder = Decoder(num_hiddens, num_residual_layers, num_residual_hiddens)
pre_vq_conv1 = snt.Conv2D(output_channels=embedding_dim,
                          kernel_shape=(1, 1),
                          stride=(1, 1),
                          name="to_vq")

if vq_use_ema:
    vq_vae = snt.nets.VectorQuantizerEMA(
        embedding_dim=embedding_dim,
        num_embeddings=num_embeddings,
        commitment_cost=commitment_cost,
        decay=decay)
else:
    vq_vae = snt.nets.VectorQuantizer(
        embedding_dim=embedding_dim,
        num_embeddings=num_embeddings,
        commitment_cost=commitment_cost)

model = VQVAEModel(encoder, decoder, vq_vae, pre_vq_conv1,
                   data_variance=train_data_variance)

optimizer = snt.optimizers.Adam(learning_rate=learning_rate)

@tf.function
def train_step(data):
    with tf.GradientTape() as tape:
        model_output = model(data['images'], is_training=True)
    trainable_variables = model.trainable_variables
    grads = tape.gradient(model_output['loss'], trainable_variables)
    optimizer.apply(grads, trainable_variables)
    return model_output

train_losses = []
train_recon_errors = []
train_perplexities = []
train_vqvae_loss = []

for step_index, data in enumerate(train_dataset):
    train_results = train_step(data)
    train_losses.append(train_results['loss'])
    train_recon_errors.append(train_results['recon_error'])
    train_perplexities.append(train_results['vq_output']['perplexity'])
    train_vqvae_loss.append(train_results['vq_output']['loss'])

    if (step_index + 1) % 100 == 0:
        print('%d train loss: %f ' % (step_index + 1,
                                      np.mean(train_losses[-100:])) +
              ('recon_error: %.3f ' % np.mean(train_recon_errors[-100:])) +
              ('perplexity: %.3f ' % np.mean(train_perplexities[-100:])) +
              ('vqvae loss: %.3f' % np.mean(train_vqvae_loss[-100:])))
    if step_index == num_training_updates:
        break

# Plot loss
f = plt.figure(figsize=(16, 8))
ax = f.add_subplot(1, 2, 1)
ax.plot(train_recon_errors)
ax.set_yscale('log')
ax.set_title('NMSE.')

ax = f.add_subplot(1, 2, 2)
ax.plot(train_perplexities)
ax.set_title('Average codebook usage (perplexity).')
plt.show()

# Visualization
# Reconstructions
train_batch = next(iter(train_dataset))
valid_batch = next(iter(valid_dataset))

# Put data through the model with is_training=False, so that in the case of
# using EMA the codebook is not updated.
train_reconstructions = model(train_batch['images'],
                              is_training=False)['x_recon'].numpy()
valid_reconstructions = model(valid_batch['images'],
                              is_training=False)['x_recon'].numpy()

def convert_batch_to_image_grid(image_batch):
    reshaped = (image_batch.reshape(4, 8, 32, 32, 3)
                .transpose(0, 2, 1, 3, 4)
                .reshape(4 * 32, 8 * 32, 3))
    return reshaped + 0.5

f = plt.figure(figsize=(16, 8))
ax = f.add_subplot(2, 2, 1)
ax.imshow(convert_batch_to_image_grid(train_batch['images'].numpy()),
          interpolation='nearest')
ax.set_title('training data originals')
plt.axis('off')

ax = f.add_subplot(2, 2, 2)
ax.imshow(convert_batch_to_image_grid(train_reconstructions),
          interpolation='nearest')
ax.set_title('training data reconstructions')
plt.axis('off')

ax = f.add_subplot(2, 2, 3)
ax.imshow(convert_batch_to_image_grid(valid_batch['images'].numpy()),
          interpolation='nearest')
ax.set_title('validation data originals')
plt.axis('off')

ax = f.add_subplot(2, 2, 4)
ax.imshow(convert_batch_to_image_grid(valid_reconstructions),
          interpolation='nearest')
ax.set_title('validation data reconstructions')
plt.axis('off')
plt.show()
```
|
open
|
2021-02-13T06:41:58Z
|
2021-09-06T09:36:37Z
|
https://github.com/google-deepmind/sonnet/issues/198
|
[] |
EBGU
| 4
|
gee-community/geemap
|
streamlit
| 1,614
|
Possibly unused variables
|
`vis_params` and `url_format` variables e.g. [here](https://github.com/gee-community/geemap/blob/ccba6fd0185f5274c2a22ce5fdff8b0f9d04ee1e/geemap/ee_tile_layers.py#L70C5-L70C15) and [here](https://github.com/gee-community/geemap/blob/ccba6fd0185f5274c2a22ce5fdff8b0f9d04ee1e/geemap/ee_tile_layers.py#L109) appear unused.
I'm not positive of the convention, but it seems that the default for `vis_params` is set during initialization and `url_format` is set by `_get_tile_url_format`.
Can these be removed?
|
closed
|
2023-07-07T00:01:51Z
|
2023-07-07T02:45:42Z
|
https://github.com/gee-community/geemap/issues/1614
|
[
"cleanup"
] |
jdbcode
| 1
|
sqlalchemy/sqlalchemy
|
sqlalchemy
| 10,604
|
add "create_type" to sqltypes.Enum ?
|
This type has a "name" argument that only applies to PostgreSQL. So the type already has "backend specific" arguments, and its having a name directly implies that there's a separate CREATE statement, so we should have create_type directly present. See https://github.com/sqlalchemy/alembic/issues/1347
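A sketch of what the usage could look like (hypothetical: today `create_type` only exists on the `postgresql.ENUM` variant; this shows the proposed generic form):
```python
from sqlalchemy import Column, Enum, Integer, MetaData, Table

metadata = MetaData()
post = Table(
    "post", metadata,
    Column("id", Integer, primary_key=True),
    # Proposed: control the separate CREATE TYPE on PostgreSQL from the
    # generic Enum, as postgresql.ENUM's create_type already allows.
    Column("status", Enum("draft", "published", name="post_status",
                          create_type=False)),
)
```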
|
open
|
2023-11-08T17:20:09Z
|
2024-10-19T10:28:51Z
|
https://github.com/sqlalchemy/sqlalchemy/issues/10604
|
[
"postgresql",
"datatypes"
] |
zzzeek
| 4
|
ClimbsRocks/auto_ml
|
scikit-learn
| 86
|
Allow the user to pass in a list of categorical variables as the value for a column
|
For example, item_ids in order.
We will then go through and manually one-hot-encode them.
So, we could pass in item_id: [5, 8, 12, 4, 6]. Then auto_ml would set row['item_id=5'] = True.
Obviously, this will not work for continuous variables (item cost). But it should at least be helpful for categorical variables we might have many of.
What this also does is avoid having them be positional. So it lets us avoid saying first_item_id = 5, second_item_id = 8, etc. There's really no reason for them to be ordered in some cases. All we care about is "has item id 8 in order", we don't really care whether it's first or third.
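A sketch of the proposed expansion (key naming is illustrative):
```python
row = {'item_id': [5, 8, 12, 4, 6]}
expanded = {'item_id=%s' % v: True for v in row['item_id']}
# -> {'item_id=5': True, 'item_id=8': True, 'item_id=12': True,
#     'item_id=4': True, 'item_id=6': True}
```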
|
open
|
2016-09-20T17:30:47Z
|
2016-09-20T17:30:47Z
|
https://github.com/ClimbsRocks/auto_ml/issues/86
|
[] |
ClimbsRocks
| 0
|
coqui-ai/TTS
|
pytorch
| 3,246
|
[Feature request] Could add a switch to disabled the remote config check that in check_if_configs_are_equal function in TTS/utils/manage.py file
|
Sometimes, on an intranet that cannot connect to the Internet, the **check_if_configs_are_equal** function (in the TTS/utils/manage.py file), which checks the local config version against the remote config version, fails and hangs for a long time. Would it be possible to add a parameter to enable or disable this version check? Thanks.
|
closed
|
2023-11-17T08:20:46Z
|
2023-12-28T18:39:45Z
|
https://github.com/coqui-ai/TTS/issues/3246
|
[
"wontfix",
"feature request"
] |
listeng
| 2
|
aeon-toolkit/aeon
|
scikit-learn
| 2,324
|
[ENH] Docker image for aeon with Numba caching
|
### Describe the feature or idea you want to propose
Aeon should provide a Dockerfile or Docker image that showcases how to install and run aeon within Docker.
Additionally, aeon uses a lot of Numba-JITed code that gets compiled on first usage. This means that when using Docker, the functions are getting recompiled every time the Docker container is restarted. @chrisholder implemented caching for the Numba-compiled code in our CI. We should support similar caching for Docker images.
### Describe your proposed solution
- [ ] Create example Dockerfiles (simple and with Numba-caching)
- [ ] Build and publish Docker images to the [GitHub registry](https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry) (ghcr.io/aeon-toolkit/aeon) for every release with GitHub Actions (CI)
- [ ] Document usage
### Describe alternatives you've considered, if relevant
Just document the problem with Docker-Numba-caching and how to solve it in our documentation.
### Additional context
_No response_
|
open
|
2024-11-08T14:29:19Z
|
2025-03-22T15:02:01Z
|
https://github.com/aeon-toolkit/aeon/issues/2324
|
[
"documentation",
"enhancement",
"maintenance"
] |
SebastianSchmidl
| 3
|
kymatio/kymatio
|
numpy
| 490
|
Add meta information to 2D and 3D
|
Right now, we have it in 1D by calling `Scattering1D.meta`. It would be good to have something similar in 2D and 3D.
|
open
|
2020-01-23T20:08:17Z
|
2020-03-03T16:59:58Z
|
https://github.com/kymatio/kymatio/issues/490
|
[] |
janden
| 4
|
redis/redis-om-python
|
pydantic
| 571
|
AnyUrl & HttpUrl validation error
|
### OS: Ubuntu 22.04.3 LTS (WSL2)
### Python: 3.11.6
### Redis: 7.2.2
Code sample:
```python
from pydantic import HttpUrl, BaseModel
from redis_om import JsonModel


class Crush(BaseModel):
    http: HttpUrl


class Test(JsonModel):
    http: HttpUrl
```
Shell test:
```
kali@rglKali:~/$ python
Python 3.11.6 (main, Oct 17 2023, 16:29:19) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from test import Crush, Test
>>> Crush(http='https://avatars.githubusercontent.com/u/1529926?s=48&v=4')
Crush(http=Url('https://avatars.githubusercontent.com/u/1529926?s=48&v=4'))
>>> Test(http='https://avatars.githubusercontent.com/u/1529926?s=48&v=4')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/kali/venv/lib/python3.11/site-packages/redis_om/model/model.py", line 1683, in __init__
    super().__init__(*args, **kwargs)
  File "/home/kali/venv/lib/python3.11/site-packages/redis_om/model/model.py", line 1295, in __init__
    super().__init__(**data)
  File "/home/kali/venv/lib/python3.11/site-packages/pydantic/v1/main.py", line 341, in __init__
    raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for Test
http
  instance of Url expected (type=type_error.arbitrary_type; expected_arbitrary_type=Url)
```
|
closed
|
2023-10-21T23:24:09Z
|
2024-05-02T14:34:20Z
|
https://github.com/redis/redis-om-python/issues/571
|
[] |
rglKali
| 0
|
pallets-eco/flask-sqlalchemy
|
flask
| 1,177
|
'Lost connection to MySQL server during query' on database restart
|
When I restart my database (to simulate a timeout/network error/etc..), the next request my application process gets a 'Lost connection to MySQL server during query' error. Next requests process normally
Example code to reproduce:
```python
import os

from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy import engine as eg

# Create Flask app
app = Flask(__name__)

db_url = eg.URL.create(
    "mariadb+pymysql",
    username="",
    password="",
    host="",
    database=""
)

# Configure Database
app.config['SQLALCHEMY_DATABASE_URI'] = db_url
db = SQLAlchemy(app)

# Create model
class Test(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    a = db.Column(db.String)
    b = db.Column(db.String)

    def __str__(self):
        return "Test(%s, %s)" % (self.a, self.b)

    def __repr__(self):
        return self.__str__()

# Create route
@app.route('/')
def hello_world():
    rows = Test.query.all()
    return "|".join(str(row) for row in rows)

if __name__ == '__main__':
    app.run(debug=True)
```
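(Aside, not part of the repro: a common mitigation is SQLAlchemy's `pool_pre_ping`, which tests pooled connections before use so a restarted server doesn't surface as this error. A sketch:)
```python
# Pre-ping pooled connections so stale ones are recycled transparently.
app.config['SQLALCHEMY_ENGINE_OPTIONS'] = {"pool_pre_ping": True}
```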
Versions:
- Python: 3.10.9
- Flask-SQLAlchemy 3.0.2
- SQLAlchemy: 2.0.5.post1
- Flask: 2.2.2
|
closed
|
2023-03-07T14:40:02Z
|
2023-03-22T01:05:44Z
|
https://github.com/pallets-eco/flask-sqlalchemy/issues/1177
|
[] |
iTrooz
| 4
|
robotframework/robotframework
|
automation
| 5,198
|
Require global settings to be defined before tests and keywords using them
|
Currently we support data like this so that the created test gets the setup and the template defined after the test itself:
```robotframework
*** Test Cases ***
Example
arg1 arg2
*** Settings ***
Test Setup Setup Keyword
Test Template Template Keyword
```
This syntax causes two problems:
1. It is not possible to change the value of the global test and keyword related settings. There have, for example, been several requests to allow using different test templates in a single file with a syntax like this:
```robotframework
*** Settings ***
Test Template Something
# The following tests use the template specified above
*** Test Cases ***
T1
arg1.1 arg1.2
T2
arg2.1 arg2.2
# Change the template and use it with the forthcoming tests
*** Settings ***
Test Template New
*** Test Cases ***
T3
arg3.1 arg3.2 arg3.3
```
2. It makes implementing parsers a lot harder. Parsers typically go through data line-by-line, but in our case the parser needs to process the whole file to find possible `Test Template` or it won't be able to handle test cases properly. Robot's own parser obviously handles that, but this syntax makes it a lot harder to implement custom parsers, for example, for syntax highlighting purposes or to implement a faster C/Rust/Go based parser for Robot itself.
The main benefit of allowing settings to be defined after tests and keywords using them is that it allows "hiding" the Setting section at the end of the file. That can be useful when tests are shown to non-technical people, but scrolling to the right place in the file or using code folding in an IDE ought to be good enough alternatives. Notice also that even after this change we'd allow suite related settings as well as imports to be defined after tests and keywords.
What do others think about this? Do you consider the benefits explained above higher than problems? The change is backwards incompatible, so it could only be done in a major release and would require a deprecation period. Please use :+1: / :-1: reactions to indicate how you feel about this and write a comment if you have something more to say.
|
open
|
2024-09-04T14:12:49Z
|
2024-11-05T15:31:44Z
|
https://github.com/robotframework/robotframework/issues/5198
|
[
"enhancement",
"priority: high",
"backwards incompatible"
] |
pekkaklarck
| 6
|
autogluon/autogluon
|
computer-vision
| 4,706
|
[BUG] AutoGluon v1.2 misconfigured libnvJitLink leads to Jupyter kernel crash on AL2 SageMaker notebooks
|
**To Reproduce**
Start a Jupyter notebook instance with the following startup script on SageMaker, and run AutoGluon-TimeSeries.
```
#!/bin/bash
set -e
ENV_NAME="ag"
PYTHON_VERSION="3.11"
source /home/ec2-user/anaconda3/bin/activate
conda create --yes --name $ENV_NAME python=$PYTHON_VERSION
source /home/ec2-user/anaconda3/bin/activate $ENV_NAME
conda activate $ENV_NAME
conda install --yes jupyterlab ipykernel
pip install -U uv
pip install -U ipywidgets
python -m uv pip install autogluon
python -m ipykernel install --user --name $ENV_NAME --display-name "autogluon python 3.11"
```
and run
```
from autogluon.timeseries import TimeSeriesPredictor, TimeSeriesDataFrame

df = TimeSeriesDataFrame("https://autogluon.s3.amazonaws.com/datasets/timeseries/m4_hourly_subset/train.csv")

pred = TimeSeriesPredictor(
    eval_metric="WQL"
)
pred.fit(
    df,
    hyperparameters={
        "SeasonalNaive": {},
    },
    verbosity=2,
)
```
**Screenshots / Logs**
```
Beginning AutoGluon training...
AutoGluon will save models to '/home/ec2-user/SageMaker/AutogluonModels/ag-20241203_105542'
=================== System Info ===================
AutoGluon Version: 1.2
Python Version: 3.11.10
Operating System: Linux
Platform Machine: x86_64
Platform Version: #1 SMP Tue Oct 22 16:38:23 UTC 2024
CPU Count: 2
GPU Count: 0
Memory Avail: 2.63 GB / 3.76 GB (69.9%)
Disk Space Avail: 4.51 GB / 4.78 GB (94.4%)
WARNING: Available disk space is low and there is a risk that AutoGluon will run out of disk during fit, causing an exception.
We recommend a minimum available disk space of 10 GB, and large datasets may require more.
===================================================
Fitting with arguments:
{'enable_ensemble': True,
'eval_metric': WQL,
'hyperparameters': {'SeasonalNaive': {}},
'known_covariates_names': [],
'num_val_windows': 1,
'prediction_length': 1,
'quantile_levels': [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
'random_seed': 123,
'refit_every_n_windows': 1,
'refit_full': False,
'skip_model_selection': False,
'target': 'target',
'verbosity': 2}
Inferred time series frequency: 'h'
Provided train_data has 148060 rows, 200 time series. Median time series length is 700 (min=700, max=960).
Provided data contains following columns:
target: 'target'
AutoGluon will gauge predictive performance using evaluation metric: 'WQL'
This metric's sign has been flipped to adhere to being higher_is_better. The metric score can be multiplied by -1 to get the metric value.
===================================================
Starting training. Start time is 2024-12-03 10:55:43
Models that will be trained: ['SeasonalNaive']
# [KERNEL CRASHES]
```
**Installed Versions**
<!-- Please run the following code snippet: -->
<details>
```python
# Replace this code with the output of the following:
from autogluon.core.utils import show_versions
show_versions()
```
</details>
|
open
|
2024-12-03T10:40:20Z
|
2025-01-14T22:58:42Z
|
https://github.com/autogluon/autogluon/issues/4706
|
[
"bug",
"install",
"dependency",
"priority: 1"
] |
canerturkmen
| 9
|
PokeAPI/pokeapi
|
graphql
| 678
|
Mega - Pokemon || Shadow - Pokemon
|
Hi, I was wondering how to get a mega Pokemon's info (like Mega Venusaur), and likewise for shadow Pokemon. Is there any way to get this info?
|
open
|
2021-12-20T18:21:15Z
|
2022-08-04T08:23:23Z
|
https://github.com/PokeAPI/pokeapi/issues/678
|
[] |
Flame101
| 5
|
biolab/orange3
|
data-visualization
| 6,576
|
Distributions outputs wrong data
|
**What's wrong**
The widget's output does not match the selection.
**How can we reproduce the problem?**
Load Zoo and pass it to Distributions. Show "type" and check "Sort categories by frequency".
When selecting the n-th column, the widget outputs data referring to the n-th value of the variable in the original, unsorted order.
It is pretty amazing that nobody noticed this so far.
**When fixing this** consider that the user can click "Sort categories by frequency" while something is selected. This shouldn't affect which values are selected (e.g. if one selects mammals and insects, they must still be selected).
**Don't forget** selection using keyboard, including, e.g. Shift-Right.
**What's your environment?**
- Operating system: macOS
- Orange version: latest master
- How you installed Orange: pip
|
closed
|
2023-09-14T14:45:06Z
|
2023-09-20T20:21:07Z
|
https://github.com/biolab/orange3/issues/6576
|
[
"bug",
"meal"
] |
janezd
| 4
|
jacobgil/pytorch-grad-cam
|
computer-vision
| 350
|
Hello! I used EigenCAM in YOLOX. I tried every layer of the network, and found that the effect was not good. I found that the distribution of the thermal map obtained by inputting different images was the same
|
open
|
2022-10-25T03:33:56Z
|
2022-10-25T03:33:56Z
|
https://github.com/jacobgil/pytorch-grad-cam/issues/350
|
[] |
Kyle-fang
| 0
|
|
LibrePhotos/librephotos
|
django
| 1,121
|
can not download models
|
We cannot download the models from Dropbox in China. Could you store the model files elsewhere?
Thanks!
https://githubfast.com/LibrePhotos/librephotos/blob/dev/service/image_captioning/api/im2txt/README.md
|
open
|
2024-01-08T09:22:37Z
|
2024-01-08T09:22:37Z
|
https://github.com/LibrePhotos/librephotos/issues/1121
|
[
"enhancement"
] |
yutao1129
| 0
|
pytorch/pytorch
|
python
| 149,626
|
`Segmentation Fault` in `torch.lstm_cell`
|
### 🐛 Describe the bug
The following code snippet causes a `segmentation fault` when running torch.lstm_cell:
```
import torch
inp = torch.full((0, 8), 0, dtype=torch.float)
hx = torch.full((0, 9), 0, dtype=torch.float)
cx = torch.full((0, 9), 0, dtype=torch.float)
w_ih = torch.full((1, 8), 1.251e+12, dtype=torch.float)
w_hh = torch.full((1, 9), 1.4013e-45, dtype=torch.float)
b_ih = None
b_hh = None
torch.lstm_cell(inp, (hx, cx), w_ih, w_hh, b_ih, b_hh)
```
### Versions
torch 2.6.0
cc @mikaylagawarecki
|
open
|
2025-03-20T15:24:09Z
|
2025-03-21T07:08:54Z
|
https://github.com/pytorch/pytorch/issues/149626
|
[
"module: crash",
"module: rnn",
"triaged",
"bug",
"module: empty tensor",
"topic: fuzzer"
] |
vwrewsge
| 1
|
tensorpack/tensorpack
|
tensorflow
| 1,132
|
Request help on debuging the running speed
|
I'm using the `FasterRCNN` module in Tensorpack and I modified the network by adding another branch. The training data is saved on an NFS file system (in `png` format and `xml` format) and I run the code on a node of a cluster. The node has 4 K80 GPUs. When I run `FasterRCNN` on this system, the execution seems slow; please see the following log:
11%|# |53/500[03:00<25:20, 0.29it/s]
12%|#2 |61/500[03:20<24:53, 0.29it/s]
25%|##5 |125/500[06:03<18:34, 0.34it/s]
26%|##6 |130/500[06:20<18:19, 0.34it/s]
40%|###9 |198/500[09:04<13:43, 0.37it/s]
41%|####1 |205/500[09:20<13:24, 0.37it/s]
54%|#####4 |271/500[12:04<09:54, 0.38it/s]
56%|#####5 |278/500[12:20<09:36, 0.38it/s]
70%|####### |352/500[15:20<06:09, 0.40it/s]
85%|########4 |424/500[18:05<03:03, 0.41it/s]
86%|########6 |430/500[18:20<02:48, 0.41it/s]
100%|##########|500/500[21:29<00:00, 0.39it/s] [0407 08:33:58 @base.py:285] Epoch 1 (global_step 500) finished, time:21 minutes 29 seconds.
But when I check GPU utilization, it shows 100% most of the time; sometimes, though, three of the 4 GPUs show 0% load and only one shows 100%.
I'm not sure how to speed up the training; the execution seems slow compared to the original `FasterRCNN` in Tensorpack. I'm not sure whether this is related to data prefetching, because of the busy NFS file system in the cluster.
I've also tried keeping all the images and annotations in memory to speed up data loading, but that failed due to memory limits. Could you please give some advice on how to speed up the training in this case? Thanks.
|
closed
|
2019-04-07T08:39:10Z
|
2019-04-13T07:18:58Z
|
https://github.com/tensorpack/tensorpack/issues/1132
|
[
"performance"
] |
Remember2018
| 4
|
clovaai/donut
|
nlp
| 101
|
Document Information Extraction in Production - How to know the confidence of each field?
|
Good afternoon, thank you very much for the incredible work!
I just wanted to know if there was an automatic way to know if the field is OK. Especially when other applications use the data extracted by the model. That way we could reduce the work of people in the loop as much as possible.
Other architectures like LayoutLM return the confidence of each field. I know that it is difficult to get confidences for each prediction with this architecture. Would it be doable to apply some validations to check whether a prediction was properly extracted? Do you have other ideas? (One idea is sketched below.)
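For example, since Donut decodes autoregressively with a transformers `VisionEncoderDecoderModel`, per-token probabilities can be recovered from the generation scores and aggregated per field. A rough sketch (assuming `model`, `pixel_values` and `decoder_input_ids` are prepared as in the Donut examples):
```python
import torch

outputs = model.generate(
    pixel_values,
    decoder_input_ids=decoder_input_ids,
    output_scores=True,
    return_dict_in_generate=True,
    max_length=512,
)
# Probability the model assigned to each generated token.
token_probs = [
    torch.softmax(step_scores, dim=-1).max(dim=-1).values
    for step_scores in outputs.scores
]
# A crude per-field confidence: min (or product) of its tokens' probabilities.
```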
Thanks in advance!
|
open
|
2022-12-01T17:19:32Z
|
2022-12-03T13:43:19Z
|
https://github.com/clovaai/donut/issues/101
|
[] |
WaterKnight1998
| 3
|
Farama-Foundation/Gymnasium
|
api
| 652
|
[Proposal] Inclusion of ActionRepeat wrapper
|
### Proposal
I would like to propose `ActionRepeat` wrapper that would allow the wrapped environment to repeat `step()` for the specified number of times.
### Motivation
I am working on implementing models like PlaNet and Dreamer, and I'm working with MuJoCo environments mostly. In these implementations, there is almost always a term like `action_repeat`. I think the proposed wrapper would simplify this line of implementation.
### Pitch
Assuming that the overridden `step()` is called at time-step `t`, it would return the following:
> `observation[t + n_repeat], sum(reward[t: t + n_repeat + 1]), terminated, truncated, info[t + n_repeat]`
That means the overridden `step()` would call the parent `step()` at least once, and it would assert that `n_repeat` is non-negative (>= 0).
If `terminated` or `truncated` is reached within the action repetition, the loop would be exited, and the reward would be summed up to that point, while the observation and info from that time step would be returned. A sketch is given below.
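A minimal sketch of the proposed wrapper (illustrative only; naming and defaults are up for discussion):
```python
import gymnasium as gym


class ActionRepeat(gym.Wrapper):
    """Repeat the same action for n_repeat extra steps, summing rewards."""

    def __init__(self, env, n_repeat):
        super().__init__(env)
        assert n_repeat >= 0
        self.n_repeat = n_repeat

    def step(self, action):
        total_reward = 0.0
        # Call the parent step() at least once, then up to n_repeat more times.
        for _ in range(self.n_repeat + 1):
            obs, reward, terminated, truncated, info = self.env.step(action)
            total_reward += reward
            if terminated or truncated:
                break
        return obs, total_reward, terminated, truncated, info
```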
### Alternatives
_No response_
### Additional context
_No response_
### Checklist
- [X] I have checked that there is no similar [issue](https://github.com/Farama-Foundation/Gymnasium/issues) in the repo
|
open
|
2023-08-04T21:36:34Z
|
2025-02-25T01:22:07Z
|
https://github.com/Farama-Foundation/Gymnasium/issues/652
|
[
"enhancement",
"good first issue"
] |
smmislam
| 3
|
explosion/spaCy
|
nlp
| 13,196
|
Spacy dependency requirements are not met for pydantic versions ^2
|
## How to reproduce the behaviour
Example pyproject.toml
```
[tool.poetry.dependencies]
python = "^3.7"
pydantic = "^2.0.0"
spacy = "^3.2.0"
```
When running `poetry update --lock` there is a build dependency error between pydantic version ^2 and typing-extensions.
```
And because spacy (3.7.2) depends on typing-extensions (>=3.7.4.1,<4.5.0)
and pydantic (2.0) depends on typing-extensions (>=4.6.1), pydantic (2.0) is incompatible with spacy (>=3.2.0,<4.0.0).
```
The restriction on typing-extensions is in the `requirements.txt`, which are run via `setuptools` in `pyproject.toml`
## Your Environment
* Operating System: ubuntu 22.04
* Python Version Used: 3.10
* spaCy Version Used:3.7.2
* Environment Information:
|
closed
|
2023-12-11T18:03:59Z
|
2023-12-12T06:53:05Z
|
https://github.com/explosion/spaCy/issues/13196
|
[
"install"
] |
jacobmanderson
| 1
|
django-import-export/django-import-export
|
django
| 1,130
|
Repeated ID in imported file does not cause error
|
Just to experiment, I created an XL with a repeated ID to see whether the import process would throw an error. My DB table already contains **IDs upto 4211**, and the table I created for import contained many rows with IDs as follows:
id
____
4210
4209
4210
4211
4212
4213
4214
...
When running the import, the preview shows that the first row with ID 4210 is an UPDATE, then 4209 is an UPDATE (this is fine), then the _next_ 4210 is also an UPDATE, then 4211 is an UPDATE (this is fine), and 4212 onwards are NEW (this is fine).
I would expect that since each is an atomic operation, at the end of the import, the DB record with ID 4210 will have contents of the 2nd 4210 in the XL sheet. Right?
Now, why is this not flagged off as an import error?
PS: I hope I've been able to explain the above clearly!
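For illustration, one way to surface such duplicates yourself is a validation hook; here is a hedged sketch using `Resource.before_import` (the `Book` model and the column index are hypothetical stand-ins, not part of the reporter's setup):
```python
from import_export import resources

class BookResource(resources.ModelResource):
    class Meta:
        model = Book  # hypothetical model with an `id` primary key

    def before_import(self, dataset, **kwargs):
        # reject files that contain the same id more than once
        ids = [row[0] for row in dataset if row[0]]  # assumes id is column 0
        dupes = {i for i in ids if ids.count(i) > 1}
        if dupes:
            raise ValueError(f"Duplicate ids in import file: {sorted(dupes)}")
```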
|
closed
|
2020-05-06T17:54:12Z
|
2020-05-08T08:40:10Z
|
https://github.com/django-import-export/django-import-export/issues/1130
|
[] |
zehawki
| 4
|
microsoft/nni
|
tensorflow
| 5,767
|
NNI console won't show after trialGpuNumber was set to 1
|
**Describe the issue**:
NNI console won't show after trialGpuNumber was set to 1
**Environment**:
- NNI version: 3.0
- Training service (local|remote|pai|aml|etc):
- Client OS:
- Server OS (for remote mode only):
- Python version: 2.1
- PyTorch/TensorFlow version:
- Is conda/virtualenv/venv used?:
- Is running in Docker?: N/A
**Configuration**:
- Experiment config (remember to remove secrets!):
- Search space:
**Log message**: N/A
- nnimanager.log:
- dispatcher.log:
- nnictl stdout and stderr:
**How to reproduce it?**:
|
closed
|
2024-04-02T21:54:41Z
|
2024-04-03T01:56:05Z
|
https://github.com/microsoft/nni/issues/5767
|
[] |
yiqiaoc11
| 0
|
PokemonGoF/PokemonGo-Bot
|
automation
| 6,116
|
catch chance display
|
I have noticed that the bot displays the catch chance in a pretty weird manner:
[2017-07-23 20:33:18] [PokemonCatchWorker] [INFO] Threw a Pinap Berry! Catch rate with Greatball is now: 26.72
[2017-07-23 20:33:28] [PokemonCatchWorker] [INFO] Great Curveball throw! Used Ultraball, with chance 26.72 (42 left)
As far as I understand, this means that the chance of catching the pokemon is the same no matter whether a Greatball or an Ultraball is used. But this is not true.
My guess is that internally the bot uses the correct values, but the displayed value is not updated.
Can somebody fix the displayed value?
|
open
|
2017-07-23T17:42:02Z
|
2017-07-24T07:42:42Z
|
https://github.com/PokemonGoF/PokemonGo-Bot/issues/6116
|
[] |
guenterneust
| 1
|
piskvorky/gensim
|
data-science
| 3,387
|
n/a
|
n/a
|
closed
|
2022-09-07T10:48:16Z
|
2022-09-07T14:35:02Z
|
https://github.com/piskvorky/gensim/issues/3387
|
[] |
alabidi
| 0
|
benbusby/whoogle-search
|
flask
| 679
|
[QUESTION] How to get the suggestions URL right?
|
I recently downloaded all of my data from Google Takeout and I noticed a peculiar field that Chrome doesn't show when adding/editing search engines, namely `suggestions_url`, which in this case is set to `http://localhost:5000/autocomplete?q={searchTerms}`. Since Chrome shows no way to change this field whatsoever, I'm assuming it's something that's exchanged automatically between Whoogle and Chrome as part of the OpenSearch standard or whatever, but my question is: how do I tell Whoogle to use my instance URL instead of the localhost:port one? It's in a Docker container behind nginx, so obviously that doesn't work.
|
closed
|
2022-03-13T11:33:26Z
|
2022-08-01T16:22:11Z
|
https://github.com/benbusby/whoogle-search/issues/679
|
[
"question"
] |
nocturn9x
| 3
|
teamhide/fastapi-boilerplate
|
fastapi
| 40
|
Potential Performance Issue with Session Management in Repository Methods
|
I noticed that in the current implementation of UserSQLAlchemyRepo, each repository method creates a new session via the `session_factory()` context manager. This can lead to performance issues when multiple repository methods are called sequentially within the same service-layer function.
https://github.com/teamhide/fastapi-boilerplate/blob/ca3bb3fa5686cb031d6cd99d45b34fd7d2b12494/core/db/session.py#L51-L76
https://github.com/teamhide/fastapi-boilerplate/blob/ca3bb3fa5686cb031d6cd99d45b34fd7d2b12494/app/user/adapter/output/persistence/sqlalchemy/user.py#L29-L39
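A minimal, self-contained sketch of the pattern being described, using a synchronous SQLite session for illustration (the project itself uses an async `session_factory`; the repo and method names here are illustrative):
```python
from sqlalchemy import create_engine, text
from sqlalchemy.orm import sessionmaker

engine = create_engine("sqlite:///:memory:")
SessionLocal = sessionmaker(bind=engine)

class UserRepo:
    # each method opens and tears down its own session, so two sequential
    # calls from one service function mean two sessions and no shared
    # transaction or identity map
    def count_users(self) -> int:
        with SessionLocal() as session:  # session #1
            return session.execute(text("SELECT 1")).scalar()

    def first_user(self) -> int:
        with SessionLocal() as session:  # session #2
            return session.execute(text("SELECT 1")).scalar()

repo = UserRepo()
repo.count_users()
repo.first_user()
```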
|
open
|
2024-08-26T08:11:45Z
|
2025-03-19T03:16:14Z
|
https://github.com/teamhide/fastapi-boilerplate/issues/40
|
[] |
vuxmai
| 1
|
vitalik/django-ninja
|
rest-api
| 373
|
Parent URL resolver match parameters aren't handled (except when they are)
|
I have a soft tenant system where clients of the system each have their own `/company-slug/...` base URL. Probably not how I'd approach this today but this is where we are. All our URLs have a component that gets extracted and churned into a Company record (so we can do permissions checks, etc). I'm mounting my ninja API underneath that.
```python
path('<cs:company_slug>/', include([
    # ...
    path('api/', api.urls),
]))
```
That works great —I can access the API— but there are two problems.
The first is accessing parent parameters (eg `company_slug`) from within the API views. I need to know what the company is. I could probably ship this out to middleware, stuff the company object into the request (and possibly will, long term) but in other scenarios that might not be desirable. For now I'm accessing it via `request.resolver_match.kwargs['company_slug']`. It'd be nice if I could access this from the view more directly. Maybe.
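For illustration, a minimal sketch of that workaround inside a view (assuming a standard django-ninja router; reading `resolver_match` is the approach described above, not a built-in feature):
```python
from ninja import Router

router = Router()

@router.get("/items")
def list_items(request):
    # pull the parent URLconf's captured parameter off the resolved match
    company_slug = request.resolver_match.kwargs["company_slug"]
    return {"company": company_slug}
```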
The _buggier_ problem is that docs don't work when mounted underneath additional parameters. The docs view at `/mycompany/api/docs` gets an extra, unexpected keyword argument (`company_slug` in my example here) and just pops.
|
open
|
2022-02-25T10:38:40Z
|
2023-09-05T08:43:39Z
|
https://github.com/vitalik/django-ninja/issues/373
|
[] |
oliwarner
| 4
|
pallets/flask
|
flask
| 4,795
|
flask crashed
|
Dears,
Hope all is well.
I have a project, you could call it a big project, with more than 168 API routes (/x /y /yyy, etc.),
each with a specific function. I am facing a big problem: Flask crashes, or becomes slower, after receiving about 1500-2000 req/second,
and I noticed that system resources are not exhausted (i.e., of my 8 GB RAM only 4 GB is used, and CPU utilization is 40 to 65%).
So what is the issue here?
Also, do you have real-world examples of companies using your API for a big project?
And what are your recommendations for production apps?
|
closed
|
2022-08-31T17:19:22Z
|
2022-08-31T17:27:55Z
|
https://github.com/pallets/flask/issues/4795
|
[] |
themeswordpress
| 0
|
docarray/docarray
|
fastapi
| 1,071
|
docarray proto module can't be imported by child processes
|
**Describe the bug**
docarray proto module can't be imported by child processes
**To Reproduce**
script.py
```python
import torch
import multiprocessing as mp
from docarray import DocumentArray, BaseDocument
from docarray.typing import TorchTensor
class Meow(BaseDocument):
tensor: TorchTensor
def deserialize(pb_msg):
return DocumentArray[Meow].from_protobuf(pb_msg)
with mp.Pool(2) as p:
a = p.map(deserialize, [DocumentArray[Meow]([Meow(tensor=torch.randn(x)) for x in range(4)]).to_protobuf() for _ in range(2)])
print(a)
```
Output
```console
Traceback (most recent call last):
File "dragon.py", line 13, in <module>
a = p.map(deserialize, [DocumentArray[Meow]([Meow(tensor=torch.randn(x)) for x in range(4)]).to_protobuf() for _ in range(2)])
File "/home/jackmin/miniconda3/envs/docarray/lib/python3.8/multiprocessing/pool.py", line 364, in map
return self._map_async(func, iterable, mapstar, chunksize).get()
File "/home/jackmin/miniconda3/envs/docarray/lib/python3.8/multiprocessing/pool.py", line 771, in get
raise self._value
File "/home/jackmin/miniconda3/envs/docarray/lib/python3.8/multiprocessing/pool.py", line 537, in _handle_tasks
put(task)
File "/home/jackmin/miniconda3/envs/docarray/lib/python3.8/multiprocessing/connection.py", line 206, in send
self._send_bytes(_ForkingPickler.dumps(obj))
File "/home/jackmin/miniconda3/envs/docarray/lib/python3.8/multiprocessing/reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
_pickle.PicklingError: Can't pickle <class 'docarray_pb2.DocumentArrayProto'>: import of module 'docarray_pb2' failed
```
**Expected behavior**
No error
|
closed
|
2023-02-01T11:27:19Z
|
2023-02-07T13:41:22Z
|
https://github.com/docarray/docarray/issues/1071
|
[] |
Jackmin801
| 1
|
flairNLP/flair
|
nlp
| 3,283
|
[Question]: Evaluating as a multi-label problem: False
|
### Question
I'm training a SequenceTagger model for NER. My Corpus contains 13 different Labels. Training works and everything goes fine, but the evaluation during epochs seem off.
The Message "Evaluating as a multi-label problem: False" shows up and I think that's the reason why the F1-Scores are so bad. When I cancel after a few epochs and check the report, the F1-Scores for all entities are above 0.80.
How can I change the evaluation method during epochs? I did not have the problem with flair 0.10, but I can't downgrade because we have scans running on our containers and old version are listed as security threats and thus the containers are deleted.
I am using the latest flair version.
Code looks like this:
```python
def train_model(data_path, embeddings_path, model_path):
    # Clear GPU memory to avoid memory errors
    torch.cuda.empty_cache()
    # Select the GPU
    flair.device = torch.device('cuda:0')
    # Define the dataset format: text in the first column, entities in the second
    columns = {0: 'text', 1: 'ner'}
    # Set the path of the training and test files
    data_folder = data_path
    # Create a corpus using the dataset format and paths defined above
    corpus: Corpus = ColumnCorpus(data_folder, columns,
                                  train_file='train.txt',
                                  test_file='test.txt',
                                  dev_file='dev.txt')
    label_type = 'ner'
    # Create the label dictionary
    label_dict = corpus.make_label_dictionary(label_type=label_type)
    print(label_dict)
    # Load the embeddings
    embeddings = FlairEmbeddings(embeddings_path)
    # Build the model structure
    tagger = SequenceTagger(hidden_size=256,
                            embeddings=embeddings,
                            tag_dictionary=label_dict,
                            tag_type=label_type,
                            use_crf=True)
    # Initialize the ModelTrainer class
    trainer = ModelTrainer(tagger, corpus)
    # Look for a checkpoint before training
    folder_content = []
    for cp in os.listdir(model_path):
        folder_content.append(cp)
    if 'checkpoint.pt' in folder_content:
        trained_model = SequenceTagger.load(model_path + '/checkpoint.pt')
        trainer.resume(trained_model,
                       base_path=model_path + '-resume',
                       learning_rate=0.1,
                       mini_batch_size=4,
                       max_epochs=150)
    else:
        # Start training
        trainer.train(model_path,
                      learning_rate=0.1,
                      mini_batch_size=4,
                      max_epochs=150,
                      checkpoint=True)
```
|
closed
|
2023-07-19T09:13:22Z
|
2023-07-24T06:30:14Z
|
https://github.com/flairNLP/flair/issues/3283
|
[
"question"
] |
sorth1992
| 1
|
CorentinJ/Real-Time-Voice-Cloning
|
pytorch
| 1,120
|
Victoria
|
LOVE
|
closed
|
2022-09-29T01:01:14Z
|
2022-09-29T01:01:32Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1120
|
[] |
ImanuillKant1
| 0
|
marcomusy/vedo
|
numpy
| 66
|
Points being plot as Lines
|
Hi,
First of all thanks you for such a great library with so many examples.
I am trying to plot points from an array. However, the resulting picture is always a line for these specific data points.
```python
from numpy import array
from vtkplotter import show, Points, Plotter
positions = array([[0.16621395, 0.28789101, 0. ],
[0. , 0. , 0. ],
[0.24386514, 0. , 0. ],
[0.07767649, 0.04484654, 0.12684517],
[0. , 0. , 0. ],
[0.02198013, 0.01269023, 0.0358934 ],
[0.13108585, 0.00570134, 0.01612581],
[0.36077809, 0. , 0. ],
[0.22941974, 0.3399008 , 0.08126898],
[0.15449964, 0.01249786, 0.03534928]])
plot_cba = Plotter()
points_cba = Points(positions, c='black', r=0.01)
plot_cba += points_cba
show(points_cba)
```

When I try different data points without a 0 in the x, y, z coordinates, the points look fine!
I'm not sure if it's a bug, or if there is a parameter to be modified to draw points with 0 in any of the coordinates.
Thanks in advance for your help.
|
closed
|
2019-10-13T14:05:41Z
|
2019-10-14T13:20:55Z
|
https://github.com/marcomusy/vedo/issues/66
|
[
"bug",
"in-progress"
] |
mahendra-ramajayam
| 3
|
plotly/dash
|
jupyter
| 2,457
|
[BUG] DataTable containing List behaves inconsistently between display and export
|
**Describe your context**
Please provide us your environment, so we can easily reproduce the issue.
- replace the result of `pip list | grep dash` below
```
dash 2.8.1
dash-bootstrap-components 1.4.0
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-renderer 1.5.0
dash-table 5.0.0
```
- if frontend related, tell us your Browser, Version and OS
- OS: Ubuntu 20.04
- Browser: Firefox
- Version: 111.0
**Describe the bug**
Create a pandas DataFrame where one cell is a Python list.
The items in the list are displayed concatenated in the front end, but when exported to CSV, the cell is blank.
**Expected behavior**
The exported CSV should be the same as the display in the front end.
**Screenshots**
Minimal example:
```
import pandas
from dash import Dash, html, dcc, dash_table
app = Dash(__name__)
df = pandas.DataFrame({"Column": ["one", "two", ["three","four"]]})
app.layout = html.Div(children=[
dash_table.DataTable(
id="tableid",
columns=[{"name": col, "id": col} for col in df.columns],
data=df.to_dict('records'),
export_format="csv"
)
])
if __name__ == '__main__':
app.run_server("0.0.0.0", 5000, debug=False)
```
Browser display:

Data.csv:
```
Column
one
two
```
Please note the blank line after 'two'
|
closed
|
2023-03-16T13:01:47Z
|
2024-07-25T13:03:09Z
|
https://github.com/plotly/dash/issues/2457
|
[
"dash-data-table"
] |
avarsava
| 2
|
graphql-python/graphene-django
|
graphql
| 618
|
How to provide an argument from DjangoFilterConnectionField to a node
|
```
items = DjangoFilterConnectionField(ItemNode, userId=graphene.String())
...
class ItemNode(DjangoObjectType):
def resolve_something(self, info):
userId = ... ??? ...
if userId:
return self.something.filter(version__isnull=True)
return self.something.filter(version__isnull=False)
...
```
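For what it's worth, one common workaround (a sketch, not necessarily this library's recommended approach) is to stash the argument on `info.context`, which is the Django request in graphene-django, and read it back inside the node's resolver. `Item` here stands in for the question's model:
```python
import graphene
from graphene_django import DjangoObjectType
from graphene_django.filter import DjangoFilterConnectionField

class ItemNode(DjangoObjectType):
    class Meta:
        model = Item  # the question's model, assumed to exist

    def resolve_something(self, info):
        user_id = getattr(info.context, "user_id", None)
        if user_id:
            return self.something.filter(version__isnull=True)
        return self.something.filter(version__isnull=False)

class Query(graphene.ObjectType):
    items = DjangoFilterConnectionField(ItemNode, userId=graphene.String())

    def resolve_items(root, info, userId=None, **kwargs):
        info.context.user_id = userId  # visible to child resolvers
        return Item.objects.all()
```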
|
closed
|
2019-04-15T11:33:28Z
|
2021-04-14T20:05:25Z
|
https://github.com/graphql-python/graphene-django/issues/618
|
[
"question"
] |
ulkoart
| 2
|
scikit-learn-contrib/metric-learn
|
scikit-learn
| 173
|
Ensure that CalibratedClassifierCV works
|
Related to #168
scikit-learn's `CalibratedClassifierCV` should work with pairs classifiers:
We should ensure that it returns coherent results with our estimators (could be done in #168)
This could be finalized when `CalibratedClassifierCV` accept 3d arrays (see issue https://github.com/scikit-learn/scikit-learn/issues/13077)
TODO:
- [ ] See comment https://github.com/metric-learn/metric-learn/pull/168#discussion_r265468473
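For reference, a sketch of the usage this issue aims to make work (it currently fails because `CalibratedClassifierCV` rejects the 3d pairs array, which is exactly the blocker referenced above; the random data is purely illustrative):
```python
import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from metric_learn import ITML  # a pairs classifier

pairs = np.random.rand(40, 2, 3)          # (n_pairs, 2, n_features)
y_pairs = np.random.choice([-1, 1], 40)   # +1 similar, -1 dissimilar

calibrated = CalibratedClassifierCV(ITML(), cv=3)
calibrated.fit(pairs, y_pairs)            # currently raises on the 3d input
proba = calibrated.predict_proba(pairs)
```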
|
open
|
2019-02-22T11:06:16Z
|
2019-04-02T13:50:04Z
|
https://github.com/scikit-learn-contrib/metric-learn/issues/173
|
[] |
wdevazelhes
| 0
|
NVlabs/neuralangelo
|
computer-vision
| 209
|
Error wandb
|
I'm trying to run Neuralangelo with the test set "lego," but I haven't been able to get past the point where I invoke the command:
torchrun --nproc_per_node=${GPUS} train.py \
--logdir=logs/${GROUP}/${NAME} \
--config=${CONFIG} \
--show_pbar
This command throws an error for which I haven't been able to find a solution. I've tried changing many of the parameters in the project's files, but I still can't find a fix. Below is the error I'm encountering, in case anyone has a solution.
Thank you.
Error:
torchrun --nproc_per_node=${GPUS} train.py --logdir=logs/${GROUP}/${NAME} --config=${CONFIG} --show_pbar
(Setting affinity with NVML failed, skipping...)
[W Utils.hpp:135] Warning: Environment variable NCCL_ASYNC_ERROR_HANDLING is deprecated; use TORCH_NCCL_ASYNC_ERROR_HANDLING instead (function getCvarInt)
Training with 1 GPUs.
Using random seed 0
Make folder logs/example_group/example_name
* checkpoint:
* save_epoch: 9999999999
* save_iter: 5000
* save_latest_iter: 9999999999
* save_period: 9999999999
* strict_resume: True
* cudnn:
* benchmark: True
* deterministic: False
* data:
* name: dummy
* num_images: None
* num_workers: 4
* preload: True
* readjust:
* center: [0.0, 0.0, 0.0]
* scale: 1.0
* root: datasets/lego_ds2
* train:
* batch_size: 2
* image_size: [801, 801]
* subset: None
* type: projects.neuralangelo.data
* use_multi_epoch_loader: True
* val:
* batch_size: 2
* image_size: [300, 300]
* max_viz_samples: 16
* subset: 4
* image_save_iter: 9999999999
* inference_args:
* local_rank: 0
* logdir: logs/example_group/example_name
* logging_iter: 9999999999999
* max_epoch: 9999999999
* max_iter: 500000
* metrics_epoch: None
* metrics_iter: None
* model:
* appear_embed:
* dim: 4
* enabled: False
* background:
* enabled: True
* encoding:
* levels: 10
* type: fourier
* encoding_view:
* levels: 3
* type: spherical
* mlp:
* activ: relu
* activ_density: softplus
* activ_density_params:
* activ_params:
* hidden_dim: 256
* hidden_dim_rgb: 128
* num_layers: 8
* num_layers_rgb: 2
* skip: [4]
* skip_rgb: []
* view_dep: True
* white: False
* object:
* rgb:
* encoding_view:
* levels: 3
* type: spherical
* mlp:
* activ: relu_
* activ_params:
* hidden_dim: 256
* num_layers: 4
* skip: []
* weight_norm: True
* mode: idr
* s_var:
* anneal_end: 0.1
* init_val: 3.0
* sdf:
* encoding:
* coarse2fine:
* enabled: True
* init_active_level: 4
* step: 5000
* hashgrid:
* dict_size: 21
* dim: 4
* max_logres: 11
* min_logres: 5
* range: [-2, 2]
* levels: 16
* type: hashgrid
* gradient:
* mode: numerical
* taps: 4
* mlp:
* activ: softplus
* activ_params:
* beta: 100
* geometric_init: True
* hidden_dim: 256
* inside_out: False
* num_layers: 1
* out_bias: 0.5
* skip: []
* weight_norm: True
* render:
* num_sample_hierarchy: 4
* num_samples:
* background: 32
* coarse: 64
* fine: 16
* rand_rays: 512
* stratified: True
* type: projects.neuralangelo.model
* nvtx_profile: False
* optim:
* fused_opt: False
* params:
* lr: 0.001
* weight_decay: 0.01
* sched:
* gamma: 10.0
* iteration_mode: True
* step_size: 9999999999
* two_steps: [300000, 400000]
* type: two_steps_with_warmup
* warm_up_end: 5000
* type: AdamW
* pretrained_weight: None
* source_filename: projects/neuralangelo/configs/custom/lego.yaml
* speed_benchmark: False
* test_data:
* name: dummy
* num_workers: 0
* test:
* batch_size: 1
* is_lmdb: False
* roots: None
* type: imaginaire.datasets.images
* timeout_period: 9999999
* trainer:
* amp_config:
* backoff_factor: 0.5
* enabled: False
* growth_factor: 2.0
* growth_interval: 2000
* init_scale: 65536.0
* ddp_config:
* find_unused_parameters: False
* static_graph: True
* depth_vis_scale: 0.5
* ema_config:
* beta: 0.9999
* enabled: False
* load_ema_checkpoint: False
* start_iteration: 0
* grad_accum_iter: 1
* image_to_tensorboard: False
* init:
* gain: None
* type: none
* loss_weight:
* curvature: 0.0005
* eikonal: 0.1
* render: 1.0
* type: projects.neuralangelo.trainer
* validation_iter: 5000
* wandb_image_iter: 10000
* wandb_scalar_iter: 100
cudnn benchmark: True
cudnn deterministic: False
Setup trainer.
Using random seed 0
/home/miguel12/miniconda3/envs/neuralangelo/lib/python3.9/site-packages/torch/nn/utils/weight_norm.py:28: UserWarning: torch.nn.utils.weight_norm is deprecated in favor of torch.nn.utils.parametrizations.weight_norm.
warnings.warn("torch.nn.utils.weight_norm is deprecated in favor of torch.nn.utils.parametrizations.weight_norm.")
model parameter count: 99,705,900
Initialize model weights using type: none, gain: None
Using random seed 0
[rank0]:[W Utils.hpp:108] Warning: Environment variable NCCL_ASYNC_ERROR_HANDLING is deprecated; use TORCH_NCCL_ASYNC_ERROR_HANDLING instead (function getCvarString)
Allow TensorFloat32 operations on supported devices
Train dataset length: 100
Val dataset length: 4
Training from scratch.
Initialize wandb
[rank0]: Traceback (most recent call last):
[rank0]: File "/mnt/d/Documents/neuralangelo/train.py", line 104, in <module>
[rank0]: main()
[rank0]: File "/mnt/d/Documents/neuralangelo/train.py", line 85, in main
[rank0]: trainer.init_wandb(cfg,
[rank0]: File "/mnt/d/Documents/neuralangelo/imaginaire/trainers/base.py", line 269, in init_wandb
[rank0]: wandb.watch(self.model_module)
[rank0]: File "/home/miguel12/miniconda3/envs/neuralangelo/lib/python3.9/site-packages/wandb/sdk/wandb_watch.py", line 49, in watch
[rank0]: tel.feature.watch = True
[rank0]: File "/home/miguel12/miniconda3/envs/neuralangelo/lib/python3.9/site-packages/wandb/sdk/lib/telemetry.py", line 42, in __exit__
[rank0]: self._run._telemetry_callback(self._obj)
[rank0]: File "/home/miguel12/miniconda3/envs/neuralangelo/lib/python3.9/site-packages/wandb/sdk/wandb_run.py", line 799, in _telemetry_callback
[rank0]: self._telemetry_obj.MergeFrom(telem_obj)
[rank0]: AttributeError: 'Run' object has no attribute '_telemetry_obj'
E0822 21:49:57.518840 139941045491520 torch/distributed/elastic/multiprocessing/api.py:826] failed (exitcode: 1) local_rank: 0 (pid: 25214) of binary: /home/miguel12/miniconda3/envs/neuralangelo/bin/python
Traceback (most recent call last):
File "/home/miguel12/miniconda3/envs/neuralangelo/bin/torchrun", line 10, in <module>
sys.exit(main())
File "/home/miguel12/miniconda3/envs/neuralangelo/lib/python3.9/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper
return f(*args, **kwargs)
File "/home/miguel12/miniconda3/envs/neuralangelo/lib/python3.9/site-packages/torch/distributed/run.py", line 879, in main
run(args)
File "/home/miguel12/miniconda3/envs/neuralangelo/lib/python3.9/site-packages/torch/distributed/run.py", line 870, in run
elastic_launch(
File "/home/miguel12/miniconda3/envs/neuralangelo/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 132, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/miguel12/miniconda3/envs/neuralangelo/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 263, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
train.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2024-08-22_21:49:57
host : DESKTOP-Q0DS9I2.
rank : 0 (local_rank: 0)
exitcode : 1 (pid: 25214)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
|
open
|
2024-08-23T03:25:11Z
|
2024-08-26T03:19:45Z
|
https://github.com/NVlabs/neuralangelo/issues/209
|
[] |
P-UnKnow08
| 1
|
tqdm/tqdm
|
pandas
| 705
|
Publish objects.inv file for use with intersphinx
|
It's possible to link between Sphinx documentation sets by using the [intersphinx extension](http://www.sphinx-doc.org/en/master/usage/extensions/intersphinx.html), but when I try to do this with the tqdm docs, I get a file-not-found error for objects.inv.
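For context, a minimal sketch of what the consuming side looks like once the inventory is published (a hypothetical Sphinx `conf.py`; the docs URL is illustrative):
```python
# conf.py of a project that wants to cross-reference tqdm
extensions = ["sphinx.ext.intersphinx"]

# None means Sphinx fetches <base_url>/objects.inv automatically
intersphinx_mapping = {
    "tqdm": ("https://tqdm.github.io/tqdm/", None),  # illustrative URL
}
```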
|
open
|
2019-03-28T20:29:39Z
|
2019-04-22T21:11:36Z
|
https://github.com/tqdm/tqdm/issues/705
|
[
"p4-enhancement-future 🧨"
] |
tswast
| 1
|
ccxt/ccxt
|
api
| 25,151
|
submit sell orders fails in gemini since Jan 31
|
### Operating System
Debian
### Programming Languages
Python
### CCXT Version
4.4.52
### Description
Order submission, both buy and sell, worked on Jan 30 and before, but has not worked since Jan 31.
### Code
This call returns JSON, but the precision section in it is all null:
```python
markets = await exchange.loadMarkets(reload=True)
```
```json
"precision": {
    "amount": null,
    "price": null,
    "cost": null,
    "base": null,
    "quote": null
},
```
If I provide it as a parameter, it does not help either (`params={'precision': {'amount': 2}}`):
```
order_output = await self.get_exch().create_limit_sell_order(market, qty, price, params={ 'precision': { 'amount': 2 } } )
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/xxxxxxxx/lib/python3.11/site-packages/ccxt/async_support/base/exchange.py", line 1599, in create_limit_sell_order
    return await self.create_order(symbol, 'limit', 'sell', amount, price, params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/xxxxxxx/lib/python3.11/site-packages/ccxt/async_support/gemini.py", line 1484, in create_order
    amountString = self.amount_to_precision(symbol, amount)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/xxxxxxx/lib/python3.11/site-packages/ccxt/base/exchange.py", line 5407, in amount_to_precision
    result = self.decimal_to_precision(amount, TRUNCATE, market['precision']['amount'], self.precisionMode, self.paddingMode)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/xxxxxx/lib/python3.11/site-packages/ccxt/base/decimal_to_precision.py", line 37, in decimal_to_precision
    assert precision is not None
           ^^^^^^^^^^^^^^^^^^^^^
AssertionError
```
|
closed
|
2025-02-02T19:35:14Z
|
2025-02-19T21:39:15Z
|
https://github.com/ccxt/ccxt/issues/25151
|
[] |
vigorIv2
| 3
|
OFA-Sys/Chinese-CLIP
|
nlp
| 105
|
How is the image-to-image retrieval in the demo implemented?
|
I want to implement image-to-image retrieval, but I don't know how to go about it.
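For illustration, image-to-image retrieval can be built from image embeddings alone; a sketch assuming the `cn_clip` loading API shown in the project README (the file paths are placeholders):
```python
import torch
from PIL import Image
from cn_clip.clip import load_from_name

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = load_from_name("ViT-B-16", device=device)
model.eval()

def embed(path: str) -> torch.Tensor:
    image = preprocess(Image.open(path)).unsqueeze(0).to(device)
    with torch.no_grad():
        feat = model.encode_image(image)
    return feat / feat.norm(dim=-1, keepdim=True)  # L2-normalize

query = embed("query.jpg")                               # placeholder paths
gallery = torch.cat([embed(p) for p in ["a.jpg", "b.jpg"]])
scores = (query @ gallery.T).squeeze(0)                  # cosine similarities
best = scores.argmax().item()                            # closest gallery image
```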
|
closed
|
2023-05-11T10:25:08Z
|
2023-11-27T07:26:25Z
|
https://github.com/OFA-Sys/Chinese-CLIP/issues/105
|
[] |
yourfathermyson
| 1
|
explosion/spaCy
|
deep-learning
| 13,536
|
Fail to call Python code using Matlab
|
## How to reproduce the behaviour
Python code: File name: test_spacy.py
```
import spacy
nlp = spacy.load("en_core_web_lg")
doc = nlp("This is a sentence.")
```
No error occurs when running the Python code in the PyCharm IDE.
MATLAB code for calling the Python code:
```
pyenv;
py.importlib.import_module('test_spacy');
path_add = fileparts(which('test_spacy.py'));
if count(py.sys.path, path_add) == 0
insert(py.sys.path, int64(0), path_add);
end
```
An error occurred while running the MATLAB code: Error using numpy_ops>init thinc.backends.numpy_ops
Python Error: ValueError: numpy.dtype size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject
The MATLAB and Python code are all in the same folder. I created a Python project using PyCharm, then created the MATLAB file and saved it in the Python project folder.
## Your Environment
* Operating System: Windows 11
* Python Version Used: 3.12.3
* spaCy Version Used: 3.7.5
* Environment Information: C:\Users\cse_s\AppData\Local\Programs\Python\Python312\Lib\site-packages\spacy
|
open
|
2024-06-20T16:19:16Z
|
2024-06-27T12:50:22Z
|
https://github.com/explosion/spaCy/issues/13536
|
[] |
Saswati-Project
| 1
|
unit8co/darts
|
data-science
| 2,193
|
[QUESTION] Croston and add_encoders
|
Hi,
I can't get the add_encoders functionality (for Croston) to work; see below.

`temporal_cv_lf` basically just calls:
```python
m = Croston(**params)
m.historical_forecasts(series=series,
                       forecast_horizon=forecast_horizon,
                       last_points_only=False,
                       overlap_end=False,
                       stride=1,
                       retrain=True)
```
Any help would be appreciated
|
closed
|
2024-01-27T11:07:01Z
|
2024-09-02T09:48:21Z
|
https://github.com/unit8co/darts/issues/2193
|
[
"bug",
"good first issue",
"pr_welcome"
] |
sebros-sandvik
| 5
|
axnsan12/drf-yasg
|
django
| 877
|
Showing read_only fields in the serializer in the POST /swagger template (ForeignKeyFields)
|
```python
class MatchSerializer(serializers.ModelSerializer):
    winner = AthleteWinnerSerializer(read_only=True)

    class Meta:
        model = Match
        fields = ['uid', 'round_one', 'round_two', 'round_three', 'winner']
```
In this example, the read-only `winner` field is still generated in the body of the POST API, as in the example below:
```json
{
    "round_one": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
    "round_two": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
    "round_three": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
    "winner": {
        "username": "string"
    }
}
```
|
open
|
2023-12-03T13:37:33Z
|
2025-03-09T10:27:45Z
|
https://github.com/axnsan12/drf-yasg/issues/877
|
[
"question",
"unanswered"
] |
SamiTalebian
| 1
|
ckan/ckan
|
api
| 8,429
|
UI Revamp: Displaying facet accordion with active options as expanded by default
|
Hello!
By request of the designers of the UI revamp, they would like the facets to appear as accordions. They have requested by default if there are no active options within a facet, it should be collapsed, otherwise it should be expanded.
Screenshots for context:
Default, facets with no active item

Default, facets with 1 or more active items

I have a working version of this that is probably not efficient and uses Jinja filters right now.
I'm using `{% set active_items = items|selectattr('active')|list|length >= 1 %}` to see if there are any active items within a facet and if `active_items` is empty, I'm collapsing that facet accordion. For full code, see [facet_list.html](https://github.com/aleeexgreeen/ckan/blob/7180b7140c3862cd8f3700222b1e5ddfcb346244/ckan/templates/snippets/facet_list.html)
CKAN can currently indicate whether an item in a facet is active (`item.active`, inside the `{% for item in items %}` loop), but ultimately I need to know whether there are any active items in the facet before the for loop, so I can collapse and expand the accordions accordingly (🤣), and I'm wondering what is the best way to do so?
|
open
|
2024-09-11T17:15:12Z
|
2024-09-16T00:04:23Z
|
https://github.com/ckan/ckan/issues/8429
|
[] |
aleeexgreeen
| 2
|
tensorflow/tensor2tensor
|
deep-learning
| 1,735
|
Evaluation all X steps
|
### Description
Hi, I'm trying to run an evaluation every X steps.
Preferably keeping only the best checkpoints, e.g. save the best 5.
I found the export_saved_model switch but it doesn't work (since loss is only present in a variable; that is currently not the problem, I think I can solve it, but I want the eval test to run more often).
Currently I can only get one evaluation at the beginning and one at the end, not every X steps; only the checkpoint saving happens in between.
the call looks like this:
```
t2t_trainer.py \
    --model=transformer \
    --hparams_set=transformer_librispeech \
    --problem=librispeech_clean_small \
    --train_steps=2000 \
    --eval_steps=20 \
    --local_eval_frequency=100 \
    --data_dir=/content/t2t_records \
    --output_dir=/content/current_training4 \
    --export_saved_model=True
```
### Environment information
```
OS: Google Colab - Python3 GPU
$ pip3 install -q -U tensor2tensor
$ pip3 install -q matplotlib
$ pip3 install -q gast==0.2.2
$ apt-get install sox
$ apt-get install libsox-fmt-mp3
$ python -V
# Python 3.6.8
```
|
closed
|
2019-11-04T00:14:54Z
|
2019-11-04T19:58:24Z
|
https://github.com/tensorflow/tensor2tensor/issues/1735
|
[] |
DevZiegler
| 1
|
dunossauro/fastapi-do-zero
|
sqlalchemy
| 253
|
Replace Poetry + Pyenv + pipx with UV
|
I'd rather not, but the course is for you all, and you want it!
:sob:
## Tasks in the text
- [ ] Lesson 01: Complex, the whole lesson needs re-evaluating
- [ ] Lesson 04: Mentions in installation steps and the lock file
- [ ] poetry add sqlalchemy
- [ ] poetry add pydantic-settings
- [ ] poetry add alembic
- [ ] poetry.lock (2x)
- [ ] Lesson 06:
- [ ] poetry add pyjwt "pwdlib[argon2]"
- [ ] Lesson 08:
- [ ] poetry add --group dev factory-boy
- [ ] poetry add --group dev freezegun
- [ ] Lesson 10:
- [ ] `poetry run` defines the command that will be executed in the virtual environment created by Poetry.
- [ ] poetry add "psycopg[binary]"
- [ ] docker exec -it fastzeroapp poetry run alembic upgrade head
- [ ] poetry run alembic upgrade head
- [ ] poetry run uvicorn --host 0.0.0.0 --port 8000 fast_zero.app:app
- [ ] `poetry run alembic upgrade head`: runs the database migrations up to the latest version.
- [ ] `poetry run uvicorn --host 0.0.0.0 --port 8000 fast_zero.app:app`: starts the application. This is the command that would normally be in the Dockerfile's `CMD`, but it is now included in the `entrypoint` to ensure the migrations run before the server starts.
- [ ] poetry add --group dev testcontainers
- [ ] Lesson 11: The whole CI is built on top of Poetry; this needs a deeper analysis, not just simple matches
- [ ] Lesson 12:
- [ ] flyctl ssh console -a fastzeroapp -C "poetry run alembic upgrade head"
- [ ] flyctl ssh console -a fastzeroapp -C "poetry run alembic upgrade head"
- [ ] The `ssh console` subcommand gives us access to the container's shell. That is why we had to specify the name of our application with the `-a` flag (we could also access the database console). The `-C` flag is the command we want to run in the container's console. In this case, the full command means: "Access the fastzeroapp console via SSH and run the command `poetry run alembic upgrade head`".
## In templates
- [ ] Dockerfile (templates/dockerfile.md)
- [ ] Poetry (templates/poetry.md)
## In appendices
- [ ] Appendix A
- [ ] Appendix B
## In lesson code
- [ ] Lesson 01
- [ ] Lesson 02
- [ ] Lesson 03
- [ ] Lesson 04
- [ ] Lesson 05
- [ ] Lesson 06
- [ ] Lesson 07
- [ ] Lesson 08
- [ ] Lesson 09
- [ ] Lesson 10
- [ ] Lesson 11
- [ ] Lesson 12
## In the project structure
- [ ] tasks.py - Rework everything for uv
|
closed
|
2024-10-05T02:00:27Z
|
2024-10-05T18:00:30Z
|
https://github.com/dunossauro/fastapi-do-zero/issues/253
|
[] |
dunossauro
| 1
|
huggingface/transformers
|
python
| 36,642
|
Is it correct that the repetition penalty is applied to the input_ids encompassing all inputs and outputs, rather than solely on the generated tokens?
|
### System Info
- `transformers` version: 4.44.2
- Platform: Linux-3.10.0-1160.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.10.14
- Huggingface_hub version: 0.26.2
- Safetensors version: 0.4.5
- Accelerate version: 1.0.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1+cu124 (True)
### Who can help?
@gante
### Information
- [x] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
https://github.com/huggingface/transformers/blob/c4d4e8bdbd25d9463d41de6398940329c89b7fb6/src/transformers/generation_utils.py#L101
```python
def enforce_repetition_penalty_(self, lprobs, batch_size, num_beams, prev_output_tokens, repetition_penalty):
    """repetition penalty (from CTRL paper https://arxiv.org/abs/1909.05858)."""
    for i in range(batch_size * num_beams):
        for previous_token in set(prev_output_tokens[i].tolist()):
            # if score < 0 then repetition penalty has to multiplied to reduce the previous token probability
            if lprobs[i, previous_token] < 0:
                lprobs[i, previous_token] *= repetition_penalty
            else:
                lprobs[i, previous_token] /= repetition_penalty

def postprocess_next_token_scores(
    self,
    scores,
    input_ids,
    no_repeat_ngram_size,
    bad_words_ids,
    cur_len,
    min_length,
    max_length,
    eos_token_id,
    repetition_penalty,
    batch_size,
    num_beams,
):
    # repetition penalty (from CTRL paper https://arxiv.org/abs/1909.05858)
    if repetition_penalty != 1.0:
        self.enforce_repetition_penalty_(
            scores, batch_size, num_beams, input_ids, repetition_penalty,
        )
    ...
```
### Expected behavior
Maybe it should just rely on generated tokens?
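For illustration only, a hedged sketch of the "generated tokens only" variant (the `prompt_len` slicing below is an assumption for the sketch, not existing transformers behavior):
```python
def penalty_on_generated_only(lprobs, prev_output_tokens, prompt_len, repetition_penalty):
    # same CTRL-style penalty, but skip tokens that came from the prompt
    for i in range(prev_output_tokens.size(0)):
        generated = prev_output_tokens[i, prompt_len:]  # drop the prompt prefix
        for previous_token in set(generated.tolist()):
            if lprobs[i, previous_token] < 0:
                lprobs[i, previous_token] *= repetition_penalty
            else:
                lprobs[i, previous_token] /= repetition_penalty
```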
|
open
|
2025-03-11T07:28:12Z
|
2025-03-11T07:28:12Z
|
https://github.com/huggingface/transformers/issues/36642
|
[
"bug"
] |
Ostrichpie818
| 0
|
jupyterlab/jupyter-ai
|
jupyter
| 907
|
support multiple contexts within embeddings (configurable paths to embedding database).
|
### Problem
The embedding interface (`/learn`, `/ask`) is really nice and remarkably useful. Currently, though, it seems that a user can have only one active context at a time. This is fine for small contexts where I can, say, clone some GitHub project, start with `/learn` on the project's docs, and then `/learn -d` when I'm done. But larger corpora can take a while to tokenize. If we tokenize something we no longer think is useful, we simply have to unlearn it and start from scratch. Lastly, one might imagine wanting to persist the vector DB across sessions, or move between hubs, without re-learning from scratch.
### Proposed Solution
Allow `/learn` and `/ask` to take an optional `--db` argument or something like that, specifying a (possibly persistent) path to where the vector database would live. by altering the argument, a user could optionally have distinct contexts.
### Additional context
I'm still new to this feature and experimenting with it, so perhaps there are good reasons this idea does not make sense.
|
open
|
2024-07-20T02:42:01Z
|
2024-07-24T20:45:26Z
|
https://github.com/jupyterlab/jupyter-ai/issues/907
|
[
"enhancement",
"scope:RAG"
] |
cboettig
| 0
|
tensorflow/tensor2tensor
|
machine-learning
| 956
|
cloud_tpu fails while another tpu is being created (ip = default)
|
When a TPU is being created, it does not have an IP yet. If you try to create another one at the same time, it will fail, as cloud_tpu will try to parse 'default' as a valid IP.
INFO:tensorflow:VM eyal-vm5-vm already exists, reusing.
Creating TPU instance eyal-vm5-tpu
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File "/usr/lib/python3.5/multiprocessing/pool.py", line 119, in worker
result = (True, func(*args, **kwds))
File "/usr/local/lib/python3.5/dist-packages/tensor2tensor/utils/cloud_tpu.py", line 285, in create_tpu
tpu_ip = unique_tpu_ip(tpu_names_and_ips)
File "/usr/local/lib/python3.5/dist-packages/tensor2tensor/utils/cloud_tpu.py", line 270, in unique_tpu_ip
inuse = [el[1].split(".")[2] for el in tpu_names_and_ips]
File "/usr/local/lib/python3.5/dist-packages/tensor2tensor/utils/cloud_tpu.py", line 270, in <listcomp>
inuse = [el[1].split(".")[2] for el in tpu_names_and_ips]
IndexError: list index out of range
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/bin/t2t-trainer", line 32, in <module>
tf.app.run()
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/platform/app.py", line 125, in run
_sys.exit(main(argv))
File "/usr/local/bin/t2t-trainer", line 28, in main
t2t_trainer.main(argv)
File "/usr/local/lib/python3.5/dist-packages/tensor2tensor/bin/t2t_trainer.py", line 354, in main
with maybe_cloud_tpu():
File "/usr/lib/python3.5/contextlib.py", line 59, in __enter__
return next(self.gen)
File "/usr/local/lib/python3.5/dist-packages/tensor2tensor/bin/t2t_trainer.py", line 328, in maybe_cloud_tpu
skip_confirmation=FLAGS.cloud_skip_confirmation) as tpu_master:
File "/usr/lib/python3.5/contextlib.py", line 59, in __enter__
return next(self.gen)
File "/usr/local/lib/python3.5/dist-packages/tensor2tensor/utils/cloud_tpu.py", line 143, in cloud_tpu
skip_confirmation=skip_confirmation)
File "/usr/local/lib/python3.5/dist-packages/tensor2tensor/utils/cloud_tpu.py", line 386, in create_vm_tpu_pair
tpu_ip = tpu_res.get()
File "/usr/lib/python3.5/multiprocessing/pool.py", line 608, in get
raise self._value
IndexError: list index out of range
|
open
|
2018-07-25T01:28:54Z
|
2018-07-25T01:29:04Z
|
https://github.com/tensorflow/tensor2tensor/issues/956
|
[] |
eyaler
| 0
|
ultralytics/ultralytics
|
pytorch
| 18,688
|
Not all parameters in YOLO-World pretrained weights can be loaded
|
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
Other
### Bug
When I initialize a YOLOv8s-World-v2 model from a .yaml configuration file and load pretrained weights, it seems that three parameters cannot be loaded. The code and outputs are as follows:
```python
from ultralytics import YOLOWorld
model_path = './yolov8s-worldv2.pt'
model_cfg = "./yolov8s-worldv2.yaml"
model = YOLOWorld(model_cfg).load(model_path)
# outputs
Transferred 409/412 items from pretrained weights
```
The yolov8s-worldv2.yaml configuration file is copied from `ultralytics/ultralytics/cfg/models/v8/yolov8-worldv2.yaml` without any modifications except for the file name.
And I found a solution: in `ultralytics/ultralytics/nn/modules/block.py`, modify `class BNContrastiveHead(nn.Module)` as follows:
```python
# self.bias = nn.Parameter(torch.tensor([-10.0]))
self.bias = nn.Parameter(-10.0 * torch.ones([]))
```
Rerun the above code and the output is:
```python
Transferred 412/412 items from pretrained weights
```
### Environment
```
Ultralytics 8.3.59 🚀 Python-3.10.14 torch-2.2.2+cu121 CUDA:0 (NVIDIA GeForce RTX 3090, 24259MiB)
Setup complete ✅ (32 CPUs, 62.8 GB RAM, 686.0/915.3 GB disk)
OS Linux-5.15.0-130-generic-x86_64-with-glibc2.31
Environment Linux
Python 3.10.14
Install git
RAM 62.79 GB
Disk 686.0/915.3 GB
CPU Hygon C86 3285 8-core Processor
CPU count 32
GPU NVIDIA GeForce RTX 3090, 24259MiB
GPU count 1
CUDA 12.1
numpy ✅ 1.26.4>=1.23.0
numpy ✅ 1.26.4<2.0.0; sys_platform == "darwin"
matplotlib ✅ 3.10.0>=3.3.0
opencv-python ✅ 4.10.0.84>=4.6.0
pillow ✅ 10.3.0>=7.1.2
pyyaml ✅ 6.0.1>=5.3.1
requests ✅ 2.31.0>=2.23.0
scipy ✅ 1.13.0>=1.4.1
torch ✅ 2.2.2+cu121>=1.8.0
torch ✅ 2.2.2+cu121!=2.4.0,>=1.8.0; sys_platform == "win32"
torchvision ✅ 0.17.2+cu121>=0.9.0
tqdm ✅ 4.66.4>=4.64.0
psutil ✅ 5.9.8
py-cpuinfo ✅ 9.0.0
pandas ✅ 2.2.3>=1.1.4
seaborn ✅ 0.13.2>=0.11.0
ultralytics-thop ✅ 2.0.13>=2.0.0
```
### Minimal Reproducible Example
```python
from ultralytics import YOLOWorld
model_path = './yolov8s-worldv2.pt'
model_cfg = "./yolov8s-worldv2.yaml"
model = YOLOWorld(model_cfg).load(model_path)
# outputs
Transferred 409/412 items from pretrained weights
```
The yolov8s-worldv2.yaml configuration file :
```
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLOv8-World-v2 object detection model with P3-P5 outputs. For details see https://docs.ultralytics.com/tasks/detect
# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
# [depth, width, max_channels]
n: [0.33, 0.25, 1024] # YOLOv8n summary: 225 layers, 3157200 parameters, 3157184 gradients, 8.9 GFLOPs
s: [0.33, 0.50, 1024] # YOLOv8s summary: 225 layers, 11166560 parameters, 11166544 gradients, 28.8 GFLOPs
m: [0.67, 0.75, 768] # YOLOv8m summary: 295 layers, 25902640 parameters, 25902624 gradients, 79.3 GFLOPs
l: [1.00, 1.00, 512] # YOLOv8l summary: 365 layers, 43691520 parameters, 43691504 gradients, 165.7 GFLOPs
x: [1.00, 1.25, 512] # YOLOv8x summary: 365 layers, 68229648 parameters, 68229632 gradients, 258.5 GFLOPs
# YOLOv8.0n backbone
backbone:
# [from, repeats, module, args]
- [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
- [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
- [-1, 3, C2f, [128, True]]
- [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
- [-1, 6, C2f, [256, True]]
- [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
- [-1, 6, C2f, [512, True]]
- [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
- [-1, 3, C2f, [1024, True]]
- [-1, 1, SPPF, [1024, 5]] # 9
# YOLOv8.0n head
head:
- [-1, 1, nn.Upsample, [None, 2, "nearest"]]
- [[-1, 6], 1, Concat, [1]] # cat backbone P4
- [-1, 3, C2fAttn, [512, 256, 8]] # 12
- [-1, 1, nn.Upsample, [None, 2, "nearest"]]
- [[-1, 4], 1, Concat, [1]] # cat backbone P3
- [-1, 3, C2fAttn, [256, 128, 4]] # 15 (P3/8-small)
- [15, 1, Conv, [256, 3, 2]]
- [[-1, 12], 1, Concat, [1]] # cat head P4
- [-1, 3, C2fAttn, [512, 256, 8]] # 18 (P4/16-medium)
- [-1, 1, Conv, [512, 3, 2]]
- [[-1, 9], 1, Concat, [1]] # cat head P5
- [-1, 3, C2fAttn, [1024, 512, 16]] # 21 (P5/32-large)
- [[15, 18, 21], 1, WorldDetect, [nc, 512, True]] # Detect(P3, P4, P5)
```
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR!
|
open
|
2025-01-15T02:10:37Z
|
2025-01-15T04:34:17Z
|
https://github.com/ultralytics/ultralytics/issues/18688
|
[
"bug",
"detect"
] |
chongkuiqi
| 2
|
vitalik/django-ninja
|
rest-api
| 797
|
[BUG] Nested routers will overwrite the response if they are on the same path.
|
**Describe the bug**
If I define the response schema for the API path `/test` on router 1 as ATest,
and the response schema for the same path `/test` with the prefix `/api` on router 2 as BTest,
and then register both routers,
ATest shows up as BTest in the API Swagger docs.
My guess is that if the path is the same, the response is overwritten, but I don't know how to fix that.
I'm reporting this as a possible Django Ninja issue.
```python
# ---> API Swagger Response BTest (Error!)
@test_router.get(
"/test",
response=ATest,
)
def test_api(request):
...
# ---> API Swagger Response BTest
@api_test_router.get(
"/test",
response=BTest,
)
def api_test_api(request):
...
api.add_router("", test_router, tags=["test"])
api.add_router("/api", api_test_router, tags=["api_test"])
```
**Versions (please complete the following information):**
- Python version: 3.11.1
- Django version: 4.2.1
- Django-Ninja version: 0.22.2
- Pydantic version: 1.10.2
|
closed
|
2023-07-19T02:23:53Z
|
2023-07-29T06:18:04Z
|
https://github.com/vitalik/django-ninja/issues/797
|
[] |
ndjman7
| 3
|
pyeve/eve
|
flask
| 1,011
|
No primary available for writes
|
I have trouble with writes to a Mongo cluster in `Eve 0.7.2`.
On `Eve 0.6.4` it worked well, but after I updated, this error occurs:
```
Traceback (most recent call last):
File "/var/www/articles-api/env/lib/python3.4/site-packages/eve/methods/patch.py", line 205, in patch_internal
resource, object_id, updates, original)
File "/var/www/articles-api/env/lib/python3.4/site-packages/eve/io/mongo/mongo.py", line 538, in update
return self._change_request(resource, id_, {"$set": updates}, original)
File "/var/www/articles-api/env/lib/python3.4/site-packages/eve/io/mongo/mongo.py", line 474, in _change_request
coll.update_one(filter_, changes)
File "/var/www/articles-api/env/lib/python3.4/site-packages/pymongo/collection.py", line 890, in update_one
with self._socket_for_writes() as sock_info:
File "/opt/rh/rh-python34/root/usr/lib64/python3.4/contextlib.py", line 59, in __enter__
return next(self.gen)
File "/var/www/articles-api/env/lib/python3.4/site-packages/pymongo/mongo_client.py", line 823, in _get_socket
server = self._get_topology().select_server(selector)
File "/var/www/articles-api/env/lib/python3.4/site-packages/pymongo/topology.py", line 214, in select_server
address))
File "/var/www/articles-api/env/lib/python3.4/site-packages/pymongo/topology.py", line 189, in select_servers
self._error_message(selector))
pymongo.errors.ServerSelectionTimeoutError: No primary available for writes
```
My connection string is like this:
`mongodb://articles:xxxx@10.155.32.110:27017,10.155.32.111:27017/articles?replicaSet=rs0&readPreference=nearest`
I also checked this question, but I guess that is not my case:
http://stackoverflow.com/questions/26091481/write-to-mongo-db-replica-set-take-care-of-primary-when-writing
pip freeze before update:
```
$ pip freeze
Eve==0.6.4
Flask==0.10.1
Flask-PyMongo==0.4.1
idna==2.1
itsdangerous==0.24
packaging==16.8
py==1.4.31
pyasn1==0.1.9
pycparser==2.14
PyJWT==1.4.0
pylint==1.4.4
pymongo==3.2.2
pyparsing==2.1.10
python-slugify==1.2.1
pytz==2016.3
PyYAML==3.12
requests==2.10.0
simplejson==3.8.2
six==1.10.0
tzlocal==1.2.2
Unidecode==0.4.20
urllib3==1.15.1
Werkzeug==0.11.3
wrapt==1.10.7
```
After update:
```
$ pip freeze
Eve==0.7.2
Flask==0.12
Flask-PyMongo==0.4.1
idna==2.5
itsdangerous==0.24
Jinja2==2.9.6
packaging==16.8
py==1.4.33
pyasn1==0.2.3
pycparser==2.17
PyJWT==1.4.2
pymongo==3.4.0
pyparsing==2.2.0
PyYAML==3.12
requests==2.11.1
simplejson==3.10.0
six==1.10.0
urllib3==1.20
Werkzeug==0.11.15
```
|
closed
|
2017-04-18T16:38:41Z
|
2018-05-18T17:19:47Z
|
https://github.com/pyeve/eve/issues/1011
|
[
"stale"
] |
luiscoms
| 4
|
AirtestProject/Airtest
|
automation
| 766
|
Command line / real iOS device / json.decoder.JSONDecodeError when running a script
|
**Detailed description of the bug:**
```
======================================================================
ERROR: setUpClass (__main__.CustomAirtestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/xxx/anaconda3/envs/airtest_py3/lib/python3.7/site-packages/airtest/cli/runner.py", line 28, in setUpClass
setup_by_args(args)
File "/Users/xxx/anaconda3/envs/airtest_py3/lib/python3.7/site-packages/airtest/cli/runner.py", line 130, in setup_by_args
auto_setup(dirpath, devices, args.log, project_root, args.compress)
File "/Users/xxx/anaconda3/envs/airtest_py3/lib/python3.7/site-packages/airtest/core/api.py", line 113, in auto_setup
connect_device(dev)
File "/Users/xxx/anaconda3/envs/airtest_py3/lib/python3.7/site-packages/airtest/core/api.py", line 63, in connect_device
dev = init_device(platform, uuid, **params)
File "/Users/xxx/anaconda3/envs/airtest_py3/lib/python3.7/site-packages/airtest/core/api.py", line 33, in init_device
dev = cls(uuid, **kwargs)
File "/Users/xxx/anaconda3/envs/airtest_py3/lib/python3.7/site-packages/airtest/core/ios/ios.py", line 86, in __init__
self.rotation_watcher = RotationWatcher(self)
File "/Users/xxx/anaconda3/envs/airtest_py3/lib/python3.7/site-packages/airtest/core/ios/rotation.py", line 22, in __init__
self.session = iosHandle.session
File "/Users/xxx/anaconda3/envs/airtest_py3/lib/python3.7/site-packages/airtest/core/ios/ios.py", line 101, in session
self.defaultSession = self.driver.session()
File "/Users/xxx/anaconda3/envs/airtest_py3/lib/python3.7/site-packages/wda/__init__.py", line 252, in session
sid = self.status()['sessionId']
File "/Users/xxx/anaconda3/envs/airtest_py3/lib/python3.7/site-packages/wda/__init__.py", line 191, in status
res = self.http.get('status')
File "/Users/xxx/anaconda3/envs/airtest_py3/lib/python3.7/site-packages/wda/__init__.py", line 101, in fetch
return self._fetch_no_alert(method, url, data)
File "/Users/xxx/anaconda3/envs/airtest_py3/lib/python3.7/site-packages/wda/__init__.py", line 107, in _fetch_no_alert
return httpdo(target_url, method, data)
File "/Users/xxx/anaconda3/envs/airtest_py3/lib/python3.7/site-packages/wda/__init__.py", line 77, in httpdo
retjson = response.json()
File "/Users/xxx/anaconda3/envs/airtest_py3/lib/python3.7/site-packages/requests/models.py", line 898, in json
return complexjson.loads(self.text, **kwargs)
File "/Users/xxx/anaconda3/envs/airtest_py3/lib/python3.7/json/__init__.py", line 348, in loads
return _default_decoder.decode(s)
File "/Users/xxx/anaconda3/envs/airtest_py3/lib/python3.7/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/Users/xxx/anaconda3/envs/airtest_py3/lib/python3.7/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
----------------------------------------------------------------------
Ran 0 tests in 0.340s
FAILED (errors=1)
```
**Steps to reproduce:**
Followed [5.3 Introduction to the Airtest launcher (trial)](http://airtest.netease.com/docs/cn/5_airtest_framework/3_launcherpy.html?highlight=%E5%90%AF%E5%8A%A8%E5%99%A8)
```
test.run_air(os.path.join(PROJECT_PATH, 'testcases/'), run_device=['ios:///http://127.0.0.1:8100'])
```
**Python version:** `python3.x`
**Operating system:** `macOS Mojave 10.14.6`
**Device information:**
- Phone model: iPhone 6 Plus
- iOS version: 11.1.1
**Other relevant information**
(e.g. fails on Linux Ubuntu 16.04 but runs fine on Windows, etc.)
- AirtestIDE version: 1.2.0
- With the current OS, phone, and script, everything runs fine in AirtestIDE (system Python 2.7), but launching from the command line fails as above (Python 3.7.7). The same phone, script, and command-line launch also run fine on another macOS machine (same OS version) with Python 3.6.9.
**Thanks for your help**
|
closed
|
2020-07-02T13:41:55Z
|
2020-07-10T08:56:45Z
|
https://github.com/AirtestProject/Airtest/issues/766
|
[] |
IcyW
| 1