| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
Nemo2011/bilibili-api
|
api
| 241
|
[Question] The comment module cannot scrape all comments
|
**Python version:** 3.8
**Module version:** 15.3.1
**Environment:** Windows
---
Scraping directly with the example from the documentation, I tried several different videos and each time only around 200-odd comments were reported in total, even though the actual comment counts are all higher than that.
Also, only part of the comments are retrieved. Has the API been updated so that this no longer works?



My code is identical to the example; I only added writing the output to a text file.
|
closed
|
2023-03-20T05:29:33Z
|
2023-03-22T13:04:59Z
|
https://github.com/Nemo2011/bilibili-api/issues/241
|
[
"question"
] |
andoninari
| 4
|
brightmart/text_classification
|
nlp
| 2
|
sess.run() blocks
|
Hello! I am new to TensorFlow, and when I run your TextCNN model I hit an issue: `sess.run()` blocks.
I only get the print output before the line `curr_loss,curr_acc,_=sess.run([textCNN.loss_val,textCNN.accuracy,textCNN.train_op],feed_dict=feed_dict)`, and then the program hangs. I have already made sure the input data exists, but I cannot figure it out.
Hope you can give me an answer; thanks for your patience.
|
closed
|
2017-07-16T15:11:14Z
|
2017-07-17T01:13:17Z
|
https://github.com/brightmart/text_classification/issues/2
|
[] |
scutzck033
| 2
|
kaliiiiiiiiii/Selenium-Driverless
|
web-scraping
| 160
|
[TODO] serve docs
|
GitHub Pages seems to be active,
but I don't see the result at
[kaliiiiiiiiii.github.io/Selenium-Driverless](https://kaliiiiiiiiii.github.io/Selenium-Driverless/)
Where is it?
The link should be in the "About" section of
[github.com/kaliiiiiiiiii/Selenium-Driverless](https://github.com/kaliiiiiiiiii/Selenium-Driverless)
Also, the GitHub Pages action currently runs Jekyll,
but [docs/](https://github.com/kaliiiiiiiiii/Selenium-Driverless/tree/master/docs) looks like static HTML files generated by [sphinx](https://www.sphinx-doc.org/en/master/).
[pypi.org/project/selenium-driverless](https://pypi.org/project/selenium-driverless/)
uses the same URL for homepage, docs, and source.
|
closed
|
2024-02-01T13:37:43Z
|
2024-02-02T10:02:57Z
|
https://github.com/kaliiiiiiiiii/Selenium-Driverless/issues/160
|
[
"documentation"
] |
milahu
| 4
|
Buuntu/fastapi-react
|
sqlalchemy
| 86
|
[Feature Request] Add Storybook support
|
Any thoughts on adding [Storybook](https://storybook.js.org/) support as part of a development workflow?
|
closed
|
2020-07-15T13:33:48Z
|
2020-07-24T22:00:55Z
|
https://github.com/Buuntu/fastapi-react/issues/86
|
[] |
inactivist
| 3
|
huggingface/datasets
|
tensorflow
| 7,196
|
concatenate_datasets does not preserve shuffling state
|
### Describe the bug
After concatenating iterable datasets, the shuffling state is destroyed, similar to #7156.
This means concatenation can't be used to resolve uneven numbers of samples across devices when using iterable datasets in a distributed setting, as discussed in #6623.
I also noticed that the number of shards is the same after concatenation, which I found surprising, but I don't understand the internals well enough to know whether it actually is surprising or not.
### Steps to reproduce the bug
```python
import datasets
import torch.utils.data
def gen(shards):
    yield {"shards": shards}


def main():
    dataset1 = datasets.IterableDataset.from_generator(
        gen, gen_kwargs={"shards": list(range(25))}  # TODO: how to understand this?
    )
    dataset2 = datasets.IterableDataset.from_generator(
        gen, gen_kwargs={"shards": list(range(25, 50))}  # TODO: how to understand this?
    )

    dataset1 = dataset1.shuffle(buffer_size=1)
    dataset2 = dataset2.shuffle(buffer_size=1)

    print(dataset1.n_shards)
    print(dataset2.n_shards)

    dataset = datasets.concatenate_datasets([dataset1, dataset2])
    print(dataset.n_shards)

    # dataset = dataset1
    dataloader = torch.utils.data.DataLoader(
        dataset,
        batch_size=8,
        num_workers=0,
    )

    for i, batch in enumerate(dataloader):
        print(batch)

    print("\nNew epoch")
    dataset = dataset.set_epoch(1)

    for i, batch in enumerate(dataloader):
        print(batch)


if __name__ == "__main__":
    main()
```
### Expected behavior
Shuffling state should be preserved
### Environment info
Latest datasets
|
open
|
2024-10-03T14:30:38Z
|
2025-03-18T10:56:47Z
|
https://github.com/huggingface/datasets/issues/7196
|
[] |
alex-hh
| 1
|
explosion/spaCy
|
nlp
| 13,252
|
Vocab Issue
|
<!-- NOTE: For questions or install related issues, please open a Discussion instead. -->
## How to reproduce the behaviour
<!-- Include a code example or the steps that led to the problem. Please try to be as specific as possible. -->

I am trying to find a word in the vocab, testing the example provided in the documentation. However, I see that the word "apple" is not in the vocab. Am I doing something wrong here? How can I check whether a word exists in the vocab?
## Your Environment
<!-- Include details of your environment. You can also type `python -m spacy info --markdown` and copy-paste the result here.-->
## Info about spaCy
- **spaCy version:** 3.7.2
- **Platform:** Linux-5.15.133+-x86_64-with-glibc2.31
- **Python version:** 3.10.12
- **Pipelines:** en_core_web_sm (3.7.1), en_core_web_lg (3.7.1)
|
closed
|
2024-01-19T13:33:16Z
|
2024-02-26T00:02:24Z
|
https://github.com/explosion/spaCy/issues/13252
|
[
"docs",
"feat / vectors"
] |
lordsoffallen
| 4
|
ray-project/ray
|
machine-learning
| 51,272
|
[core][gpu-objects] Driver tries to get the data from in-actor store
|
### Description
The driver is not allowed to be in an NCCL group in Ray GPU objects. Hence, if the driver wants to retrieve data from the in-actor store, we can move the data from the in-actor store to the object store so that the driver can access it.
### Use case
_No response_
|
open
|
2025-03-11T22:06:53Z
|
2025-03-21T22:09:42Z
|
https://github.com/ray-project/ray/issues/51272
|
[
"enhancement",
"P1",
"core",
"gpu-objects"
] |
kevin85421
| 0
|
autokey/autokey
|
automation
| 368
|
v0.95.10 randomly consumes 100% of the CPU and also silently crashes
|
## Classification:
Bug and crash
## Reproducibility:
sometimes
## Version
AutoKey version: 0.95.10-1
Used GUI (Gtk, Qt, or both):
both
AUR repo: https://aur.archlinux.org/packages/autokey
Linux Distribution:
Manjaro Linux XFCE 64 bit, kernel 4.14.170-1-MANJARO
## What happens:
Yesterday I updated AutoKey from 0.95.9-1 to 0.95.10-1
and ran into some issues: the first is that it randomly starts to drive CPU usage up to 100% (without the GUI being used, and with nothing else running).
The second issue is that it also randomly stops working (I cannot use the shortcuts and scripts) because it silently crashes. I investigated in the Journal logs:
```
feb 18 06:59:07 systemd-coredump[2611]: Process 1166 (autokey-gtk) of user 1000 dumped core.
Stack trace of thread 2606:
#0 0x00007fb454f33f25 raise (libc.so.6)
#1 0x00007fb454f1d897 abort (libc.so.6)
#2 0x00007fb454f1d767 __assert_fail_base.cold (libc.so.6)
#3 0x00007fb454f2c526 __assert_fail (libc.so.6)
#4 0x00007fb4530229f9 n/a (libX11.so.6)
#5 0x00007fb453022a9e n/a (libX11.so.6)
#6 0x00007fb453022f12 _XReadEvents (libX11.so.6)
#7 0x00007fb45300a356 XIfEvent (libX11.so.6)
#8 0x00007fb45288626f gdk_x11_get_server_time (libgdk-3.so.0)
#9 0x00007fb451fd38a4 n/a (libgtk-3.so.0)
#10 0x00007fb451fd3aa8 n/a (libgtk-3.so.0)
#11 0x00007fb453794d5a g_closure_invoke (libgobject-2.0.so.0)
#12 0x00007fb4537829e4 n/a (libgobject-2.0.so.0)
#13 0x00007fb45378698a g_signal_emit_valist (libgobject-2.0.so.0)
#14 0x00007fb4537877f0 g_signal_emit (libgobject-2.0.so.0)
#15 0x00007fb451fd1cdd gtk_widget_realize (libgtk-3.so.0)
#16 0x00007fb45207102e n/a (libgtk-3.so.0)
#17 0x00007fb453794d5a g_closure_invoke (libgobject-2.0.so.0)
#18 0x00007fb4537829e4 n/a (libgobject-2.0.so.0)
#19 0x00007fb45378698a g_signal_emit_valist (libgobject-2.0.so.0)
#20 0x00007fb4537877f0 g_signal_emit (libgobject-2.0.so.0)
#21 0x00007fb45208b5da gtk_widget_show (libgtk-3.so.0)
#22 0x00007fb452027c97 gtk_status_icon_set_visible (libgtk-3.so.0)
#23 0x00007fb4504c6c0e n/a (libappindicator3.so.1)
#24 0x00007fb453794d5a g_closure_invoke (libgobject-2.0.so.0)
#25 0x00007fb45378288e n/a (libgobject-2.0.so.0)
#26 0x00007fb45378698a g_signal_emit_valist (libgobject-2.0.so.0)
#27 0x00007fb4537877f0 g_signal_emit (libgobject-2.0.so.0)
#28 0x00007fb4504c555c app_indicator_set_status (libappindicator3.so.1)
#29 0x00007fb45375e69a ffi_call_unix64 (libffi.so.6)
#30 0x00007fb45375dfb6 ffi_call (libffi.so.6)
#31 0x00007fb4537fb392 n/a (_gi.cpython-38-x86_64-linux-gnu.so)
#32 0x00007fb4537fa972 n/a (_gi.cpython-38-x86_64-linux-gnu.so)
#33 0x00007fb45380049e n/a (_gi.cpython-38-x86_64-linux-gnu.so)
#34 0x00007fb454c6fad2 _PyObject_MakeTpCall (libpython3.8.so.1.0)
#35 0x00007fb454d2c7f4 _PyEval_EvalFrameDefault (libpython3.8.so.1.0)
#36 0x00007fb454d1606d _PyFunction_Vectorcall (libpython3.8.so.1.0)
#37 0x00007fb454d280ce _PyEval_EvalFrameDefault (libpython3.8.so.1.0)
#38 0x00007fb454d1606d _PyFunction_Vectorcall (libpython3.8.so.1.0)
#39 0x00007fb454d280ce _PyEval_EvalFrameDefault (libpython3.8.so.1.0)
#40 0x00007fb454d14e3b _PyEval_EvalCodeWithName (libpython3.8.so.1.0)
#41 0x00007fb454d1624b _PyFunction_Vectorcall (libpython3.8.so.1.0)
#42 0x00007fb454c6730d PyObject_Call (libpython3.8.so.1.0)
#43 0x00007fb454d29d03 _PyEval_EvalFrameDefault (libpython3.8.so.1.0)
#44 0x00007fb454d1606d _PyFunction_Vectorcall (libpython3.8.so.1.0)
#45 0x00007fb454d280ce _PyEval_EvalFrameDefault (libpython3.8.so.1.0)
#46 0x00007fb454d1606d _PyFunction_Vectorcall (libpython3.8.so.1.0)
#47 0x00007fb454d280ce _PyEval_EvalFrameDefault (libpython3.8.so.1.0)
#48 0x00007fb454d1606d _PyFunction_Vectorcall (libpython3.8.so.1.0)
#49 0x00007fb454d16a7b n/a (libpython3.8.so.1.0)
#50 0x00007fb454c6730d PyObject_Call (libpython3.8.so.1.0)
#51 0x00007fb454d7c4e1 n/a (libpython3.8.so.1.0)
#52 0x00007fb454d368f4 n/a (libpython3.8.so.1.0)
#53 0x00007fb454b204cf start_thread (libpthread.so.0)
#54 0x00007fb454ff72d3 __clone (libc.so.6)
Stack trace of thread 1166:
#0 0x00007fb454fe84cf write (libc.so.6)
#1 0x00007fb454ba2380 _Py_write_noraise (libpython3.8.so.1.0)
#2 0x00007fb454bb0d60 n/a (libpython3.8.so.1.0)
#3 0x00007fb454f33fb0 __restore_rt (libc.so.6)
#4 0x00007fb454f33f25 raise (libc.so.6)
#5 0x00007fb454f1d897 abort (libc.so.6)
#6 0x00007fb454f1d767 __assert_fail_base.cold (libc.so.6)
#7 0x00007fb454f2c526 __assert_fail (libc.so.6)
#8 0x00007fb453022984 n/a (libX11.so.6)
#9 0x00007fb453022add n/a (libX11.so.6)
#10 0x00007fb453022d92 _XEventsQueued (libX11.so.6)
#11 0x00007fb453014782 XPending (libX11.so.6)
#12 0x00007fb45289aa00 n/a (libgdk-3.so.0)
#13 0x00007fb453910a00 g_main_context_prepare (libglib-2.0.so.0)
#14 0x00007fb453911046 n/a (libglib-2.0.so.0)
#15 0x00007fb4539120c3 g_main_loop_run (libglib-2.0.so.0)
#16 0x00007fb4521cb9ef gtk_main (libgtk-3.so.0)
#17 0x00007fb45375e69a ffi_call_unix64 (libffi.so.6)
#18 0x00007fb45375dfb6 ffi_call (libffi.so.6)
#19 0x00007fb4537fb392 n/a (_gi.cpython-38-x86_64-linux-gnu.so)
#20 0x00007fb4537fa972 n/a (_gi.cpython-38-x86_64-linux-gnu.so)
#21 0x00007fb454c673a0 PyObject_Call (libpython3.8.so.1.0)
#22 0x00007fb454d29d03 _PyEval_EvalFrameDefault (libpython3.8.so.1.0)
#23 0x00007fb454d14e3b _PyEval_EvalCodeWithName (libpython3.8.so.1.0)
#24 0x00007fb454d1624b _PyFunction_Vectorcall (libpython3.8.so.1.0)
#25 0x00007fb454d2c3c8 _PyEval_EvalFrameDefault (libpython3.8.so.1.0)
#26 0x00007fb454d1606d _PyFunction_Vectorcall (libpython3.8.so.1.0)
#27 0x00007fb454d280ce _PyEval_EvalFrameDefault (libpython3.8.so.1.0)
#28 0x00007fb454d1606d _PyFunction_Vectorcall (libpython3.8.so.1.0)
#29 0x00007fb454d27c8c _PyEval_EvalFrameDefault (libpython3.8.so.1.0)
#30 0x00007fb454d14e3b _PyEval_EvalCodeWithName (libpython3.8.so.1.0)
#31 0x00007fb454d9e3d3 PyEval_EvalCode (libpython3.8.so.1.0)
#32 0x00007fb454d9e428 n/a (libpython3.8.so.1.0)
#33 0x00007fb454da2623 n/a (libpython3.8.so.1.0)
#34 0x00007fb454c3d3e7 PyRun_FileExFlags (libpython3.8.so.1.0)
#35 0x00007fb454c47f4a PyRun_SimpleFileExFlags (libpython3.8.so.1.0)
#36 0x00007fb454daf8be Py_RunMain (libpython3.8.so.1.0)
#37 0x00007fb454daf9a9 Py_BytesMain (libpython3.8.so.1.0)
#38 0x00007fb454f1f153 __libc_start_main (libc.so.6)
#39 0x000055ea57db605e _start (python3.8)
Stack trace of thread 1256:
#0 0x00007fb454fec9ef __poll (libc.so.6)
#1 0x00007fb453911120 n/a (libglib-2.0.so.0)
#2 0x00007fb4539111f1 g_main_context_iteration (libglib-2.0.so.0)
#3 0x00007fb453911242 n/a (libglib-2.0.so.0)
#4 0x00007fb4538edbb1 n/a (libglib-2.0.so.0)
#5 0x00007fb454b204cf start_thread (libpthread.so.0)
#6 0x00007fb454ff72d3 __clone (libc.so.6)
Stack trace of thread 1263:
#0 0x00007fb454feee7b __select (libc.so.6)
#1 0x00007fb454d7505e n/a (libpython3.8.so.1.0)
#2 0x00007fb454c75f4f n/a (libpython3.8.so.1.0)
#3 0x00007fb454d2c3c8 _PyEval_EvalFrameDefault (libpython3.8.so.1.0)
#4 0x00007fb454d1606d _PyFunction_Vectorcall (libpython3.8.so.1.0)
#5 0x00007fb454d16a7b n/a (libpython3.8.so.1.0)
#6 0x00007fb454c6730d PyObject_Call (libpython3.8.so.1.0)
#7 0x00007fb454d29d03 _PyEval_EvalFrameDefault (libpython3.8.so.1.0)
#8 0x00007fb454d1606d _PyFunction_Vectorcall (libpython3.8.so.1.0)
#9 0x00007fb454d280ce _PyEval_EvalFrameDefault (libpython3.8.so.1.0)
#10 0x00007fb454d1606d _PyFunction_Vectorcall (libpython3.8.so.1.0)
#11 0x00007fb454d280ce _PyEval_EvalFrameDefault (libpython3.8.so.1.0)
#12 0x00007fb454d1606d _PyFunction_Vectorcall (libpython3.8.so.1.0)
#13 0x00007fb454d16a7b n/a (libpython3.8.so.1.0)
#14 0x00007fb454c6730d PyObject_Call (libpython3.8.so.1.0)
#15 0x00007fb454d7c4e1 n/a (libpython3.8.so.1.0)
#16 0x00007fb454d368f4 n/a (libpython3.8.so.1.0)
#17 0x00007fb454b204cf start_thread (libpthread.so.0)
#18 0x00007fb454ff72d3 __clone (libc.so.6)
Stack trace of thread 1267:
#0 0x00007fb454fec9ef __poll (libc.so.6)
#1 0x00007fb4541206d4 n/a (select.cpython-38-x86_64-linux-gnu.so)
#2 0x00007fb454c76104 n/a (libpython3.8.so.1.0)
#3 0x00007fb454d280ce _PyEval_EvalFrameDefault (libpython3.8.so.1.0)
#4 0x00007fb454d14e3b _PyEval_EvalCodeWithName (libpython3.8.so.1.0)
#5 0x00007fb454d1624b _PyFunction_Vectorcall (libpython3.8.so.1.0)
#6 0x00007fb454d280ce _PyEval_EvalFrameDefault (libpython3.8.so.1.0)
#7 0x00007fb454d1606d _PyFunction_Vectorcall (libpython3.8.so.1.0)
#8 0x00007fb454d280ce _PyEval_EvalFrameDefault (libpython3.8.so.1.0)
#9 0x00007fb454d1606d _PyFunction_Vectorcall (libpython3.8.so.1.0)
#10 0x00007fb454d280ce _PyEval_EvalFrameDefault (libpython3.8.so.1.0)
#11 0x00007fb454d1606d _PyFunction_Vectorcall (libpython3.8.so.1.0)
#12 0x00007fb454d16a7b n/a (libpython3.8.so.1.0)
#13 0x00007fb454c6730d PyObject_Call (libpython3.8.so.1.0)
#14 0x00007fb454d7c4e1 n/a (libpython3.8.so.1.0)
#15 0x00007fb454d368f4 n/a (libpython3.8.so.1.0)
#16 0x00007fb454b204cf start_thread (libpthread.so.0)
#17 0x00007fb454ff72d3 __clone (libc.so.6)
Stack trace of thread 1257:
#0 0x00007fb454fec9ef __poll (libc.so.6)
#1 0x00007fb453911120 n/a (libglib-2.0.so.0)
#2 0x00007fb4539120c3 g_main_loop_run (libglib-2.0.so.0)
#3 0x00007fb4535fcbc8 n/a (libgio-2.0.so.0)
#4 0x00007fb4538edbb1 n/a (libglib-2.0.so.0)
#5 0x00007fb454b204cf start_thread (libpthread.so.0)
#6 0x00007fb454ff72d3 __clone (libc.so.6)
Stack trace of thread 1265:
#0 0x00007fb454b29704 do_futex_wait.constprop.0 (libpthread.so.0)
#1 0x00007fb454b297f8 __new_sem_wait_slow.constprop.0 (libpthread.so.0)
#2 0x00007fb454c83a0e PyThread_acquire_lock_timed (libpython3.8.so.1.0)
#3 0x00007fb454d7c9a1 n/a (libpython3.8.so.1.0)
#4 0x00007fb454d95e1b n/a (libpython3.8.so.1.0)
#5 0x00007fb454d0bac9 n/a (libpython3.8.so.1.0)
#6 0x00007fb454d280ce _PyEval_EvalFrameDefault (libpython3.8.so.1.0)
#7 0x00007fb454d14e3b _PyEval_EvalCodeWithName (libpython3.8.so.1.0)
#8 0x00007fb454d1624b _PyFunction_Vectorcall (libpython3.8.so.1.0)
#9 0x00007fb454d280ce _PyEval_EvalFrameDefault (libpython3.8.so.1.0)
#10 0x00007fb454d14e3b _PyEval_EvalCodeWithName (libpython3.8.so.1.0)
#11 0x00007fb454d1624b _PyFunction_Vectorcall (libpython3.8.so.1.0)
#12 0x00007fb454d280ce _PyEval_EvalFrameDefault (libpython3.8.so.1.0)
#13 0x00007fb454d1606d _PyFunction_Vectorcall (libpython3.8.so.1.0)
#14 0x00007fb454d280ce _PyEval_EvalFrameDefault (libpython3.8.so.1.0)
#15 0x00007fb454d1606d _PyFunction_Vectorcall (libpython3.8.so.1.0)
#16 0x00007fb454d280ce _PyEval_EvalFrameDefault (libpython3.8.so.1.0)
#17 0x00007fb454d1606d _PyFunction_Vectorcall (libpython3.8.so.1.0)
#18 0x00007fb454d16a7b n/a (libpython3.8.so.1.0)
#19 0x00007fb454c6730d PyObject_Call (libpython3.8.so.1.0)
#20 0x00007fb454d7c4e1 n/a (libpython3.8.so.1.0)
#21 0x00007fb454d368f4 n/a (libpython3.8.so.1.0)
#22 0x00007fb454b204cf start_thread (libpthread.so.0)
#23 0x00007fb454ff72d3 __clone (libc.so.6)
Stack trace of thread 1264:
#0 0x00007fb454feee7b __select (libc.so.6)
#1 0x00007fb45412043e n/a (select.cpython-38-x86_64-linux-gnu.so)
#2 0x00007fb454c75e37 n/a (libpython3.8.so.1.0)
#3 0x00007fb454d2c3c8 _PyEval_EvalFrameDefault (libpython3.8.so.1.0)
#4 0x00007fb454d14e3b _PyEval_EvalCodeWithName (libpython3.8.so.1.0)
#5 0x00007fb454d16892 n/a (libpython3.8.so.1.0)
#6 0x00007fb454d28a9c _PyEval_EvalFrameDefault (libpython3.8.so.1.0)
#7 0x00007fb454d167a6 n/a (libpython3.8.so.1.0)
#8 0x00007fb454d2c3c8 _PyEval_EvalFrameDefault (libpython3.8.so.1.0)
#9 0x00007fb454d14e3b _PyEval_EvalCodeWithName (libpython3.8.so.1.0)
#10 0x00007fb454d1624b _PyFunction_Vectorcall (libpython3.8.so.1.0)
#11 0x00007fb454c67418 PyObject_Call (libpython3.8.so.1.0)
#12 0x00007fb454d29d03 _PyEval_EvalFrameDefault (libpython3.8.so.1.0)
#13 0x00007fb454d14e3b _PyEval_EvalCodeWithName (libpython3.8.so.1.0)
#14 0x00007fb454d1624b _PyFunction_Vectorcall (libpython3.8.so.1.0)
#15 0x00007fb454d179ab n/a (libpython3.8.so.1.0)
#16 0x00007fb454c6f962 _PyObject_MakeTpCall (libpython3.8.so.1.0)
#17 0x00007fb454d2c9f1 _PyEval_EvalFrameDefault (libpython3.8.so.1.0)
#18 0x00007fb454d167a6 n/a (libpython3.8.so.1.0)
#19 0x00007fb454d2c3c8 _PyEval_EvalFrameDefault (libpython3.8.so.1.0)
#20 0x00007fb454d1606d _PyFunction_Vectorcall (libpython3.8.so.1.0)
#21 0x00007fb454d280ce _PyEval_EvalFrameDefault (libpython3.8.so.1.0)
#22 0x00007fb454d1606d _PyFunction_Vectorcall (libpython3.8.so.1.0)
#23 0x00007fb454d280ce _PyEval_EvalFrameDefault (libpython3.8.so.1.0)
#24 0x00007fb454d1606d _PyFunction_Vectorcall (libpython3.8.so.1.0)
#25 0x00007fb454d16a7b n/a (libpython3.8.so.1.0)
#26 0x00007fb454c6730d PyObject_Call (libpython3.8.so.1.0)
#27 0x00007fb454d7c4e1 n/a (libpython3.8.so.1.0)
#28 0x00007fb454d368f4 n/a (libpython3.8.so.1.0)
#29 0x00007fb454b204cf start_thread (libpthread.so.0)
#30 0x00007fb454ff72d3 __clone (libc.so.6)
Stack trace of thread 1262:
#0 0x00007fb454b29704 do_futex_wait.constprop.0 (libpthread.so.0)
#1 0x00007fb454b297f8 __new_sem_wait_slow.constprop.0 (libpthread.so.0)
#2 0x00007fb454c83a0e PyThread_acquire_lock_timed (libpython3.8.so.1.0)
#3 0x00007fb454d7c9a1 n/a (libpython3.8.so.1.0)
#4 0x00007fb454d95e1b n/a (libpython3.8.so.1.0)
#5 0x00007fb454d0bac9 n/a (libpython3.8.so.1.0)
#6 0x00007fb454d280ce _PyEval_EvalFrameDefault (libpython3.8.so.1.0)
#7 0x00007fb454d14e3b _PyEval_EvalCodeWithName (libpython3.8.so.1.0)
#8 0x00007fb454d1624b _PyFunction_Vectorcall (libpython3.8.so.1.0)
#9 0x00007fb454d280ce _PyEval_EvalFrameDefault (libpython3.8.so.1.0)
#10 0x00007fb454d14e3b _PyEval_EvalCodeWithName (libpython3.8.so.1.0)
#11 0x00007fb454d1624b _PyFunction_Vectorcall (libpython3.8.so.1.0)
#12 0x00007fb454d280ce _PyEval_EvalFrameDefault (libpython3.8.so.1.0)
#13 0x00007fb454d1606d _PyFunction_Vectorcall (libpython3.8.so.1.0)
#14 0x00007fb454d16a7b n/a (libpython3.8.so.1.0)
#15 0x00007fb454c6730d PyObject_Call (libpython3.8.so.1.0)
#16 0x00007fb454d29d03 _PyEval_EvalFrameDefault (libpython3.8.so.1.0)
#17 0x00007fb454d1606d _PyFunction_Vectorcall (libpython3.8.so.1.0)
#18 0x00007fb454d280ce _PyEval_EvalFrameDefault (libpython3.8.so.1.0)
#19 0x00007fb454d1606d _PyFunction_Vectorcall (libpython3.8.so.1.0)
#20 0x00007fb454d280ce _PyEval_EvalFrameDefault (libpython3.8.so.1.0)
#21 0x00007fb454d1606d _PyFunction_Vectorcall (libpython3.8.so.1.0)
#22 0x00007fb454d16a7b n/a (libpython3.8.so.1.0)
#23 0x00007fb454c6730d PyObject_Call (libpython3.8.so.1.0)
#24 0x00007fb454d7c4e1 n/a (libpython3.8.so.1.0)
#25 0x00007fb454d368f4 n/a (libpython3.8.so.1.0)
#26 0x00007fb454b204cf start_thread (libpthread.so.0)
#27 0x00007fb454ff72d3 __clone (libc.so.6)
```
Then I restart it, but the described issues occur again. Furthermore, sometimes, even though the scripts work properly, an AutoKey notification is displayed on the desktop: "This "scriptname" has encountered an error", or something similar.
E.g. with a script which is very simple:
`output = system.exec_command("gedit '/home/dave/Documents/textfile'")`
|
closed
|
2020-02-18T07:10:01Z
|
2023-05-07T19:45:09Z
|
https://github.com/autokey/autokey/issues/368
|
[] |
MR-Diamond
| 8
|
plotly/dash
|
data-science
| 2,803
|
[Feature Request] Global set_props in backend callbacks.
|
Add a global `dash.set_props` to be used in callbacks to set arbitrary props not defined in the callbacks outputs, similar to the clientside `dash_clientside.set_props`.
Example:
```python
app.layout = html.Div([
    html.Div(id="output"),
    html.Div(id="secondary-output"),
    html.Button("click", id="clicker"),
])


@app.callback(
    Output("output", "children"),
    Input("clicker", "n_clicks"),
    prevent_initial_call=True,
)
def on_click(n_clicks):
    set_props("secondary-output", {"children": "secondary"})
    return f"Clicked {n_clicks} times"
```
|
closed
|
2024-03-20T16:26:14Z
|
2024-05-03T13:20:34Z
|
https://github.com/plotly/dash/issues/2803
|
[] |
T4rk1n
| 2
|
ultralytics/yolov5
|
pytorch
| 13,141
|
how to convert pt to onnx to trt
|
### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
how to convert pt to onnx to trt
### Additional
I'm doing this:
`python export.py --weights best.pt --include onnx --opset 12`
then `trtexec --onnx=best.onnx --saveEngine=best.trt`
After that, when I try to load the model I get this:
![image](https://github.com/ultralytics/yolov5/assets/163284626/921775cc-1757-4b32-a986-6da0b12a7734)
I used to be able to do it, but six months later I forgot how I did it.
Please help
|
closed
|
2024-06-27T03:11:08Z
|
2024-12-16T10:28:50Z
|
https://github.com/ultralytics/yolov5/issues/13141
|
[
"question",
"Stale"
] |
gdfapokgdpafog
| 8
|
pyro-ppl/numpyro
|
numpy
| 1,744
|
Adding HMCECS proxy functions
|
Hi,
I'm working on a neural proxy function for HMCECS and have a Taylor expansion proxy with an approximate Hessian. However, the file is becoming somewhat unruly as the proxies are currently in `hmc_gibbs.py`. If I move (only) the proxy functions to a separate file under `contrib` and keep using the static method interface for HMCECS (i.e., `HMCECS.taylor_proxy`), there is no change to the user interface, and I think it would be easier to work with.
Let me know what you think.
edit: change would look like [this](https://github.com/aleatory-science/numpyro/pull/3) (moved PR to aleatory)
|
closed
|
2024-02-23T11:08:06Z
|
2024-02-28T08:07:15Z
|
https://github.com/pyro-ppl/numpyro/issues/1744
|
[
"discussion"
] |
OlaRonning
| 2
|
neuml/txtai
|
nlp
| 585
|
Add support for binary indexes to Faiss ANN
|
This change will add support for [Faiss binary indexes](https://github.com/facebookresearch/faiss/wiki/Binary-indexes). Binary indexes will be used to index scalar quantized data.
|
closed
|
2023-10-27T09:52:56Z
|
2023-10-27T19:21:38Z
|
https://github.com/neuml/txtai/issues/585
|
[] |
davidmezzetti
| 1
|
tfranzel/drf-spectacular
|
rest-api
| 1,326
|
Weird issue when generating the schema
|
**Describe the bug**
I'm having a hard time with this: when I fetch the schema from the Swagger UI, I get the correct schema (it includes the filter fields that I defined in a FilterSet class).

The Swagger UI also gives me a snippet to make the request, but when I run that request the returned schema is different, and I don't really know what is happening.

**To Reproduce**
Add a filter class to a view and fetch the schema from the UI (check the schema generated), then copy the snippet provided by the Swagger UI, save the curl response, and compare the schemas: they are different.
**Expected behavior**
I expect the schema file to be the same whether I fetch it from the Swagger UI or via a curl request.
Thank you so much for your help, and for this amazing project.
|
closed
|
2024-11-07T07:53:41Z
|
2024-11-10T13:16:58Z
|
https://github.com/tfranzel/drf-spectacular/issues/1326
|
[] |
yoelfme
| 7
|
lepture/authlib
|
flask
| 328
|
PKCE check
|
https://github.com/lepture/authlib/blob/51261de795cddb93d5e5206d8206bfd87917c5b3/authlib/oauth2/rfc6749/grants/authorization_code.py#L207
Hi.
Here, when the request is a PKCE token request, the client is public and client credentials are not sent. Shouldn't we skip client authentication and just check `client_id` instead? Or am I missing something in the request?
Thanks in advance
request parameters:
--------------------------
client_id
scope
redirect_uri
state
code
code_verifier
grant_type=authorization_code
response:
-------------
{
    "error": "invalid_client",
    "state": "345tfdgsut7i"
}
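For context, the PKCE mechanism under discussion can be sketched with the standard library alone. This is a minimal illustration of RFC 7636's S256 method, not authlib's implementation: the point is that for a public client the server verifies `code_verifier` against the stored challenge instead of authenticating with a client secret. The function names here are hypothetical.

```python
import base64
import hashlib
import secrets


def make_pkce_pair():
    # Client side: random verifier, plus the S256 challenge that is sent
    # with the authorization request.
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    challenge = base64.urlsafe_b64encode(
        hashlib.sha256(verifier.encode("ascii")).digest()
    ).rstrip(b"=").decode()
    return verifier, challenge


def server_check(stored_challenge, code_verifier):
    # Server side: on the token request, recompute the challenge from the
    # submitted code_verifier and compare it with the challenge stored at
    # authorization time.
    recomputed = base64.urlsafe_b64encode(
        hashlib.sha256(code_verifier.encode("ascii")).digest()
    ).rstrip(b"=").decode()
    return secrets.compare_digest(recomputed, stored_challenge)


verifier, challenge = make_pkce_pair()
print(server_check(challenge, verifier))  # True for the matching verifier
```

This check is what binds the token request to the original authorization request even though no client secret is ever transmitted.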
|
closed
|
2021-03-05T21:49:14Z
|
2021-03-06T03:56:58Z
|
https://github.com/lepture/authlib/issues/328
|
[] |
shahabGh77
| 1
|
custom-components/pyscript
|
jupyter
| 492
|
Response Data
|
With the recent introduction for Home Assistant allowing service calls to respond with data, I am curious if this is a planned feature for pyscript?
|
closed
|
2023-07-18T20:48:48Z
|
2023-07-30T18:02:41Z
|
https://github.com/custom-components/pyscript/issues/492
|
[] |
Sian-Lee-SA
| 1
|
gradio-app/gradio
|
python
| 10,711
|
Should not try to get_node_path() if SSR mode is disabled.
|
### Describe the bug
In Gradio code, the lines https://github.com/gradio-app/gradio/blob/54fd90703e74bd793668dda62fd87c4ef2cfff03/gradio/blocks.py#L2560 and https://github.com/gradio-app/gradio/blob/54fd90703e74bd793668dda62fd87c4ef2cfff03/gradio/routes.py#L1737 call `get_node_path()` prematurely. The call to `get_node_path()` should happen only if SSR mode is set to true.
This is because this call breaks the application from launching if `get_node_path()` fails. The `get_node_path()` call fails due to failing to launch a subprocess (calling `which` to check the path of `node`) because it is not allowed. Note that SSR mode is set to false and is not required. This is a very niche use-case but this can happen, for instance, if the app is running inside a trusted platform module where forking new processes will fail.
I suggest a change along the following lines. I can submit a pull request if this is okay.
```
self.node_path = os.environ.get(
    "GRADIO_NODE_PATH", "" if wasm_utils.IS_WASM else get_node_path()
)
```
be moved to within the following if block `if self.ssr_mode:`.
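The suggested change can be sketched as a small helper. The names below are hypothetical illustrations, not Gradio's actual internals: the idea is simply to consult `GRADIO_NODE_PATH` / the node lookup only when SSR is enabled, so environments that cannot spawn subprocesses never trigger it.

```python
import os


def resolve_node_path(ssr_mode, is_wasm, get_node_path):
    # Hypothetical sketch: skip node discovery entirely unless SSR is on,
    # so get_node_path() (which shells out to check for `node`) is never
    # called in environments where forking is forbidden.
    if not ssr_mode:
        return None
    return os.environ.get("GRADIO_NODE_PATH", "" if is_wasm else get_node_path())


# With SSR off, the (possibly failing) lookup is never invoked:
def failing_lookup():
    raise OSError("subprocess not allowed")


print(resolve_node_path(False, False, failing_lookup))  # None, lookup untouched
```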
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
It is less to do with the code that launches Gradio and more to do with the environment where it is launched.
### Screenshot
_No response_
### Logs
```shell
```
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Linux
gradio version: 5.20.0
gradio_client version: 1.7.2
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.8.0
audioop-lts is not installed.
fastapi: 0.115.11
ffmpy: 0.5.0
gradio-client==1.7.2 is not installed.
groovy: 0.1.2
httpx: 0.28.1
huggingface-hub: 0.29.1
jinja2: 3.1.5
markupsafe: 2.1.5
numpy: 2.0.2
orjson: 3.10.15
packaging: 24.2
pandas: 2.2.2
pillow: 11.1.0
pydantic: 2.10.6
pydub: 0.25.1
python-multipart: 0.0.20
pyyaml: 6.0.2
ruff: 0.9.9
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.46.0
tomlkit: 0.13.2
typer: 0.15.1
typing-extensions: 4.12.2
urllib3: 2.3.0
uvicorn: 0.34.0
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2025.2.0
httpx: 0.28.1
huggingface-hub: 0.29.1
packaging: 24.2
typing-extensions: 4.12.2
websockets: 15.0
```
### Severity
Blocking usage of gradio
|
closed
|
2025-03-03T03:12:59Z
|
2025-03-04T03:11:47Z
|
https://github.com/gradio-app/gradio/issues/10711
|
[
"bug",
"good first issue"
] |
anirbanbasu
| 2
|
scikit-multilearn/scikit-multilearn
|
scikit-learn
| 194
|
Getting ValueError: Can only tuple-index with a MultiIndex
|
I am trying to stratify my multi-label data; `total` is all my data. It contains 20 columns: the first column is the text (X) and the remaining 19 columns are labels (each column represents a class, set to 1 if present for an example, else 0). `total` is loaded from a CSV file, if that info is needed.
```
from skmultilearn.model_selection import iterative_train_test_split
X_train, y_train, X_test, y_test = iterative_train_test_split(total.iloc[:,0], total.iloc[:,1:], test_size = 0.5)
```
i am getting the following Error:
```
ValueError: Can only tuple-index with a MultiIndex
```
Here is the traceback:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-9-98829124e697> in <module>
1 from skmultilearn.model_selection import iterative_train_test_split
----> 2 X_train, y_train, X_test, y_test = iterative_train_test_split(total.iloc[:,0], total.iloc[:,1:], test_size = 0.5)
~/virtualenvs/anaconda3/envs/tf1/lib/python3.6/site-packages/skmultilearn/model_selection/iterative_stratification.py in iterative_train_test_split(X, y, test_size)
93 train_indexes, test_indexes = next(stratifier.split(X, y))
94
---> 95 X_train, y_train = X[train_indexes, :], y[train_indexes, :]
96 X_test, y_test = X[test_indexes, :], y[test_indexes, :]
97
~/virtualenvs/anaconda3/envs/tf1/lib/python3.6/site-packages/pandas/core/series.py in __getitem__(self, key)
1111 key = check_bool_indexer(self.index, key)
1112
-> 1113 return self._get_with(key)
1114
1115 def _get_with(self, key):
~/virtualenvs/anaconda3/envs/tf1/lib/python3.6/site-packages/pandas/core/series.py in _get_with(self, key)
1125 elif isinstance(key, tuple):
1126 try:
-> 1127 return self._get_values_tuple(key)
1128 except Exception:
1129 if len(key) == 1:
~/virtualenvs/anaconda3/envs/tf1/lib/python3.6/site-packages/pandas/core/series.py in _get_values_tuple(self, key)
1170
1171 if not isinstance(self.index, MultiIndex):
-> 1172 raise ValueError("Can only tuple-index with a MultiIndex")
1173
1174 # If key is contained, would have returned by now
ValueError: Can only tuple-index with a MultiIndex
```
Thanks in advance.
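The failing frame in the traceback is `X[train_indexes, :]`, which is NumPy-style 2-D indexing; a pandas `Series` interprets that tuple as a MultiIndex lookup, hence the error. A minimal sketch of the difference (converting the inputs to NumPy arrays first is one common workaround, assuming the splitter expects array inputs):

```python
import numpy as np
import pandas as pd

# Toy stand-in for `total`: one text column, two label columns.
df = pd.DataFrame({"text": ["a", "b", "c", "d"],
                   "label1": [1, 0, 1, 0],
                   "label2": [0, 1, 1, 0]})

idx = np.array([0, 2])

# NumPy arrays support the [rows, :] indexing the splitter uses internally:
X = df.iloc[:, 0].to_numpy().reshape(-1, 1)  # shape (4, 1)
y = df.iloc[:, 1:].to_numpy()                # shape (4, 2)
print(X[idx, :].shape, y[idx, :].shape)

# A pandas Series does not: Series[(idx, slice(None))] is treated as a
# MultiIndex lookup and raises (the exact exception varies by pandas version).
try:
    df.iloc[:, 0][idx, :]
except (ValueError, KeyError) as e:
    print(type(e).__name__)
```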
|
closed
|
2019-12-30T07:09:55Z
|
2023-03-14T17:05:30Z
|
https://github.com/scikit-multilearn/scikit-multilearn/issues/194
|
[] |
adiv5
| 6
|
tensorflow/tensor2tensor
|
machine-learning
| 1,914
|
Error: AttributeError: module 'tensorflow.compat.v2.__internal__' has no attribute 'monitoring'
|
### How do I resolve this error?
I am running this code in a Jupyter notebook.
I have imported the `tensor2tensor` and `tensorflow` packages, but this error arises anyway. Can anyone help me figure out the reason?
```
from tensor2tensor.data_generators.problem import Problem
...
```
```
AttributeError Traceback (most recent call last)
Input In [13], in <cell line: 11>()
9 from sklearn.metrics import mean_squared_error
10 from tensor2tensor.utils import contrib
---> 11 from tensor2tensor.data_generators.problem import Problem
12 from tensor2tensor.data_generators.text_encoder import TokenTextEncoder
13 from tqdm import tqdm
File ~/anaconda3/envs/tf-env/lib/python3.9/site-packages/tensor2tensor/data_generators/problem.py:27, in <module>
24 import random
25 import six
---> 27 from tensor2tensor.data_generators import generator_utils
28 from tensor2tensor.data_generators import text_encoder
29 from tensor2tensor.utils import contrib
File ~/anaconda3/envs/tf-env/lib/python3.9/site-packages/tensor2tensor/data_generators/generator_utils.py:1171, in <module>
1166 break
1167 return tmp_dir
1170 def tfrecord_iterator_for_problem(problem, data_dir,
-> 1171 dataset_split=tf.estimator.ModeKeys.TRAIN):
1172 """Iterate over the records on disk for the Problem."""
1173 filenames = tf.gfile.Glob(problem.filepattern(data_dir, mode=dataset_split))
File ~/anaconda3/envs/tf-env/lib/python3.9/site-packages/tensorflow/python/util/lazy_loader.py:62, in LazyLoader.__getattr__(self, item)
61 def __getattr__(self, item):
---> 62 module = self._load()
63 return getattr(module, item)
File ~/anaconda3/envs/tf-env/lib/python3.9/site-packages/tensorflow/python/util/lazy_loader.py:45, in LazyLoader._load(self)
43 """Load the module and insert it into the parent's globals."""
44 # Import the target module and insert it into the parent's namespace
---> 45 module = importlib.import_module(self.__name__)
46 self._parent_module_globals[self._local_name] = module
48 # Emit a warning if one was specified
File ~/anaconda3/envs/tf-env/lib/python3.9/importlib/__init__.py:127, in import_module(name, package)
125 break
126 level += 1
--> 127 return _bootstrap._gcd_import(name[level:], package, level)
File ~/anaconda3/envs/tf-env/lib/python3.9/site-packages/tensorflow_estimator/python/estimator/api/_v1/estimator/__init__.py:10, in <module>
6 from __future__ import print_function as _print_function
8 import sys as _sys
---> 10 from tensorflow_estimator.python.estimator.api._v1.estimator import experimental
11 from tensorflow_estimator.python.estimator.api._v1.estimator import export
12 from tensorflow_estimator.python.estimator.api._v1.estimator import inputs
File ~/anaconda3/envs/tf-env/lib/python3.9/site-packages/tensorflow_estimator/__init__.py:10, in <module>
6 from __future__ import print_function as _print_function
8 import sys as _sys
---> 10 from tensorflow_estimator._api.v1 import estimator
12 del _print_function
14 from tensorflow.python.util import module_wrapper as _module_wrapper
File ~/anaconda3/envs/tf-env/lib/python3.9/site-packages/tensorflow_estimator/_api/v1/estimator/__init__.py:13, in <module>
11 from tensorflow_estimator._api.v1.estimator import export
12 from tensorflow_estimator._api.v1.estimator import inputs
---> 13 from tensorflow_estimator._api.v1.estimator import tpu
14 from tensorflow_estimator.python.estimator.canned.baseline import BaselineClassifier
15 from tensorflow_estimator.python.estimator.canned.baseline import BaselineEstimator
File ~/anaconda3/envs/tf-env/lib/python3.9/site-packages/tensorflow_estimator/_api/v1/estimator/tpu/__init__.py:14, in <module>
12 from tensorflow_estimator.python.estimator.tpu.tpu_config import RunConfig
13 from tensorflow_estimator.python.estimator.tpu.tpu_config import TPUConfig
---> 14 from tensorflow_estimator.python.estimator.tpu.tpu_estimator import TPUEstimator
15 from tensorflow_estimator.python.estimator.tpu.tpu_estimator import TPUEstimatorSpec
17 del _print_function
File ~/anaconda3/envs/tf-env/lib/python3.9/site-packages/tensorflow_estimator/python/estimator/tpu/tpu_estimator.py:108, in <module>
105 _WRAP_INPUT_FN_INTO_WHILE_LOOP = False
107 # Track the adoption of TPUEstimator
--> 108 _tpu_estimator_gauge = tf.compat.v2.__internal__.monitoring.BoolGauge(
109 '/tensorflow/api/tpu_estimator',
110 'Whether the program uses tpu estimator or not.')
112 if ops.get_to_proto_function('{}_{}'.format(_TPU_ESTIMATOR,
113 _ITERATIONS_PER_LOOP_VAR)) is None:
114 ops.register_proto_function(
115 '{}_{}'.format(_TPU_ESTIMATOR, _ITERATIONS_PER_LOOP_VAR),
116 proto_type=variable_pb2.VariableDef,
117 to_proto=resource_variable_ops._to_proto_fn, # pylint: disable=protected-access
118 from_proto=resource_variable_ops._from_proto_fn) # pylint: disable=protected-access
AttributeError: module 'tensorflow.compat.v2.__internal__' has no attribute 'monitoring'
...
```
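This error typically comes from a mismatch between the installed `tensorflow` and `tensorflow-estimator` releases: the estimator shim expects `tf.compat.v2.__internal__.monitoring` from a matching TF version. A hedged sketch for flagging the mismatch up front, without importing TensorFlow (the helper name is mine, not part of either library):

```python
from importlib.metadata import version, PackageNotFoundError

def estimator_matches_tf() -> bool:
    """True if tensorflow and tensorflow-estimator agree on major.minor."""
    try:
        tf_ver = version("tensorflow")
        est_ver = version("tensorflow-estimator")
    except PackageNotFoundError:
        return True  # one of the packages is absent; nothing to compare
    # major.minor must agree for the internal monitoring API to line up
    return tf_ver.split(".")[:2] == est_ver.split(".")[:2]

print(estimator_matches_tf())
```

If the versions disagree, reinstalling matching releases of both packages usually resolves the `AttributeError`.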
|
open
|
2022-07-18T08:02:14Z
|
2022-07-18T08:02:14Z
|
https://github.com/tensorflow/tensor2tensor/issues/1914
|
[] |
qm-intel
| 0
|
jacobgil/pytorch-grad-cam
|
computer-vision
| 391
|
Explanation scores using Remove and Debias are none
|
While reproducing the blog post [https://jacobgil.github.io/pytorch-gradcam-book/CAM%20Metrics%20And%20Tuning%20Tutorial.html#road-remove-and-debias], I found that the scores calculated by `cam_metric` using Remove and Debias are always none. I have located the source of the error: the `NoisyLinearImputer` function produces an extremely large output tensor, which causes the target model to produce a very large posterior. Do you have any ideas on how to solve this problem? I suspect it may be due to the following line:
```python
res = torch.tensor(spsolve(csc_matrix(A), b), dtype=torch.float)
```
Is this equation vulnerable to some numerical instability that would output large values?
Thanks in advance!
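I am not sure of the right fix inside `NoisyLinearImputer` itself, but as a workaround sketch, clamping the solver output back into the input's value range keeps an unstable `spsolve` result from producing huge model inputs (`clamp_to_range` is a hypothetical helper, not part of pytorch-grad-cam):

```python
import numpy as np

def clamp_to_range(imputed: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Force imputed values back into the value range of the reference input."""
    lo, hi = reference.min(), reference.max()
    return np.clip(imputed, lo, hi)

x = np.array([0.0, 0.5, 1.0])            # original (e.g. normalized) input
bad = np.array([-1e6, 0.4, 1e6])         # e.g. an unstable spsolve output
print(clamp_to_range(bad, x))            # values forced into [0.0, 1.0]
```

The same idea applies to the torch tensor in the imputer: `res.clamp_(img.min(), img.max())` before feeding it to the model, at the cost of altering the imputation where it diverges.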
|
open
|
2023-02-19T00:36:44Z
|
2025-03-11T10:35:55Z
|
https://github.com/jacobgil/pytorch-grad-cam/issues/391
|
[] |
MasterEndless
| 1
|
Lightning-AI/pytorch-lightning
|
machine-learning
| 19,526
|
Model stuck after saving a checkpoint when using the FSDPStrategy
|
### Bug description
I'm training a GPT model using Fabric. Below is the setup for Fabric.
It works well when running without saving a checkpoint. However, if I save a checkpoint, either using `torch.save` with `fabric.barrier()` or with `fabric.save()`, the training gets stuck.
I saw that `torch.distributed.barrier()` has a [similar issue](https://github.com/pytorch/pytorch/issues/54059). I don't have similar utilities in my code; I'm not sure whether Fabric uses the same mechanism internally.
### What version are you seeing the problem on?
v2.1
### How to reproduce the bug
```python
strategy = FSDPStrategy(
auto_wrap_policy={Block},
activation_checkpointing_policy={Block},
state_dict_type="full",
limit_all_gathers=True,
cpu_offload=False,
)
self.fabric = L.Fabric(accelerator=device, devices=n_devices, strategy=strategy, precision=precision)
```
Saving model with
```python
state = {"model": model}
full_save_path = os.path.abspath(get_path(base_dir, base_name, '.pt'))
fabric.save(full_save_path, state)
```
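One common cause of this kind of hang (a guess, not a confirmed diagnosis of this report): with `state_dict_type="full"`, the save is a collective, so every rank must call it; guarding the call behind a rank check leaves the other ranks out of the collective and rank 0 blocks forever. A self-contained illustration of that pattern, using a `threading.Barrier` as a stand-in for the all-gather inside the save (no Lightning involved):

```python
import threading

def run(world_size: int, rank0_only: bool) -> dict:
    """Simulate a collective save; rank0_only mimics a guarded fabric.save()."""
    collective = threading.Barrier(world_size, timeout=0.5)
    results = {}

    def worker(rank: int) -> None:
        if rank0_only and rank != 0:
            return  # this rank never enters the collective
        try:
            collective.wait()            # stand-in for the all-gather in save()
            results[rank] = "saved"
        except threading.BrokenBarrierError:
            results[rank] = "deadlock"   # timed out waiting for missing ranks

    threads = [threading.Thread(target=worker, args=(r,)) for r in range(world_size)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

print(run(2, rank0_only=True))   # {0: 'deadlock'}
print(run(2, rank0_only=False))  # {0: 'saved', 1: 'saved'}
```

So it is worth checking that every rank reaches the `fabric.save()` call (Fabric itself writes from rank 0 only), and that no rank takes a different branch, e.g. an early `continue` or exception, between checkpoints.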
### Error messages and logs
```
# Error messages and logs here please
```
No errors; it just hangs!
### Environment
<details>
<summary>Current environment</summary>
```
#- Lightning Component (e.g. Trainer, LightningModule, LightningApp, LightningWork, LightningFlow): lightning, mainly using Fabric
#- PyTorch Lightning Version (e.g., 1.5.0): 2.1.3
#- Lightning App Version (e.g., 0.5.2):
#- PyTorch Version (e.g., 2.0): 2.1.2+cu118
#- Python version (e.g., 3.9): 3.10.13
#- OS (e.g., Linux): Ubuntu
#- CUDA/cuDNN version: 11.8
#- GPU models and configuration: A100 40Gx2
#- How you installed Lightning(`conda`, `pip`, source): pip
#- Running environment of LightningApp (e.g. local, cloud): local
```
</details>
### More info
I think it relates to the communication between the systems.
cc @awaelchli @carmocca
|
closed
|
2024-02-24T20:07:41Z
|
2024-07-27T12:44:27Z
|
https://github.com/Lightning-AI/pytorch-lightning/issues/19526
|
[
"bug",
"strategy: fsdp",
"ver: 2.1.x",
"repro needed"
] |
charlesxu90
| 3
|
deezer/spleeter
|
tensorflow
| 508
|
[Bug] No tag entry for v2
|
There is no releases entry for v2. Also, when were the models last updated?
|
open
|
2020-10-27T18:01:11Z
|
2020-10-27T18:01:11Z
|
https://github.com/deezer/spleeter/issues/508
|
[
"bug",
"invalid"
] |
ilyapashuk
| 0
|
Asabeneh/30-Days-Of-Python
|
matplotlib
| 392
|
Day 4: result is not correct
|
https://github.com/Asabeneh/30-Days-Of-Python/blame/c8656171d69e79b5dfc743f425991f46b7d1423e/04_Day_Strings/04_strings.md#L331
For this program the result should be 5 for `'y'` and 0 for `'th'`, since `find` returns the index of the first occurrence, but the course comments claim:
```python
challenge = 'thirty days of python'
print(challenge.find('y'))   # 16
print(challenge.find('th'))  # 17
```
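A quick check confirming the expected values (`str.find` returns the index of the first occurrence, or -1 if absent):

```python
challenge = 'thirty days of python'
assert challenge.find('y') == 5    # first 'y' is in "thirty", at index 5
assert challenge.find('th') == 0   # the string starts with 'th'
print(challenge.find('y'), challenge.find('th'))  # 5 0
```

The course's values 16 and 17 would be correct for `rfind`, which returns the last occurrence, not `find`.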
|
closed
|
2023-05-09T18:09:54Z
|
2023-07-08T21:47:18Z
|
https://github.com/Asabeneh/30-Days-Of-Python/issues/392
|
[] |
Galio54
| 1
|
LAION-AI/Open-Assistant
|
python
| 3,368
|
Add Support for Language Dialect Consistency in Conversations
|
Hello,
I’m reaching out to propose a new feature that I believe would enhance the user experience.
### Issue
Currently, when selecting a language, the system does not differentiate between language variants. For instance, when Spanish is selected, dialogues are mixed between European Spanish (Spain) and Latin American Spanish. The same happens with Catalan, where there is a mix of dialects (Catalan, Valencian, Balearic), and with Portuguese (Brazil, Portugal). This occasionally results in sentences and phrases that, while technically correct, can be perceived as "off" or "weird" by native speakers. I presume this happens with many other languages as well.
### Proposed Solution
I would like to suggest adding a more granular control over the language setting so that we can choose a specific variant (e.g. European Spanish, Mexican Spanish, etc.) for each language. Ideally, once a variant is chosen, the conversation thread should maintain consistency in the use of that variant throughout.
**Suggested Implementation:**
- Include a dropdown or a set of options under the language selection for users to choose the desired variant.
- Store the language variant selection and use it to keep the conversation thread consistent.
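The variant selection could be sketched like this, using BCP 47 tags and pinning the choice per thread (all names here are hypothetical, not the Open-Assistant schema):

```python
from dataclasses import dataclass

# Dropdown options per base language, as BCP 47 tags (illustrative subset)
VARIANTS = {
    "es": ("es-ES", "es-MX", "es-AR"),
    "pt": ("pt-PT", "pt-BR"),
    "ca": ("ca-ES", "ca-ES-valencia"),
}

@dataclass(frozen=True)
class ConversationThread:
    thread_id: str
    locale: str  # e.g. "es-MX"; frozen so the variant stays fixed for the thread

def options_for(base_language: str) -> tuple:
    """Options to populate the language-variant dropdown."""
    return VARIANTS.get(base_language, ())

t = ConversationThread("t-1", "es-MX")
print(options_for("es"))  # ('es-ES', 'es-MX', 'es-AR')
```

Making the thread object immutable (or validating the locale on every message) is one way to guarantee the consistency requirement above.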
### Benefits
- **Enhanced Readability**: Ensuring consistency in language variants makes the conversation more readable and relatable for native speakers.
- **Greater Precision**: Some dialects have unique expressions or terminology. Maintaining consistency in language variants allows for more precise communication.
- **Cultural Sensitivity**: Respecting and utilizing the correct language variant reflects cultural awareness and sensitivity.
|
closed
|
2023-06-09T19:20:23Z
|
2023-06-10T09:10:38Z
|
https://github.com/LAION-AI/Open-Assistant/issues/3368
|
[] |
salvacarrion
| 0
|
horovod/horovod
|
deep-learning
| 3,883
|
Installation failure
|
**Environment:**
1. Framework: PyTorch
2. Framework version: 1.11.0
3. Horovod version: 0.27.0
4. MPI version: openmpi-4.1.4
5. CUDA version: 11.3
6. NCCL version: 2.9.9_1
7. Python version: 3.9
8. Spark / PySpark version:
9. Ray version:
10. OS and version: ubuntu18.04
11. GCC version:9.3.0
12. CMake version:3.26.3
**Checklist:**
1. Did you search issues to find if somebody asked this question before? I don't know.
2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)?
3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)?
4. Did you check if your question is answered in the [troubleshooting guide](https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)? I can't actually describe what my problem is, so I don't know how to check.
**Bug report:**
Please describe erroneous behavior you're observing and steps to reproduce it.
This is my first time installing Horovod. I followed the requirements before installing, but I still get an install failure. I don't know what is wrong; could anyone help?
This is my command:
HOROVOD_NCCL_HOME=/usr/local/nccl/nccl_2.9.9-1+cuda11.3_x86_64 HOROVOD_GPU_ALLREDUCE=NCCL HOROVOD_WITH_PYTORCH=1 pip install --no-cache-dir horovod/dist/horovod-0.27.0.tar.gz
My error output is the following:
Looking in indexes: https://repo.huaweicloud.com/repository/pypi/simple
Processing ./horovod/dist/horovod-0.27.0.tar.gz
Preparing metadata (setup.py) ... done
Requirement already satisfied: cloudpickle in ./miniconda3/envs/DL/lib/python3.9/site-packages (from horovod==0.27.0) (2.1.0)
Requirement already satisfied: psutil in ./miniconda3/envs/DL/lib/python3.9/site-packages (from horovod==0.27.0) (5.9.4)
Requirement already satisfied: pyyaml in ./miniconda3/envs/DL/lib/python3.9/site-packages (from horovod==0.27.0) (6.0)
Requirement already satisfied: packaging in ./miniconda3/envs/DL/lib/python3.9/site-packages (from horovod==0.27.0) (23.0)
Requirement already satisfied: cffi>=1.4.0 in ./miniconda3/envs/DL/lib/python3.9/site-packages (from horovod==0.27.0) (1.15.1)
Requirement already satisfied: pycparser in ./miniconda3/envs/DL/lib/python3.9/site-packages (from cffi>=1.4.0->horovod==0.27.0) (2.21)
Building wheels for collected packages: horovod
Building wheel for horovod (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [307 lines of output]
running bdist_wheel
running build
running build_py
creating build
creating build/lib.linux-x86_64-cpython-39
creating build/lib.linux-x86_64-cpython-39/horovod
copying horovod/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod
creating build/lib.linux-x86_64-cpython-39/horovod/_keras
copying horovod/_keras/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/_keras
copying horovod/_keras/callbacks.py -> build/lib.linux-x86_64-cpython-39/horovod/_keras
copying horovod/_keras/elastic.py -> build/lib.linux-x86_64-cpython-39/horovod/_keras
creating build/lib.linux-x86_64-cpython-39/horovod/common
copying horovod/common/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/common
copying horovod/common/basics.py -> build/lib.linux-x86_64-cpython-39/horovod/common
copying horovod/common/elastic.py -> build/lib.linux-x86_64-cpython-39/horovod/common
copying horovod/common/exceptions.py -> build/lib.linux-x86_64-cpython-39/horovod/common
copying horovod/common/process_sets.py -> build/lib.linux-x86_64-cpython-39/horovod/common
copying horovod/common/util.py -> build/lib.linux-x86_64-cpython-39/horovod/common
creating build/lib.linux-x86_64-cpython-39/horovod/data
copying horovod/data/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/data
copying horovod/data/data_loader_base.py -> build/lib.linux-x86_64-cpython-39/horovod/data
creating build/lib.linux-x86_64-cpython-39/horovod/keras
copying horovod/keras/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/keras
copying horovod/keras/callbacks.py -> build/lib.linux-x86_64-cpython-39/horovod/keras
copying horovod/keras/elastic.py -> build/lib.linux-x86_64-cpython-39/horovod/keras
creating build/lib.linux-x86_64-cpython-39/horovod/mxnet
copying horovod/mxnet/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/mxnet
copying horovod/mxnet/compression.py -> build/lib.linux-x86_64-cpython-39/horovod/mxnet
copying horovod/mxnet/functions.py -> build/lib.linux-x86_64-cpython-39/horovod/mxnet
copying horovod/mxnet/mpi_ops.py -> build/lib.linux-x86_64-cpython-39/horovod/mxnet
creating build/lib.linux-x86_64-cpython-39/horovod/ray
copying horovod/ray/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/ray
copying horovod/ray/adapter.py -> build/lib.linux-x86_64-cpython-39/horovod/ray
copying horovod/ray/driver_service.py -> build/lib.linux-x86_64-cpython-39/horovod/ray
copying horovod/ray/elastic.py -> build/lib.linux-x86_64-cpython-39/horovod/ray
copying horovod/ray/elastic_v2.py -> build/lib.linux-x86_64-cpython-39/horovod/ray
copying horovod/ray/ray_logger.py -> build/lib.linux-x86_64-cpython-39/horovod/ray
copying horovod/ray/runner.py -> build/lib.linux-x86_64-cpython-39/horovod/ray
copying horovod/ray/strategy.py -> build/lib.linux-x86_64-cpython-39/horovod/ray
copying horovod/ray/utils.py -> build/lib.linux-x86_64-cpython-39/horovod/ray
copying horovod/ray/worker.py -> build/lib.linux-x86_64-cpython-39/horovod/ray
creating build/lib.linux-x86_64-cpython-39/horovod/runner
copying horovod/runner/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/runner
copying horovod/runner/gloo_run.py -> build/lib.linux-x86_64-cpython-39/horovod/runner
copying horovod/runner/js_run.py -> build/lib.linux-x86_64-cpython-39/horovod/runner
copying horovod/runner/launch.py -> build/lib.linux-x86_64-cpython-39/horovod/runner
copying horovod/runner/mpi_run.py -> build/lib.linux-x86_64-cpython-39/horovod/runner
copying horovod/runner/run_task.py -> build/lib.linux-x86_64-cpython-39/horovod/runner
copying horovod/runner/task_fn.py -> build/lib.linux-x86_64-cpython-39/horovod/runner
creating build/lib.linux-x86_64-cpython-39/horovod/spark
copying horovod/spark/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/spark
copying horovod/spark/conf.py -> build/lib.linux-x86_64-cpython-39/horovod/spark
copying horovod/spark/gloo_run.py -> build/lib.linux-x86_64-cpython-39/horovod/spark
copying horovod/spark/mpi_run.py -> build/lib.linux-x86_64-cpython-39/horovod/spark
copying horovod/spark/runner.py -> build/lib.linux-x86_64-cpython-39/horovod/spark
creating build/lib.linux-x86_64-cpython-39/horovod/tensorflow
copying horovod/tensorflow/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/tensorflow
copying horovod/tensorflow/compression.py -> build/lib.linux-x86_64-cpython-39/horovod/tensorflow
copying horovod/tensorflow/elastic.py -> build/lib.linux-x86_64-cpython-39/horovod/tensorflow
copying horovod/tensorflow/functions.py -> build/lib.linux-x86_64-cpython-39/horovod/tensorflow
copying horovod/tensorflow/gradient_aggregation.py -> build/lib.linux-x86_64-cpython-39/horovod/tensorflow
copying horovod/tensorflow/gradient_aggregation_eager.py -> build/lib.linux-x86_64-cpython-39/horovod/tensorflow
copying horovod/tensorflow/mpi_ops.py -> build/lib.linux-x86_64-cpython-39/horovod/tensorflow
copying horovod/tensorflow/sync_batch_norm.py -> build/lib.linux-x86_64-cpython-39/horovod/tensorflow
copying horovod/tensorflow/util.py -> build/lib.linux-x86_64-cpython-39/horovod/tensorflow
creating build/lib.linux-x86_64-cpython-39/horovod/torch
copying horovod/torch/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/torch
copying horovod/torch/compression.py -> build/lib.linux-x86_64-cpython-39/horovod/torch
copying horovod/torch/functions.py -> build/lib.linux-x86_64-cpython-39/horovod/torch
copying horovod/torch/mpi_ops.py -> build/lib.linux-x86_64-cpython-39/horovod/torch
copying horovod/torch/optimizer.py -> build/lib.linux-x86_64-cpython-39/horovod/torch
copying horovod/torch/sync_batch_norm.py -> build/lib.linux-x86_64-cpython-39/horovod/torch
creating build/lib.linux-x86_64-cpython-39/horovod/runner/common
copying horovod/runner/common/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/common
creating build/lib.linux-x86_64-cpython-39/horovod/runner/driver
copying horovod/runner/driver/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/driver
copying horovod/runner/driver/driver_service.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/driver
creating build/lib.linux-x86_64-cpython-39/horovod/runner/elastic
copying horovod/runner/elastic/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/elastic
copying horovod/runner/elastic/constants.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/elastic
copying horovod/runner/elastic/discovery.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/elastic
copying horovod/runner/elastic/driver.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/elastic
copying horovod/runner/elastic/registration.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/elastic
copying horovod/runner/elastic/rendezvous.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/elastic
copying horovod/runner/elastic/settings.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/elastic
copying horovod/runner/elastic/worker.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/elastic
creating build/lib.linux-x86_64-cpython-39/horovod/runner/http
copying horovod/runner/http/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/http
copying horovod/runner/http/http_client.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/http
copying horovod/runner/http/http_server.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/http
creating build/lib.linux-x86_64-cpython-39/horovod/runner/task
copying horovod/runner/task/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/task
copying horovod/runner/task/task_service.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/task
creating build/lib.linux-x86_64-cpython-39/horovod/runner/util
copying horovod/runner/util/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/util
copying horovod/runner/util/cache.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/util
copying horovod/runner/util/lsf.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/util
copying horovod/runner/util/network.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/util
copying horovod/runner/util/remote.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/util
copying horovod/runner/util/streams.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/util
copying horovod/runner/util/threads.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/util
creating build/lib.linux-x86_64-cpython-39/horovod/runner/common/service
copying horovod/runner/common/service/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/common/service
copying horovod/runner/common/service/compute_service.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/common/service
copying horovod/runner/common/service/driver_service.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/common/service
copying horovod/runner/common/service/task_service.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/common/service
creating build/lib.linux-x86_64-cpython-39/horovod/runner/common/util
copying horovod/runner/common/util/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/common/util
copying horovod/runner/common/util/codec.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/common/util
copying horovod/runner/common/util/config_parser.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/common/util
copying horovod/runner/common/util/env.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/common/util
copying horovod/runner/common/util/host_hash.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/common/util
copying horovod/runner/common/util/hosts.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/common/util
copying horovod/runner/common/util/network.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/common/util
copying horovod/runner/common/util/safe_shell_exec.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/common/util
copying horovod/runner/common/util/secret.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/common/util
copying horovod/runner/common/util/settings.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/common/util
copying horovod/runner/common/util/timeout.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/common/util
copying horovod/runner/common/util/tiny_shell_exec.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/common/util
creating build/lib.linux-x86_64-cpython-39/horovod/spark/common
copying horovod/spark/common/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/common
copying horovod/spark/common/_namedtuple_fix.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/common
copying horovod/spark/common/backend.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/common
copying horovod/spark/common/cache.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/common
copying horovod/spark/common/constants.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/common
copying horovod/spark/common/datamodule.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/common
copying horovod/spark/common/estimator.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/common
copying horovod/spark/common/params.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/common
copying horovod/spark/common/serialization.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/common
copying horovod/spark/common/store.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/common
copying horovod/spark/common/util.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/common
creating build/lib.linux-x86_64-cpython-39/horovod/spark/data_loaders
copying horovod/spark/data_loaders/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/data_loaders
copying horovod/spark/data_loaders/pytorch_data_loaders.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/data_loaders
creating build/lib.linux-x86_64-cpython-39/horovod/spark/driver
copying horovod/spark/driver/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/driver
copying horovod/spark/driver/driver_service.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/driver
copying horovod/spark/driver/host_discovery.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/driver
copying horovod/spark/driver/job_id.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/driver
copying horovod/spark/driver/mpirun_rsh.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/driver
copying horovod/spark/driver/rendezvous.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/driver
copying horovod/spark/driver/rsh.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/driver
creating build/lib.linux-x86_64-cpython-39/horovod/spark/keras
copying horovod/spark/keras/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/keras
copying horovod/spark/keras/bare.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/keras
copying horovod/spark/keras/datamodule.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/keras
copying horovod/spark/keras/estimator.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/keras
copying horovod/spark/keras/optimizer.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/keras
copying horovod/spark/keras/remote.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/keras
copying horovod/spark/keras/tensorflow.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/keras
copying horovod/spark/keras/util.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/keras
creating build/lib.linux-x86_64-cpython-39/horovod/spark/lightning
copying horovod/spark/lightning/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/lightning
copying horovod/spark/lightning/datamodule.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/lightning
copying horovod/spark/lightning/estimator.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/lightning
copying horovod/spark/lightning/legacy.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/lightning
copying horovod/spark/lightning/remote.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/lightning
copying horovod/spark/lightning/util.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/lightning
creating build/lib.linux-x86_64-cpython-39/horovod/spark/task
copying horovod/spark/task/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/task
copying horovod/spark/task/gloo_exec_fn.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/task
copying horovod/spark/task/mpirun_exec_fn.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/task
copying horovod/spark/task/task_info.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/task
copying horovod/spark/task/task_service.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/task
creating build/lib.linux-x86_64-cpython-39/horovod/spark/tensorflow
copying horovod/spark/tensorflow/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/tensorflow
copying horovod/spark/tensorflow/compute_worker.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/tensorflow
creating build/lib.linux-x86_64-cpython-39/horovod/spark/torch
copying horovod/spark/torch/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/torch
copying horovod/spark/torch/datamodule.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/torch
copying horovod/spark/torch/estimator.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/torch
copying horovod/spark/torch/remote.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/torch
copying horovod/spark/torch/util.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/torch
creating build/lib.linux-x86_64-cpython-39/horovod/tensorflow/data
copying horovod/tensorflow/data/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/tensorflow/data
copying horovod/tensorflow/data/compute_service.py -> build/lib.linux-x86_64-cpython-39/horovod/tensorflow/data
copying horovod/tensorflow/data/compute_worker.py -> build/lib.linux-x86_64-cpython-39/horovod/tensorflow/data
creating build/lib.linux-x86_64-cpython-39/horovod/tensorflow/keras
copying horovod/tensorflow/keras/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/tensorflow/keras
copying horovod/tensorflow/keras/callbacks.py -> build/lib.linux-x86_64-cpython-39/horovod/tensorflow/keras
copying horovod/tensorflow/keras/elastic.py -> build/lib.linux-x86_64-cpython-39/horovod/tensorflow/keras
creating build/lib.linux-x86_64-cpython-39/horovod/torch/elastic
copying horovod/torch/elastic/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/torch/elastic
copying horovod/torch/elastic/sampler.py -> build/lib.linux-x86_64-cpython-39/horovod/torch/elastic
copying horovod/torch/elastic/state.py -> build/lib.linux-x86_64-cpython-39/horovod/torch/elastic
running build_ext
Running CMake in build/temp.linux-x86_64-cpython-39/RelWithDebInfo:
cmake /tmp/pip-req-build-bnegxg7f -DCMAKE_BUILD_TYPE=RelWithDebInfo -DCMAKE_LIBRARY_OUTPUT_DIRECTORY_RELWITHDEBINFO=/tmp/pip-req-build-bnegxg7f/build/lib.linux-x86_64-cpython-39 -DPYTHON_EXECUTABLE:FILEPATH=/root/miniconda3/envs/DL/bin/python
cmake --build . --config RelWithDebInfo -- -j8 VERBOSE=1
-- Could not find CCache. Consider installing CCache to speed up compilation.
-- The CXX compiler identification is GNU 9.3.0
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Build architecture flags: -mf16c -mavx -mfma
-- Using command /root/miniconda3/envs/DL/bin/python
-- Found MPI_CXX: /usr/local/openmpi/openmpi-4.1.4/build/lib/libmpi.so (found version "3.1")
-- Found MPI: TRUE (found version "3.1")
-- Looking for a CUDA compiler
-- Looking for a CUDA compiler - /usr/local/cuda/bin/nvcc
-- The CUDA compiler identification is NVIDIA 11.3.109
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc - skipped
-- Detecting CUDA compile features
-- Detecting CUDA compile features - done
-- Found CUDAToolkit: /usr/local/cuda/include (found version "11.3.109")
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Linking against static NCCL library
-- Found NCCL: /usr/local/nccl/nccl_2.9.9-1+cuda11.3_x86_64/include
-- Determining NCCL version from the header file: /usr/local/nccl/nccl_2.9.9-1+cuda11.3_x86_64/include/nccl.h
-- NCCL_MAJOR_VERSION: 2
-- NCCL_VERSION_CODE: 20909
-- Found NCCL (include: /usr/local/nccl/nccl_2.9.9-1+cuda11.3_x86_64/include, library: /usr/local/nccl/nccl_2.9.9-1+cuda11.3_x86_64/lib/libnccl_static.a)
-- Found NVTX: /usr/local/cuda/include
-- Found NVTX (include: /usr/local/cuda/include, library: dl)
CMake Error at CMakeLists.txt:299 (add_subdirectory):
add_subdirectory given source "third_party/gloo" which is not an existing
directory.
CMake Error at CMakeLists.txt:301 (target_compile_definitions):
Cannot specify compile definitions for target "gloo" which is not built by
this project.
Traceback (most recent call last):
File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'tensorflow'
-- Could NOT find Tensorflow (missing: Tensorflow_LIBRARIES) (Required is at least version "1.15.0")
-- Found Pytorch: 1.11.0 (found suitable version "1.11.0", minimum required is "1.5.0")
Traceback (most recent call last):
File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'mxnet'
-- Could NOT find Mxnet (missing: Mxnet_LIBRARIES) (Required is at least version "1.4.1")
-- HVD_NVCC_COMPILE_FLAGS = -O3 -Xcompiler -fPIC -gencode arch=compute_35,code=sm_35 -gencode arch=compute_37,code=sm_37 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_53,code=sm_53 -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_62,code=sm_62 -gencode arch=compute_70,code=sm_70 -gencode arch=compute_72,code=sm_72 -gencode arch=compute_75,code=sm_75 -gencode arch=compute_80,code=sm_80 -gencode arch=compute_86,code=\"sm_86,compute_86\"
CMake Error at CMakeLists.txt:365 (file):
file COPY cannot find "/tmp/pip-req-build-bnegxg7f/third_party/gloo": No
such file or directory.
CMake Error at CMakeLists.txt:366 (file):
file failed to open for reading (No such file or directory):
/tmp/pip-req-build-bnegxg7f/third_party/compatible_gloo/gloo/CMakeLists.txt
CMake Error at CMakeLists.txt:369 (add_subdirectory):
The source directory
/tmp/pip-req-build-bnegxg7f/third_party/compatible_gloo
does not contain a CMakeLists.txt file.
CMake Error at CMakeLists.txt:370 (target_compile_definitions):
Cannot specify compile definitions for target "compatible_gloo" which is
not built by this project.
-- Configuring incomplete, errors occurred!
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/tmp/pip-req-build-bnegxg7f/setup.py", line 213, in <module>
setup(name='horovod',
File "/root/miniconda3/envs/DL/lib/python3.9/site-packages/setuptools/__init__.py", line 87, in setup
return distutils.core.setup(**attrs)
File "/root/miniconda3/envs/DL/lib/python3.9/site-packages/setuptools/_distutils/core.py", line 185, in setup
return run_commands(dist)
File "/root/miniconda3/envs/DL/lib/python3.9/site-packages/setuptools/_distutils/core.py", line 201, in run_commands
dist.run_commands()
File "/root/miniconda3/envs/DL/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 969, in run_commands
self.run_command(cmd)
File "/root/miniconda3/envs/DL/lib/python3.9/site-packages/setuptools/dist.py", line 1208, in run_command
super().run_command(command)
File "/root/miniconda3/envs/DL/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
cmd_obj.run()
File "/root/miniconda3/envs/DL/lib/python3.9/site-packages/wheel/bdist_wheel.py", line 325, in run
self.run_command("build")
File "/root/miniconda3/envs/DL/lib/python3.9/site-packages/setuptools/_distutils/cmd.py", line 318, in run_command
self.distribution.run_command(command)
File "/root/miniconda3/envs/DL/lib/python3.9/site-packages/setuptools/dist.py", line 1208, in run_command
super().run_command(command)
File "/root/miniconda3/envs/DL/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
cmd_obj.run()
File "/root/miniconda3/envs/DL/lib/python3.9/site-packages/setuptools/_distutils/command/build.py", line 132, in run
self.run_command(cmd_name)
File "/root/miniconda3/envs/DL/lib/python3.9/site-packages/setuptools/_distutils/cmd.py", line 318, in run_command
self.distribution.run_command(command)
File "/root/miniconda3/envs/DL/lib/python3.9/site-packages/setuptools/dist.py", line 1208, in run_command
super().run_command(command)
File "/root/miniconda3/envs/DL/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
cmd_obj.run()
File "/root/miniconda3/envs/DL/lib/python3.9/site-packages/setuptools/command/build_ext.py", line 84, in run
_build_ext.run(self)
File "/root/miniconda3/envs/DL/lib/python3.9/site-packages/setuptools/_distutils/command/build_ext.py", line 346, in run
self.build_extensions()
File "/tmp/pip-req-build-bnegxg7f/setup.py", line 145, in build_extensions
subprocess.check_call(command, cwd=cmake_build_dir)
File "/root/miniconda3/envs/DL/lib/python3.9/subprocess.py", line 373, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '/tmp/pip-req-build-bnegxg7f', '-DCMAKE_BUILD_TYPE=RelWithDebInfo', '-DCMAKE_LIBRARY_OUTPUT_DIRECTORY_RELWITHDEBINFO=/tmp/pip-req-build-bnegxg7f/build/lib.linux-x86_64-cpython-39', '-DPYTHON_EXECUTABLE:FILEPATH=/root/miniconda3/envs/DL/bin/python']' returned non-zero exit status 1.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for horovod
Running setup.py clean for horovod
Failed to build horovod
Installing collected packages: horovod
Running setup.py install for horovod ... error
error: subprocess-exited-with-error
× Running setup.py install for horovod did not run successfully.
│ exit code: 1
╰─> [305 lines of output]
running install
/root/miniconda3/envs/DL/lib/python3.9/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
running build
running build_py
creating build
creating build/lib.linux-x86_64-cpython-39
creating build/lib.linux-x86_64-cpython-39/horovod
copying horovod/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod
creating build/lib.linux-x86_64-cpython-39/horovod/_keras
copying horovod/_keras/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/_keras
copying horovod/_keras/callbacks.py -> build/lib.linux-x86_64-cpython-39/horovod/_keras
copying horovod/_keras/elastic.py -> build/lib.linux-x86_64-cpython-39/horovod/_keras
creating build/lib.linux-x86_64-cpython-39/horovod/common
copying horovod/common/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/common
copying horovod/common/basics.py -> build/lib.linux-x86_64-cpython-39/horovod/common
copying horovod/common/elastic.py -> build/lib.linux-x86_64-cpython-39/horovod/common
copying horovod/common/exceptions.py -> build/lib.linux-x86_64-cpython-39/horovod/common
copying horovod/common/process_sets.py -> build/lib.linux-x86_64-cpython-39/horovod/common
copying horovod/common/util.py -> build/lib.linux-x86_64-cpython-39/horovod/common
creating build/lib.linux-x86_64-cpython-39/horovod/data
copying horovod/data/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/data
copying horovod/data/data_loader_base.py -> build/lib.linux-x86_64-cpython-39/horovod/data
creating build/lib.linux-x86_64-cpython-39/horovod/keras
copying horovod/keras/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/keras
copying horovod/keras/callbacks.py -> build/lib.linux-x86_64-cpython-39/horovod/keras
copying horovod/keras/elastic.py -> build/lib.linux-x86_64-cpython-39/horovod/keras
creating build/lib.linux-x86_64-cpython-39/horovod/mxnet
copying horovod/mxnet/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/mxnet
copying horovod/mxnet/compression.py -> build/lib.linux-x86_64-cpython-39/horovod/mxnet
copying horovod/mxnet/functions.py -> build/lib.linux-x86_64-cpython-39/horovod/mxnet
copying horovod/mxnet/mpi_ops.py -> build/lib.linux-x86_64-cpython-39/horovod/mxnet
creating build/lib.linux-x86_64-cpython-39/horovod/ray
copying horovod/ray/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/ray
copying horovod/ray/adapter.py -> build/lib.linux-x86_64-cpython-39/horovod/ray
copying horovod/ray/driver_service.py -> build/lib.linux-x86_64-cpython-39/horovod/ray
copying horovod/ray/elastic.py -> build/lib.linux-x86_64-cpython-39/horovod/ray
copying horovod/ray/elastic_v2.py -> build/lib.linux-x86_64-cpython-39/horovod/ray
copying horovod/ray/ray_logger.py -> build/lib.linux-x86_64-cpython-39/horovod/ray
copying horovod/ray/runner.py -> build/lib.linux-x86_64-cpython-39/horovod/ray
copying horovod/ray/strategy.py -> build/lib.linux-x86_64-cpython-39/horovod/ray
copying horovod/ray/utils.py -> build/lib.linux-x86_64-cpython-39/horovod/ray
copying horovod/ray/worker.py -> build/lib.linux-x86_64-cpython-39/horovod/ray
creating build/lib.linux-x86_64-cpython-39/horovod/runner
copying horovod/runner/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/runner
copying horovod/runner/gloo_run.py -> build/lib.linux-x86_64-cpython-39/horovod/runner
copying horovod/runner/js_run.py -> build/lib.linux-x86_64-cpython-39/horovod/runner
copying horovod/runner/launch.py -> build/lib.linux-x86_64-cpython-39/horovod/runner
copying horovod/runner/mpi_run.py -> build/lib.linux-x86_64-cpython-39/horovod/runner
copying horovod/runner/run_task.py -> build/lib.linux-x86_64-cpython-39/horovod/runner
copying horovod/runner/task_fn.py -> build/lib.linux-x86_64-cpython-39/horovod/runner
creating build/lib.linux-x86_64-cpython-39/horovod/spark
copying horovod/spark/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/spark
copying horovod/spark/conf.py -> build/lib.linux-x86_64-cpython-39/horovod/spark
copying horovod/spark/gloo_run.py -> build/lib.linux-x86_64-cpython-39/horovod/spark
copying horovod/spark/mpi_run.py -> build/lib.linux-x86_64-cpython-39/horovod/spark
copying horovod/spark/runner.py -> build/lib.linux-x86_64-cpython-39/horovod/spark
creating build/lib.linux-x86_64-cpython-39/horovod/tensorflow
copying horovod/tensorflow/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/tensorflow
copying horovod/tensorflow/compression.py -> build/lib.linux-x86_64-cpython-39/horovod/tensorflow
copying horovod/tensorflow/elastic.py -> build/lib.linux-x86_64-cpython-39/horovod/tensorflow
copying horovod/tensorflow/functions.py -> build/lib.linux-x86_64-cpython-39/horovod/tensorflow
copying horovod/tensorflow/gradient_aggregation.py -> build/lib.linux-x86_64-cpython-39/horovod/tensorflow
copying horovod/tensorflow/gradient_aggregation_eager.py -> build/lib.linux-x86_64-cpython-39/horovod/tensorflow
copying horovod/tensorflow/mpi_ops.py -> build/lib.linux-x86_64-cpython-39/horovod/tensorflow
copying horovod/tensorflow/sync_batch_norm.py -> build/lib.linux-x86_64-cpython-39/horovod/tensorflow
copying horovod/tensorflow/util.py -> build/lib.linux-x86_64-cpython-39/horovod/tensorflow
creating build/lib.linux-x86_64-cpython-39/horovod/torch
copying horovod/torch/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/torch
copying horovod/torch/compression.py -> build/lib.linux-x86_64-cpython-39/horovod/torch
copying horovod/torch/functions.py -> build/lib.linux-x86_64-cpython-39/horovod/torch
copying horovod/torch/mpi_ops.py -> build/lib.linux-x86_64-cpython-39/horovod/torch
copying horovod/torch/optimizer.py -> build/lib.linux-x86_64-cpython-39/horovod/torch
copying horovod/torch/sync_batch_norm.py -> build/lib.linux-x86_64-cpython-39/horovod/torch
creating build/lib.linux-x86_64-cpython-39/horovod/runner/common
copying horovod/runner/common/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/common
creating build/lib.linux-x86_64-cpython-39/horovod/runner/driver
copying horovod/runner/driver/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/driver
copying horovod/runner/driver/driver_service.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/driver
creating build/lib.linux-x86_64-cpython-39/horovod/runner/elastic
copying horovod/runner/elastic/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/elastic
copying horovod/runner/elastic/constants.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/elastic
copying horovod/runner/elastic/discovery.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/elastic
copying horovod/runner/elastic/driver.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/elastic
copying horovod/runner/elastic/registration.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/elastic
copying horovod/runner/elastic/rendezvous.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/elastic
copying horovod/runner/elastic/settings.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/elastic
copying horovod/runner/elastic/worker.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/elastic
creating build/lib.linux-x86_64-cpython-39/horovod/runner/http
copying horovod/runner/http/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/http
copying horovod/runner/http/http_client.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/http
copying horovod/runner/http/http_server.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/http
creating build/lib.linux-x86_64-cpython-39/horovod/runner/task
copying horovod/runner/task/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/task
copying horovod/runner/task/task_service.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/task
creating build/lib.linux-x86_64-cpython-39/horovod/runner/util
copying horovod/runner/util/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/util
copying horovod/runner/util/cache.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/util
copying horovod/runner/util/lsf.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/util
copying horovod/runner/util/network.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/util
copying horovod/runner/util/remote.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/util
copying horovod/runner/util/streams.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/util
copying horovod/runner/util/threads.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/util
creating build/lib.linux-x86_64-cpython-39/horovod/runner/common/service
copying horovod/runner/common/service/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/common/service
copying horovod/runner/common/service/compute_service.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/common/service
copying horovod/runner/common/service/driver_service.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/common/service
copying horovod/runner/common/service/task_service.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/common/service
creating build/lib.linux-x86_64-cpython-39/horovod/runner/common/util
copying horovod/runner/common/util/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/common/util
copying horovod/runner/common/util/codec.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/common/util
copying horovod/runner/common/util/config_parser.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/common/util
copying horovod/runner/common/util/env.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/common/util
copying horovod/runner/common/util/host_hash.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/common/util
copying horovod/runner/common/util/hosts.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/common/util
copying horovod/runner/common/util/network.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/common/util
copying horovod/runner/common/util/safe_shell_exec.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/common/util
copying horovod/runner/common/util/secret.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/common/util
copying horovod/runner/common/util/settings.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/common/util
copying horovod/runner/common/util/timeout.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/common/util
copying horovod/runner/common/util/tiny_shell_exec.py -> build/lib.linux-x86_64-cpython-39/horovod/runner/common/util
creating build/lib.linux-x86_64-cpython-39/horovod/spark/common
copying horovod/spark/common/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/common
copying horovod/spark/common/_namedtuple_fix.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/common
copying horovod/spark/common/backend.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/common
copying horovod/spark/common/cache.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/common
copying horovod/spark/common/constants.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/common
copying horovod/spark/common/datamodule.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/common
copying horovod/spark/common/estimator.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/common
copying horovod/spark/common/params.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/common
copying horovod/spark/common/serialization.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/common
copying horovod/spark/common/store.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/common
copying horovod/spark/common/util.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/common
creating build/lib.linux-x86_64-cpython-39/horovod/spark/data_loaders
copying horovod/spark/data_loaders/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/data_loaders
copying horovod/spark/data_loaders/pytorch_data_loaders.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/data_loaders
creating build/lib.linux-x86_64-cpython-39/horovod/spark/driver
copying horovod/spark/driver/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/driver
copying horovod/spark/driver/driver_service.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/driver
copying horovod/spark/driver/host_discovery.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/driver
copying horovod/spark/driver/job_id.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/driver
copying horovod/spark/driver/mpirun_rsh.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/driver
copying horovod/spark/driver/rendezvous.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/driver
copying horovod/spark/driver/rsh.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/driver
creating build/lib.linux-x86_64-cpython-39/horovod/spark/keras
copying horovod/spark/keras/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/keras
copying horovod/spark/keras/bare.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/keras
copying horovod/spark/keras/datamodule.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/keras
copying horovod/spark/keras/estimator.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/keras
copying horovod/spark/keras/optimizer.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/keras
copying horovod/spark/keras/remote.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/keras
copying horovod/spark/keras/tensorflow.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/keras
copying horovod/spark/keras/util.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/keras
creating build/lib.linux-x86_64-cpython-39/horovod/spark/lightning
copying horovod/spark/lightning/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/lightning
copying horovod/spark/lightning/datamodule.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/lightning
copying horovod/spark/lightning/estimator.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/lightning
copying horovod/spark/lightning/legacy.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/lightning
copying horovod/spark/lightning/remote.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/lightning
copying horovod/spark/lightning/util.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/lightning
creating build/lib.linux-x86_64-cpython-39/horovod/spark/task
copying horovod/spark/task/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/task
copying horovod/spark/task/gloo_exec_fn.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/task
copying horovod/spark/task/mpirun_exec_fn.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/task
copying horovod/spark/task/task_info.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/task
copying horovod/spark/task/task_service.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/task
creating build/lib.linux-x86_64-cpython-39/horovod/spark/tensorflow
copying horovod/spark/tensorflow/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/tensorflow
copying horovod/spark/tensorflow/compute_worker.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/tensorflow
creating build/lib.linux-x86_64-cpython-39/horovod/spark/torch
copying horovod/spark/torch/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/torch
copying horovod/spark/torch/datamodule.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/torch
copying horovod/spark/torch/estimator.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/torch
copying horovod/spark/torch/remote.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/torch
copying horovod/spark/torch/util.py -> build/lib.linux-x86_64-cpython-39/horovod/spark/torch
creating build/lib.linux-x86_64-cpython-39/horovod/tensorflow/data
copying horovod/tensorflow/data/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/tensorflow/data
copying horovod/tensorflow/data/compute_service.py -> build/lib.linux-x86_64-cpython-39/horovod/tensorflow/data
copying horovod/tensorflow/data/compute_worker.py -> build/lib.linux-x86_64-cpython-39/horovod/tensorflow/data
creating build/lib.linux-x86_64-cpython-39/horovod/tensorflow/keras
copying horovod/tensorflow/keras/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/tensorflow/keras
copying horovod/tensorflow/keras/callbacks.py -> build/lib.linux-x86_64-cpython-39/horovod/tensorflow/keras
copying horovod/tensorflow/keras/elastic.py -> build/lib.linux-x86_64-cpython-39/horovod/tensorflow/keras
creating build/lib.linux-x86_64-cpython-39/horovod/torch/elastic
copying horovod/torch/elastic/__init__.py -> build/lib.linux-x86_64-cpython-39/horovod/torch/elastic
copying horovod/torch/elastic/sampler.py -> build/lib.linux-x86_64-cpython-39/horovod/torch/elastic
copying horovod/torch/elastic/state.py -> build/lib.linux-x86_64-cpython-39/horovod/torch/elastic
running build_ext
Running CMake in build/temp.linux-x86_64-cpython-39/RelWithDebInfo:
cmake /tmp/pip-req-build-bnegxg7f -DCMAKE_BUILD_TYPE=RelWithDebInfo -DCMAKE_LIBRARY_OUTPUT_DIRECTORY_RELWITHDEBINFO=/tmp/pip-req-build-bnegxg7f/build/lib.linux-x86_64-cpython-39 -DPYTHON_EXECUTABLE:FILEPATH=/root/miniconda3/envs/DL/bin/python
cmake --build . --config RelWithDebInfo -- -j8 VERBOSE=1
-- Could not find CCache. Consider installing CCache to speed up compilation.
-- The CXX compiler identification is GNU 9.3.0
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Build architecture flags: -mf16c -mavx -mfma
-- Using command /root/miniconda3/envs/DL/bin/python
-- Found MPI_CXX: /usr/local/openmpi/openmpi-4.1.4/build/lib/libmpi.so (found version "3.1")
-- Found MPI: TRUE (found version "3.1")
-- Looking for a CUDA compiler
-- Looking for a CUDA compiler - /usr/local/cuda/bin/nvcc
-- The CUDA compiler identification is NVIDIA 11.3.109
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc - skipped
-- Detecting CUDA compile features
-- Detecting CUDA compile features - done
-- Found CUDAToolkit: /usr/local/cuda/include (found version "11.3.109")
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Linking against static NCCL library
-- Found NCCL: /usr/local/nccl/nccl_2.9.9-1+cuda11.3_x86_64/include
-- Determining NCCL version from the header file: /usr/local/nccl/nccl_2.9.9-1+cuda11.3_x86_64/include/nccl.h
-- NCCL_MAJOR_VERSION: 2
-- NCCL_VERSION_CODE: 20909
-- Found NCCL (include: /usr/local/nccl/nccl_2.9.9-1+cuda11.3_x86_64/include, library: /usr/local/nccl/nccl_2.9.9-1+cuda11.3_x86_64/lib/libnccl_static.a)
-- Found NVTX: /usr/local/cuda/include
-- Found NVTX (include: /usr/local/cuda/include, library: dl)
CMake Error at CMakeLists.txt:299 (add_subdirectory):
add_subdirectory given source "third_party/gloo" which is not an existing
directory.
CMake Error at CMakeLists.txt:301 (target_compile_definitions):
Cannot specify compile definitions for target "gloo" which is not built by
this project.
Traceback (most recent call last):
File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'tensorflow'
-- Could NOT find Tensorflow (missing: Tensorflow_LIBRARIES) (Required is at least version "1.15.0")
-- Found Pytorch: 1.11.0 (found suitable version "1.11.0", minimum required is "1.5.0")
Traceback (most recent call last):
File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'mxnet'
-- Could NOT find Mxnet (missing: Mxnet_LIBRARIES) (Required is at least version "1.4.1")
-- HVD_NVCC_COMPILE_FLAGS = -O3 -Xcompiler -fPIC -gencode arch=compute_35,code=sm_35 -gencode arch=compute_37,code=sm_37 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_53,code=sm_53 -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_62,code=sm_62 -gencode arch=compute_70,code=sm_70 -gencode arch=compute_72,code=sm_72 -gencode arch=compute_75,code=sm_75 -gencode arch=compute_80,code=sm_80 -gencode arch=compute_86,code=\"sm_86,compute_86\"
CMake Error at CMakeLists.txt:365 (file):
file COPY cannot find "/tmp/pip-req-build-bnegxg7f/third_party/gloo": No
such file or directory.
CMake Error at CMakeLists.txt:369 (add_subdirectory):
The source directory
/tmp/pip-req-build-bnegxg7f/third_party/compatible_gloo
does not contain a CMakeLists.txt file.
CMake Error at CMakeLists.txt:370 (target_compile_definitions):
Cannot specify compile definitions for target "compatible_gloo" which is
not built by this project.
-- Configuring incomplete, errors occurred!
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/tmp/pip-req-build-bnegxg7f/setup.py", line 213, in <module>
setup(name='horovod',
File "/root/miniconda3/envs/DL/lib/python3.9/site-packages/setuptools/__init__.py", line 87, in setup
return distutils.core.setup(**attrs)
File "/root/miniconda3/envs/DL/lib/python3.9/site-packages/setuptools/_distutils/core.py", line 185, in setup
return run_commands(dist)
File "/root/miniconda3/envs/DL/lib/python3.9/site-packages/setuptools/_distutils/core.py", line 201, in run_commands
dist.run_commands()
File "/root/miniconda3/envs/DL/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 969, in run_commands
self.run_command(cmd)
File "/root/miniconda3/envs/DL/lib/python3.9/site-packages/setuptools/dist.py", line 1208, in run_command
super().run_command(command)
File "/root/miniconda3/envs/DL/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
cmd_obj.run()
File "/root/miniconda3/envs/DL/lib/python3.9/site-packages/setuptools/command/install.py", line 68, in run
return orig.install.run(self)
File "/root/miniconda3/envs/DL/lib/python3.9/site-packages/setuptools/_distutils/command/install.py", line 698, in run
self.run_command('build')
File "/root/miniconda3/envs/DL/lib/python3.9/site-packages/setuptools/_distutils/cmd.py", line 318, in run_command
self.distribution.run_command(command)
File "/root/miniconda3/envs/DL/lib/python3.9/site-packages/setuptools/dist.py", line 1208, in run_command
super().run_command(command)
File "/root/miniconda3/envs/DL/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
cmd_obj.run()
File "/root/miniconda3/envs/DL/lib/python3.9/site-packages/setuptools/_distutils/command/build.py", line 132, in run
self.run_command(cmd_name)
File "/root/miniconda3/envs/DL/lib/python3.9/site-packages/setuptools/_distutils/cmd.py", line 318, in run_command
self.distribution.run_command(command)
File "/root/miniconda3/envs/DL/lib/python3.9/site-packages/setuptools/dist.py", line 1208, in run_command
super().run_command(command)
File "/root/miniconda3/envs/DL/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
cmd_obj.run()
File "/root/miniconda3/envs/DL/lib/python3.9/site-packages/setuptools/command/build_ext.py", line 84, in run
_build_ext.run(self)
File "/root/miniconda3/envs/DL/lib/python3.9/site-packages/setuptools/_distutils/command/build_ext.py", line 346, in run
self.build_extensions()
File "/tmp/pip-req-build-bnegxg7f/setup.py", line 145, in build_extensions
subprocess.check_call(command, cwd=cmake_build_dir)
File "/root/miniconda3/envs/DL/lib/python3.9/subprocess.py", line 373, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '/tmp/pip-req-build-bnegxg7f', '-DCMAKE_BUILD_TYPE=RelWithDebInfo', '-DCMAKE_LIBRARY_OUTPUT_DIRECTORY_RELWITHDEBINFO=/tmp/pip-req-build-bnegxg7f/build/lib.linux-x86_64-cpython-39', '-DPYTHON_EXECUTABLE:FILEPATH=/root/miniconda3/envs/DL/bin/python']' returned non-zero exit status 1.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure
× Encountered error while trying to install package.
╰─> horovod
note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.
|
closed
|
2023-04-07T14:54:33Z
|
2023-12-15T04:10:50Z
|
https://github.com/horovod/horovod/issues/3883
|
[
"wontfix"
] |
Dairhepon
| 2
|
vaexio/vaex
|
data-science
| 1,911
|
[FEATURE-REQUEST] Is there an equivalent of creating multiple virtual columns from function returning tuples?
|
Is there an equivalent in vaex of doing this in pandas?
```python
import pandas as pd

df = pd.DataFrame(data={'num': range(10)})
print(df)
#    num
# 0    0
# 1    1
# 2    2
# 3    3
# 4    4
# ...

def powers(x):
    return x, x**2

df['p1'], df['p2'] = zip(*df['num'].map(powers))
print(df)
#    num  p1  p2
# 0    0   0   0
# 1    1   1   1
# 2    2   2   4
# 3    3   3   9
# 4    4   4  16
# ...
```
I have a function that returns multiple values that I would like to store in separate columns (not this trivial example, since I understand it would be easy to add them as separate virtual columns, but in my case the values in the columns are related).
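Independent of vaex, the pattern being asked about here — fanning one function's tuple output into several named columns — can be sketched in plain Python. The helper name `unpack_into_columns` is illustrative only, not a vaex (or pandas) API:

```python
def unpack_into_columns(values, fn, names):
    """Apply fn to each value and fan its tuple result into named columns.

    Illustrative helper, not part of vaex; it isolates the
    zip(*map(...)) pattern from the pandas snippet above.
    """
    columns = {name: [] for name in names}
    for value in values:
        for name, out in zip(names, fn(value)):
            columns[name].append(out)
    return columns

cols = unpack_into_columns(range(5), lambda x: (x, x ** 2), ["p1", "p2"])
print(cols)  # {'p1': [0, 1, 2, 3, 4], 'p2': [0, 1, 4, 9, 16]}
```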
|
closed
|
2022-02-11T19:49:17Z
|
2022-02-14T13:06:45Z
|
https://github.com/vaexio/vaex/issues/1911
|
[] |
tdeboer-ilmn
| 1
|
waditu/tushare
|
pandas
| 1,244
|
[Bug][get_sz50s] Throw exception - read_excel() got an unexpected keyword argument `parse_cols`
|
[root cause]
There is no `parse_cols` parameter in `read_excel()` of pandas 0.25.3; it was deprecated and has since been removed (its replacement is `usecols`).
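One way to fix this on the library side is to pick the column-selection keyword at runtime, since `usecols` replaced `parse_cols` in newer pandas. A minimal sketch of the idea — the helper and the fake reader below are illustrative, not tushare or pandas code:

```python
import inspect

def read_excel_compat(read_excel, path, cols):
    """Call a read_excel-like function with whichever keyword it supports.

    Newer pandas accepts `usecols`; very old releases only knew `parse_cols`.
    """
    params = inspect.signature(read_excel).parameters
    keyword = "usecols" if "usecols" in params else "parse_cols"
    return read_excel(path, **{keyword: cols})

# Stand-in for pandas.read_excel in a modern release:
def fake_read_excel(path, usecols=None):
    return ("read", path, usecols)

print(read_excel_compat(fake_read_excel, "sz50.xlsx", [0, 1]))
# ('read', 'sz50.xlsx', [0, 1])
```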
|
open
|
2020-01-03T08:01:27Z
|
2020-01-08T05:55:51Z
|
https://github.com/waditu/tushare/issues/1244
|
[] |
bradleetw
| 1
|
kensho-technologies/graphql-compiler
|
graphql
| 150
|
Unable to resolve dependencies for pipenv lock
|
Off a clean master branch, running `pipenv lock` throws an error:
```
Locking [dev-packages] dependencies...
Warning: Your dependencies could not be resolved. You likely have a mismatch in your sub-dependencies.
You can use $ pipenv install --skip-lock to bypass this mechanism, then run $ pipenv graph to inspect the situation.
Hint: try $ pipenv lock --pre if it is a pre-release dependency.
Could not find a version that matches pluggy<0.7,>=0.5,>=0.7
Tried: 0.3.0, 0.3.0, 0.3.1, 0.3.1, 0.4.0, 0.4.0, 0.5.0, 0.5.1, 0.5.1, 0.5.2, 0.5.2, 0.6.0, 0.6.0, 0.6.0, 0.7.1, 0.7.1, 0.8.0, 0.8.0
There are incompatible versions in the resolved dependencies.
```
|
closed
|
2019-01-02T19:49:33Z
|
2019-01-03T02:00:19Z
|
https://github.com/kensho-technologies/graphql-compiler/issues/150
|
[
"bug"
] |
jmeulemans
| 0
|
jacobgil/pytorch-grad-cam
|
computer-vision
| 492
|
How should I assign a value to the "targets" when I draw the heatmap of yolov8-pose?
|

I don't think "targets=None" is right. In "base_cam.py":
```
if targets is None:
target_categories = np.argmax(outputs.cpu().data.numpy(), axis=-2)
targets = [ClassifierOutputTarget(
category) for category in target_categories]
```
obviously, "pose" output should not use "ClassifierOutputTarget"
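A pose head usually has no single class logit to maximize, so the usual route is a custom target callable. Below is a minimal sketch of that idea; the class name and the assumed `(x, y, conf)` keypoint layout are illustrative, and with a real yolov8-pose torch tensor you would slice the confidence channel of the actual output shape so gradients keep flowing:

```python
class PoseConfidenceTarget:
    """Hypothetical grad-cam target: score a detection by summing
    keypoint confidences instead of a class logit."""

    def __init__(self, conf_index=2):
        # Assumes each keypoint is laid out as (x, y, conf).
        self.conf_index = conf_index

    def __call__(self, model_output):
        # model_output: iterable of keypoints for one detection.
        return sum(kp[self.conf_index] for kp in model_output)

target = PoseConfidenceTarget()
keypoints = [(10.0, 20.0, 0.9), (12.0, 22.0, 0.8)]
print(target(keypoints))  # roughly 1.7
```

If something like this matched your model's output layout, you would pass `targets=[PoseConfidenceTarget()]` to the CAM call instead of `targets=None`.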
|
open
|
2024-03-15T03:42:07Z
|
2024-10-27T08:05:20Z
|
https://github.com/jacobgil/pytorch-grad-cam/issues/492
|
[] |
Xavier-W
| 1
|
omar2535/GraphQLer
|
graphql
| 81
|
[Enhancement] Materialize only SCALARs after MAX_DEPTH
|
# Overview
Currently, the materializer that creates queries and mutations stops expanding when MAX_DEPTH is hit. However, this doesn't guarantee a valid query, because NON_NULL constraints can appear at any depth of a payload.
For example:
**Schema:**
```graphql
type User {
id: ID!
name: String!
books: [Book!]!
}
type Book {
id: ID!
title: String!
authors: [User!]!
}
type Query {
user(id: ID!): User
books: [Book!]!
}
```
**Malformed query:**
```graphql
user(id: $userId) {
  name
  books {
    title
    authors
  }
}
```
at a DEPTH=3. However, due to the NON_NULL constraints on authors, we still need to provide extra context for each author (resolve only their name, not their books) or else this will error. A valid query would look like this:
**Valid query:**
```graphql
user(id: $userId) {
name
books {
title
authors {
name
}
}
}
```
## Deliverable
For the test-case above, only resolve scalars after `MAX_DEPTH` is reached.
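The deliverable can be sketched as a recursive field selector over a toy schema: below MAX_DEPTH expand everything, while at or beyond it keep only scalar fields, so NON_NULL object fields on the boundary still get a minimal valid selection set. The toy schema representation and function names are illustrative, not GraphQLer internals:

```python
SCALARS = {"ID", "String", "Int", "Float", "Boolean"}

# Toy schema: type name -> {field name -> field type name}
SCHEMA = {
    "User": {"id": "ID", "name": "String", "books": "Book"},
    "Book": {"id": "ID", "title": "String", "authors": "User"},
}

def materialize(type_name, depth, max_depth):
    """Build a selection set; at/after max_depth only scalars survive."""
    selections = []
    for field, field_type in SCHEMA[type_name].items():
        if field_type in SCALARS:
            selections.append(field)
        elif depth < max_depth:
            inner = materialize(field_type, depth + 1, max_depth)
            selections.append(field + " { " + " ".join(inner) + " }")
        # Object fields past max_depth are dropped; their NON_NULL
        # parents on the boundary got a scalar-only selection set.
    return selections

query = "user(id: $userId) { " + " ".join(materialize("User", 1, 3)) + " }"
print(query)
# user(id: $userId) { id name books { id title authors { id name } } }
```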
|
closed
|
2024-07-22T12:28:03Z
|
2024-08-06T05:51:54Z
|
https://github.com/omar2535/GraphQLer/issues/81
|
[
"➕enhancement",
"❕ Critical"
] |
omar2535
| 0
|
matterport/Mask_RCNN
|
tensorflow
| 2,599
|
Writing Mask R-CNN prediction pixel co-ordinates to text file
|
Does anyone know how to write the predicted mask co-ordinates to a text file? I want to import my prediction results to GIS.
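A stdlib-only sketch of one way to do this, assuming the per-instance boolean masks from `model.detect()` (the `r['masks']` name is from the Mask R-CNN demo; everything else here is hypothetical):

```python
def write_mask_coords(mask, path):
    """Write one 'row,col' pixel coordinate per line for a boolean mask.

    `mask` is a 2-D grid (list of lists, or a NumPy array sliced as
    r['masks'][:, :, i]) where truthy cells belong to the instance.
    """
    with open(path, "w") as f:
        for row, line in enumerate(mask):
            for col, on in enumerate(line):
                if on:
                    f.write(f"{row},{col}\n")

# with the demo's results this might look like:
#   for i in range(r['masks'].shape[-1]):
#       write_mask_coords(r['masks'][:, :, i], f"instance_{i}.txt")
```

Note these are pixel coordinates; importing into GIS would additionally require the image's geotransform to convert them to world coordinates.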
|
open
|
2021-06-15T01:04:39Z
|
2022-05-06T08:53:03Z
|
https://github.com/matterport/Mask_RCNN/issues/2599
|
[] |
ghost
| 1
|
flasgger/flasgger
|
rest-api
| 90
|
Handling Collections of Objects with marshmallow
|
In marshmallow I can handle many collections of an object. For the
https://github.com/rochacbruno/flasgger/blob/master/examples/marshmallow_apispec.py
example how do I handle a collection of "User" and have it reflect in the apidocs?
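In marshmallow terms a collection is usually `fields.Nested(UserSchema, many=True)` (or a `List` of `Nested`), and in the generated OpenAPI spec it should surface as an array of `$ref`s to the `User` definition. A sketch of that spec fragment (the definition name is an assumption):

```python
# what the apidocs would need to show for "many User": an array of $refs
users_response_spec = {
    "description": "A list of users",
    "schema": {
        "type": "array",
        "items": {"$ref": "#/definitions/User"},
    },
}
```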
|
closed
|
2017-04-23T11:12:53Z
|
2017-04-24T18:17:41Z
|
https://github.com/flasgger/flasgger/issues/90
|
[
"question"
] |
wobeng
| 1
|
sammchardy/python-binance
|
api
| 1,214
|
bot sets TAKE_PROFIT_MARKET too high 🤨
|
When my bot opens a MARKET order with a take profit of 0.001 percent, it sets the TAKE_PROFIT_MARKET much higher, around 0.03 percent. This happens both ways, LONG and SHORT.

|
open
|
2022-07-11T00:46:41Z
|
2022-07-11T00:46:41Z
|
https://github.com/sammchardy/python-binance/issues/1214
|
[] |
DrakoAI
| 0
|
python-arq/arq
|
asyncio
| 425
|
Task Progress
|
In Python RQ there is this `job.meta` dictionary that can be used to set a custom task progress indication, which is useful for showing progress in the UI. Does arq have an equivalent?
https://python-rq.org/docs/jobs/
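For reference, the RQ pattern in the linked docs is: the worker writes to `job.meta` and calls `job.save_meta()`, and the UI polls it. A stdlib stand-in for that flow (in practice the store is Redis, not a dict):

```python
# in-memory stand-in for the Redis hash RQ keeps behind job.meta
progress_store = {}

def save_meta(job_id, **meta):
    """Persist progress metadata for a job (Redis write in real RQ)."""
    progress_store.setdefault(job_id, {}).update(meta)

def long_task(job_id, items):
    for i, _ in enumerate(items, start=1):
        save_meta(job_id, progress=int(100 * i / len(items)))

long_task("job-1", range(4))
```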
|
open
|
2023-12-29T13:27:28Z
|
2024-03-09T01:56:42Z
|
https://github.com/python-arq/arq/issues/425
|
[] |
ronbeltran
| 1
|
noirbizarre/flask-restplus
|
flask
| 752
|
CSS injection security vulnerability in swagger-ui
|
closed
|
2019-11-26T17:04:37Z
|
2020-01-07T15:20:39Z
|
https://github.com/noirbizarre/flask-restplus/issues/752
|
[
"bug"
] |
khsu528
| 0
|
|
microsoft/Bringing-Old-Photos-Back-to-Life
|
pytorch
| 5
|
Colab "Try it on your own photos!" fails to save output
|
When running the section `"Try it on your own photos!"` the upload of the photo works fine:
```
EMILIA 78.png(image/png) - 6794251 bytes, last modified: 19/9/2020 - 100% done
Saving EMILIA 78.png to EMILIA 78.png
```
and even the pipeline seems to work:
```
Running Stage 1: Overall restoration
Now you are processing EMILIA 78.png
Skip EMILIA 78.png
Finish Stage 1 ...
Running Stage 2: Face Detection
Finish Stage 2 ...
Running Stage 3: Face Enhancement
The main GPU is
0
dataset [FaceTestDataset] of size 0 was created
The size of the latent vector size is [8,8]
Network [SPADEGenerator] was created. Total number of parameters: 92.1 million. To see the architecture, do print(network).
hi :)
Finish Stage 3 ...
Running Stage 4: Blending
Finish Stage 4 ...
All the processing is done. Please check the results.
```
but there is no file named "EMILIA 78.png" (the input name) in the output folder, so the next section "Visualize" will do nothing, since the folder `output/` has only the test images, but not this one. I assume this is due to the following line:
```
Skip EMILIA 78.png
```
so it seems it is related to https://github.com/microsoft/Bringing-Old-Photos-Back-to-Life/issues/3
So I manually rescaled:
```
EMILIA 78.png(image/png) - 1639881 bytes, last modified: 19/9/2020 - 100% done
Saving EMILIA 78.png to EMILIA 78.png
```
Closing here then, forwarding a question in the other issue.
|
closed
|
2020-09-19T18:23:35Z
|
2020-09-19T18:36:24Z
|
https://github.com/microsoft/Bringing-Old-Photos-Back-to-Life/issues/5
|
[] |
loretoparisi
| 0
|
geopandas/geopandas
|
pandas
| 3,210
|
BUG: up to 4 times slower in Linux compared to Windows when using gpd.read_file to read vector data
|
It is very slow to read vector data (including ESRI Shapefile and ESRI Geodatabase) with gpd.read_file on Linux (including Ubuntu and Rocky Linux). I have reproduced this issue on several different high-performance Linux servers and on Windows PCs, so it seems this is not a question of device performance but one related to GeoPandas or Fiona.
For instance, reading a point GDB layer with 100,000 points might take 10 seconds on Windows, but up to 40 seconds with gpd.read_file on Linux servers.
The same problem occurs when reading SHP data, but not as severely: reading with gpd.read_file on Linux takes only about twice as long.
By debugging the Python source code for GeoPandas, I've pinpointed the particularly slow code, which is GeoDataFrame.from_features in geodataframe.py. It looks like iterating over the return value of fiona.open is what is particularly slow. I'm not sure if this is due to GeoPandas or Fiona?

PYTHON DEPENDENCIES
-------------------
geopandas : 0.14.3
numpy : 1.24.3
pandas : 2.0.1
pyproj : 3.6.0
shapely : 2.0.1
fiona : 1.9.5
geoalchemy2: None
geopy : None
matplotlib : 3.7.1
mapclassify: 2.6.1
pygeos : None
pyogrio : 0.6.0
psycopg2 : None
pyarrow : None
rtree : 1.1.0
</details>
|
closed
|
2024-03-06T10:09:26Z
|
2024-03-07T06:37:03Z
|
https://github.com/geopandas/geopandas/issues/3210
|
[
"bug",
"needs triage"
] |
kwtk86
| 6
|
numba/numba
|
numpy
| 9,903
|
Iteration over a C-order array yields subarrays without C-order property
|
Using a `for` loop to iterate over the first axis of a C-order array yields subarrays that are not C-order, even though they should be (and likely are).
A workaround is to iterate over a range of indices and then extract the subarray in another statement.
See this example code:
```python
import numba
import numpy as np
@numba.njit('void(int64[::1])') # (Array(int64, 1, 'C', False, aligned=True),)
def add1(array):
array += 1
@numba.njit
def func1(array):
for i in range(len(array)):
add1(array[i]) # Jits and runs OK.
@numba.njit
def func2(array):
for row in array:
add1(row) # Jit error: (array(int64, 1d, A)) not known signature.
@numba.njit
def func3(array):
for i, row in enumerate(array):
add1(row) # Jit error: (array(int64, 1d, A)) not known signature.
def test1():
array = np.zeros((2, 2), np.int64)
func1(array)
# func2(array)
# func3(array)
print(array)
test1()
```
|
open
|
2025-01-23T06:10:49Z
|
2025-01-28T12:26:20Z
|
https://github.com/numba/numba/issues/9903
|
[
"bug - typing"
] |
hhoppe
| 1
|
abhiTronix/vidgear
|
dash
| 356
|
[Question]: How to set hwaccel to StreamGear for utilizing GPU, Using the option ?
|
### Issue guidelines
- [X] I've read the [Issue Guidelines](https://abhitronix.github.io/vidgear/latest/contribution/issue/#submitting-an-issue-guidelines) and wholeheartedly agree.
### Issue Checklist
- [X] I have searched open or closed [issues](https://github.com/abhiTronix/vidgear/issues) for my problem and found nothing related or helpful.
- [X] I have read the [Documentation](https://abhitronix.github.io/vidgear/latest) and found nothing related to my problem.
- [X] I have gone through the [Bonus Examples](https://abhitronix.github.io/vidgear/latest/help/get_help/#bonus-examples) and [FAQs](https://abhitronix.github.io/vidgear/latest/help/get_help/#frequently-asked-questions) and found nothing related or helpful.
### Describe your Question
"-hwaccel auto" before the inputs (-i ) tries to use hardware accelerated but when applied stream_params goes in last which makes problem for ffmpeg and does not utilize GPU.
### Terminal log output(Optional)
```shell
'ffmpeg.exe', '-y', '-i', 'sample-mp4-file-small.mp4', '-vcodec', 'libx264', '-vf', 'format=yuv420p', '-aspect', '4:3', '-crf', '20', '-tune', 'zerolatency', '-preset', 'veryfast', '-acodec', 'copy', '-map', '0', '-s:v:0', '320x240', '-b:v:0', '115k', '-b:a:0', '128k', '-map', '0', '-s:v:1', '640x480', '-b:v:1', '500k', '-b:a:1', '96k', '-map', '0', '-s:v:2', '1280x720', '-b:v:2', '500k', '-b:a:2', '128k', '-map', '0', '-s:v:3', '1920x1080', '-b:v:3', '500k', '-b:a:3', '128k', '-bf', '1', '-sc_threshold', '0', '-keyint_min', '30', '-g', '30', '-seg_duration', '5', '-use_timeline', '1', '-use_template', '1', '-adaptation_sets', 'id=0,streams=v id=1,streams=a', '-f', 'dash', '-hwaccel', 'auto', 'dash_out.mpd'
```
### Python Code(Optional)
```python
# Activate Single-Source Mode with valid video input
stream_params = {
"-hwaccel": "auto",
"-video_source": video_src,
# "-livestream": True,
"-streams": [
{"-resolution": "640x480", "-video_bitrate": "500k"},
{"-resolution": "1280x720", "-video_bitrate": "500k"},
{"-resolution": "1920x1080", "-video_bitrate": "500k"},
],
}
# describe a suitable manifest-file location/name and assign params
streamer = StreamGear(
output=save_path, format="dash", logging=True, **stream_params
)
# trancode source
streamer.transcode_source()
# terminate
streamer.terminate()
```
### VidGear Version
0.3.0
### Python version
3.9.12
### Operating System version
Windows 10 x64
### Any other Relevant Information?
_No response_
|
closed
|
2023-04-01T03:46:24Z
|
2023-04-04T09:12:26Z
|
https://github.com/abhiTronix/vidgear/issues/356
|
[
"QUESTION :question:",
"ANSWERED IN DOCS :book:"
] |
PraveenSuryawanshi-Dev
| 2
|
PrefectHQ/prefect
|
data-science
| 17,384
|
DaskTaskRunner does not handle Dask exceptions
|
### Bug summary
In `PrefectDaskFuture.wait`, it's assumed (per [this comment](https://github.com/PrefectHQ/prefect/blob/eb9d51f7c507adeed73a460c234594a815c9b4c0/src/integrations/prefect-dask/prefect_dask/task_runners.py#L120)) that either `future.result()` returns a `State` or times out. But there are other possible failure states described [here](https://distributed.dask.org/en/stable/killed.html) that are not handled, and lead to a fairly cryptic failure message:
```
File ~/model/.venv/lib/python3.10/site-packages/prefect/states.py:509, in get_state_exception(state)
507 default_message = "Run cancelled."
508 else:
--> 509 raise ValueError(f"Expected failed or crashed state got {state!r}.")
511 if isinstance(state.data, ResultRecord):
512 result = state.data.result
ValueError: Expected failed or crashed state got Running(message='', type=RUNNING, result=None).
```
To reproduce, I am running a simple flow like below on a local dask cluster:
```
from time import sleep
from prefect import flow, task
from prefect_dask import DaskTaskRunner
@flow(task_runner=DaskTaskRunner(address="localhost:8786"))
def wait_flow():
@task
def wait():
sleep(30)
return True
result = wait.submit()
return result
if __name__ == "__main__":
wait_flow()
```
and killing the dask worker until the scheduler declares the task suspicious and gives up:
```
distributed.scheduler.KilledWorker: Attempted to run task slow_task-a25511e480b7951003007c0155e3f56c on 1 different workers, but all those workers died while running it. The last worker that attempt to run the task was tcp://127.0.0.1:60949. Inspecting worker logs is often a good next step to diagnose what went wrong. For more information see https://distributed.dask.org/en/stable/killed.html.
```
Because of the `except: return` block linked above, we end up not returning any kind of State, leading to the `Expected failed or crashed state got Running` failure.
Not super familiar with the new State ontology yet but it seems like `KilledWorker` or `CommError` should probably result in a `Crashed` state?
cc Coiled folks @mrocklin @ntabris @jrbourbeau in case anyone has a stronger and better informed opinion than I on the proper behavior here 🙂 .
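Whatever the final ontology, the fix presumably amounts to mapping these Dask infrastructure errors to a terminal state instead of swallowing them in the bare `except`. A hypothetical sketch (these names are not Prefect APIs):

```python
def terminal_state_for(exc):
    """Map a Dask scheduler/comm failure to a terminal Prefect-style state.

    KilledWorker / comm errors mean the infrastructure died, so 'Crashed'
    fits better than leaving the task apparently Running.
    """
    infra_errors = {"KilledWorker", "CommClosedError", "TimeoutError"}
    if type(exc).__name__ in infra_errors:
        return "Crashed"
    return "Failed"

class KilledWorker(Exception):
    """Stand-in for distributed.scheduler.KilledWorker."""
```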
### Version info
```Text
Version: 3.2.7
API version: 0.8.4
Python version: 3.10.16
Git commit: d4d9001e
Built: Fri, Feb 21, 2025 7:39 PM
OS/Arch: darwin/arm64
Profile: default
Server type: cloud
Pydantic version: 2.9.2
Integrations:
prefect-dask: 0.3.2.dev1046+gbe1ba636e4.d20250305
prefect-gcp: 0.6.2
prefect-kubernetes: 0.5.3
```
### Additional context
_No response_
|
open
|
2025-03-05T15:26:03Z
|
2025-03-05T15:26:42Z
|
https://github.com/PrefectHQ/prefect/issues/17384
|
[
"bug"
] |
bnaul
| 0
|
sczhou/CodeFormer
|
pytorch
| 78
|
Error CUDA out of memory.
|
**Please, how can I fix this memory error?**
"Error CUDA out of memory. Tried to allocate 930.00 MiB (GPU 0; 3.82 GiB total capacity; 849.56 MiB already allocated; 938.75 MiB free; 1.66 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF"
Ubuntu 20.04.5 LTS / or Fedora 37 (rpmfusion cuda 11.7)
nVidia 1650
Anaconda3-2022.10-Linux-x86_64.sh (conda install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia)
After 1-2 workflows the error appears; the last image size was 600*900 and the image output is missing.
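The error text itself suggests one mitigation: setting `max_split_size_mb` via `PYTORCH_CUDA_ALLOC_CONF` before torch initializes CUDA (the value 128 below is a guess; smaller input images or CPU mode are the other options on a ~4 GB card):

```python
import os

# must be set before the first CUDA allocation (ideally before importing torch)
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

# import torch  # then run inference as usual
```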
|
closed
|
2022-12-04T19:18:06Z
|
2022-12-31T07:14:42Z
|
https://github.com/sczhou/CodeFormer/issues/78
|
[] |
idanka
| 6
|
d2l-ai/d2l-en
|
machine-learning
| 2,048
|
\n disappears when using if tab.selected
|




|
closed
|
2022-02-19T17:06:11Z
|
2022-06-18T08:37:17Z
|
https://github.com/d2l-ai/d2l-en/issues/2048
|
[] |
315930399
| 0
|
miguelgrinberg/flasky
|
flask
| 107
|
**kwargs
|
Hi Miguel
I have been following your book step by step and have enjoyed it, but I am a bit stuck on page 71, Example 6-3. The code is like this:
```python
# ...
def send_email(to, subject, template, **kwargs):
    ...
    msg.body = render_template(template + '.txt', **kwargs)
    ...
    mail.send(msg)
```
The book says: "The keyword arguments passed by the caller are given to the render_template() calls so they can be used by the templates that generate the email body."
Sorry for this question: when and how are the `**kwargs` values assigned? Do you have a piece of code to clarify this point? Thank you very much in advance :)
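A minimal stand-alone sketch of the flow: the caller's keyword arguments are captured into `kwargs` by `send_email()` and forwarded unchanged to the renderer, which uses them by name (a `format`-based stand-in for Flask's `render_template`):

```python
def render_template(template, **kwargs):
    # stand-in for Flask's render_template: the template pulls
    # values out of kwargs by name
    return template.format(**kwargs)

def send_email(to, subject, template, **kwargs):
    # whatever keywords the caller passed arrive here as the dict `kwargs`
    # and are forwarded as keywords again via **kwargs
    return render_template(template, **kwargs)

# the caller is where the values get assigned:
body = send_email("you@example.com", "Hi", "Hello, {user}!", user="John")
```

So the values are assigned at the call site of `send_email()`; `**kwargs` just relays them.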
|
closed
|
2016-01-22T17:18:22Z
|
2016-05-19T17:27:16Z
|
https://github.com/miguelgrinberg/flasky/issues/107
|
[
"question"
] |
masaguaro
| 3
|
OWASP/Nettacker
|
automation
| 111
|
wappalyzer_scan bug - nothing found on target
|
Hey, just want to inform you of this bug.
```
[+] checking https://www.owasp.org ...
[+] category: Wikis, frameworks: MediaWiki found!
[+] category: Video Players, frameworks: YouTube found!
[+] nothing found on https://www.owasp.org in wappalyzer_scan!
```
Regards.
_________________
**OS**: `Windows`
**OS Version**: `10`
**Python Version**: `2.7.13`
|
closed
|
2018-04-22T14:25:53Z
|
2018-05-19T19:42:20Z
|
https://github.com/OWASP/Nettacker/issues/111
|
[
"bug",
"done",
"bug fixed"
] |
Ali-Razmjoo
| 1
|
biolab/orange3
|
data-visualization
| 6,094
|
REST interface component - receiving JSON data into models
|
<!--
Thanks for taking the time to submit a feature request!
For the best chance at our team considering your request, please answer the following questions to the best of your ability.
-->
**What's your use case?**
<!-- In other words, what's your pain point? -->
<!-- Is your request related to a problem, or perhaps a frustration? -->
<!-- Tell us the story that led you to write this request. -->
Database systems such as Postgres can act as REST service servers and deliver their query result sets in the form of JSON. Allowing direct access to such functionality allows for data transfer without transformations that increase the risks and challenges.
**What's your proposed solution?**
<!-- Be specific, clear, and concise. -->
Utilise the current SQL object and allow the received data to be in the form of JSON, which internally can also be transformed into CSV.
**Are there any alternative solutions?**
|
closed
|
2022-08-13T10:37:36Z
|
2022-10-11T09:27:20Z
|
https://github.com/biolab/orange3/issues/6094
|
[] |
stenerikbjorling
| 2
|
nonebot/nonebot2
|
fastapi
| 3,022
|
Plugin: PM帮助
|
### PyPI project name
nonebot-plugin-pmhelp
### Plugin import package name
nonebot_plugin_pmhelp
### Tags
[{"label":"帮助","color":"#ea5252"}]
### Plugin configuration
_No response_
|
closed
|
2024-10-16T09:32:22Z
|
2024-10-18T14:39:52Z
|
https://github.com/nonebot/nonebot2/issues/3022
|
[
"Plugin"
] |
CM-Edelweiss
| 12
|
scikit-optimize/scikit-optimize
|
scikit-learn
| 1,205
|
Scikit optimize abandoned?
|
open
|
2024-02-23T07:08:31Z
|
2024-02-28T15:16:21Z
|
https://github.com/scikit-optimize/scikit-optimize/issues/1205
|
[] |
jobs-git
| 2
|
|
ResidentMario/geoplot
|
matplotlib
| 190
|
Cannot plot with projection
|
I tried to run plotting_with_geoplot.py, available at http://geopandas.org/gallery/plotting_with_geoplot.html, in Python 3.7. The plot does not appear, and I get the message below.
```
Geometry must be a Point or LineString
python: geos_ts_c.cpp:4038: int GEOSCoordSeq_getSize_r(GEOSContextHandle_t, const geos::geom::CoordinateSequence*, unsigned int*): Assertion `0 != cs' failed.`
```
I use the latest version of geoplot (0.4.0). Any suggestions?
|
closed
|
2019-11-19T01:43:12Z
|
2019-11-21T16:06:19Z
|
https://github.com/ResidentMario/geoplot/issues/190
|
[] |
sdayu
| 6
|
babysor/MockingBird
|
deep-learning
| 404
|
Preprocessing fails, asking for help
|
```
C:\Users\Administrator\Downloads\MockingBird-main\MockingBird-main>python pre.py C:\Users\Administrator\Downloads -d aidatatang_200zh -n 6
Using data from:
 C:\Users\Administrator\Downloads\aidatatang_200zh\corpus\train
Traceback (most recent call last):
  File "C:\Users\Administrator\Downloads\MockingBird-main\MockingBird-main\pre.py", line 74, in <module>
    preprocess_dataset(**vars(args))
  File "C:\Users\Administrator\Downloads\MockingBird-main\MockingBird-main\synthesizer\preprocess.py", line 45, in preprocess_dataset
    assert all(input_dir.exists() for input_dir in input_dirs)
AssertionError
```
|
closed
|
2022-02-25T14:46:34Z
|
2022-07-17T15:11:44Z
|
https://github.com/babysor/MockingBird/issues/404
|
[] |
Xlbnas
| 5
|
geex-arts/django-jet
|
django
| 98
|
Wrong dashboard settings documentation
|
Hi, I'm working on a project and I set the dashboard settings as described in the documentation:
``` python
JET_INDEX_DASHBOARD = 'jet.dashboard.DefaultIndexDashboard'
JET_APP_INDEX_DASHBOARD = 'jet.dashboard.DefaultAppIndexDashboard'
```
Putting these settings in my settings file, I get this problem:

Then I investigated the code and changed the settings to this:
``` python
JET_INDEX_DASHBOARD = 'jet.dashboard.dashboard.DefaultIndexDashboard'
JET_APP_INDEX_DASHBOARD = 'jet.dashboard.dashboard.DefaultAppIndexDashboard'
```
And this fixed the bug.
In addition, the violet theme doesn't work anymore; can you bring it back, or remove it from the documentation files?
|
closed
|
2016-08-10T03:55:02Z
|
2016-08-19T08:56:22Z
|
https://github.com/geex-arts/django-jet/issues/98
|
[] |
SalahAdDin
| 5
|
Lightning-AI/pytorch-lightning
|
pytorch
| 20,149
|
How to use Webdataset in DDP setting? ValueError: you need to add an explicit nodesplitter to your input pipeline for multi-node training
|
### Bug description
I would like to train a module on a webdataset in a multi-GPU DDP setup.
The documentation already has a [section for webdatasets](https://lightning.ai/docs/pytorch/stable/data/alternatives.html#webdataset) pointing to [this example implementation](https://github.com/tmbdev-archive/webdataset-lightning/blob/c3a5d5f10d890a170c57fc5aac4c0a17c7ae4dda/train.py#L86)
However, it seems this is outdated, since `ddp_equalize` is [no longer available in webdataset](https://github.com/webdataset/webdataset/blob/5b12e0ba78bfb64741add2533c5d1e4cf088ffff/FAQ.md?plain=1#L1265).
I followed all advices, made sure to use the `webdataset.WebLoader` and still get the following error:
```bash
...
[rank0]: ValueError: you need to add an explicit nodesplitter to your input pipeline for multi-node training
```
Will update soon with a small example
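For context, an explicit nodesplitter is just a callable that partitions the shard URL list by node rank (webdataset ships `wds.split_by_node` for this; the sketch below is a stdlib-only version of the idea):

```python
def split_by_node(urls, rank, world_size):
    """Give each DDP node an interleaved, disjoint subset of the shard list."""
    return urls[rank::world_size]

shards = [f"shard-{i:04d}.tar" for i in range(8)]
node0 = split_by_node(shards, rank=0, world_size=2)
node1 = split_by_node(shards, rank=1, world_size=2)
```

With the real library the equivalent is passing `nodesplitter=wds.split_by_node` when building the `WebDataset`; check the current webdataset docs, since this API has moved between releases.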
### What version are you seeing the problem on?
master
### How to reproduce the bug
_No response_
### Error messages and logs
```
# Error messages and logs here please
```
### Environment
<details>
<summary>Current environment</summary>
* CUDA:
- GPU:
- NVIDIA A10G
- NVIDIA A10G
- NVIDIA A10G
- NVIDIA A10G
- available: True
- version: 12.1
* Lightning:
- lightning: 2.3.3
- lightning-utilities: 0.11.6
- pytorch-lightning: 2.3.3
- torch: 2.4.0
- torchmetrics: 1.4.0.post0
- torchvision: 0.19.0
* Packages:
- aiobotocore: 2.13.1
- aiohttp: 3.9.5
- aioitertools: 0.11.0
- aiosignal: 1.3.1
- albucore: 0.0.12
- albumentations: 1.4.11
- annotated-types: 0.7.0
- asttokens: 2.4.1
- attrs: 23.2.0
- boto3: 1.34.106
- botocore: 1.34.131
- braceexpand: 0.1.7
- certifi: 2024.7.4
- charset-normalizer: 3.3.2
- click: 8.1.7
- decorator: 5.1.1
- docker-pycreds: 0.4.0
- docstring-parser: 0.16
- eval-type-backport: 0.2.0
- executing: 2.0.1
- filelock: 3.15.4
- frozenlist: 1.4.1
- fsspec: 2024.6.1
- gitdb: 4.0.11
- gitpython: 3.1.43
- huggingface-hub: 0.24.2
- idna: 3.7
- imageio: 2.34.2
- importlib-resources: 6.4.0
- ipython: 8.26.0
- jedi: 0.19.1
- jinja2: 3.1.4
- jmespath: 1.0.1
- joblib: 1.4.2
- jsonargparse: 4.32.0
- lazy-loader: 0.4
- lightning: 2.3.3
- lightning-utilities: 0.11.6
- litdata: 0.2.16
- loguru: 0.7.2
- markupsafe: 2.1.5
- matplotlib-inline: 0.1.7
- mpmath: 1.3.0
- multidict: 6.0.5
- networkx: 3.3
- numpy: 1.26.4
- nvidia-cublas-cu12: 12.1.3.1
- nvidia-cuda-cupti-cu12: 12.1.105
- nvidia-cuda-nvrtc-cu12: 12.1.105
- nvidia-cuda-runtime-cu12: 12.1.105
- nvidia-cudnn-cu12: 9.1.0.70
- nvidia-cufft-cu12: 11.0.2.54
- nvidia-curand-cu12: 10.3.2.106
- nvidia-cusolver-cu12: 11.4.5.107
- nvidia-cusparse-cu12: 12.1.0.106
- nvidia-nccl-cu12: 2.20.5
- nvidia-nvjitlink-cu12: 12.5.82
- nvidia-nvtx-cu12: 12.1.105
- objprint: 0.2.3
- opencv-python-headless: 4.10.0.84
- packaging: 24.1
- pandas: 2.1.4
- parso: 0.8.4
- pexpect: 4.9.0
- pillow: 10.4.0
- platformdirs: 4.2.2
- prompt-toolkit: 3.0.47
- protobuf: 5.27.2
- psutil: 6.0.0
- ptyprocess: 0.7.0
- pudb: 2024.1.2
- pure-eval: 0.2.3
- pydantic: 2.8.2
- pydantic-core: 2.20.1
- pygments: 2.18.0
- python-dateutil: 2.9.0.post0
- python-dotenv: 1.0.1
- python-magic: 0.4.27
- pytorch-lightning: 2.3.3
- pytz: 2024.1
- pyyaml: 6.0.1
- requests: 2.32.3
- s3cmd: 2.4.0
- s3fs: 2024.6.1
- s3transfer: 0.10.1
- safetensors: 0.4.3
- scikit-image: 0.24.0
- scikit-learn: 1.5.1
- scipy: 1.14.0
- segment-anything: 1.0
- semantic-segmentation: 0.1.0
- sentry-sdk: 2.11.0
- setproctitle: 1.3.3
- setuptools: 63.4.3
- six: 1.16.0
- smmap: 5.0.1
- stack-data: 0.6.3
- sympy: 1.13.1
- threadpoolctl: 3.5.0
- tifffile: 2024.7.24
- timm: 1.0.7
- tomli: 2.0.1
- torch: 2.4.0
- torchmetrics: 1.4.0.post0
- torchvision: 0.19.0
- tqdm: 4.66.4
- traitlets: 5.14.3
- triton: 3.0.0
- typeshed-client: 2.7.0
- typing-extensions: 4.12.2
- tzdata: 2024.1
- urllib3: 2.2.2
- urwid: 2.6.15
- urwid-readline: 0.14
- viztracer: 0.16.3
- wandb: 0.17.5
- wcwidth: 0.2.13
- webdataset: 0.2.86
- wrapt: 1.16.0
- yarl: 1.9.4
* System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.11.9
- release: 5.15.0-1066-aws
- version: #72~20.04.1-Ubuntu SMP Thu Jul 18 10:41:27 UTC 2024
</details>
### More info
_No response_
cc @borda
|
open
|
2024-08-01T15:48:48Z
|
2024-08-03T09:44:57Z
|
https://github.com/Lightning-AI/pytorch-lightning/issues/20149
|
[
"help wanted",
"docs",
"ver: 2.2.x"
] |
cgebbe
| 0
|
Yorko/mlcourse.ai
|
data-science
| 718
|
Bad view in Feature importance
|
In the [feature importance article](https://mlcourse.ai/book/topic05/topic5_part3_feature_importance.html) there is a problem with the presentation of the sklearn Impurity Reduction algorithm: some indentations are not shown properly.
Meanwhile, in the [English notebook](https://github.com/Yorko/mlcourse.ai/blob/main/jupyter_english/topic05_ensembles_random_forests/topic5_part3_feature_importance.ipynb) it looks right.

|
closed
|
2022-09-12T10:32:39Z
|
2022-09-13T23:01:38Z
|
https://github.com/Yorko/mlcourse.ai/issues/718
|
[] |
aulasau
| 1
|
piskvorky/gensim
|
data-science
| 3,017
|
Inconsistency within documentation
|
<!--
**IMPORTANT**:
- Use the [Gensim mailing list](https://groups.google.com/forum/#!forum/gensim) to ask general or usage questions. Github issues are only for bug reports.
- Check [Recipes&FAQ](https://github.com/RaRe-Technologies/gensim/wiki/Recipes-&-FAQ) first for common answers.
Github bug reports that do not include relevant information and context will be closed without an answer. Thanks!
-->
#### Problem description
Hi, I found inconsistency within your documentation.
In some examples `AnnoyIndexer` is imported from `gensim.similarities.annoy` and in some from `gensim.similarities.index`.
I tried to import from both, but only `gensim.similarities.index` works.
#### Steps/code/corpus to reproduce
Go to documentation: https://radimrehurek.com/gensim/similarities/annoy.html .
#### Versions
```
Windows-10-10.0.19041-SP0
Python 3.7.9 (default, Aug 31 2020, 17:10:11) [MSC v.1916 64 bit (AMD64)]
Bits 64
NumPy 1.19.2
SciPy 1.5.2
gensim 3.8.3
FAST_VERSION 1
```
|
closed
|
2020-12-27T13:07:01Z
|
2020-12-27T15:44:47Z
|
https://github.com/piskvorky/gensim/issues/3017
|
[] |
JakovGlavac
| 1
|
streamlit/streamlit
|
python
| 10,384
|
st.html() Injects CSS but It Does Not Take Effect in Streamlit 1.42.1 (Worked in 1.37)
|
### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [x] I added a very descriptive title to this issue.
- [x] I have provided sufficient information below to help reproduce this issue.
### Summary
#### Summary

In **Streamlit 1.37**, the `st.html()` function successfully injected and applied **custom CSS styles** to modify the `<h1>` color. However, in **Streamlit 1.42.1**, the CSS **is present in the DOM but does not take effect**.

#### Expected Behavior (Streamlit 1.37.0)

- The `<h1>` text should appear in **red (`#ff6347`)**, as defined in the CSS.

#### Observed Behavior (Streamlit 1.42.1)

- The **CSS is injected and visible in the page source**.
- However, **the style does not apply** (the `<h1>` text remains the default color).
- Manually modifying the CSS via developer tools **applies the style correctly**, suggesting something is preventing it from taking effect.

---

### Code to Reproduce

```python
import streamlit as st

st.html("""
<style>
h1 {
    color: #ff6347;
}
</style>
""")

st.title("Hello")
st.caption(f"Streamlit version: {st.__version__}")
```

#### Tested Versions

| Streamlit Version | Behavior |
| -- | -- |
| 1.37.0 | ✅ CSS works (red `h1`) |
| 1.42.1 | ❌ CSS injected but does not take effect |
### Reproducible Code Example
```Python
import streamlit as st
st.html("""
<style>
h1 {
color: #ff6347;
}
</style>
""")
st.title("Hello")
st.caption(f"Streamlit version: {st.__version__}")
```
### Steps To Reproduce
_No response_
### Expected Behavior

### Current Behavior
Inline CSS does not take effect
### Is this a regression?
- [x] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.37.1 / 1.42.1
- Python version: 3.10
- Operating System: MacOS
- Browser: Chrome
### Additional Information
_No response_
|
closed
|
2025-02-12T20:40:06Z
|
2025-02-13T06:32:36Z
|
https://github.com/streamlit/streamlit/issues/10384
|
[
"type:bug",
"status:needs-triage"
] |
ishswar
| 3
|
joke2k/django-environ
|
django
| 413
|
Add support for CONN_HEALTH_CHECKS and OPTIONS
|
Hello!
How can I convert my config to django-environ? I'm worried about CONN_HEALTH_CHECKS and OPTIONS.
```
DATABASES = types.MappingProxyType({
'default': {
'ENGINE': 'django.db.backends.postgresql',
'HOST': '127.0.0.1',
'NAME': 'devdatabase',
'CONN_MAX_AGE': None,
'CONN_HEALTH_CHECKS': True,
'OPTIONS': {'sslmode': 'require'} if IS_PRODUCTION else {},
'PASSWORD': 'django-insecure-database_password',
'PORT': '5432',
'USER': 'devdatabaseuser',
},
})
```
Names like `CONN_MAX_AGE` are hardcoded in https://github.com/joke2k/django-environ/blob/main/environ/environ.py#L132-L137...
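One workaround pattern is to let the URL parser build the base dict and then hand-merge the keys it doesn't emit. A stdlib-only sketch of that idea (with `env.db()` you would merge into its return value the same way; the helper below is hypothetical):

```python
from urllib.parse import urlparse

def db_config(url, **extra):
    """Parse a database URL into a Django DATABASES entry, then merge
    keys (CONN_HEALTH_CHECKS, OPTIONS, ...) that the parser does not emit."""
    parts = urlparse(url)
    config = {
        "ENGINE": "django.db.backends.postgresql",
        "HOST": parts.hostname or "",
        "PORT": str(parts.port or ""),
        "NAME": parts.path.lstrip("/"),
        "USER": parts.username or "",
        "PASSWORD": parts.password or "",
    }
    config.update(extra)  # hand-merge the unsupported keys
    return config

DATABASES = {
    "default": db_config(
        "postgres://devdatabaseuser:pass@127.0.0.1:5432/devdatabase",
        CONN_MAX_AGE=None,
        CONN_HEALTH_CHECKS=True,
        OPTIONS={"sslmode": "require"},
    ),
}
```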
|
closed
|
2022-08-22T07:58:29Z
|
2024-04-17T18:41:24Z
|
https://github.com/joke2k/django-environ/issues/413
|
[
"enhancement"
] |
lorddaedra
| 12
|
CorentinJ/Real-Time-Voice-Cloning
|
pytorch
| 1,197
|
Others
|
Hello. How can I add Brazilian Portuguese to this program?
|
open
|
2023-04-18T13:09:31Z
|
2023-04-18T13:09:31Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1197
|
[] |
jvzin8040
| 0
|
miguelgrinberg/python-socketio
|
asyncio
| 340
|
"event" method with AsyncSever
|
The Sanic example shows the `event` method being used as a decorator; however, upon inspecting the library, the method doesn't seem to exist. Is there something I am missing?
|
closed
|
2019-08-27T10:38:58Z
|
2019-11-17T19:05:56Z
|
https://github.com/miguelgrinberg/python-socketio/issues/340
|
[
"question"
] |
leehol
| 4
|
Avaiga/taipy
|
automation
| 1,777
|
Improve editing capabilities of Taipy tables
|
### Description
I'm looking for a way to enable immediate editing for all editable columns or cells of a table. Instead of clicking the pencil icon for each cell I want to edit, I would like to have a single button that makes all cells in the table ready for editing.
This was asked by a user.
### Requested Features
1. **Single Button for Editing All Cells:**
- A feature to enable editing mode for all editable cells in a table with a single click, rather than activating each cell individually.
2. **Row-Wise Editing and Accept Button:**
- Ability to edit multiple cells in a row simultaneously and then click an "accept" button to update an item in CosmosDB for the entire row, instead of confirming each cell's edit independently.
3. **Master Accept Button for All Changes:**
- A master "accept" button that commits all changes made across the entire table, updating the corresponding CosmosDB items for each modified row.
### Current Limitation
These capabilities are not available in the existing `Table` element.
### Environment
Taipy: develop/4.0
### Suggested Solution
Consider adding these options to Taipy.
### Acceptance Criteria
- [ ] Ensure new code is unit tested, and check code coverage is at least 90%.
- [ ] Create related issue in taipy-doc for documentation and Release Notes.
- [ ] Check if a new demo could be provided based on this, or if legacy demos could be benefit from it.
- [ ] Ensure any change is well documented.
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional)
|
open
|
2024-09-11T12:47:48Z
|
2025-03-14T13:51:54Z
|
https://github.com/Avaiga/taipy/issues/1777
|
[
"🖰 GUI",
"🆘 Help wanted",
"🟩 Priority: Low",
"✨New feature"
] |
FlorianJacta
| 12
|
LibreTranslate/LibreTranslate
|
api
| 706
|
Community forum is down
|
Hello, I have a question about LibreTranslate (how to download only one specific model with a self-hosted instance), but community.libretranslate.com does not work. So I hope I'm allowed to ask this question here and also to report the issue with the community page.
|
closed
|
2024-11-02T15:19:44Z
|
2024-11-02T15:22:17Z
|
https://github.com/LibreTranslate/LibreTranslate/issues/706
|
[] |
Hallilogod
| 3
|
Sanster/IOPaint
|
pytorch
| 414
|
lama cleaner memory usage increase every clean cycle in docker container deployed kubernetes
|
Hi,
I tried to deploy the lama cleaner Docker container in a Kubernetes cluster (on a machine using CPU).
Lama cleaner works well, but with every cleaning cycle the memory usage of the container keeps increasing.
When the container is restarted, memory usage returns to its default state. I could set a memory limit on the container and restart it when the limit is reached. Are there other ways to solve this memory issue?
Because I am not familiar with Python backends and AI code in Python, I have no solution other than restarting the container.
|
closed
|
2023-12-19T04:51:29Z
|
2025-03-01T02:05:48Z
|
https://github.com/Sanster/IOPaint/issues/414
|
[
"stale"
] |
kmpartner
| 2
|
tiangolo/uwsgi-nginx-flask-docker
|
flask
| 109
|
Here's how uWSGI is configured, from the base image: https://github.com/tiangolo/uwsgi-nginx-docker
|
From what I understand, having a configuration like:
>[program:uwsgi]
>environment=PATH='/opt/conda/envs/conda_environment/bin:/opt/conda/bin'
>command=/opt/conda/bin/uwsgi --ini /etc/uwsgi/uwsgi.ini --die-on-term --need-app --processes 4
>master = true
>stdout_logfile=/dev/stdout
>stdout_logfile_maxbytes=0
>stderr_logfile=/dev/stderr
>stderr_logfile_maxbytes=0
Would that not be enough to make the supervisor spawn more workers?
Is there a way to spawn more workers using the uwsgi.ini file?
> 2016-08-16: Use dynamic a number of worker processes for uWSGI, from 2 to 16 depending on load. This should work for most cases. This helps especially when there are some responses that are slow and take some time to be generated, this change allows all the other responses to keep fast (in a new process) without having to wait for the first (slow) one to finish.
---
Now, what are you trying to achieve exactly? What do you want the `lazy-apps` for?
_Originally posted by @tiangolo in https://github.com/tiangolo/uwsgi-nginx-flask-docker/issues/37#issuecomment-363742175_
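For reference, a minimal uwsgi.ini sketch using uWSGI's cheaper subsystem, which matches the quoted "2 to 16 workers depending on load" behaviour (the exact values here are assumptions):

```ini
[uwsgi]
master = true
; minimum number of workers kept alive when idle
cheaper = 2
; upper bound of workers spawned under load
workers = 16
```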
|
closed
|
2018-11-20T11:40:31Z
|
2019-01-01T20:00:07Z
|
https://github.com/tiangolo/uwsgi-nginx-flask-docker/issues/109
|
[] |
hadjichristslave
| 3
|
dynaconf/dynaconf
|
django
| 203
|
Document the use of pytest with dynaconf
|
For testing in my project, I want to add something like this to my conftest.py:
```
import pytest
import os
@pytest.fixture(scope='session', autouse=True)
def settings():
os.environ['ENV_FOR_DYNACONF'] = 'testing'
```
But this does not work ;-(. What can you advise?
I don't want to start my tests like this: `ENV_FOR_DYNACONF=testing pytest`, because somebody could miss that command prefix and mess up some dev data.
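The likely pitfall above can be sketched without dynaconf or pytest installed: if settings are read before the fixture runs (e.g. at import time), the fixture's `os.environ` change is never seen. Here `load_settings` merely stands in for dynaconf reading `ENV_FOR_DYNACONF`; it is not the real API.

```python
import os

os.environ.pop("ENV_FOR_DYNACONF", None)     # start from a clean slate

def load_settings():
    # stands in for dynaconf reading ENV_FOR_DYNACONF when settings load
    return os.environ.get("ENV_FOR_DYNACONF", "development")

env_at_import = load_settings()              # the fixture has not run yet
os.environ["ENV_FOR_DYNACONF"] = "testing"   # what the fixture would do, later
env_after_reload = load_settings()           # only a re-read sees the change
```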
|
closed
|
2019-08-08T10:41:39Z
|
2020-02-26T18:04:26Z
|
https://github.com/dynaconf/dynaconf/issues/203
|
[
"enhancement",
"question",
"Docs",
"good first issue"
] |
dyens
| 12
|
man-group/arctic
|
pandas
| 448
|
Context Manager
|
Can I please know whether Arctic has a context manager, e.g., to drop connections or clear the cache?
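For illustration, here is a hypothetical wrapper of the kind being asked about (Arctic's real API may differ): a context manager guarantees the connection is released on exit, even if the body raises.

```python
from contextlib import contextmanager

# Hypothetical sketch: open_conn/close_conn are stand-ins for whatever
# acquires and releases an Arctic connection.
@contextmanager
def managed(open_conn, close_conn):
    conn = open_conn()
    try:
        yield conn
    finally:
        close_conn(conn)          # always runs, even on exceptions

events = []
with managed(lambda: "conn", lambda c: events.append(("closed", c))) as c:
    events.append(("used", c))
```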
|
closed
|
2017-11-06T16:48:37Z
|
2017-12-03T21:28:33Z
|
https://github.com/man-group/arctic/issues/448
|
[] |
johnjihong
| 3
|
aimhubio/aim
|
tensorflow
| 2,433
|
Add a community discord link in the sidebar
|
## 🚀 Feature
Add a community discord link in the `Sidebar`
### Motivation
Provide users the ability to easily navigate to the `Aim discord` community channel from the `Sidebar`
### Pitch
Display a link to the discord with an icon in the `Sidebar`.
|
closed
|
2022-12-15T11:37:57Z
|
2023-01-31T11:13:59Z
|
https://github.com/aimhubio/aim/issues/2433
|
[
"type / enhancement",
"area / Web-UI",
"phase / shipped"
] |
arsengit
| 0
|
explosion/spaCy
|
deep-learning
| 12,982
|
RuntimeError: Error(s) in loading state_dict for RobertaModel: Unexpected key(s) in state_dict: "embeddings.position_ids".
|
<!-- NOTE: For questions or install related issues, please open a Discussion instead. -->
## How to reproduce the behaviour
```py
import spacy
nlp = spacy.load('en_core_web_trf')
```
Full traceback:
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[7], line 1
----> 1 nlp = spacy.load('en_core_web_trf')
File /opt/conda/lib/python3.8/site-packages/spacy/__init__.py:51, in load(name, vocab, disable, enable, exclude, config)
27 def load(
28 name: Union[str, Path],
29 *,
(...)
34 config: Union[Dict[str, Any], Config] = util.SimpleFrozenDict(),
35 ) -> Language:
36 """Load a spaCy model from an installed package or a local path.
37
38 name (str): Package name or model path.
(...)
49 RETURNS (Language): The loaded nlp object.
50 """
---> 51 return util.load_model(
52 name,
53 vocab=vocab,
54 disable=disable,
55 enable=enable,
56 exclude=exclude,
57 config=config,
58 )
File /opt/conda/lib/python3.8/site-packages/spacy/util.py:465, in load_model(name, vocab, disable, enable, exclude, config)
463 return get_lang_class(name.replace("blank:", ""))()
464 if is_package(name): # installed as package
--> 465 return load_model_from_package(name, **kwargs) # type: ignore[arg-type]
466 if Path(name).exists(): # path to model data directory
467 return load_model_from_path(Path(name), **kwargs) # type: ignore[arg-type]
File /opt/conda/lib/python3.8/site-packages/spacy/util.py:501, in load_model_from_package(name, vocab, disable, enable, exclude, config)
484 """Load a model from an installed package.
485
486 name (str): The package name.
(...)
498 RETURNS (Language): The loaded nlp object.
499 """
500 cls = importlib.import_module(name)
--> 501 return cls.load(vocab=vocab, disable=disable, enable=enable, exclude=exclude, config=config)
File /opt/conda/lib/python3.8/site-packages/en_core_web_trf/__init__.py:10, in load(**overrides)
9 def load(**overrides):
---> 10 return load_model_from_init_py(__file__, **overrides)
File /opt/conda/lib/python3.8/site-packages/spacy/util.py:682, in load_model_from_init_py(init_file, vocab, disable, enable, exclude, config)
680 if not model_path.exists():
681 raise IOError(Errors.E052.format(path=data_path))
--> 682 return load_model_from_path(
683 data_path,
684 vocab=vocab,
685 meta=meta,
686 disable=disable,
687 enable=enable,
688 exclude=exclude,
689 config=config,
690 )
File /opt/conda/lib/python3.8/site-packages/spacy/util.py:547, in load_model_from_path(model_path, meta, vocab, disable, enable, exclude, config)
538 config = load_config(config_path, overrides=overrides)
539 nlp = load_model_from_config(
540 config,
541 vocab=vocab,
(...)
545 meta=meta,
546 )
--> 547 return nlp.from_disk(model_path, exclude=exclude, overrides=overrides)
File /opt/conda/lib/python3.8/site-packages/spacy/language.py:2155, in Language.from_disk(self, path, exclude, overrides)
2152 if not (path / "vocab").exists() and "vocab" not in exclude: # type: ignore[operator]
2153 # Convert to list here in case exclude is (default) tuple
2154 exclude = list(exclude) + ["vocab"]
-> 2155 util.from_disk(path, deserializers, exclude) # type: ignore[arg-type]
2156 self._path = path # type: ignore[assignment]
2157 self._link_components()
File /opt/conda/lib/python3.8/site-packages/spacy/util.py:1392, in from_disk(path, readers, exclude)
1389 for key, reader in readers.items():
1390 # Split to support file names like meta.json
1391 if key.split(".")[0] not in exclude:
-> 1392 reader(path / key)
1393 return path
File /opt/conda/lib/python3.8/site-packages/spacy/language.py:2149, in Language.from_disk.<locals>.<lambda>(p, proc)
2147 if not hasattr(proc, "from_disk"):
2148 continue
-> 2149 deserializers[name] = lambda p, proc=proc: proc.from_disk( # type: ignore[misc]
2150 p, exclude=["vocab"]
2151 )
2152 if not (path / "vocab").exists() and "vocab" not in exclude: # type: ignore[operator]
2153 # Convert to list here in case exclude is (default) tuple
2154 exclude = list(exclude) + ["vocab"]
File /opt/conda/lib/python3.8/site-packages/spacy_transformers/pipeline_component.py:416, in Transformer.from_disk(self, path, exclude)
409 self.model.attrs["set_transformer"](self.model, hf_model)
411 deserialize = {
412 "vocab": self.vocab.from_disk,
413 "cfg": lambda p: self.cfg.update(deserialize_config(p)),
414 "model": load_model,
415 }
--> 416 util.from_disk(path, deserialize, exclude) # type: ignore
417 return self
File /opt/conda/lib/python3.8/site-packages/spacy/util.py:1392, in from_disk(path, readers, exclude)
1389 for key, reader in readers.items():
1390 # Split to support file names like meta.json
1391 if key.split(".")[0] not in exclude:
-> 1392 reader(path / key)
1393 return path
File /opt/conda/lib/python3.8/site-packages/spacy_transformers/pipeline_component.py:390, in Transformer.from_disk.<locals>.load_model(p)
388 try:
389 with open(p, "rb") as mfile:
--> 390 self.model.from_bytes(mfile.read())
391 except AttributeError:
392 raise ValueError(Errors.E149) from None
File /opt/conda/lib/python3.8/site-packages/thinc/model.py:638, in Model.from_bytes(self, bytes_data)
636 msg = srsly.msgpack_loads(bytes_data)
637 msg = convert_recursive(is_xp_array, self.ops.asarray, msg)
--> 638 return self.from_dict(msg)
File /opt/conda/lib/python3.8/site-packages/thinc/model.py:676, in Model.from_dict(self, msg)
674 node.set_param(param_name, value)
675 for i, shim_bytes in enumerate(msg["shims"][i]):
--> 676 node.shims[i].from_bytes(shim_bytes)
677 return self
File /opt/conda/lib/python3.8/site-packages/spacy_transformers/layers/hf_shim.py:120, in HFShim.from_bytes(self, bytes_data)
118 filelike.seek(0)
119 device = get_torch_default_device()
--> 120 self._model.load_state_dict(torch.load(filelike, map_location=device))
121 self._model.to(device)
122 else:
File /opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py:2041, in Module.load_state_dict(self, state_dict, strict)
2036 error_msgs.insert(
2037 0, 'Missing key(s) in state_dict: {}. '.format(
2038 ', '.join('"{}"'.format(k) for k in missing_keys)))
2040 if len(error_msgs) > 0:
-> 2041 raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
2042 self.__class__.__name__, "\n\t".join(error_msgs)))
2043 return _IncompatibleKeys(missing_keys, unexpected_keys)
RuntimeError: Error(s) in loading state_dict for RobertaModel:
Unexpected key(s) in state_dict: "embeddings.position_ids".
```
Also:
```
~$ conda list torch
# packages in environment at /opt/conda:
#
# Name Version Build Channel
efficientnet-pytorch 0.7.1 pyhd8ed1ab_1 conda-forge
pytorch 2.0.1 py3.8_cuda11.7_cudnn8.5.0_0 pytorch
pytorch-cuda 11.7 h67b0de4_0 pytorch
pytorch-lightning 2.0.1.post0 pypi_0 pypi
pytorch-mutex 1.0 cuda pytorch
rotary-embedding-torch 0.2.1 pypi_0 pypi
torchaudio 2.0.2 py38_cu117 pytorch
torchmetrics 0.11.4 pypi_0 pypi
torchtriton 2.0.0 py38 pytorch
torchvision 0.15.2 py38_cu117 pytorch
torchviz 0.0.2 pypi_0 pypi
```
## Your Environment
<!-- Include details of your environment. You can also type `python -m spacy info --markdown` and copy-paste the result here.-->
* Operating System:
* Python Version Used:
* spaCy Version Used:
* Environment Information:
```
spaCy version 3.6.1
Location /opt/conda/lib/python3.8/site-packages/spacy
Platform Linux-5.13.0-1023-aws-x86_64-with-glibc2.17
Python version 3.8.17
Pipelines en_core_web_trf (3.6.1)
```
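A generic workaround sketch for the traceback above (this is not the official spaCy fix, just the usual pattern): filter out keys the target model does not expect before calling `load_state_dict`, mimicking `strict=False`.

```python
# Plain-dict stand-in for a torch state_dict, so the pattern is testable
# without torch installed. The unexpected key matches the traceback.
state_dict = {
    "embeddings.position_ids": [0, 1, 2],   # key the model does not expect
    "embeddings.weight": [0.1, 0.2],
}
expected_keys = {"embeddings.weight"}
filtered = {k: v for k, v in state_dict.items() if k in expected_keys}
```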
|
closed
|
2023-09-14T14:08:37Z
|
2023-10-19T00:02:09Z
|
https://github.com/explosion/spaCy/issues/12982
|
[
"install",
"feat / transformer"
] |
dzenilee
| 6
|
CorentinJ/Real-Time-Voice-Cloning
|
deep-learning
| 289
|
How do I use my own mp3?
|
I'm playing with the demo and I only have an option to record. How do I import an audio file?
Thanks.
|
closed
|
2020-02-26T00:24:56Z
|
2020-07-04T22:35:07Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/289
|
[] |
orenong
| 10
|
openapi-generators/openapi-python-client
|
fastapi
| 839
|
`UnexpectedStatus` contains an uninformative message
|
### Problem
Now when we set
```python
raise_on_unexpected_status=True
```
in the client, we will only see `Unexpected status code: 400` (or something similar) when any unexpected status is received. But this is an uninformative message: to find out the reason for the exception, we have to try to reproduce the situation ourselves and check the content of the response. Sometimes that's very difficult.
## Solution I'd propose
Maybe add response content to text of [UnexpectedStatus](https://github.com/openapi-generators/openapi-python-client/blob/a719c87b7d278135c475d8123aa144651fa55523/openapi_python_client/templates/errors.py.jinja#L10) exception? Or add some flag that allows us to show the contents of the response when we receive an unexpected status code?
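A hypothetical sketch of the proposal (the class name matches the generated errors.py linked above, but the body-in-message behaviour is the suggestion, not the current generated code):

```python
# Carry the response body in the exception message so failures are
# debuggable straight from logs, without reproducing the request.
class UnexpectedStatus(Exception):
    def __init__(self, status_code: int, content: bytes):
        self.status_code = status_code
        self.content = content
        super().__init__(
            f"Unexpected status code: {status_code}\n\n"
            f"Response content:\n{content.decode(errors='replace')}"
        )

err = UnexpectedStatus(400, b'{"detail": "invalid payload"}')
```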
|
closed
|
2023-08-16T07:55:10Z
|
2023-08-16T15:56:38Z
|
https://github.com/openapi-generators/openapi-python-client/issues/839
|
[] |
M1troll
| 0
|
hpcaitech/ColossalAI
|
deep-learning
| 5,421
|
[BUG]: ColossalEval/AGIEvalDataset loader causing IndexError when few shot is disabled
|
### 🐛 Describe the bug
## Describe the bug
The AGIEvalDataset loader in ColossalEval incorrectly sets `few_shot_data` to `[]` when `few_shot` is disabled, causing an IndexError at https://github.com/hpcaitech/ColossalAI/blob/main/applications/ColossalEval/colossal_eval/utils/conversation.py#L126.
## To Reproduce
Use the following inference config.json, to run https://github.com/hpcaitech/ColossalAI/blob/main/applications/ColossalEval/examples/dataset_evaluation/inference.sh
```json
{
"model": [
{
"name": "<model name>",
"model_class": "HuggingFaceCausalLM",
"parameters": {
"path": "<model>",
"model_max_length": 2048,
"tokenizer_path": "<tokenizer>",
"tokenizer_kwargs": {
"trust_remote_code": true
},
"peft_path": null,
"model_kwargs": {
"torch_dtype": "torch.float16",
"trust_remote_code": true
},
"prompt_template": "plain",
"batch_size": 4
}
}
],
"dataset": [
{
"name": "agieval",
"dataset_class": "AGIEvalDataset",
"debug": false,
"few_shot": false,
"path": "eval-data/AGIEval/data/v1",
"save_path": "inference_data/agieval.json"
}
]
}
```
The following exception will occur.
```bash
agieval-aqua-rat Inference steps: 0%| | 0/64 [00:00<?, ?it/s]Traceback (most recent call last):
File "/ColossalAI/applications/ColossalEval/examples/dataset_evaluation/inference.py", line 260, in <module>
main(args)
File "/ColossalAI/applications/ColossalEval/examples/dataset_evaluation/inference.py", line 223, in main
answers_per_rank = model_.inference(
^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/colossal_eval/models/huggingface.py", line 373, in inference
batch_prompt, batch_target = get_batch_prompt(
^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/colossal_eval/utils/conversation.py", line 195, in get_batch_prompt
few_shot_prefix = get_few_shot_prefix(conv, few_shot_data, tokenizer, language, max_tokens)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/colossal_eval/utils/conversation.py", line 141, in get_few_shot_prefix
few_shot_prefix = few_shot_data[0] + "\n\n"
~~~~~~~~~~~~~^^^
IndexError: list index out of range
```
## Expected behavior
With `few_shot` set to false, `few_shot_data` should be set to None, so `get_batch_prompt`(https://github.com/hpcaitech/ColossalAI/blob/main/applications/ColossalEval/colossal_eval/utils/conversation.py#L182) will skip few shot prefix generation.
### Environment
Python 3.11.0rc1
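The expected behavior above can be sketched like this (function names follow the issue; the real signatures in colossal_eval differ): guard the few-shot prefix generation so that both `None` and `[]` mean "few-shot disabled".

```python
# Simplified stand-ins for the colossal_eval helpers named in the traceback.
def get_few_shot_prefix(few_shot_data):
    return few_shot_data[0] + "\n\n"          # raises IndexError on []

def get_batch_prompt(prompt, few_shot_data):
    # Skip prefix generation instead of indexing an empty list blindly.
    prefix = get_few_shot_prefix(few_shot_data) if few_shot_data else ""
    return prefix + prompt
```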
|
closed
|
2024-03-03T20:40:29Z
|
2024-03-05T13:48:56Z
|
https://github.com/hpcaitech/ColossalAI/issues/5421
|
[
"bug"
] |
starcatmeow
| 0
|
Evil0ctal/Douyin_TikTok_Download_API
|
api
| 417
|
Cookie rotation frequency and request rate
|
I am trying to scrape TikTok video comments with my own self-hosted deployment of the service. I'd like to ask the author: to avoid being rate-limited or blocked by the platform, how often is it recommended to rotate cookies, and what request rate is recommended? Also, are there any other precautions for avoiding detection? Thanks~
|
open
|
2024-06-03T14:35:58Z
|
2024-06-09T08:00:00Z
|
https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/417
|
[] |
scn0901
| 9
|
sunscrapers/djoser
|
rest-api
| 91
|
possible E-mail improvement
|
I want to quickly say first that Djoser is a great package to kick off DRF REST API projects, and I've used it to its absolute maximum and beyond. Great job!
I figured I would add my 2 cents on some improvements, though. Well, actually it's just one; the rest I found very flexible and in line with OOP/DRY.
With respect to sending e-mails, I would look into decoupling it from the DRF Views completely.
Say, for example, you have attachments (which is not so uncommon); with Djoser as is, I had to make some minor but ugly modifications to Djoser and the email views I sub-classed.
Currently Djoser does something similar with Email as TemplateView (or more generally, class based views) in Django, in that there is a "get_context_data" method etc...that is passed in to render the text (the e-mail subject, body etc...) given the path to the template (subject.html, body.html etc...).
This isn't bad, in most cases, but again, if I have to deal with attachments, and also with different types of scenarios (send to multiple recipients, bccs etc...), this may not be the best solution.
Instead of coupling that to the DRF Views, I would decouple it completely and anytime Email functionality is needed, simply reference an Email class within the code and call it.
Here is a simple email class I wrote (I called it a mixin class by accident) that I'm currently using in one of my projects with djoser:
https://github.com/apokinsocha/django-email-mixin/blob/master/django-email-mixin.py
It maybe a bit redundant with respect to djangos existing email function, but I wanted to encapsulate some additional common functionality with respect to e-mails.
Usage example:
``` python
_file = some_file
html_body = DjangoEmailWrapper.get_template_text('email_body.txt', context=context)
subject = DjangoEmailWrapper.get_template_text('subject.txt', context=context, inline=True)
emails = ['test@gmail.com', 'test2@gmail.com']
msg = DjangoEmailWrapper(subject, getattr(settings, 'DEFAULT_FROM_EMAIL', None), html_body, bcc=emails, html_body=html_body)
msg.attach_file(_file)
msg.send_email()
```
I can do a PR in the near future to integrate it, if there is foreseeable use beyond my own.
A pretty simple change in my view, but I think it cleans things up a bit and also makes it easy to call DjangoEmailWrapper in other, non-djoser views.
|
closed
|
2015-11-03T01:17:21Z
|
2017-09-25T21:12:32Z
|
https://github.com/sunscrapers/djoser/issues/91
|
[
"enhancement"
] |
ghost
| 7
|
onnx/onnx
|
machine-learning
| 6,475
|
source code question
|
Why not optimize this by determining if the index is already in the set?

|
closed
|
2024-10-20T08:50:58Z
|
2024-11-01T14:54:16Z
|
https://github.com/onnx/onnx/issues/6475
|
[
"question"
] |
XiaBing992
| 1
|
pytest-dev/pytest-xdist
|
pytest
| 176
|
AttributeError when using --showlocals with -d
|
Sorry if I am putting this issue in the wrong place. Maybe it is related to pytest (core).
I searched around the issue database but did not find anything similar.
This is the first time I am playing with the ssh support of the xdist plugin, so maybe I am doing something wrong.
My problem is that I get a traceback when using pytest with the "--showlocals" command line parameter along with "-d". See the reproduction below.
I do not get the traceback when
* I run the test locally (without -d), or
* I remove "--showlocals" when using -d
File contents:
* pytest.ini
```
[pytest]
addopts = --tx ssh=root@172.17.0.2//python=python3.5
rsyncdirs = .
```
* new_test/test_new.py
```
def test_new():
assert False
```
* run_with_python3.py
```
#!/usr/bin/python3
import pytest
def main():
pytest.main(['--cache-clear', '-v', '--showlocals', '-d', 'new_test/'])
if __name__ == "__main__":
main()
```
Reproduction for error:
```
./run_with_python3.py
===================================================================================== test session starts =====================================================================================
platform linux -- Python 3.5.3, pytest-3.1.2, py-1.4.34, pluggy-0.4.0 -- /usr/bin/python3
cachedir: .cache
rootdir: /home/micek/pytest_example, inifile: pytest.ini
plugins: mock-1.6.0, cov-2.4.0, xdist-1.18.0, profiling-1.2.6, flakes-2.0.0, docker-0.5.0, pylama-7.3.3
gw0 Iroot@172.17.0.2's password:
[gw0] linux Python 3.5.2 cwd: /root/pyexecnetcache
[gw0] Python 3.5.2 (default, Nov 17 2016, 17:05:23) -- [GCC 5.4.0 20160609]
gw0 [1]
scheduling tests via LoadScheduling
new_test/test_new.py::test_new
[gw0] FAILED new_test/test_new.py::test_new
========================================================================================== FAILURES ===========================================================================================
__________________________________________________________________________________________ test_new ___________________________________________________________________________________________
[gw0] linux -- Python 3.5.2 /usr/bin/python3.5
def test_new():
> assert False
E assert False
Traceback (most recent call last):
File "./run_with_python3.py", line 8, in <module>
main()
File "./run_with_python3.py", line 5, in main
pytest.main(['--cache-clear', '-v', '--showlocals', '-d', 'new_test/'])
File "/usr/local/lib/python3.5/dist-packages/_pytest/config.py", line 58, in main
return config.hook.pytest_cmdline_main(config=config)
File "/usr/local/lib/python3.5/dist-packages/_pytest/vendored_packages/pluggy.py", line 745, in __call__
return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
File "/usr/local/lib/python3.5/dist-packages/_pytest/vendored_packages/pluggy.py", line 339, in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
File "/usr/local/lib/python3.5/dist-packages/_pytest/vendored_packages/pluggy.py", line 334, in <lambda>
_MultiCall(methods, kwargs, hook.spec_opts).execute()
File "/usr/local/lib/python3.5/dist-packages/_pytest/vendored_packages/pluggy.py", line 614, in execute
res = hook_impl.function(*args)
File "/usr/local/lib/python3.5/dist-packages/_pytest/main.py", line 134, in pytest_cmdline_main
return wrap_session(config, _main)
File "/usr/local/lib/python3.5/dist-packages/_pytest/main.py", line 128, in wrap_session
exitstatus=session.exitstatus)
File "/usr/local/lib/python3.5/dist-packages/_pytest/vendored_packages/pluggy.py", line 745, in __call__
return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
File "/usr/local/lib/python3.5/dist-packages/_pytest/vendored_packages/pluggy.py", line 339, in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
File "/usr/local/lib/python3.5/dist-packages/_pytest/vendored_packages/pluggy.py", line 334, in <lambda>
_MultiCall(methods, kwargs, hook.spec_opts).execute()
File "/usr/local/lib/python3.5/dist-packages/_pytest/vendored_packages/pluggy.py", line 613, in execute
return _wrapped_call(hook_impl.function(*args), self.execute)
File "/usr/local/lib/python3.5/dist-packages/_pytest/vendored_packages/pluggy.py", line 250, in _wrapped_call
wrap_controller.send(call_outcome)
File "/usr/local/lib/python3.5/dist-packages/_pytest/terminal.py", line 395, in pytest_sessionfinish
self.summary_failures()
File "/usr/local/lib/python3.5/dist-packages/_pytest/terminal.py", line 520, in summary_failures
self._outrep_summary(rep)
File "/usr/local/lib/python3.5/dist-packages/_pytest/terminal.py", line 544, in _outrep_summary
rep.toterminal(self._tw)
File "/usr/local/lib/python3.5/dist-packages/_pytest/runner.py", line 196, in toterminal
longrepr.toterminal(out)
File "/usr/local/lib/python3.5/dist-packages/_pytest/_code/code.py", line 740, in toterminal
self.reprtraceback.toterminal(tw)
File "/usr/local/lib/python3.5/dist-packages/_pytest/_code/code.py", line 756, in toterminal
entry.toterminal(tw)
File "/usr/local/lib/python3.5/dist-packages/_pytest/_code/code.py", line 807, in toterminal
self.reprlocals.toterminal(tw)
AttributeError: 'dict' object has no attribute 'toterminal'
```
|
closed
|
2017-06-30T08:09:59Z
|
2017-07-06T00:20:22Z
|
https://github.com/pytest-dev/pytest-xdist/issues/176
|
[
"bug"
] |
mitzkia
| 8
|
Johnserf-Seed/TikTokDownload
|
api
| 113
|
[BUG] pip fails to install dependencies
|
**Describe the error**
After `git clone`, running `pip3 install -r requirements.txt` to install from source fails with an error.
**Screenshot of the error**
<a href="https://imgtu.com/i/qsNAq1"><img src="https://s1.ax1x.com/2022/03/28/qsNAq1.png" alt="qsNAq1.png" border="0" /></a>
**Desktop (please complete the following information):**
- OS: arm64
- Version: latest
|
closed
|
2022-03-28T15:20:08Z
|
2022-04-02T08:49:21Z
|
https://github.com/Johnserf-Seed/TikTokDownload/issues/113
|
[
"故障(bug)",
"额外求助(help wanted)",
"无效(invalid)"
] |
Jakob-Boy
| 1
|
ultralytics/ultralytics
|
machine-learning
| 19,662
|
yolo Model training problems
|
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hello, why is the network structure printed during training of my unmodified YOLOv8s model inconsistent with the one on the official website? The official YOLOv8s summary is: 225 layers, 11,166,560 parameters, 11,166,544 gradients, and 28.8 GFLOPs. My training results with other models are also inconsistent with the official website. My environment: RTX 4090 24G, PyTorch 2.5.1, Python 3.12 (Ubuntu 22.04), CUDA 12.4, CPU: 16 vCPU Intel(R) Xeon(R) Platinum 8352V CPU @ 2.10GHz.
### Additional
_No response_
|
closed
|
2025-03-12T08:44:32Z
|
2025-03-12T23:44:45Z
|
https://github.com/ultralytics/ultralytics/issues/19662
|
[
"question",
"fixed",
"detect"
] |
Meaccy
| 9
|
PaddlePaddle/models
|
nlp
| 5,346
|
PointNet++ test run fails
|
Base Docker image: paddlepaddle/paddle:1.8.0-gpu-cuda10.0-cudnn7
1. Modify the ext_op/src/make.sh script by adding -D_GLIBCXX_USE_CXX11_ABI=0 to the g++ compile command
```bash
# make.sh
include_dir=$( python -c 'import paddle; print(paddle.sysconfig.get_include())' )
lib_dir=$( python -c 'import paddle; print(paddle.sysconfig.get_lib())' )
echo $include_dir
echo $lib_dir
OPS='farthest_point_sampling_op gather_point_op group_points_op query_ball_op three_interp_op three_nn_op'
for op in ${OPS}
do
nvcc ${op}.cu -c -o ${op}.cu.o -ccbin cc -DPADDLE_WITH_CUDA -DEIGEN_USE_GPU -DPADDLE_USE_DSO -DPADDLE_WITH_MKLDNN -Xcompiler -fPIC -std=c++11 -Xcompiler -fPIC -w --expt-relaxed-constexpr -O0 -g -DNVCC \
-I ${include_dir}/third_party/ \
-I ${include_dir}
done
g++ farthest_point_sampling_op.cc farthest_point_sampling_op.cu.o gather_point_op.cc gather_point_op.cu.o group_points_op.cc group_points_op.cu.o query_ball_op.cu.o query_ball_op.cc three_interp_op.cu.o three_interp_op.cc three_nn_op.cu.o three_nn_op.cc -o pointnet_lib.so -DPADDLE_WITH_MKLDNN -shared -fPIC -std=c++11 -O0 -g \
-I ${include_dir}/third_party/ \
-I ${include_dir} \
-L ${lib_dir} \
-L /usr/local/cuda/lib64 -lpaddle_framework -lcudart\
-D_GLIBCXX_USE_CXX11_ABI=0
rm *.cu.o
```
2. Run make.sh; compilation succeeds
3. Run the test
```bash
export CUDA_VISIBLE_DEVICES=0
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:`python -c 'import paddle; print(paddle.sysconfig.get_lib())'`
export PYTHONPATH=$PYTHONPATH:`pwd`
python tests/test_three_nn_op.py
```
The error output is as follows:
```
W0916 10:55:57.921620 376 init.cc:216] Warning: PaddlePaddle catches a failure signal, it may not work properly
W0916 10:55:57.921661 376 init.cc:218] You could check whether you killed PaddlePaddle thread/process accidentally or report the case to PaddlePaddle
W0916 10:55:57.921666 376 init.cc:221] The detail failure signal is:
W0916 10:55:57.921671 376 init.cc:224] *** Aborted at 1631789757 (unix time) try "date -d @1631789757" if you are using GNU date ***
W0916 10:55:57.923035 376 init.cc:224] PC: @ 0x0 (unknown)
W0916 10:55:57.923224 376 init.cc:224] *** SIGFPE (@0x7f1e088ed83b) received by PID 376 (TID 0x7f1e0e650700) from PID 143579195; stack trace: ***
W0916 10:55:57.924304 376 init.cc:224] @ 0x7f1e0e22e390 (unknown)
W0916 10:55:57.925643 376 init.cc:224] @ 0x7f1e088ed83b std::__detail::_Mod_range_hashing::operator()()
W0916 10:55:57.926782 376 init.cc:224] @ 0x7f1e089104b2 std::__detail::_Hash_code_base<>::_M_bucket_index()
W0916 10:55:57.927825 376 init.cc:224] @ 0x7f1e0890f5d0 std::_Hashtable<>::_M_bucket_index()
W0916 10:55:57.928773 376 init.cc:224] @ 0x7f1e089119bb std::__detail::_Map_base<>::operator[]()
W0916 10:55:57.929697 376 init.cc:224] @ 0x7f1e08910992 std::unordered_map<>::operator[]()
W0916 10:55:57.930344 376 init.cc:224] @ 0x7f1e0890fe46 _ZN6paddle9framework19RegisterKernelClassINS_8platform9CUDAPlaceEfZNKS0_24OpKernelRegistrarFunctorIS3_Lb0ELm0EINS_9operators33FarthestPointSamplingOpCUDAKernelIfEENS6_IdEEEEclEPKcSB_iEUlRKNS0_16ExecutionContextEE_EEvSB_SB_iT1_
W0916 10:55:57.931105 376 init.cc:224] @ 0x7f1e0890f2e4 paddle::framework::OpKernelRegistrarFunctor<>::operator()()
W0916 10:55:57.931761 376 init.cc:224] @ 0x7f1e0890e799 _ZN6paddle9framework17OpKernelRegistrarINS_8platform9CUDAPlaceEJNS_9operators33FarthestPointSamplingOpCUDAKernelIfEENS5_IdEEEEC2EPKcSA_i
W0916 10:55:57.932322 376 init.cc:224] @ 0x7f1e0890af49 __static_initialization_and_destruction_0()
W0916 10:55:57.932822 376 init.cc:224] @ 0x7f1e0890af77 _GLOBAL__sub_I_tmpxft_000000df_00000000_5_farthest_point_sampling_op.cudafe1.cpp
W0916 10:55:57.933336 376 init.cc:224] @ 0x7f1e0e44a6ca (unknown)
W0916 10:55:57.933843 376 init.cc:224] @ 0x7f1e0e44a7db (unknown)
W0916 10:55:57.934345 376 init.cc:224] @ 0x7f1e0e44f8f2 (unknown)
W0916 10:55:57.934846 376 init.cc:224] @ 0x7f1e0e44a574 (unknown)
W0916 10:55:57.935348 376 init.cc:224] @ 0x7f1e0e44edb9 (unknown)
W0916 10:55:57.935863 376 init.cc:224] @ 0x7f1e0dc4ff09 (unknown)
W0916 10:55:57.936401 376 init.cc:224] @ 0x7f1e0e44a574 (unknown)
W0916 10:55:57.936937 376 init.cc:224] @ 0x7f1e0dc50571 (unknown)
W0916 10:55:57.937688 376 init.cc:224] @ 0x7f1e0dc4ffa1 dlopen
W0916 10:55:57.945061 376 init.cc:224] @ 0x7f1da08da6d3 paddle::platform::dynload::GetOpDsoHandle()
W0916 10:55:57.950942 376 init.cc:224] @ 0x7f1d9cfbe71d paddle::framework::LoadOpLib()
W0916 10:55:57.953336 376 init.cc:224] @ 0x7f1d9d0239ed _ZZN8pybind1112cpp_function10initializeIRPFvRKSsEvIS3_EINS_4nameENS_5scopeENS_7siblingEEEEvOT_PFT0_DpT1_EDpRKT2_ENUlRNS_6detail13function_callEE1_4_FUNESN_
W0916 10:55:57.955534 376 init.cc:224] @ 0x7f1d9d048b39 pybind11::cpp_function::dispatcher()
W0916 10:55:57.955688 376 init.cc:224] @ 0x4bc9ba PyEval_EvalFrameEx
W0916 10:55:57.955794 376 init.cc:224] @ 0x4ba036 PyEval_EvalCodeEx
W0916 10:55:57.955926 376 init.cc:224] @ 0x4c237b PyEval_EvalFrameEx
W0916 10:55:57.956028 376 init.cc:224] @ 0x4ba036 PyEval_EvalCodeEx
W0916 10:55:57.956147 376 init.cc:224] @ 0x4b9d26 PyEval_EvalCode
W0916 10:55:57.956218 376 init.cc:224] @ 0x4b9c5f PyImport_ExecCodeModuleEx
W0916 10:55:57.956341 376 init.cc:224] @ 0x4b2f86 (unknown)
W0916 10:55:57.956454 376 init.cc:224] @ 0x4a4d21 (unknown)
Floating point exception (core dumped)
```
|
open
|
2021-09-16T11:35:20Z
|
2024-02-26T05:08:43Z
|
https://github.com/PaddlePaddle/models/issues/5346
|
[] |
zjuncd
| 1
|
ansible/ansible
|
python
| 84,408
|
ansible requires that LC_ALL is set on Linux
|
### Summary
All ansible commands fail with an error message regarding "unsupported locale setting".
```sh
$ ansible --version
ERROR: Ansible could not initialize the preferred locale: unsupported locale setting
```
I am forced to prefix the commands with LC_ALL
```sh
$ LC_ALL=en_GB.UTF-8 ansible --version
...
```
### Issue Type
Bug Report
### Component Name
ansible
### Ansible Version
```console
$ LC_ALL=en_GB.UTF-8 ansible --version
ansible [core 2.18.0]
config file = /home/bm/.ansible.cfg
configured module search path = ['/home/bm/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.12/site-packages/ansible
ansible collection location = /home/bm/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.12.7 (main, Oct 1 2024, 11:15:50) [GCC 14.2.1 20240910] (/usr/bin/python)
jinja version = 3.1.4
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ LC_ALL=en_GB.UTF-8 ansible-config dump --only-changed -t all
CONFIG_FILE() = /home/bm/.ansible.cfg
DEFAULT_HOST_LIST(/home/bm/.ansible.cfg) = ['/home/bm/.ansible/inventory.ini']
DEFAULT_ROLES_PATH(/home/bm/.ansible.cfg) = ['/home/bm/.ansible/roles', '/home/bm/code/personal', '/home/bm/code/personal/ansible-roles', '/usr/share/ansible/roles', '/etc/ansible/roles']
DEFAULT_VAULT_PASSWORD_FILE(/home/bm/.ansible.cfg) = /home/bm/.ansible/vault-password
INTERPRETER_PYTHON(/home/bm/.ansible.cfg) = auto_silent
```
### OS / Environment
Arch Linux
Locale Configuration
```sh
$ cat /etc/locale.conf
LANG=en_GB.UTF-8
```
```sh
$ grep -v "#" /etc/locale.gen
de_DE.UTF-8 UTF-8
en_GB.UTF-8 UTF-8
en_GB ISO-8859-1
en_US.UTF-8 UTF-8
```
```sh
$ locale -a
C
C.utf8
de_DE.utf8
en_GB
en_GB.iso88591
en_GB.utf8
en_US.utf8
POSIX
```
```sh
$ locale
locale: Cannot set LC_ALL to default locale: No such file or directory
LANG=en_GB.UTF-8
LC_CTYPE="en_GB.UTF-8"
LC_NUMERIC="en_GB.UTF-8"
LC_TIME=en_DE.UTF-8
LC_COLLATE="en_GB.UTF-8"
LC_MONETARY="en_GB.UTF-8"
LC_MESSAGES="en_GB.UTF-8"
LC_PAPER="en_GB.UTF-8"
LC_NAME="en_GB.UTF-8"
LC_ADDRESS="en_GB.UTF-8"
LC_TELEPHONE="en_GB.UTF-8"
LC_MEASUREMENT="en_GB.UTF-8"
LC_IDENTIFICATION="en_GB.UTF-8"
LC_ALL=
```
### Steps to Reproduce
Run **any** ansible command without explicitly setting LC_ALL
### Expected Results
Ansible to work with the configuration that every other application uses.
### Actual Results
```console
ERROR: Ansible could not initialize the preferred locale: unsupported locale setting
```
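For context, Ansible's startup check boils down to a `setlocale()` call; the failure can be reproduced directly with the standard library. A minimal sketch ("xx_XX.UTF-8" is a deliberately bogus locale name standing in for any locale that was never generated):

```python
import locale

# Setting LC_ALL to a locale that is not generated on the system fails
# with "unsupported locale setting" -- the same underlying error that
# Ansible reports at startup.
try:
    locale.setlocale(locale.LC_ALL, "xx_XX.UTF-8")
    outcome = "succeeded"
except locale.Error as exc:
    outcome = f"failed: {exc}"
print(outcome)
```

In this report, `LC_TIME=en_DE.UTF-8` is requested by the session but `en_DE` does not appear in the generated locale list above, which is consistent with this failure mode.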
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
closed
|
2024-11-29T22:01:12Z
|
2024-12-14T14:00:02Z
|
https://github.com/ansible/ansible/issues/84408
|
[] |
red-lichtie
| 3
|
globaleaks/globaleaks-whistleblowing-software
|
sqlalchemy
| 4,170
|
OperationalError: Database or disk is full on GlobaLeaks Despite Sufficient Disk Space
|
### What version of GlobaLeaks are you using?
v4.14.8
### What browser(s) are you seeing the problem on?
All
### What operating system(s) are you seeing the problem on?
Linux
### Describe the issue
We are encountering a persistent OperationalError on our GlobaLeaks installation, specifically version 4.14.8. The error indicates that the "database or disk is full," despite confirming that there is sufficient disk space available on the server.
Error Traceback:
sqlalchemy.exc.OperationalError Wraps a DB-API OperationalError.
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context
self.dialect.do_execute(
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/default.py", line 736, in do_execute
cursor.execute(statement, parameters)
sqlite3.OperationalError: database or disk is full
...
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) database or disk is full
[SQL: INSERT INTO mail (id, tid, creation_date, address, subject, body) VALUES (?, ?, ?, ?, ?, ?)]
[parameters: ('5daffd23-9c9e-4a02-833f-dcddc6d87fd8', 1, '2024-08-17 22:05:57.571743', 'whistleblowing@plu***.eu', 'GlobaLeaks Exception', ...]
(Background on this error at: https://sqlalche.me/e/14/e3q8)
Environment:
GlobaLeaks Version: 4.14.8
Host: segnalazioni.plu***.it (via Tor and HTTPS)
Operating System: Debian 12
Database: SQLite
Steps to Reproduce:
Operate the GlobaLeaks platform under normal conditions.
The system intermittently triggers the above OperationalError indicating that the database or disk is full.
What we have tried:
Verified that there is sufficient disk space on the server.
Checked file system quotas and disk usage.
Considered the possibility of corruption but found no evidence.
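One more diagnostic avenue worth checking: SQLite can raise "database or disk is full" when its *temporary* storage (journal/temp files) runs out of space, or when a `max_page_count` limit is hit, even though the main filesystem has room. A small sketch of the relevant PRAGMAs (run here against an in-memory database; adapt the connection to the actual GlobaLeaks database file):

```python
import sqlite3

# temp_store: 0 = default, 1 = file, 2 = memory; if temp files go to a
# small partition (e.g. a tight /tmp), writes can fail despite free
# space elsewhere. max_page_count caps the database size directly.
con = sqlite3.connect(":memory:")
temp_store = con.execute("PRAGMA temp_store").fetchone()[0]
page_limit = con.execute("PRAGMA max_page_count").fetchone()[0]
con.close()
print(temp_store, page_limit)
```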
Expected Behavior:
The system should continue to function without triggering this error if sufficient disk space is available.
Potential Cause: Given that the current GlobaLeaks version is 4.14.8, and the latest stable release is 5.0.2, we suspect that this issue might be related to an outdated version of the software. It is possible that this issue has been addressed in subsequent updates.
Request:
We seek confirmation on whether this issue is resolved in later versions.
Any suggestions on mitigating this error while we prepare to upgrade to the latest version would be appreciated.
Next Steps:
We plan to upgrade to version 5.0.2 but would like to understand if this issue is recognized and any recommended steps before proceeding with the upgrade.

|
closed
|
2024-08-27T12:04:02Z
|
2024-08-29T13:12:55Z
|
https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/4170
|
[] |
willskymaker
| 5
|
microsoft/unilm
|
nlp
| 1,373
|
[KOSMOS-2] The visual-pretrained ckpt for kosmos-2 training
|
**Describe**
I want to fine-tune KOSMOS-2. Where can I find the visual-pretrained checkpoint? Thank you very much.
|
closed
|
2023-11-23T09:28:40Z
|
2023-11-23T09:34:03Z
|
https://github.com/microsoft/unilm/issues/1373
|
[] |
bill4689
| 0
|
ydataai/ydata-profiling
|
jupyter
| 1,499
|
Add SECURITY.md
|
Hello 👋
I run a security community that finds and fixes vulnerabilities in OSS. A researcher (@zer0h-bb) has found a potential issue, which I would be eager to share with you.
Could you add a `SECURITY.md` file with an e-mail address for me to send further details to? GitHub [recommends](https://docs.github.com/en/code-security/getting-started/adding-a-security-policy-to-your-repository) a security policy to ensure issues are responsibly disclosed, and it would help direct researchers in the future.
Looking forward to hearing from you 👍
(cc @huntr-helper)
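For reference, a minimal `SECURITY.md` sketch along the lines GitHub recommends (the address is a placeholder to be replaced with a real contact):

```
# Security Policy

## Reporting a Vulnerability

Please report suspected vulnerabilities privately to
security@example.com rather than opening a public issue.
We aim to acknowledge reports promptly and will coordinate
disclosure with the reporter.
```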
|
open
|
2023-11-12T21:30:04Z
|
2023-12-04T19:28:09Z
|
https://github.com/ydataai/ydata-profiling/issues/1499
|
[
"code quality 📈"
] |
psmoros
| 0
|
microsoft/qlib
|
deep-learning
| 1,487
|
Download source data fail when executing python collector.py
|
I am trying to build a customized dataset from k-line data. I execute qlib/scripts/data_collector/yahoo/collector.py using command `python collector.py normalize_data --source_dir ~/.qlib/stock_data/source/cn_1min --normalize_dir ~/.qlib/stock_data/source/cn_1min_nor --region CN --interval 1min`.
I find that this script runs but does not save any files in the cn_1min directory. I am not sure how to debug this, since it does not give any error message.
The terminal output is:
<img width="1902" alt="1681006409747" src="https://user-images.githubusercontent.com/36117319/230750689-c74552a0-77d0-433c-8ae4-bca5c92a20d8.png">
The terminal keeps printing warnings, which I do not understand, and the script seems to finish successfully.
<img width="853" alt="1681006384875" src="https://user-images.githubusercontent.com/36117319/230750682-97ddb5c5-3c47-4e43-8a5e-14528c037835.png">
but no file was saved to target directory:
<img width="1613" alt="1681006553105" src="https://user-images.githubusercontent.com/36117319/230750747-8a0abf91-7e53-4166-83c5-2a74ece75821.png">
|
closed
|
2023-04-09T02:17:18Z
|
2023-07-13T06:02:08Z
|
https://github.com/microsoft/qlib/issues/1487
|
[
"question",
"stale"
] |
ziangqin-stu
| 1
|
pytest-dev/pytest-cov
|
pytest
| 465
|
Ensure COV_CORE_SRC is an absolute path before exporting to the environment
|
# Summary
When COV_CORE_SRC is a relative directory and a subprocess first changes its working directory
before invoking Python, coverage won't associate the collected data with the measured source files.
## Expected vs actual result
Get proper coverage reporting, but coverage is not reported properly.
# Reproducer
* specify the test directory with a relative path, i.e. `bin/py.test src`
* Wrap a subprocess call in a shell script that first changes its working directory before calling `bin/python src/something.py`
## Versions
Output of relevant packages `pip list`, `python --version`, `pytest --version` etc.
```
Python 3.8.5
pytest 6.1.2
pytest-asyncio==0.14.0
pytest-cache==1.0
pytest-cov==2.11.1
pytest-flake8==1.0.6
pytest-timeout==1.4.2
```
## Config
Include your `tox.ini`, `pytest.ini`, `.coveragerc`, `setup.cfg` or any relevant configuration.
```
[run]
branch = True
```
```
[pytest]
addopts = --timeout=30 --tb=native --cov=src --cov-report=html src -r w
markers = slow: This is a non-unit test and thus is not run by default. Use ``-m slow`` to run these, or ``-m 1`` to run all tests.
log_level = NOTSET
filterwarnings =
ignore::DeprecationWarning:telnetlib3.*:
```
## Code
See https://github.com/flyingcircusio/backy/blob/master/src/backy/tests/test_backy.py#L99
I'm currently working around this by explicitly making COV_CORE_SRC absolute before calling the subprocess. I guess this could/should be done in general, too.
```
os.environ['COV_CORE_SOURCE'] = os.path.abspath(
os.environ['COV_CORE_SOURCE'])
```
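A small standalone illustration of the failure mode (`src` here is a hypothetical directory name): `os.path.abspath` resolves against the *current* working directory, so after a wrapper script chdirs, the same relative string points at a different location.

```python
import os
import tempfile

# A relative path resolves against whatever the cwd happens to be at
# the moment of resolution -- which is why COV_CORE_SOURCE must be made
# absolute before any subprocess gets a chance to chdir.
start = os.getcwd()
os.chdir(tempfile.mkdtemp())
resolved = os.path.abspath("src")
os.chdir(start)
print(resolved)
```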
|
open
|
2021-04-26T08:09:28Z
|
2022-11-11T20:29:44Z
|
https://github.com/pytest-dev/pytest-cov/issues/465
|
[] |
ctheune
| 1
|
junyanz/pytorch-CycleGAN-and-pix2pix
|
pytorch
| 1,068
|
How to add loss function only for G_A
|
Hi~ I'm confused: do G_A and G_B use the same loss function? If they do, why do we get two different generators?
Besides, how can I add a loss function only for G_A? I did not find a similar question in the issues.
|
closed
|
2020-06-12T11:57:04Z
|
2020-06-13T00:46:17Z
|
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1068
|
[] |
Iarkii
| 2
|
jkrusina/SoccerPredictor
|
dash
| 20
|
ValueError: Value must be a nonnegative integer or None
|
```py
Traceback (most recent call last):
File "C:\Users\usr\Downloads\SoccerPredictor-master\main.py", line 21, in <module>
pd.set_option("display.max_colwidth", -1)
File "C:\Users\usr\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandas\_config\config.py", line 261, in __call__
return self.__func__(*args, **kwds)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\usr\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandas\_config\config.py", line 160, in _set_option
o.validator(v)
File "C:\Users\usr\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandas\_config\config.py", line 882, in is_nonnegative_int
raise ValueError(msg)
ValueError: Value must be a nonnegative integer or None
C:\Users\usr\Downloads\SoccerPredictor-master>python main.py -h
Traceback (most recent call last):
File "C:\Users\usr\Downloads\SoccerPredictor-master\main.py", line 21, in <module>
pd.set_option("display.max_colwidth", -1)
File "C:\Users\usr\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandas\_config\config.py", line 261, in __call__
return self.__func__(*args, **kwds)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\usr\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandas\_config\config.py", line 160, in _set_option
o.validator(v)
File "C:\Users\usr\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandas\_config\config.py", line 882, in is_nonnegative_int
raise ValueError(msg)
ValueError: Value must be a nonnegative integer or None
```
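The traceback comes from pandas' option validator: since pandas 1.0, `display.max_colwidth` must be a non-negative integer or `None`, so the old `-1`-means-unlimited idiom raises `ValueError`. A minimal fix sketch for the offending line in `main.py` (assuming nothing else depends on the old behavior):

```python
import pandas as pd

# None is the supported way to say "no limit" for max_colwidth in
# modern pandas; -1 is rejected by the option validator.
pd.set_option("display.max_colwidth", None)
```

Alternatively, pinning an older pandas (< 1.0) would keep the original code working, at the cost of running an outdated stack.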
|
open
|
2023-08-29T18:13:46Z
|
2023-08-29T18:13:46Z
|
https://github.com/jkrusina/SoccerPredictor/issues/20
|
[] |
indicts
| 0
|
huggingface/datasets
|
pytorch
| 6,995
|
ImportError when importing datasets.load_dataset
|
### Describe the bug
I encountered an ImportError while trying to import `load_dataset` from the `datasets` module in Hugging Face. The error message indicates a problem with importing 'CommitInfo' from 'huggingface_hub'.
### Steps to reproduce the bug
1. pip install git+https://github.com/huggingface/datasets
2. from datasets import load_dataset
### Expected behavior
ImportError Traceback (most recent call last)
Cell In[7], [line 1](vscode-notebook-cell:?execution_count=7&line=1)
----> [1](vscode-notebook-cell:?execution_count=7&line=1) from datasets import load_dataset
[3](vscode-notebook-cell:?execution_count=7&line=3) train_set = load_dataset("mispeech/speechocean762", split="train")
[4](vscode-notebook-cell:?execution_count=7&line=4) test_set = load_dataset("mispeech/speechocean762", split="test")
File d:\Anaconda3\envs\CS224S\Lib\site-packages\datasets\__init__.py:[1](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/__init__.py:1)7
1 # Copyright 2020 The HuggingFace Datasets Authors and the TensorFlow Datasets Authors.
[2](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/__init__.py:2) #
[3](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/__init__.py:3) # Licensed under the Apache License, Version 2.0 (the "License");
(...)
[12](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/__init__.py:12) # See the License for the specific language governing permissions and
[13](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/__init__.py:13) # limitations under the License.
[15](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/__init__.py:15) __version__ = "2.20.1.dev0"
---> [17](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/__init__.py:17) from .arrow_dataset import Dataset
[18](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/__init__.py:18) from .arrow_reader import ReadInstruction
[19](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/__init__.py:19) from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder
File d:\Anaconda3\envs\CS224S\Lib\site-packages\datasets\arrow_dataset.py:63
[61](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/arrow_dataset.py:61) import pyarrow.compute as pc
[62](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/arrow_dataset.py:62) from fsspec.core import url_to_fs
---> [63](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/arrow_dataset.py:63) from huggingface_hub import (
[64](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/arrow_dataset.py:64) CommitInfo,
[65](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/arrow_dataset.py:65) CommitOperationAdd,
...
[70](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/arrow_dataset.py:70) )
[71](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/arrow_dataset.py:71) from huggingface_hub.hf_api import RepoFile
[72](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/arrow_dataset.py:72) from multiprocess import Pool
ImportError: cannot import name 'CommitInfo' from 'huggingface_hub' (d:\Anaconda3\envs\CS224S\Lib\site-packages\huggingface_hub\__init__.py)
### Environment info
Leo@DESKTOP-9NHUAMI MSYS /d/Anaconda3/envs/CS224S/Lib/site-packages/huggingface_hub
$ datasets-cli env
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "D:\Anaconda3\envs\CS224S\Scripts\datasets-cli.exe\__main__.py", line 4, in <module>
File "D:\Anaconda3\envs\CS224S\Lib\site-packages\datasets\__init__.py", line 17, in <module>
from .arrow_dataset import Dataset
File "D:\Anaconda3\envs\CS224S\Lib\site-packages\datasets\arrow_dataset.py", line 63, in <module>
from huggingface_hub import (
ImportError: cannot import name 'CommitInfo' from 'huggingface_hub' (D:\Anaconda3\envs\CS224S\Lib\site-packages\huggingface_hub\__init__.py)
(CS224S)
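A small diagnostic sketch for this class of error (`has_symbol` is a hypothetical helper, not part of `datasets` or `huggingface_hub`): it checks whether the installed `huggingface_hub` actually exposes `CommitInfo` before `datasets` trips over it at import time. If it returns `False`, upgrading `huggingface_hub` in the same environment is the likely fix.

```python
import importlib

def has_symbol(module_name, symbol):
    """Return True iff module_name imports cleanly and exposes symbol."""
    try:
        mod = importlib.import_module(module_name)
    except ImportError:
        return False
    return hasattr(mod, symbol)

# e.g. has_symbol("huggingface_hub", "CommitInfo") -> False means the
# installed hub version predates the symbol and needs an upgrade.
print(has_symbol("os", "path"))
```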
|
closed
|
2024-06-24T17:07:22Z
|
2024-11-14T01:42:09Z
|
https://github.com/huggingface/datasets/issues/6995
|
[] |
Leo-Lsc
| 9
|
lux-org/lux
|
jupyter
| 439
|
Loading Lux library into StreamLit Cloud
|
**Is your feature request related to a problem? Please describe.**
I am trying to deploy my data project with Lux visualisations on Streamlit Cloud, but I have no idea what to put in the requirements.txt or packages.txt file, so I keep getting deployment errors.
**Describe the solution you'd like**
The ability to get lux working with streamlit cloud.
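For what it's worth, a minimal `requirements.txt` sketch for Streamlit Cloud (package names assumed -- `lux-api` is the PyPI name for Lux -- and versions left unpinned; pin them if deployment still fails on resolution):

```
lux-api
streamlit
pandas
```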
|
closed
|
2021-12-08T22:35:52Z
|
2022-01-06T20:19:40Z
|
https://github.com/lux-org/lux/issues/439
|
[] |
djswoosh
| 2
|
QuivrHQ/quivr
|
api
| 2,985
|
Make Anthropic compatible with Quivr Core
|
Currently Anthropic is not OpenAI API compatible
<img src="https://uploads.linear.app/51e2032d-a488-42cf-9483-a30479d3e2d0/e9a817c8-b304-4009-a1a9-05e3bdde63e7/55401dbc-49f7-43d7-8794-4100976d67e5?signature=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJwYXRoIjoiLzUxZTIwMzJkLWE0ODgtNDJjZi05NDgzLWEzMDQ3OWQzZTJkMC9lOWE4MTdjOC1iMzA0LTQwMDktYTFhOS0wNWUzYmRkZTYzZTcvNTU0MDFkYmMtNDlmNy00M2Q3LTg3OTQtNDEwMDk3NmQ2N2U1IiwiaWF0IjoxNzIzMTUzODE1LCJleHAiOjMzMjkzNzEzODE1fQ.A5N69i8G-wsREE2OTNbeshJKojeBo-in_gCbmo4imgQ " alt="image.png" width="865" height="778" />
[https://docs.anthropic.com/en/api/getting-started](https://docs.anthropic.com/en/api/getting-started)
|
closed
|
2024-08-08T21:50:15Z
|
2024-09-24T07:21:30Z
|
https://github.com/QuivrHQ/quivr/issues/2985
|
[
"enhancement"
] |
StanGirard
| 1
|
robotframework/robotframework
|
automation
| 4,604
|
Listeners do not get source information for keywords executed with `Run Keyword`
|
This was initially reported as a regression (#4599), but it seems source information has never been properly sent to listeners for keywords executed with `Run Keyword` or its variants like `Run Keyword If`.
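To make the symptom concrete, here is a minimal listener sketch in the style of Robot Framework's listener API v2 (method and attribute names assumed from the public docs, simulated here without a Robot run): with `Run Keyword` variants, the `source` entry in the `start_keyword` attributes arrived empty, which is the behavior this issue describes.

```python
class SourceLogger:
    """Records the source reported for each started keyword."""
    ROBOT_LISTENER_API_VERSION = 2

    def __init__(self):
        self.sources = []

    def start_keyword(self, name, attrs):
        self.sources.append(attrs.get("source"))

listener = SourceLogger()
# Simulated attrs as Robot would pass them for a keyword run via
# `Run Keyword`: no usable source information.
listener.start_keyword("BuiltIn.Log", {"source": None})
print(listener.sources)
```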
|
closed
|
2023-01-15T16:39:42Z
|
2023-03-15T12:50:22Z
|
https://github.com/robotframework/robotframework/issues/4604
|
[
"bug",
"priority: medium",
"alpha 1",
"effort: small"
] |
pekkaklarck
| 1
|
pallets/quart
|
asyncio
| 370
|
Program still not closing on Ctrl+C on Windows
|
I still experience the issue that Ctrl+C on Windows does not work with the current version (0.19.8).
This was already reported and addressed in #282, but not fully fixed.
<!--
Describe the expected behavior that should have happened but didn't.
-->
For an example please look at the original issue.
Environment:
- Python version: 3.12
- Quart version: 0.19.8
|
closed
|
2024-11-11T18:16:03Z
|
2024-11-28T00:26:23Z
|
https://github.com/pallets/quart/issues/370
|
[] |
Shadow-Devil
| 1
|
Miksus/rocketry
|
automation
| 150
|
BUG TaskRunnable is missing __str__
|
**Describe the bug**
When using Rocketry with FastAPI, I return all tasks using code such as: `return rocketry_app.session.tasks`.
But if I dynamically create a task (`rocketry_app.session.create_task(...)`) and set its `start_cond` to `conds.cron(...)`, then the endpoint raises an exception when returning the tasks:
`AttributeError: Condition <class 'rocketry.conditions.task.task.TaskRunnable'> is missing __str__.`
**To Reproduce**
Create task in session, set its start cond to future date using conds.cron(...), return session.tasks via FastAPI endpoint (found reference example in Rocketry docs)
**Expected behavior**
No 500 internal error (no AttributeError)
I expect it to print something, to stringify the condition
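A sketch of the delegation pattern that produces this kind of error (not Rocketry's actual code -- class and method names here are illustrative): a base class whose `__str__` raises when a subclass forgets to provide its stringification hook.

```python
class BaseCondition:
    def __str__(self):
        hook = getattr(self, "_str", None)
        if hook is None:
            raise AttributeError(
                f"Condition {type(self)} is missing __str__."
            )
        return hook()

class TaskRunnable(BaseCondition):
    pass  # no _str hook -> str() raises, as in the report

class Cron(BaseCondition):
    def _str(self):
        return "cron('0 0 * * *')"

print(str(Cron()))
```

The fix in the linked pull request amounts to giving the missing subclass its own string representation.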
**Screenshots**
I am sorry, no screenshots.
**Desktop (please complete the following information):**
- OS: Windows10
- Python version 3.10.4
**Additional context**
I am creating a pull request to fix this issue. Link:
[pull request](https://github.com/Miksus/rocketry/pull/149)
|
closed
|
2022-11-21T12:44:44Z
|
2022-11-28T21:04:44Z
|
https://github.com/Miksus/rocketry/issues/150
|
[
"bug"
] |
egisxxegis
| 2
|
tqdm/tqdm
|
jupyter
| 981
|
Minor documentation issue: allmychanges.com is dead(?)
|
There's a link here to a defunct website (allmychanges.com): https://github.com/tqdm/tqdm#changelog
|
closed
|
2020-06-01T18:53:20Z
|
2020-06-28T22:25:10Z
|
https://github.com/tqdm/tqdm/issues/981
|
[
"question/docs ‽",
"to-merge ↰"
] |
roger-
| 1
|
ultralytics/ultralytics
|
deep-learning
| 19,111
|
YOLO with dinov2 as backbone
|
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hello @Y-T-G ,
I saw your code to support different backbones from torchvision. Could you please provide me with some guidance on how to implement YOLO with DINOv2?
### Additional
_No response_
|
open
|
2025-02-06T23:57:42Z
|
2025-02-14T13:34:43Z
|
https://github.com/ultralytics/ultralytics/issues/19111
|
[
"enhancement",
"question"
] |
SebastianJanampa
| 6
|
pytorch/pytorch
|
machine-learning
| 149,472
|
torch.compile(mode="max-autotune") produces different outputs from eager mode
|
### 🐛 Describe the bug
I'm encountering a result mismatch between eager mode and `torch.compile(mode="max-autotune")`.
The outputs differ beyond acceptable tolerances (e.g., `torch.allclose` fails), and this behavior persists in both stable and nightly builds.
### Related Discussion
I initially posted this issue on the PyTorch discussion forum, but have not received a resolution so far.
Here is the link to the original thread:
https://discuss.pytorch.org/t/torch-compile-mode-max-autotune-produces-different-inference-result-from-eager-mode-is-this-expected/217873
Since this appears to be a reproducible and version-independent issue, I'm now submitting it here as a formal GitHub issue.
### Versions
- PyTorch 2.5.1 (original test)
- PyTorch 2.6.0.dev20241112+cu121 (nightly)
- CUDA 12.1
- Platform: Ubuntu 22.04.4 LTS
### Output
=== Detailed comparison ===
- Total number of elements: 3,211,264
- Max absolute error: 0.00128412
- Mean absolute error: 0.000100889
- Max relative error: 23,868.7
- Mean relative error: 0.285904
- Number of elements exceeding tolerance: 98,102
- Percentage of out-of-tolerance elements: 3.05%
- Result of torch.allclose(output_eager, output_compiled, atol=1e-5): False
### Model
Here is my model:
```python
import torch.nn as nn
class BaseConv(nn.Module):
def __init__(self, in_channels, out_channels, kernel_size, stride, padding, conv_layer):
super().__init__()
self.conv = conv_layer(in_channels, out_channels, kernel_size=kernel_size, stride=stride, padding=padding, bias=False)
def forward(self, x):
return self.conv(x)
class ActivatedConv(BaseConv):
def __init__(self, in_channels, out_channels, kernel_size, stride, padding, conv_layer, activation):
super().__init__(in_channels, out_channels, kernel_size, stride, padding, conv_layer)
self.activation = activation
def forward(self, x):
return self.activation(self.conv(x))
class NormalizedConv(ActivatedConv):
def __init__(self, in_channels, out_channels, kernel_size, stride, padding, conv_layer, norm, activation):
super().__init__(in_channels, out_channels, kernel_size, stride, padding, conv_layer, activation)
self.norm = norm(out_channels)
def forward(self, x):
return self.activation(self.norm(self.conv(x)))
class Conv2DBNReLU(NormalizedConv):
def __init__(self, in_channels, out_channels, kernel_size, stride, padding):
super().__init__(in_channels, out_channels, kernel_size, stride, padding, nn.Conv2d, nn.BatchNorm2d, nn.ReLU())
class MyModel(nn.Module):
def __init__(self, in_channels=3, out_channels=64, kernel_size=3, stride=1, padding=1):
super().__init__()
self.conv1 = Conv2DBNReLU(in_channels, out_channels, kernel_size, stride, padding)
def forward(self, x):
return self.conv1(x)
def my_model_function(in_channels=3, out_channels=64, kernel_size=3, stride=1, padding=1):
return MyModel(in_channels, out_channels, kernel_size, stride, padding)
if __name__ == "__main__":
model = my_model_function()
print(model)
```
### Minimal Script
And this is a minimal script that reproduces the issue:
```python
import torch
import importlib.util
import os
def load_model_from_file(module_path, model_function_name="my_model_function"):
model_file = os.path.basename(module_path)[:-3]
spec = importlib.util.spec_from_file_location(model_file, module_path)
model_module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(model_module)
model_function = getattr(model_module, model_function_name)
model = model_function()
return model
def compare_outputs(a: torch.Tensor, b: torch.Tensor, atol=1e-5, rtol=1e-3):
print("=== Output difference comparison ===")
diff = a - b
abs_diff = diff.abs()
rel_diff = abs_diff / (a.abs() + 1e-8)
total_elements = a.numel()
print(f"- Total elements: {total_elements}")
print(f"- Max absolute error: {abs_diff.max().item():.8f}")
print(f"- Mean absolute error: {abs_diff.mean().item():.8f}")
print(f"- Max relative error: {rel_diff.max().item():.8f}")
print(f"- Mean relative error: {rel_diff.mean().item():.8f}")
num_exceed = (~torch.isclose(a, b, atol=atol, rtol=rtol)).sum().item()
print(f"- Elements exceeding tolerance: {num_exceed}")
print(f"- Percentage exceeding tolerance: {100.0 * num_exceed / total_elements:.4f}%")
print(f"- torch.allclose: {torch.allclose(a, b, atol=atol, rtol=rtol)}")
if __name__ == "__main__":
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
input_tensor = torch.rand(1, 3, 224, 224, device=device)
model_path = "xxx/xxx/xxx/xxx.py"
model = load_model_from_file(model_path).to(device).eval()
with torch.no_grad():
output_eager = model(input_tensor)
compiled_model = torch.compile(model, mode="max-autotune")
with torch.no_grad():
output_compiled = compiled_model(input_tensor)
compare_outputs(output_eager, output_compiled)
```
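One note on interpreting the numbers above: the enormous max *relative* error is typical when the reference tensor contains near-zero elements, since a tiny absolute difference divided by a tiny magnitude explodes. A toy illustration (values assumed for demonstration):

```python
import numpy as np

# A 1e-8 absolute gap on a ~1e-8 element yields a relative error of
# 0.5, while the same formula on an O(1) element stays near 1e-6.
a = np.array([1e-8, 1.0])
b = np.array([2e-8, 1.0 + 1e-6])
rel = np.abs(a - b) / (np.abs(a) + 1e-8)
print(rel)
```

This does not explain the 3% of elements exceeding the absolute tolerance, but it suggests the mean/max relative errors alone are not strong evidence of a bug.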
### Versions
### Nightly
```
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.1.105
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] pytorch-triton==3.1.0+cf34004b8a
[pip3] torch==2.6.0.dev20241112+cu121
[pip3] torchaudio==2.5.0.dev20241112+cu121
[pip3] torchvision==0.20.0.dev20241112+cu121
[conda] numpy 2.1.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] pytorch-triton 3.1.0+cf34004b8a pypi_0 pypi
[conda] torch 2.6.0.dev20241112+cu121 pypi_0 pypi
[conda] torchaudio 2.5.0.dev20241112+cu121 pypi_0 pypi
[conda] torchvision 0.20.0.dev20241112+cu121 pypi_0 pypi
```
### Original
```
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.5.3.2
[pip3] nvidia-cuda-cupti-cu12==12.5.82
[pip3] nvidia-cuda-nvrtc-cu12==12.5.82
[pip3] nvidia-cuda-runtime-cu12==12.5.82
[pip3] nvidia-cudnn-cu12==9.3.0.75
[pip3] nvidia-cufft-cu12==11.2.3.61
[pip3] nvidia-curand-cu12==10.3.6.82
[pip3] nvidia-cusolver-cu12==11.6.3.83
[pip3] nvidia-cusparse-cu12==12.5.1.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.5.82
[pip3] optree==0.13.1
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchdata==0.10.0
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] blas 1.0 mkl defaults
[conda] cuda-cudart 12.1.105 0 nvidia
[conda] cuda-cupti 12.1.105 0 nvidia
[conda] cuda-libraries 12.1.0 0 nvidia
[conda] cuda-nvrtc 12.1.105 0 nvidia
[conda] cuda-nvtx 12.1.105 0 nvidia
[conda] cuda-opencl 12.6.77 0 nvidia
[conda] cuda-runtime 12.1.0 0 nvidia
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcublas 12.1.0.26 0 nvidia
[conda] libcufft 11.0.2.4 0 nvidia
[conda] libcurand 10.3.7.77 0 nvidia
[conda] libcusolver 11.4.4.55 0 nvidia
[conda] libcusparse 12.0.2.55 0 nvidia
[conda] liblapack 3.9.0 16_linux64_mkl conda-forge
[conda] libnvjitlink 12.1.105 0 nvidia
[conda] mkl 2022.1.0 hc2b9512_224 defaults
[conda] numpy 1.26.4 py310hb13e2d6_0 conda-forge
[conda] nvidia-cublas-cu12 12.5.3.2 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.5.82 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.5.82 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.5.82 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.3.0.75 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.3.61 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.6.82 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.3.83 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.1.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.5.82 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] pytorch 2.5.1 py3.10_cuda12.1_cudnn9.1.0_0 pytorch
[conda] pytorch-cuda 12.1 ha16c6d3_6 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.5.1 py310_cu121 pytorch
[conda] torchdata 0.10.0 pypi_0 pypi
[conda] torchtriton 3.1.0 py310 pytorch
[conda] torchvision 0.20.1 py310_cu121 pytorch
```
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov
|
open
|
2025-03-19T02:15:53Z
|
2025-03-24T11:21:07Z
|
https://github.com/pytorch/pytorch/issues/149472
|
[
"triaged",
"oncall: pt2",
"module: inductor",
"topic: fuzzer"
] |
tinywisdom
| 3
|
deepinsight/insightface
|
pytorch
| 2,095
|
Why the params of IResNet50 is larger than ResNet50?
|
IResNet50 vs ResNet50 (in object detection): 43.77M vs 33.71M parameters
|
open
|
2022-09-01T15:53:20Z
|
2022-09-01T16:36:34Z
|
https://github.com/deepinsight/insightface/issues/2095
|
[] |
Icecream-blue-sky
| 1
|
benbusby/whoogle-search
|
flask
| 904
|
[BUG] Most of the search terms are not bold in Chinese results
|
**Describe the bug**
Most of the search terms are not bold in Chinese results.
Whoogle results:
<img width="819" alt="Screenshot 2022-12-11 at 6 43 14 PM" src="https://user-images.githubusercontent.com/33184148/206899969-b2d72332-7eee-44ca-88ec-b126492d361e.png">
Google results for reference:
<img width="936" alt="Screenshot 2022-12-11 at 6 43 21 PM" src="https://user-images.githubusercontent.com/33184148/206899990-c2edcc1f-399a-4624-8e94-a3fe589db406.png">
**To Reproduce**
Search "新聞" or any other term in Chinese.
**Deployment Method**
- [x] Docker
**Version of Whoogle Search**
- [x] Latest build from [source] (i.e. GitHub, Docker Hub, pip, etc)
**Desktop (please complete the following information):**
- OS: MacOS and iOS
- Browser: Safari
|
closed
|
2022-12-11T11:09:16Z
|
2023-01-09T19:54:43Z
|
https://github.com/benbusby/whoogle-search/issues/904
|
[
"bug"
] |
whaler-ragweed
| 6
|
miguelgrinberg/Flask-SocketIO
|
flask
| 1,107
|
emit message from server to client on a particular time using python-socketio
|
I want to emit a message at a particular time to a particular client. Is that possible? And is it possible to keep the Socket.IO connection between the client and the server open until that time?
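One possible approach, sketched with the standard library rather than Flask-SocketIO's own API: schedule the emit on a timer. In a real app, `emit_fn` would be something like `lambda: socketio.emit('event', data, to=sid)` (names assumed), and the client must still be connected when the timer fires.

```python
import threading

def emit_at(delay_seconds, emit_fn, *args):
    """Schedule emit_fn(*args) after delay_seconds on a timer thread."""
    timer = threading.Timer(delay_seconds, emit_fn, args=args)
    timer.start()
    return timer

received = []
t = emit_at(0.05, received.append, "scheduled message")
t.join()  # wait for the timer to fire (demo only)
print(received)
```

For production use, Flask-SocketIO's background-task facilities are a better fit than raw threads, since they cooperate with the server's async mode.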
|
closed
|
2019-11-22T08:56:55Z
|
2020-06-30T22:51:59Z
|
https://github.com/miguelgrinberg/Flask-SocketIO/issues/1107
|
[
"question"
] |
harsha2041
| 7
|
ghtmtt/DataPlotly
|
plotly
| 146
|
Add data defined property 'Use feature subset'
|
I would like to suggest to add a data defined property named 'Use feature subset' below the layer selection combo box. It is applied to filter the features in the layer and can replace the 'use only selected features' checkbox.
|
closed
|
2019-10-15T11:20:08Z
|
2019-10-25T00:02:08Z
|
https://github.com/ghtmtt/DataPlotly/issues/146
|
[
"enhancement"
] |
SGroe
| 1
|
lk-geimfari/mimesis
|
pandas
| 1,096
|
Publish v5.0.0 version to pypi
|
# Release request
## Version Info
- version: 5.0.0
## Expected
- Version 5.0.0 appears in [mimesis release history](https://pypi.org/project/mimesis/#history) of pypi
|
closed
|
2021-09-26T09:00:03Z
|
2022-01-04T13:40:11Z
|
https://github.com/lk-geimfari/mimesis/issues/1096
|
[] |
blakegao
| 5
|
Anjok07/ultimatevocalremovergui
|
pytorch
| 1,006
|
Remove Guitar
|
Hello, I'm new to this world of mixing.
Is there any way, through this software, to remove just the guitar from a song?
I know how to remove the drums, but I would like to learn how to remove the guitar.
Can anyone help me and the community with some advice?
Thanks.
|
closed
|
2023-12-05T23:54:47Z
|
2023-12-10T23:29:47Z
|
https://github.com/Anjok07/ultimatevocalremovergui/issues/1006
|
[] |
DEV7Kadu
| 2
|
deepfakes/faceswap
|
deep-learning
| 1,202
|
faceswap graph crashes
|
The training module of faceswap has a line graph of training statistics. Now when I click the graph, the program crashes and exits.
|
closed
|
2022-01-01T23:46:30Z
|
2022-06-30T09:52:22Z
|
https://github.com/deepfakes/faceswap/issues/1202
|
[] |
wangyifan349
| 3
|
explosion/spaCy
|
nlp
| 13,673
|
`spacy download nl_core_news_sm` downgrades transformers installation
|
Since models are in fact pip modules, it makes sense that they have their own dependencies. However, I was very surprised to find out that `nl_core_news_sm` required me to downgrade my `transformers` version. I am running on the main branch of `transformers`, ahead of 4.45.2 (`4.46.0.dev0`), yet when installing `nl-core-news-sm` I find the transformers version downgraded to the pip release. To me that sounds like a bug, but maybe this is intended behavior to avoid conflicts for the average user.
As a power user that restriction is a bit too strong, though. As per semver, no breaking changes ought to be introduced with minor version bumps so it would be surprising to see major shifts. (Disclaimer: I'm not sure how closely HF follows semver.)
spaCy version 3.8.2
Location /home/local/vanroy/defgen/.venv/lib/python3.10/site-packages/spacy
Platform Linux-5.14.0-427.20.1.el9_4.x86_64-x86_64-with-glibc2.34
Python version 3.10.15
Pipelines nl_core_news_sm (3.8.0)
If the restriction cannot be relaxed a bit, do you have another suggestion to bypass this? I am willing to build things from source if needed, though I am not sure how to do that with the model files.
|
open
|
2024-10-19T21:45:34Z
|
2024-10-19T21:45:34Z
|
https://github.com/explosion/spaCy/issues/13673
|
[] |
BramVanroy
| 0
|
astrofrog/mpl-scatter-density
|
matplotlib
| 15
|
Does not work with inline matplotlib on Jupyter-Notebooks
|
MWE:
In a Jupyter Notebook,
```
%matplotlib inline
import mpl_scatter_density
import numpy as np
import matplotlib.pyplot as plt
N = 10000000
x = np.random.normal(4, 2, N)
y = np.random.normal(3, 1, N)
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1, projection='scatter_density')
ax.scatter_density(x, y)
ax.set_xlim(-5, 10)
ax.set_ylim(-5, 10)
```
throws `TypeError: 'NoneType' object is not iterable`.
Using Jupyter Notebook version `5.0.0`, `matplotlib` version `2.1.0` on Google Chrome on MacOS 10.13.4.
However, works fine with `%matplotlib notebook`.
|
open
|
2018-04-20T20:36:29Z
|
2018-06-19T22:04:47Z
|
https://github.com/astrofrog/mpl-scatter-density/issues/15
|
[
"bug"
] |
ijoseph
| 3
|