| repo_name (string, 9–75) | topic (string, 30 classes) | issue_number (int64, 1–203k) | title (string, 1–976) | body (string, 0–254k) | state (string, 2 classes) | created_at (string, 20) | updated_at (string, 20) | url (string, 38–105) | labels (list, 0–9) | user_login (string, 1–39) | comments_count (int64, 0–452) |
|---|---|---|---|---|---|---|---|---|---|---|---|
Textualize/rich
|
python
| 3,490
|
[BUG] rich.progress.track with stderr Console breaks print builtin
|
- [x] I've checked [docs](https://rich.readthedocs.io/en/latest/introduction.html) and [closed issues](https://github.com/Textualize/rich/issues?q=is%3Aissue+is%3Aclosed) for possible solutions.
- [x] I can't find my issue in the [FAQ](https://github.com/Textualize/rich/blob/master/FAQ.md).
**Describe the bug**
While using `rich.progress.track` configured with a `Console` instantiated with `stderr=True`, print statements also go to stderr.
I would expect `rich` to not hijack the default behavior of `print`.
I understand I myself can use `console.print()` in my own code but I have no control over 3rd party plugins or libraries.
> Provide a minimal code example that demonstrates the issue if you can. If the issue is visual in nature, consider posting a screenshot.
```python
#!/usr/bin/env python
from rich.console import Console
from rich.progress import track
err_console = Console(stderr=True)
console = Console()
if __name__ == "__main__":
for i in track(range(10), console=err_console):
if i % 4 == 0:
print(f"foo {i}")
for i in range(10, 20):
if i % 4 == 0:
print(f"foo {i}")
```
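For context on why the builtin `print` is affected at all: while a progress display is live, Rich swaps `sys.stdout`/`sys.stderr` for file-like proxies so it can repaint output around the bar. The lower-level `Progress` class exposes `redirect_stdout`/`redirect_stderr` flags controlling this, though `track` does not appear to expose them. A stdlib-only sketch of the mechanism (not Rich's actual code):

```python
import contextlib
import io

# While the context is active, the *global* print builtin writes to the
# proxy object instead of the real stdout -- which is why third-party
# code is affected even though it never imported rich.
proxy = io.StringIO()
with contextlib.redirect_stdout(proxy):
    print("foo 0")  # captured by the proxy, not shown on the terminal

assert proxy.getvalue() == "foo 0\n"
```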

**Platform**
<details>
<summary>Click to expand</summary>
> What platform (Win/Linux/Mac) are you running on? What terminal software are you using?
>
> I may ask you to copy and paste the output of the following commands. It may save some time if you do it now.
>
>
> If you're using Rich in a terminal:
>
> ```
> python -m rich.diagnose
> pip freeze | grep rich
> ```
Linux VM via WSL2.
```text
$ python -m rich.diagnose
╭─────────────────────── <class 'rich.console.Console'> ───────────────────────╮
│ A high level console interface.                                               │
│                                                                               │
│ ╭───────────────────────────────────────────────────────────────────────────╮ │
│ │ <console width=80 ColorSystem.EIGHT_BIT>                                  │ │
│ ╰───────────────────────────────────────────────────────────────────────────╯ │
│                                                                               │
│     color_system = '256'                                                      │
│         encoding = 'utf-8'                                                    │
│             file = <_io.TextIOWrapper name='<stdout>' mode='w'                │
│                    encoding='utf-8'>                                          │
│           height = 41                                                         │
│    is_alt_screen = False                                                      │
│ is_dumb_terminal = False                                                      │
│   is_interactive = True                                                       │
│       is_jupyter = False                                                      │
│      is_terminal = True                                                       │
│   legacy_windows = False                                                      │
│         no_color = False                                                      │
│          options = ConsoleOptions(                                            │
│                        size=ConsoleDimensions(width=80, height=41),           │
│                        legacy_windows=False,                                  │
│                        min_width=1,                                           │
│                        max_width=80,                                          │
│                        is_terminal=True,                                      │
│                        encoding='utf-8',                                      │
│                        max_height=41,                                         │
│                        justify=None,                                          │
│                        overflow=None,                                         │
│                        no_wrap=False,                                         │
│                        highlight=None,                                        │
│                        markup=None,                                           │
│                        height=None                                            │
│                    )                                                          │
│            quiet = False                                                      │
│           record = False                                                      │
│         safe_box = True                                                       │
│             size = ConsoleDimensions(width=80, height=41)                     │
│        soft_wrap = False                                                      │
│           stderr = False                                                      │
│            style = None                                                       │
│         tab_size = 8                                                          │
│            width = 80                                                         │
╰───────────────────────────────────────────────────────────────────────────────╯
╭──── <class 'rich._windows.WindowsConsoleFeatures'> ────╮
│ Windows features available.                            │
│                                                        │
│ ╭────────────────────────────────────────────────────╮ │
│ │ WindowsConsoleFeatures(vt=False, truecolor=False)  │ │
│ ╰────────────────────────────────────────────────────╯ │
│                                                        │
│ truecolor = False                                      │
│        vt = False                                      │
╰────────────────────────────────────────────────────────╯
╭────── Environment Variables ───────╮
│ {                                  │
│     'TERM': 'xterm-256color',      │
│     'COLORTERM': None,             │
│     'CLICOLOR': None,              │
│     'NO_COLOR': None,              │
│     'TERM_PROGRAM': None,          │
│     'COLUMNS': None,               │
│     'LINES': None,                 │
│     'JUPYTER_COLUMNS': None,       │
│     'JUPYTER_LINES': None,         │
│     'JPY_PARENT_PID': None,        │
│     'VSCODE_VERBOSE_LOGGING': None │
│ }                                  │
╰────────────────────────────────────╯
platform="Linux"
```
</details>
|
closed
|
2024-09-12T16:10:55Z
|
2024-09-30T14:45:59Z
|
https://github.com/Textualize/rich/issues/3490
|
[
"Needs triage"
] |
ericfrederich
| 3
|
flasgger/flasgger
|
flask
| 329
|
RESTful Resource - $ref does not work when used in parameters (Flask RESTful)
|
I'm using Flask-RESTful and I want POST requests validated using `Flasgger(parse=True)`.
If $ref is used in schema:
```yaml
parameters:
- in: body
name: body
schema:
$ref: '#/definitions/Target'
```
I get:
```
Error: BAD REQUEST
Unresolvable JSON pointer: u'definitions/Target'
```
However, `Target` is among `definitions` in the apispec (`http://localhost:5000/apispec_v1.json`) and is correctly rendered all around.
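The error message means the validator tried to dereference the pointer `#/definitions/Target` against a document that has no `definitions` key — likely the isolated request schema rather than the full apispec. A stdlib-only sketch of what resolving such a pointer involves (`spec` is a made-up minimal document, not flasgger's internals):

```python
def resolve_json_pointer(document, pointer):
    """Resolve a '#/a/b' style JSON pointer against a dict document."""
    parts = pointer.lstrip("#").strip("/").split("/")
    node = document
    for part in parts:
        if part not in node:
            raise KeyError(f"Unresolvable JSON pointer: {pointer!r}")
        node = node[part]
    return node

spec = {"definitions": {"Target": {"type": "object"}}}
assert resolve_json_pointer(spec, "#/definitions/Target") == {"type": "object"}

# Resolving against a document that lacks 'definitions' reproduces the error:
try:
    resolve_json_pointer({}, "#/definitions/Target")
except KeyError as exc:
    print(exc)
```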
|
open
|
2019-08-29T17:24:33Z
|
2021-11-02T07:49:18Z
|
https://github.com/flasgger/flasgger/issues/329
|
[] |
vasekch
| 2
|
pydantic/pydantic-ai
|
pydantic
| 755
|
Improvement: Conform to message start spec for anthropic
|
See https://github.com/pydantic/pydantic-ai/pull/684#issuecomment-2595299422 and https://github.com/pydantic/pydantic-ai/pull/684#issuecomment-2610834552 for context.
https://github.com/pydantic/pydantic-ai/pull/684 implemented initial support for anthropic streaming.
|
open
|
2025-01-23T19:37:12Z
|
2025-01-23T19:37:12Z
|
https://github.com/pydantic/pydantic-ai/issues/755
|
[] |
sydney-runkle
| 0
|
napari/napari
|
numpy
| 7,068
|
Consider putting Open Sample in its own block or in the top block
|
## 🧰 Task
With all the new options in the File menu, it's a bit harder to quickly find the Open Sample menu, which is super useful for demos or testing:
<img width="357" alt="image" src="https://github.com/napari/napari/assets/76622105/b292fbec-7a01-4e6e-8011-715c77718e15">
Maybe we should think about moving it to its own block between the New... and Open... blocks?
|
closed
|
2024-07-05T15:46:03Z
|
2024-07-06T07:17:25Z
|
https://github.com/napari/napari/issues/7068
|
[
"task"
] |
psobolewskiPhD
| 7
|
geex-arts/django-jet
|
django
| 213
|
Help text on admin not working
|
Hi there, I am using Jet with a TabularInline and a help_text. The help-text icon shows and the content is there, but I've tried hovering over the arrow and clicking it, and nothing works.
Thanks,
Hélio
|
open
|
2017-05-16T14:42:11Z
|
2017-08-20T16:53:32Z
|
https://github.com/geex-arts/django-jet/issues/213
|
[] |
helioascorreia
| 13
|
jupyter-book/jupyter-book
|
jupyter
| 1,935
|
Error in glue documentation
|
### Describe the bug
**context**
The [documentation for glue](https://github.com/executablebooks/jupyter-book/blob/master/docs/content/executable/output-insert.md?plain=1#L103-L110) says:
````
```{code-cell} ipython3
glue("boot_chi", chi)
```
By default, `glue` will display the value of the variable you are gluing. This
is useful for sanity-checking its value at glue-time. If you'd like to **prevent display**,
use the `display=False` option. Note that below, we also *overwrite* the value of
`boot_chi` (but using the same value):
```{code-cell} ipython3
glue("boot_chi_notdisplayed", chi, display=False)
```
````
This looks like an error because the variable name is changed in the second glue, so I would expect this to **not** overwrite the value.
I tested
````
```{code-cell}
from datetime import date
today_string = date.today().isoformat()
glue('today',today_string,display=False)
glue('today_notdisplayed',"not today",display=False)
```
... as of {glue:}`today`
````
and got the correct output.
- [rendered](https://introcompsys.github.io/spring2023/syllabus/badges.html#prepare-work-and-experience-badges:~:text=shows%2C%20as%20of-,%272023%2D02%2D20%27,-%2C%20the%20number%20of)
- [in repo](https://github.com/introcompsys/spring2023/blob/38e9bfa28c0dc5c920e3f19b9ca4a7c1db9fde64/syllabus/badges.md?plain=1#L131-L138)
**problem**
Glue works as expected, but the documentation is not correct.
**solution**
I'm happy to PR a solution, but it is unclear if there is a reason to demonstrate overwriting. That is, two possible solutions are:
Do not mention overwriting:
````
```{code-cell} ipython3
glue("boot_chi", chi)
```
By default, `glue` will display the value of the variable you are gluing. This
is useful for sanity-checking its value at glue-time. If you'd like to **prevent display**,
use the `display=False` option.
```{code-cell} ipython3
glue("boot_chi_notdisplayed", chi, display=False)
```
````
actually overwrite
````
```{code-cell} ipython3
glue("boot_chi", chi)
```
By default, `glue` will display the value of the variable you are gluing. This
is useful for sanity-checking its value at glue-time. If you'd like to **prevent display**,
use the `display=False` option. Note that below, we also *overwrite* the value of
`boot_chi` (but using the same value):
```{code-cell} ipython3
glue("boot_chi", chi, display=False)
```
````
If the latter, I think it would make more sense to change the value, but it's not clear why demonstrating overwriting matters at all.
### Reproduce the bug
all above
### List your environment
```
Jupyter Book : 0.13.1
External ToC : 0.2.4
MyST-Parser : 0.15.2
MyST-NB : 0.13.2
Sphinx Book Theme : 0.3.3
Jupyter-Cache : 0.4.3
NbClient : 0.5.0
```
also repeats on whatever is latest as per install on github with no versions specified [workflow run](https://github.com/introcompsys/spring2023/actions/runs/4225586263)
|
open
|
2023-02-20T17:59:29Z
|
2023-02-20T17:59:29Z
|
https://github.com/jupyter-book/jupyter-book/issues/1935
|
[
"bug"
] |
brownsarahm
| 0
|
mwaskom/seaborn
|
matplotlib
| 3,643
|
Hi, we need seaborn for C++
|
closed
|
2024-02-29T04:10:16Z
|
2024-03-01T01:22:09Z
|
https://github.com/mwaskom/seaborn/issues/3643
|
[] |
mullerhai
| 2
|
|
matterport/Mask_RCNN
|
tensorflow
| 2,256
|
Training on Custom Dataset (JSON Files)
|
I have an annotated dataset which consists of 1k+ JSON files that I want to train on. Each file contains annotations like the following:
```json
[
{
"#": 1,
"Taxonomy": "",
"Class": "Leaves",
"File Folder": "",
"Filename": "POT.jpg",
"Resolution": "2736x2192",
"Top": 0,
"Left": 0,
"Width": 2736,
"Height": 2192,
"Tag": "URL_of_the_tag",
"Mask": "URL_of_the_mask",
"Minimal Top": 0,
"Minimal Left": 0,
"Minimal Width": 2736,
"Minimal Height": 2192
}
]
```
How do I proceed to start training from a pre-trained model? I have looked into demo.ipynb and train_shapes.ipynb and got a rough idea of how it works; I am trying to learn more about it.
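As a starting point for a custom `Dataset` subclass, the per-file annotations above can be read into the `(y1, x1, y2, x2)` box order that Mask R-CNN uses internally. A stdlib-only sketch under that assumption (field names taken from the sample record; the `Mask` URLs would still need to be fetched separately):

```python
import json

def load_boxes(annotation_text):
    """Parse one annotation file into (class_name, (y1, x1, y2, x2)) tuples."""
    records = json.loads(annotation_text)
    boxes = []
    for rec in records:
        top, left = rec["Top"], rec["Left"]
        # Convert Top/Left/Width/Height into corner coordinates
        boxes.append((rec["Class"],
                      (top, left, top + rec["Height"], left + rec["Width"])))
    return boxes

sample = '[{"Class": "Leaves", "Top": 0, "Left": 0, "Width": 2736, "Height": 2192}]'
assert load_boxes(sample) == [("Leaves", (0, 0, 2192, 2736))]
```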
|
open
|
2020-06-25T15:05:06Z
|
2020-06-25T19:26:02Z
|
https://github.com/matterport/Mask_RCNN/issues/2256
|
[] |
anmspro
| 1
|
coqui-ai/TTS
|
python
| 3,653
|
[Bug] Cannot use Docker image
|
### Describe the bug
We cannot use the instructions for [instructions related to the Docker image](https://github.com/coqui-ai/TTS/tree/dev?tab=readme-ov-file#docker-image):
```bash
docker run --rm -it -p 5002:5002 --entrypoint /bin/bash ghcr.io/coqui-ai/tts-cpu
```
as we are getting:
```
docker: Error response from daemon: Head "https://ghcr.io/v2/coqui-ai/tts-cpu/manifests/latest": denied: denied.
```
### To Reproduce
```bash
docker run --rm -it -p 5002:5002 --entrypoint /bin/bash ghcr.io/coqui-ai/tts-cpu
```
### Expected behavior
No errors
### Logs
```shell
docker: Error response from daemon: Head "https://ghcr.io/v2/coqui-ai/tts-cpu/manifests/latest": denied: denied.
```
### Environment
```shell
- Docker Version: `Docker version 19.03.8, build afacb8b7f0`
- OS: `Windows Subsystem Linux Debian`
- Git Commit Hash: `dbf1a08a0d4e47fdad6172e433eeb34bc6b13b4e`
```
### Additional context
_No response_
|
closed
|
2024-03-29T07:00:07Z
|
2024-06-26T16:49:17Z
|
https://github.com/coqui-ai/TTS/issues/3653
|
[
"bug",
"wontfix"
] |
rtodea
| 1
|
K3D-tools/K3D-jupyter
|
jupyter
| 330
|
Is there any way we can use k3d in flutter, has anyone tried it?
|
I was working on a mobile application using Flutter and wanted to use K3D as a visualization library. If yes, can anyone explain how?
|
closed
|
2022-03-12T18:31:54Z
|
2022-05-16T05:51:55Z
|
https://github.com/K3D-tools/K3D-jupyter/issues/330
|
[] |
Arhan13
| 1
|
Avaiga/taipy
|
automation
| 2,127
|
GitHub action to prevent merging an incomplete PR
|
Create a GitHub action that prevents a PR from being merged if its checklist is not completed:
https://github.com/marketplace/actions/task-completed-checker
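The linked action essentially scans the PR body for unchecked Markdown task-list items and fails the check while any remain. A stdlib sketch of that core test (not the action's actual source):

```python
import re

def unchecked_items(pr_body):
    """Return the text of every unchecked '- [ ]' task-list item."""
    return re.findall(r"^\s*[-*]\s+\[ \]\s+(.+)$", pr_body, re.MULTILINE)

body = "- [x] Tests added\n- [ ] Changelog updated\n"
assert unchecked_items(body) == ["Changelog updated"]
```

A CI job would then fail whenever `unchecked_items(...)` is non-empty.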
|
open
|
2024-10-22T08:12:59Z
|
2024-10-22T08:12:59Z
|
https://github.com/Avaiga/taipy/issues/2127
|
[
"๐ง Devops",
"๐จ Priority: Medium",
"๐ Staff only"
] |
jrobinAV
| 0
|
ultralytics/ultralytics
|
pytorch
| 19,508
|
resume in YOLOv8
|
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
When I was using resume on a YOLOv8-based network, hoping to continue training, I encountered a problem: it did not start training from where it left off last time, but instead restarted from the first epoch, and the total number of epochs shown was not even the 100 I had set. I hope to get your help! Here is my code and a screenshot of the problem I encountered:
```python
task_type = {
    "train": YOLO(model_conf).train(resume=True),
    # "val": YOLO(model_conf).val(**args),
    # "test": YOLO(model_conf).test(**args),
}
```
launched with:
```bash
python mbyolo_train.py --task train --config /{weight path}/last.pt --data /data.yaml
```
<img width="739" alt="Image" src="https://github.com/user-attachments/assets/b69ca4ff-2d2f-472a-a213-895599ebd9a2" />
### Additional
_No response_
|
open
|
2025-03-04T01:43:09Z
|
2025-03-04T02:10:04Z
|
https://github.com/ultralytics/ultralytics/issues/19508
|
[
"question",
"detect"
] |
li-25-creater
| 3
|
pyeve/eve
|
flask
| 1,142
|
Only set the package version in __init__.py
|
At the moment we're holding the package version in two places: `setup.py` and `__init__.py`. We need to keep the version number in the init file for a number of reasons, but we could add some magic to avoid hard-coding it again in the setup file. Either that, or sooner or later I (or someone else) will forget to bump both numbers before a release. And also: DRY.
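One common form of that "magic": keep `__version__` only in `__init__.py` and have `setup.py` extract it with a regex — reading the file as text avoids importing the package (and its dependencies) at install time. A hedged sketch of the pattern:

```python
import re

def read_version(init_source):
    """Extract __version__ from the text of an __init__.py file."""
    match = re.search(
        r"^__version__\s*=\s*['\"]([^'\"]+)['\"]", init_source, re.MULTILINE
    )
    if match is None:
        raise RuntimeError("Unable to find __version__ string.")
    return match.group(1)

# In setup.py one would pass open('eve/__init__.py').read() instead.
assert read_version('__version__ = "0.8"') == "0.8"
```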
|
closed
|
2018-05-10T13:37:11Z
|
2018-05-11T07:26:45Z
|
https://github.com/pyeve/eve/issues/1142
|
[
"enhancement"
] |
nicolaiarocci
| 0
|
strawberry-graphql/strawberry
|
fastapi
| 3,536
|
field level relay results limit
|
## Feature Request Type
- [ ] Core functionality
- [x] Alteration (enhancement/optimization) of existing feature(s)
- [x] New behavior
## Description
### Current:
The maximum of returned results for a relay connection defaults to 100 and can be changed by a schema wide setting: https://github.com/strawberry-graphql/strawberry/blob/b7f28815c116780127a9abdea42938bff5649057/strawberry/schema/config.py#L14
Example:
```py
MAX_RELAY_RESULTS = 777
schema_config = StrawberryConfig(relay_max_results=MAX_RELAY_RESULTS)
schema = Schema(
query=Query,
mutation=Mutation,
extensions=[
DjangoOptimizerExtension,
],
config=schema_config,
)
```
### Improvement:
The maximum results returned can be overwritten on a per field level via a field setting.
Example:
```py
MAX_RELAY_ITEM_RESULTS = 777

@strawberry.type
class Query:
    my_items: ListConnectionWithTotalCount[MyItemType] = strawberry_django.connection(
        relay_max_results=MAX_RELAY_ITEM_RESULTS
    )
```
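The requested precedence can be stated simply: a field-level limit, when given, overrides the schema-wide default, and the relay `first` argument is clamped to whichever applies. A minimal illustration of that resolution logic (the names are hypothetical, not Strawberry's API):

```python
def effective_max_results(requested, field_max=None, schema_max=100):
    """Clamp a relay 'first' argument to the narrowest applicable limit."""
    limit = field_max if field_max is not None else schema_max
    return min(requested, limit)

assert effective_max_results(500) == 100                  # schema default applies
assert effective_max_results(500, field_max=777) == 500   # field override raises the cap
assert effective_max_results(1000, field_max=777) == 777  # but still clamps
```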
|
open
|
2024-06-08T14:45:40Z
|
2025-03-20T15:56:45Z
|
https://github.com/strawberry-graphql/strawberry/issues/3536
|
[] |
Eraldo
| 5
|
errbotio/errbot
|
automation
| 1,225
|
Errbot Flows Not being Triggered Correctly with BotFramework
|
### I am...
* [x] Reporting a bug
* [ ] Suggesting a new feature
* [ ] Requesting help with running my bot
* [ ] Requesting help writing plugins
* [ ] Here about something else
### I am running...
* Errbot version: 5.2.0
* OS version: MacOS 10.13.4
* Python version: 3.6.5
* Using a virtual environment: yes
### Issue description
I am using @vasilcovsky's [BotFramework](https://github.com/vasilcovsky/errbot-backend-botframework) backend. I am encountering a problem while using flows. I found the problem to be with the ```_bot.send``` method in flow.py. The ```_bot.send``` [documentation](http://errbot.io/en/latest/_modules/errbot/botplugin.html#BotPlugin.send) specifies that the first argument must be of type Identifier. While the BotFramework backend implements an ```Identifier```, Errbot does not recognize it as being of type Identifier. This causes an exception to be thrown and the flow not to continue to the end.
I suspect this is causing the message's context dictionary to not be initialised properly. When using the bot in Text Mode it is working just fine.
The error message from BotPlugin's send does not show up on the log either.
### Steps to reproduce
Use the BotFramework backend. Implement basic flow with the second step auto triggered. The BotFramework emulator can be used.
### Additional info
Here is the flow.py with a try to catch the exception
```python
log.debug(type(flow.requestor))
try:
    self._bot.send(flow.requestor, "\n".join(possible_next_steps))
except ValueError:
    log.debug("Reached error")
    break
```
Here are the excerpts from the logs:
```
2018-05-26 10:43:12,491 DEBUG errbot.flow <class 'yapsy_loaded_plugin_BotFramework_0.Identifier'>
2018-05-26 10:43:12,492 DEBUG errbot.flow Reached error
```
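The `isinstance` failure described above is typical when a backend defines its own `Identifier` class rather than subclassing (or registering with) the core's abstract `Identifier`. Assuming that is the mechanism at play here, a minimal illustration of how ABC registration makes a foreign class pass the type check:

```python
from abc import ABC

class Identifier(ABC):
    """Stand-in for errbot's abstract Identifier."""

class BackendIdentifier:
    """Stand-in for the BotFramework backend's own Identifier class."""

probe = BackendIdentifier()
assert not isinstance(probe, Identifier)  # this is why send() rejects it

# Registering the backend class as a virtual subclass fixes the check
Identifier.register(BackendIdentifier)
assert isinstance(probe, Identifier)
```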
|
closed
|
2018-05-26T02:50:37Z
|
2019-01-05T17:24:26Z
|
https://github.com/errbotio/errbot/issues/1225
|
[] |
aravindprasanna
| 7
|
huggingface/datasets
|
nlp
| 6,441
|
Trouble Loading a Gated Dataset For User with Granted Permission
|
### Describe the bug
I have granted permissions to several users to access a gated huggingface dataset. The users accepted the invite and when trying to load the dataset using their access token they get
`FileNotFoundError: Couldn't find a dataset script at .....` . Also when they try to click the url link for the dataset they get a 404 error.
### Steps to reproduce the bug
1. Grant access to gated dataset for specific users
2. Users accept invitation
3. Users login to hugging face hub using cli login
4. Users run load_dataset
### Expected behavior
Dataset is loaded normally for users who were granted access to the gated dataset.
### Environment info
datasets==2.15.0
|
closed
|
2023-11-21T19:24:36Z
|
2023-12-13T08:27:16Z
|
https://github.com/huggingface/datasets/issues/6441
|
[] |
e-trop
| 3
|
sherlock-project/sherlock
|
python
| 2,081
|
FlareSolverr Support
|
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Put x into all boxes (like this [x]) once you have completed what they say.
Make sure complete everything in the checklist.
-->
- [x] I'm reporting a feature request
- [x] I've checked for similar feature requests including closed ones
## Description
Several years ago, a solution like cloudscraper may have been ideal. Now, there doesn't seem to be any actively maintained solution outside of FlareSolverr. This solution wouldn't make a good built-in, but it can function as a proxy with a couple of adjustments.
I was able to put together a somewhat viable proof of concept in #2079 (would be re-engineered if desired, though).
The caveat is that FlareSolverr doesn't seem to properly pass along return codes right now. That would mean it can only be reliably used for message and possibly for redirect_url. When FlareSolverr is detected, we would have status_code requests bypass the proxy and function normally.
Seems that some messages may need a minor tweak depending on how sites handle l10n and such, but they would be similar if changed at all (as in the data.json diff).
Would this partial support be desired, or would we prefer to wait until FlareSolverr more properly supports status_code as well?
|
open
|
2024-04-13T20:08:18Z
|
2024-05-31T22:36:00Z
|
https://github.com/sherlock-project/sherlock/issues/2081
|
[
"enhancement"
] |
ppfeister
| 2
|
google-research/bert
|
tensorflow
| 1,037
|
MRPC Produces Two Vastly Different Eval Accuracy
|
Hi,
I am testing using TF v1.14.0.
I ran run_classifier.py 10 times (do_training=False, do_eval=True), i.e. doing inference immediately after loading https://storage.googleapis.com/bert_models/2018_10_18/uncased_L-24_H-1024_A-16.zip
I am seeing two values 0.31617647 & 0.6838235.
I wonder why I am seeing different inference results and why I am seeing exactly these two results. Thanks!
```
check-pretrained-MRPC-1.log:INFO:tensorflow: eval_accuracy = 0.31617647
check-pretrained-MRPC-10.log:INFO:tensorflow: eval_accuracy = 0.6838235
check-pretrained-MRPC-2.log:INFO:tensorflow: eval_accuracy = 0.31617647
check-pretrained-MRPC-3.log:INFO:tensorflow: eval_accuracy = 0.6838235
check-pretrained-MRPC-4.log:INFO:tensorflow: eval_accuracy = 0.31617647
check-pretrained-MRPC-5.log:INFO:tensorflow: eval_accuracy = 0.31617647
check-pretrained-MRPC-6.log:INFO:tensorflow: eval_accuracy = 0.31617647
check-pretrained-MRPC-7.log:INFO:tensorflow: eval_accuracy = 0.31617647
check-pretrained-MRPC-8.log:INFO:tensorflow: eval_accuracy = 0.31617647
check-pretrained-MRPC-9.log:INFO:tensorflow: eval_accuracy = 0.31617647
```
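A hint about where exactly these two values come from: they sum to 1.0. The released checkpoint has no fine-tuned classification head, so a randomly initialized head effectively predicts a single class; depending on the initialization you likely get either one class's fraction of the MRPC dev set (~0.684) or its complement (~0.316). Quick arithmetic check:

```python
acc_low, acc_high = 0.31617647, 0.6838235

# The two observed accuracies are complementary, consistent with the
# model always predicting one class or the other.
assert abs((acc_low + acc_high) - 1.0) < 1e-6
```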
|
closed
|
2020-03-22T16:00:24Z
|
2020-04-08T06:12:40Z
|
https://github.com/google-research/bert/issues/1037
|
[] |
wei-v-wang
| 2
|
ray-project/ray
|
pytorch
| 51,595
|
[core] Compiled Graphs has a dependence on pyarrow
|
### What happened + What you expected to happen
I used https://docs.vllm.ai/en/latest/serving/distributed_serving.html to setup the distributed environment(2 nodes with 8 GPUs per node), and then run the api_server as below:
```
python3 -m vllm.entrypoints.openai.api_server --port 18011 --model /models/DeepSeek-V3 --tensor-parallel-size 16 --gpu-memory-utilization 0.92 --dtype auto --served-model-name deepseekv3 --max-num-seqs 50 --max-model-len 16384 --trust-remote-code --disable-log-requests --enable-chunked-prefill --enable-prefix-caching
```
Then I got the following error in Ray module when handling the requests:
```
RayWorkerWrapper pid=243, ip=10.99.48.141) ERROR:root:Compiled DAG task exited with exception
(RayWorkerWrapper pid=243, ip=10.99.48.141) Traceback (most recent call last):
(RayWorkerWrapper pid=243, ip=10.99.48.141) File "/usr/local/lib/python3.12/dist-packages/ray/dag/compiled_dag_node.py", line 230, in do_exec_tasks
(RayWorkerWrapper pid=243, ip=10.99.48.141) done = tasks[operation.exec_task_idx].exec_operation(
(RayWorkerWrapper pid=243, ip=10.99.48.141) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(RayWorkerWrapper pid=243, ip=10.99.48.141) File "/usr/local/lib/python3.12/dist-packages/ray/dag/compiled_dag_node.py", line 745, in exec_operation
(RayWorkerWrapper pid=243, ip=10.99.48.141) with _device_context_manager():
(RayWorkerWrapper pid=243, ip=10.99.48.141) ^^^^^^^^^^^^^^^^^^^^^^^^^
(RayWorkerWrapper pid=243, ip=10.99.48.141) File "/usr/local/lib/python3.12/dist-packages/ray/dag/compiled_dag_node.py", line 345, in _device_context_manager
(RayWorkerWrapper pid=243, ip=10.99.48.141) device = ChannelContext.get_current().torch_device
(RayWorkerWrapper pid=243, ip=10.99.48.141) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(RayWorkerWrapper pid=243, ip=10.99.48.141) File "/usr/local/lib/python3.12/dist-packages/ray/experimental/channel/common.py", line 173, in torch_device
(RayWorkerWrapper pid=243, ip=10.99.48.141) from ray.air._internal import torch_utils
(RayWorkerWrapper pid=243, ip=10.99.48.141) File "/usr/local/lib/python3.12/dist-packages/ray/air/__init__.py", line 1, in <module>
(RayWorkerWrapper pid=243, ip=10.99.48.141) from ray.air.config import (
(RayWorkerWrapper pid=243, ip=10.99.48.141) File "/usr/local/lib/python3.12/dist-packages/ray/air/config.py", line 17, in <module>
(RayWorkerWrapper pid=243, ip=10.99.48.141) import pyarrow.fs
(RayWorkerWrapper pid=243, ip=10.99.48.141) ModuleNotFoundError: No module named 'pyarrow'
ERROR 03-13 02:56:52 [core.py:337] EngineCore hit an exception: Traceback (most recent call last):
ERROR 03-13 02:56:52 [core.py:337] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core.py", line 330, in run_engine_core
ERROR 03-13 02:56:52 [core.py:337] engine_core.run_busy_loop()
ERROR 03-13 02:56:52 [core.py:337] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core.py", line 364, in run_busy_loop
ERROR 03-13 02:56:52 [core.py:337] outputs = step_fn()
ERROR 03-13 02:56:52 [core.py:337] ^^^^^^^^^
ERROR 03-13 02:56:52 [core.py:337] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core.py", line 192, in step
ERROR 03-13 02:56:52 [core.py:337] output = self.model_executor.execute_model(scheduler_output)
ERROR 03-13 02:56:52 [core.py:337] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-13 02:56:52 [core.py:337] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/executor/ray_distributed_executor.py", line 57, in execute_model
ERROR 03-13 02:56:52 [core.py:337] return refs[0].get()
ERROR 03-13 02:56:52 [core.py:337] ^^^^^^^^^^^^^
ERROR 03-13 02:56:52 [core.py:337] File "/usr/local/lib/python3.12/dist-packages/ray/experimental/compiled_dag_ref.py", line 154, in get
ERROR 03-13 02:56:52 [core.py:337] raise execution_error from None
ERROR 03-13 02:56:52 [core.py:337] File "/usr/local/lib/python3.12/dist-packages/ray/experimental/compiled_dag_ref.py", line 145, in get
ERROR 03-13 02:56:52 [core.py:337] ray.get(actor_execution_loop_refs, timeout=10)
ERROR 03-13 02:56:52 [core.py:337] File "/usr/local/lib/python3.12/dist-packages/ray/_private/auto_init_hook.py", line 21, in auto_init_wrapper
ERROR 03-13 02:56:52 [core.py:337] return fn(*args, **kwargs)
ERROR 03-13 02:56:52 [core.py:337] ^^^^^^^^^^^^^^^^^^^
ERROR 03-13 02:56:52 [core.py:337] File "/usr/local/lib/python3.12/dist-packages/ray/_private/client_mode_hook.py", line 103, in wrapper
ERROR 03-13 02:56:52 [core.py:337] return func(*args, **kwargs)
ERROR 03-13 02:56:52 [core.py:337] ^^^^^^^^^^^^^^^^^^^^^
ERROR 03-13 02:56:52 [core.py:337] File "/usr/local/lib/python3.12/dist-packages/ray/_private/worker.py", line 2771, in get
ERROR 03-13 02:56:52 [core.py:337] values, debugger_breakpoint = worker.get_objects(object_refs, timeout=timeout)
ERROR 03-13 02:56:52 [core.py:337] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-13 02:56:52 [core.py:337] File "/usr/local/lib/python3.12/dist-packages/ray/_private/worker.py", line 919, in get_objects
ERROR 03-13 02:56:52 [core.py:337] raise value.as_instanceof_cause()
ERROR 03-13 02:56:52 [core.py:337] ray.exceptions.RayTaskError(ModuleNotFoundError): ray::RayWorkerWrapper.__ray_call__() (pid=958, ip=10.99.48.142, actor_id=0e58a0e6bc9d2aaee879d79401000000, repr=<vllm.executor.ray_utils.RayWorkerWrapper object at 0x7fa67295a510>)
ERROR 03-13 02:56:52 [core.py:337] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-13 02:56:52 [core.py:337] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-13 02:56:52 [core.py:337] File "/usr/local/lib/python3.12/dist-packages/ray/actor.py", line 1722, in __ray_call__
ERROR 03-13 02:56:52 [core.py:337] return fn(self, *args, **kwargs)
ERROR 03-13 02:56:52 [core.py:337] ^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-13 02:56:52 [core.py:337] File "/usr/local/lib/python3.12/dist-packages/ray/dag/compiled_dag_node.py", line 230, in do_exec_tasks
ERROR 03-13 02:56:52 [core.py:337] done = tasks[operation.exec_task_idx].exec_operation(
ERROR 03-13 02:56:52 [core.py:337] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-13 02:56:52 [core.py:337] File "/usr/local/lib/python3.12/dist-packages/ray/dag/compiled_dag_node.py", line 745, in exec_operation
ERROR 03-13 02:56:52 [core.py:337] with _device_context_manager():
ERROR 03-13 02:56:52 [core.py:337] ^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-13 02:56:52 [core.py:337] File "/usr/local/lib/python3.12/dist-packages/ray/dag/compiled_dag_node.py", line 345, in _device_context_manager
ERROR 03-13 02:56:52 [core.py:337] device = ChannelContext.get_current().torch_device
ERROR 03-13 02:56:52 [core.py:337] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-13 02:56:52 [core.py:337] File "/usr/local/lib/python3.12/dist-packages/ray/experimental/channel/common.py", line 173, in torch_device
ERROR 03-13 02:56:52 [core.py:337] from ray.air._internal import torch_utils
ERROR 03-13 02:56:52 [core.py:337] File "/usr/local/lib/python3.12/dist-packages/ray/air/__init__.py", line 1, in <module>
ERROR 03-13 02:56:52 [core.py:337] from ray.air.config import (
ERROR 03-13 02:56:52 [core.py:337] File "/usr/local/lib/python3.12/dist-packages/ray/air/config.py", line 17, in <module>
ERROR 03-13 02:56:52 [core.py:337] import pyarrow.fs
ERROR 03-13 02:56:52 [core.py:337] ModuleNotFoundError: No module named 'pyarrow'
```
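The root cause in the traceback is that `ray/air/config.py` imports `pyarrow.fs` unconditionally, so installing `pyarrow` in the same environment is the immediate workaround. For code that wants to fail earlier with a clearer message, an optional dependency can be probed without importing it. A generic stdlib sketch (not Ray's code; `require` is a hypothetical helper):

```python
import importlib.util

def require(module_name, hint):
    """Raise a descriptive error if an optional dependency is missing."""
    if importlib.util.find_spec(module_name) is None:
        raise ModuleNotFoundError(
            f"{module_name!r} is required here; install it with: {hint}"
        )

# 'json' is stdlib, so this succeeds; probing 'pyarrow' on a machine
# without it would raise the descriptive error up front instead of
# failing deep inside an import chain.
require("json", "pip install <package>")
```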
### Versions / Dependencies
vLLM 0.8.0, Ray latest
### Reproduction script
```bash
python3 -m vllm.entrypoints.openai.api_server --port 18011 --model /models/DeepSeek-V3 --tensor-parallel-size 16 --gpu-memory-utilization 0.92 --dtype auto --served-model-name deepseekv3 --max-num-seqs 50 --max-model-len 16384 --trust-remote-code --disable-log-requests --enable-chunked-prefill --enable-prefix-caching
```
### Issue Severity
None
|
closed
|
2025-03-21T17:28:35Z
|
2025-03-21T17:32:08Z
|
https://github.com/ray-project/ray/issues/51595
|
[
"bug",
"triage"
] |
richardliaw
| 1
|
iMerica/dj-rest-auth
|
rest-api
| 531
|
Last Login Is Never Updated For USE_JWT True
|
Hi everyone,
I see a bug where last_login is never updated in the DB if we use JWT login.
Changes are expected in https://github.com/iMerica/dj-rest-auth/blob/8a2e07de1a5b166d99fa3cf2995561773c29333f/dj_rest_auth/utils.py#L9C22-L9C22
where I see we are not using the validate method of serializer and directly using get_token class method.
So maybe we should call Django's signal handler here after generating tokens: `update_last_login(None, self.user)`.
~~But that would again be conditional because the same util function is used in Registration flow and for some applications, login might not succeed until email or some other verifications are done.~~ Ignore last line because that check is already present in Registration view.
|
open
|
2023-08-06T03:42:13Z
|
2023-08-06T03:53:07Z
|
https://github.com/iMerica/dj-rest-auth/issues/531
|
[] |
Aniket-Singla
| 0
|
dynaconf/dynaconf
|
fastapi
| 236
|
[RFC] module impersonation, local files, runtime files
|
Read yaycl readme
https://github.com/seandst/yaycl
|
closed
|
2019-09-19T13:53:42Z
|
2019-09-25T20:55:18Z
|
https://github.com/dynaconf/dynaconf/issues/236
|
[
"Not a Bug",
"RFC",
"Docs"
] |
rochacbruno
| 1
|
huggingface/datasets
|
pandas
| 7,085
|
[Regression] IterableDataset is broken on 2.20.0
|
### Describe the bug
In the latest version of datasets there is a major regression, after creating an `IterableDataset` from a generator and applying a few operations (`map`, `select`), you can no longer iterate through the dataset multiple times.
The issue seems to stem from the recent addition of "resumable IterableDatasets" (#6658) (@lhoestq). It seems like it's keeping state when it shouldn't.
### Steps to reproduce the bug
Minimal Reproducible Example (comparing `datasets==2.17.0` and `datasets==2.20.0`)
```bash
#!/bin/bash
# List of dataset versions to test
versions=("2.17.0" "2.20.0")
# Loop through each version
for version in "${versions[@]}"; do
# Install the specific version of the datasets library
pip3 install -q datasets=="$version" 2>/dev/null
# Run the Python script
python3 - <<EOF
from datasets import IterableDataset
from datasets.features.features import Features, Value
def test_gen():
yield from [{"foo": i} for i in range(10)]
features = Features([("foo", Value("int64"))])
d = IterableDataset.from_generator(test_gen, features=features)
mapped = d.map(lambda row: {"foo": row["foo"] * 2})
column = mapped.select_columns(["foo"])
print("Version $version - Iterate Once:", list(column))
print("Version $version - Iterate Twice:", list(column))
EOF
done
```
The output looks like this:
```
Version 2.17.0 - Iterate Once: [{'foo': 0}, {'foo': 2}, {'foo': 4}, {'foo': 6}, {'foo': 8}, {'foo': 10}, {'foo': 12}, {'foo': 14}, {'foo': 16}, {'foo': 18}]
Version 2.17.0 - Iterate Twice: [{'foo': 0}, {'foo': 2}, {'foo': 4}, {'foo': 6}, {'foo': 8}, {'foo': 10}, {'foo': 12}, {'foo': 14}, {'foo': 16}, {'foo': 18}]
Version 2.20.0 - Iterate Once: [{'foo': 0}, {'foo': 2}, {'foo': 4}, {'foo': 6}, {'foo': 8}, {'foo': 10}, {'foo': 12}, {'foo': 14}, {'foo': 16}, {'foo': 18}]
Version 2.20.0 - Iterate Twice: []
```
### Expected behavior
The expected behavior is that version 2.20.0 should behave the same as 2.17.0.
### Environment info
`datasets==2.20.0` on any platform.
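The regression pattern can be illustrated without `datasets` at all (this is a sketch of the behavior, not of the library's internals): an iterable that rebuilds its generator on every `__iter__` supports multiple passes, while one that keeps a shared, resumable iterator comes back empty on the second pass.

```python
# Illustration (not datasets internals): why keeping iteration state across
# __iter__ calls makes the second pass come back empty.

class Restartable:
    """Rebuilds the underlying generator on every __iter__ (2.17.0 behavior)."""
    def __init__(self, gen_fn):
        self.gen_fn = gen_fn
    def __iter__(self):
        return self.gen_fn()

class Stateful:
    """Keeps one shared iterator, so a second pass resumes an exhausted stream."""
    def __init__(self, gen_fn):
        self._it = gen_fn()
    def __iter__(self):
        return self._it

def gen():
    return ({"foo": i * 2} for i in range(10))

good = Restartable(gen)
bad = Stateful(gen)

assert list(good) == list(good)       # both passes yield all rows
assert list(bad) and list(bad) == []  # second pass is empty
```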
|
closed
|
2024-07-31T13:01:59Z
|
2024-08-22T14:49:37Z
|
https://github.com/huggingface/datasets/issues/7085
|
[] |
AjayP13
| 3
|
PokemonGoF/PokemonGo-Bot
|
automation
| 5,419
|
Buddy System
|
Is there any plan to implement the buddy system?
|
closed
|
2016-09-13T01:01:51Z
|
2016-09-27T12:36:31Z
|
https://github.com/PokemonGoF/PokemonGo-Bot/issues/5419
|
[] |
kevinlife
| 11
|
lepture/authlib
|
django
| 123
|
validate_xyz() in JWTClaims should use UTC?
|
the time in validate() should be UTC?
https://github.com/lepture/authlib/blob/master/authlib/jose/rfc7519/claims.py#L59
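As a sketch of the point being raised (assumptions: the claim values are Unix epoch seconds, and `exp_is_valid` is a hypothetical helper, not Authlib's API): `time.time()` already returns UTC-based epoch seconds, so it agrees with an explicit UTC timestamp regardless of the machine's local timezone, whereas a naive local `datetime` does not.

```python
# "now" for JWT exp/nbf checks should be UTC epoch seconds. time.time()
# returns seconds since the Unix epoch (UTC-based); datetime.now() without
# tzinfo is local time and must not be compared against epoch claims.
import time
from datetime import datetime, timezone

now_epoch = time.time()
now_utc = datetime.now(timezone.utc).timestamp()

# The two agree regardless of the machine's local timezone.
assert abs(now_epoch - now_utc) < 5

def exp_is_valid(exp_claim: int, leeway: int = 0) -> bool:
    """Validate an `exp` claim against UTC now, with optional leeway."""
    return time.time() <= exp_claim + leeway

assert exp_is_valid(int(time.time()) + 60)
assert not exp_is_valid(int(time.time()) - 60)
```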
|
closed
|
2019-04-13T21:58:48Z
|
2019-04-14T11:16:59Z
|
https://github.com/lepture/authlib/issues/123
|
[] |
blackmagic02881
| 3
|
keras-team/keras
|
python
| 20,552
|
Training a model in a loop with model.fit() leaks memory
|
I have tried various methods, but memory is definitely leaking; it seems that memory release cannot keep up. The logs show periodic memory recycling, but there is still a clear upward trend over time.
Name: keras
Version: 3.6.0
Please find the [gist](https://colab.sandbox.google.com/gist/Venkat6871/30fb12f2188a7826e2c649bbd945dbda/80753_tf_2-18-0-nightly-v.ipynb) here for your reference.
https://github.com/tensorflow/tensorflow/issues/80753#issuecomment-2503203801
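A framework-agnostic way to confirm a leak like this is to sample heap usage with `tracemalloc` across loop iterations. This is a sketch; `train_step` below is a placeholder standing in for the real `model.fit()` call.

```python
# Snapshot heap usage every N iterations: with no leak, the sampled
# "current" usage stays roughly flat instead of trending upward.
import tracemalloc

def train_step():
    return [0.0] * 1000  # placeholder workload; swap in model.fit(...)

tracemalloc.start()
samples = []
for step in range(50):
    train_step()
    if step % 10 == 0:
        current, peak = tracemalloc.get_traced_memory()
        samples.append(current)
tracemalloc.stop()

# A leak would show max(samples) growing well past min(samples).
assert max(samples) - min(samples) < 1_000_000
```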
|
open
|
2024-11-27T08:21:34Z
|
2024-11-28T06:42:29Z
|
https://github.com/keras-team/keras/issues/20552
|
[
"type:bug/performance"
] |
phpYj
| 1
|
scikit-learn/scikit-learn
|
python
| 30,714
|
Version 1.0.2 requires numpy<2
|
### Describe the bug
Installing scikit-learn version 1.0.2 leads to the following error:
```bash
ValueError: numpy.dtype size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject
```
This seems to indicate a mismatch between this version of scikit-learn and numpy versions greater than 2.0 (Specifically 2.2.2 was being installed, following the only restriction of `numpy>=1.14.6`).
This can be solved by indicating to use a numpy version older than 2.0 by modifying step 1 to:
```bash
pip install "scikit-learn==1.0.2" "numpy<2"
```
## Additional references
https://stackoverflow.com/questions/66060487/valueerror-numpy-ndarray-size-changed-may-indicate-binary-incompatibility-exp
https://stackoverflow.com/questions/78650222/valueerror-numpy-dtype-size-changed-may-indicate-binary-incompatibility-expec
### Steps/Code to Reproduce
1. Install scikit-learn through pip
```bash
pip install "scikit-learn==1.0.2"
```
2. Use scikit-learn
````python
% path/to/script.py
...
from sklearn.datasets import load_iris
...
````
### Expected Results
No errors thrown
### Actual Results
Error is thrown:
```bash
path/to/script.py:2: in <module>
from sklearn.datasets import load_iris
/opt/hostedtoolcache/Python/3.10.16/x64/lib/python3.10/site-packages/sklearn/__init__.py:82: in <module>
from .base import clone
/opt/hostedtoolcache/Python/3.10.16/x64/lib/python3.10/site-packages/sklearn/base.py:17: in <module>
from .utils import _IS_32BIT
/opt/hostedtoolcache/Python/3.10.16/x64/lib/python3.10/site-packages/sklearn/utils/__init__.py: in <module>
from .murmurhash import murmurhash3_32
sklearn/utils/murmurhash.pyx:1: in init sklearn.utils.murmurhash
???
E ValueError: numpy.dtype size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject
```
### Versions
```shell
OS: Ubuntu 24.10 (latest)
Python version 3.10
Scikit-learn version: 1.0.2
pip version: 24.3.1
setuptools version: 65.5.0
```
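One way to surface this mismatch early, instead of hitting the cryptic dtype-size error at import time, is a small version guard. The helper name below is hypothetical; the underlying fact is from the report: scikit-learn 1.0.2 wheels were built against numpy 1.x, and numpy 2.x breaks that ABI.

```python
# Fail fast with a clear message before importing sklearn 1.0.2
# alongside an incompatible numpy major version.
def numpy_ok_for_sklearn_102(numpy_version: str) -> bool:
    """sklearn 1.0.2 wheels were built against numpy 1.x; 2.x breaks the ABI."""
    major = int(numpy_version.split(".")[0])
    return major < 2

assert numpy_ok_for_sklearn_102("1.26.4")
assert not numpy_ok_for_sklearn_102("2.2.2")
```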
|
closed
|
2025-01-24T11:47:50Z
|
2025-01-24T15:00:00Z
|
https://github.com/scikit-learn/scikit-learn/issues/30714
|
[
"Bug",
"Needs Triage"
] |
grudloff
| 7
|
postmanlabs/httpbin
|
api
| 431
|
Idea: Expose httpbin version via HTTP headers
|
It would be useful if you could tell the httpbin version from a request's HTTP headers.
This way I would be able to determine why my tests work locally, but not in CI.
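A minimal sketch of the request as a WSGI middleware (the `X-Httpbin-Version` header name and version string are my assumptions, not an httpbin API):

```python
# Middleware that stamps every response with a version header.
def version_header_middleware(app, version):
    def wrapped(environ, start_response):
        def start(status, headers, exc_info=None):
            headers = headers + [("X-Httpbin-Version", version)]
            return start_response(status, headers, exc_info)
        return app(environ, start)
    return wrapped

def demo_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]

app = version_header_middleware(demo_app, "0.7.0")

# Minimal in-process invocation to inspect the injected header.
captured = {}
def fake_start(status, headers, exc_info=None):
    captured["headers"] = dict(headers)
body = app({}, fake_start)
assert captured["headers"]["X-Httpbin-Version"] == "0.7.0"
assert body == [b"ok"]
```

A client (or CI log) could then compare that header against the locally installed version to explain behavioral differences.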
|
open
|
2018-03-06T15:01:14Z
|
2018-04-26T17:51:17Z
|
https://github.com/postmanlabs/httpbin/issues/431
|
[] |
dagwieers
| 0
|
Colin-b/pytest_httpx
|
pytest
| 34
|
locked on pytest 6
|
Hi,
Thanks for the nice library.
Currently, pytest-httpx depends on pytest ==6.*
But there is a fix to an annoying async issue only available in pytest 7.2+
https://github.com/pytest-dev/pytest/issues/3747
Could you please increment the dependency version?
Thank you
|
closed
|
2020-12-03T10:33:21Z
|
2024-09-20T06:37:53Z
|
https://github.com/Colin-b/pytest_httpx/issues/34
|
[
"enhancement"
] |
remi-debette
| 0
|
plotly/plotly.py
|
plotly
| 4,108
|
add facet_row support in px.imshow
|
# Feature Request: `facet_row` support in `px.imshow`
It would be very convenient to be able to utilise the `facet_row` kwarg with `plotly.express.imshow`.
## Current Behaviour
Currently, `px.imshow` only takes the `facet_col` kwarg, limiting the display of multiple heatmap-like traces to a single dimension (e.g. layer slices through a 3D CT image). It is not possible to visualize 4D data without utilising the `animation_frame` kwarg.
## Motivation
In a number of scientific and engineering use-cases, it is useful to visualize data which possesses a dense 4D tensor structure. Any experiment using full factorial design with at least four factors, common in machine learning and biological science applications, falls into this category. Providing a better convenience function for generating heatmap-style plots would improve the utility of Plotly for publication-standard multivariate data visualization. It should be noted that whilst parallel category and parallel coordinate plots offer an arguably better visualization tool for this sort of data, they are still underutilised in publications, and, as such, an expanded `imshow` still has good utility.
|
open
|
2023-03-15T15:11:02Z
|
2024-08-12T20:50:34Z
|
https://github.com/plotly/plotly.py/issues/4108
|
[
"feature",
"P3"
] |
wbeardall
| 2
|
scikit-learn/scikit-learn
|
machine-learning
| 30,655
|
'super' object has no attribute '__sklearn_tags__'
|
closed
|
2025-01-16T14:38:03Z
|
2025-01-17T06:26:03Z
|
https://github.com/scikit-learn/scikit-learn/issues/30655
|
[
"Bug",
"Needs Triage"
] |
anandshaw123
| 1
|
|
tensorpack/tensorpack
|
tensorflow
| 785
|
How to run tensorpack on TPU?
|
Is there any plan to support TPU?
|
closed
|
2018-06-08T07:58:16Z
|
2020-05-24T06:47:38Z
|
https://github.com/tensorpack/tensorpack/issues/785
|
[
"enhancement"
] |
windid
| 2
|
deeppavlov/DeepPavlov
|
tensorflow
| 824
|
Segmentation Fault (core dumped) when running build_model(configs.ner.ner_rus, download=True)
|
Hi! I get `Segmentation Fault (core dumped)` when running `build_model(configs.ner.ner_rus, download=True)`. The version of deeppavlov I use is 0.2.0
|
closed
|
2019-04-30T12:57:44Z
|
2020-05-13T11:37:25Z
|
https://github.com/deeppavlov/DeepPavlov/issues/824
|
[] |
hpylieva
| 8
|
httpie/http-prompt
|
rest-api
| 107
|
Specifying options inline does not work properly
|
When trying to specify a param inline, the input does not accept a `space` character after the URL. Note below the lack of space between `/api/something` and `page==2`. The cursor also resets to position 0 in this situation.

The command actually works, but the command line processing is not handled properly.
|
closed
|
2017-02-07T16:49:58Z
|
2017-02-15T16:19:50Z
|
https://github.com/httpie/http-prompt/issues/107
|
[
"bug"
] |
sirianni
| 1
|
flasgger/flasgger
|
rest-api
| 74
|
`import: external_yaml` should work for multiple paths
|
When a function is decorated multiple times with `@swag_from`, the `import` should take care of the original root_path.
reference: https://github.com/rochacbruno/flasgger/blob/master/flasgger/base.py#L65-L79
|
open
|
2017-03-30T01:38:12Z
|
2018-09-10T05:31:54Z
|
https://github.com/flasgger/flasgger/issues/74
|
[
"hacktoberfest"
] |
rochacbruno
| 0
|
huggingface/text-generation-inference
|
nlp
| 2,644
|
OpenAI Client format + chat template for a single call
|
### System Info
latest docker
### Information
- [X] Docker
- [ ] The CLI directly
### Tasks
- [ ] An officially supported command
- [ ] My own modifications
### Reproduction
Hello,
Can you tell me please how to implement the following functionality combined:
1. I'm interested in OpenAI Client format:
prompt= [
{"role": "system", "content": "You are a sassy, wise-cracking robot as imagined by Hollywood circa 1986."},
{"role": "user", "content": "Hey, can you tell me any fun things to do in New York?"}
]
2. I want to make sure that the chat template of the served model is applied
3. I don't want a chat: I want each call with a prompt to start from a clean history, to avoid token overflow.
Thank you!
### Expected behavior
An answer for each prompt, independent of the previous answer, but with the OpenAI client API
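A sketch of the requested behavior, under the assumption that the OpenAI-compatible chat endpoint applies the server-side chat template to whatever `messages` list it receives: statelessness then just means building a fresh list per call instead of appending to one.

```python
# Each call starts from a clean slate: only system + current user turn,
# no accumulated history.
SYSTEM = "You are a sassy, wise-cracking robot as imagined by Hollywood circa 1986."

def make_messages(prompt: str) -> list:
    return [
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": prompt},
    ]

m1 = make_messages("Hey, can you tell me any fun things to do in New York?")
m2 = make_messages("What about Boston?")
assert len(m1) == len(m2) == 2  # no history carried over
assert m1[0] == m2[0]           # same system prompt every time
```

Each `make_messages(prompt)` result would then be passed as `messages=` to a chat-completions call; since nothing is appended between calls, answers stay independent.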
|
open
|
2024-10-14T08:32:38Z
|
2024-10-18T13:19:34Z
|
https://github.com/huggingface/text-generation-inference/issues/2644
|
[] |
vitalyshalumov
| 1
|
pytorch/pytorch
|
deep-learning
| 148,976
|
`dist.barrier()` fails with TORCH_DISTRIBUTED_DEBUG=DETAIL and after dist.send/dist.recv calls
|
This program
```sh
$ cat bug.py
import torch
import torch.distributed as dist
import torch.distributed.elastic.multiprocessing.errors
@dist.elastic.multiprocessing.errors.record
def main():
dist.init_process_group()
rank = dist.get_rank()
size = dist.get_world_size()
x = torch.tensor(0)
if rank == 0:
x = torch.tensor(123)
dist.send(x, 1)
elif rank == 1:
dist.recv(x, 0)
dist.barrier()
for i in range(size):
if rank == i:
print(f"{rank=} {size=} {x=}")
dist.barrier()
dist.destroy_process_group()
if __name__ == '__main__':
main()
```
Fails with
```
$ OMP_NUM_THREADS=1 TORCH_DISTRIBUTED_DEBUG=DETAIL torchrun --nproc-per-node 3 --standalone bug.py
[rank0]: Traceback (most recent call last):
[rank0]: File "/home/lisergey/deepseek/bug.py", line 25, in <module>
[rank0]: main()
[rank0]: File "/home/lisergey/.local/lib/python3.12/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
[rank0]: return f(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/lisergey/deepseek/bug.py", line 16, in main
[rank0]: dist.barrier()
[rank0]: File "/home/lisergey/.local/lib/python3.12/site-packages/torch/distributed/c10d_logger.py", line 81, in wrapper
[rank0]: return func(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/lisergey/.local/lib/python3.12/site-packages/torch/distributed/distributed_c10d.py", line 4551, in barrier
[rank0]: work = group.barrier(opts=opts)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: RuntimeError: Detected mismatch between collectives on ranks. Rank 0 is running collective: CollectiveFingerPrint(SequenceNumber=1OpType=BARRIER), but Rank 2 is running collective: CollectiveFingerPrint(SequenceNumber=0OpType=BARRIER).Collectives differ in the following aspects: Sequence number: 1vs 0
[rank2]: Traceback (most recent call last):
[rank2]: File "/home/lisergey/deepseek/bug.py", line 25, in <module>
[rank2]: main()
[rank2]: File "/home/lisergey/.local/lib/python3.12/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
[rank2]: return f(*args, **kwargs)
[rank2]: ^^^^^^^^^^^^^^^^^^
[rank2]: File "/home/lisergey/deepseek/bug.py", line 16, in main
[rank2]: dist.barrier()
[rank2]: File "/home/lisergey/.local/lib/python3.12/site-packages/torch/distributed/c10d_logger.py", line 81, in wrapper
[rank2]: return func(*args, **kwargs)
[rank2]: ^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/home/lisergey/.local/lib/python3.12/site-packages/torch/distributed/distributed_c10d.py", line 4551, in barrier
[rank2]: work = group.barrier(opts=opts)
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^
[rank2]: RuntimeError: Detected mismatch between collectives on ranks. Rank 2 is running collective: CollectiveFingerPrint(SequenceNumber=0OpType=BARRIER), but Rank 0 is running collective: CollectiveFingerPrint(SequenceNumber=1OpType=BARRIER).Collectives differ in the following aspects: Sequence number: 0vs 1
[rank1]: Traceback (most recent call last):
[rank1]: File "/home/lisergey/deepseek/bug.py", line 25, in <module>
[rank1]: main()
[rank1]: File "/home/lisergey/.local/lib/python3.12/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
[rank1]: return f(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/lisergey/deepseek/bug.py", line 16, in main
[rank1]: dist.barrier()
[rank1]: File "/home/lisergey/.local/lib/python3.12/site-packages/torch/distributed/c10d_logger.py", line 81, in wrapper
[rank1]: return func(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/lisergey/.local/lib/python3.12/site-packages/torch/distributed/distributed_c10d.py", line 4551, in barrier
[rank1]: work = group.barrier(opts=opts)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: RuntimeError: Detected mismatch between collectives on ranks. Rank 1 is running collective: CollectiveFingerPrint(SequenceNumber=1OpType=BARRIER), but Rank 2 is running collective: CollectiveFingerPrint(SequenceNumber=0OpType=BARRIER).Collectives differ in the following aspects: Sequence number: 1vs 0
E0311 18:33:20.716000 340050 torch/distributed/elastic/multiprocessing/api.py:869] failed (exitcode: 1) local_rank: 0 (pid: 340054) of binary: /usr/bin/python
E0311 18:33:20.729000 340050 torch/distributed/elastic/multiprocessing/errors/error_handler.py:141] no error file defined for parent, to copy child error file (/tmp/torchelastic_ih8xk_wi/5240416c-e326-45c8-a708-c3ec0cc3d51b_sfj1zute/attempt_0/0/error.json)
Traceback (most recent call last):
File "/home/lisergey/.local/bin/torchrun", line 8, in <module>
sys.exit(main())
^^^^^^
File "/home/lisergey/.local/lib/python3.12/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/home/lisergey/.local/lib/python3.12/site-packages/torch/distributed/run.py", line 918, in main
run(args)
File "/home/lisergey/.local/lib/python3.12/site-packages/torch/distributed/run.py", line 909, in run
elastic_launch(
File "/home/lisergey/.local/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 138, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lisergey/.local/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 269, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
bug.py FAILED
------------------------------------------------------------
Failures:
[1]:
time : 2025-03-11_18:33:20
host : lenovo
rank : 1 (local_rank: 1)
exitcode : 1 (pid: 340055)
error_file: /tmp/torchelastic_ih8xk_wi/5240416c-e326-45c8-a708-c3ec0cc3d51b_sfj1zute/attempt_0/1/error.json
traceback : Traceback (most recent call last):
File "/home/lisergey/.local/lib/python3.12/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/home/lisergey/deepseek/bug.py", line 16, in main
dist.barrier()
File "/home/lisergey/.local/lib/python3.12/site-packages/torch/distributed/c10d_logger.py", line 81, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/lisergey/.local/lib/python3.12/site-packages/torch/distributed/distributed_c10d.py", line 4551, in barrier
work = group.barrier(opts=opts)
^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Detected mismatch between collectives on ranks. Rank 1 is running collective: CollectiveFingerPrint(SequenceNumber=1OpType=BARRIER), but Rank 2 is running collective: CollectiveFingerPrint(SequenceNumber=0OpType=BARRIER).Collectives differ in the following aspects: Sequence number: 1vs 0
[2]:
time : 2025-03-11_18:33:20
host : lenovo
rank : 2 (local_rank: 2)
exitcode : 1 (pid: 340056)
error_file: /tmp/torchelastic_ih8xk_wi/5240416c-e326-45c8-a708-c3ec0cc3d51b_sfj1zute/attempt_0/2/error.json
traceback : Traceback (most recent call last):
File "/home/lisergey/.local/lib/python3.12/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/home/lisergey/deepseek/bug.py", line 16, in main
dist.barrier()
File "/home/lisergey/.local/lib/python3.12/site-packages/torch/distributed/c10d_logger.py", line 81, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/lisergey/.local/lib/python3.12/site-packages/torch/distributed/distributed_c10d.py", line 4551, in barrier
work = group.barrier(opts=opts)
^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Detected mismatch between collectives on ranks. Rank 2 is running collective: CollectiveFingerPrint(SequenceNumber=0OpType=BARRIER), but Rank 0 is running collective: CollectiveFingerPrint(SequenceNumber=1OpType=BARRIER).Collectives differ in the following aspects: Sequence number: 0vs 1
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2025-03-11_18:33:20
host : lenovo
rank : 0 (local_rank: 0)
exitcode : 1 (pid: 340054)
error_file: /tmp/torchelastic_ih8xk_wi/5240416c-e326-45c8-a708-c3ec0cc3d51b_sfj1zute/attempt_0/0/error.json
traceback : Traceback (most recent call last):
File "/home/lisergey/.local/lib/python3.12/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/home/lisergey/deepseek/bug.py", line 16, in main
dist.barrier()
File "/home/lisergey/.local/lib/python3.12/site-packages/torch/distributed/c10d_logger.py", line 81, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/lisergey/.local/lib/python3.12/site-packages/torch/distributed/distributed_c10d.py", line 4551, in barrier
work = group.barrier(opts=opts)
^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Detected mismatch between collectives on ranks. Rank 0 is running collective: CollectiveFingerPrint(SequenceNumber=1OpType=BARRIER), but Rank 2 is running collective: CollectiveFingerPrint(SequenceNumber=0OpType=BARRIER).Collectives differ in the following aspects: Sequence number: 1vs 0
============================================================
```
It runs as expected with `--nproc-per-node 2`
```
$ OMP_NUM_THREADS=1 TORCH_DISTRIBUTED_DEBUG=DETAIL torchrun --nproc-per-node 2 --standalone bug.py
rank=0 size=2 x=tensor(123)
rank=1 size=2 x=tensor(123)
```
and with any `--nproc-per-node` if I don't set `TORCH_DISTRIBUTED_DEBUG=DETAIL`:
```
$ OMP_NUM_THREADS=1 torchrun --nproc-per-node 3 --standalone bug.py
rank=0 size=3 x=tensor(123)
rank=1 size=3 x=tensor(123)
rank=2 size=3 x=tensor(0)
```
It also works even with `TORCH_DISTRIBUTED_DEBUG=DETAIL` but without `dist.send()` and `dist.recv()` calls
```
$ cat bug.py
import torch
import torch.distributed as dist
import torch.distributed.elastic.multiprocessing.errors
@dist.elastic.multiprocessing.errors.record
def main():
dist.init_process_group()
rank = dist.get_rank()
size = dist.get_world_size()
x = torch.tensor(0)
dist.barrier()
for i in range(size):
if rank == i:
print(f"{rank=} {size=} {x=}")
dist.barrier()
dist.destroy_process_group()
if __name__ == '__main__':
main()
$ OMP_NUM_THREADS=1 TORCH_DISTRIBUTED_DEBUG=DETAIL torchrun --nproc-per-node 3 --standalone bug.py
rank=0 size=3 x=tensor(0)
rank=1 size=3 x=tensor(0)
rank=2 size=3 x=tensor(0)
```
### Versions
```
$ python collect_env.py
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.12.3 (main, Feb 4 2025, 14:48:35) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.11.0-19-generic-x86_64-with-glibc2.39
Is CUDA available: False
CUDA runtime version: 12.0.140
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: GenuineIntel
Model name: 11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz
CPU family: 6
Model: 140
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
Stepping: 1
CPU(s) scaling MHz: 43%
CPU max MHz: 4200.0000
CPU min MHz: 400.0000
BogoMIPS: 4838.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l2 cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid movdiri movdir64b fsrm avx512_vp2intersect md_clear ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 192 KiB (4 instances)
L1i cache: 128 KiB (4 instances)
L2 cache: 5 MiB (4 instances)
L3 cache: 8 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-7
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.14.0
[pip3] pytorch-forecasting==1.3.0
[pip3] pytorch-lightning==2.5.0.post0
[pip3] torch==2.6.0
[pip3] torchmetrics==1.6.1
[pip3] triton==3.2.0
[conda] Could not collect
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
|
open
|
2025-03-11T17:40:02Z
|
2025-03-14T01:25:03Z
|
https://github.com/pytorch/pytorch/issues/148976
|
[
"oncall: distributed",
"triaged"
] |
slitvinov
| 3
|
robinhood/faust
|
asyncio
| 443
|
Fails to create leader topic
|
## Checklist
- [x] I have included information about relevant versions
- [x] I have verified that the issue persists when using the `master` branch of Faust.
## Steps to reproduce
I'm running Kafka on a Kubernetes (Strimzi) installation with the configuration `topic.auto.create.enable: True`. I'm trying to start a simple faust app, but it fails due to `TopicAuthorizationFailedError`:
```python
import faust
import ssl
import logging
logging.basicConfig(level=logging.DEBUG)
ssl_context = ssl.create_default_context(
purpose=ssl.Purpose.SERVER_AUTH, cafile=ssl_cafile)
ssl_context.load_cert_chain(ssl_certfile,
keyfile=ssl_keyfile)
app = faust.App('faust_test',
broker=host,
broker_credentials=ssl_context,
value_serializer='raw')
dev_topic = app.topic('dev')
@app.agent(dev_topic)
async def greet(numbers):
async for n in numbers:
print(n)
```
## Expected behavior
I expected `faust_test-__assignor-__leader` topic to be created and the application to start
## Actual behavior
The faust app does not start; it fails due to `TopicAuthorizationFailedError`.
## Full traceback
```pytb
[2019-10-09 10:34:10,867: INFO]: [^Worker]: Starting...
[2019-10-09 10:34:10,870: INFO]: [^-App]: Starting...
[2019-10-09 10:34:10,870: INFO]: [^--Monitor]: Starting...
[2019-10-09 10:34:10,870: INFO]: [^--Producer]: Starting...
[2019-10-09 10:34:12,273: INFO]: [^--CacheBackend]: Starting...
[2019-10-09 10:34:12,273: INFO]: [^--Web]: Starting...
[2019-10-09 10:34:12,279: INFO]: [^---Server]: Starting...
[2019-10-09 10:34:12,280: INFO]: [^--Consumer]: Starting...
[2019-10-09 10:34:12,280: INFO]: [^---AIOKafkaConsumerThread]: Starting...
[2019-10-09 10:34:13,872: INFO]: [^--LeaderAssignor]: Starting...
[2019-10-09 10:34:13,872: INFO]: [^--Producer]: Creating topic 'faust_test-__assignor-__leader'
[2019-10-09 10:34:14,953: ERROR]: [^Worker]: Error: TopicAuthorizationFailedError('Cannot create topic: faust_test-__assignor-__leader (29): Authorization failed.')
Traceback (most recent call last):
File "/Users/laukea/.pyenv/versions/miniconda3-latest/lib/python3.7/site-packages/mode/worker.py", line 261, in execute_from_commandline
self.loop.run_until_complete(self._starting_fut)
File "/Users/laukea/.pyenv/versions/miniconda3-latest/lib/python3.7/asyncio/base_events.py", line 584, in run_until_complete
return future.result()
File "/Users/laukea/.pyenv/versions/miniconda3-latest/lib/python3.7/site-packages/mode/worker.py", line 326, in start
await super().start()
File "/Users/laukea/.pyenv/versions/miniconda3-latest/lib/python3.7/site-packages/mode/services.py", line 719, in start
await self._default_start()
File "/Users/laukea/.pyenv/versions/miniconda3-latest/lib/python3.7/site-packages/mode/services.py", line 726, in _default_start
await self._actually_start()
File "/Users/laukea/.pyenv/versions/miniconda3-latest/lib/python3.7/site-packages/mode/services.py", line 750, in _actually_start
await child.maybe_start()
File "/Users/laukea/.pyenv/versions/miniconda3-latest/lib/python3.7/site-packages/mode/services.py", line 776, in maybe_start
await self.start()
File "/Users/laukea/.pyenv/versions/miniconda3-latest/lib/python3.7/site-packages/mode/services.py", line 719, in start
await self._default_start()
File "/Users/laukea/.pyenv/versions/miniconda3-latest/lib/python3.7/site-packages/mode/services.py", line 726, in _default_start
await self._actually_start()
File "/Users/laukea/.pyenv/versions/miniconda3-latest/lib/python3.7/site-packages/mode/services.py", line 750, in _actually_start
await child.maybe_start()
File "/Users/laukea/.pyenv/versions/miniconda3-latest/lib/python3.7/site-packages/mode/services.py", line 776, in maybe_start
await self.start()
File "/Users/laukea/.pyenv/versions/miniconda3-latest/lib/python3.7/site-packages/mode/services.py", line 719, in start
await self._default_start()
File "/Users/laukea/.pyenv/versions/miniconda3-latest/lib/python3.7/site-packages/mode/services.py", line 726, in _default_start
await self._actually_start()
File "/Users/laukea/.pyenv/versions/miniconda3-latest/lib/python3.7/site-packages/mode/services.py", line 743, in _actually_start
await self.on_start()
File "/Users/laukea/.pyenv/versions/miniconda3-latest/lib/python3.7/site-packages/faust/assignor/leader_assignor.py", line 20, in on_start
await leader_topic.maybe_declare()
File "/Users/laukea/.pyenv/versions/miniconda3-latest/lib/python3.7/site-packages/mode/utils/futures.py", line 53, in __call__
result = await self.fun(*self.args, **self.kwargs)
File "/Users/laukea/.pyenv/versions/miniconda3-latest/lib/python3.7/site-packages/faust/topics.py", line 427, in maybe_declare
await self.declare()
File "/Users/laukea/.pyenv/versions/miniconda3-latest/lib/python3.7/site-packages/faust/topics.py", line 446, in declare
retention=self.retention,
File "/Users/laukea/.pyenv/versions/miniconda3-latest/lib/python3.7/site-packages/faust/transport/drivers/aiokafka.py", line 587, in create_topic
ensure_created=ensure_created,
File "/Users/laukea/.pyenv/versions/miniconda3-latest/lib/python3.7/site-packages/faust/transport/drivers/aiokafka.py", line 731, in _create_topic
await wrap()
File "/Users/laukea/.pyenv/versions/miniconda3-latest/lib/python3.7/site-packages/mode/utils/futures.py", line 53, in __call__
result = await self.fun(*self.args, **self.kwargs)
File "/Users/laukea/.pyenv/versions/miniconda3-latest/lib/python3.7/site-packages/faust/transport/drivers/aiokafka.py", line 820, in _really_create_topic
f'Cannot create topic: {topic} ({code}): {reason}')
kafka.errors.TopicAuthorizationFailedError: [Error 29] TopicAuthorizationFailedError: Cannot create topic: faust_test-__assignor-__leader (29): Authorization failed.
[2019-10-09 10:34:14,957: INFO]: [^Worker]: Stopping...
[2019-10-09 10:34:14,957: INFO]: [^-App]: Stopping...
[2019-10-09 10:34:14,957: INFO]: [^-App]: Flush producer buffer...
[2019-10-09 10:34:14,958: INFO]: [^--TableManager]: Stopping...
[2019-10-09 10:34:14,958: INFO]: [^--Fetcher]: Stopping...
[2019-10-09 10:34:14,958: INFO]: [^--Conductor]: Stopping...
[2019-10-09 10:34:14,958: INFO]: [^--AgentManager]: Stopping...
[2019-10-09 10:34:14,958: INFO]: [^Agent: faust_test.greet]: Stopping...
[2019-10-09 10:34:14,959: INFO]: [^--ReplyConsumer]: Stopping...
[2019-10-09 10:34:14,959: INFO]: [^--LeaderAssignor]: Stopping...
[2019-10-09 10:34:14,959: INFO]: [^--Consumer]: Stopping...
[2019-10-09 10:34:14,959: INFO]: [^---AIOKafkaConsumerThread]: Stopping...
[2019-10-09 10:34:15,873: INFO]: [^--Web]: Stopping...
[2019-10-09 10:34:15,873: INFO]: [^---Server]: Stopping...
[2019-10-09 10:34:15,874: INFO]: [^--Web]: Cleanup
[2019-10-09 10:34:15,874: INFO]: [^--CacheBackend]: Stopping...
[2019-10-09 10:34:15,874: INFO]: [^--Producer]: Stopping...
[2019-10-09 10:34:15,985: INFO]: [^--Monitor]: Stopping...
[2019-10-09 10:34:15,986: INFO]: [^Worker]: Gathering service tasks...
[2019-10-09 10:34:15,986: INFO]: [^Worker]: Gathering all futures...
[2019-10-09 10:34:16,989: INFO]: [^Worker]: Closing event loop
```
# Versions
* Python version
* Faust version faust, version Faust 1.6.1
* Operating system kubernetes
* Kafka version https://strimzi.io/
* RocksDB version (if applicable)
Any help is much appreciated!
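One configuration avenue worth trying (an assumption, not a confirmed fix): since the broker auto-creates topics but the client apparently lacks CreateTopics ACLs, telling faust not to declare topics itself may sidestep the error. `topic_allow_declare` is a faust setting from 1.5+, but whether it covers the internal `__assignor-__leader` topic is an assumption; `host` and `ssl_context` are as in the reproduction snippet above.

```python
import faust

app = faust.App(
    'faust_test',
    broker=host,
    broker_credentials=ssl_context,
    value_serializer='raw',
    topic_allow_declare=False,  # don't issue CreateTopics requests from faust
)
```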
|
closed
|
2019-10-09T08:47:20Z
|
2019-11-04T02:39:06Z
|
https://github.com/robinhood/faust/issues/443
|
[] |
LeonardAukea
| 2
|
ScrapeGraphAI/Scrapegraph-ai
|
machine-learning
| 952
|
Hangs with remote Ollama
|
**Describe the bug**
When running against a remote (vs. local) ollama server, `smart_scraper_graph.run()` hangs and does not return (left running overnight as well, just in case)
- the usage on the remote machine goes up so the requests are happening
- same model on both remote and local machine
- tested with all v1.4x versions
- tested different temperatures
- checked with and without base_url parameter when running locally (both work fine)
- tested with and without headless
- ollama v0.6.1 (both machines)
**To Reproduce**
```python
from scrapegraphai.graphs import SmartScraperGraph
graph_config = {
"llm": {
"model": "ollama/llama3.2",
"max_tokens": 8192,
"base_url": "http://192.168.1.1:11434",
"format": "json"
},
"verbose": True,
"headless": False,
}
smart_scraper_graph = SmartScraperGraph(
prompt="I want a list of all the links of the issues on the page",
source="https://github.com/ScrapeGraphAI/Scrapegraph-ai/issues",
config=graph_config
)
result = smart_scraper_graph.run()
import json
print(json.dumps(result, indent=4))
```
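Since the remote requests evidently arrive (GPU usage rises) but the call never returns, it can help to first rule out a plain connectivity or timeout problem. A minimal probe, assuming the standard Ollama `/api/tags` endpoint:

```python
import urllib.request
import urllib.error

def ollama_reachable(base_url: str, timeout: float = 5.0) -> bool:
    """Return True if the Ollama server answers its /api/tags endpoint."""
    url = base_url.rstrip("/") + "/api/tags"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False
```

If this returns `False` for the remote host but `True` locally, the hang is more likely a networking issue than a scrapegraphai one.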
**Expected behavior**
The call returns scraped results against the remote Ollama server, just as it does against a local one.
|
open
|
2025-03-16T13:56:07Z
|
2025-03-16T19:09:06Z
|
https://github.com/ScrapeGraphAI/Scrapegraph-ai/issues/952
|
[
"bug"
] |
axibo-reiner
| 2
|
ageitgey/face_recognition
|
machine-learning
| 1,530
|
Design thinking
|
open
|
2023-09-02T17:27:15Z
|
2023-09-02T17:27:15Z
|
https://github.com/ageitgey/face_recognition/issues/1530
|
[] |
RamegowdaMD
| 0
|
|
babysor/MockingBird
|
deep-learning
| 713
|
A few suggestions
|
1. Could the project be compiled into a complete exe? The current setup is too complicated for me.
2. Could all the files, including the voice samples, be mirrored from the Baidu Netdisk and Google Drive links to other, more accessible hosts?
> Google Drive is not usable by everyone (e.g. in mainland China).
> The Baidu Netdisk links could be replaced with a self-hosted mirror or anonfiles (unlimited space, no throttling, 20 GB max per file).
|
open
|
2022-08-17T03:29:12Z
|
2022-08-17T03:29:12Z
|
https://github.com/babysor/MockingBird/issues/713
|
[] |
for-the-zero
| 0
|
BayesWitnesses/m2cgen
|
scikit-learn
| 42
|
Prepare for release 0.1.0
|
- [x] Classification support for ensemble models.
- [x] Classification for Python.
- [x] setup.py and release procedure.
- [x] Enable more sklearn models (like remaining linear models).
- [x] README + docs + examples.
- [x] Revisit the library API (exporters).
- [x] Implement CLI.
- [x] Deal with Python limitations on nested function calls (#57)
Optional:
- [x] C language support
- [x] XGBoost/LightGBM
|
closed
|
2019-01-30T19:21:15Z
|
2019-02-13T06:40:14Z
|
https://github.com/BayesWitnesses/m2cgen/issues/42
|
[] |
krinart
| 1
|
RobertCraigie/prisma-client-py
|
pydantic
| 994
|
Add support for create_many to SQLite
|
Added in https://github.com/prisma/prisma/releases/tag/5.12.0
|
closed
|
2024-08-04T17:24:38Z
|
2024-08-11T09:05:29Z
|
https://github.com/RobertCraigie/prisma-client-py/issues/994
|
[
"kind/improvement",
"topic: client"
] |
RobertCraigie
| 1
|
iperov/DeepFaceLab
|
deep-learning
| 5,634
|
SAEHD trainer does not continue
|
## Expected behavior
training begins and new training window opens
## Actual behavior
Running trainer.
Choose one of saved models, or enter a name to create a new model.
[r] : rename
[d] : delete
[0] : new - latest
:
0
Loading new_SAEHD model...
Choose one or several GPU idxs (separated by comma).
[CPU] : CPU
[0] : NVIDIA GeForce RTX 3060 Laptop GPU
[0] Which GPU indexes to choose:
0
Press enter in 2 seconds to override model settings.
[0] Autobackup every N hour ( 0..24 ?:help ) :
0
[n] Write preview history ( y/n ?:help ) :
n
[0] Target iteration :
0
[n] Flip SRC faces randomly ( y/n ?:help ) :
n
[y] Flip DST faces randomly ( y/n ?:help ) : n
[8] Batch_size ( ?:help ) : 4
4
[y] Masked training ( y/n ?:help ) :
y
[y] Eyes and mouth priority ( y/n ?:help ) :
y
[y] Uniform yaw distribution of samples ( y/n ?:help ) :
y
[y] Blur out mask ( y/n ?:help ) :
y
[y] Place models and optimizer on GPU ( y/n ?:help ) :
y
[y] Use AdaBelief optimizer? ( y/n ?:help ) : y
[y] Use learning rate dropout ( n/y/cpu ?:help ) :
y
[y] Enable random warp of samples ( y/n ?:help ) :
y
[0.0] Random hue/saturation/light intensity ( 0.0 .. 0.3 ?:help ) :
0.0
[0.0] GAN power ( 0.0 .. 5.0 ?:help ) :
0.0
[0.0] Face style power ( 0.0..100.0 ?:help ) :
0.0
[0.0] Background style power ( 0.0..100.0 ?:help ) :
0.0
[none] Color transfer for src faceset ( none/rct/lct/mkl/idt/sot ?:help ) : none
none
[n] Enable gradient clipping ( y/n ?:help ) :
n
[n] Enable pretraining mode ( y/n ?:help ) :
n
Initializing models: 100%|###############################################################| 5/5 [00:05<00:00, 1.01s/it]
Loaded 15115 packed faces from D:\Program Files\Deepface\DeepFaceLab - Live\workspace\data_src\aligned
Sort by yaw: 100%|##################################################################| 128/128 [00:00<00:00, 286.62it/s]
Loaded 63012 packed faces from D:\Program Files\Deepface\DeepFaceLab - Live\workspace\data_dst\aligned
Sort by yaw: 100%|###################################################################| 128/128 [00:02<00:00, 52.45it/s]
======================== Model Summary ========================
== ==
== Model name: new_SAEHD ==
== ==
== Current iteration: 1 ==
== ==
==---------------------- Model Options ----------------------==
== ==
== resolution: 224 ==
== face_type: wf ==
== models_opt_on_gpu: True ==
== archi: liae-udt ==
== ae_dims: 512 ==
== e_dims: 64 ==
== d_dims: 64 ==
== d_mask_dims: 32 ==
== masked_training: True ==
== eyes_mouth_prio: True ==
== uniform_yaw: True ==
== blur_out_mask: True ==
== adabelief: True ==
== lr_dropout: y ==
== random_warp: True ==
== random_hsv_power: 0.0 ==
== true_face_power: 0.0 ==
== face_style_power: 0.0 ==
== bg_style_power: 0.0 ==
== ct_mode: none ==
== clipgrad: False ==
== pretrain: False ==
== autobackup_hour: 0 ==
== write_preview_history: False ==
== target_iter: 0 ==
== random_src_flip: False ==
== random_dst_flip: False ==
== batch_size: 4 ==
== gan_power: 0.0 ==
== gan_patch_size: 28 ==
== gan_dims: 16 ==
== ==
==----------------------- Running On ------------------------==
== ==
== Device index: 0 ==
== Name: NVIDIA GeForce RTX 3060 Laptop GPU ==
== VRAM: 3.39GB ==
== ==
===============================================================
Starting. Press "Enter" to stop training and save model.
I have waited here overnight to no avail.
## Steps to reproduce
I have tried a multitude of different settings and followed both tutorials with the same result; I have re-extracted DeepFaceLab and made sure I'm using the right version.
## Other relevant information
Windows and the most recent Python as of 5/3/2023.
|
open
|
2023-03-06T06:09:21Z
|
2023-06-08T20:03:08Z
|
https://github.com/iperov/DeepFaceLab/issues/5634
|
[] |
haydle360
| 2
|
google-research/bert
|
nlp
| 567
|
words from 768 dim output
|
I'm interested in the smaller BERT download. I see that tokenizing input is fairly easy, but I would like to get words as output; I'm working on a simple chat-bot type application. If the output is 768-dimensional, do I have to add my own linear layer in order to get one of the ~30,000 words in the input vocabulary? I expect you have faced this type of problem before. What is the recommended course of action? Thank you.
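A pooled 768-dim vector has no notion of words until it is projected onto the vocabulary; BERT's own masked-LM head is exactly such a projection. A toy, framework-free sketch of the idea (tiny hypothetical dimensions, not the real 768×30000 weights):

```python
def project_to_vocab(hidden, weight_rows, bias):
    """Linear layer: one logit per vocabulary entry.

    hidden:      list of floats, length = hidden size
    weight_rows: one weight row per vocab word, each of length = hidden size
    bias:        one float per vocab word
    """
    return [sum(h * w for h, w in zip(hidden, row)) + b
            for row, b in zip(weight_rows, bias)]

def best_word(logits, vocab):
    # Greedy decode: pick the word with the largest logit.
    return vocab[max(range(len(logits)), key=logits.__getitem__)]
```

In practice this would be a trained `nn.Linear(768, vocab_size)` (or BERT's tied output embeddings) rather than a hand-rolled loop.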
|
closed
|
2019-04-09T12:55:39Z
|
2019-04-16T15:16:33Z
|
https://github.com/google-research/bert/issues/567
|
[] |
radiodee1
| 0
|
unit8co/darts
|
data-science
| 2,028
|
Can models (s4: Structured State Spaces for Sequence Modeling) from HazyResearch group at Stanford be included in this repository in a unified manner?
|
**Is your feature request related to a current problem? Please describe.**
https://github.com/HazyResearch/state-spaces
It would be nice if these models are also present in darts for people to try out and compare with other models.
**Describe proposed solution**
Inclusion of these models.
|
open
|
2023-10-15T13:58:27Z
|
2023-10-16T06:55:24Z
|
https://github.com/unit8co/darts/issues/2028
|
[
"new model"
] |
dineshdharme
| 0
|
roboflow/supervision
|
deep-learning
| 751
|
`PolygonZone` 2.0
|
### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Description
TODO
### Use case
TODO
### Additional
- do not require `resolution_wh`
- offers current and accumulated count
- delivery per class in/out count
- new ways to visualize results
|
closed
|
2024-01-18T20:44:46Z
|
2024-01-26T13:53:18Z
|
https://github.com/roboflow/supervision/issues/751
|
[
"enhancement",
"Q1.2024",
"planning"
] |
SkalskiP
| 0
|
tensorflow/tensor2tensor
|
machine-learning
| 1,311
|
request img2img Problem type
|
### Description
Currently an ImageProblem can only take an image as input and produce a classification as output. I request an image-to-image problem type.
...
|
closed
|
2018-12-18T15:10:25Z
|
2023-03-02T17:01:29Z
|
https://github.com/tensorflow/tensor2tensor/issues/1311
|
[] |
maxwillzq
| 1
|
0b01001001/spectree
|
pydantic
| 208
|
[BUG]API schema is not updated for enums and objects nested inside the dependency (only request body schema)
|
**Describe the bug**
If an enum is defined in a dependency model and the pydantic `Field` type is used, the generated schema is not updated.
**To Reproduce**
The code snippet to generate the behavior
```python
from enum import Enum
from random import random

from flask import Flask, abort, jsonify, request
from flask.views import MethodView
from pydantic import BaseModel, Field
from spectree import Response, SpecTree

app = Flask(__name__)
api = SpecTree("flask", app=app)

class Query(BaseModel):
    text: str = "default query strings"

class Resp(BaseModel):
    label: int
    score: float = Field(
        ...,
        gt=0,
        lt=1,
    )

class Language(str, Enum):
    en = "en-US"
    zh = "zh-CN"

class SubData(BaseModel):
    uid: str
    lang: Language = Field(Language.en, description="the supported languages")

class Data(BaseModel):
    uid: str
    limit: int = 5
    vip: bool
    data: SubData = Field(description="created a test dependency")

class Header(BaseModel):
    Lang: Language = Field(Language.en, description="the supported languages")

class Cookie(BaseModel):
    key: str

@app.route(
    "/api/predict/<string(length=2):source>/<string(length=2):target>", methods=["POST"]
)
@api.validate(
    query=Query, json=Data, resp=Response("HTTP_403", HTTP_200=Resp), tags=["model"]
)
def predict(source, target):
    """
    predict demo

    demo for `query`, `data`, `resp`, `x`

    query with
    ``http POST ':8000/api/predict/zh/en?text=hello' uid=xxx limit=5 vip=false``
    """
    print(f"=> from {source} to {target}")  # path
    print(f"JSON: {request.context.json}")  # Data
    print(f"Query: {request.context.query}")  # Query
    if random() < 0.5:
        abort(403)
    return jsonify(label=int(10 * random()), score=random())

@app.route("/api/header", methods=["POST"])
@api.validate(
    headers=Header, cookies=Cookie, resp=Response("HTTP_203"), tags=["test", "demo"]
)
def with_code_header():
    """
    demo for JSON with status code and header

    query with ``http POST :8000/api/header Lang:zh-CN Cookie:key=hello``
    """
    return jsonify(language=request.context.headers.Lang), 203, {"X": 233}
```
**Expected behavior**
The schema should be rendered correctly in the API docs: the standalone Schemas section renders fine, so the same is expected where the model is referenced in the default request-body schema.
**Error Message**
<img width="1542" alt="image" src="https://user-images.githubusercontent.com/34709561/155974630-1e26793d-6fa5-44bf-a5dd-da7aa6a73207.png">
**Desktop (please complete the following information):**
- OS: [e.g. Linux]
- Version [e.g. Ubuntu-18.04]
**Python Information (please complete the following information):**
- Python Version [e.g. Python=3.10]
- Library Version [e.g. spectree=0.7.6]
- Other dependencies [e.g. flask=2.0.0]
**Additional context**
Add any other context about the problem here.
|
open
|
2022-02-28T11:21:24Z
|
2022-05-01T14:01:20Z
|
https://github.com/0b01001001/spectree/issues/208
|
[
"bug"
] |
vanitha-basavanna
| 7
|
labmlai/annotated_deep_learning_paper_implementations
|
deep-learning
| 155
|
How to instantiate the module
|
Hello everyone,
How can I instantiate a model with parameters? I followed the tutorial https://nn.labml.ai/transformers/vit/index.html, but when creating the VisionTransformer model the class does not allow me to pass the parameters, and I do not really understand why. Can someone please explain, and why does it show `self: Module`?

|
closed
|
2022-11-23T09:30:10Z
|
2022-11-23T22:45:54Z
|
https://github.com/labmlai/annotated_deep_learning_paper_implementations/issues/155
|
[] |
hailuu684
| 1
|
junyanz/pytorch-CycleGAN-and-pix2pix
|
pytorch
| 795
|
About data set configuration
|
Hi!
I have a question about the composition of a dataset; if there are similar questions in the past, I'm sorry.
I want to perform segmentation on images cut out from videos with pix2pix.
In that case, is it possible to test with only the cut-out images, without pairing them with the ground-truth images?
Best regards.
|
closed
|
2019-10-14T14:53:57Z
|
2019-10-15T20:26:45Z
|
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/795
|
[] |
Migita6
| 2
|
ading2210/poe-api
|
graphql
| 52
|
I can't make the bot with beaver type; server doesn't respond to my request
|
DEBUG:urllib3.connectionpool:https://poe.com:443 "POST /api/gql_POST HTTP/1.1" 200 89
WARNING:root:PoeBotEditMutation returned an error: Server Error | Retrying (20/20)
Traceback (most recent call last):
File "/workspaces/poe-api-wk/main.py", line 17, in <module>
edit_result = client.edit_bot(bot_id, handle, base_model="a2_2")
File "/home/codespace/.python/current/lib/python3.10/site-packages/poe.py", line 459, in edit_bot
result = self.send_query("PoeBotEditMutation", {
File "/home/codespace/.python/current/lib/python3.10/site-packages/poe.py", line 206, in send_query
raise RuntimeError(f'{query_name} failed too many times.')
RuntimeError: PoeBotEditMutation failed too many times.
|
closed
|
2023-04-19T07:50:03Z
|
2023-04-19T13:42:20Z
|
https://github.com/ading2210/poe-api/issues/52
|
[
"wontfix"
] |
tonymacx86PRO
| 2
|
slackapi/bolt-python
|
fastapi
| 330
|
Potentially request.body can be None when using a custom adapter
|
I think you should swap the order of these lines: with aws_lambda, `body` is `None`, so the code raises the exception inside the `if` condition. L49 makes no sense in this order, because if `body` is `None` the code will always raise before reaching it.
https://github.com/slackapi/bolt-python/blob/31436f729871c38f596a1363aa1d61a5b4c15d0d/slack_bolt/request/request.py#L47-L49
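The fix is simply to hoist the `None` check above any use of `body`. A standalone sketch of the intended order (not the actual bolt code):

```python
import json

def parse_request_body(body, content_type):
    # Check for a missing body first: adapters such as aws_lambda can hand
    # over body=None, and touching it before this guard would raise.
    if body is None:
        return {}
    if content_type and "application/json" in content_type:
        return json.loads(body)
    return {"raw": body}
```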
|
closed
|
2021-05-08T18:32:54Z
|
2021-05-10T20:44:35Z
|
https://github.com/slackapi/bolt-python/issues/330
|
[
"area:async",
"improvement",
"area:sync"
] |
matteobaldelli
| 1
|
Lightning-AI/pytorch-lightning
|
pytorch
| 19,793
|
Please make it simple!
|
### Outline & Motivation
One area where TensorFlow falls behind PyTorch is its overly complex design, while PyTorch is much simpler. But when I started to use pytorch-lightning, I felt it was becoming another TensorFlow, so I beg you to keep things simple. For a simple checkpoint-saving function, I had to trace the code from ModelCheckpoint to trainer.save_checkpoint, then to checkpoint_connector.save_checkpoint, then to trainer.strategy.save_checkpoint; where does it end? How can correctness be ensured under such complex designs? Please make it simple!
### Pitch
The strategy design, like TensorFlow's, is too complex. DDP is just a simple all-reduce of gradients, but inside strategies (as in Keras) things become very complex: the function call stacks are so deep that it is hard to understand where the actual all-reduce happens. Even after spending weeks, users may not figure out what is actually going on, because a call goes from module a to b, then to c, then back to a, then to b, and then I give up.
### Additional context
I suggest implementing things as what they are: stop over-encapsulating, follow the design patterns of PyTorch and Caffe, and stop making simple functions complicated.
cc @justusschock @awaelchli
|
open
|
2024-04-22T01:55:49Z
|
2024-05-03T16:12:16Z
|
https://github.com/Lightning-AI/pytorch-lightning/issues/19793
|
[
"refactor",
"needs triage"
] |
chengmengli06
| 1
|
jupyter/docker-stacks
|
jupyter
| 1,746
|
[ENH] - Delta Image
|
### What docker image(s) is this feature applicable to?
all-spark-notebook
### What changes are you proposing?
Can we get an image with Delta included?
https://docs.delta.io/latest/quick-start.html
### How does this affect the user?
It would provide additional functionality.
### Anything else?
_No response_
|
closed
|
2022-07-09T11:06:08Z
|
2022-09-12T17:41:48Z
|
https://github.com/jupyter/docker-stacks/issues/1746
|
[
"type:Enhancement",
"tag:Upstream"
] |
lymedo
| 14
|
pytest-dev/pytest-html
|
pytest
| 623
|
Gettting this - AttributeError: 'Namespace' object has no attribute 'htmlpath'
|
/Library/Frameworks/Python.framework/Versions/3.8/bin/python3 "/Applications/PyCharm CE.app/Contents/plugins/python-ce/helpers/pycharm/_jb_pytest_runner.py" --target test_WTW.py::Test_WTW.test_cast_icon
Testing started at 1:39 PM ...
Launching pytest with arguments test_WTW.py::Test_WTW::test_cast_icon in /Users/hakumar/pythonProject/AppiumAndroid/tests
/Applications/PyCharm CE.app/Contents/plugins/python-ce/helpers/pycharm/_jb_pytest_runner.py:28: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
if version.LooseVersion(pytest.__version__) >= version.LooseVersion("6.0"):
INTERNALERROR> Traceback (most recent call last):
**INTERNALERROR> File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/_pytest/config/__init__.py", line 1104, in getoption
INTERNALERROR> val = getattr(self.option, name)
INTERNALERROR> AttributeError: 'Namespace' object has no attribute 'htmlpath'**
INTERNALERROR>
INTERNALERROR> During handling of the above exception, another exception occurred:
INTERNALERROR>
INTERNALERROR> Traceback (most recent call last):
INTERNALERROR> File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/_pytest/main.py", line 202, in wrap_session
INTERNALERROR> config._do_configure()
INTERNALERROR> File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/_pytest/config/__init__.py", line 773, in _do_configure
INTERNALERROR> self.hook.pytest_configure.call_historic(kwargs=dict(config=self))
INTERNALERROR> File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/pluggy/hooks.py", line 308, in call_historic
INTERNALERROR> res = self._hookexec(self, self.get_hookimpls(), kwargs)
INTERNALERROR> File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/pluggy/manager.py", line 93, in _hookexec
INTERNALERROR> return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR> File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/pluggy/manager.py", line 84, in <lambda>
INTERNALERROR> self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(
INTERNALERROR> File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/pluggy/callers.py", line 208, in _multicall
INTERNALERROR> return outcome.get_result()
INTERNALERROR> File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/pluggy/callers.py", line 80, in get_result
INTERNALERROR> raise ex[1].with_traceback(ex[2])
INTERNALERROR> File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/pluggy/callers.py", line 187, in _multicall
INTERNALERROR> res = hook_impl.function(*args)
INTERNALERROR> File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/appiumbase/plugins/pytest_plugin.py", line 242, in pytest_configure
INTERNALERROR> ab_config.pytest_html_report = config.getoption("htmlpath") # --html=FILE
INTERNALERROR> File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/_pytest/config/__init__.py", line 1115, in getoption
INTERNALERROR> raise ValueError("no option named %r" % (name,))
INTERNALERROR> ValueError: no option named 'htmlpath'
Process finished with exit code 3
Empty suite
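The traceback shows a third-party plugin (`appiumbase`) calling `config.getoption("htmlpath")` unconditionally, which raises when pytest-html is not installed and hasn't registered that option. A defensive pattern, sketched here with a stub config rather than pytest itself:

```python
class StubConfig:
    """Stand-in for pytest's Config, raising like the real getoption."""
    _options = {"verbose": 1}

    def getoption(self, name):
        if name not in self._options:
            raise ValueError(f"no option named {name!r}")
        return self._options[name]

def safe_getoption(config, name, default=None):
    # Fall back instead of crashing when a plugin-registered option
    # (e.g. "htmlpath" from pytest-html) is absent.
    try:
        return config.getoption(name)
    except ValueError:
        return default
```

Note that pytest's real `Config.getoption` also accepts a `default=` argument, which the calling plugin could use directly.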
|
open
|
2023-04-05T08:17:59Z
|
2023-04-05T17:02:20Z
|
https://github.com/pytest-dev/pytest-html/issues/623
|
[] |
Harshithkumar
| 2
|
laurentS/slowapi
|
fastapi
| 129
|
Can global limits work like this in FastAPI?
|
If I set the global limit to "10/minute" with `default_limits`, can some API endpoints receive the global limit even without a limit decorator, while other endpoints with limit decorators receive the limits set by those decorators?
Here's an example: I want to limit access to the /test endpoint to 3 times per minute, and then limit access to the / and /home endpoints to a total of 7 times per minute, without adding decorators to / and /home separately.
```python
limiter = Limiter(key_func=get_remote_address, default_limits=["10/minute"])
app.state.limiter = limiter
app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)
app.add_middleware(SlowAPIMiddleware)
@app.get("/")
async def root_page(request: Request):
return 'root'
@app.get("/home")
async def homepage(request: Request):
return 'home'
@app.get("/test")
@limiter.limit("3/minute")
async def test(request: Request):
return 'test'
```
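The behaviour being asked for is a counter shared by several routes plus a per-route override. Below is a stdlib sketch of that bookkeeping; slowapi itself exposes a related mechanism, `limiter.shared_limit(..., scope=...)`, which is worth checking for the real setup:

```python
from collections import defaultdict

class ScopedLimiter:
    """Routes registered under the same scope share one hit counter."""

    def __init__(self, default_limit):
        self.default_limit = default_limit
        self.scopes = {}   # route -> scope name
        self.limits = {}   # scope -> limit
        self.hits = defaultdict(int)

    def share(self, route, scope, limit):
        self.scopes[route] = scope
        self.limits[scope] = limit

    def allow(self, route):
        # Unregistered routes fall back to a private scope and the default.
        scope = self.scopes.get(route, route)
        cap = self.limits.get(scope, self.default_limit)
        self.hits[scope] += 1
        return self.hits[scope] <= cap
```

(The sketch ignores time windows; it only illustrates the shared-counter accounting.)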
|
closed
|
2023-03-03T04:35:11Z
|
2024-10-11T23:17:50Z
|
https://github.com/laurentS/slowapi/issues/129
|
[] |
stilleshan
| 4
|
netbox-community/netbox
|
django
| 18,749
|
Swagger: Bulk Delete of IPAddress
|
### Deployment Type
Self-hosted
### NetBox Version
v4.2.3
### Python Version
3.10
### Steps to Reproduce
Using Swagger to bulk delete multiple IPAddresses does not work. Per the OpenAPI spec, the request body is a list of IPAddress models:
curl -X 'DELETE' \
'http://netbox.nokia.com:8090/api/ipam/ip-addresses/' \
-H 'accept: */*' \
-H 'Content-Type: application/json' \
-H 'X-CSRFTOKEN: GG45Az6VW9x5EouvV3tCX0obbqa80xZ2sCdJujXyXMfskd9Fdpcn4AfwdyXrRHWG' \
-d '[
{
"address": "1.1.1.1/32"
}
]'
### Expected Behavior
Successful deletion of ipAddresses.
The API spec should match the behavior of the server. If the expectation is to provide a list of IP address IDs in the bulk-delete request, the OpenAPI spec should reflect that.
### Observed Behavior
However on executing HTTP 400 is returned
[
{
"id": [
"This field is required."
]
}
]
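For reference, the 400 error says `id` is required, so the server evidently identifies objects to bulk-delete by primary key rather than by address. A small helper to build that body (the IDs here are placeholders):

```python
import json

def bulk_delete_body(ids):
    """Build the request body the bulk DELETE endpoint appears to expect."""
    return json.dumps([{"id": pk} for pk in ids])
```

The mismatch reported in this issue is that the generated OpenAPI schema advertises the full IPAddress model instead of this id-only shape.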
|
open
|
2025-02-27T04:34:25Z
|
2025-02-28T00:54:53Z
|
https://github.com/netbox-community/netbox/issues/18749
|
[
"type: bug",
"status: needs owner",
"severity: low"
] |
soumiksamanta
| 0
|
huggingface/transformers
|
python
| 36,697
|
Support Flex Attention for encoder only models (XLMRoberta, ModernBERT etc...)
|
### Feature request
With the addition of flex attention support through #36643, encoder-only models still lack this feature.
XLMRoberta and ModernBERT (and EuroBERT in the future) are very common in RAG setups (embedding + reranker).
Allowing them to support arbitrary attention patterns would be useful.
### Motivation
Support for arbitrary attention patterns can be useful for research/production.
### Your contribution
test
|
open
|
2025-03-13T12:24:08Z
|
2025-03-24T17:03:09Z
|
https://github.com/huggingface/transformers/issues/36697
|
[
"Feature request"
] |
ccdv-ai
| 3
|
supabase/supabase-py
|
fastapi
| 93
|
The "contains" query filter passes verbatim string "values" to Supabase in place of actual values.
|
**Describe the bug**
The "contains" filter appears to incorrectly pass the verbatim string "values" to Supabase, in place of the intended query values.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a Supabase client connected to a schema with table "table_name" having the JSONB column "column_name".
2. Invoke the "contains" filter with `supabase.table('table_name').select('*').cs('column_name','["value_name"]').execute()`
3. See the error: `{'data': {'message': 'invalid input syntax for type json', 'code': '22P02', 'hint': None, 'details': 'Token "values" is invalid.'}, 'status_code': 400}`
**Expected behavior**
This should return any rows from "table_name" where the JSONB entry in "column_name" contains the value "value_name".
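To see what is actually sent, it can help to build the PostgREST filter string by hand and compare it with the request the client emits. A hypothetical helper (the exact operator syntax should be double-checked against the PostgREST docs):

```python
import json
from urllib.parse import quote

def contains_param(column, values):
    """URL-encode a JSONB 'contains' filter as column=cs.<json>."""
    return f"{column}=cs.{quote(json.dumps(values, separators=(',', ':')))}"
```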
|
closed
|
2021-12-09T01:23:05Z
|
2022-02-04T16:51:34Z
|
https://github.com/supabase/supabase-py/issues/93
|
[
"bug"
] |
aicushman
| 3
|
openapi-generators/openapi-python-client
|
rest-api
| 503
|
More thorough docstrings on APIs and models
|
**Is your feature request related to a problem? Please describe.**
OpenAPI documents can carry a lot of information about APIs and types, and the client currently isn't adding a good chunk of that information to the docstrings, leading people to refer back to the OpenAPI file instead of the library.
**Describe the solution you'd like**
I think we could improve this situation: we should add as much context as is relevant to the docstrings. From there it would be visible in the IDE and in any generated module documentation.
It would be nice if the docstrings were well formatted and maybe cross-referenced, but that's not the point of this issue. The point is to have as much information as possible in the docstrings (available inside the IDE or generated module documentation without looking at the OpenAPI document).
Currently, API and model generation only use the description field.
The API has at least a description and a summary attribute, which documents may or may not use.
The models have attrs with a description and an example field. I can't find a way to attach per-attribute documentation to attrs, but at the very least it could be added to the class docstring.
**Additional context**
I was thinking of starting with something like
```jinja
def sync(
{{ arguments(endpoint) | indent(4) }}
) -> Optional[{{ return_string }}]:
    """{% if endpoint.summary %}{{ endpoint.summary | wordwrap(73) }}
    {% else %}
    {{ endpoint.title | wordwrap(73) }}
    {% endif %}
    {{ endpoint.description | wordwrap(73) }}"""
```
for `endpoint_module.py.jinja`. That, plus, if we have query fields instead of a model, also including the description and example for each of them.
|
closed
|
2021-09-27T09:42:51Z
|
2021-12-19T02:53:50Z
|
https://github.com/openapi-generators/openapi-python-client/issues/503
|
[
"โจ enhancement"
] |
rtaycher
| 2
|
xorbitsai/xorbits
|
numpy
| 237
|
CLN: Remove code related to `clean_up_func`
|
"clean_up_func" is used to reduce the serialization overhead of functions, but the previous implementation was too complex and too tightly coupled with Ray. We are deleting this part of the code for now.
|
closed
|
2023-02-23T08:57:18Z
|
2023-02-24T06:34:13Z
|
https://github.com/xorbitsai/xorbits/issues/237
|
[
"enhancement"
] |
aresnow1
| 1
|
exaloop/codon
|
numpy
| 326
|
ValueError: optional is None. Raised from: std.internal.types.optional.unwrap:0. .codon/lib/codon/stdlib/internal/types/optional.codon:80:5
|
I ran the following Python program, program7.py, with Codon:
```python
class L6Z424Xp():
    ki702Y9a30 = 27
    LQR770XidT = "4YiBNWaE0"

    def C7788020M(self, mx7bY1E77: bool):
        f75NO49 = (((self.ki702Y9a30)))
        Yx6l2635count = 0
        while (self.LQR770XidT) != None:
            Yx6l2635count = Yx6l2635count + 1
            if Yx6l2635count > 2:
                break
            S6y2v5iJ2C = (self.ki702Y9a30)

if __name__ == '__main__':
    wt0717sZ = False
    SKQjq0w = L6Z424Xp()
    SKQjq0w.C7788020M(wt0717sZ)
    print(SKQjq0w.ki702Y9a30)
    print(SKQjq0w.LQR770XidT)
```
It outputs the following error message:
ValueError: optional is None
Raised from: std.internal.types.optional.unwrap:0
.codon/lib/codon/stdlib/internal/types/optional.codon:80:5
Backtrace:
[0x7f08afd228b5] GCC_except_table74 at .codon/lib/codon/stdlib/internal/types/optional.codon:80:5
[0x7f08afd2294f] GCC_except_table74 at codon_program7.py:7:29
[0x7f08afd2776e] GCC_except_table74 at codon_program7.py:15:20
(core dumped) .codon/bin/codon run codon_program7.py
I have also run it with both python3 and pypy3, and both print the following:
27
4YiBNWaE0
The related files can be found in https://github.com/starbugs-qurong/python-compiler-test/tree/main/codon/python_7
|
closed
|
2023-04-04T10:43:51Z
|
2023-04-11T10:53:16Z
|
https://github.com/exaloop/codon/issues/326
|
[] |
starbugs-qurong
| 2
|
nerfstudio-project/nerfstudio
|
computer-vision
| 2,967
|
[splatfacto] TypeError: project_gaussians_forward(): incompatible function arguments.
|
**Describe the bug**
I'd like to train the "splatfacto" method with a custom dataset.
I first verified that "nerfacto" works with my dataset.
The error is related to a *mismatch of arguments in the project_gaussians_forward function*, and I guess the reason is a version mismatch between gsplat and nerfstudio.
The relevant library versions are:
nerfstudio 1.0.2
gsplat 0.1.5
Below are the error messages.
Traceback (most recent call last):
File "/home/jovyan/myconda_envs/autonerfstudio_v2/bin/ns-train", line 8, in <module>
sys.exit(entrypoint())
File "/data/clap/projects/jglee/dlt_rl/autonerf_gitlab/autonerf/nerfstudio/nerfstudio/scripts/train.py", line 262, in entrypoint
main(
File "/data/clap/projects/jglee/dlt_rl/autonerf_gitlab/autonerf/nerfstudio/nerfstudio/scripts/train.py", line 247, in main
launch(
File "/data/clap/projects/jglee/dlt_rl/autonerf_gitlab/autonerf/nerfstudio/nerfstudio/scripts/train.py", line 189, in launch
main_func(local_rank=0, world_size=world_size, config=config)
File "/data/clap/projects/jglee/dlt_rl/autonerf_gitlab/autonerf/nerfstudio/nerfstudio/scripts/train.py", line 100, in train_loop
trainer.train()
File "/data/clap/projects/jglee/dlt_rl/autonerf_gitlab/autonerf/nerfstudio/nerfstudio/engine/trainer.py", line 250, in train loss, loss_dict, metrics_dict = self.train_iteration(step)
File "/data/clap/projects/jglee/dlt_rl/autonerf_gitlab/autonerf/nerfstudio/nerfstudio/utils/profiler.py", line 112, in inner
out = func(*args, **kwargs)
File "/data/clap/projects/jglee/dlt_rl/autonerf_gitlab/autonerf/nerfstudio/nerfstudio/engine/trainer.py", line 471, in train_iteration
_, loss_dict, metrics_dict = self.pipeline.get_train_loss_dict(step=step)
File "/data/clap/projects/jglee/dlt_rl/autonerf_gitlab/autonerf/nerfstudio/nerfstudio/utils/profiler.py", line 112, in inner
out = func(*args, **kwargs)
File "/data/clap/projects/jglee/dlt_rl/autonerf_gitlab/autonerf/nerfstudio/nerfstudio/pipelines/base_pipeline.py", line 300, in get_train_loss_dict
model_outputs = self._model(ray_bundle) # train distributed data parallel model if world_size > 1
File "/home/jovyan/myconda_envs/autonerfstudio_v2/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/jovyan/myconda_envs/autonerfstudio_v2/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs) File "/data/clap/projects/jglee/dlt_rl/autonerf_gitlab/autonerf/nerfstudio/nerfstudio/models/base_model.py", line 143, in forward
return self.get_outputs(ray_bundle)
File "/data/clap/projects/jglee/dlt_rl/autonerf_gitlab/autonerf/nerfstudio/nerfstudio/models/splatfacto.py", line 740, in get_outputs
self.xys, depths, self.radii, conics, comp, num_tiles_hit, cov3d = project_gaussians( # type: ignore
File "/home/jovyan/myconda_envs/autonerfstudio_v2/lib/python3.9/site-packages/gsplat/project_gaussians.py", line 60, in project_gaussians
return _ProjectGaussians.apply(
File "/home/jovyan/myconda_envs/autonerfstudio_v2/lib/python3.9/site-packages/torch/autograd/function.py", line 553, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "/home/jovyan/myconda_envs/autonerfstudio_v2/lib/python3.9/site-packages/gsplat/project_gaussians.py", line 111, in forward
) = _C.project_gaussians_forward(
File "/home/jovyan/myconda_envs/autonerfstudio_v2/lib/python3.9/site-packages/gsplat/cuda/__init__.py", line 9, in call_cuda
return getattr(_C, name)(*args, **kwargs)
TypeError: project_gaussians_forward(): incompatible function arguments. The following argument types are supported:
1. (arg0: int, arg1: torch.Tensor, arg2: torch.Tensor, arg3: float, arg4: torch.Tensor, arg5: torch.Tensor, arg6: torch.Tensor, arg7: float, arg8: float, arg9: float, arg10: float, arg11: int, arg12: int, arg13: Tuple[int, int, int], arg14: float) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]
Invoked with: 50000, Parameter containing:
tensor([[ 3.8227, 4.1500, -1.1714],
[ 4.5931, -1.0955, 1.0090],
[-2.4343, 2.9364, 4.4077],
...,
[ 4.2985, -4.9558, -4.1033],
[-2.8893, 2.3538, -0.4661],
[-3.4045, -3.0413, -3.5534]], device='cuda:0', requires_grad=True), tensor([[0.2192, 0.2192, 0.2192],
[0.0771, 0.0771, 0.0771],
[0.1027, 0.1027, 0.1027],
...,
[0.1316, 0.1316, 0.1316],
[0.1561, 0.1561, 0.1561],
[0.1754, 0.1754, 0.1754]], device='cuda:0', grad_fn=<ExpBackward0>), 1, tensor([[-0.6499, -0.3086, -0.4516, -0.5277],
[ 0.2193, -0.6681, -0.6980, -0.1355],
[-0.2783, 0.4081, 0.3069, -0.8135],
...,
[ 0.0702, -0.9207, -0.0157, 0.3835],
[ 0.4044, 0.1410, 0.4986, -0.7536],
[-0.5802, -0.2840, 0.7633, -0.0014]], device='cuda:0',
grad_fn=<DivBackward0>), tensor([[ 8.3462e-01, 5.5083e-01, 4.2144e-18, -1.7042e-01],
[ 1.3996e-16, -2.0442e-16, -1.0000e+00, -7.0658e-09],
[-5.5083e-01, 8.3462e-01, -2.4771e-16, 4.0456e-01]], device='cuda:0'), tensor([[ 8.3462e-01, 5.5083e-01, 4.2144e-18, -1.7042e-01],
[ 1.3996e-16, -2.0442e-16, -1.0000e+00, -7.0658e-09],
[-5.5083e-01, 8.3462e-01, -2.4771e-16, 4.0356e-01],
[-5.5083e-01, 8.3462e-01, -2.4771e-16, 4.0456e-01]], device='cuda:0'), 320.0, 240.0, 319.5, 239.5, 480, 640, 16, 0.01
I also tried gsplat 0.1.4, but got the same error.
|
closed
|
2024-02-28T07:43:18Z
|
2024-02-28T08:14:08Z
|
https://github.com/nerfstudio-project/nerfstudio/issues/2967
|
[] |
jeonggwanlee
| 2
|
davidsandberg/facenet
|
computer-vision
| 731
|
Data augmentation for different JPEG compression ratios.
|
During testing, I noticed that different JPEG compression ratios can cause a big shift in the embeddings.
There is already some data augmentation (FLIP, ROTATION, RANDOM CROP); I don't know if it is necessary to also augment over JPEG compression ratios during training.
|
open
|
2018-04-30T09:32:50Z
|
2018-04-30T09:40:00Z
|
https://github.com/davidsandberg/facenet/issues/731
|
[] |
darwin-xu
| 0
|
Lightning-AI/pytorch-lightning
|
machine-learning
| 19,975
|
Hyperparameter logging with multiple loggers only works partially (TensorBoard and CSV)
|
### Bug description
Inside my `LightningModule`, I use `self.logger.log_hyperparams(...)` to log, e.g.:
```python
self.logger.log_hyperparams(dict(lr=lr, weight_decay=weight_decay))
```
In my trainer initialization, I define multiple loggers, e.g.:
```python
trainer = Trainer(
logger=[
TensorBoardLogger(dir, name="tb_logs"),
CSVLogger(dir, name="csv_logs"),
],
...
)
```
When fitting my model with this setup, the hyperparameters are only logged in the folder `tb_logs` which belongs to the TensorBoard logger (see code above). In the folder of the CSV logger, there is an empty hyperparameter file. I would have expected to have the same hyperparameter file as in the TensorBoard logger.
This occurs even after model fitting is complete, i.e., I can rule out that this occurs because the log is not yet flushed to disk.
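As a workaround until this is fixed, hyperparameters can be fanned out to every logger manually. The sketch below is plain Python with a stand-in logger class (`RecordingLogger` is hypothetical, not a Lightning class); inside a real `LightningModule` the equivalent would be iterating `self.loggers` instead of calling `self.logger.log_hyperparams(...)` once:

```python
# Sketch of fanning hyperparameters out to every configured logger.
# RecordingLogger is a hypothetical stand-in for illustration only;
# in Lightning you would iterate self.loggers inside the LightningModule.

class RecordingLogger:
    """Minimal stand-in that records whatever hyperparameters it is given."""
    def __init__(self):
        self.hparams = {}

    def log_hyperparams(self, params):
        self.hparams.update(params)


def log_hparams_to_all(loggers, params):
    # Call log_hyperparams on each logger instead of relying on
    # self.logger, which only targets the first one.
    for logger in loggers:
        logger.log_hyperparams(params)


tb_logger, csv_logger = RecordingLogger(), RecordingLogger()
log_hparams_to_all([tb_logger, csv_logger], {"lr": 1e-3, "weight_decay": 1e-2})
print(tb_logger.hparams == csv_logger.hparams)  # True: both loggers got the same dict
```

This sidesteps the bug because every logger receives the call explicitly rather than only the first one in the list.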
### What version are you seeing the problem on?
v2.2
### How to reproduce the bug
_No response_
### Error messages and logs
```
# Error messages and logs here please
```
### Environment
<details>
<summary>Current environment</summary>
```
#- Lightning Component (e.g. Trainer, LightningModule, LightningApp, LightningWork, LightningFlow):
#- PyTorch Lightning Version (e.g., 1.5.0):
#- Lightning App Version (e.g., 0.5.2):
#- PyTorch Version (e.g., 2.0):
#- Python version (e.g., 3.9):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source):
#- Running environment of LightningApp (e.g. local, cloud):
```
</details>
### More info
_No response_
|
open
|
2024-06-14T07:34:16Z
|
2024-06-14T07:34:16Z
|
https://github.com/Lightning-AI/pytorch-lightning/issues/19975
|
[
"bug",
"needs triage"
] |
simon-forb
| 0
|
ultrafunkamsterdam/undetected-chromedriver
|
automation
| 1,951
|
Discord detects selenium (undetected-chromedriver)
|
Discord detects selenium (undetected-chromedriver)
|
open
|
2024-07-17T10:44:19Z
|
2024-07-17T10:44:19Z
|
https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1951
|
[] |
dexforint
| 0
|
statsmodels/statsmodels
|
data-science
| 9,371
|
Docstring fixes: mis-display of the formula
|
<img width="653" alt="Screenshot 2024-09-27 093838" src="https://github.com/user-attachments/assets/a0341984-3c26-4872-be69-773cf5a7cbf4">
I guess it should be $B\varepsilon_t$.
https://www.statsmodels.org/stable/generated/statsmodels.tsa.vector_ar.svar_model.SVAR.html
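For context, the AB-model SVAR that the docstring is presumably meant to render (hedging on the exact lag notation statsmodels uses) is:

```latex
A y_t = A_1 y_{t-1} + \dots + A_p y_{t-p} + B \varepsilon_t
```

so the trailing term should indeed display as $B\varepsilon_t$.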
|
closed
|
2024-09-27T00:47:20Z
|
2024-10-02T09:37:18Z
|
https://github.com/statsmodels/statsmodels/issues/9371
|
[] |
qiaoxi-li
| 0
|
piccolo-orm/piccolo
|
fastapi
| 955
|
Add a method to the `Array` column for getting the number of dimensions of the array
|
In Piccolo API, and Piccolo Admin we need an easy way to get the number of dimensions of an array column, because we have to treat arrays with 2 or more dimensions differently.
For example:
```python
>>> Array(Varchar())._get_dimensions()
1
>>> Array(Array(Varchar()))._get_dimensions()
2
```
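A minimal sketch of how such a method could count dimensions, using stand-in classes rather than Piccolo's actual `Array`/`Varchar` columns (the real implementation would live on Piccolo's `Array`):

```python
# Stand-in column classes for illustration only; not Piccolo's real API.
class Varchar:
    pass


class Array:
    def __init__(self, base_column):
        self.base_column = base_column

    def _get_dimensions(self) -> int:
        # An Array contributes one dimension, plus however many
        # dimensions its base column contributes (recursively).
        if isinstance(self.base_column, Array):
            return 1 + self.base_column._get_dimensions()
        return 1


print(Array(Varchar())._get_dimensions())         # 1
print(Array(Array(Varchar()))._get_dimensions())  # 2
```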
|
closed
|
2024-03-15T22:24:20Z
|
2024-03-19T23:39:48Z
|
https://github.com/piccolo-orm/piccolo/issues/955
|
[
"enhancement"
] |
dantownsend
| 0
|
yunjey/pytorch-tutorial
|
pytorch
| 57
|
found bug in variational auto encoder
|
Traceback (most recent call last):
File "main.py", line 69, in <module>
torchvision.utils.save_image(fixed_x.data.cpu(), './data/real_images.png')
AttributeError: 'torch.FloatTensor' object has no attribute 'data'
The fix is to move `to_var` above the `save_image` call; here is what it should be:
```
--- a/tutorials/03-advanced/variational_auto_encoder/main.py
+++ b/tutorials/03-advanced/variational_auto_encoder/main.py
@@ -65,8 +65,8 @@ data_iter = iter(data_loader)
# fixed inputs for debugging
fixed_z = to_var(torch.randn(100, 20))
fixed_x, _ = next(data_iter)
-torchvision.utils.save_image(fixed_x.data.cpu(), './data/real_images.png')
fixed_x = to_var(fixed_x.view(fixed_x.size(0), -1))
+torchvision.utils.save_image(fixed_x.data.cpu(), './data/real_images.png')
```
|
closed
|
2017-08-05T16:17:02Z
|
2017-08-10T15:58:52Z
|
https://github.com/yunjey/pytorch-tutorial/issues/57
|
[] |
jtoy
| 1
|
dynaconf/dynaconf
|
flask
| 1,112
|
[bug] Index merging does not process merging markers
|
**Describe the bug**
When using the new tunder (triple underscore) syntax `___` to merge by index, in some cases the merging markers and other lazy markers are not being resolved.
**To Reproduce**
`app.py`
```python
from dynaconf import Dynaconf
settings = Dynaconf(
DATA=[
{"name": "Bruno"},
{"name": "John"},
]
)
print(settings.DATA)
```
## on console
```console
# SUCCESS
❯ DYNACONF_DATA___0=1 python app.py
[1, {'name': 'John'}] # OK
❯ DYNACONF_DATA___0__foo=bar python app.py
[{'foo': 'bar', 'name': 'Bruno'}, {'name': 'John'}] # OK
❯ DYNACONF_DATA___0="{foo='Bar', dynaconf_merge=true}" python app.py
[{'foo': 'Bar', 'name': 'Bruno'}, {'name': 'John'}] # OK
# BUGS
❯ DYNACONF_DATA___0="{foo='Bar'}" python app.py
[{'foo': 'Bar', 'name': 'Bruno'}, {'name': 'John'}] # BUG
# BUG ^ whole element 0 should have been replaced by {foo:bar}
❯ DYNACONF_DATA___0="@merge foo=bar" python app.py
[Merge({'foo': 'bar'}) on 135394164649824, {'name': 'John'}] # BUG
# BUG ^ Merge(...) object was not evaluated and merge not performed
❯ DYNACONF_DATA___0="@reset {new=dict}" python app.py
[Reset({new=dict}) on 125453429809824, {'name': 'John'}] # BUG
# BUG ^ Reset(...) object was not evaluated
```
## Comparing with Dict behavior
```python
from dynaconf import Dynaconf
settings = Dynaconf(DATA={"name": "Bruno"})
print(settings.DATA)
```
```console
❯ DYNACONF_DATA="{foo='Bar'}" python app.py
{'foo': 'Bar'}
❯ DYNACONF_DATA__foo=Bar python app.py
{'foo': 'Bar', 'name': 'Bruno'}
❯ DYNACONF_DATA="@merge foo=bar" python app.py
{'foo': 'bar', 'name': 'Bruno'}
❯ DYNACONF_DATA="{foo='Bar', dynaconf_merge=true}" python app.py
{'foo': 'Bar', 'name': 'Bruno'}
```
|
closed
|
2024-07-01T14:55:50Z
|
2024-07-08T18:16:48Z
|
https://github.com/dynaconf/dynaconf/issues/1112
|
[
"bug"
] |
rochacbruno
| 1
|
deepinsight/insightface
|
pytorch
| 2,141
|
Web Demo distance metrics
|
Hi, what distance metric is used in the web demo? Did you use normalized vectors, or did you reduce the dimensionality of the embeddings while verifying?
|
closed
|
2022-10-14T15:30:19Z
|
2022-11-29T09:36:24Z
|
https://github.com/deepinsight/insightface/issues/2141
|
[] |
shekarneo
| 2
|
TencentARC/GFPGAN
|
deep-learning
| 175
|
Is the training output only the model weights? It looks like a compressed format. If I want to merge them into one complete model for further processing, how should I do that? Are there any relevant tutorials?
|
open
|
2022-03-13T07:49:29Z
|
2022-03-13T07:49:29Z
|
https://github.com/TencentARC/GFPGAN/issues/175
|
[] |
Asuka001100
| 0
|
apify/crawlee-python
|
web-scraping
| 300
|
Add URL validation
|
Currently, for end-user "requests" params we define:
```python
requests: Sequence[str | BaseRequestData | Request]
```
which, unfortunately, matches a single-string request (e.g. `requests='https://crawlee.dev'`) as well.
We could make it more strict and update it to just:
```python
requests: list[str | BaseRequestData | Request]
```
so static type checkers can let us know in case someone is trying to pass a single string request there.
Or maybe use some custom type for either list, tuple, or set.
Or, add some URL pattern-matching validation, see Pydantic's [Network Types](https://docs.pydantic.dev/latest/api/networks/).
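A hedged sketch of what pattern-based validation could look like using only the standard library (the real implementation would more likely lean on Pydantic's network types, e.g. `AnyHttpUrl`; `looks_like_url` is a hypothetical helper name):

```python
from urllib.parse import urlparse


def looks_like_url(value: str) -> bool:
    # Accept only strings that parse with an http(s) scheme and a host,
    # so a bare string like "crawlee.dev" or free text is rejected early.
    parsed = urlparse(value)
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)


print(looks_like_url("https://crawlee.dev"))  # True
print(looks_like_url("not a url"))            # False
```

Running such a check over each element of `requests` would also catch the single-string case at runtime, complementing the static typing fix.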
|
closed
|
2024-07-15T13:12:35Z
|
2024-07-23T13:51:30Z
|
https://github.com/apify/crawlee-python/issues/300
|
[
"enhancement",
"t-tooling"
] |
vdusek
| 0
|
mouredev/Hello-Python
|
fastapi
| 71
|
Online casino blacklisted me, "registration abnormal", platform won't pay out — what can I do?
|
Blacklist-recovery consultation — WeChat: lyh20085150, QQ: 1217771269
After gambling online you get blacklisted: customer service says your account is "abnormal", the withdrawal review cites insufficient turnover for risk control, the payout channel is "under maintenance" and so on, and they refuse to pay you. The account still logs in normally; it just cannot withdraw. They do this so you lose all your money back and keep generating profit for them. When you run into this, never argue with customer service: if you abuse them, your account gets frozen and your principal wiped, and then there is no hope of recovery.
Blacklist-recovery consultation — WeChat: lyh20085150, QQ: 1217771269
Remember: 1. If they stall with "maintenance" and refuse withdrawals while your winnings are large, endless pestering only gets you shadow-banned, and support will invent every excuse to delay and deflect. 2. Once a platform answers withdrawal requests with "system maintenance" or "under review", you have hit a scam platform. 3. Scam platforms' usual excuses are "upgrade" or "maintenance" — the games keep running normally and only withdrawal is blocked, because the platform hopes you lose your whole balance back during the "maintenance". 4. As long as the account still logs in normally and credit can be transferred freely, the money can be extracted by technical means. 5. Repeatedly resubmitting withdrawal requests after being blacklisted, once the back-office admin has refused review, can get the account frozen.
The worse the situation, the calmer you must be: face the problem and solve it, and keep going. That does not mean putting in more money; it means finding the right way to handle the matter. If you are currently in the situation of winning at online gambling and being blacklisted with withdrawals refused, contact us promptly and we will handle it for you.

|
closed
|
2024-08-28T04:21:00Z
|
2024-10-16T05:27:42Z
|
https://github.com/mouredev/Hello-Python/issues/71
|
[] |
lyh2008
| 0
|
qubvel-org/segmentation_models.pytorch
|
computer-vision
| 597
|
cv2.error: (-215:Assertion failed) !_src.empty() in function 'cvtColor
|
While running the "# train model for 40 epochs" block in `cars segmentation (camvid).ipynb` I get the following error:
```
cv2.error: OpenCV(4.5.5) /io/opencv/modules/imgproc/src/color.cpp:182: error: (-215:Assertion failed) !_src.empty() in function 'cvtColor
```
Is this a compatibility error with the version of openCV I have (4.5.5)? Is there a list of dependencies and their compatible versions listed somewhere?
|
closed
|
2022-05-02T21:48:11Z
|
2022-05-02T21:57:20Z
|
https://github.com/qubvel-org/segmentation_models.pytorch/issues/597
|
[] |
waspinator
| 1
|
geex-arts/django-jet
|
django
| 354
|
No module named 'colorfieldjet' error, when db migrate
|
Hi,
I am getting the error below when I run the jet migrations:
ModuleNotFoundError: No module named 'colorfieldjet'
I am trying to install django-jet for my project. Help appreciated.
|
open
|
2018-09-08T10:28:24Z
|
2018-09-08T10:28:24Z
|
https://github.com/geex-arts/django-jet/issues/354
|
[] |
vinayaksanga
| 0
|
uxlfoundation/scikit-learn-intelex
|
scikit-learn
| 2,312
|
Add polynomial_kernel Function
|
The polynomial kernel maps data into a new space via a polynomial. This is easy in difficulty, but it requires significant benchmarking to find when the scikit-learn-intelex implementation provides better performance. This project will focus on the public API and on including the benchmarking results for a seamless, high-performance user experience. Combined with the other kernel projects, it amounts to a medium time commitment.
Scikit-learn definition can be found at:
https://scikit-learn.org/stable/modules/generated/sklearn.metrics.pairwise.polynomial_kernel.html
The onedal interface can be found at:
https://github.com/uxlfoundation/scikit-learn-intelex/blob/main/onedal/primitives/kernel_functions.py#L98
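For reference, scikit-learn defines the pairwise polynomial kernel as `K(x, y) = (gamma * <x, y> + coef0) ** degree`, with `gamma` defaulting to `1 / n_features`. A toy plain-Python version of that definition for a single pair of vectors (the actual project work is the oneDAL-backed, benchmarked implementation, not this sketch):

```python
def polynomial_kernel(x, y, degree=3, gamma=None, coef0=1.0):
    # Mirrors sklearn's pairwise definition for one pair of vectors:
    #   K(x, y) = (gamma * <x, y> + coef0) ** degree
    # with gamma defaulting to 1 / n_features.
    if gamma is None:
        gamma = 1.0 / len(x)
    dot = sum(a * b for a, b in zip(x, y))
    return (gamma * dot + coef0) ** degree


# <x, y> = 11, so K = (0.5 * 11 + 1) ** 2 = 42.25
print(polynomial_kernel([1.0, 2.0], [3.0, 4.0], degree=2, gamma=0.5))
```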
|
open
|
2025-02-10T07:41:20Z
|
2025-02-10T07:55:23Z
|
https://github.com/uxlfoundation/scikit-learn-intelex/issues/2312
|
[
"help wanted",
"good first issue",
"hacktoberfest"
] |
icfaust
| 0
|
computationalmodelling/nbval
|
pytest
| 41
|
Kernel should be interrupted on cell timeout
|
Currently, if a cell reaches timeout, it seems that it simply continues to the next cell, without interrupting the kernel. So e.g. if a cell enters an infinite loop, all following cells will try to execute, but the kernel will still be running the previous cell.
|
closed
|
2017-02-02T15:43:00Z
|
2017-02-08T16:13:49Z
|
https://github.com/computationalmodelling/nbval/issues/41
|
[] |
vidartf
| 0
|
mitmproxy/pdoc
|
api
| 67
|
__pdoc_file_module__ is shown in documentation
|

I'm running `pdoc --html --html-dir /tmp/aur aur.py --html-no-source --overwrite` on this commit: https://github.com/cdown/aur/tree/cf715ac51c9542d5717da6b6570cbda76099e9b2
```
% pdoc --version
0.3.1
```
|
closed
|
2015-09-19T22:21:13Z
|
2021-01-19T15:22:42Z
|
https://github.com/mitmproxy/pdoc/issues/67
|
[] |
cdown
| 2
|
fastapi/fastapi
|
python
| 13,175
|
Duplicated OperationID when adding route with multiple methods
|
### Discussed in https://github.com/fastapi/fastapi/discussions/8449
<div type='discussions-op-text'>
<sup>Originally posted by **bruchar1** March 30, 2022</sup>
### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the FastAPI documentation, with the integrated search.
- [X] I already searched in Google "How to X in FastAPI" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to FastAPI but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to FastAPI but to [Swagger UI](https://github.com/swagger-api/swagger-ui).
- [X] I already checked if it is not related to FastAPI but to [ReDoc](https://github.com/Redocly/redoc).
### Commit to Help
- [X] I commit to help with one of those options ๐
### Example Code
```python
router.add_api_route(
"/clear",
clear,
methods=["POST", "DELETE"]
)
```
### Description
Seems to be caused by #4650.
The new `generate_unique_id()` function uses `list(route.methods)[0].lower()` as suffix for the `operation_id`. Therefore, in my example, both post and delete endpoints get `_post` suffix for operation_id, causing it to no longer be unique.
It then issues a "UserWarning: Duplicate Operation ID"
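Until the default changes, one way out is a custom `generate_unique_id_function` that folds every method into the id instead of only the first. A rough sketch of such an id scheme — `FakeRoute` is a hypothetical stand-in for `fastapi.routing.APIRoute`, mirroring only the attributes used here:

```python
def unique_id_for(route) -> str:
    # Include every HTTP method (sorted for determinism) so a route
    # registered with methods=["POST", "DELETE"] yields a distinct id
    # rather than always taking list(route.methods)[0].
    methods = "_".join(sorted(m.lower() for m in route.methods))
    path = route.path.strip("/").replace("/", "_")
    return f"{route.name}_{path}_{methods}"


class FakeRoute:
    """Hypothetical stand-in for fastapi.routing.APIRoute."""
    def __init__(self, name, path, methods):
        self.name, self.path, self.methods = name, path, methods


print(unique_id_for(FakeRoute("clear", "/clear", {"POST", "DELETE"})))
# clear_clear_delete_post
```

In a real app this function would be passed as `generate_unique_id_function=` on the `APIRouter` or `FastAPI` instance.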
### Operating System
Windows
### Operating System Details
_No response_
### FastAPI Version
0.75.0
### Python Version
3.10.2
### Additional Context
_No response_</div>
|
open
|
2025-01-08T08:52:57Z
|
2025-01-08T08:55:08Z
|
https://github.com/fastapi/fastapi/issues/13175
|
[
"question",
"question-migrate"
] |
Kludex
| 1
|
ydataai/ydata-profiling
|
data-science
| 995
|
cannot import name 'soft_unicode' from 'markupsafe'
|
### Current Behaviour
Used colab with 3.2.0
```
!pip install pandas-profiling==3.2.0
import numpy as np
import pandas as pd
from pandas_profiling import ProfileReport
df = pd.DataFrame(np.random.rand(100, 5), columns=["a", "b", "c", "d", "e"])
```
it shows
ImportError: cannot import name 'soft_unicode' from 'markupsafe' (/usr/local/lib/python3.7/dist-packages/markupsafe/__init__.py)
### Expected Behaviour
no error
### Data Description
None
### Code that reproduces the bug
```Python
!pip install pandas-profiling==3.2.0
import numpy as np
import pandas as pd
from pandas_profiling import ProfileReport
df = pd.DataFrame(np.random.rand(100, 5), columns=["a", "b", "c", "d", "e"])
```
### pandas-profiling version
2.3.0
### Dependencies
```Text
markupsafe==2.0.1
```
### OS
Mac
### Checklist
- [X] There is not yet another bug report for this issue in the [issue tracker](https://github.com/ydataai/pandas-profiling/issues)
- [X] The problem is reproducible from this bug report. [This guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) can help to craft a minimal bug report.
- [X] The issue has not been resolved by the entries listed under [Common Issues](https://pandas-profiling.ydata.ai/docs/master/pages/support_contrib/common_issues.html).
|
closed
|
2022-06-03T16:18:44Z
|
2022-09-30T18:39:05Z
|
https://github.com/ydataai/ydata-profiling/issues/995
|
[
"bug ๐",
"code quality ๐"
] |
DaiZack
| 4
|
globaleaks/globaleaks-whistleblowing-software
|
sqlalchemy
| 3,956
|
Status Change Request Shows Pending - Possible Frontend Error
|
### What version of GlobaLeaks are you using?
version 4.14.4
### What browser(s) are you seeing the problem on?
Chrome
### What operating system(s) are you seeing the problem on?
Linux
### Describe the issue
When attempting to change the status, the request processing time is resulting in a "pending" status. I suspect there might be an error on the frontend side. It seems that the parameters in the payload for status changes (specifically for "open" and "closed" statuses) might not be handled correctly, leading to unexpected behavior.
**Steps to Reproduce:**
1. Navigate to the page where status changes are initiated.
2. Attempt to change the status to "open" or "closed."
3. Observe that the request processing time is displaying a "pending" status.
**Expected Behavior:**
The status change request should be processed promptly, and the UI should reflect the updated status without any delay.


### Proposed solution
_No response_
|
closed
|
2024-01-16T06:49:21Z
|
2024-01-19T20:35:20Z
|
https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3956
|
[
"T: Bug",
"C: Client"
] |
faizanjaved38
| 1
|
jschneier/django-storages
|
django
| 729
|
Get screenshot of mp4
|
When using `django-storages` on a production server there is no local file, so I can't get the `video_length` or a screenshot, because those require access to a `path` on disk.
```bash
ipdb> instance.video
<FieldFile: videos/something.flv_360p_j5GJ19m.mp4>
ipdb> instance.video.path
*** NotImplementedError: This backend doesn't support absolute paths.
ipdb> dir(instance.video)
['DEFAULT_CHUNK_SIZE', '__bool__', '__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__enter__', '__eq__', '__exit__', '__format__', '__ge__', '__getattribute__', '__getstate__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__iter__', '__le__', '__len__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_committed', '_del_file', '_file', '_get_file', '_require_file', '_set_file', 'chunks', 'close', 'closed', 'delete', 'encoding', 'field', 'file', 'fileno', 'flush', 'instance', 'isatty', 'multiple_chunks', 'name', 'newlines', 'open', 'path', 'read', 'readable', 'readinto', 'readline', 'readlines', 'save', 'seek', 'seekable', 'size', 'storage', 'tell', 'truncate', 'url', 'writable', 'write', 'writelines']
ipdb> instance.video.file
<S3Boto3StorageFile: videos/something.flv_360p_j5GJ19m.mp4>
ipdb> video_length = clean_duration(get_length(instance.video.file))
*** TypeError: expected str, bytes or os.PathLike object, not S3Boto3StorageFile
```
**Question**
How can I access a file and pass it as an argument when I use `django-storages`?
**Original question:**
https://stackoverflow.com/questions/57230814/how-to-get-thumbnail-of-mp4-when-upload-it-with-django-storages
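A common workaround is to stream the storage-backed file's bytes into a temporary local file and hand that path to the tool that insists on one (ffmpeg, a thumbnailer, etc.). The sketch below uses only the standard library, with a `BytesIO` standing in for the `S3Boto3StorageFile`; `materialize_to_tmp` is a hypothetical helper name:

```python
import tempfile
from io import BytesIO
from pathlib import Path


def materialize_to_tmp(remote_file, suffix=".mp4") -> str:
    # Copy the storage-backed file into a local temp file in chunks,
    # so tools that require a real filesystem path can consume it.
    with tempfile.NamedTemporaryFile(suffix=suffix, delete=False) as tmp:
        for chunk in iter(lambda: remote_file.read(64 * 1024), b""):
            tmp.write(chunk)
        return tmp.name


fake_remote = BytesIO(b"fake mp4 bytes")  # stand-in for instance.video.file
local_path = materialize_to_tmp(fake_remote)
print(Path(local_path).read_bytes() == b"fake mp4 bytes")  # True
```

With the real field you would pass `instance.video.file` (opened via the storage backend) instead of the `BytesIO`, and delete the temp file after extracting the thumbnail.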
|
closed
|
2019-07-27T09:33:47Z
|
2019-09-01T21:16:55Z
|
https://github.com/jschneier/django-storages/issues/729
|
[] |
elcolie
| 1
|
graphistry/pygraphistry
|
pandas
| 373
|
[BUG] g.register(org_name) not functional in some cases
|
**Describe the bug**
g.register() for various configurations are not working as expected.
**To Reproduce**
Code, including data, that can be run without editing:
```
pip install git+https://github.com/graphistry/pygraphistry@org_sso_login --user
graphistry.__version__  # check to make sure it's the branch version
graphistry.register(api=3, protocol="https", server="", username="", password="")  # works
graphistry.register(api=3, protocol="https", server="", username="", password="", token="")  # works
graphistry.register(api=3, protocol="https", server="", username="", password="", idp_name="", token="")  # works
graphistry.register(api=3, protocol="https", server="", username="", password="", idp_name="")  # works
# the next g.register() calls do not work
graphistry.register(api=3, protocol="https", server="", token="")  # doesn't work
graphistry.register(api=3, protocol="https", server="", username="", password="", org_name="", token="")  # doesn't work
graphistry.register(api=3, protocol="https", server="", username="", password="", org_name="")  # doesn't work
```
**Expected behavior**
What should have happened
`g.register()` should have registered successfully in all cases.
**Actual behavior**
What did happen
It didn't work in some of the cases listed above.
**Screenshots**
If applicable, any screenshots to help explain the issue





**Browser environment (please complete the following information):**
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari] chrome latest
- Version [e.g. 22]
**Graphistry GPU server environment**
- Where run [e.g., Hub, AWS, on-prem] aws eks [demo notebook](https://eks-dev2.grph.xyz/notebook/lab/tree/demos/for_analysis.ipynb)
- If self-hosting, Graphistry Version [e.g. 0.14.0, see bottom of a viz or login dashboard]
- If self-hosting, any OS/GPU/driver versions
**PyGraphistry API client environment**
- Where run [e.g., Graphistry 2.35.9 Jupyter] graphistry juypter
- Version [e.g. 0.14.0, print via `graphistry.__version__`]
'0.26.1+39.gec009c7'
- Python Version [e.g. Python 3.7.7]
- python3.8
**Additional context**
Add any other context about the problem here.
|
closed
|
2022-07-15T19:38:29Z
|
2022-10-20T00:27:22Z
|
https://github.com/graphistry/pygraphistry/issues/373
|
[
"bug",
"p1"
] |
webcoderz
| 3
|
explosion/spaCy
|
nlp
| 13,591
|
[BUG] -- Arguments `enable` and `disable` not working as expected in `spacy.load`
|
<!-- NOTE: For questions or install related issues, please open a Discussion instead. -->
## How to reproduce the behaviour
<!-- Include a code example or the steps that led to the problem. Please try to be as specific as possible. -->
### The problem
This raises error `E1042`.
```python
import spacy
spacy.load("en_core_web_sm", enable=["senter"])
```
```python-traceback
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[4], line 2
1 # Error E1042.
----> 2 nlp = spacy.load("en_core_web_sm", enable=["senter"])
File ~\ADO\ml_kg\env\Lib\site-packages\spacy\__init__.py:51, in load(name, vocab, disable, enable, exclude, config)
27 def load(
28 name: Union[str, Path],
29 *,
(...)
34 config: Union[Dict[str, Any], Config] = util.SimpleFrozenDict(),
35 ) -> Language:
36 """Load a spaCy model from an installed package or a local path.
37
38 name (str): Package name or model path.
(...)
49 RETURNS (Language): The loaded nlp object.
50 """
---> 51 return util.load_model(
52 name,
53 vocab=vocab,
54 disable=disable,
55 enable=enable,
56 exclude=exclude,
57 config=config,
58 )
File ~\ADO\ml_kg\env\Lib\site-packages\spacy\util.py:465, in load_model(name, vocab, disable, enable, exclude, config)
463 return get_lang_class(name.replace("blank:", ""))()
464 if is_package(name): # installed as package
--> 465 return load_model_from_package(name, **kwargs) # type: ignore[arg-type]
466 if Path(name).exists(): # path to model data directory
467 return load_model_from_path(Path(name), **kwargs) # type: ignore[arg-type]
File ~\ADO\ml_kg\env\Lib\site-packages\spacy\util.py:501, in load_model_from_package(name, vocab, disable, enable, exclude, config)
484 """Load a model from an installed package.
485
486 name (str): The package name.
(...)
498 RETURNS (Language): The loaded nlp object.
499 """
500 cls = importlib.import_module(name)
--> 501 return cls.load(vocab=vocab, disable=disable, enable=enable, exclude=exclude, config=config)
File ~\ADO\ml_kg\env\Lib\site-packages\en_core_web_sm\__init__.py:10, in load(**overrides)
9 def load(**overrides):
---> 10 return load_model_from_init_py(__file__, **overrides)
File ~\ADO\ml_kg\env\Lib\site-packages\spacy\util.py:682, in load_model_from_init_py(init_file, vocab, disable, enable, exclude, config)
680 if not model_path.exists():
681 raise IOError(Errors.E052.format(path=data_path))
--> 682 return load_model_from_path(
683 data_path,
684 vocab=vocab,
685 meta=meta,
686 disable=disable,
687 enable=enable,
688 exclude=exclude,
689 config=config,
690 )
File ~\ADO\ml_kg\env\Lib\site-packages\spacy\util.py:539, in load_model_from_path(model_path, meta, vocab, disable, enable, exclude, config)
537 overrides = dict_to_dot(config, for_overrides=True)
538 config = load_config(config_path, overrides=overrides)
--> 539 nlp = load_model_from_config(
540 config,
541 vocab=vocab,
542 disable=disable,
543 enable=enable,
544 exclude=exclude,
545 meta=meta,
546 )
547 return nlp.from_disk(model_path, exclude=exclude, overrides=overrides)
File ~\ADO\ml_kg\env\Lib\site-packages\spacy\util.py:587, in load_model_from_config(config, meta, vocab, disable, enable, exclude, auto_fill, validate)
584 # This will automatically handle all codes registered via the languages
585 # registry, including custom subclasses provided via entry points
586 lang_cls = get_lang_class(nlp_config["lang"])
--> 587 nlp = lang_cls.from_config(
588 config,
589 vocab=vocab,
590 disable=disable,
591 enable=enable,
592 exclude=exclude,
593 auto_fill=auto_fill,
594 validate=validate,
595 meta=meta,
596 )
597 return nlp
File ~\ADO\ml_kg\env\Lib\site-packages\spacy\language.py:1973, in Language.from_config(cls, config, vocab, disable, enable, exclude, meta, auto_fill, validate)
1965 warnings.warn(
1966 Warnings.W123.format(
1967 enable=enable,
1968 enabled=enabled,
1969 )
1970 )
1972 # Ensure sets of disabled/enabled pipe names are not contradictory.
-> 1973 disabled_pipes = cls._resolve_component_status(
1974 list({*disable, *config["nlp"].get("disabled", [])}),
1975 enable,
1976 config["nlp"]["pipeline"],
1977 )
1978 nlp._disabled = set(p for p in disabled_pipes if p not in exclude)
1980 nlp.batch_size = config["nlp"]["batch_size"]
File ~\ADO\ml_kg\env\Lib\site-packages\spacy\language.py:2153, in Language._resolve_component_status(disable, enable, pipe_names)
2151 # If any pipe to be enabled is in to_disable, the specification is inconsistent.
2152 if len(set(enable) & to_disable):
-> 2153 raise ValueError(Errors.E1042.format(enable=enable, disable=disable))
2155 return tuple(to_disable)
ValueError: [E1042] `enable=['senter']` and `disable=['senter']` are inconsistent with each other.
If you only passed one of `enable` or `disable`, the other argument is specified in your pipeline's configuration.
In that case pass an empty list for the previously not specified argument to avoid this error.
```
Based on the error message I tried setting the `disable` argument to an empty list, but this raises the same error:
```python
# Another error E1042.
nlp = spacy.load("en_core_web_sm", enable=["senter"], disable=[])
```
```python-traceback
ValueError: [E1042] `enable=['senter']` and `disable=['senter']` are inconsistent with each other.
If you only passed one of `enable` or `disable`, the other argument is specified in your pipeline's configuration.
In that case pass an empty list for the previously not specified argument to avoid this error.
```
This led me to believe that something is wrong with the `disable` argument.
```python
# Runs, but does not return as expected.
nlp = spacy.load("en_core_web_sm", disable=[])
nlp.disabled # ['senter']
```
### Hacky solutions
I've found at least two ways to get around it.
1. Bypass the `enable`/`disable` arguments and supply a dictionary to the `config` argument:
```python
# Runs and returns as expected.
nlp = spacy.load("en_core_web_sm", config={"nlp": {"disabled": []}})
nlp.disabled # []
"senter" in nlp.pipe_names # True
```
2. Or load the model as-is and enable the component after-the-fact:
```python
# Runs and returns as expected.
nlp = spacy.load("en_core_web_sm")
nlp.enable_pipe("senter")
nlp.disabled # []
"senter" in nlp.pipe_names # True
```
### Thoughts?
I don't really like either of the hacky solutions as I'd expect the `enable`/`disable` arguments to handle this. I'd be willing to submit a PR if this is indeed a bug.
## Your Environment
<!-- Include details of your environment. You can also type `python -m spacy info --markdown` and copy-paste the result here.-->
* Operating System: Windows 10
* Python Version Used: 3.12.4
* spaCy Version Used: 3.7.5
* Environment Information: Results of `pip list`
```text
Package Version Editable project location
-------------------------- -------------- ---------------------------
aiofiles 24.1.0
annotated-types 0.7.0
anyio 4.4.0
argon2-cffi 23.1.0
argon2-cffi-bindings 21.2.0
arrow 1.3.0
asttokens 2.4.1
async-lru 2.0.4
attrs 23.2.0
Babel 2.15.0
beautifulsoup4 4.12.3
black 24.4.2
bleach 6.1.0
blis 0.7.11
cachetools 5.4.0
catalogue 2.0.10
certifi 2024.7.4
cffi 1.16.0
cfgv 3.4.0
charset-normalizer 3.3.2
click 8.1.7
cloudpathlib 0.18.1
colorama 0.4.6
comm 0.2.2
confection 0.1.5
contourpy 1.2.1
curated-tokenizers 0.0.9
curated-transformers 0.1.1
cycler 0.12.1
cymem 2.0.8
debugpy 1.8.2
decorator 5.1.1
defusedxml 0.7.1
distlib 0.3.8
en-core-web-lg 3.7.1
en-core-web-md 3.7.1
en-core-web-sm 3.7.1
en-core-web-trf 3.7.3
executing 2.0.1
fastapi 0.110.3
fastjsonschema 2.20.0
filelock 3.15.4
fonttools 4.53.1
fqdn 1.5.1
fsspec 2024.6.1
h11 0.14.0
httpcore 1.0.5
httpx 0.27.0
huggingface-hub 0.24.5
identify 2.6.0
idna 3.7
ipykernel 6.29.5
ipython 8.26.0
ipython-genutils 0.2.0
ipywidgets 8.1.3
isoduration 20.11.0
isort 5.13.2
jedi 0.19.1
Jinja2 3.1.4
json5 0.9.25
jsonpointer 3.0.0
jsonschema 4.23.0
jsonschema-specifications 2023.12.1
jupyter 1.0.0
jupyter_client 8.6.2
jupyter-console 6.6.3
jupyter_core 5.7.2
jupyter-events 0.10.0
jupyter-lsp 2.2.5
jupyter_server 2.14.2
jupyter_server_terminals 0.5.3
jupyterlab 4.2.4
jupyterlab_pygments 0.3.0
jupyterlab_server 2.27.3
jupyterlab_widgets 3.0.11
kiwisolver 1.4.5
langcodes 3.4.0
language_data 1.2.0
marisa-trie 1.2.0
markdown-it-py 3.0.0
MarkupSafe 2.1.5
matplotlib 3.9.1
matplotlib-inline 0.1.7
mdurl 0.1.2
mistune 3.0.2
mpmath 1.3.0
murmurhash 1.0.10
mypy 1.10.1
mypy-extensions 1.0.0
nbclassic 1.1.0
nbclient 0.10.0
nbconvert 7.16.4
nbformat 5.10.4
neo4j 5.22.0
nest-asyncio 1.6.0
networkx 3.3
nodeenv 1.9.1
notebook 7.2.1
notebook_shim 0.2.4
numpy 1.26.4
overrides 7.7.0
packaging 24.1
pandocfilters 1.5.1
parso 0.8.4
pathspec 0.12.1
peewee 3.16.3
pillow 10.4.0
pip 24.2
pip-system-certs 4.0
platformdirs 4.2.2
pre-commit 3.7.1
preshed 3.0.9
prodigy 1.15.6
prodigy_pdf 0.2.2
prometheus_client 0.20.0
prompt_toolkit 3.0.47
psutil 6.0.0
pure_eval 0.2.3
pycparser 2.22
pydantic 2.8.2
pydantic_core 2.20.1
Pygments 2.18.0
PyJWT 2.8.0
pyparsing 3.1.2
pypdfium2 4.20.0
pytesseract 0.3.10
python-dateutil 2.9.0.post0
python-dotenv 1.0.1
python-json-logger 2.0.7
pytz 2024.1
pywin32 306
pywinpty 2.0.13
PyYAML 6.0.1
pyzmq 26.0.3
qtconsole 5.5.2
QtPy 2.4.1
radicli 0.0.25
referencing 0.35.1
regex 2024.7.24
requests 2.32.3
rfc3339-validator 0.1.4
rfc3986-validator 0.1.1
rich 13.7.1
rpds-py 0.19.0
ruff 0.5.2
safetensors 0.4.3
Send2Trash 1.8.3
setuptools 70.3.0
shellingham 1.5.4
six 1.16.0
smart-open 7.0.4
sniffio 1.3.1
soupsieve 2.5
spacy 3.7.5
spacy-alignments 0.9.1
spacy-curated-transformers 0.2.2
spacy-legacy 3.0.12
spacy-llm 0.7.2
spacy-loggers 1.0.5
spacy-lookups-data 1.0.5
spacy-transformers 1.3.5
srsly 2.4.8
stack-data 0.6.3
starlette 0.37.2
sympy 1.13.1
terminado 0.18.1
thinc 8.2.5
tinycss2 1.3.0
tokenizers 0.15.2
toolz 0.12.1
torch 2.4.0
tornado 6.4.1
tqdm 4.66.4
traitlets 5.14.3
transformers 4.36.2
typeguard 3.0.2
typer 0.12.3
types-python-dateutil 2.9.0.20240316
typing_extensions 4.12.2
uri-template 1.3.0
urllib3 2.2.2
uvicorn 0.26.0
virtualenv 20.26.3
wasabi 1.1.3
wcwidth 0.2.13
weasel 0.4.1
webcolors 24.6.0
webencodings 0.5.1
websocket-client 1.8.0
widgetsnbextension 4.0.11
wrapt 1.16.0
```
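One possible workaround while the behaviour stands: unlike `track()`, the `Progress` class exposes `redirect_stdout`/`redirect_stderr` flags, so you can drive the bar directly and opt out of stdout redirection. A sketch:

```python
from rich.console import Console
from rich.progress import Progress

err_console = Console(stderr=True)

# redirect_stdout=False tells Rich's Live display to leave sys.stdout
# alone, so plain print() keeps writing to the real stdout even while
# the progress bar (rendered to stderr) is active.
results = []
with Progress(console=err_console, redirect_stdout=False) as progress:
    task = progress.add_task("working", total=10)
    for i in range(10):
        if i % 4 == 0:
            print(f"foo {i}")  # stays on stdout
        results.append(i)
        progress.advance(task)
```

This doesn't help with third-party code that was written against `track()`, but it avoids the hijacking for your own loops.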
|
open
|
2024-08-05T21:59:38Z
|
2024-10-22T14:54:21Z
|
https://github.com/explosion/spaCy/issues/13591
|
[] |
it176131
| 1
|
OpenBB-finance/OpenBB
|
python
| 6,825
|
[๐น๏ธ] Follow on X
|
### What side quest or challenge are you solving?
Follow on X
### Points
50
### Description
Follow on X
### Provide proof that you've completed the task

|
closed
|
2024-10-20T13:31:20Z
|
2024-10-21T12:58:32Z
|
https://github.com/OpenBB-finance/OpenBB/issues/6825
|
[] |
sateshcharan
| 1
|
wger-project/wger
|
django
| 1,694
|
Performance of `api/v2/exercisebaseinfo/?limit=900`
|
## Use case
In my setup, when I open `/exercise/overview/`, it takes 2-3 minutes to load the content of `api/v2/exercisebaseinfo/?limit=900`. I run it on Raspberry Pis and similar hardware, so it may just be a hardware limitation.
## Workaround
So i came up with the following workaround. I set `EXERCISE_CACHE_TTL` to 25 hours and run a cronjob every 24 hours to warm up the cache:
```bash
python3 manage.py warmup-exercise-api-cache --force
```
This way the TTL of the exercise cache is reset every 24 hours. This works fine and it is blazing fast now ;-) I think updating an exercise will warm up its cache individually, so I don't have to wait until the next day for the updates (but I am not sure).
## Proposal
It would be nice if `warmup-exercise-api-cache` could be refactored similarly to `sync_exercises`, so that a celery task can be created. This would eliminate the need for a cronjob to warm up the cache.
|
open
|
2024-06-11T15:39:18Z
|
2024-06-12T13:58:54Z
|
https://github.com/wger-project/wger/issues/1694
|
[] |
bbkz
| 1
|
PokeAPI/pokeapi
|
api
| 361
|
How do we know who to request for code reviews?
|
For example, I know #351 is looking for another reviewer, but it is not obvious to me who the second person should be. Is there a way to mark off certain parts of the code so that we know who to ask for a review to meet the two reviewer requirement?
One option could be to use [CODEOWNERS](https://help.github.com/articles/about-codeowners/).
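For illustration, a hypothetical `.github/CODEOWNERS` could map paths to the people expected to review changes touching them (usernames below are placeholders):

```
# GitHub automatically requests a review from the matching owners
# on any PR that touches these paths.
*            @default-maintainer
/data/v2/    @data-maintainer
/config/     @api-maintainer
```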
|
closed
|
2018-09-09T00:55:47Z
|
2018-09-09T20:26:33Z
|
https://github.com/PokeAPI/pokeapi/issues/361
|
[
"question"
] |
neverendingqs
| 5
|
RobertCraigie/prisma-client-py
|
pydantic
| 209
|
`update_many` should not include relational fields
|
## Bug description
Updating relational fields in the `updateMany` mutation is not supported yet: https://github.com/prisma/prisma/issues/3143
However, the types we generate suggest that it is supported.
## How to reproduce
Generate the client used for internal testing and observe the generated type:
```py
class UserUpdateManyMutationInput(TypedDict, total=False):
    """Arguments for updating many records"""

    id: str
    name: str
    email: Optional[str]
    posts: 'PostUpdateManyWithoutRelationsInput'
    profile: 'ProfileUpdateOneWithoutRelationsInput'
```
This should be:
```py
class UserUpdateManyMutationInput(TypedDict, total=False):
    """Arguments for updating many records"""

    id: str
    name: str
    email: Optional[str]
```
## Expected behavior
Generated `updateMany` types should not include relational fields
|
closed
|
2022-01-05T23:25:27Z
|
2022-01-06T10:11:00Z
|
https://github.com/RobertCraigie/prisma-client-py/issues/209
|
[
"bug/2-confirmed",
"kind/bug"
] |
RobertCraigie
| 0
|
CorentinJ/Real-Time-Voice-Cloning
|
python
| 516
|
ERROR: No matching distribution found for tensorflow==1.15
|
I'm trying to install the prerequisites but am unable to install tensorflow version 1.15. Does anyone have a workaround? Using python3-pip.
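For context: TensorFlow 1.15 only published wheels for Python 3.5–3.7, so pip on a newer interpreter reports "No matching distribution found". A sketch of the usual workaround (the pyenv steps are commented out because they assume pyenv is installed):

```shell
python3 --version   # if this prints 3.8 or later, pip cannot find tf 1.15
# One workaround, assuming pyenv is available:
#   pyenv install 3.7.17 && pyenv local 3.7.17
#   python -m venv venv37 && . venv37/bin/activate
#   pip install tensorflow==1.15
```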
|
closed
|
2020-09-01T04:13:46Z
|
2020-09-04T05:12:03Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/516
|
[] |
therealjr
| 3
|
ned2/slapdash
|
plotly
| 7
|
Remove usage of Events
|
Events are being phased out from Dash. Need to remove the various references throughout Slapdash to them.
|
closed
|
2018-10-29T12:44:28Z
|
2018-11-03T04:39:53Z
|
https://github.com/ned2/slapdash/issues/7
|
[] |
ned2
| 1
|
plotly/plotly.py
|
plotly
| 4,533
|
only the last 8 figures shown when using for-loop
|
I have a jupyter-notebook, where I plot several plotly scatter figures via a for-loop:
```python
for i in myColumnList:
    for j in myColumnList:
        fig = px.scatter(dft, x=i, y=j)
        fig.show()
```
As soon as more than 8 figures are shown, the older ones start to vanish.
How can I fix this?
plotly==5.16.0
jupyter==1.0.0
jupyter-client==6.1.12
jupyter-console==6.4.2
jupyter-contrib-core==0.4.2
jupyter-server==1.24.0
jupyter_core==5.3.1
jupyterlab_server==2.24.0
|
closed
|
2024-02-27T22:10:41Z
|
2024-02-28T15:37:01Z
|
https://github.com/plotly/plotly.py/issues/4533
|
[] |
Koliham
| 1
|
chaos-genius/chaos_genius
|
data-visualization
| 908
|
[BUG] Link to documentation instead of localhost by default (from alerts)
|
## Describe the bug
When [`CHAOSGENIUS_WEBAPP_URL`](https://docs.chaosgenius.io/docs/Operator_Guides/Configuration/Config%20Parameters#required-parameters) isn't setup correctly, all alerts (slack and email) have links ("View KPI", "Alerts dashboard", etc.) that go to localhost by default.
This can be confusing as it does not indicate how to fix it. It would be better to link to a page/section in the documentation clearly explaining that this variable isn't setup and how to set it up.
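For reference, the parameter is a single base URL set in the deployment's environment, e.g. (the hostname here is a placeholder):

```
CHAOSGENIUS_WEBAPP_URL=https://chaosgenius.example.com
```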
## Explain the environment
- **Chaos Genius version**: v0.5.0 and above
- **OS Version / Instance**: all
- **Deployment type**: all
## Current behavior
Links to localhost:8080
## Expected behavior
Should link to documentation if not setup
|
open
|
2022-04-08T09:04:28Z
|
2022-10-03T04:10:25Z
|
https://github.com/chaos-genius/chaos_genius/issues/908
|
[
"๐ bug",
"โalerts"
] |
Samyak2
| 0
|
cobrateam/splinter
|
automation
| 539
|
Trace logs
|
Is there any way to enable [selenium trace logs](https://github.com/SeleniumHQ/selenium/wiki/Logging) ?
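One approach (a sketch; the logger name is an assumption based on selenium's module layout rather than anything splinter documents): selenium logs through Python's standard `logging` module, so raising the level on its remote-connection logger surfaces the wire-level trace:

```python
import logging

# Route all log records to stderr and show DEBUG and above.
logging.basicConfig(level=logging.DEBUG)

# Selenium's remote webdriver logs each command/response through this logger.
trace_logger = logging.getLogger("selenium.webdriver.remote.remote_connection")
trace_logger.setLevel(logging.DEBUG)
```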
|
open
|
2017-04-06T13:39:39Z
|
2018-08-27T01:09:10Z
|
https://github.com/cobrateam/splinter/issues/539
|
[] |
jpic
| 0
|
plotly/dash-cytoscape
|
plotly
| 215
|
[Bug ]demo usage-image-export.py results in Invalid argument error
|
#### Description
I downloaded github.com/plotly/dash-cytoscape/blob/main/demos/usage-image-export.py and executed it using Python 3.12.3. It starts normally, but in the browser I get the error below. Tested with Chrome, Firefox and Edge under Windows. I would like to use the export in my own Dash Cytoscape application.
````
Invalid argument `generateImage.type` passed into Cytoscape with ID "cytoscape".
Expected one of ["svg","png","jpg","jpeg"].
(This error originated from the built-in JavaScript code that runs Dash apps. Click to see the full stack trace or open your browser's console.)
Error: Invalid argument `generateImage.type` passed into Cytoscape with ID "cytoscape".
Expected one of ["svg","png","jpg","jpeg"].
at propTypeErrorHandler (http://127.0.0.1:8050/_dash-component-suites/dash/dash-renderer/build/dash_renderer.v2_16_1m1712603851.dev.js:8118:9)
at CheckedComponent (http://127.0.0.1:8050/_dash-component-suites/dash/dash-renderer/build/dash_renderer.v2_16_1m1712603851.dev.js:3729:70)
at renderWithHooks (http://127.0.0.1:8050/_dash-component-suites/dash/deps/react-dom@16.v2_16_1m1712603851.14.0.js:14938:20)
at updateFunctionComponent (http://127.0.0.1:8050/_dash-component-suites/dash/deps/react-dom@16.v2_16_1m1712603851.14.0.js:17169:22)
at beginWork (http://127.0.0.1:8050/_dash-component-suites/dash/deps/react-dom@16.v2_16_1m1712603851.14.0.js:18745:18)
at HTMLUnknownElement.callCallback (http://127.0.0.1:8050/_dash-component-suites/dash/deps/react-dom@16.v2_16_1m1712603851.14.0.js:182:16)
at Object.invokeGuardedCallbackDev (http://127.0.0.1:8050/_dash-component-suites/dash/deps/react-dom@16.v2_16_1m1712603851.14.0.js:231:18)
at invokeGuardedCallback (http://127.0.0.1:8050/_dash-component-suites/dash/deps/react-dom@16.v2_16_1m1712603851.14.0.js:286:33)
at beginWork$1 (http://127.0.0.1:8050/_dash-component-suites/dash/deps/react-dom@16.v2_16_1m1712603851.14.0.js:23338:9)
at performUnitOfWork (http://127.0.0.1:8050/_dash-component-suites/dash/deps/react-dom@16.v2_16_1m1712603851.14.0.js:22292:14)
````
#### Steps/Code to Reproduce
python github.com/plotly/dash-cytoscape/blob/main/demos/usage-image-export.py
#### Versions
Dash 2.16.1
Dash Cytoscape 1.0.0
|
open
|
2024-04-12T08:24:15Z
|
2024-04-14T11:21:56Z
|
https://github.com/plotly/dash-cytoscape/issues/215
|
[] |
bfr42
| 1
|
charlesq34/pointnet
|
tensorflow
| 62
|
Issue to download trial data
|
Hello,
I tried to download the trial data (ModelNet40) and I get the following error:
```
--2017-12-05 16:45:27--  https://shapenet.cs.stanford.edu/media/modelnet40_ply_hdf5_2048.zip;
Resolving shapenet.cs.stanford.edu (shapenet.cs.stanford.edu)...
Connecting to shapenet.cs.stanford.edu (shapenet.cs.stanford.edu)|... connected.
HTTP request sent, awaiting response... 404 Not Found
2017-12-05 16:45:28 ERROR 404: Not Found.

--2017-12-05 16:45:28--  http://unzip/
Resolving unzip (unzip)... failed: No such host is known. .
wget: unable to resolve host address 'unzip'

--2017-12-05 16:45:30--  http://modelnet40_ply_hdf5_2048.zip/
Resolving modelnet40_ply_hdf5_2048.zip (modelnet40_ply_hdf5_2048.zip)... failed: No such host is known. .
wget: unable to resolve host address 'modelnet40_ply_hdf5_2048.zip'

'mv' is not recognized as an internal or external command,
operable program or batch file.
'rm' is not recognized as an internal or external command,
operable program or batch file.
```
I have checked that I have access to the repository.
Could you advise me on what to do?
Thank you.
|
closed
|
2017-12-05T17:03:38Z
|
2018-01-09T03:43:10Z
|
https://github.com/charlesq34/pointnet/issues/62
|
[] |
evagap
| 1
|
sunscrapers/djoser
|
rest-api
| 537
|
AttributeError: 'Settings' object has no attribute 'LOGIN_FIELD'
|
python: 3.7.6 or 3.8.5
django:3.1.1
djoser: 2.0.5
I customized `UserCreateSerializer`
and got this error:
`AttributeError: 'Settings' object has no attribute 'LOGIN_FIELD'
`
but `LOGIN_FIELD` exists in https://github.com/sunscrapers/djoser/blob/c5a6f85efb55bc569529c45d98d3992f7ff7fc8b/djoser/conf.py#L31
-----
-----
my code:
# settings.py
```python
DJOSER = {
    # ... ...
    'SERIALIZERS': {
        'user_create': 'user.serializers.UserCreateSerializer_my',
    },
    # ... ...
}
```
# serializers.py
```python
from django.contrib.auth import authenticate, get_user_model
from django.contrib.auth.password_validation import validate_password
from django.core import exceptions as django_exceptions
from django.db import IntegrityError, transaction
from rest_framework import exceptions, serializers
from rest_framework.exceptions import ValidationError
from django.contrib.auth import authenticate, login
from djoser import utils
from djoser.compat import get_user_email, get_user_email_field_name
from django.conf import settings
from djoser.serializers import UserCreateSerializer
from drf_recaptcha.fields import ReCaptchaV2Field, ReCaptchaV3Field
from django.contrib.auth.models import AbstractUser,AbstractBaseUser
User = get_user_model()
class UserCreateSerializer_my(UserCreateSerializer):
    recaptcha = ReCaptchaV2Field()

    class Meta:
        model = User
        fields = tuple(User.REQUIRED_FIELDS) + (
            settings.LOGIN_FIELD,
            settings.USER_ID_FIELD,
            "password",
            "recaptcha",
        )
```
|
closed
|
2020-09-28T08:18:56Z
|
2020-11-06T15:26:51Z
|
https://github.com/sunscrapers/djoser/issues/537
|
[] |
ghost
| 0
|
sinaptik-ai/pandas-ai
|
pandas
| 684
|
Multi-table QnA
|
@gventuri, I have a quick question.
I saw the multi-dataframe setup and I really liked it. But does it have to match columns to give a response from multiple tables,
or can it iterate over all the dataframes and provide the most relevant answer? (This is really important for me.)
|
closed
|
2023-10-25T07:37:16Z
|
2024-06-01T00:12:02Z
|
https://github.com/sinaptik-ai/pandas-ai/issues/684
|
[] |
Dipankar1997161
| 0
|
dynaconf/dynaconf
|
flask
| 741
|
3.1.8 - It looks like this has broken integration with environment variables.
|
@rochacbruno @jyejare It looks like this has broken integration with environment variables.
If I have dynaconf 3.1.8 installed, and the combination of:
1. settings.yaml field key (nested one level from top map) is present, value is empty
2. validator with default=None for given field key
3. envvar set for nested field key (DYNACONF_something__field_key)
4. settings object configuration that looks like: https://github.com/SatelliteQE/robottelo/blob/master/robottelo/config/__init__.py#L18
My envvar value is no longer set in the resulting box instance, it takes the validator default of `None`.
I confirmed it's this line in particular by modifying the file in site-packages with 3.1.8 installed.
_Originally posted by @mshriver in https://github.com/rochacbruno/dynaconf/pull/729#discussion_r853220515_
|
closed
|
2022-04-19T21:45:11Z
|
2022-06-02T19:21:51Z
|
https://github.com/dynaconf/dynaconf/issues/741
|
[
"bug",
"HIGH"
] |
rochacbruno
| 12
|
gto76/python-cheatsheet
|
python
| 26
|
Virtual environment, tests, setuptools and pip
|
I love this cheat sheet!
It would be great to add some points about how to make production-grade code, such as:
- virtual environment (e.g., with virtualenvwrapper)
- test (unittest, pytest, mock, tox)
- packaging and installation (setuptools, pip, ...)
Some points about how "import" works would be great too (one of the most used yet least understood features in Python).
I realize that those are quite broad topics (especially the last one), but that would sure be very helpful
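Agreed on the first bullet — the core virtual-environment workflow is short enough to fit the cheat sheet's style, e.g. with plain `venv` (no virtualenvwrapper):

```shell
python3 -m venv .venv              # create an isolated environment in ./.venv
. .venv/bin/activate               # activate it (Windows: .venv\Scripts\activate)
python -m pip install --upgrade pip
python -c 'import sys; print(sys.prefix)'   # now points inside .venv
deactivate                         # back to the system interpreter
```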
|
closed
|
2019-02-12T10:12:31Z
|
2019-08-13T08:04:48Z
|
https://github.com/gto76/python-cheatsheet/issues/26
|
[] |
javier-m
| 2
|