| repo_name (string, 9-75 chars) | topic (string, 30 classes) | issue_number (int64, 1-203k) | title (string, 1-976 chars) | body (string, 0-254k chars) | state (string, 2 classes) | created_at (string, 20 chars) | updated_at (string, 20 chars) | url (string, 38-105 chars) | labels (list, 0-9 items) | user_login (string, 1-39 chars) | comments_count (int64, 0-452) |
|---|---|---|---|---|---|---|---|---|---|---|---|
nolar/kopf
|
asyncio
| 700
|
memo no longer in startup event?
|
## Question
<!-- What problem do you currently face and see no solution for it? -->
When running a basic example with kopf:
```python
@kopf.on.startup()
async def startup(memo: kopf.Memo, **_):
    pass
TypeError: startup() missing 1 required positional argument: 'memo'
```
Nevertheless, it works with other handlers:
```python
@kopf.on.create("example.com")
async def create(spec, memo: kopf.Memo, **kwargs):
```
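For reference, a minimal sketch of the pattern being attempted, assuming `memo` is available to startup handlers in the installed kopf version (the `api_client` attribute is a placeholder, not from the original report):
```python
import kopf

@kopf.on.startup()
async def startup(memo: kopf.Memo, **_):
    # Stash something shared for later handlers (placeholder value).
    memo.api_client = object()

@kopf.on.create("example.com")
async def create(spec, memo: kopf.Memo, **_):
    # The same Memo instance would be expected here.
    print(memo.api_client)
```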
<!-- If possible, explain what other ways did you try to solve the problem? -->
I am using the latest version of kopf, 1.29.2.
## Checklist
- [x] I have read the [documentation](https://kopf.readthedocs.io/en/latest/) and searched there for the problem
- [x] I have searched in the [GitHub Issues](https://github.com/nolar/kopf/issues?utf8=%E2%9C%93&q=) for similar questions
## Keywords
<!-- Which keywords did you search for in the documentation/issue for this problem? -->
memo
|
closed
|
2021-02-27T07:11:28Z
|
2021-02-27T08:46:19Z
|
https://github.com/nolar/kopf/issues/700
|
[
"question"
] |
masantiago
| 2
|
SYSTRAN/faster-whisper
|
deep-learning
| 132
|
CUBLAS_STATUS_INVALID_VALUE on nvidia t4 when running behind gunicorn
|
When I run a normal python script that loads the model onto the GPU and runs it, everything works normally. Once I put it behind gunicorn I always get CUBLAS_STATUS_INVALID_VALUE, no matter what compute_type I choose. Running on a T4 with CUDA 11.6.
```python
faster_whisper/transcribe.py", line 483, in encode
return self.model.encode(features, to_cpu=to_cpu)
RuntimeError: cuBLAS failed with status CUBLAS_STATUS_INVALID_VALUE
```
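For what it's worth, a common pattern with pre-fork servers such as gunicorn is to defer GPU initialization until after the worker process has forked, since a CUDA context created before the fork does not survive it. This is a hedged sketch, not a confirmed diagnosis of this issue; the model name and lazy loader are assumptions:
```python
from faster_whisper import WhisperModel

_model = None

def get_model():
    # Lazily create the model inside the worker process, after gunicorn forks.
    global _model
    if _model is None:
        _model = WhisperModel("large-v2", device="cuda", compute_type="float16")
    return _model
```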
|
closed
|
2023-04-10T01:21:02Z
|
2023-04-12T07:23:05Z
|
https://github.com/SYSTRAN/faster-whisper/issues/132
|
[] |
daxaxelrod
| 6
|
Yorko/mlcourse.ai
|
data-science
| 679
|
Improve translation in part 6 on feature selection
|
[A comment on Kaggle](https://www.kaggle.com/kashnitsky/topic-6-feature-engineering-and-feature-selection/comments#1149832)
|
closed
|
2021-01-19T12:20:06Z
|
2021-12-22T17:50:48Z
|
https://github.com/Yorko/mlcourse.ai/issues/679
|
[
"minor_fix"
] |
Yorko
| 0
|
pyeve/eve
|
flask
| 1,543
|
[Question] Why was default.py removed in 8.1 and what do we replace the resolve_default_values method with?
|
### Expected Behavior
I expect to be able to import the method resolve_default_values. It was removed in eve version 8.1
```python
from eve.defaults import resolve_default_values
```
### Actual Behavior
Getting the following error:
Unresolved reference 'resolve_default_values'
### Environment
* Python version: 3.8.10
* Eve version: 1.1.5
|
closed
|
2025-01-06T11:57:16Z
|
2025-01-07T21:10:51Z
|
https://github.com/pyeve/eve/issues/1543
|
[] |
OzyOzk
| 0
|
junyanz/pytorch-CycleGAN-and-pix2pix
|
deep-learning
| 1,181
|
Progress is not saved in checkpoint
|
I tried training on the original datasets with Colab, but the checkpoint doesn't save the progress.
Does anyone have a similar problem?
<img width="886" alt="スクリーンショット 2020-11-09 224351" src="https://user-images.githubusercontent.com/69844075/98548619-175cb300-22dd-11eb-847f-d8bbfe764f68.png">
|
open
|
2020-11-09T13:44:35Z
|
2020-12-20T17:25:31Z
|
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1181
|
[] |
click-stack
| 4
|
huggingface/datasets
|
nlp
| 6,590
|
Feature request: Multi-GPU dataset mapping for SDXL training
|
### Feature request
We need to speed up SDXL dataset pre-process. Please make it possible to use multiple GPUs for the [official SDXL trainer](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_sdxl.py) :)
### Motivation
Pre-computing 3 million images takes around 2 days.
It would be nice to be able to do multi-GPU (or even better, multi-GPU + multi-node) VAE and embedding precompute...
### Your contribution
I'm not sure I can wrap my head around the multi-GPU mapping...
Plus it's too expensive for me to rent 2x A100s and spend a day just figuring out the stuff, since I don't have a job right now.
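As a rough illustration (not the SDXL trainer's actual code), `datasets` can already map with one worker per GPU by passing `with_rank=True`; the dataset path and the body of `encode` below are placeholders:
```python
from datasets import load_dataset

NUM_GPUS = 2

def encode(batch, rank):
    device = f"cuda:{rank % NUM_GPUS}"
    # Move the VAE / text encoders to `device` and precompute latents/embeddings here.
    return batch

ds = load_dataset("imagefolder", data_dir="path/to/images", split="train")
ds = ds.map(encode, batched=True, with_rank=True, num_proc=NUM_GPUS)
```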
|
open
|
2024-01-15T13:06:06Z
|
2024-01-15T13:07:07Z
|
https://github.com/huggingface/datasets/issues/6590
|
[
"enhancement"
] |
kopyl
| 0
|
pydantic/pydantic
|
pydantic
| 11,443
|
Broken validation of fields in `pydantic==2.11.0a{1,2}`
|
### Initial Checks
- [x] I confirm that I'm using Pydantic V2
### Description
In [napari](https://github.com/napari/napari) and [app-model](https://github.com/pyapp-kit/app-model) we have a CI job for pre-release tests. Since `pydantic==2.11.0a1`, this CI started failing.
I have identified that it is caused by passing 3 arguments to the `MenuItemBase._validator` method, where `MenuItemBase` inherits from pydantic's `BaseModel`.
```
Traceback (most recent call last):
File "/home/czaki/Projekty/app-model/tests/bug_reproduce.py", line 9, in <module>
m = MenuItem(command=a, when=None)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/czaki/.pyenv/versions/app-model/lib/python3.12/site-packages/pydantic/main.py", line 230, in __init__
validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: MenuItemBase._validate() takes 2 positional arguments but 3 were given
Process finished with exit code 1
```
I have found that in `pydantic==2.10.6` the `_validator` gets the class (as it is a classmethod) plus the `MenuItem` object as arguments. In `pydantic==2.11.0a2` it gets a dict and a `ValidationInfo` instance. The second object is unexpected, as `_validator` expects only two arguments.
We return `_validate` from the `__get_validators__` classmethod.
https://github.com/pyapp-kit/app-model/blob/dfcc42f9f5887c8f1e609db671b1599305855f01/src/app_model/types/_menu_rule.py#L40-L56
I have checked that even removing the pydantic 1/2 compatibility layer (the `pydantic_compat` package) does not solve the problem for me.
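For illustration only, a standalone sketch (not the app-model code) of a validator written to tolerate the extra argument observed in the 2.11 alphas; the `*_extra` catch-all is an assumption, not a confirmed fix:
```python
class MenuItemBase:
    @classmethod
    def _validate(cls, value, *_extra):
        # pydantic==2.10.6 calls this as _validate(value); the 2.11.0a1/a2
        # alphas appear to also pass a ValidationInfo, swallowed by *_extra here.
        return value
```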
### Example Code
```Python
from app_model import Action
from app_model.types import MenuItem
a = Action(
id="cmd.id",
title="Test title",
callback=lambda: "hi",
)
m = MenuItem(command=a, when=None)
```
### Python, Pydantic & OS Version
```Text
pydantic==2.11.0a1/2
All OS. https://github.com/pyapp-kit/app-model/actions/runs/13319935044
https://github.com/napari/napari/actions/runs/13262540422
```
|
closed
|
2025-02-14T18:33:47Z
|
2025-02-18T19:59:57Z
|
https://github.com/pydantic/pydantic/issues/11443
|
[
"bug V2",
"pending"
] |
Czaki
| 15
|
microsoft/RD-Agent
|
automation
| 506
|
RuntimeError: Failed to create chat completion after 10 retries
|
I'm encountering an issue where the program fails to create a chat completion after 10 retries. The error occurs when attempting to create an embedding using APIBackend().create_embedding(). Below is the error traceback:
<img width="1031" alt="Image" src="https://github.com/user-attachments/assets/86263eb2-0df2-4ff2-aa39-bfe546b9a232" />
Could anyone help me understand the root cause of this issue and suggest potential fixes? I have already tried restarting the system and re-running the process, and the API is running normally, but the issue persists.
|
open
|
2024-12-19T09:29:23Z
|
2025-01-10T05:20:53Z
|
https://github.com/microsoft/RD-Agent/issues/506
|
[
"question"
] |
Alexia0806
| 6
|
StackStorm/st2
|
automation
| 5,260
|
Can't assign pattern in rule criteria from datastore
|
## SUMMARY
While trying to assign a pattern to criteria, i.e.:
```
criteria:
  trigger.body.some_field:
    type: icontains
    pattern: "{{ st2kv.system.some_ds_dict | decrypt_kv | st2kv.system.some_ds_dict.get('some_ds_dict_key') }}"
```
Also tried just:
` "{{ st2kv.system.some_str_from_ds }}" `
The docs example isn't working either:
```
criteria:
  trigger.payload.build_number:
    type: "equals"
    pattern: "{{ st2kv.system.current_build_number }}"
```
Received:
`Object of type 'KeyValueLookup' is not JSON serializable`
Full traceback:
```
2021-05-10 12:05:33,521 ERROR [-] Failed to render pattern value "{{ st2kv.system.name }}" for key "trigger.body.sour
ce"
Traceback (most recent call last):
File "/opt/stackstorm/st2/lib/python3.6/site-packages/st2reactor/rules/filter.py", line 128, in _check_criterion
criteria_context=payload_lookup.context
File "/opt/stackstorm/st2/lib/python3.6/site-packages/st2reactor/rules/filter.py", line 206, in _render_criteria_pattern
context=criteria_context
File "/opt/stackstorm/st2/lib/python3.6/site-packages/st2common/util/templating.py", line 72, in render_template_with_system_
context
rendered = render_template(value=value, context=context)
File "/opt/stackstorm/st2/lib/python3.6/site-packages/st2common/util/templating.py", line 47, in render_template
rendered = template.render(context)
File "/opt/stackstorm/st2/lib/python3.6/site-packages/jinja2/asyncsupport.py", line 76, in render
return original_render(self, *args, **kwargs)
File "/opt/stackstorm/st2/lib/python3.6/site-packages/jinja2/environment.py", line 1008, in render
return self.environment.handle_exception(exc_info, True)
File "/opt/stackstorm/st2/lib/python3.6/site-packages/jinja2/environment.py", line 780, in handle_exception
reraise(exc_type, exc_value, tb)
File "/opt/stackstorm/st2/lib/python3.6/site-packages/jinja2/_compat.py", line 37, in reraise
raise value.with_traceback(tb)
File "<template>", line 1, in top-level template code
File "/opt/stackstorm/st2/lib/python3.6/site-packages/st2common/expressions/functions/data.py", line 106, in to_complex
return json.dumps(value)
File "/usr/lib/python3.6/json/__init__.py", line 231, in dumps
return _default_encoder.encode(obj)
File "/usr/lib/python3.6/json/encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/usr/lib/python3.6/json/encoder.py", line 257, in iterencode
return _iterencode(o, 0)
File "/usr/lib/python3.6/json/encoder.py", line 180, in default
o.__class__.__name__)
TypeError: Object of type 'KeyValueLookup' is not JSON serializable
```
### STACKSTORM VERSION
`st2 3.1.0, on Python 3.6.9`
seems related to:
https://github.com/StackStorm/st2/issues/4043
@bigmstone
I saw you are related to that issue; any hints on how to resolve this, or any workaround?
(I know that in the rule's action params it manages to fetch datastore values with no issues, so maybe it's good to take that as a reference.)
|
closed
|
2021-05-10T07:16:59Z
|
2024-07-09T22:05:19Z
|
https://github.com/StackStorm/st2/issues/5260
|
[
"feature",
"service: rules engine"
] |
DavidMeu
| 4
|
piskvorky/gensim
|
machine-learning
| 2,609
|
bm25 score is not symmetrical
|
I think the bm25 score is not symmetrical, but the code currently fills the graph weights assuming that it is.
https://github.com/RaRe-Technologies/gensim/blob/develop/gensim/summarization/graph.py#L310
I hope the authors can fix it. And will it influence the summarization performance?
Or if I'm wrong, please let me know.
Thanks!
The code:
```python
from gensim.summarization.bm25 import get_bm25_weights
corpus = [
["black", "cat", "white", "cat"],
["cat", "outer", "space"],
["wag", "dog"]
]
get_bm25_weights(corpus)
# outputs:
[[1.1237959024144617, 0.1824377227735681, 0],
[0.11770175662810844, 1.1128701089187656, 0],
[0, 0, 1.201942644155272]]
```
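A quick check of the asymmetry using the output above (a sketch; the indices follow the corpus order in the snippet):
```python
from gensim.summarization.bm25 import get_bm25_weights

corpus = [
    ["black", "cat", "white", "cat"],
    ["cat", "outer", "space"],
    ["wag", "dog"],
]
weights = get_bm25_weights(corpus)
# BM25(d1, d2) need not equal BM25(d2, d1): the score depends on which
# document plays the role of the query.
print(weights[0][1], weights[1][0])  # ~0.182 vs ~0.118, so the matrix is not symmetric
```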
|
closed
|
2019-09-25T12:51:16Z
|
2021-09-13T13:16:33Z
|
https://github.com/piskvorky/gensim/issues/2609
|
[] |
junfenglx
| 8
|
tqdm/tqdm
|
pandas
| 1,037
|
MacOS Github Actions `threading.wait`: Fatal Python error: Illegal instruction
|
- [x] I have marked all applicable categories:
+ [x] exception-raising bug
+ [ ] visual output bug
+ [ ] documentation request (i.e. "X is missing from the documentation." If instead I want to ask "how to use X?" I understand [StackOverflow#tqdm] is more appropriate)
+ [ ] new feature request
- [ ] I have visited the [source website], and in particular
read the [known issues]
- [x] I have searched through the [issue tracker] for duplicates
- [x] I have mentioned version numbers, operating system and
environment, where applicable:
I'm running a unit test on GitHub CI. My project uses [IVIS](https://github.com/beringresearch/ivis), which in turn uses tqdm. It seems that it is causing a `fatal error` though. I've added a link to the logs [here](https://github.com/RasaHQ/whatlies/pull/226/checks?check_run_id=1142851432#step:6:78).
The logs tell me that `tqdm-4.49.0` was downloaded in the python 3.7.9 env.
```
2020-09-21T08:09:36.3741500Z Collecting tqdm<5.0.0,>=4.38.0
2020-09-21T08:09:36.3836870Z Downloading tqdm-4.49.0-py2.py3-none-any.whl (69 kB)
```
In particular this is the output in the logs.
```
2020-09-21T08:15:00.6798900Z Fatal Python error: Illegal instruction
2020-09-21T08:15:00.6799190Z
2020-09-21T08:15:00.6799520Z Thread 0x0000700003d3c000 (most recent call first):
2020-09-21T08:15:00.6800300Z File "/Users/runner/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/threading.py", line 300 in wait
2020-09-21T08:15:00.6801140Z File "/Users/runner/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/threading.py", line 552 in wait
2020-09-21T08:15:00.6802980Z File "/Users/runner/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/tqdm/_monitor.py", line 69 in run
2020-09-21T08:15:00.6804240Z File "/Users/runner/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/threading.py", line 926 in _bootstrap_inner
2020-09-21T08:15:00.6805250Z File "/Users/runner/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/threading.py", line 890 in _bootstrap
2020-09-21T08:15:00.6805750Z
2020-09-21T08:15:00.6806140Z Current thread 0x000000010dbb2dc0 (most recent call first):
2020-09-21T08:15:00.6807940Z File "/Users/runner/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/ivis/data/knn.py", line 43 in build_annoy_index
2020-09-21T08:15:00.6809660Z File "/Users/runner/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/ivis/ivis.py", line 173 in _fit
2020-09-21T08:15:00.6811270Z File "/Users/runner/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/ivis/ivis.py", line 303 in fit
2020-09-21T08:15:00.6812310Z File "/Users/runner/work/whatlies/whatlies/whatlies/transformers/_transformer.py", line 58 in fit
2020-09-21T08:15:00.6813340Z File "/Users/runner/work/whatlies/whatlies/whatlies/transformers/_transformer.py", line 18 in __call__
2020-09-21T08:15:00.6814350Z File "/Users/runner/work/whatlies/whatlies/whatlies/embeddingset.py", line 374 in transform
2020-09-21T08:15:00.6815550Z File "/Users/runner/work/whatlies/whatlies/tests/test_transformers.py", line 53 in test_transformations_new_size
2020-09-21T08:15:00.6817280Z File "/Users/runner/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/_pytest/python.py", line 180 in pytest_pyfunc_call
2020-09-21T08:15:00.6819130Z File "/Users/runner/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/pluggy/callers.py", line 187 in _multicall
2020-09-21T08:15:00.6820780Z File "/Users/runner/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/pluggy/manager.py", line 87 in <lambda>
2020-09-21T08:15:00.6822580Z File "/Users/runner/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/pluggy/manager.py", line 93 in _hookexec
2020-09-21T08:15:00.6824130Z File "/Users/runner/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/pluggy/hooks.py", line 286 in __call__
2020-09-21T08:15:00.6825710Z File "/Users/runner/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/_pytest/python.py", line 1570 in runtest
2020-09-21T08:15:00.6827380Z File "/Users/runner/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/_pytest/runner.py", line 153 in pytest_runtest_call
2020-09-21T08:15:00.6829230Z File "/Users/runner/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/pluggy/callers.py", line 187 in _multicall
2020-09-21T08:15:00.6830830Z File "/Users/runner/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/pluggy/manager.py", line 87 in <lambda>
2020-09-21T08:15:00.6832450Z File "/Users/runner/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/pluggy/manager.py", line 93 in _hookexec
2020-09-21T08:15:00.6834190Z File "/Users/runner/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/pluggy/hooks.py", line 286 in __call__
2020-09-21T08:15:00.6836390Z File "/Users/runner/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/_pytest/runner.py", line 247 in <lambda>
2020-09-21T08:15:00.6838170Z File "/Users/runner/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/_pytest/runner.py", line 294 in from_call
2020-09-21T08:15:00.6839740Z File "/Users/runner/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/_pytest/runner.py", line 247 in call_runtest_hook
2020-09-21T08:15:00.6841810Z File "/Users/runner/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/_pytest/runner.py", line 207 in call_and_report
2020-09-21T08:15:00.6843610Z File "/Users/runner/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/_pytest/runner.py", line 117 in runtestprotocol
2020-09-21T08:15:00.6845470Z File "/Users/runner/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/_pytest/runner.py", line 100 in pytest_runtest_protocol
2020-09-21T08:15:00.6847650Z File "/Users/runner/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/pluggy/callers.py", line 187 in _multicall
2020-09-21T08:15:00.6849980Z File "/Users/runner/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/pluggy/manager.py", line 87 in <lambda>
2020-09-21T08:15:00.6854630Z File "/Users/runner/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/pluggy/manager.py", line 93 in _hookexec
2020-09-21T08:15:00.6857140Z File "/Users/runner/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/pluggy/hooks.py", line 286 in __call__
2020-09-21T08:15:00.6864520Z File "/Users/runner/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/_pytest/main.py", line 321 in pytest_runtestloop
2020-09-21T08:15:00.6907550Z File "/Users/runner/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/pluggy/callers.py", line 187 in _multicall
2020-09-21T08:15:00.6911390Z File "/Users/runner/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/pluggy/manager.py", line 87 in <lambda>
2020-09-21T08:15:00.6918550Z File "/Users/runner/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/pluggy/manager.py", line 93 in _hookexec
2020-09-21T08:15:00.6920910Z File "/Users/runner/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/pluggy/hooks.py", line 286 in __call__
2020-09-21T08:15:00.6922530Z File "/Users/runner/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/_pytest/main.py", line 296 in _main
2020-09-21T08:15:00.6924750Z File "/Users/runner/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/_pytest/main.py", line 240 in wrap_session
2020-09-21T08:15:00.6928050Z File "/Users/runner/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/_pytest/main.py", line 289 in pytest_cmdline_main
2020-09-21T08:15:00.6930110Z File "/Users/runner/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/pluggy/callers.py", line 187 in _multicall
2020-09-21T08:15:00.6933650Z File "/Users/runner/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/pluggy/manager.py", line 87 in <lambda>
2020-09-21T08:15:00.6935350Z File "/Users/runner/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/pluggy/manager.py", line 93 in _hookexec
2020-09-21T08:15:00.6936890Z File "/Users/runner/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/pluggy/hooks.py", line 286 in __call__
2020-09-21T08:15:00.6938740Z File "/Users/runner/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/_pytest/config/__init__.py", line 158 in main
2020-09-21T08:15:00.6940230Z File "/Users/runner/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/_pytest/config/__init__.py", line 180 in console_main
2020-09-21T08:15:00.6941100Z File "/Users/runner/hostedtoolcache/Python/3.7.9/x64/bin/pytest", line 8 in <module>
2020-09-21T08:15:00.6943850Z /Users/runner/work/_temp/63d8490e-ef7b-415f-aa09-6f72c7998b44.sh: line 2: 1907 Illegal instruction: 4 pytest --verbose --durations 0
```
I'll try to debug it on my side but I figured I'd share this here too.
|
open
|
2020-09-21T08:35:27Z
|
2024-01-02T17:00:48Z
|
https://github.com/tqdm/tqdm/issues/1037
|
[
"p0-bug-critical ☢",
"help wanted 🙏",
"need-feedback 📢",
"synchronisation ⇶"
] |
koaning
| 4
|
psf/black
|
python
| 3,705
|
IndentationError on valid Python triggered by fmt: off
|
When running black (tested with version 23.3.0) as follows:
```
import black
code = """
if True:
  if True:
    # fmt: off
    pass
"""
black.format_str(code, mode=black.Mode())
```
This produces the following error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "src/black/__init__.py", line 1084, in format_str
File "src/black/__init__.py", line 1089, in _format_str_once
File "src/black/parsing.py", line 94, in lib2to3_parse
File "src/blib2to3/pgen2/driver.py", line 213, in parse_string
File "src/blib2to3/pgen2/driver.py", line 140, in parse_tokens
File "src/blib2to3/pgen2/driver.py", line 104, in __next__
File "src/blib2to3/pgen2/tokenize.py", line 529, in generate_tokens
start, end = pseudomatch.span(1)
File "<tokenize>", line 4
pass
IndentationError: unindent does not match any outer indentation level
```
Variations on this example that do not cause black to error:
1. Changing it to four space indents.
2. Removing the `# fmt: off`
3. Reducing the level of nesting to that of a single
This example seems extremely similar to #3702, but produces a different error from a different location so seemed worth reporting as its own thing.
|
closed
|
2023-05-25T14:21:30Z
|
2023-05-26T05:18:26Z
|
https://github.com/psf/black/issues/3705
|
[
"T: bug"
] |
DRMacIver
| 1
|
jonaswinkler/paperless-ng
|
django
| 1,375
|
[Other] Multiple correspondents and multiple types for a document
|
<!--
=> Discussions, Feedback and other suggestions belong in the "Discussion" section and not on the issue tracker.
=> If you would like to submit a feature request please submit one under https://github.com/jonaswinkler/paperless-ng/discussions/categories/feature-requests
=> If you encounter issues while installing of configuring Paperless-ng, please post that in the "Support" section of the discussions. Remember that Paperless successfully runs on a variety of different systems. If paperless does not start, it's probably is an issue with your system, and not an issue of paperless.
=> Don't remove the [Other] prefix from the title.
-->
Hi!
I have a use-case that would benefit a lot from the ability to set multiple correspondents to a document.
Specifically for medical records; having both the actual doctor as well as the hospital would be very helpful for sorting/searching records.
Alternatively being able to form a hierarchy in the correspondents admin page could work (i.e. set the document's correspondent to the doctor, but set in the admin page that the doctor is a "child" of the hospital correspondent).
Then filtering by the parent correspondent could return all children as well as documents set directly to the parent.
A similar system could also be useful for document types I'm sure, but I don't have specific examples that could illustrate that.
Is it something that would be reasonably feasible?
|
open
|
2021-10-11T03:44:28Z
|
2021-10-31T15:54:54Z
|
https://github.com/jonaswinkler/paperless-ng/issues/1375
|
[] |
anotherjulien
| 2
|
ploomber/ploomber
|
jupyter
| 550
|
link from videos section to our youtube channel
|
link to our youtube channel (add link at the top)
https://docs.ploomber.io/en/latest/videos.html
```
more videos available in our youtube channel
```
also, if there's a way to add a link that subscribes people, that's great
|
closed
|
2022-02-06T22:45:49Z
|
2022-02-08T22:15:39Z
|
https://github.com/ploomber/ploomber/issues/550
|
[
"good first issue"
] |
edublancas
| 2
|
HIT-SCIR/ltp
|
nlp
| 593
|
Installation error
|
Below is the error output. I only looked at the first ERROR, which says that no maturin matching the version requirement could be found. I then checked the corresponding PyPI page, and it shows that the latest version has indeed reached 0.13, but when I try to install the package with `pip install maturin=0.13.3`, the error message also says that no matching package can be found.
```
Collecting ltp
Using cached ltp-4.2.10-py3-none-any.whl (20 kB)
Collecting ltp-core
Using cached ltp_core-0.1.2-py3-none-any.whl (68 kB)
Collecting ltp-extension
Using cached ltp_extension-0.1.8.tar.gz (92 kB)
Installing build dependencies ... error
ERROR: Command errored out with exit status 1:
command: /home/klwang/miniconda3/envs/match-baseline/bin/python /home/klwang/miniconda3/envs/match-baseline/lib/python3.6/site-packages/pip install --ignore-installed --no-user --prefix /tmp/pip-build-env-pwyxbntz/overlay --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- 'maturin>=0.13,<0.14'
cwd: None
Complete output (2 lines):
ERROR: Could not find a version that satisfies the requirement maturin<0.14,>=0.13 (from versions: 0.7.1, 0.7.2, 0.7.6, 0.7.7, 0.7.8, 0.7.9, 0.8.0, 0.8.1, 0.8.2, 0.8.3, 0.9.0, 0.9.1, 0.9.4, 0.10.0, 0.10.2, 0.10.3, 0.10.4, 0.10.5, 0.10.6, 0.11.0, 0.11.1, 0.11.2, 0.11.3, 0.11.4, 0.11.5, 0.12.0, 0.12.1, 0.12.2, 0.12.3, 0.12.4, 0.12.5, 0.12.6, 0.12.7, 0.12.8, 0.12.9, 0.12.10, 0.12.11, 0.12.12, 0.12.13, 0.12.14, 0.12.15, 0.12.16, 0.12.17, 0.12.18b1, 0.12.18b2, 0.12.18, 0.12.19, 0.12.20)
ERROR: No matching distribution found for maturin<0.14,>=0.13
----------------------------------------
WARNING: Discarding https://files.pythonhosted.org/packages/1d/1a/17a693f6c1c38c3b251133ee1f9a114e2e62d37d3a4ec767c6313b859612/ltp_extension-0.1.8.tar.gz#sha256=6432dcb6c313958dd810af00f415a3f24314862c461f3b42d8cfd2e1204a792d (from https://pypi.org/simple/ltp-extension/). Command errored out with exit status 1: /home/klwang/miniconda3/envs/match-baseline/bin/python /home/klwang/miniconda3/envs/match-baseline/lib/python3.6/site-packages/pip install --ignore-installed --no-user --prefix /tmp/pip-build-env-pwyxbntz/overlay --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- 'maturin>=0.13,<0.14' Check the logs for full command output.
Using cached ltp_extension-0.1.7.tar.gz (92 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... error
ERROR: Command errored out with exit status 1:
command: /home/klwang/miniconda3/envs/match-baseline/bin/python /home/klwang/miniconda3/envs/match-baseline/lib/python3.6/site-packages/pip/_vendor/pep517/in_process/_in_process.py prepare_metadata_for_build_wheel /tmp/tmpdelqx2ob
cwd: /tmp/pip-install-t67622em/ltp-extension_11edac253305460f8fa5b79c20b8ddb5
Complete output (6 lines):
Cargo, the Rust package manager, is not installed or is not on PATH.
This package requires Rust and Cargo to compile extensions. Install it through
the system's package manager or via https://rustup.rs/
Checking for Rust toolchain....
----------------------------------------
WARNING: Discarding https://files.pythonhosted.org/packages/87/7e/3979012989da6f065fd59d4033914793741c7d05afde4cc2bcc4530eb87a/ltp_extension-0.1.7.tar.gz#sha256=959981e022f130698e2312f7166b48cfd47153fa3056e8ab5a319c64e7ceb5a0 (from https://pypi.org/simple/ltp-extension/). Command errored out with exit status 1: /home/klwang/miniconda3/envs/match-baseline/bin/python /home/klwang/miniconda3/envs/match-baseline/lib/python3.6/site-packages/pip/_vendor/pep517/in_process/_in_process.py prepare_metadata_for_build_wheel /tmp/tmpdelqx2ob Check the logs for full command output.
Using cached ltp_extension-0.1.6.tar.gz (92 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... error
ERROR: Command errored out with exit status 1:
command: /home/klwang/miniconda3/envs/match-baseline/bin/python /home/klwang/miniconda3/envs/match-baseline/lib/python3.6/site-packages/pip/_vendor/pep517/in_process/_in_process.py prepare_metadata_for_build_wheel /tmp/tmpln6ev_21
cwd: /tmp/pip-install-t67622em/ltp-extension_975f646366b746d3a27909cd3bacd72d
Complete output (6 lines):
Cargo, the Rust package manager, is not installed or is not on PATH.
This package requires Rust and Cargo to compile extensions. Install it through
the system's package manager or via https://rustup.rs/
Checking for Rust toolchain....
----------------------------------------
WARNING: Discarding https://files.pythonhosted.org/packages/47/f1/b158d80f7a9fa933baf7c8c0fae8963f16813e502d577e07d54bc5dc3f52/ltp_extension-0.1.6.tar.gz#sha256=54bae65d14c4d00c5303dd812ed51e0f5c684a54cc4098888ca47bce6666abc3 (from https://pypi.org/simple/ltp-extension/). Command errored out with exit status 1: /home/klwang/miniconda3/envs/match-baseline/bin/python /home/klwang/miniconda3/envs/match-baseline/lib/python3.6/site-packages/pip/_vendor/pep517/in_process/_in_process.py prepare_metadata_for_build_wheel /tmp/tmpln6ev_21 Check the logs for full command output.
Using cached ltp_extension-0.1.5.tar.gz (91 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... error
ERROR: Command errored out with exit status 1:
command: /home/klwang/miniconda3/envs/match-baseline/bin/python /home/klwang/miniconda3/envs/match-baseline/lib/python3.6/site-packages/pip/_vendor/pep517/in_process/_in_process.py prepare_metadata_for_build_wheel /tmp/tmpuezebxdh
cwd: /tmp/pip-install-t67622em/ltp-extension_d59daef088574afcb23398ccd4e58d99
Complete output (6 lines):
Cargo, the Rust package manager, is not installed or is not on PATH.
This package requires Rust and Cargo to compile extensions. Install it through
the system's package manager or via https://rustup.rs/
Checking for Rust toolchain....
----------------------------------------
WARNING: Discarding https://files.pythonhosted.org/packages/58/c6/da59a7e876fe89d690f0658f9a992940cff73da5f242db628a35166cd099/ltp_extension-0.1.5.tar.gz#sha256=5f4957eda11fffe8d9d838050c187277a84d53fb71009a321c850c60da154b05 (from https://pypi.org/simple/ltp-extension/). Command errored out with exit status 1: /home/klwang/miniconda3/envs/match-baseline/bin/python /home/klwang/miniconda3/envs/match-baseline/lib/python3.6/site-packages/pip/_vendor/pep517/in_process/_in_process.py prepare_metadata_for_build_wheel /tmp/tmpuezebxdh Check the logs for full command output.
Using cached ltp_extension-0.1.4.tar.gz (91 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... error
ERROR: Command errored out with exit status 1:
command: /home/klwang/miniconda3/envs/match-baseline/bin/python /home/klwang/miniconda3/envs/match-baseline/lib/python3.6/site-packages/pip/_vendor/pep517/in_process/_in_process.py prepare_metadata_for_build_wheel /tmp/tmpkcjrdwuk
cwd: /tmp/pip-install-t67622em/ltp-extension_58de40cc94754a539925400f66dd2cdb
Complete output (6 lines):
Cargo, the Rust package manager, is not installed or is not on PATH.
This package requires Rust and Cargo to compile extensions. Install it through
the system's package manager or via https://rustup.rs/
Checking for Rust toolchain....
----------------------------------------
WARNING: Discarding https://files.pythonhosted.org/packages/8c/be/a76b4f92375030db9a55895c93f504fac400bcaa484a879e2b58b16abd40/ltp_extension-0.1.4.tar.gz#sha256=9aa5aea2084c30377b0a38293a79f7793795779eed1e8d62e8def44f1b7ed10f (from https://pypi.org/simple/ltp-extension/). Command errored out with exit status 1: /home/klwang/miniconda3/envs/match-baseline/bin/python /home/klwang/miniconda3/envs/match-baseline/lib/python3.6/site-packages/pip/_vendor/pep517/in_process/_in_process.py prepare_metadata_for_build_wheel /tmp/tmpkcjrdwuk Check the logs for full command output.
Using cached ltp_extension-0.1.3.tar.gz (85 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... error
ERROR: Command errored out with exit status 1:
command: /home/klwang/miniconda3/envs/match-baseline/bin/python /home/klwang/miniconda3/envs/match-baseline/lib/python3.6/site-packages/pip/_vendor/pep517/in_process/_in_process.py prepare_metadata_for_build_wheel /tmp/tmphk5gx0t4
cwd: /tmp/pip-install-t67622em/ltp-extension_3a87bdc16b414bb8a8e16fe377fc10a0
Complete output (6 lines):
Cargo, the Rust package manager, is not installed or is not on PATH.
This package requires Rust and Cargo to compile extensions. Install it through
the system's package manager or via https://rustup.rs/
Checking for Rust toolchain....
----------------------------------------
WARNING: Discarding https://files.pythonhosted.org/packages/b9/81/7174d97c4086a4716dc1e16d0e77bcba5f6672bb3349f34cc823d986f10f/ltp_extension-0.1.3.tar.gz#sha256=c71c44aa42ce1eb97abd691bb8dce8e2dff4bf183bfa8105992ffe144af6b81b (from https://pypi.org/simple/ltp-extension/). Command errored out with exit status 1: /home/klwang/miniconda3/envs/match-baseline/bin/python /home/klwang/miniconda3/envs/match-baseline/lib/python3.6/site-packages/pip/_vendor/pep517/in_process/_in_process.py prepare_metadata_for_build_wheel /tmp/tmphk5gx0t4 Check the logs for full command output.
Using cached ltp_extension-0.1.2.tar.gz (85 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... error
ERROR: Command errored out with exit status 1:
command: /home/klwang/miniconda3/envs/match-baseline/bin/python /home/klwang/miniconda3/envs/match-baseline/lib/python3.6/site-packages/pip/_vendor/pep517/in_process/_in_process.py prepare_metadata_for_build_wheel /tmp/tmpus3wjjqq
cwd: /tmp/pip-install-t67622em/ltp-extension_0aacee16db334b948dccdccafc029b6f
Complete output (6 lines):
Cargo, the Rust package manager, is not installed or is not on PATH.
This package requires Rust and Cargo to compile extensions. Install it through
the system's package manager or via https://rustup.rs/
Checking for Rust toolchain....
----------------------------------------
WARNING: Discarding https://files.pythonhosted.org/packages/a1/e7/0c996c20e1095c83759554c4865279daf59d1e1ba9d143d1bd050b67a04d/ltp_extension-0.1.2.tar.gz#sha256=49cf79c07e5cbb946227ac5cb21f449c5c0e47692b86ec36740c6deff7441317 (from https://pypi.org/simple/ltp-extension/). Command errored out with exit status 1: /home/klwang/miniconda3/envs/match-baseline/bin/python /home/klwang/miniconda3/envs/match-baseline/lib/python3.6/site-packages/pip/_vendor/pep517/in_process/_in_process.py prepare_metadata_for_build_wheel /tmp/tmpus3wjjqq Check the logs for full command output.
Using cached ltp_extension-0.1.0.tar.gz (82 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... error
ERROR: Command errored out with exit status 1:
command: /home/klwang/miniconda3/envs/match-baseline/bin/python /home/klwang/miniconda3/envs/match-baseline/lib/python3.6/site-packages/pip/_vendor/pep517/in_process/_in_process.py prepare_metadata_for_build_wheel /tmp/tmp7h36_7ob
cwd: /tmp/pip-install-t67622em/ltp-extension_d5343bf3948e48ae88d8f71714efa6e3
Complete output (6 lines):
Cargo, the Rust package manager, is not installed or is not on PATH.
This package requires Rust and Cargo to compile extensions. Install it through
the system's package manager or via https://rustup.rs/
Checking for Rust toolchain....
----------------------------------------
WARNING: Discarding https://files.pythonhosted.org/packages/a2/02/fe3a0d6deca17f7554db84889d988309e1fafd60f2c49c35fed9a87ae293/ltp_extension-0.1.0.tar.gz#sha256=602c31e6a7f7b2b0fd05f985837a86c992f2d93eb4a20d1ed4f25be518f6dcbd (from https://pypi.org/simple/ltp-extension/). Command errored out with exit status 1: /home/klwang/miniconda3/envs/match-baseline/bin/python /home/klwang/miniconda3/envs/match-baseline/lib/python3.6/site-packages/pip/_vendor/pep517/in_process/_in_process.py prepare_metadata_for_build_wheel /tmp/tmp7h36_7ob Check the logs for full command output.
ERROR: Could not find a version that satisfies the requirement ltp-extension (from versions: 0.1.0, 0.1.2, 0.1.3, 0.1.4, 0.1.5, 0.1.6, 0.1.7, 0.1.8)
ERROR: No matching distribution found for ltp-extension
```
|
open
|
2022-09-19T03:37:38Z
|
2022-09-19T03:41:16Z
|
https://github.com/HIT-SCIR/ltp/issues/593
|
[] |
GuiQuQu
| 1
|
Lightning-AI/LitServe
|
fastapi
| 245
|
The current openai spec is only compatible with version openai==0.28.0 and cannot recognize newer versions
|
## 🐛 Bug
The current openai spec is only compatible with version 0.28.0 and cannot recognize newer versions.
|
closed
|
2024-08-29T07:58:35Z
|
2024-08-30T12:56:29Z
|
https://github.com/Lightning-AI/LitServe/issues/245
|
[
"bug",
"help wanted"
] |
GhostXu11
| 2
|
horovod/horovod
|
tensorflow
| 3,598
|
Can not install horovod[tensorflow] when using bazel
|
I have a model that depends on both tensorflow and horovod that I'm building in a bazel repo. Even though the bazel build succeeds, at runtime I get the error that "horovod.tensorflow has not been built".
The pip packages and versions are:
tensorflow==2.5.1
horovod[tensorflow]==0.23.0
I have been reading about various solutions and I see that people recommend setting certain environment variables (PATH and LD_LIBRARY_PATH), so I added the following flags to the bazel build.
--action_env=PATH=$PATH:$HOME/openmpi/bin
--action_env=LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HOME/openmpi/lib
Please find below the error message I get

|
closed
|
2022-07-11T19:49:42Z
|
2022-09-24T05:58:38Z
|
https://github.com/horovod/horovod/issues/3598
|
[
"wontfix"
] |
RajatDoshi
| 1
|
labmlai/annotated_deep_learning_paper_implementations
|
machine-learning
| 43
|
Vision Transformer
|
Paper: [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929)
|
closed
|
2021-04-19T12:03:23Z
|
2021-07-17T09:56:19Z
|
https://github.com/labmlai/annotated_deep_learning_paper_implementations/issues/43
|
[
"paper implementation"
] |
vpj
| 4
|
jupyterhub/repo2docker
|
jupyter
| 1,192
|
Default Julia version for the REQUIRE buildpack is 0.6.4, not the latest version
|
If use of the REQUIRE file isn't deprecated, it would make sense for an unpinned version to lead to the latest version of Julia, rather than an ancient one.
If it is deprecated in favor of Project.toml, though, it can still make sense to retain it as it is. But then we should declare that clearly here and there.
|
open
|
2022-10-10T09:46:29Z
|
2023-02-17T09:16:54Z
|
https://github.com/jupyterhub/repo2docker/issues/1192
|
[
"documentation",
"maintenance"
] |
consideRatio
| 1
|
junyanz/pytorch-CycleGAN-and-pix2pix
|
pytorch
| 856
|
Solved the problem of training on your own data under Win10. Join QQ group 857449786 and mention CycleGAN for joint research
|
I have solved the problem of training on my own data under Win10. Join QQ group 857449786 and mention CycleGAN for joint research.
|
open
|
2019-11-23T06:58:19Z
|
2019-11-23T06:58:19Z
|
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/856
|
[] |
QQ2737499951
| 0
|
davidsandberg/facenet
|
tensorflow
| 271
|
calculate_filtering_metrics.py error
|
hi, everyone
I ran calculate_filtering_metrics.py to calculate the distance_to_center and I get the following error:
```
Traceback (most recent call last):
  File "calculate_filtering_metrics.py", line 127, in <module>
    main(parse_arguments(sys.argv[1:]))
  File "calculate_filtering_metrics.py", line 54, in main
    False, False, False, nrof_preprocess_threads=4, shuffle=False)
  File "/root/linshq/facenet-master/src/facenet.py", line 124, in read_and_augument_data
    labels = ops.convert_to_tensor(label_list, dtype=tf.int32)
TypeError: Expected int32, got range(0, 123) of type 'range' instead.
```
I traced the code. The error is in facenet.py:
```python
def read_and_augument_data(image_list, label_list, image_size, batch_size, max_nrof_epochs,
                           random_crop, random_flip, random_rotate, nrof_preprocess_threads, shuffle=True):
    images = ops.convert_to_tensor(image_list, dtype=tf.string)
    labels = ops.convert_to_tensor(label_list, dtype=tf.int32)
```
The label is a range, and converting a range to int32 raises the `TypeError: Expected int32, got range(0, 123) of type 'range' instead.` error.
Does anyone have this problem? Thanks!
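A possible workaround, offered as an assumption rather than a confirmed fix: materialize the range into a list before converting, since TensorFlow cannot convert a Python 3 `range` object directly.
```python
import tensorflow as tf
from tensorflow.python.framework import ops

label_list = range(0, 123)  # what facenet.py ends up passing under Python 3
labels = ops.convert_to_tensor(list(label_list), dtype=tf.int32)  # list() avoids the TypeError
```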
|
closed
|
2017-05-10T12:52:33Z
|
2020-07-21T06:04:17Z
|
https://github.com/davidsandberg/facenet/issues/271
|
[] |
Roysky
| 11
|
waditu/tushare
|
pandas
| 1,768
|
The universal quotes interface cannot apply price adjustment to data other than stocks, and the fund adjustment formula in the documentation is wrong
|
The universal quotes interface does not apply price adjustment to fund data:

The adjustment reference described in the docs (https://tushare.pro/document/2?doc_id=127) is incorrect.

|
open
|
2025-03-05T16:54:16Z
|
2025-03-05T16:54:16Z
|
https://github.com/waditu/tushare/issues/1768
|
[] |
wintersnowlc
| 0
|
explosion/spaCy
|
nlp
| 13,535
|
Calling Spacy from Matlab throws errors
|
spaCy version 3.7.5
Location C:\Users\cse_s\AppData\Local\Programs\Python\Python312\Lib\site-packages\spacy
Platform Windows-11
Python version 3.12.3
Pipelines en_core_web_lg (3.7.1)
I want to call the spaCy code from Matlab. The spaCy code is as follows, and it works well in the PyCharm IDE.
```
import spacy
nlp = spacy.load("en_core_web_lg")
doc = nlp("This is a sentence.")
```
However, the Matlab code throws errors
```
Error using numpy_ops>init thinc.backends.numpy_ops
Python Error: ValueError: numpy.dtype size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from
PyObject
Error in cupy_ops><module> (line 16)
Error in __init__><module> (line 17)
Error in api><module> (line 1)
Error in compat><module> (line 39)
Error in errors><module> (line 3)
Error in __init__><module> (line 6)
Error in test_spacy><module> (line 1)
Error in <frozen importlib>_call_with_frames_removed (line 228)
Error in <frozen importlib>exec_module (line 850)
Error in <frozen importlib>_load_unlocked (line 680)
Error in <frozen importlib>_find_and_load_unlocked (line 986)
Error in <frozen importlib>_find_and_load (line 1007)
Error in <frozen importlib>_gcd_import (line 1030)
Error in __init__>import_module (line 127)
```
The Matlab code is
```
pyenv;
py.importlib.import_module('test_spacy');
path_add = fileparts(which('test_spacy.py'));
if count(py.sys.path, path_add) == 0
    insert(py.sys.path, int64(0), path_add);
end
```
test_spacy is the name of my Python file. How can I solve this issue?
|
open
|
2024-06-20T15:28:53Z
|
2024-07-08T19:40:24Z
|
https://github.com/explosion/spaCy/issues/13535
|
[] |
Saswati-Project
| 2
|
HumanSignal/labelImg
|
deep-learning
| 118
|
How to keep annotations of prev-image still exist when I open the next image?
|
Hi, thanks for your great work first of all!
My images are captured from video, so the previous image is similar to the next one. But when I open the next image, the previous annotations disappear. I don't want to create them again, since they are useful for the next image.
I tried to modify your code, but failed. Can you tell me how to do it?
|
closed
|
2017-07-17T13:36:50Z
|
2019-05-28T12:14:33Z
|
https://github.com/HumanSignal/labelImg/issues/118
|
[
"enhancement"
] |
z-huabao
| 4
|
gevent/gevent
|
asyncio
| 1,781
|
gevent 20.12.0 and later doesn't mention requires_dist anymore
|
* gevent version: 20.12.0
* Python version: 3.7
* Operating System: Any
### Description:
For `gevent` version `<=20.9.0`, we would get the list of dependencies of the `gevent` package via `pypi` as:
```
$ curl https://pypi.org/pypi/gevent/20.9.0/json
{
....
"requires_dist": [
"zope.event",
"zope.interface",
"setuptools",
"greenlet (>=0.4.17) ; platform_python_implementation == \"CPython\"",
"cffi (>=1.12.2) ; platform_python_implementation == \"CPython\" and sys_platform == \"win32\"",
"dnspython (<2.0,>=1.16.0) ; extra == 'dnspython'",
"idna ; extra == 'dnspython'",
"repoze.sphinx.autointerface ; extra == 'docs'",
"sphinxcontrib-programoutput ; extra == 'docs'",
"psutil (>=5.7.0) ; (sys_platform != \"win32\" or platform_python_implementation == \"CPython\") and extra == 'monitor'",
"dnspython (<2.0,>=1.16.0) ; extra == 'recommended'",
"idna ; extra == 'recommended'",
"cffi (>=1.12.2) ; (platform_python_implementation == \"CPython\") and extra == 'recommended'",
"selectors2 ; (python_version == \"2.7\") and extra == 'recommended'",
"backports.socketpair ; (python_version == \"2.7\" and sys_platform == \"win32\") and extra == 'recommended'",
"psutil (>=5.7.0) ; (sys_platform != \"win32\" or platform_python_implementation == \"CPython\") and extra == 'recommended'",
"dnspython (<2.0,>=1.16.0) ; extra == 'test'",
"idna ; extra == 'test'",
"requests ; extra == 'test'",
"objgraph ; extra == 'test'",
"cffi (>=1.12.2) ; (platform_python_implementation == \"CPython\") and extra == 'test'",
"selectors2 ; (python_version == \"2.7\") and extra == 'test'",
"futures ; (python_version == \"2.7\") and extra == 'test'",
"mock ; (python_version == \"2.7\") and extra == 'test'",
"backports.socketpair ; (python_version == \"2.7\" and sys_platform == \"win32\") and extra == 'test'",
"contextvars (==2.4) ; (python_version > \"3.0\" and python_version < \"3.7\") and extra == 'test'",
"coverage (<5.0) ; (sys_platform != \"win32\") and extra == 'test'",
"coveralls (>=1.7.0) ; (sys_platform != \"win32\") and extra == 'test'",
"psutil (>=5.7.0) ; (sys_platform != \"win32\" or platform_python_implementation == \"CPython\") and extra == 'test'"
],
...
}
```
This `requires_dist` allows us to build packages (read: FreeBSD pkg) for all dependencies and gevent itself. But since 20.12.0, the `requires_dist` comes out as `null`, which has made FreeBSD's gevent pkg stop at `20.9.0`:
```
$ curl https://pypi.org/pypi/gevent/20.12.0/json
{
...
"requires_dist": null,
...
}
```
thus pkg shows:
```
$ pkg rquery %v py37-gevent
20.9.0
```
Can something please be done from gevent side to fix the dependencies?
|
closed
|
2021-03-26T14:25:36Z
|
2021-03-26T14:52:34Z
|
https://github.com/gevent/gevent/issues/1781
|
[] |
Rahul-RB
| 2
|
babysor/MockingBird
|
deep-learning
| 415
|
Suggestion about the Baidu Netdisk links in the wiki tutorial
|
It would be good to also provide links on some other cloud drives, for the convenience of users who don't have a Baidu account or don't want to install the Baidu Netdisk client.
|
closed
|
2022-03-02T04:36:15Z
|
2022-03-07T06:15:35Z
|
https://github.com/babysor/MockingBird/issues/415
|
[] |
ghost
| 3
|
ray-project/ray
|
python
| 50,883
|
[Serve] Ray Serve APIs for users to define when the Ray Serve applications are ready to serve requests
|
### Description
It'd be useful for the Ray Serve API to allow users to configure settings such as custom timeouts for when applications are ready to serve requests.
### Use case
This would be useful for scenarios such as: https://github.com/ray-project/enhancements/pull/58#discussion_r1968439611, where a large number of non-declaratively created applications which frequently update may make it difficult for the controller to find a state where all Serve apps are in a "Ready" state.
|
open
|
2025-02-25T03:39:23Z
|
2025-02-25T17:29:47Z
|
https://github.com/ray-project/ray/issues/50883
|
[
"enhancement",
"triage",
"serve"
] |
ryanaoleary
| 0
|
wandb/wandb
|
tensorflow
| 8,919
|
[Q]: Train large model with LoRA and PP using One Single Sweep agent
|
### Ask your question
We have a scenario where we train a large model sharded across 8 GPUs on the same node using a sweep config with LoRA and PP,
so is it possible to use only one sweep agent to train the entire model? If this is doable, how can we achieve it using sweeps?
Note: we tried with a sweep agent and accelerate, but it only parallelizes the agent.
|
closed
|
2024-11-19T23:20:50Z
|
2024-11-26T10:08:55Z
|
https://github.com/wandb/wandb/issues/8919
|
[
"ty:question",
"c:sweeps"
] |
rajeshitshoulders
| 3
|
flaskbb/flaskbb
|
flask
| 273
|
Deleting flagged posts fails
|
Steps to reproduce:
> Create a clean instance of flaskbb using at least sqlite or postgres databases (not tested mysql)
> Log in as any user and add a post
> Flag that post as any user
> As an admin attempt to delete the post - doesn't matter if it's the only post in the thread or a reply, doesn't matter which is flagged
Flaskbb will crash with "Internal Server Error", and trawling the [crash log](http://gjcp.net/deletecrash.txt) shows that something is attempting to set Report.post_id to None (line 60 of the log). Something similar happens when you try to delete a topic.
I suspect this has something to do with whether or not the database is going to cascade deletes, but I don't understand SQLAlchemy well enough to fix it.
|
closed
|
2017-01-14T10:05:59Z
|
2018-04-15T07:47:45Z
|
https://github.com/flaskbb/flaskbb/issues/273
|
[] |
gordonjcp
| 1
|
jacobgil/pytorch-grad-cam
|
computer-vision
| 25
|
Average Pool Layer
|
Hi,
While going through the implementation, I found that the 'avgpool' layer is ignored. In the class "ModelOutputs", in `__call__()`, the output from FeatureExtractor is simply passed to the classification part of the network, ignoring the 'avgpool' altogether.
Can you please guide me on what I am missing?
Regards,
Asim
|
closed
|
2019-12-10T06:25:29Z
|
2021-04-26T05:06:10Z
|
https://github.com/jacobgil/pytorch-grad-cam/issues/25
|
[] |
AsimJalwana
| 4
|
autogluon/autogluon
|
data-science
| 4,931
|
[BUG] Hello! I have an error message after execution of ValueError: At least some time series in train_data must have >= 97 observations. Please provide longer time series as train_data or reduce prediction_length, num_val_windows, or val_step_size.
|
**Bug Report Checklist**
<!-- Please ensure at least one of the following to help the developers troubleshoot the problem: -->
- [ ] I provided code that demonstrates a minimal reproducible example. <!-- Ideal, especially via source install -->
- [ ] I confirmed bug exists on the latest mainline of AutoGluon via source install. <!-- Preferred -->
- [ ] I confirmed bug exists on the latest stable version of AutoGluon. <!-- Unnecessary if prior items are checked -->
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
**To Reproduce**
<!-- A minimal script to reproduce the issue. Links to Colab notebooks or similar tools are encouraged.
If the code is too long, feel free to put it in a public gist and link it in the issue: https://gist.github.com.
In short, we are going to copy-paste your code to run it and we expect to get the same result as you. -->
**Screenshots / Logs**
<!-- If applicable, add screenshots or logs to help explain your problem. -->
**Installed Versions**
<!-- Please run the following code snippet: -->
<details>
```python
# Replace this code with the output of the following:
from autogluon.core.utils import show_versions
show_versions()
```
</details>
|
closed
|
2025-02-25T13:44:02Z
|
2025-03-13T10:33:00Z
|
https://github.com/autogluon/autogluon/issues/4931
|
[
"bug: unconfirmed",
"Needs Triage"
] |
wask24
| 1
|
marimo-team/marimo
|
data-science
| 3,631
|
AI assist fails for any openai o1 model
|
### Describe the bug
Hello, very happy to see that the openai o1 model family is now available through AI assist. However when I try to use any of the o1 models, I see this error:
`openai.BadRequestError: Error code: 400 - {'error': {'message': "Unsupported value: 'messages[0].role' does not support 'system' with this model.", 'type': 'invalid_request_error', 'param': 'messages[0].role', 'code': 'unsupported_value'}}`
I can confirm that I don't have any custom instructions in my marimo settings, and I believe o1 models don't support system instructions, so I assume marimo is passing system instructions by default, which triggers the issue.
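For context, a rough sketch of the kind of workaround implied above (not marimo's actual code; the model name and messages are placeholders): fold any system prompt into the first user message before calling an o1 model.
```python
model = "o1-mini"  # placeholder
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain this cell."},
]
if model.startswith("o1"):
    # o1 models reject the "system" role, so merge it into the first user turn.
    system_parts = [m["content"] for m in messages if m["role"] == "system"]
    messages = [m for m in messages if m["role"] != "system"]
    if system_parts and messages:
        messages[0]["content"] = "\n".join(system_parts) + "\n\n" + messages[0]["content"]
```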
### Environment
<details>
```
{
"marimo": "0.10.17",
"OS": "Darwin",
"OS Version": "24.1.0",
"Processor": "i386",
"Python Version": "3.9.0",
"Binaries": {
"Browser": "132.0.6834.112",
"Node": "--"
},
"Dependencies": {
"click": "8.1.7",
"docutils": "0.17.1",
"itsdangerous": "2.2.0",
"jedi": "0.19.2",
"markdown": "3.7",
"narwhals": "1.18.4",
"packaging": "24.2",
"psutil": "5.9.8",
"pygments": "2.18.0",
"pymdown-extensions": "10.12",
"pyyaml": "6.0.2",
"ruff": "0.8.3",
"starlette": "0.27.0",
"tomlkit": "0.13.2",
"typing-extensions": "4.12.2",
"uvicorn": "0.34.0",
"websockets": "14.1"
},
"Optional Dependencies": {
"altair": "5.5.0",
"pandas": "2.2.3",
"pyarrow": "18.1.0"
},
"Experimental Flags": {
"chat_sidebar": true,
"tracing": false
}
}
```
</details>
### Code to reproduce
_No response_
|
closed
|
2025-01-30T20:39:20Z
|
2025-03-06T18:58:13Z
|
https://github.com/marimo-team/marimo/issues/3631
|
[
"bug",
"upstream"
] |
awooler
| 4
|
seleniumbase/SeleniumBase
|
pytest
| 3,034
|
uc_gui_click_captcha Method Fails to Locate and Bypass CAPTCHA on Genius.com
|
Hello,
I'm encountering an issue with the uc_gui_click_captcha method when trying to bypass the CAPTCHA on the website https://genius.com/. The method seems unable to correctly locate the CAPTCHA element on the page, resulting in a failure to bypass it.
Below is the code I used to test this:
```
from seleniumbase import SB
with SB(uc=True) as sb:
    url = 'https://genius.com/api/search/multi?per_page=5&q=&q=creepin%20the%20weeknd&per_page=5'
    sb.open(url)
    sb.uc_gui_click_captcha()
    print(sb.get_page_source())
```
The expected behavior is that the uc_gui_click_captcha method would correctly find and interact with the CAPTCHA to bypass it, but instead, it doesn't seem to detect the CAPTCHA at all.
Could you please look into this issue and provide guidance on how to resolve it? I appreciate any help you can offer.
Thank you!
https://github.com/user-attachments/assets/99bc3fb6-536b-41a6-a1c9-6ecf1146e8aa
|
closed
|
2024-08-17T06:11:14Z
|
2024-08-21T17:32:27Z
|
https://github.com/seleniumbase/SeleniumBase/issues/3034
|
[
"invalid usage",
"can't reproduce",
"UC Mode / CDP Mode"
] |
RoseGoli
| 6
|
iperov/DeepFaceLab
|
machine-learning
| 5,674
|
"OOM when allocating tensor" on pre-trained model
|
THIS IS NOT TECH SUPPORT FOR NEWBIE FAKERS
POST ONLY ISSUES RELATED TO BUGS OR CODE
Properly run 6) train SAEHD.bat
Out of memory error. I probably wouldn't write this issue if I had not been able to execute it once. But whatever I try, I can't get it working again.
Error while executing 6) train SAEHD
Error details displayed in console below.
[console.txt](https://github.com/iperov/DeepFaceLab/files/11518770/console.txt)
I am using pre trained model called 'clinton'
https://mega.nz/file/3k40EBTI#Co3XQ8thnvykhgqctW-UmU6JHhioQXWKHkhB6QALdCI
*Describe, in some detail, the steps you tried that resulted in the behavior described above.*
1. Extract src and dst images, extract and review facesets
2. Run 7) merge SAEHD.bat
## Other relevant information
- **GPU:** GTX2080
- **Command lined used (if not specified in steps to reproduce)**: Check attached console.
- **Operating system and version:** Windows 10
- **Python version:** prebuilt
|
closed
|
2023-05-19T16:37:46Z
|
2023-05-22T18:46:59Z
|
https://github.com/iperov/DeepFaceLab/issues/5674
|
[] |
gb2111
| 0
|
jupyter/nbgrader
|
jupyter
| 1,130
|
Implement a generic API to support custom exchanges
|
After discussions at the Edinburgh hackathon, I think it makes sense to at least implement a generic API for the exchange which would then support custom exchanges. While nbgrader itself will still use the same filesystem-based exchange (and wait for hubauth to be ready), this would enable others to experiment with alternate exchange implementations that work better for their setups.
@perllaghu @BertR you mentioned you have done something close to this already---would you be interested in porting what you have back to nbgrader? I am thinking it might make sense to have a plugin similar to the one added by #1093.
|
open
|
2019-06-01T14:50:30Z
|
2022-06-23T10:20:10Z
|
https://github.com/jupyter/nbgrader/issues/1130
|
[
"enhancement",
"refactor"
] |
jhamrick
| 2
|
xorbitsai/xorbits
|
numpy
| 159
|
ENH: Support installing xorbits with AWS dependencies via pip install
|
Note that the issue tracker is NOT the place for general support. For
discussions about development, questions about usage, or any general questions,
contact us on https://discuss.xorbits.io/.
Enhance setup.py to make it support `pip install 'xorbits[aws]'`.
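A minimal sketch of how such an extra could be declared in setup.py (the dependency names are assumptions, not the actual xorbits requirements):
```python
from setuptools import setup

setup(
    name="xorbits",
    version="0.0.0",
    extras_require={
        # Hypothetical AWS-related dependencies enabling `pip install 'xorbits[aws]'`
        "aws": ["fsspec", "s3fs"],
    },
)
```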
|
closed
|
2023-01-11T08:42:30Z
|
2023-01-17T03:12:14Z
|
https://github.com/xorbitsai/xorbits/issues/159
|
[
"enhancement"
] |
ChengjieLi28
| 0
|
davidteather/TikTok-Api
|
api
| 429
|
[INSTALLATION] - ERROR: Failed building wheel for greenlet
|
**Describe the error**
```
ERROR: Failed building wheel for greenlet
Running setup.py clean for greenlet
Failed to build greenlet
Installing collected packages: urllib3, typing-extensions, selenium, pyee, idna, greenlet, chardet, certifi, selenium-stealth, requests, playwright, TikTokApi
Running setup.py install for greenlet ... error
ERROR: Command errored out with exit status 1:
```
**Desktop (please complete the following information):**
- OS: Ubuntu 18.04
- TikTokApi Version [e.g. 3.8.6] - if out of date upgrade before posting an issue
**Additional context**
I tried to install TikTokApi in a virtualenv, however I have this issue with the installation. I use Python 3.6.9.
|
closed
|
2020-12-18T09:07:49Z
|
2020-12-20T01:43:00Z
|
https://github.com/davidteather/TikTok-Api/issues/429
|
[
"installation_help"
] |
mikegrep
| 2
|
PaddlePaddle/PaddleNLP
|
nlp
| 10,018
|
[Question]: Local deployment issue with the PP-UIE model
|
### Please describe your question
I am new to this and want to deploy the newly released PP-UIE model locally. Following the instructions, I created a new Python file with the following code:
```python
from pprint import pprint
from paddlenlp import Taskflow
schema = ['时间', '选手', '赛事名称'] # Define the schema for entity extraction
ie = Taskflow('information_extraction',
              schema= ['时间', '选手', '赛事名称'],
              schema_lang="zh",
              batch_size=1,
              model="paddlenlp/PP-UIE-0.5B",
              precision='float16')
pprint(ie("2月8日上午北京冬奥会自由式滑雪女子大跳台决赛中中国选手谷爱凌以188.25分获得金牌!")) # Better print results using pprint
```
The PP-UIE-0.5B model cannot be downloaded directly to my machine, so I used the alternative method from the official documentation and downloaded the relevant weights into the project, with the structure shown in the figure. When I run it again it still reports an error. Is something misconfigured somewhere?

|
closed
|
2025-03-06T15:50:47Z
|
2025-03-14T11:50:04Z
|
https://github.com/PaddlePaddle/PaddleNLP/issues/10018
|
[
"question"
] |
leaf19
| 1
|
feder-cr/Jobs_Applier_AI_Agent_AIHawk
|
automation
| 969
|
[DOCS]: Typo in the Usage section (collect mode)
|
### Affected documentation section
`Usage` section in README.md
### Documentation improvement description
The Usage section has a bullet point for `Using the colled mode`
Instead, it should be `Using the collect mode`
### Why is this change necessary?
Typo error
### Additional context
_No response_
|
closed
|
2024-11-29T08:00:24Z
|
2024-11-29T15:56:55Z
|
https://github.com/feder-cr/Jobs_Applier_AI_Agent_AIHawk/issues/969
|
[
"documentation"
] |
abhilashshakti
| 0
|
pytest-dev/pytest-mock
|
pytest
| 372
|
Mocked method doesn't work when a test from another file fails
|
### Context
I wrote a test in file X that mocks a method, and this test passes. But when I wrote another test in file Y and that test failed, the test in file X failed too. While debugging I noticed that the mock in file X did not work when the test in file Y failed. This only happens when I use `@pytest.mark.django_db` on the test.
### How to reproduce
I wrote a small project just to reproduce it: https://github.com/LeonardoFurtado/djangoProjectPytest
You just need to git pull it, run `pipenv shell` and `pipenv install` to setup the environment and run `pytest`.
After it you will see 2 tests failing.
Go to the `test_a_failed_test.py` file and comment the line `@pytest.mark.django_db`. Now only one test will fail.
This happens because the `create_user` mock in `conftest.py` works again after commenting that line out.
This is a simple api:
```python
class ProfileView(APIView):
def post(self, request):
auth_context = get_auth_context(1) # expected: 2 but i will mock it as 3
if auth_context == 2:
return Response(
{
"success": False,
"error": "The get_auth_context is returning the number 2 instead of the mocked value 3.",
},
status=status.HTTP_400_BAD_REQUEST,
)
elif auth_context == 3:
return Response(
{
"success": True,
"error": "Ok. The get_auth_context is returning the mocked value",
},
status=status.HTTP_200_OK,
)
```
The test below is the one that passes and uses the create_user mock; it only passes when the create_user mock works.
```python
@pytest.mark.django_db
def test_create_profile(api_client, create_user):
data = {
"name": "Naruto Uzumaki",
}
response = api_client.post('/profiles/', data=data)
assert response.status_code == 200
```
the mock:
```python
@pytest.fixture
def create_user(mocker):
return mocker.patch("core.auth.get_auth_context", return_value=3)
```
This is the test I created to fail on purpose; when you comment out `@pytest.mark.django_db`, the other test stops failing, apparently because the mock above starts to work again.
```python
@pytest.mark.django_db
def test_random_test(api_client):
"""
If you comment this test, the test on file test_auth_context will pass
"""
response = api_client.patch(
'/route/that_doesnt_exists/',
format='json'
)
assert response.status_code == 200
```
If you need more details please tell me.
|
closed
|
2023-06-27T13:50:08Z
|
2023-06-30T18:03:21Z
|
https://github.com/pytest-dev/pytest-mock/issues/372
|
[
"question"
] |
LeonardoFurtado
| 4
|
plotly/dash
|
dash
| 3,136
|
Feedback from Dash 3.0.0rc1
|
Thank you Plotly Dash team for the release candidate and making React 18.3 the new default. We did a bunch of testing, but could not find any real regressions. We did want to provide two pieces of feedback:
1. We still got the `'pkgutil.find_loader' is deprecated and slated for removal in Python 3.14; use importlib.util.find_spec() instead` DeprecationWarning. It's from `venv\Lib\site-packages\dash\dash.py:1846`
2. After the update (via Poetry) we still saw that Dash constrains flask & werkzeug to older (3.0.6 / 3.0.3) versions:
`werkzeug 3.0.6 3.1.3 The comprehensive WSGI web application library.`
`flask 3.0.3 3.1.0 A simple framework for building complex web applications.`
Everything else ran very smoothly. Thank you, Dash team.
Stack: Python 3.12
``` powershell
(venv) PS C:\Users\Python\dwsc-site> poetry update
Updating dependencies
Resolving dependencies... (53.4s)
Package operations: 1 install, 1 update, 3 removals
- Removing dash-core-components (2.0.0)
- Removing dash-html-components (2.0.0)
- Removing dash-table (5.0.0)
- Installing stringcase (1.2.0)
- Updating dash (2.18.2 -> 3.0.0rc1)
```
|
closed
|
2025-01-28T22:55:44Z
|
2025-01-29T14:09:33Z
|
https://github.com/plotly/dash/issues/3136
|
[] |
ghaarsma
| 1
|
explosion/spaCy
|
deep-learning
| 12,079
|
Building entityrecognizer from JSON is substantially faster than loading the model
|
Hello!
In my current project I'm working with two custom entity recognizers (one for names, one for places), with a large number of patterns (500k).
I noticed that adding these pipes to the standard dutch model like so:
```
nlp = sp.load("nl_core_news_lg")
# Make config file
config_names = {
"validate": False,
"overwrite_ents": True,
"ent_id_sep": "||",
}
config_places = {
"validate": False,
"overwrite_ents": False,
"ent_id_sep": "||",
}
# Create EntityRuler
ruler_names= nlp.add_pipe("entity_ruler", "names_ruler", before="ner", config=config_names)
ruler_places= nlp.add_pipe("entity_ruler", "places_ruler", after="names_ruler", config=config_places)
# add respective patterns
with nlp.select_pipes(enable="ner"):
ruler_names.add_patterns(names["namen"])
ruler_places.add_patterns(names["plaatsen"])
```
This takes around 40 seconds.
However, saving this model and loading it from disk (nlp.to_disk and spacy.load) takes about 4.5 minutes. There is not much difference if I exclude all but the 2 rulers and the ner component.
Am I doing something wrong with saving and loading, or am I missing something that could give me a speed boost?
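In case it helps frame the question, the pattern-only persistence I am considering instead of a full `nlp.to_disk` looks roughly like this (a sketch; I have not verified that it is actually faster):
```python
import srsly

# save only the patterns, one JSON object per line
ruler_names.to_disk("names_patterns.jsonl")
ruler_places.to_disk("places_patterns.jsonl")

# later: rebuild the pipeline and re-add the patterns
nlp = sp.load("nl_core_news_lg")
ruler_names = nlp.add_pipe("entity_ruler", "names_ruler", before="ner", config=config_names)
ruler_places = nlp.add_pipe("entity_ruler", "places_ruler", after="names_ruler", config=config_places)
ruler_names.add_patterns(list(srsly.read_jsonl("names_patterns.jsonl")))
ruler_places.add_patterns(list(srsly.read_jsonl("places_patterns.jsonl")))
```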
Thanks in advance for the help!
|
closed
|
2023-01-09T21:33:39Z
|
2023-01-11T10:40:50Z
|
https://github.com/explosion/spaCy/issues/12079
|
[
"perf / speed",
"feat / spanruler"
] |
SjoerdBraaksma
| 1
|
oegedijk/explainerdashboard
|
plotly
| 223
|
Citation?
|
Hello! Curious if you have a desired citation for this project.
Thanks 🙏
|
closed
|
2022-06-22T06:09:32Z
|
2023-02-18T21:05:32Z
|
https://github.com/oegedijk/explainerdashboard/issues/223
|
[] |
dylan-slack
| 1
|
allenai/allennlp
|
pytorch
| 5,559
|
Remove upper bound for nltk
|
Currently, allennlp 2.9.0 [requires](https://github.com/allenai/allennlp/blame/v2.9.0/setup.py#L58)
```
nltk <3.6.6
```
while allennlp-models 2.9.0 [requires](https://github.com/allenai/allennlp-models/blame/v2.9.0/requirements.txt#L11)
```
nltk >=3.6.5
```
This is the slimmest possible margin, and for anyone using allennlp-models, this is effectively an exact pin to `3.6.5`, and will strain (or in the worst case: break) the package solvers of either conda or pip.
For context: the change that introduced the cap here was #5540 in response to #5521 - that's completely fine as a stop-gap, but this cap should be removed again in the medium term at the latest.
CC @dirkgr @epwalsh
|
closed
|
2022-02-05T08:41:58Z
|
2022-02-10T23:16:02Z
|
https://github.com/allenai/allennlp/issues/5559
|
[
"bug"
] |
h-vetinari
| 1
|
nteract/papermill
|
jupyter
| 168
|
Create a guide for adding new languages to papermill
|
We should refactor the language extensions into a plugin system where users can easily add new language or kernel support for mapping parameters. The steps are easy enough, but a well-written page to guide contributions would make it much more open to new contributors who want to add support.
|
closed
|
2018-08-12T21:28:32Z
|
2018-08-26T03:27:04Z
|
https://github.com/nteract/papermill/issues/168
|
[
"idea"
] |
MSeal
| 0
|
modin-project/modin
|
pandas
| 6,846
|
Skip unstable Unidist `to_csv` tests
|
to_csv tests hang on Unidist on MPI in CI. We just skip those for now.
|
closed
|
2024-01-08T15:31:41Z
|
2024-01-08T17:20:13Z
|
https://github.com/modin-project/modin/issues/6846
|
[
"Testing 📈",
"P1"
] |
anmyachev
| 0
|
plotly/dash
|
dash
| 2,919
|
React 18 warning defaultProps will be removed from function components
|
Running an app with React 18 emits a warning that defaultProps for function components is deprecated.
Warning message:
`Support for defaultProps will be removed from function components in a future major release. Use JavaScript default parameters instead.`
We use the defaultProps in our own component generator, in either the generated react-docgen metadata or the TypeScript transpiler. We do not support newer versions of react-docgen (5.4.3 is our last supported version) that may have a fix for this in their code. The TypeScript transpiler needs to be adapted.
Affected core components:
- All html components.
- dcc
- Link
- Loading
- Location
- Tab
- Tooltip
|
closed
|
2024-07-11T14:57:46Z
|
2024-11-04T19:26:05Z
|
https://github.com/plotly/dash/issues/2919
|
[
"feature",
"P3",
"dash-3.0"
] |
T4rk1n
| 2
|
dropbox/sqlalchemy-stubs
|
sqlalchemy
| 145
|
sqlmypy is broken with mypy 0.750+
|
Given the sample code on [how to create a class with sqlalchemy](https://docs.sqlalchemy.org/en/13/orm/extensions/declarative/basic_use.html#basic-use):
```python
from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
class SomeClass(Base):
__tablename__ = 'some_table'
id = Column(Integer, primary_key=True)
name = Column(String(50))
```
and the sqlmypy plugin switched on in mypy.ini:
```ini
[mypy]
plugins = sqlmypy
```
Running mypy on said code fails with an internal error in the line that calls `declarative_base`:
```
$ mypy mp.py --show-traceback
mp.py:4: error: INTERNAL ERROR -- Please try using mypy master on Github:
https://mypy.rtfd.io/en/latest/common_issues.html#using-a-development-mypy-build
Please report a bug at https://github.com/python/mypy/issues
version: 0.770+dev.20f7f2dd71c21bde4d3d99f9ab69bf6670c7fa03
Traceback (most recent call last):
File "/home/dev/venv/bin/mypy", line 8, in <module>
sys.exit(console_entry())
File "/home/dev/venv/lib/python3.8/site-packages/mypy/__main__.py", line 8, in console_entry
main(None, sys.stdout, sys.stderr)
File "/home/dev/venv/lib/python3.8/site-packages/mypy/main.py", line 89, in main
res = build.build(sources, options, None, flush_errors, fscache, stdout, stderr)
File "/home/dev/venv/lib/python3.8/site-packages/mypy/build.py", line 180, in build
result = _build(
File "/home/dev/venv/lib/python3.8/site-packages/mypy/build.py", line 249, in _build
graph = dispatch(sources, manager, stdout)
File "/home/dev/venv/lib/python3.8/site-packages/mypy/build.py", line 2649, in dispatch
process_graph(graph, manager)
File "/home/dev/venv/lib/python3.8/site-packages/mypy/build.py", line 2956, in process_graph
process_stale_scc(graph, scc, manager)
File "/home/dev/venv/lib/python3.8/site-packages/mypy/build.py", line 3049, in process_stale_scc
mypy.semanal_main.semantic_analysis_for_scc(graph, scc, manager.errors)
File "/home/dev/venv/lib/python3.8/site-packages/mypy/semanal_main.py", line 78, in semantic_analysis_for_scc
process_top_levels(graph, scc, patches)
File "/home/dev/venv/lib/python3.8/site-packages/mypy/semanal_main.py", line 199, in process_top_levels
deferred, incomplete, progress = semantic_analyze_target(next_id, state,
File "/home/dev/venv/lib/python3.8/site-packages/mypy/semanal_main.py", line 326, in semantic_analyze_target
analyzer.refresh_partial(refresh_node,
File "/home/dev/venv/lib/python3.8/site-packages/mypy/semanal.py", line 350, in refresh_partial
self.refresh_top_level(node)
File "/home/dev/venv/lib/python3.8/site-packages/mypy/semanal.py", line 361, in refresh_top_level
self.accept(d)
File "/home/dev/venv/lib/python3.8/site-packages/mypy/semanal.py", line 4679, in accept
node.accept(self)
File "/home/dev/venv/lib/python3.8/site-packages/mypy/nodes.py", line 1062, in accept
return visitor.visit_assignment_stmt(self)
File "/home/dev/venv/lib/python3.8/site-packages/mypy/semanal.py", line 1931, in visit_assignment_stmt
self.apply_dynamic_class_hook(s)
File "/home/dev/venv/lib/python3.8/site-packages/mypy/semanal.py", line 2182, in apply_dynamic_class_hook
hook(DynamicClassDefContext(call, lval.name, self))
File "/home/dev/venv/lib/python3.8/site-packages/sqlmypy.py", line 193, in decl_info_hook
add_metadata_var(ctx.api, info)
File "/home/dev/venv/lib/python3.8/site-packages/sqlmypy.py", line 133, in add_metadata_var
add_var_to_class('metadata', typ, info)
File "/home/dev/venv/lib/python3.8/site-packages/sqlmypy.py", line 95, in add_var_to_class
var._fullname = info.fullname() + '.' + name
TypeError: 'str' object is not callable
mp.py:4: : note: use --pdb to drop into pdb
```
It still works in 0.740, but fails on every version from 0.750 upwards until the current master, which I used to generate the traceback above.
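For what it's worth, mypy 0.750 turned `TypeInfo.fullname` from a method into a plain string attribute, which matches the `'str' object is not callable` above; a compatibility shim in the plugin might look roughly like this (a sketch, untested):
```python
def get_fullname(info) -> str:
    """TypeInfo.fullname was a method before mypy 0.750 and a string attribute afterwards."""
    fullname = info.fullname
    return fullname() if callable(fullname) else fullname

# then, inside sqlmypy's add_var_to_class (sketch):
# var._fullname = get_fullname(info) + '.' + name
```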
|
closed
|
2020-03-11T16:31:03Z
|
2020-03-11T16:36:22Z
|
https://github.com/dropbox/sqlalchemy-stubs/issues/145
|
[] |
a-recknagel
| 1
|
Urinx/WeixinBot
|
api
| 272
|
Auto-reply feature reports an error
|
[*] Running sync line test ... ERROR:root:URLError = timed out
Succeeded
Auto-reply does not work; the error reported is: typeerror
|
open
|
2019-07-30T04:32:35Z
|
2019-07-30T04:37:48Z
|
https://github.com/Urinx/WeixinBot/issues/272
|
[] |
zhfff4869
| 1
|
BeanieODM/beanie
|
pydantic
| 959
|
[BUG] wrong condition conversion when comparing two fields
|
**Describe the bug**
I need to compare two fields with `model.field1 > model.field2`, but Beanie generates the wrong filter.
My model is like:
```python
class H(Document):
last_sms_time: datetime
next_sms_time: datetime
```
And `H.find(H.next_sms_time < H.last_sms_time).get_filter_query()` returned `{'next_sms_time': {'$lt': 'last_sms_time'}}` that compares `next_sms_time` with literal string 'last_sms_time'.
But the correct filter that will be recognized by Mongodb should be:
```javascript
{"$expr": {"$lt": ["$next_sms_time", "$last_sms_time"]}}
```
or with `$where`:
```javascript
{"$where": "this.next_sms_time < this.last_sms_time"}
```
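For now I work around it by passing a raw MongoDB filter document instead of comparing Beanie field expressions (a sketch reusing the `H` model above; as far as I can tell, `find` accepts native PyMongo-style filter dicts):
```python
async def find_overdue() -> list[H]:
    # $expr lets the server compare two fields of the same document
    return await H.find({"$expr": {"$lt": ["$next_sms_time", "$last_sms_time"]}}).to_list()
```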
**To Reproduce**
Sorry I will complete the sample later.
**Expected behavior**
See above.
**Additional context**
Nothing.
|
open
|
2024-07-02T03:56:53Z
|
2024-12-08T21:52:49Z
|
https://github.com/BeanieODM/beanie/issues/959
|
[
"feature request"
] |
Karmenzind
| 5
|
dynaconf/dynaconf
|
fastapi
| 374
|
Dynaconf can't find settings file when using pytest
|
Where do I put settings files / How do I configure Dynaconf so it will work with pytest? Currently I get the following error:
```
AttributeError: 'Settings' object has no attribute 'LOGGING'
```
It seems like Dynaconf looks for the files relative to the directory the python command was invoked from, not relative to the module itself.
Here is how my project structure (simplified) looks like:
```
.
├── ...
├── project
│ ├── app.py
│ ├── conf.py
│ ├── __init__.py
│ ├── settings.toml
└── tests
├── conftest.py
└── test_app.py
```
My `conf.py`:
```python
from dynaconf import Dynaconf
settings = Dynaconf(
envvar_prefix="DYNACONF", settings_files=["settings.toml", ".secrets.toml"],
)
```
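For context, the workaround I am considering is to resolve the settings files relative to `conf.py` itself, so the directory pytest is invoked from no longer matters (a sketch, untested):
```python
from pathlib import Path
from dynaconf import Dynaconf

BASE_DIR = Path(__file__).resolve().parent

settings = Dynaconf(
    envvar_prefix="DYNACONF",
    settings_files=[str(BASE_DIR / "settings.toml"), str(BASE_DIR / ".secrets.toml")],
)
```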
Thanks in advance
|
closed
|
2020-07-27T11:03:03Z
|
2023-03-27T20:19:16Z
|
https://github.com/dynaconf/dynaconf/issues/374
|
[
"question"
] |
trallnag
| 9
|
ScrapeGraphAI/Scrapegraph-ai
|
machine-learning
| 88
|
blockScraper implementation
|
**Is your feature request related to a problem? Please describe.**
A scraper pipeline capable of retrieving all the similar blocks in a page, e.g. on e-commerce, weather, or flight websites.
**Describe the solution you'd like**
I have found this paper https://www.researchgate.net/publication/261360247_A_Web_Page_Segmentation_Approach_Using_Visual_Semantics
It deals specifically with this issue.
**Describe alternatives you've considered**
nope
**Additional context**

|
closed
|
2024-04-27T13:05:26Z
|
2024-07-21T14:47:21Z
|
https://github.com/ScrapeGraphAI/Scrapegraph-ai/issues/88
|
[
"feature request"
] |
lurenss
| 3
|
lepture/authlib
|
flask
| 462
|
Exception if JWK set `kid` in single key and JWS does not specify `kid`
|
**Describe the bug**
This issue is related to #222
* `kid` is optional in JWK, [RFC7517](https://datatracker.ietf.org/doc/html/rfc7517#section-4.5)
* `kid` is optional in JWS header, [RFC7515](https://datatracker.ietf.org/doc/html/rfc7515#section-4.1.4)
If the JWKS specifies only one key and the JWS header has no `kid`, the following behaviour occurs:
* If the key has no `kid`, `jwt.decode` finds and uses the key.
* If the key has a `kid`, `jwt.decode` does not find the key and fails.
**Error Stacks**
```
Traceback (most recent call last):
File "test_jwk.py", line 46, in <module>
jwt.decode(
File "/home/synapse/synapse_dev/env/lib/python3.8/site-packages/authlib/jose/rfc7519/jwt.py", line 99, in decode
data = self._jws.deserialize_compact(s, load_key, decode_payload)
File "/home/synapse/synapse_dev/env/lib/python3.8/site-packages/authlib/jose/rfc7515/jws.py", line 102, in deserialize_compact
algorithm, key = self._prepare_algorithm_key(jws_header, payload, key)
File "/home/synapse/synapse_dev/env/lib/python3.8/site-packages/authlib/jose/rfc7515/jws.py", line 255, in _prepare_algorithm_key
key = key(header, payload)
File "/home/synapse/synapse_dev/env/lib/python3.8/site-packages/authlib/jose/rfc7519/jwt.py", line 91, in load_key
key_func = prepare_raw_key(key, header)
File "/home/synapse/synapse_dev/env/lib/python3.8/site-packages/authlib/jose/rfc7519/jwt.py", line 137, in prepare_raw_key
raise ValueError('Invalid JSON Web Key Set')
ValueError: Invalid JSON Web Key Set
```
**To Reproduce**
A minimal example to reproduce the behavior:
```python
from authlib.jose import jwt
# create valid JWKS
jwks_valid = {
"keys":[
{
"alg":"RS256",
"e":"AQAB",
"ext":True,
"key_ops":[
"verify"
],
"kty":"RSA",
"n":"n0O-kvAnHHTDyZntIrA6JfN7cZ7a5r6yLuZu4rotsbvdInK1fqmeatZ3ZqJgJ32WG5rljMzOYp7nqERuXKYhpPCGfAy_MiIBgi2DuVoMbCzqPyvblxZ-5GyywpAFrjuxyoYRw19JmdfeWQet8Slir8wJNt0VOxo4Ac8vdcwIkLkq64RxGtnXWYAgD1CsJQvrDYGf4dWy6Xn_6FKjrzXb1BMIHkUHh3mjFD6VbtCMv5BEt6cSD8eRr5t9GBf0Y9gEv_ZLVhFCieCPwOOnYvheLG1LWMpHBWcjfbkOYmyY5w9-NMdnrqkAwgTEwWqLqlg2_cEXUHf1aaYx4Y8HvL7Q3dCELlfjWiNJ0h0KoXDUsUclxogFlHVpQM646oXg88pprBzOSJwNZ6HASlgShGTmYSfXNyLb0S4jJdT3-_LITZc3DOq0caN-iFZeczo7s18u4Q7w6Dk16_YYvtgX-7NuXhBPGTHlMcB56_-kvzEBb3wOT3bjMXa3fphYldG407Kg89DsAqp2U7lSG2WrLmDZ9w9WcaMVnm2PiHM0RhcUZPRIWCxw5DePGVBR86TP-vGJc_K0S0MKNqCWEdlHsSd19q9VKbKiPFPrmoHqzczkAyLRi1nieYFCjWDPOoVWRjrpBeHCTB-33S1f44uGM7EogeNRdjkN6a_32P-AqIsTV8E"
}
]
}
# add `kid` to key in same JWKS
jwks_invalid = {
"keys":[
{
"kid": "DummyKey",
"alg":"RS256",
"e":"AQAB",
"ext":True,
"key_ops":[
"verify"
],
"kty":"RSA",
"n":"n0O-kvAnHHTDyZntIrA6JfN7cZ7a5r6yLuZu4rotsbvdInK1fqmeatZ3ZqJgJ32WG5rljMzOYp7nqERuXKYhpPCGfAy_MiIBgi2DuVoMbCzqPyvblxZ-5GyywpAFrjuxyoYRw19JmdfeWQet8Slir8wJNt0VOxo4Ac8vdcwIkLkq64RxGtnXWYAgD1CsJQvrDYGf4dWy6Xn_6FKjrzXb1BMIHkUHh3mjFD6VbtCMv5BEt6cSD8eRr5t9GBf0Y9gEv_ZLVhFCieCPwOOnYvheLG1LWMpHBWcjfbkOYmyY5w9-NMdnrqkAwgTEwWqLqlg2_cEXUHf1aaYx4Y8HvL7Q3dCELlfjWiNJ0h0KoXDUsUclxogFlHVpQM646oXg88pprBzOSJwNZ6HASlgShGTmYSfXNyLb0S4jJdT3-_LITZc3DOq0caN-iFZeczo7s18u4Q7w6Dk16_YYvtgX-7NuXhBPGTHlMcB56_-kvzEBb3wOT3bjMXa3fphYldG407Kg89DsAqp2U7lSG2WrLmDZ9w9WcaMVnm2PiHM0RhcUZPRIWCxw5DePGVBR86TP-vGJc_K0S0MKNqCWEdlHsSd19q9VKbKiPFPrmoHqzczkAyLRi1nieYFCjWDPOoVWRjrpBeHCTB-33S1f44uGM7EogeNRdjkN6a_32P-AqIsTV8E"
}
]
}
id_token = "eyJhbGciOiJSUzI1NiJ9.eyJpc3MiOiJodHRwczovL2xvZ2luLWRlbW8uY3VyaXR5LmlvL29hdXRoL3YyL29hdXRoLWFub255bW91cyIsImV4cCI6NDU2LCJpYXQiOjEyMywiYXVkIjoiY2xpZW50Iiwic3ViIjoidXNlciJ9.PmEkUY2nesh2hC4xuoi0-nDIMHM5X5ieWKMKUdSTG2r2rur_ti7jHggam3eYu-KLcZGCK0hobQXOV31YCDx8i8TqLMyw67lK3MnJXFWrYvdBSMt66Vpr3ZoCsjb3P0qkIydgpgTm4ObCKXc88cUxwhi6n0XZLSu0It55CCFlkjjFZey_jRn6YndMSRn65P9iXc2CJrMpNpAAeD4VgMRfHv5c-VxDhFidxf47ujMbm6Z4Bq6B6iIuwKMoPH1J3Y8tqYCkmhKrw_ExyjS4B2888ZZMc193GBlDvUcKsBcWA4cRstX0p1X4ncK_oTdiF902k7dNt8MrrONdrkOHtu4Rkq7pu-PHFstszfWKmLpFQNDpapbby2AKIqKKJhbUx5sfhALyVC-wcG51QVrQopWXU81MajCevOodV8a_SsjhOj9_ym0ReWykZ9QvauT0x5wCpJeWUnvRHh2jDJvdYK1uMqDw5kyv5yiXjNxfejKSlmuPSGQxIjCdZ9kn8UaB9T2zvskXtus5pYC6DEySRko5jJFjOJnMhdeDK1zGxfScAnMvH1npQKHH4nRN8DYMq9MiiaN7oKmckO5XhGW6qsHWwOjXSsmbbH64sCjnswHgult4MFQD3-KHow72Jbh9u6z0NJcre7fo_UN0NUdZ7-cdGdfyn2yIyZo5rbxatEQxN2E"
# valid
jwt.decode(
id_token,
key=jwks_valid,
)
# throw error
# line 46
jwt.decode(
id_token,
key=jwks_invalid,
)
```
**Expected behavior**
If no `kid` is specified in the JWS header, the only key in the JWKS should be used, regardless of whether it has a `kid` or not.
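As a user-side workaround I currently pick the single key out of the set manually before decoding, which bypasses the `kid` lookup entirely (a sketch; as far as I can tell `jwt.decode` accepts a single JWK dict as `key`):
```python
# workaround sketch: when the set holds exactly one key, pass that key directly
keys = jwks_invalid["keys"]
assert len(keys) == 1
claims = jwt.decode(id_token, key=keys[0])
claims.validate()
```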
**Environment:**
- OS: Ubuntu 20.04.4 LTS
- Python Version: 3.8.10
- Authlib Version: 1.0.1
**Additional context**
I have created the dummy `id_token` and JWKS with https://oauth.tools/ - "CREATE JWT".
The export of the data (to reproduce) is attached here.
If wanted I can try to create a PR.
[Create-JWT.zip](https://github.com/lepture/authlib/files/8768448/Create-JWT.zip)
|
closed
|
2022-05-25T06:31:46Z
|
2023-11-21T13:43:12Z
|
https://github.com/lepture/authlib/issues/462
|
[
"bug"
] |
dklimpel
| 4
|
dmlc/gluon-cv
|
computer-vision
| 1,577
|
Can't install. Win10
|
C:\Users\a8679>activate mxnet
(mxnet) C:\Users\a8679>pip install gluoncv --upgrade
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Collecting gluoncv
Downloading https://pypi.tuna.tsinghua.edu.cn/packages/5a/d7/e6429bd8d6ee2ff8123f7afc197a5a60901bbf9afc47e1f30c484cf22540/gluoncv-0.9.0-py2.py3-none-any.whl (997 kB)
|████████████████████████████████| 997 kB 6.4 MB/s
Requirement already satisfied: pandas in c:\users\a8679\anaconda3\envs\mxnet\lib\site-packages (from gluoncv) (1.1.0)
Requirement already satisfied: numpy in c:\users\a8679\anaconda3\envs\mxnet\lib\site-packages (from gluoncv) (1.16.6)
Requirement already satisfied: requests in c:\users\a8679\anaconda3\envs\mxnet\lib\site-packages (from gluoncv) (2.18.4)Requirement already satisfied: matplotlib in c:\users\a8679\anaconda3\envs\mxnet\lib\site-packages (from gluoncv) (3.3.0)
Requirement already satisfied: Pillow in c:\users\a8679\anaconda3\envs\mxnet\lib\site-packages (from gluoncv) (7.2.0)
Collecting autocfg
Downloading https://pypi.tuna.tsinghua.edu.cn/packages/94/4d/71221541472d0e45351cf9b11b0c3d55c032b80dcf8bc0763b28791e3dd3/autocfg-0.0.6-py2.py3-none-any.whl (13 kB)
Collecting autogluon.core
Downloading https://pypi.tuna.tsinghua.edu.cn/packages/5b/01/4d115e1f2b9a98b22d181ab75443479ecc4e09b77700876808edab415d37/autogluon.core-0.0.16b20201224-py3-none-any.whl (243 kB)
|████████████████████████████████| 243 kB 6.8 MB/s
Requirement already satisfied: tornado>=5.0.1 in c:\users\a8679\anaconda3\envs\mxnet\lib\site-packages (from autogluon.core->gluoncv) (6.0.4)
Requirement already satisfied: graphviz<0.9.0,>=0.8.1 in c:\users\a8679\anaconda3\envs\mxnet\lib\site-packages (from autogluon.core->gluoncv) (0.8.4)
Collecting dill==0.3.3
Downloading https://pypi.tuna.tsinghua.edu.cn/packages/52/d6/79f40d230895fa1ce3b6af0d22e0ac79c65175dc069c194b79cc8e05a033/dill-0.3.3-py2.py3-none-any.whl (81 kB)
|████████████████████████████████| 81 kB 5.1 MB/s
Collecting autograd>=1.3
Downloading https://pypi.tuna.tsinghua.edu.cn/packages/23/12/b58522dc2cbbd7ab939c7b8e5542c441c9a06a8eccb00b3ecac04a739896/autograd-1.3.tar.gz (38 kB)
Collecting ConfigSpace<=0.4.16
Downloading https://pypi.tuna.tsinghua.edu.cn/packages/3a/f9/685e3cb6e2a87e4989b83950dfbe8cecc49fd158f4ff3d1368dc62014e8d/ConfigSpace-0.4.16.tar.gz (964 kB)
|████████████████████████████████| 964 kB ...
Installing build dependencies ... error
ERROR: Command errored out with exit status 2:
command: 'c:\users\a8679\anaconda3\envs\mxnet\python.exe' 'c:\users\a8679\anaconda3\envs\mxnet\lib\site-packages\pip' install --ignore-installed --no-user --prefix 'C:\Users\a8679\AppData\Local\Temp\pip-build-env-4nrsevdw\overlay' --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.tuna.tsinghua.edu.cn/simple -- setuptools wheel numpy Cython
cwd: None
Complete output (79 lines):
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Collecting Cython
Downloading https://pypi.tuna.tsinghua.edu.cn/packages/79/f4/28002920834b292a4dece255d59860c9996d16002f1bca30d7794e0b7884/Cython-0.29.21-cp37-cp37m-win_amd64.whl (1.6 MB)
Collecting numpy
Downloading https://pypi.tuna.tsinghua.edu.cn/packages/5f/a5/24db9dd5c4a8b6c8e495289f17c28e55601769798b0e2e5a5aeb2abd247b/numpy-1.19.4-cp37-cp37m-win_amd64.whl (12.9 MB)
ERROR: Exception:
Traceback (most recent call last):
File "c:\users\a8679\anaconda3\envs\mxnet\lib\site-packages\pip\_vendor\urllib3\response.py", line 438, in _error_catcher
yield
File "c:\users\a8679\anaconda3\envs\mxnet\lib\site-packages\pip\_vendor\urllib3\response.py", line 519, in read
data = self._fp.read(amt) if not fp_closed else b""
File "c:\users\a8679\anaconda3\envs\mxnet\lib\site-packages\pip\_vendor\cachecontrol\filewrapper.py", line 62, in read
data = self.__fp.read(amt)
File "c:\users\a8679\anaconda3\envs\mxnet\lib\http\client.py", line 457, in read
n = self.readinto(b)
File "c:\users\a8679\anaconda3\envs\mxnet\lib\http\client.py", line 501, in readinto
n = self.fp.readinto(b)
File "c:\users\a8679\anaconda3\envs\mxnet\lib\socket.py", line 589, in readinto
return self._sock.recv_into(b)
File "c:\users\a8679\anaconda3\envs\mxnet\lib\ssl.py", line 1071, in recv_into
return self.read(nbytes, buffer)
File "c:\users\a8679\anaconda3\envs\mxnet\lib\ssl.py", line 929, in read
return self._sslobj.read(len, buffer)
socket.timeout: The read operation timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\users\a8679\anaconda3\envs\mxnet\lib\site-packages\pip\_internal\cli\base_command.py", line 224, in _main
status = self.run(options, args)
File "c:\users\a8679\anaconda3\envs\mxnet\lib\site-packages\pip\_internal\cli\req_command.py", line 180, in wrapper
return func(self, options, args)
File "c:\users\a8679\anaconda3\envs\mxnet\lib\site-packages\pip\_internal\commands\install.py", line 321, in run
reqs, check_supported_wheels=not options.target_dir
File "c:\users\a8679\anaconda3\envs\mxnet\lib\site-packages\pip\_internal\resolution\resolvelib\resolver.py", line 122, in resolve
requirements, max_rounds=try_to_avoid_resolution_too_deep,
File "c:\users\a8679\anaconda3\envs\mxnet\lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 445, in resolve
state = resolution.resolve(requirements, max_rounds=max_rounds)
File "c:\users\a8679\anaconda3\envs\mxnet\lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 339, in resolve
failure_causes = self._attempt_to_pin_criterion(name, criterion)
File "c:\users\a8679\anaconda3\envs\mxnet\lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 207, in _attempt_to_pin_criterion
criteria = self._get_criteria_to_update(candidate)
File "c:\users\a8679\anaconda3\envs\mxnet\lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 198, in _get_criteria_to_update
for r in self._p.get_dependencies(candidate):
File "c:\users\a8679\anaconda3\envs\mxnet\lib\site-packages\pip\_internal\resolution\resolvelib\provider.py", line 172, in get_dependencies
for r in candidate.iter_dependencies(with_requires)
File "c:\users\a8679\anaconda3\envs\mxnet\lib\site-packages\pip\_internal\resolution\resolvelib\provider.py", line 171, in <listcomp>
r
File "c:\users\a8679\anaconda3\envs\mxnet\lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py", line 257, in iter_dependencies
requires = self.dist.requires() if with_requires else ()
File "c:\users\a8679\anaconda3\envs\mxnet\lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py", line 239, in dist
self._prepare()
File "c:\users\a8679\anaconda3\envs\mxnet\lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py", line 226, in _prepare
dist = self._prepare_distribution()
File "c:\users\a8679\anaconda3\envs\mxnet\lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py", line 319, in _prepare_distribution
self._ireq, parallel_builds=True,
File "c:\users\a8679\anaconda3\envs\mxnet\lib\site-packages\pip\_internal\operations\prepare.py", line 480, in prepare_linked_requirement
return self._prepare_linked_requirement(req, parallel_builds)
File "c:\users\a8679\anaconda3\envs\mxnet\lib\site-packages\pip\_internal\operations\prepare.py", line 505, in _prepare_linked_requirement
self.download_dir, hashes,
File "c:\users\a8679\anaconda3\envs\mxnet\lib\site-packages\pip\_internal\operations\prepare.py", line 257, in unpack_url
hashes=hashes,
File "c:\users\a8679\anaconda3\envs\mxnet\lib\site-packages\pip\_internal\operations\prepare.py", line 130, in get_http_url
from_path, content_type = download(link, temp_dir.path)
File "c:\users\a8679\anaconda3\envs\mxnet\lib\site-packages\pip\_internal\network\download.py", line 163, in __call__
for chunk in chunks:
File "c:\users\a8679\anaconda3\envs\mxnet\lib\site-packages\pip\_internal\cli\progress_bars.py", line 168, in iter
for x in it:
File "c:\users\a8679\anaconda3\envs\mxnet\lib\site-packages\pip\_internal\network\utils.py", line 88, in response_chunks
decode_content=False,
File "c:\users\a8679\anaconda3\envs\mxnet\lib\site-packages\pip\_vendor\urllib3\response.py", line 576, in stream
data = self.read(amt=amt, decode_content=decode_content)
File "c:\users\a8679\anaconda3\envs\mxnet\lib\site-packages\pip\_vendor\urllib3\response.py", line 541, in read
raise IncompleteRead(self._fp_bytes_read, self.length_remaining)
File "c:\users\a8679\anaconda3\envs\mxnet\lib\contextlib.py", line 130, in __exit__
self.gen.throw(type, value, traceback)
File "c:\users\a8679\anaconda3\envs\mxnet\lib\site-packages\pip\_vendor\urllib3\response.py", line 443, in _error_catcher
raise ReadTimeoutError(self._pool, None, "Read timed out.")
pip._vendor.urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='pypi.tuna.tsinghua.edu.cn', port=443): Read timed out.
----------------------------------------
ERROR: Command errored out with exit status 2: 'c:\users\a8679\anaconda3\envs\mxnet\python.exe' 'c:\users\a8679\anaconda3\envs\mxnet\lib\site-packages\pip' install --ignore-installed --no-user --prefix 'C:\Users\a8679\AppData\Local\Temp\pip-build-env-4nrsevdw\overlay' --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.tuna.tsinghua.edu.cn/simple -- setuptools wheel numpy Cython Check the logs for full command output.
|
closed
|
2020-12-24T11:31:09Z
|
2021-05-04T16:34:39Z
|
https://github.com/dmlc/gluon-cv/issues/1577
|
[] |
StevenJokess
| 5
|
SYSTRAN/faster-whisper
|
deep-learning
| 1,100
|
Issue with segments when the model doesn't output end time for segment.
|
I have fine-tuned a Hugging Face model. This model does not output the end timestamp token for a segment.
When I run faster-whisper with this model and use "word_timestamps=False", it gives the correct number of segments and the "text" is also correct.
Say the audio file is 6 seconds long.
But when I run it with "word_timestamps=True" (and my model not emitting the end timestamp token) on the whole 6 seconds of audio, it computes the first segment correctly (say segment_end=2.54s). The code then takes a window again from 2.54s to 6s and produces some weird/unnecessary/hallucinated output.
Exactly at line 1241 in transcribe.py, the expression evaluates to True: we assign "seek" to the end of the segment/timestamp of the last word (2.54s in this case), but "seek" should have been 6.
If the model outputs the end timestamp, the expression evaluates to False and seek would be 6.
|
open
|
2024-10-28T18:54:55Z
|
2024-11-11T09:20:16Z
|
https://github.com/SYSTRAN/faster-whisper/issues/1100
|
[] |
bchinnari
| 17
|
davidteather/TikTok-Api
|
api
| 817
|
Follow endpoint? private api?
|
hey, looking to buy follow endpoint and gorgon, khronos.
https://t.me/JosecLee
|
closed
|
2022-02-04T16:20:06Z
|
2022-02-04T16:53:24Z
|
https://github.com/davidteather/TikTok-Api/issues/817
|
[
"feature_request"
] |
h4ck1th4rd
| 0
|
vitalik/django-ninja
|
rest-api
| 556
|
Automatically convert JSON keys from Django to camelCase, and received data to snake_case
|
**Is your feature request related to a problem? Please describe.**
In Python we use `snake_case`, and in JavaScript we use `camelCase`. This always causing an issue in API's, because one side of the codebase ends up not using the correct naming standards.
**Describe the solution you'd like**
[djangorestframework-camel-case](https://github.com/vbabiy/djangorestframework-camel-case) solves this problem by automatically converting the JSON keys with some middleware. If it is POST'ed data, the middleware converts the JSON keys to snake_case for python consumption.
If the data is a response, the middleware changes it from snake_case to camelCase for JavaScript consumption.
Would be great if there was something like this for Django Ninja.
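For illustration, the key-conversion step such a middleware or renderer would need is roughly the following (a sketch of the conversion helpers only, not a complete Django Ninja integration):
```python
import re


def snake_to_camel(key: str) -> str:
    head, *tail = key.split("_")
    return head + "".join(part.title() for part in tail)


def camel_to_snake(key: str) -> str:
    return re.sub(r"(?<!^)(?=[A-Z])", "_", key).lower()


def convert_keys(data, convert):
    """Recursively rename dict keys in nested JSON-like data."""
    if isinstance(data, dict):
        return {convert(key): convert_keys(value, convert) for key, value in data.items()}
    if isinstance(data, list):
        return [convert_keys(item, convert) for item in data]
    return data


# outgoing response payloads -> camelCase, incoming request payloads -> snake_case
response_body = convert_keys({"user_name": "Ada", "group_ids": [1, 2]}, snake_to_camel)
request_body = convert_keys({"userName": "Ada"}, camel_to_snake)
```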
|
closed
|
2022-09-07T12:34:26Z
|
2023-01-13T10:08:45Z
|
https://github.com/vitalik/django-ninja/issues/556
|
[] |
mangelozzi
| 1
|
pyg-team/pytorch_geometric
|
deep-learning
| 9,118
|
Sparse version of `nn.dense.dense_mincut_pool`
|
### 🚀 The feature, motivation and pitch
The [nn.dense.dense_mincut_pool](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.dense.dense_mincut_pool.html#torch_geometric.nn.dense.dense_mincut_pool) operator requires the input adjacency matrix to be dense. This requirement does not scale well as the matrix size grows. We could use sparse matrix operations to address this issue.
### Alternatives
### Additional context
_No response_
|
open
|
2024-03-28T09:13:17Z
|
2024-04-26T12:39:12Z
|
https://github.com/pyg-team/pytorch_geometric/issues/9118
|
[
"feature"
] |
xiaohan2012
| 6
|
AUTOMATIC1111/stable-diffusion-webui
|
pytorch
| 15,504
|
[Bug]: after the plug-in is installed, the webui cannot be started
|
### Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
File "/Users/zhaoyaoyang/stable-diffusion-webui/launch.py", line 48, in <module>
main()
File "/Users/zhaoyaoyang/stable-diffusion-webui/launch.py", line 39, in main
prepare_environment()
File "/Users/zhaoyaoyang/stable-diffusion-webui/modules/launch_utils.py", line 433, in prepare_environment
run_extensions_installers(settings_file=args.ui_settings_file)
File "/Users/zhaoyaoyang/stable-diffusion-webui/modules/launch_utils.py", line 275, in run_extensions_installers
run_extension_installer(path)
File "/Users/zhaoyaoyang/stable-diffusion-webui/modules/launch_utils.py", line 236, in run_extension_installer
stdout = run(f'"{python}" "{path_installer}"', errdesc=f"Error running install.py for extension {extension_dir}", custom_env=env).strip()
File "/Users/zhaoyaoyang/stable-diffusion-webui/modules/launch_utils.py", line 103, in run
result = subprocess.run(**run_kwargs)
File "/usr/local/Cellar/python@3.10/3.10.14/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 505, in run
stdout, stderr = process.communicate(input, timeout=timeout)
File "/usr/local/Cellar/python@3.10/3.10.14/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 1154, in communicate
stdout, stderr = self._communicate(input, endtime, timeout)
File "/usr/local/Cellar/python@3.10/3.10.14/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 2021, in _communicate
ready = selector.select(timeout)
File "/usr/local/Cellar/python@3.10/3.10.14/Frameworks/Python.framework/Versions/3.10/lib/python3.10/selectors.py", line 416, in select
fd_event_list = self._selector.poll(timeout)
### Steps to reproduce the problem
(same traceback as under "What happened?" above)
### What should have happened?
(same traceback as under "What happened?" above)
### What browsers do you use to access the UI ?
Google Chrome
### Sysinfo
(same traceback as under "What happened?" above)
### Console logs
```Shell
File "/Users/zhaoyaoyang/stable-diffusion-webui/launch.py", line 48, in <module>
main()
File "/Users/zhaoyaoyang/stable-diffusion-webui/launch.py", line 39, in main
prepare_environment()
File "/Users/zhaoyaoyang/stable-diffusion-webui/modules/launch_utils.py", line 433, in prepare_environment
run_extensions_installers(settings_file=args.ui_settings_file)
File "/Users/zhaoyaoyang/stable-diffusion-webui/modules/launch_utils.py", line 275, in run_extensions_installers
run_extension_installer(path)
File "/Users/zhaoyaoyang/stable-diffusion-webui/modules/launch_utils.py", line 236, in run_extension_installer
stdout = run(f'"{python}" "{path_installer}"', errdesc=f"Error running install.py for extension {extension_dir}", custom_env=env).strip()
File "/Users/zhaoyaoyang/stable-diffusion-webui/modules/launch_utils.py", line 103, in run
result = subprocess.run(**run_kwargs)
File "/usr/local/Cellar/python@3.10/3.10.14/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 505, in run
stdout, stderr = process.communicate(input, timeout=timeout)
File "/usr/local/Cellar/python@3.10/3.10.14/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 1154, in communicate
stdout, stderr = self._communicate(input, endtime, timeout)
File "/usr/local/Cellar/python@3.10/3.10.14/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 2021, in _communicate
ready = selector.select(timeout)
File "/usr/local/Cellar/python@3.10/3.10.14/Frameworks/Python.framework/Versions/3.10/lib/python3.10/selectors.py", line 416, in select
fd_event_list = self._selector.poll(timeout)
```
### Additional information
_No response_
|
open
|
2024-04-13T16:47:04Z
|
2024-04-13T23:41:11Z
|
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15504
|
[
"bug-report"
] |
sasahuala
| 1
|
ray-project/ray
|
python
| 51,436
|
CI test linux://python/ray/data:test_metadata_provider is flaky
|
CI test **linux://python/ray/data:test_metadata_provider** is consistently_failing. Recent failures:
- https://buildkite.com/ray-project/postmerge/builds/8951#0195a5e5-5828-4bf0-9cc8-099cfc082058
- https://buildkite.com/ray-project/postmerge/builds/8951#0195a5c7-4b37-4311-b6a4-7380b0bb8cc1
- https://buildkite.com/ray-project/postmerge/builds/8942#0195a509-7acd-49d7-83d2-b5ade49c041b
- https://buildkite.com/ray-project/postmerge/builds/8942#0195a4dc-9f3f-426c-9a16-3756bdd6bc8e
DataCaseName-linux://python/ray/data:test_metadata_provider-END
Managed by OSS Test Policy
|
closed
|
2025-03-17T21:49:49Z
|
2025-03-21T19:46:20Z
|
https://github.com/ray-project/ray/issues/51436
|
[
"bug",
"triage",
"data",
"flaky-tracker",
"ray-test-bot",
"ci-test",
"weekly-release-blocker",
"stability"
] |
can-anyscale
| 21
|
pytorch/pytorch
|
deep-learning
| 149,324
|
Unguarded Usage of Facebook Internal Code?
|
### 🐛 Describe the bug
There is a [reference](https://github.com/pytorch/pytorch/blob/c7c3e7732443d7994303499bcb01781c9d59ab58/torch/_inductor/fx_passes/group_batch_fusion.py#L25) to `import deeplearning.fbgemm.fbgemm_gpu.fb.inductor_lowerings`, which we believe to be a Facebook-internal Python module, based on the description of this [commit](https://github.com/pytorch/benchmark/commit/e26cd75d042e880676a5f21873f2aaa72e178be1).
It looks like if the module isn't found, `torch` disables some `fbgemm` inductor lowerings.
Is this expected for this code snippet, or should this rely on publicly available `fbgemm`?
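For reference, the guarded-import pattern we are describing looks roughly like this (a sketch of the shape of the code, not the exact torch source):
```python
try:
    # Facebook-internal module; not available in open-source builds
    import deeplearning.fbgemm.fbgemm_gpu.fb.inductor_lowerings  # noqa: F401
    has_fbgemm = True
except ImportError:
    has_fbgemm = False

# fbgemm-specific fusion lowerings would only be registered when has_fbgemm is True
```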
### Versions
Looks like this module is used as described above since torch's transition to open-source (at least).
cc @chauhang @penguinwu
|
open
|
2025-03-17T15:54:24Z
|
2025-03-17T20:29:04Z
|
https://github.com/pytorch/pytorch/issues/149324
|
[
"triaged",
"module: third_party",
"oncall: pt2"
] |
BwL1289
| 1
|
pydata/xarray
|
pandas
| 9,830
|
open_mfdataset and concat of datasets consisting of single data point along a dimension in datetime64 format on python 3.8.5
|
### What is your issue?
The functionality described in the title didn't work on Python 3.8.5, although it somehow worked on my other computer with Python 3.8.1; I tried matching up the xarray versions.
It threw the following warning:
/tmp/ipykernel_175396/3369060392.py:11: SerializationWarning: saving variable time_UTC with floating point data as an integer dtype without any _FillValue to use for NaNs
ds_combined.to_netcdf(os.path.join(netcdf_path, "combined.nc"), engine="netcdf4")
|
closed
|
2024-11-26T23:16:02Z
|
2024-11-27T00:29:40Z
|
https://github.com/pydata/xarray/issues/9830
|
[
"needs triage"
] |
nina-caldarella
| 2
|
serengil/deepface
|
machine-learning
| 1,070
|
error generated on find function call
|
Hello,
I'd like to find a face in an image database. This face is detected on one of the images in the database using the DeepFace.extract_faces() function (I've saved it).
To do this, I used the DeepFace.find() function and it generated an error.
Here is my script :
```
import cv2
import os
from deepface import DeepFace
img_path = "C:\\Users\\me\\Documents\\deepface\\photos\\1.jpg"
img = cv2.imread(img_path)
face_objs = DeepFace.extract_faces(
img_path=img_path,
target_size=(224, 224),
detector_backend='retinaface'
)
output_folder = "C:\\Users\\me\\Documents\\deepface\\faces"
os.makedirs(output_folder, exist_ok=True)
for i, face_obj in enumerate(face_objs):
x, y, w, h = face_obj['facial_area']['x'], face_obj['facial_area']['y'], face_obj['facial_area']['w'], face_obj['facial_area']['h']
cv2.rectangle(img, (x, y), (x+w, y+h), (0, 255, 0), 2)
face_img_path = os.path.join(output_folder, f"face_{i+1}.jpg")
face_img = img[y:y+h, x:x+w]
cv2.imwrite(face_img_path, face_img)
dfs = DeepFace.find(img_path="C:\\Users\\me\\Documents\\deepface\\faces\\face_5.jpg", db_path="C:\\Users\\me\\Documents\\deepface\\photos")
```
and the error was :
```
C:\Users\me\Documents\deepface>py tmp.py.txt
2024-03-08 15:59:26.240677: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
WARNING:tensorflow:From C:\Users\me\AppData\Local\Programs\Python\Python311\Lib\site-packages\keras\src\losses.py:2976: The name tf.losses.sparse_softmax_cross_entropy is deprecated. Please use tf.compat.v1.losses.sparse_softmax_cross_entropy instead.
2024-03-08 15:59:30.159645: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: SSE SSE2 SSE3 SSE4.1 SSE4.2 AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
Traceback (most recent call last):
File "C:\Users\me\Documents\deepface\script.py.txt", line 32, in <module>
dfs = DeepFace.find(img_path="C:\\Users\\me\\Documents\\deepface\\faces\\face_5.jpg", db_path="C:\\Users\\me\\Documents\\script\\photos")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\me\AppData\Local\Programs\Python\Python311\Lib\site-packages\deepface\DeepFace.py", line 294, in find
return recognition.find(
^^^^^^^^^^^^^^^^^
File "C:\Users\me\AppData\Local\Programs\Python\Python311\Lib\site-packages\deepface\modules\recognition.py", line 185, in find
source_objs = detection.extract_faces(
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\me\AppData\Local\Programs\Python\Python311\Lib\site-packages\deepface\modules\detection.py", line 99, in extract_faces
raise ValueError(
ValueError: Face could not be detected in C:\Users\me\Documents\deepface\faces\face_5.jpg.Please confirm that the picture is a face photo or consider to set enforce_detection param to False.
```
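For reference, here is the variant I plan to try next, following the hint at the end of the error message (the saved crop is already a face, so detection should not need to be enforced on it); untested:
```python
dfs = DeepFace.find(
    img_path="C:\\Users\\me\\Documents\\deepface\\faces\\face_5.jpg",
    db_path="C:\\Users\\me\\Documents\\deepface\\photos",
    enforce_detection=False,
)
```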
any idea ?
|
closed
|
2024-03-08T15:09:58Z
|
2024-03-08T16:00:46Z
|
https://github.com/serengil/deepface/issues/1070
|
[
"question"
] |
Minyar2004
| 5
|
aminalaee/sqladmin
|
asyncio
| 515
|
Custom columns?
|
### Checklist
- [x] There are no similar issues or pull requests for this yet.
### Is your feature related to a problem? Please describe.
In the admin interface, I want to have extra table columns in addition to model properties. For example, if there is a User model in relationship with UserGroups, I want to have each User row with the number of groups in addition to the groups.
Like this:
|id|groups|group count|
|-|-|-|
|123|(A) (B)|2|
### Describe the solution you would like.
We can keep the current API but extend `column_list` and `column_formatters`:
```python
class UserAdmin(ModelView, model=User):
column_list = [User.name, User.groups, 'group_count']
column_formatters = {'group_count': lambda m, a: len(User.groups)}
```
### Describe alternatives you considered
- Format model properties which are not in use (not always possible)
- Add the needed info to the existing columns (not convenient to use such tables)
Also, in terms of my example, if I want to show only the number of groups but not the list, I can only format the `groups` column to have the groups loaded by sqladmin. This means putting a single value in a relation column, so sqladmin adds unwanted links and brackets:

### Additional context
Brilliant project! Thank you.
|
closed
|
2023-06-10T19:16:29Z
|
2023-07-15T10:33:51Z
|
https://github.com/aminalaee/sqladmin/issues/515
|
[
"good first issue"
] |
tm-a-t
| 1
|
developmentseed/lonboard
|
data-visualization
| 357
|
Update shiny example on next pyshinywidgets release
|
Ref https://github.com/posit-dev/py-shinywidgets/issues/133
At that point we should probably also make an issue on shiny's docs to see if they want to link back to us, because they're already linking to pydeck.
|
open
|
2024-02-12T19:45:54Z
|
2024-02-12T19:46:34Z
|
https://github.com/developmentseed/lonboard/issues/357
|
[] |
kylebarron
| 0
|
QingdaoU/OnlineJudge
|
django
| 254
|
How to rejudge while contest is running
|
closed
|
2019-07-29T19:02:35Z
|
2020-12-10T18:28:32Z
|
https://github.com/QingdaoU/OnlineJudge/issues/254
|
[] |
beiyanpiki
| 0
|
|
AirtestProject/Airtest
|
automation
| 1,257
|
Running an Airtest Windows test project from Jenkins fails
|
Running the Airtest Windows UI automation test project on the test machine through a Jenkins slave node started on Windows fails.
```
File "d:\jenkins\workspace\ui_test\tool\tools.py", line 402, in login_and_enter
touch((0.5, 0.2))
File "D:\python\lib\site-packages\airtest\utils\logwraper.py", line 134, in wrapper
res = f(*args, **kwargs)
File "D:\python\lib\site-packages\airtest\core\api.py", line 373, in touch
pos = G.DEVICE.touch(pos, **kwargs) or pos
File "D:\python\lib\site-packages\airtest\core\win\win.py", line 300, in touch
start = self._action_pos(win32api.GetCursorPos())
pywintypes.error: (5, 'GetCursorPos', '拒绝访问。')
```
Running the test project directly on the Windows test machine works fine; running it through the Jenkins job produces the error above.
What I have tried:
1. Started the Jenkins slave process with administrator privileges; still reproducible.
2. Launched the tests with the absolute path to python.exe; still reproducible.
3. Confirmed that the user Jenkins runs as is the same user who logs into the Windows machine and runs the tests directly.
Question: is this because the Jenkins runtime does not support Windows GUI system operations, or is some other special setting required?
**Python version:** `python3.9.6`
**Airtest version:** `1.3.5`
**Device:**
- OS: Windows 10
|
closed
|
2024-10-16T04:07:43Z
|
2024-10-17T08:48:29Z
|
https://github.com/AirtestProject/Airtest/issues/1257
|
[] |
Alipipe
| 1
|
xzkostyan/clickhouse-sqlalchemy
|
sqlalchemy
| 208
|
visit_primary_key_constraint() fails table creation with pk and fk
|
**Describe the bug**
When building the SQL CREATE query for a table with both a `ForeignKey` and a `primary_key`, we can get a wrong query that contains an unneeded comma.
```
class UserCluster(utl_ch.Base):
__tablename__ = 'user_clusters'
__table_args__ = (engines.MergeTree(order_by='id'),)
id = Column(String(32), primary_key=True)
request_id = Column(String(32), ForeignKey('clusterization_requests.id'))
cluster_id = Column(String)
Base = get_declarative_base()
engine = create_engine('clickhouse://test:test@localhost:62180/test')
Base.metadata.create_all(engine)
```
`create_all` gives wrong SQL query:
```
CREATE TABLE user_clusters (
id FixedString(32),
request_id FixedString(32),
cluster_id String,
, <- wrong comma
FOREIGN KEY(request_id) REFERENCES clusterization_requests (id)
) ENGINE = MergeTree()
ORDER BY id
```
which then fall all table creation in clickhouse by error:
```
> raise DatabaseException(orig)
E clickhouse_sqlalchemy.exceptions.DatabaseException: Orig exception:
E Code: 62. DB::Exception: Syntax error: failed at position 149 (',') (line 8, col 2):
E ,
E FOREIGN KEY(request_id) REFERENCES clusterization_requests (id)
E ) ENGINE = MergeTree()
E ORDER BY id
E
E . Expected one of: table property (column, index, constraint) declaration, INDEX, CONSTRAINT, PROJECTION, PRIMARY KEY, column declaration, identifier. (SYNTAX_ERROR) (version 22.6.4.35 (official build))
```
**To Reproduce**
```
class UserCluster(utl_ch.Base):
__tablename__ = 'user_clusters'
__table_args__ = (engines.MergeTree(order_by='id'),)
id = Column(String(32), primary_key=True)
request_id = Column(String(32), ForeignKey('clusterization_requests.id'))
cluster_id = Column(String)
Base = get_declarative_base()
engine = create_engine('clickhouse://test:test@localhost:62180/test')
Base.metadata.create_all(engine)
```
After create_all, when creating all tables, the code builds a list of constraint strings in `sqlalchemy.sql.compiler.create_table_constraints()`. That **list contains an empty string**, which comes from `visit_primary_key_constraint()`, which for a primary key **returns `''` (an empty string)**:
```
#.../clickhouse_sqlalchemy/drivers/compilers/ddlcompiler.py, line 80
def visit_primary_key_constraint(self, constraint, **kw):
# Do not render PKs.
return ''
```
Judging by the code in `sqlalchemy.sql.compiler.create_table_constraints()`, line 4507, it looks like we need to return `None` from `visit_primary_key_constraint()` instead of an empty string to avoid the unneeded comma; change the code to:
```
#.../clickhouse_sqlalchemy/drivers/compilers/ddlcompiler.py, line 80
def visit_primary_key_constraint(self, constraint, **kw):
# Do not render PKs.
return None
```
**Expected behavior**
No comma in sql query.
**Versions**
- SqlAlchemy 1.4.40
- Version of package 0.2.2
- Python 3.9
|
closed
|
2022-10-25T09:35:23Z
|
2022-11-24T20:53:31Z
|
https://github.com/xzkostyan/clickhouse-sqlalchemy/issues/208
|
[] |
SomeAkk
| 3
|
jupyter-book/jupyter-book
|
jupyter
| 1,341
|
Hide toc by default
|
I've searched the docs here, as well as the Sphinx template format that Jupyter books is based on, but cannot seem to find any info on setting the TOC to be collapsed by default. (So it would be a hamburger, not an arrow on the top left of the landing page.)
I also tried to look through the available html tags, as well as the generated html for the project, but to no avail.
Since this seems to be not documented, I can't point or link to any place that reveals the problem.
Have I missed something, or should this be tagged as a feature request?
|
open
|
2021-05-26T15:47:42Z
|
2021-05-26T15:47:43Z
|
https://github.com/jupyter-book/jupyter-book/issues/1341
|
[
"documentation"
] |
bHimes
| 1
|
graphistry/pygraphistry
|
jupyter
| 197
|
[DOCS] Multi-gpu demo
|
Demo of multi-GPU with RAPIDS.ai + Graphistry
- [ ] embeddable in Graphistry getting started dashboard: smart defaults listed
- [ ] show both out-of-core + multi-gpu
- [ ] encapsulated: no need to register etc. to get data
- [ ] tech: dask_cudf, bsql, & cuGraph. OK if split into multiple notebooks
- [ ] ultimately: x-post w/ rapids+blazing repos?
The taxi dataset is interesting. Maybe run that -> UMAP / k-nn -> viz?
Or maybe something better that is open access and more inherently graph-y, like app logs?
|
open
|
2021-01-13T22:09:56Z
|
2021-01-13T22:10:34Z
|
https://github.com/graphistry/pygraphistry/issues/197
|
[
"help wanted",
"docs",
"good-first-issue"
] |
lmeyerov
| 0
|
tflearn/tflearn
|
data-science
| 905
|
Broken releases and dependency management
|
There are a couple of problems that have been appearing from the very beginning:
- awful dependency management. Let users know which versions of h5py and scipy you support.
- `segmentation fault 11` during import for the following combination of dependencies:
```
bleach==1.5.0
h5py==2.7.1
html5lib==0.9999999
Markdown==2.6.9
numpy==1.13.1
olefile==0.44
Pillow==4.2.1
protobuf==3.4.0
scipy==0.19.1
six==1.10.0
tensorflow==1.3.0
tensorflow-tensorboard==0.1.6
tflearn==0.3.2
Werkzeug==0.12.2
```
Segmentation fault happens on mac os, ubuntu LTS.
More info here: https://medium.com/@denismakogon/compute-vision-hard-times-with-tflearn-4765841e90bf
|
open
|
2017-09-15T11:14:54Z
|
2017-11-19T14:42:56Z
|
https://github.com/tflearn/tflearn/issues/905
|
[] |
denismakogon
| 1
|
agronholm/anyio
|
asyncio
| 536
|
On asyncio, `Event.set()` sometimes fails to notify all waiting tasks
|
hi!
if:
1. task A is awaiting `Event.wait()`
1. `Event.set()` is called
1. the `wait()`'s cancel scope is cancelled before the event loop schedules A
then, on the asyncio backend only, task A's `wait()` raises cancelled instead of getting notified of the event.
in contrast, on the trio backend (and on pure trio), `Event.set()` still notifies all waiters (including task A) in this situation. trio does this by implementing `Event.wait` via `wait_task_rescheduled`.
here is a small reproducer: `test_event_wait_before_set_before_cancel` in 65d74e483cbcb161223bc9216b020156104f09c0. `assert wait_woke` fails on asyncio only.
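for readers who don't want to dig through the commit, a rough standalone sketch of the same race (not the actual test, just the shape of it; this assumes anyio ≥ 3, where `Event.set()` is synchronous):
```python
import anyio

async def main():
    event = anyio.Event()
    wait_woke = False

    async def waiter():
        nonlocal wait_woke
        await event.wait()
        wait_woke = True

    async with anyio.create_task_group() as tg:
        tg.start_soon(waiter)
        await anyio.sleep(0.1)        # let waiter reach event.wait()
        event.set()                   # notify the waiter...
        tg.cancel_scope.cancel()      # ...then cancel before it is rescheduled

    assert wait_woke  # holds on the trio backend, fails on asyncio

anyio.run(main)
```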
this problem is also the direct cause of another bug: `MemoryObjectSendStream.send(item)` can raise cancelled after `item` has been delivered to a receiver![^1] this `MemoryObjectSendStream.send` issue is just the `send_event.wait()` case to #146's `receive_event.wait()` case, i believe—the underlying issue for both seems to be this `Event.wait()` problem[^2].
a couple relevant comments from njsmith: https://github.com/agronholm/anyio/issues/146#issuecomment-673754182, https://github.com/agronholm/anyio/pull/147#issuecomment-674615765.
[^1]: raising cancelled after a receiver gets the item is a violation of trio's cancellation semantics for API in the `trio` namespace (https://trio.readthedocs.io/en/stable/reference-core.html#cancellation-and-primitive-operations). since `anyio` is documented as following `trio`'s cancellation model, i would guess that `anyio` is also intended to adhere to this (but in the `anyio` namespace)? if so, i think it would be good to document this at https://anyio.readthedocs.io/en/stable/cancellation.html and to document the couple rare exceptions to this rule (e.g. https://trio.readthedocs.io/en/stable/reference-io.html#trio.abc.SendStream.send_all) too.
[^2]: nit: #146 actually had two underlying issues: the problem that `MemoryObjectReceiveStream.receive()` didn't `checkpoint_if_cancelled()` before trying `receive_nowait()` (on all backends), as well as this `Event.wait()` issue (on asyncio only).
|
closed
|
2023-03-08T08:31:31Z
|
2023-07-25T19:21:31Z
|
https://github.com/agronholm/anyio/issues/536
|
[] |
gschaffner
| 6
|
sktime/sktime
|
scikit-learn
| 7,619
|
[DOC] Code walkthrough for new contributors
|
#### Describe the issue linked to the documentation
There are no guides for new contributors to go through to get an overview of the entire code base (what each folder is meant for, what are the important files, etc).
#### Suggest a potential alternative/fix
A detailed walkthrough of the entire code base so that readers get somewhat of an idea as to whats going on when they try contributing to the organization.
|
closed
|
2025-01-08T13:25:27Z
|
2025-02-21T09:58:15Z
|
https://github.com/sktime/sktime/issues/7619
|
[
"documentation"
] |
satvshr
| 4
|
RomelTorres/alpha_vantage
|
pandas
| 53
|
Inconsistency in SYMBOL for Indian stock exchange
|
Hi. While using the API I found inconsistent behavior: some symbols work with TIME_SERIES_DAILY but the same symbol does not work when I use it in TIME_SERIES_INTRADAY.
For example, the NSE index symbol (NSEI, taken from Yahoo as suggested by others) works in TIME_SERIES_INTRADAY but not in TIME_SERIES_DAILY:
https://www.alphavantage.co/query?function=TIME_SERIES_INTRADAY&symbol=NSEI&interval=5min&outputsize=full&apikey=key
https://www.alphavantage.co/query?function=TIME_SERIES_DAILY&symbol= NSEI&outputsize=full&apikey=key
Another example: the stock PVR with code PVR.NS (as per Yahoo Finance) works for daily but does not work for intraday:
https://www.alphavantage.co/query?function=TIME_SERIES_INTRADAY&symbol=PVR.NS&interval=5min&outputsize=full&apikey=key
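For reference, the same calls through this package's wrapper (API key is a placeholder; I'd expect the behavior to mirror the raw URLs above):
```python
from alpha_vantage.timeseries import TimeSeries

ts = TimeSeries(key="YOUR_API_KEY", output_format="pandas")
daily, _ = ts.get_daily(symbol="PVR.NS", outputsize="full")        # works per the URL above
intraday, _ = ts.get_intraday(symbol="PVR.NS", interval="5min")    # fails per the URL above
```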
|
closed
|
2018-03-22T04:35:37Z
|
2018-03-23T10:16:58Z
|
https://github.com/RomelTorres/alpha_vantage/issues/53
|
[] |
me-shivaprasad
| 1
|
LibrePhotos/librephotos
|
django
| 919
|
Remove face when looking at a photo if wrongfully assigned
|
**Describe the enhancement you'd like**
When looking at a photo and its detected faces, have the option to remove a wrong assignment. We can currently do this by choosing "unknown other", but a quick button would be nice.
**Describe why this will benefit the LibrePhotos**
In cases where background strangers are wrongfully detected as a known person, it would help a lot.
|
closed
|
2023-07-14T09:35:43Z
|
2023-09-14T11:49:44Z
|
https://github.com/LibrePhotos/librephotos/issues/919
|
[
"enhancement"
] |
scepterus
| 1
|
pandas-dev/pandas
|
data-science
| 60,369
|
DOC: Fix docstring typo
|
### Pandas version checks
- [X] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://github.com/pandas-dev/pandas/blob/main/pandas/core/series.py
### Documentation problem
The docstring for the __arrow_c_stream__ method in the Series class uses the word "behaviour".
### Suggested fix for documentation
Suggested to rewrite as "behavior", which is the American English spelling, to maintain consistency with the rest of the Pandas codebase.
|
closed
|
2024-11-20T07:05:18Z
|
2024-11-22T20:15:05Z
|
https://github.com/pandas-dev/pandas/issues/60369
|
[
"Docs"
] |
jct102
| 1
|
google-research/bert
|
tensorflow
| 524
|
How to get the previous layers output of the BERT using tf_hub?
|
I need the last 4 layers of the BERT output to get the embeddings of each token. I can get those from the source code. Is there a way to get these outputs from the TF Hub module as described in the Colab?
|
closed
|
2019-03-26T07:40:29Z
|
2020-02-07T12:55:04Z
|
https://github.com/google-research/bert/issues/524
|
[] |
saikrishna9494
| 5
|
scikit-learn/scikit-learn
|
machine-learning
| 30,624
|
Inconsistency in shapes of `coef_` attributes between `LinearRegression` and `Ridge` when parameter `y` is 2D with `n_targets = 1`
|
### Describe the bug
This issue comes from my (possibly incorrect) understanding that `LinearRegression` and `Ridge` classes should handle the dimensions of the `X` and `y` parameters to the `fit` method in the same way in a sense that the *same* pair of `(X, y)` parameter values provided to *both* `LinearRegression.fit()` and `Ridge.fit()` methods should produce the `coef_` attribute values of the *same shape* in both classes.
But it appears that in case of a 2D shaped parameter `y` of the form `(n_samples, n_targets)` with `n_targets = 1` passed into the `fit` method, the resulting shapes of `coef_` attribute differ between `LinearRegression` and `Ridge` classes.
### Steps/Code to Reproduce
```python
import numpy as np
import sklearn
X = np.array([10, 15, 21]).reshape(-1, 1)
y = np.array([50, 70, 63]).reshape(-1, 1)
assert X.shape == (3, 1), f"Shape of X must be (n_samples = 3, n_features = 1)"
assert y.shape == (3, 1), f"Shape of y must be (n_samples = 3, n_targets = 1)"
linX = sklearn.linear_model.LinearRegression()
linX.fit(X, y)
ridgeX = sklearn.linear_model.Ridge(alpha=10**9.5)
ridgeX.fit(X, y)
assert linX.coef_.shape == ridgeX.coef_.shape, f"Shapes of coef_ attributes do not agree. LinearRegression has {linX.coef_.shape}. Ridge has {ridgeX.coef_.shape}"
```
### Expected Results
The example code should produce no output and throw no error.
According to the [`LinearRegression` docs](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html#), the resulting value of the `coef_` attribute should be 2D shaped as `(n_targets = 1, n_features = 1)`. This is what happens in my minimal code example, indeed.
The [docs for the `Ridge` class](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html) are less detailed but the parameter and attribute names, types, and shapes are the same for `X`, `y`, and `coef_`. I can't think of a reason why the logic of how the shapes of `X` and `y` parameters translate into the shape of the `coef_` attribute should be any different from the `LinearRegression` class.
Since the *same* input pair `(X, y)` was given to both `LinearRegression` and `Ridge` instances, I expect the shape of the `coef_` attribute of the `Ridge` instance to be 2D as well. The shape should be exactly as per [`Ridge` docs](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html) state. In my case the shape would be `(n_targets = 1, n_features = 1)`, just as with the `LinearRegression` instance.
### Actual Results
An assertion error is produced by the sample code because the shapes of the `coef_` attribute are different between the `LinearRegression` and `Ridge` instances:
```pytb
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
[<ipython-input-5-de4d6a04a66d>](https://localhost:8080/#) in <cell line: 16>()
14 ridgeX.fit(X, y)
15
---> 16 assert linX.coef_.shape == ridgeX.coef_.shape, f"Shapes of coef_ attributes do not agree. LinearRegression has {linX.coef_.shape}. Ridge has {ridgeX.coef_.shape}"
AssertionError: Shapes of coef_ attributes do not agree. LinearRegression has (1, 1). Ridge has (1,)
```
Thus,
* `LinearRegression.coef_` is of the shape `(n_targets, n_features)`
* `Ridge.coef_` is of the shape `(n_features,)`
I don't think this is right?
### Versions
```shell
System:
python: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0]
executable: /usr/bin/python3
machine: Linux-6.1.85+-x86_64-with-glibc2.35
Python dependencies:
sklearn: 1.6.0
pip: 24.1.2
setuptools: 75.1.0
numpy: 1.26.4
scipy: 1.13.1
Cython: 3.0.11
pandas: 2.2.2
matplotlib: 3.10.0
joblib: 1.4.2
threadpoolctl: 3.5.0
Built with OpenMP: True
threadpoolctl info:
user_api: blas
internal_api: openblas
num_threads: 2
prefix: libopenblas
filepath: /usr/local/lib/python3.10/dist-packages/numpy.libs/libopenblas64_p-r0-0cf96a72.3.23.dev.so
version: 0.3.23.dev
threading_layer: pthreads
architecture: Haswell
user_api: blas
internal_api: openblas
num_threads: 2
prefix: libopenblas
filepath: /usr/local/lib/python3.10/dist-packages/scipy.libs/libopenblasp-r0-01191904.3.27.so
version: 0.3.27
threading_layer: pthreads
architecture: Haswell
user_api: openmp
internal_api: openmp
num_threads: 2
prefix: libgomp
filepath: /usr/local/lib/python3.10/dist-packages/scikit_learn.libs/libgomp-a34b3233.so.1.0.0
version: None
```
|
closed
|
2025-01-11T09:12:56Z
|
2025-01-17T17:12:26Z
|
https://github.com/scikit-learn/scikit-learn/issues/30624
|
[
"Bug",
"Needs Triage"
] |
olliefr
| 1
|
plotly/dash-core-components
|
dash
| 899
|
SyntaxError when running Demo App
|
I am trying to run the core-components demo on my machine (OSX 11). I followed the readme to set things up.
```
git clone https://github.com/plotly/dash-core-components
python -m venv venv && . venv/bin/activate
pip install "dash[dev,testing]"
npm i --ignore-scripts && npm run build
npm start
```
When I load the page in Chrome I get the following error in the console.
```
output.js:895 Uncaught SyntaxError: Unexpected token ','
```
where output.js:895 reads
```
__webpack_require__.oe = function(err) { console.error(err); throw err; };,var getCurrentScript = function() {
```
I am not sure how to fix this problem as the file is auto-generated.
Any help is appreciated as I am new to webpack etc.
|
open
|
2020-12-09T11:57:02Z
|
2020-12-28T01:10:15Z
|
https://github.com/plotly/dash-core-components/issues/899
|
[] |
bjonen
| 2
|
donnemartin/system-design-primer
|
python
| 882
|
The weighted round-robin link goes to the wrong website
|
The weighted round-robin link in README-zh-TW.md needs to be fixed to synchronize with the English version.
Before:
* http://g33kinfo.com/info/archives/2657
After:
* https://www.jscape.com/blog/load-balancing-algorithms
|
open
|
2024-07-05T03:06:07Z
|
2024-12-02T01:13:10Z
|
https://github.com/donnemartin/system-design-primer/issues/882
|
[
"needs-review"
] |
s10079000
| 1
|
pytorch/pytorch
|
python
| 149,532
|
Lintrunner running on newly added files despite being explicitly excluded in .lintrunner.toml
|
In my [PR 148936](https://github.com/pytorch/pytorch/pull/148936), lintrunner is [failing with CLANGTIDY](https://github.com/pytorch/pytorch/actions/runs/13927137669/job/38974556917?pr=148936) despite me adding the newly added files to the `exclude_patterns` of the CLANGTIDY rule in `.lintrunner.toml`. Per @malfet, these CUDA files should not be linted against CLANGTIDY, but I can't figure out a way to exclude them. It also seems like the errors may be occurring in generated or included files (the linter states the error is in `usr/include/c++/11/cmath`), which I think also shouldn't be linted against. How can I resolve the linter error or at least understand what's causing it?
# [lintrunner error](https://github.com/pytorch/pytorch/actions/runs/13927137669/job/38974556917?pr=148936)
And a bunch more similar ones. See the [failing job itself](https://github.com/pytorch/pytorch/actions/runs/13927137669/job/38974556917?pr=148936) for all the errors.
```
>>> Lint for ../../usr/include/c++/11/cmath:
Error (CLANGTIDY) [clang-diagnostic-error]
constexpr function 'fpclassify' without __host__ or __device__ attributes
cannot overload __device__ function with the same signature; add a
__host__ attribute, or build with -fno-cuda-host-device-constexpr
534 |
535 |#ifndef __CORRECT_ISO_CPP11_MATH_H_PROTO_FP
536 | constexpr int
>>> 537 | fpclassify(float __x)
538 | { return __builtin_fpclassify(FP_NAN, FP_INFINITE, FP_NORMAL,
539 | FP_SUBNORMAL, FP_ZERO, __x); }
540 |
```
# What I've tried
1. I tried adding the new .cuh and .cu files from my PR to the exclude section of the CLANGTIDY rule in `.lintrunner.toml`, as is shown in the PR.
2. I tried narrowing the scope of lintrunner in [PR 149345](https://github.com/pytorch/pytorch/pull/149345). However this didn't work as it stopped lintrunner from linting any files, e.g. I tested purposefully adding a lint error in that PR and it wasn't caught.
cc @seemethere @malfet @pytorch/pytorch-dev-infra @ZainRizvi @kit1980 @huydhn @clee2000
|
open
|
2025-03-19T17:32:59Z
|
2025-03-20T19:19:19Z
|
https://github.com/pytorch/pytorch/issues/149532
|
[
"module: ci",
"module: lint",
"triaged",
"module: devx"
] |
TovlyFB
| 0
|
aleju/imgaug
|
deep-learning
| 802
|
Change saturation of yellow tone
|
Hello,
I'm looking for a way to change the strength of the yellow tones in the image.
My first thought was to change the temperature of the image with `ChangeColorTemperature()`, however that throws a known error ([Issue #720](https://github.com/aleju/imgaug/issues/720)).
My next idea was to change the image from RGB to a different colorspace and then augment only one of the channels, however CMYK is not available as a colorspace so that also doesn't work.
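For concreteness, something along these lines is what I'm after, sketched here with Lab instead of CMYK since the b* channel is the blue-yellow axis (the value range is just a guess, and I'm not sure this is the right approach):
```python
import numpy as np
import imgaug.augmenters as iaa

# Dummy image just to make the snippet runnable.
image = np.random.randint(0, 255, size=(64, 64, 3), dtype=np.uint8)

# Work in Lab and shift only channel 2 (the b* / blue-yellow axis),
# which should strengthen or weaken the yellow tones.
aug = iaa.WithColorspace(
    to_colorspace="Lab",
    from_colorspace="RGB",
    children=iaa.WithChannels(2, iaa.Add((-30, 30))),
)
image_aug = aug(image=image)
```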
Any help would be highly appreciated!
|
open
|
2021-12-01T16:09:05Z
|
2021-12-01T16:09:05Z
|
https://github.com/aleju/imgaug/issues/802
|
[] |
MariaKalt
| 0
|
huggingface/datasets
|
computer-vision
| 7,306
|
Creating new dataset from list loses information. (Audio Information Lost - either Datatype or Values).
|
### Describe the bug
When creating a dataset from a list of datapoints, information about the individual items is lost.
Specifically, when creating a dataset from a list of datapoints (taken from another dataset), either the datatype is lost or the values are lost. See the examples below.
-> What is the best way to create a dataset from a list of datapoints?
---
e.g.:
**When running this code:**
```python
from datasets import load_dataset, Dataset
commonvoice_data = load_dataset("mozilla-foundation/common_voice_17_0", "it", split="test", streaming=True)
datapoint = next(iter(commonvoice_data))
out = [datapoint]
new_data = Dataset.from_list(out) #this loses datatype information
new_data2= Dataset.from_list(out,features=commonvoice_data.features) #this loses value information
```
**We get the following**:
---
1. `datapoint`: (the original datapoint)
```
'audio': {'path': 'it_test_0/common_voice_it_23606167.mp3', 'array': array([0.00000000e+00, 0.00000000e+00, 0.00000000e+00, ...,
2.21619011e-05, 2.72628222e-05, 0.00000000e+00]), 'sampling_rate': 48000}
```
Original Dataset Features:
```
>>> commonvoice_data.features
'audio': Audio(sampling_rate=48000, mono=True, decode=True, id=None)
```
- Here we see column "audio", has the proper values (both `path` & and `array`) and has the correct datatype (Audio).
----
2. new_data[0]:
```
# Cannot be printed (as it prints the entire array).
```
New Dataset 1 Features:
```
>>> new_data.features
'audio': {'array': Sequence(feature=Value(dtype='float64', id=None), length=-1, id=None), 'path': Value(dtype='string', id=None), 'sampling_rate': Value(dtype='int64', id=None)}
```
- Here we see that the column "audio", has the correct values, but is not the Audio datatype anymore.
---
3. new_data2[0]:
```
'audio': {'path': None, 'array': array([0., 0., 0., ..., 0., 0., 0.]), 'sampling_rate': 48000},
```
New Dataset 2 Features:
```
>>> new_data2.features
'audio': Audio(sampling_rate=48000, mono=True, decode=True, id=None),
```
- Here we see that the column "audio", has the correct datatype, but all the array & path values were lost!
### Steps to reproduce the bug
## Run:
```python
from datasets import load_dataset, Dataset
commonvoice_data = load_dataset("mozilla-foundation/common_voice_17_0", "it", split="test", streaming=True)
datapoint = next(iter(commonvoice_data))
out = [datapoint]
new_data = Dataset.from_list(out) #this loses datatype information
new_data2= Dataset.from_list(out,features=commonvoice_data.features) #this loses value information
```
### Expected behavior
## Expected:
```datapoint == new_data[0]```
AND
```datapoint == new_data2[0]```
### Environment info
- `datasets` version: 3.1.0
- Platform: Linux-6.2.0-37-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.26.2
- PyArrow version: 15.0.2
- Pandas version: 2.2.2
- `fsspec` version: 2024.3.1
|
open
|
2024-12-05T09:07:53Z
|
2024-12-05T09:09:38Z
|
https://github.com/huggingface/datasets/issues/7306
|
[] |
ai-nikolai
| 0
|
ultralytics/ultralytics
|
pytorch
| 19,228
|
🔧 Add Support for Uniform & Neighbor-Based Label Smoothing in YOLO Object Detection
|
### Search before asking
- [x] I have searched the Ultralytics [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar feature requests.
### Description
Currently, YOLO’s classification loss function does not support label smoothing, which can lead to overconfident predictions and reduced generalization, especially in noisy datasets. While uniform label smoothing is a common regularization technique, object detection can benefit from a more structured approach where probability mass is distributed only to immediate neighboring classes instead of all classes.
This issue proposes adding configurable label smoothing with two modes:
- **Standard Uniform Label Smoothing:** Distributes the smoothing factor equally across all classes.
- **Custom Neighbor-Based Label Smoothing:** Assigns probability mass only to a specified number of adjacent classes, ensuring that misclassifications within similar categories are penalized less.
### Use case
1️⃣ **Standard Uniform Label Smoothing (Existing Method in Classification)**
If there are 10 classes and class 4 is the correct label with a smoothing factor of 0.2, the target distribution becomes:
`[0.02, 0.02, 0.02, 0.82, 0.02, 0.02, 0.02, 0.02, 0.02, 0.02]`
This prevents overconfidence but treats all incorrect classes equally, even if some are semantically closer to class 4.
2️⃣ **Custom Neighbor-Based Label Smoothing (Proposed Enhancement)**
Instead of distributing the smoothing equally across all classes, it will only assign weights to a fixed number of adjacent classes.
For example, if class 4 is the true label, smoothing factor = 0.2, and we smooth into one adjacent class (num_neighbors = 1):
`[0, 0, 0.1, 0.8, 0.1, 0, 0, 0, 0, 0]`
Here, 10% of the probability is assigned to class 3 and 5, which are the immediate neighbors.
If num_neighbors = 2:
`[0, 0.05, 0.05, 0.8, 0.05, 0.05, 0, 0, 0, 0]`
Now, class 2, 3, 5, and 6 receive part of the smoothing.
This approach better models real-world misclassifications, as mistakes are often between similar categories (e.g., different car models, dog breeds, or aircraft types).
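For concreteness, a rough NumPy sketch of how the neighbor-based targets could be built (not actual Ultralytics code; renormalising over the in-range neighbors at the class-range edges is just one possible choice, and 0-based index 3 is used for the true class to match the distributions above):
```python
import numpy as np

def neighbor_smoothed_targets(num_classes: int, true_class: int,
                              smoothing: float = 0.2, num_neighbors: int = 1) -> np.ndarray:
    """Build a target distribution where the smoothing mass goes only to adjacent classes."""
    target = np.zeros(num_classes)
    target[true_class] = 1.0 - smoothing
    neighbors = [true_class + d for d in range(-num_neighbors, num_neighbors + 1)
                 if d != 0 and 0 <= true_class + d < num_classes]
    target[neighbors] = smoothing / len(neighbors)
    return target

print(neighbor_smoothed_targets(10, 3, 0.2, 1))  # [0, 0, 0.1, 0.8, 0.1, 0, 0, 0, 0, 0]
print(neighbor_smoothed_targets(10, 3, 0.2, 2))  # [0, 0.05, 0.05, 0.8, 0.05, 0.05, 0, 0, 0, 0]
```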
### Additional
_No response_
### Are you willing to submit a PR?
- [x] Yes I'd like to help by submitting a PR!
|
open
|
2025-02-13T09:40:36Z
|
2025-02-13T09:41:07Z
|
https://github.com/ultralytics/ultralytics/issues/19228
|
[
"enhancement",
"detect"
] |
uttammittal02
| 1
|
mage-ai/mage-ai
|
data-science
| 4,820
|
remote_variables_dir with aws s3 path in integration Pipeline [Urgent]
|
**Is your feature request related to a problem? Please describe.**
remote_variables_dir is not working with data integration pipelines; it works with standard pipelines only. Because of that, our variables are getting saved in EFS and our EFS cost keeps increasing.
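For reference, this is the kind of project-level setting I mean (the paths are placeholders); standard pipelines honor it, data integration pipelines apparently do not:
```yaml
# project metadata.yaml (paths are placeholders)
variables_dir: ~/.mage_data
remote_variables_dir: s3://my-bucket/mage/variables
```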
|
open
|
2024-03-25T11:05:11Z
|
2024-03-26T18:47:46Z
|
https://github.com/mage-ai/mage-ai/issues/4820
|
[
"enhancement"
] |
vikasgoyal31071992
| 2
|
BayesWitnesses/m2cgen
|
scikit-learn
| 537
|
Issue with xgboost export in python: not same values splits
|
Hi,
Thanks for the work on this package. I'm using m2cgen to convert an XGBoost model into VBA code. But when using the code produced by m2cgen, I get some predictions that are really different from the ones I get after training my model in Python (screenshot of the differing predictions omitted).
I've also looked at the XGBoost booster after training and compared it to the (Python) output from m2cgen. Here is what I have from m2cgen:
```
import math
def sigmoid(x):
if x < 0.0:
z = math.exp(x)
return z / (1.0 + z)
return 1.0 / (1.0 + math.exp(-x))
def score(input):
if input[1] < 2.0:
if input[1] < 1.0:
var0 = -0.3193863
else:
var0 = -0.046659842
else:
if input[7] < 867.94:
var0 = 0.058621403
else:
var0 = 0.25975806
if input[4] < 5654.47:
if input[3] < 0.38662624:
var1 = -0.029487507
else:
var1 = 0.16083813
else:
if input[1] < 2.0:
var1 = -0.32378462
else:
var1 = -0.08247565
if input[0] < 1.0:
if input[2] < 0.8:
var2 = -0.15353489
else:
var2 = 0.081936955
else:
if input[4] < 2989.61:
var2 = 0.13463722
else:
var2 = -0.042515814
if input[5] < 0.11556604:
if input[12] < 0.11059804:
var3 = -0.1621976
else:
var3 = 0.30593434
else:
if input[11] < 661.39:
var3 = 0.0063493266
else:
var3 = 0.15387529
if input[9] < 0.12683104:
if input[19] < 197.56:
var4 = -0.25690553
else:
var4 = 0.06560632
else:
if input[8] < 0.11749347:
var4 = -0.018011741
else:
var4 = 0.10678521
if input[7] < 1790.11:
if input[8] < 0.11749347:
var5 = -0.091719724
else:
var5 = 0.048037946
else:
if input[1] < 3.0:
var5 = 0.058297392
else:
var5 = 0.18175843
if input[6] < 1351.78:
if input[10] < 3.0:
var6 = -0.0012290713
else:
var6 = 0.10081242
else:
if input[17] < 0.07381933:
var6 = -0.12741692
else:
var6 = 0.038392954
if input[1] < 3.0:
if input[15] < 0.12838633:
var7 = -0.081163615
else:
var7 = 0.019387348
else:
if input[20] < 0.29835963:
var7 = 0.1156334
else:
var7 = -0.17409053
if input[5] < 0.062735535:
if input[3] < 0.5642857:
var8 = -0.2049814
else:
var8 = 0.12192867
else:
if input[13] < 17.0:
var8 = -0.0035746796
else:
var8 = 0.10629323
if input[19] < 179.98:
if input[4] < 15379.7:
var9 = -0.010353668
else:
var9 = -0.19715081
else:
if input[21] < 1744.96:
var9 = 0.08414988
else:
var9 = -0.31387258
if input[9] < 0.12683104:
if input[19] < 90.45:
var10 = -0.15493616
else:
var10 = 0.05997152
else:
if input[11] < -1390.57:
var10 = -0.12933072
else:
var10 = 0.028274538
if input[14] < 3.0:
if input[7] < 652.72:
var11 = -0.061523404
else:
var11 = 0.018090146
else:
if input[20] < -0.015413969:
var11 = 0.122180216
else:
var11 = -0.07323579
if input[18] < 35.0:
if input[17] < 0.105689526:
var12 = -0.058067013
else:
var12 = 0.035271224
else:
if input[20] < 0.42494825:
var12 = 0.067990474
else:
var12 = -0.13910332
if input[8] < 0.11749347:
if input[22] < 0.06889495:
var13 = -0.109115146
else:
var13 = -0.011202088
else:
if input[16] < -161.82:
var13 = -0.01581455
else:
var13 = 0.10806873
if input[18] < 8.0:
if input[17] < 0.0007647209:
var14 = -0.10060249
else:
var14 = 0.04555326
else:
if input[15] < 0.15912667:
var14 = 0.0012086431
else:
var14 = 0.061486576
if input[11] < -1708.65:
if input[1] < 4.0:
var15 = -0.14637202
else:
var15 = 0.10264576
else:
if input[19] < 2421.29:
var15 = 0.008009123
else:
var15 = 0.17349313
if input[20] < 0.21551265:
if input[20] < -0.14049701:
var16 = -0.069627054
else:
var16 = 0.012490782
else:
if input[7] < 4508.38:
var16 = -0.13310793
else:
var16 = 0.2982378
if input[4] < 10364.37:
if input[18] < 46.0:
var17 = -0.00067418563
else:
var17 = 0.07025912
else:
if input[19] < 32.3:
var17 = -0.11449907
else:
var17 = 0.102952585
if input[12] < 0.11059804:
if input[9] < 0.06418919:
var18 = -0.12425961
else:
var18 = -0.0036558604
else:
if input[9] < 0.06418919:
var18 = 0.3158906
else:
var18 = 0.06434954
var19 = sigmoid(var0 + var1 + var2 + var3 + var4 + var5 + var6 + var7 + var8 + var9 + var10 + var11 + var12 + var13 + var14 + var15 + var16 + var17 + var18)
return [1.0 - var19, var19]
```
And this is what I have in the booster:
```
def score_booster(input):
if input[1]<2:
if input[1]<1:
var0=-0.319386303
else:
var0=-0.0466598421
else:
if input[7]<867.940002:
var0=0.0586214028
else:
var0=0.259758055
if input[4]<5654.47021:
if input[3]<0.386626244:
var1=-0.0294875074
else:
var1=0.160838127
else:
if input[1]<2:
var1=-0.32378462
else:
var1=-0.0824756473
if input[0]<1:
if input[2]<0.800000012:
var2=-0.153534889
else:
var2=0.0819369555
else:
if input[4]<2989.61011:
var2=0.134637222
else:
var2=-0.0425158143
if input[5]<0.115566038:
if input[12]<0.110598043:
var3=-0.162197605
else:
var3=0.30593434
else:
if input[11]<661.390015:
var3=0.00634932658
else:
var3=0.153875291
if input[9]<0.12683104:
if input[19]<197.559998:
var4=-0.256905526
else:
var4=0.0656063184
else:
if input[8]<0.117493473:
var4=-0.0180117413
else:
var4=0.106785208
if input[7]<1790.10999:
if input[8]<0.117493473:
var5=-0.0917197242
else:
var5=0.0480379462
else:
if input[1]<3:
var5=0.058297392
else:
var5=0.181758434
if input[6]<1351.78003:
if input[10]<3:
var6=-0.00122907129
else:
var6=0.10081242
else:
if input[17]<0.0738193318:
var6=-0.127416924
else:
var6=0.0383929536
if input[1]<3:
if input[15]<0.128386334:
var7=-0.081163615
else:
var7=0.0193873476
else:
if input[20]<0.298359632:
var7=0.115633398
else:
var7=-0.174090534
if input[5]<0.0627355352:
if input[3]<0.564285696:
var8=-0.204981402
else:
var8=0.12192867
else:
if input[13]<17:
var8=-0.00357467961
else:
var8=0.106293231
if input[19]<179.979996:
if input[4]<15379.7002:
var9=-0.0103536677
else:
var9=-0.197150812
else:
if input[21]<1744.95996:
var9=0.0841498822
else:
var9=-0.313872576
if input[9]<0.12683104:
if input[19]<90.4499969:
var10=-0.154936165
else:
var10=0.0599715188
else:
if input[11]<-1390.56995:
var10=-0.129330724
else:
var10=0.028274538
if input[14]<3:
if input[7]<652.719971:
var11=-0.061523404
else:
var11=0.0180901457
else:
if input[20]<-0.0154139688:
var11=0.122180216
else:
var11=-0.0732357875
if input[18]<35:
if input[17]<0.105689526:
var12=-0.0580670126
else:
var12=0.0352712236
else:
if input[20]<0.424948245:
var12=0.0679904744
else:
var12=-0.139103323
if input[8]<0.117493473:
if input[22]<0.0688949525:
var13=-0.109115146
else:
var13=-0.0112020876
else:
if input[16]<-161.820007:
var13=-0.0158145502
else:
var13=0.108068727
if input[18]<8:
if input[17]<0.000764720899:
var14=-0.100602493
else:
var14=0.0455532596
else:
if input[15]<0.159126669:
var14=0.00120864308
else:
var14=0.0614865758
if input[11]<-1708.65002:
if input[1]<4:
var15=-0.14637202
else:
var15=0.102645762
else:
if input[19]<2421.29004:
var15=0.00800912268
else:
var15=0.173493132
if input[20]<0.215512648:
if input[20]<-0.140497014:
var16=-0.069627054
else:
var16=0.012490782
else:
if input[7]<4508.37988:
var16=-0.13310793
else:
var16=0.298237801
if input[4]<10364.3701:
if input[18]<46:
var17=-0.000674185634
else:
var17=0.0702591166
else:
if input[19]<32.2999992:
var17=-0.11449907
else:
var17=0.102952585
if input[12]<0.110598043:
if input[9]<0.0641891882:
var18=-0.124259613
else:
var18=-0.00365586043
else:
if input[9]<0.0641891882:
var18=0.31589061
else:
var18=0.0643495396
return (var0, var1, var2, var3, var4, var5, var6, var7, var8, var9, var10, var11, var12, var13, var14, var15, var16, var17, var18)
```
For me — but I'm no expert — it seems that the floats in the if/else conditions of the function returned by m2cgen are 32-bit floats, unlike in the booster. So if one sample in my data has exactly the value of a split, m2cgen is not giving back the right value. Is there a trick to force the floats to 64 bits?
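To illustrate the kind of mismatch I mean with one of the thresholds above (this is only my understanding of where the discrepancy could come from):
```python
import numpy as np

threshold_booster = np.float32(867.94)   # the value stored as a 32-bit float
threshold_printed = 867.94               # the rounded literal in the generated code (64-bit)

print(float(threshold_booster))          # 867.9400024414062
x = 867.94                               # a sample sitting exactly on the printed split value
print(x < float(threshold_booster))      # True  -> goes to the "left" branch
print(x < threshold_printed)             # False -> goes to the "right" branch
```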
Thanks in advance for your return,
Antoine
|
open
|
2022-08-19T08:03:04Z
|
2024-11-14T11:35:50Z
|
https://github.com/BayesWitnesses/m2cgen/issues/537
|
[] |
antoinemertz
| 2
|
gevent/gevent
|
asyncio
| 1,430
|
Support Python 3.8
|
Now that beta1 is out, it's time to start work on supporting 3.8.
First step is to add it to CI and make sure the generic tests pass.
Then the next step is to add its stdlib unit tests and make them pass.
|
closed
|
2019-06-11T15:11:38Z
|
2020-04-09T07:06:53Z
|
https://github.com/gevent/gevent/issues/1430
|
[
"Type: Enhancement",
"PyVer: python3"
] |
jamadden
| 9
|
dropbox/PyHive
|
sqlalchemy
| 425
|
Format characters in SQL comments raises error with SQLAlchemy connection
|
**Context:**
Pandas `read_sql` is now raising a warning suggesting moving from a DB-API connection to a SQLAlchemy connection. Hence we are trying to make the switch.
**Issue:**
When using a SQLAlchemy connection, if the query has any format characters in it, then an error is raised. No issue with a DB-API connection.
**Example query:**
```sql
-- Format character in a comment %
SELECT 1
```
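**Repro sketch** (connection string is a placeholder; a literal `%` in a comment is enough to trigger it):
```python
import pandas as pd
from sqlalchemy import create_engine

# Placeholder DSN; requires pyhive[presto] so SQLAlchemy knows the presto:// dialect.
engine = create_engine("presto://user@presto-host:8080/hive/default")
df = pd.read_sql("-- Format character in a comment %\nSELECT 1", engine)
```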
**Likely Cause**
When deciding whether to format the operation, PyHive checks if the given `parameters` is None.
https://github.com/dropbox/PyHive/blob/d199a1bd55c656b5c28d0d62f2d3f2e6c9a82a54/pyhive/presto.py#L256
However, SQLAlchemy (1.4.31) always at least passes an empty dictionary (since it builds these params off of kwargs which default to an empty dict) so it is never None.
https://github.com/sqlalchemy/sqlalchemy/blob/2eac6545ad08db83954dd3afebf4894a0acb0cea/lib/sqlalchemy/engine/base.py#L1196
**Likely Fix**
Just need to also check if params is an empty dict:
```python
if params is None or not params:
```
|
open
|
2022-02-03T15:46:40Z
|
2022-02-03T15:46:40Z
|
https://github.com/dropbox/PyHive/issues/425
|
[] |
derek-pyne
| 0
|
ultralytics/yolov5
|
pytorch
| 12,474
|
**Blob error** Unable to export yolov5 model in openvino
|
### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Hey, I am exporting my custom YOLOv5 model to OpenVINO but it is giving an error. I trained the model with image size 360; the plain OpenVINO export succeeds, but when I want to export the model in int8 it gives the error below.
**[ GENERAL_ERROR ] Can't set input blob with name: images, because model input (shape=[1,3,384,384]) and blob (shape=(1.3.640.640)) are incompatible**
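What I am running is roughly the following (flag names as I understand them from export.py; I'm not sure this is the correct way to request int8 for OpenVINO, and the paths are placeholders):
```
# 384 is the stride-rounded size for the 360 training resolution
python export.py --weights best.pt --include openvino --imgsz 384 --int8 --data data.yaml
```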
### Additional
_No response_
|
closed
|
2023-12-06T06:47:33Z
|
2024-01-16T00:21:21Z
|
https://github.com/ultralytics/yolov5/issues/12474
|
[
"question",
"Stale"
] |
AbhishekPSI7042
| 2
|
ultralytics/yolov5
|
machine-learning
| 12,631
|
Exception: 'Detect' object has no attribute 'grid'. Cache may be out of date, try `force_reload=True` or see https://docs.ultralytics.com/yolov5/tutorials/pytorch_hub_model_loading for help.
|
### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Loading my model:
```python
model = torch.hub.load(
    r"C:\Users\Aarik Ghosh\Programming Projects\Aimmer\yolov5",
    "custom",
    r"C:\Users\Aarik Ghosh\Programming Projects\Aimmer\best.pt",
    source="local",
    force_reload=True,
)
```
The error:
```
Exception: 'Detect' object has no attribute 'grid'. Cache may be out of date, try `force_reload=True` or see https://docs.ultralytics.com/yolov5/tutorials/pytorch_hub_model_loading for help.
```
### Additional
_No response_
|
closed
|
2024-01-15T14:16:41Z
|
2024-02-25T00:22:01Z
|
https://github.com/ultralytics/yolov5/issues/12631
|
[
"question",
"Stale"
] |
aarikg
| 2
|
geex-arts/django-jet
|
django
| 180
|
'jet' is not a registered namespace
|
```
Django Version: 1.10.5
Exception Type: NoReverseMatch
Exception Value: 'jet' is not a registered namespace
```
> Using Python3.5
Steps:
- Run `python3 -m pip install django-jet`
- Added `'jet'` as the first element in the `INSTALLED_APPS`
- Run `python3 manage.py migrate`
I am able to import jet using python3: `python -c "import jet"`
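For completeness, my understanding from the installation docs is that urls.py should register the jet namespace roughly like this (sketch, Django 1.10 style; I may be missing this step):
```python
# urls.py, per the django-jet installation docs
from django.conf.urls import url, include

urlpatterns = [
    url(r'^jet/', include('jet.urls', 'jet')),  # Django JET URLs
    # ... the rest of the project's URL patterns, e.g. the admin site
]
```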
|
closed
|
2017-02-19T10:40:06Z
|
2024-08-16T16:49:44Z
|
https://github.com/geex-arts/django-jet/issues/180
|
[] |
szabolcsdombi
| 15
|
taverntesting/tavern
|
pytest
| 478
|
How to get interpolated variables to stay dictionaries instead of being coerced into strings?
|
Two dictionaries are loaded from fixtures in conftest.py:
tmp_author_alias_a and tmp_author_alias_b.
They have the same structure with differing values:
```python
@pytest.fixture
def tmp_author_alias_a():
return {
"first_name": "Patrick",
"last_name": "Neve",
# . . .
"author_id": "dcfab002-d02f-4895-9557-b55f304af92d",
"id": "ea36f093-1cae-403b-9c6e-3fe18b617221"
}
```
They appear in the tavern script in - usefixtures:
```yaml
- usefixtures:
- encode_web_token
- conjure_uuid
- conjure_uuid_2
- tmp_author_alias_a
- tmp_author_alias_b
```
They need to be in the request JSON as a list of dictionaries.
Entered this way:
```yaml
authors:
- {tmp_author_alias_a}
- {tmp_author_alias_b}
```
throws BadSchemaError: cannot define an empty value in test . . .
This way:
```yaml
authors:
- "{tmp_author_alias_a}"
- "{tmp_author_alias_b}"
In line 39 dict_util.py format_keys the list is pulled in as a node_class:
```
formatted = val # node_class: ['temp_author_alias_a', 'temp_author_alias_b']
box_vars = Box(variables)
```
That is as a list of strings.
Each is then sent back through format_keys() (in line 48)
```
formatted = [format_keys(item, box_vars) for item in val]
```
as a string into which the dict is interpolated (line 51-53)
```
elif isinstance(val, (ustr, str)):
    try:
        formatted = val.format(**box_vars)
```
Any way to get the list of dictionaries into the request json?
A nice poser for your afternoon diversion! ;-)
|
closed
|
2019-11-19T00:11:17Z
|
2022-01-08T17:59:13Z
|
https://github.com/taverntesting/tavern/issues/478
|
[] |
pmneve
| 10
|
yt-dlp/yt-dlp
|
python
| 11,770
|
NSFW tweet requires authentication
|
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
US
### Provide a description that is worded well enough to be understood
Downloading with the cookies parameter still gives an error.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['--cookies', './twitter', 'https://x.com/allvd/status/1864320120?t=fcgxGtr__B6GuQ9PhtHXOw&s=09', '-o', '%(id)s.10B.%(ext)s', '-v']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.12.06 from yt-dlp/yt-dlp [4bd265539] (pip)
[debug] Python 3.9.2 (CPython x86_64 64bit) - Linux-5.10.0-33-amd64-x86_64-with-glibc2.31 (OpenSSL 1.1.1w 11 Sep 2023, glibc 2.31)
[debug] exe versions: ffmpeg 6.1-static (setts), ffprobe 6.1-static
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2023.11.17, mutagen-1.47.0, requests-2.32.3, sqlite3-3.34.1, urllib3-2.1.0, websockets-12.0
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests
[debug] Loaded 1837 extractors
[twitter] Extracting URL: https://x.com/allvd/status/186432120?t=fcgxGtr__B6GuQ9PhtHXOw&s=09
[twitter] 1864320115921310120: Downloading guest token
[twitter] 1864320115921310120: Downloading GraphQL JSON
ERROR: [twitter] 1864320115921310120: NSFW tweet requires authentication. Use --cookies, --cookies-from-browser, --username and --password, --netrc-cmd, or --netrc (twitter) to provide account credentials. See https://github.com/yt-dlp/yt-dlp/wiki/FAQ#how-do-i-pass-cookies-to-yt-dlp for how to manually pass cookies
File "/et/teleg/py/lib/python3.9/site-packages/yt_dlp/extractor/common.py", line 742, in extract
ie_result = self._real_extract(url)
File "/et/teleg/py/lib/python3.9/site-packages/yt_dlp/extractor/twitter.py", line 1422, in _real_extract
status = self._extract_status(twid)
File "/et/teleg/py/lib/python3.9/site-packages/yt_dlp/extractor/twitter.py", line 1400, in _extract_status
status = self._graphql_to_legacy(self._call_graphql_api(self._GRAPHQL_ENDPOINT, twid), twid)
File "/et/teleg/py/lib/python3.9/site-packages/yt_dlp/extractor/twitter.py", line 1277, in _graphql_to_legacy
self.raise_login_required('NSFW tweet requires authentication')
File "/et/teleg/py/lib/python3.9/site-packages/yt_dlp/extractor/common.py", line 1258, in raise_login_required
raise ExtractorError(msg, expected=True)
```
|
closed
|
2024-12-09T04:16:43Z
|
2024-12-09T06:09:47Z
|
https://github.com/yt-dlp/yt-dlp/issues/11770
|
[
"question"
] |
jnxyatmjx
| 10
|
scikit-learn-contrib/metric-learn
|
scikit-learn
| 29
|
transform() method for the shogun_LMNN
|
Going by the current implementation, it lacks a transform() and metric() for the Shogun wrapping implementation of the LMNN.
In case it is required, and unless someone is currently working on it, I would like to work on it.
Open to suggestions!
|
closed
|
2016-09-15T00:25:25Z
|
2016-09-15T18:43:57Z
|
https://github.com/scikit-learn-contrib/metric-learn/issues/29
|
[] |
anirudt
| 3
|
ClimbsRocks/auto_ml
|
scikit-learn
| 386
|
edge case: let the user pass in a .csv file as X_test_already_transformed
|
we'll probably want to add in some logging, or make that the return value, in place of what we currently have for return_X_transformed.
and, i guess in that case, we wouldn't want to automatically delete it from ram once we're done with it.
|
open
|
2018-02-13T02:37:49Z
|
2018-02-13T02:37:49Z
|
https://github.com/ClimbsRocks/auto_ml/issues/386
|
[] |
ClimbsRocks
| 0
|
pywinauto/pywinauto
|
automation
| 683
|
how to get the reference of already launched application
|
Hi Vasily,
I have created an object map file which includes all the objects present in my application based on the app value as below.
for ex:
```python
app = Application(backend='uia').start(appPath)
objButton = app.window(title='xyz').window(control_type='Button', found_index=0)
```
The value of 'app' will change if I reopen the application, and hence objButton will be invalid.
Is there any method to get a reference to an already opened application by providing some parameter?
Note: I am using pyatom/atomac to test the same application on a Mac machine. There I use the getAppRefByBundleID(bundleId) method to get the reference to the application, so whenever I reopen the application, I just call this method and update all the objects in the object map.
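From the pywinauto docs it looks like `Application.connect()` might be the equivalent, but I'm not sure which parameter fits my case (sketch; the title/path/process values are placeholders):
```python
from pywinauto import Application

# Attach to an already running instance instead of starting a new one.
app = Application(backend='uia').connect(title='xyz')
# Alternatively:
# app = Application(backend='uia').connect(path=appPath)
# app = Application(backend='uia').connect(process=1234)

objButton = app.window(title='xyz').window(control_type='Button', found_index=0)
```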
Thanks in advance.
|
closed
|
2019-03-05T11:35:13Z
|
2019-03-06T13:55:06Z
|
https://github.com/pywinauto/pywinauto/issues/683
|
[
"question"
] |
prathibha-ashok
| 2
|
axnsan12/drf-yasg
|
rest-api
| 151
|
Bad ref_name and can't force it
|
My `User` object is named as `Creator` in the swagger.
I tried to force it with `ref_name` in the `Meta` but it is ignored. I renamed many other objects without problems.
`creator` is the nested field referencing the user is many other objects.
```python
class UserSerializer(serializers.ModelSerializer):
class Meta:
model = User
fields = ('id', 'email', ...)
ref_name = 'User'
class ProjectSerializer(serializers.ModelSerializer):
creator = UserSerializer(read_only=True)
class Meta:
model = Project
fields = ('id', 'name', 'creator',)
```
|
closed
|
2018-06-27T12:30:13Z
|
2018-08-06T13:50:41Z
|
https://github.com/axnsan12/drf-yasg/issues/151
|
[] |
Amoki
| 4
|
aws/aws-sdk-pandas
|
pandas
| 2,997
|
Athena to Iceberg method not writing data to columns that are new in the schema
|
### Describe the bug
I have a table that was created by a Glue job. I want to append data to that table using AWS Wrangler. The writing process seems to work fine, but when I check in Athena, the columns that were not there before are added but appear to be completely empty, even though there were no nulls in my dataframe.
If I delete the rows I appended and write the data again using AWS Wrangler, the table is updated correctly, since the columns are not new anymore.
### How to Reproduce
I tried replicating the issue using just AWS Wrangler and I could not do it.
Try having a glue job create an iceberg table and then try to update this table with an extra column using wrangler.
### Expected behavior
_No response_
### Your project
_No response_
### Screenshots
_No response_
### OS
Mac
### Python version
3.10
### AWS SDK for pandas version
3.9.0
### Additional context
_No response_
|
open
|
2024-10-16T12:20:30Z
|
2024-12-06T15:54:40Z
|
https://github.com/aws/aws-sdk-pandas/issues/2997
|
[
"bug"
] |
lautarortega
| 4
|
horovod/horovod
|
pytorch
| 3,266
|
What does shard mean in the lightning PetastormDataModule?
|
Hi!
I'm playing a bit with your lightning datamodule https://github.com/horovod/horovod/blob/master/horovod/spark/lightning/datamodule.py and I can't make it work. It's complaining about shard count/number of row-groups.
My code:
```python
path = "file:///dbfs/file.parquet"
dm = PetastormDataModule(path, path, train_batch_size=2)
spark.conf.set(SparkDatasetConverter.PARENT_CACHE_DIR_URL_CONF, "file:///dbfs/tmp/petastorm/cache")
# download, etc...
dm.prepare_data()
# splits/transforms
dm.setup(stage="fit")
```
It complains on the setup stage:
```python
NoDataAvailableError: Number of row-groups in the dataset must be greater or equal to the number of requested shards. Otherwise, some of the shards will end up being empty.
<command-4307812655913348> in <module>
8
9 # splits/transforms
---> 10 dm.setup(stage="fit")
11
12 i=0
/databricks/python/lib/python3.8/site-packages/pytorch_lightning/core/datamodule.py in wrapped_fn(*args, **kwargs)
472 )
473 else:
--> 474 fn(*args, **kwargs)
475
476 return wrapped_fn
<command-2226737894299293> in setup(self, stage)
51 reader_factory = make_batch_reader
52
---> 53 self.train_reader = reader_factory(self.train_dir, num_epochs=self.num_reader_epochs,
54 cur_shard=self.cur_shard, shard_count=self.shard_count,
55 hdfs_driver=PETASTORM_HDFS_DRIVER,
/databricks/python/lib/python3.8/site-packages/petastorm/reader.py in make_batch_reader(dataset_url_or_urls, schema_fields, reader_pool_type, workers_count, shuffle_row_groups, shuffle_row_drop_partitions, predicate, rowgroup_selector, num_epochs, cur_shard, shard_count, shard_seed, cache_type, cache_location, cache_size_limit, cache_row_size_estimate, cache_extra_settings, hdfs_driver, transform_spec, filters, storage_options, zmq_copy_buffers, filesystem)
314 raise ValueError('Unknown reader_pool_type: {}'.format(reader_pool_type))
315
--> 316 return Reader(filesystem, dataset_path_or_paths,
317 schema_fields=schema_fields,
318 worker_class=ArrowReaderWorker,
/databricks/python/lib/python3.8/site-packages/petastorm/reader.py in __init__(self, pyarrow_filesystem, dataset_path, schema_fields, shuffle_row_groups, shuffle_row_drop_partitions, predicate, rowgroup_selector, reader_pool, num_epochs, cur_shard, shard_count, cache, worker_class, transform_spec, is_batched_reader, filters, shard_seed)
445
446 # 3. Filter rowgroups
--> 447 filtered_row_group_indexes, worker_predicate = self._filter_row_groups(self.dataset, row_groups, predicate,
448 rowgroup_selector, cur_shard,
449 shard_count, shard_seed)
/databricks/python/lib/python3.8/site-packages/petastorm/reader.py in _filter_row_groups(self, dataset, row_groups, predicate, rowgroup_selector, cur_shard, shard_count, shard_seed)
526
527 if cur_shard is not None or shard_count is not None:
--> 528 filtered_row_group_indexes = self._partition_row_groups(dataset, row_groups, shard_count,
529 cur_shard,
530 filtered_row_group_indexes, shard_seed)
/databricks/python/lib/python3.8/site-packages/petastorm/reader.py in _partition_row_groups(self, dataset, row_groups, shard_count, cur_shard, filtered_row_group_indexes, shard_seed)
547
548 if shard_count is not None and len(row_groups) < shard_count:
--> 549 raise NoDataAvailableError('Number of row-groups in the dataset must be greater or equal to the number of '
550 'requested shards. Otherwise, some of the shards will end up being empty.')
551
NoDataAvailableError: Number of row-groups in the dataset must be greater or equal to the number of requested shards. Otherwise, some of the shards will end up being empty.
```
Saying `NoDataAvailableError: Number of row-groups in the dataset must be greater or equal to the number of requested shards. Otherwise, some of the shards will end up being empty.
`
I'm no data engineer and I don't understand why it is complaining about the shard number. How could I fix this?
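The only workaround I can think of is rewriting the parquet with at least as many row groups as there are shards before handing it to the datamodule — roughly like this (paths are placeholders, and I'm not sure this is the intended fix):
```python
num_shards = 4  # e.g. the number of GPUs / Horovod processes

(spark.read.parquet("dbfs:/file.parquet")
      .repartition(num_shards)
      .write.mode("overwrite")
      .parquet("dbfs:/file_repartitioned.parquet"))
```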
Thanks!
|
closed
|
2021-11-09T17:45:43Z
|
2024-05-28T13:41:28Z
|
https://github.com/horovod/horovod/issues/3266
|
[] |
jiwidi
| 1
|
coqui-ai/TTS
|
pytorch
| 3,545
|
[Bug] HifiGAN Generator throwing error
|
### Describe the bug
Need to add a condition for handling NoneType in Hifigan Generator code
### To Reproduce
https://github.com/coqui-ai/TTS/blob/5dcc16d1931538e5bce7cb20c1986df371ee8cd6/TTS/vocoder/models/hifigan_generator.py#L251
If `g` is `None` (which is the default value), we will end up giving `nn.Conv1d` a `NoneType` value as input which will throw an error.
### Expected behavior
We can simply fix this issue by changing the if condition to
```python
if hasattr(self, "cond_layer") and g is not None:
    o = o + self.cond_layer(g)
```
which prevents using the conv layer when g is `None`.
### Logs
_No response_
### Environment
```shell
{
"CUDA": {
"GPU": [
"NVIDIA A100 80GB PCIe",
"NVIDIA A100 80GB PCIe",
"NVIDIA A100 80GB PCIe",
"NVIDIA A100 80GB PCIe"
],
"available": true,
"version": "12.1"
},
"Packages": {
"PyTorch_debug": false,
"PyTorch_version": "2.1.2+cu121",
"TTS": "0.22.0",
"numpy": "1.22.0"
},
"System": {
"OS": "Linux",
"architecture": [
"64bit",
"ELF"
],
"processor": "x86_64",
"python": "3.9.18",
"version": "#101-Ubuntu SMP Tue Nov 14 13:30:08 UTC 2023"
}
}
```
### Additional context
Should be a simple fix. Not sure why the code was written this way (assuming it's a bug no one noticed). I got an error when I was trying to use it in a single-speaker setting (g = `None`), hence I am reporting this.
|
closed
|
2024-01-28T17:35:49Z
|
2024-03-09T10:40:06Z
|
https://github.com/coqui-ai/TTS/issues/3545
|
[
"bug",
"wontfix"
] |
saiakarsh193
| 1
|