| repo_name (string, 9-75 chars) | topic (string, 30 classes) | issue_number (int64, 1-203k) | title (string, 1-976 chars) | body (string, 0-254k chars) | state (string, 2 classes) | created_at (string, 20 chars) | updated_at (string, 20 chars) | url (string, 38-105 chars) | labels (list, 0-9 items) | user_login (string, 1-39 chars) | comments_count (int64, 0-452) |
|---|---|---|---|---|---|---|---|---|---|---|---|
huggingface/datasets
|
nlp
| 6,728
|
Issue Downloading Certain Datasets After Setting Custom `HF_ENDPOINT`
|
### Describe the bug
This bug is triggered under the following conditions:
- Dataset repo IDs without an organization name (e.g. `bookcorpus`, `gsm8k`, `wikipedia`) trigger errors, rather than IDs in the form `A/B`.
- `HF_ENDPOINT` is set and its hostname does not match `(hub-ci.)?huggingface.co`.
- The issue occurs with `datasets>2.15.0` or `huggingface-hub>0.19.4`, for example with the latest versions `datasets==2.18.0` and `huggingface-hub==0.21.4`.
### Steps to reproduce the bug
The issue can be reproduced with the following steps:
1. install specific datasets and huggingface_hub.
```bash
pip install datasets==2.18.0
pip install huggingface_hub==0.21.4
```
2. execute python code.
```Python
import os
os.environ['HF_ENDPOINT'] = 'https://hf-mirror.com'
from datasets import load_dataset
bookcorpus = load_dataset('bookcorpus', split='train')
```
console output:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/padeoe/.local/lib/python3.10/site-packages/datasets/load.py", line 2556, in load_dataset
builder_instance = load_dataset_builder(
File "/home/padeoe/.local/lib/python3.10/site-packages/datasets/load.py", line 2228, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/home/padeoe/.local/lib/python3.10/site-packages/datasets/load.py", line 1879, in dataset_module_factory
raise e1 from None
File "/home/padeoe/.local/lib/python3.10/site-packages/datasets/load.py", line 1830, in dataset_module_factory
with fs.open(f"datasets/{path}/{filename}", "r", encoding="utf-8") as f:
File "/home/padeoe/.local/lib/python3.10/site-packages/fsspec/spec.py", line 1295, in open
self.open(
File "/home/padeoe/.local/lib/python3.10/site-packages/fsspec/spec.py", line 1307, in open
f = self._open(
File "/home/padeoe/.local/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py", line 228, in _open
return HfFileSystemFile(self, path, mode=mode, revision=revision, block_size=block_size, **kwargs)
File "/home/padeoe/.local/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py", line 615, in __init__
self.resolved_path = fs.resolve_path(path, revision=revision)
File "/home/padeoe/.local/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py", line 180, in resolve_path
repo_and_revision_exist, err = self._repo_and_revision_exist(repo_type, repo_id, revision)
File "/home/padeoe/.local/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py", line 117, in _repo_and_revision_exist
self._api.repo_info(repo_id, revision=revision, repo_type=repo_type)
File "/home/padeoe/.local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "/home/padeoe/.local/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 2413, in repo_info
return method(
File "/home/padeoe/.local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "/home/padeoe/.local/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 2286, in dataset_info
hf_raise_for_status(r)
File "/home/padeoe/.local/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 362, in hf_raise_for_status
raise HfHubHTTPError(str(e), response=response) from e
huggingface_hub.utils._errors.HfHubHTTPError: 401 Client Error: Unauthorized for url: https://hf-mirror.com/api/datasets/bookcorpus/bookcorpus.py (Request ID: Root=1-65ee8659-5ab10eec5960c63e71f2bb58;b00bdbea-fd6e-4a74-8fe0-bc4682ae090e)
```
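A quick way to confirm the mirror endpoint is actually picked up (hypothetical check using `HfApi.endpoint`; the variable must be set before `huggingface_hub` is imported):
```python
# Sanity check: HF_ENDPOINT is read at import time, so set it before importing.
import os
os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"

from huggingface_hub import HfApi

print(HfApi().endpoint)  # should print https://hf-mirror.com if the variable was picked up
```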
### Expected behavior
The dataset was downloaded correctly without any errors.
### Environment info
datasets==2.18.0
huggingface-hub==0.21.4
|
closed
|
2024-03-11T09:06:38Z
|
2024-03-15T14:52:07Z
|
https://github.com/huggingface/datasets/issues/6728
|
[] |
padeoe
| 3
|
home-assistant/core
|
python
| 140,688
|
Webdav Integration for backup does not work
|
### The problem
Backup with remote storage via webDAV integration does not work.
### What version of Home Assistant Core has the issue?
core-2025.3.3
### What was the last working version of Home Assistant Core?
core-2025.3.3
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
Webdav
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/webdav
### Diagnostics information
_No response_
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
Logger: homeassistant.components.backup
Source: components/backup/manager.py:1073
Integration: Backup (documentation, issues)
First occurred: 05:29:21 (3 occurrences)
Last logged: 21:13:32
Backup agents ['webdav.01JNP54QC3DEYPN79AAFSMQMYQ'] are not available, will backupp to ['hassio.local']
```
### Additional information
_No response_
|
closed
|
2025-03-15T20:46:17Z
|
2025-03-15T23:25:12Z
|
https://github.com/home-assistant/core/issues/140688
|
[
"needs-more-information",
"integration: webdav"
] |
holger-tangermann
| 7
|
AntonOsika/gpt-engineer
|
python
| 182
|
Doesn't quite work right on windows: tries to run .sh file
|
```
Do you want to execute this code?
If yes, press enter. Otherwise, type "no"
Executing the code...
run.sh: /mnt/c/Users/jeff/.pyenv/pyenv-win/shims/pip: /bin/sh^M: bad interpreter: No such file or directory
run.sh: line 2: $'\r': command not found
run.sh: line 3: syntax error near unexpected token `<'
'un.sh: line 3: `python main.py <file_path> <column_name> <output_file_path>
```
I even told it in the prompt: this is running on Windows, do not give me shell commands that Windows cannot execute. Yet it still generated run.sh and tried to run it.
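For illustration, the kind of platform guard that would avoid this (hypothetical sketch; the `run.bat`/`run.sh` names and the helper are assumptions, not gpt-engineer's actual code):
```python
# Hypothetical sketch: choose a platform-appropriate entry point instead of always run.sh.
import platform
import subprocess


def run_entrypoint(workdir: str) -> None:
    if platform.system() == "Windows":
        subprocess.run(["cmd", "/c", "run.bat"], cwd=workdir, check=True)
    else:
        subprocess.run(["bash", "run.sh"], cwd=workdir, check=True)
```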
|
closed
|
2023-06-19T02:47:53Z
|
2023-06-21T13:12:19Z
|
https://github.com/AntonOsika/gpt-engineer/issues/182
|
[] |
YEM-1
| 7
|
huggingface/peft
|
pytorch
| 1,452
|
peft/utils/save_and_load.py tries to connect to the hub even when HF_HUB_OFFLINE=1
|
### System Info
peft 0.8.2
axolotl v0.4.0
export HF_DATASETS_OFFLINE=1
export TRANSFORMERS_OFFLINE=1
export HF_HUB_OFFLINE=1
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [X] My own task or dataset (give details below)
### Reproduction
When I fine-tune my models offline on a GPU using LoRA and PEFT with the axolotl library, I get an error from `peft/utils/save_and_load.py`, line 146, in `get_peft_model_state_dict`:
`has_remote_config = file_exists(model_id, "config.json")`
The function tries to connect to the Hub before saving the checkpoint locally, which crashes the run and loses the fine-tuned model.
Here is the full error message:
```
Traceback (most recent call last):
File "/gpfslocalsup/pub/anaconda-py3/2021.05/envs/python-3.9.12/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/gpfslocalsup/pub/anaconda-py3/2021.05/envs/python-3.9.12/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/gpfsdswork/projects/rech/vqt/ule33dt/axolotl/src/axolotl/cli/train.py", line 59, in <module>
fire.Fire(do_cli)
File "/linkhome/rech/genrqo01/ule33dt/.local/lib/python3.9/site-packages/fire/core.py", line 141, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "/linkhome/rech/genrqo01/ule33dt/.local/lib/python3.9/site-packages/fire/core.py", line 475, in _Fire
component, remaining_args = _CallAndUpdateTrace(
File "/linkhome/rech/genrqo01/ule33dt/.local/lib/python3.9/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "/gpfsdswork/projects/rech/vqt/ule33dt/axolotl/src/axolotl/cli/train.py", line 35, in do_cli
return do_train(parsed_cfg, parsed_cli_args)
File "/gpfsdswork/projects/rech/vqt/ule33dt/axolotl/src/axolotl/cli/train.py", line 55, in do_train
return train(cfg=cfg, cli_args=cli_args, dataset_meta=dataset_meta)
File "/gpfsdswork/projects/rech/vqt/ule33dt/axolotl/src/axolotl/train.py", line 163, in train
trainer.train(resume_from_checkpoint=resume_from_checkpoint)
File "/linkhome/rech/genrqo01/ule33dt/.local/lib/python3.9/site-packages/transformers/trainer.py", line 1561, in train
return inner_training_loop(
File "/linkhome/rech/genrqo01/ule33dt/.local/lib/python3.9/site-packages/transformers/trainer.py", line 1968, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/linkhome/rech/genrqo01/ule33dt/.local/lib/python3.9/site-packages/transformers/trainer.py", line 2340, in _maybe_log_save_evaluate
self._save_checkpoint(model, trial, metrics=metrics)
File "/linkhome/rech/genrqo01/ule33dt/.local/lib/python3.9/site-packages/transformers/trainer.py", line 2416, in _save_checkpoint
self.save_model(staging_output_dir, _internal_call=True)
File "/linkhome/rech/genrqo01/ule33dt/.local/lib/python3.9/site-packages/transformers/trainer.py", line 2927, in save_model
self._save(output_dir)
File "/linkhome/rech/genrqo01/ule33dt/.local/lib/python3.9/site-packages/transformers/trainer.py", line 2999, in _save
self.model.save_pretrained(
File "/linkhome/rech/genrqo01/ule33dt/.local/lib/python3.9/site-packages/peft/peft_model.py", line 216, in save_pretrained
output_state_dict = get_peft_model_state_dict(
File "/linkhome/rech/genrqo01/ule33dt/.local/lib/python3.9/site-packages/peft/utils/save_and_load.py", line 146, in get_peft_model_state_dict
has_remote_config = file_exists(model_id, "config.json")
File "/linkhome/rech/genrqo01/ule33dt/.local/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "/linkhome/rech/genrqo01/ule33dt/.local/lib/python3.9/site-packages/huggingface_hub/hf_api.py", line 2386, in file_exists
get_hf_file_metadata(url, token=token)
File "/linkhome/rech/genrqo01/ule33dt/.local/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "/linkhome/rech/genrqo01/ule33dt/.local/lib/python3.9/site-packages/huggingface_hub/file_download.py", line 1631, in get_hf_file_metadata
r = _request_wrapper(
File "/linkhome/rech/genrqo01/ule33dt/.local/lib/python3.9/site-packages/huggingface_hub/file_download.py", line 385, in _request_wrapper
response = _request_wrapper(
File "/linkhome/rech/genrqo01/ule33dt/.local/lib/python3.9/site-packages/huggingface_hub/file_download.py", line 408, in _request_wrapper
response = get_session().request(method=method, url=url, **params)
File "/linkhome/rech/genrqo01/ule33dt/.local/lib/python3.9/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "/linkhome/rech/genrqo01/ule33dt/.local/lib/python3.9/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
File "/linkhome/rech/genrqo01/ule33dt/.local/lib/python3.9/site-packages/huggingface_hub/utils/_http.py", line 78, in send
raise OfflineModeIsEnabled(
huggingface_hub.utils._http.OfflineModeIsEnabled: Cannot reach https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1/resolve/main/config.json: offline mode is enabled. To disable it, please unset the `HF_HUB_OFFLINE` environment variable.
```
I managed to work around the bug by commenting out these lines in `save_and_load.py`:
```python
if model_id is not None:
try:
has_remote_config = file_exists(model_id, "config.json")
except (HFValidationError, EntryNotFoundError):
warnings.warn(
f"Could not find a config file in {model_id} - will assume that the vocabulary was not modified."
)
has_remote_config = False
```
### Expected behavior
It seems that `file_exists(model_id, "config.json")` should not call the Hub when the `HF_HUB_OFFLINE=1` environment variable is set.
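A minimal sketch of that kind of guard (hypothetical patch, not the actual PEFT fix; `model_id` is given an example value taken from the traceback above):
```python
# Hypothetical guard: skip the remote lookup entirely when offline mode is enabled.
import warnings

from huggingface_hub import constants, file_exists
from huggingface_hub.utils import EntryNotFoundError, HFValidationError

model_id = "mistralai/Mistral-7B-Instruct-v0.1"  # hypothetical example value

has_remote_config = False
if model_id is not None and not constants.HF_HUB_OFFLINE:
    # Only touch the Hub when offline mode is not requested.
    try:
        has_remote_config = file_exists(model_id, "config.json")
    except (HFValidationError, EntryNotFoundError):
        warnings.warn(
            f"Could not find a config file in {model_id} - will assume that the vocabulary was not modified."
        )
```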
|
closed
|
2024-02-09T18:59:45Z
|
2024-02-12T13:41:37Z
|
https://github.com/huggingface/peft/issues/1452
|
[] |
LsTam91
| 1
|
ultralytics/ultralytics
|
pytorch
| 19,105
|
Impossible to use custom YOLOv8 when loading model
|
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
_No response_
### Bug
I tried to use my own trained model for inference, but instead it downloads and uses a new yolov8n.pt.
Here is the output:
Downloading https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n.pt to 'yolov8n.pt'...
100%|██████████| 6.23M/6.23M [00:00<00:00, 42.9MB/s]
### Environment
Ultralytics YOLOv8.0.239 🚀 Python-3.10.16 torch-2.4.0+cu121 CPU (13th Gen Intel Core(TM) i7-1365U)
Setup complete ✅ (12 CPUs, 15.5 GB RAM, 299.9/1006.9 GB disk)
OS Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39
Environment Linux
Python 3.10.16
Install git
RAM 15.45 GB
CPU 13th Gen Intel Core(TM) i7-1365U
CUDA None
matplotlib ✅ 3.8.4>=3.3.0
numpy ✅ 1.24.3>=1.22.2
opencv-python ✅ 4.10.0.84>=4.6.0
pillow ✅ 10.4.0>=7.1.2
pyyaml ✅ 6.0.2>=5.3.1
requests ✅ 2.32.3>=2.23.0
scipy ✅ 1.11.4>=1.4.1
torch ✅ 2.4.0>=1.8.0
torchvision ✅ 0.19.0>=0.9.0
tqdm ✅ 4.67.1>=4.64.0
psutil ✅ 6.1.1
py-cpuinfo ✅ 9.0.0
thop ✅ 0.1.1-2209072238>=0.1.1
pandas ✅ 2.0.3>=1.1.4
seaborn ✅ 0.13.2>=0.11.0
hub-sdk ✅ 0.0.18>=0.0.2
### Minimal Reproducible Example
```python
from ultralyticsplus import YOLO

model = YOLO("path/to/best.pt")
```
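To isolate whether the behaviour comes from `ultralyticsplus`, a minimal check with the official `ultralytics` package (hypothetical snippet; `image.jpg` is a placeholder):
```python
# Hypothetical check: loading a local checkpoint with the official package should
# not trigger a download of yolov8n.pt.
from ultralytics import YOLO

model = YOLO("path/to/best.pt")
results = model.predict("image.jpg")
```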
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR!
|
open
|
2025-02-06T16:10:50Z
|
2025-02-06T17:22:03Z
|
https://github.com/ultralytics/ultralytics/issues/19105
|
[] |
nlsferrara
| 2
|
biolab/orange3
|
data-visualization
| 6,809
|
do you have any examples that use Orange3 for brain image analysis and network analysis?
|
**What's your use case?**
Use Orange3 for brain image analysis and network analysis
**What's your proposed solution?**
I want to learn how to use Orange3 for brain image analysis and network analysis
**Are there any alternative solutions?**
There are many alternatives, but I like the simplicity of Orange, and I want to follow examples if someone has already done this.
|
closed
|
2024-05-21T17:09:38Z
|
2024-06-06T15:53:35Z
|
https://github.com/biolab/orange3/issues/6809
|
[] |
cosmosanalytics
| 1
|
iperov/DeepFaceLab
|
deep-learning
| 5,463
|
Won't use GPU
|
I have a GeForce RTX 2060 and it won't work with Quick96 training. Is there any way to fix this?
|
open
|
2022-01-18T17:16:20Z
|
2023-06-08T23:18:36Z
|
https://github.com/iperov/DeepFaceLab/issues/5463
|
[] |
YGT72
| 5
|
proplot-dev/proplot
|
matplotlib
| 309
|
Latest proplot version incompatible with matplotlib 3.5
|
### Description
I think v0.9.5 might be incompatible with matplotlib 3.5.0. I installed from conda-forge so we can repin.
### Steps to reproduce
A "[Minimal, Complete and Verifiable Example](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports)" will make it much easier for maintainers to help you.
```python
# minimal reproduction
import proplot as pplt

fig, axs = pplt.subplots(nrows=3, ncols=3)
```
**Expected behavior**:
A plot without a traceback?
**Actual behavior**:
An error.
```
>>> import proplot as pplt
/Users/beckermr/miniconda3/envs/test-pp/lib/python3.10/site-packages/proplot/__init__.py:71: ProplotWarning: Rebuilding font cache. This usually happens after installing or updating proplot.
register_fonts(default=True)
>>> fig, axs = pplt.subplots(nrows=3, ncols=3)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/beckermr/miniconda3/envs/test-pp/lib/python3.10/site-packages/proplot/ui.py", line 192, in subplots
axs = fig.add_subplots(*args, **kwsubs)
File "/Users/beckermr/miniconda3/envs/test-pp/lib/python3.10/site-packages/proplot/figure.py", line 1362, in add_subplots
axs[idx] = self.add_subplot(ss, **kw)
File "/Users/beckermr/miniconda3/envs/test-pp/lib/python3.10/site-packages/proplot/figure.py", line 1248, in add_subplot
ax = super().add_subplot(ss, _subplot_spec=ss, **kwargs)
File "/Users/beckermr/miniconda3/envs/test-pp/lib/python3.10/site-packages/matplotlib/figure.py", line 772, in add_subplot
ax = subplot_class_factory(projection_class)(self, *args, **pkw)
File "/Users/beckermr/miniconda3/envs/test-pp/lib/python3.10/site-packages/matplotlib/axes/_subplots.py", line 34, in __init__
self._axes_class.__init__(self, fig, [0, 0, 1, 1], **kwargs)
File "/Users/beckermr/miniconda3/envs/test-pp/lib/python3.10/site-packages/proplot/axes/cartesian.py", line 338, in __init__
super().__init__(*args, **kwargs)
File "/Users/beckermr/miniconda3/envs/test-pp/lib/python3.10/site-packages/proplot/axes/plot.py", line 1260, in __init__
super().__init__(*args, **kwargs)
File "/Users/beckermr/miniconda3/envs/test-pp/lib/python3.10/site-packages/proplot/axes/base.py", line 783, in __init__
self._auto_share()
File "/Users/beckermr/miniconda3/envs/test-pp/lib/python3.10/site-packages/proplot/axes/base.py", line 1777, in _auto_share
child._sharey_setup(parent)
File "/Users/beckermr/miniconda3/envs/test-pp/lib/python3.10/site-packages/proplot/axes/cartesian.py", line 605, in _sharey_setup
self._sharey_limits(sharey)
File "/Users/beckermr/miniconda3/envs/test-pp/lib/python3.10/site-packages/proplot/axes/cartesian.py", line 548, in _sharey_limits
self._shared_y_axes.join(self, sharey) # share limit/scale changes
AttributeError: 'CartesianAxesSubplot' object has no attribute '_shared_y_axes'. Did you mean: '_shared_axes'?
```
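A small compatibility probe consistent with the error above (hypothetical snippet): matplotlib 3.5 consolidated the private per-axis sharing groupers into a single `_shared_axes` mapping, which is what the `AttributeError` suggestion points at.
```python
# Hypothetical probe: check which private sharing attribute this matplotlib exposes.
import matplotlib
from matplotlib.axes import Axes

print(matplotlib.__version__)
print(hasattr(Axes, "_shared_y_axes"))  # expected False on 3.5.0, matching the traceback
print(hasattr(Axes, "_shared_axes"))    # expected True on 3.5.0
```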
### Equivalent steps in matplotlib
Please try to make sure this bug is related to a proplot-specific feature. If you're not sure, try to replicate it with the [native matplotlib API](https://matplotlib.org/3.1.1/api/index.html). Matplotlib bugs belong on the [matplotlib github page](https://github.com/matplotlib/matplotlib).
```python
# your code here, if applicable
import matplotlib.pyplot as plt
```
### Proplot version
Paste the results of `import matplotlib; print(matplotlib.__version__); import proplot; print(proplot.version)` here.
```
>>> import matplotlib; print(matplotlib.__version__); import proplot; print(proplot.version)
3.5.0
0.9.5
```
|
closed
|
2021-12-10T21:29:15Z
|
2023-03-03T12:57:40Z
|
https://github.com/proplot-dev/proplot/issues/309
|
[
"bug",
"dependencies"
] |
beckermr
| 7
|
huggingface/pytorch-image-models
|
pytorch
| 1,497
|
[DOC] Add Link to the recipe for each model
|
Adding the recipe used to train each model would be a step forward in the documentation.
|
closed
|
2022-10-16T10:49:03Z
|
2024-08-20T19:29:23Z
|
https://github.com/huggingface/pytorch-image-models/issues/1497
|
[
"enhancement"
] |
mjack3
| 1
|
BeanieODM/beanie
|
pydantic
| 187
|
Validation error when getting the doc and its linked document is deleted.
|
I have looked at the docs, and according to the [docs](https://roman-right.github.io/beanie/tutorial/relations/), "If a direct link is referred to a non-existent document, after the fetching it will stay the object of the Link class." But in my case it raises a ValidationError when the linked document is deleted. Here is a sample of my code:
```python
import typing
from beanie import Document, Link

class User(Document):
    id: int
    invited_by: typing.Union[Link["User"], None] = None
```

```python
async def test_sth(db: UsersDB):
    user1 = User(id=12)
    await user1.insert()
    user2 = User(id=123, invited_by=user1)
    await user2.insert()
    await user1.delete()
    await User.find_one(User.id == int(12), fetch_links=True)
```
and it raises the following error. I don't know what I'm doing wrong or if there is a problem with my code.
```
E   pydantic.error_wrappers.ValidationError: 1 validation error for User
E   invited_by
E     value is not a valid dict (type=type_error.dict)
pydantic/main.py:331: ValidationError
```
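Based on the docs behavior quoted above, a minimal sketch of what fetching should return for a dangling reference (hypothetical check; it reuses the `User` model from the snippet above and must run inside an async context):
```python
# Hypothetical check, per the docs quote: a dangling reference should remain a Link
# object after fetching instead of raising a ValidationError.
from beanie import Link


async def check_dangling() -> None:
    user2 = await User.find_one(User.id == 123, fetch_links=True)
    if isinstance(user2.invited_by, Link):
        print("invited_by still points at a deleted document")
```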
|
closed
|
2022-01-16T19:23:53Z
|
2023-05-23T18:40:31Z
|
https://github.com/BeanieODM/beanie/issues/187
|
[
"bug"
] |
runetech0
| 3
|
graphql-python/graphene-mongo
|
graphql
| 180
|
Switch Mongoengine DB at run time
|
I have a requirement where each client's data is stored in a different DB. The schema is the same for all clients in this case, so I need to call `switch_db` while the query is being processed.
Is there any way to configure GraphQL to select the DB to be used?
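For reference, the mongoengine mechanism in question looks like the sketch below; the open question is how to apply it per request inside a GraphQL resolver (the `Client` model and alias names are hypothetical):
```python
# mongoengine's switch_db context manager, shown on a hypothetical Client model.
from mongoengine import Document, StringField, connect
from mongoengine.context_managers import switch_db


class Client(Document):
    name = StringField()


connect(alias="default", db="tenant_default")
connect(alias="tenant_a", db="tenant_a_db")

with switch_db(Client, "tenant_a") as ClientA:
    docs = list(ClientA.objects)
```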
|
open
|
2021-07-12T03:54:33Z
|
2021-07-12T03:54:33Z
|
https://github.com/graphql-python/graphene-mongo/issues/180
|
[] |
harsh04
| 0
|
encode/databases
|
sqlalchemy
| 221
|
"asyncpg.exceptions.PostgresSyntaxError" While using sqlalchemy is_(None) method in query
|
Hi friends,
I am using `sqlalchemy core + databases` for building and executing queries in my project.
My steps to build the query are as follows:
```python
from sqlalchemy import *
from databases import Database
url = 'postgresql://username:password@localhost:5432/db_name'
meta = MetaData()
eng = create_engine(url, echo=True)
database = Database(url)
my_user_table = Table("my_user_table", meta, Column("name", String), Column("age", Integer))
meta.create_all(eng)
async def run():
    await database.connect()
    query = my_user_table.update().values({"name": bindparam("name_value")}).where(my_user_table.c.age.is_(bindparam("null_value")))
    values = {"name_value": "foo", "null_value": None}
    text_query = text(str(query))
    final_query = text_query.bindparams(**values)
    result = await database.fetch_all(query=final_query)
    await database.disconnect()
    print(result)
import asyncio
asyncio.run(run())
```
But this throws an error
```
File "/Users/env_file/lib/python3.8/site-packages/databases/core.py", line 218, in fetch_all
return await self._connection.fetch_all(self._build_query(query, values))
File "/Users/env_file/lib/python3.8/site-packages/databases/backends/postgres.py", line 144, in fetch_all
rows = await self._connection.fetch(query, *args)
File "/Users/env_file/lib/python3.8/site-packages/asyncpg/connection.py", line 420, in fetch
return await self._execute(query, args, 0, timeout)
File "/Users/env_file/lib/python3.8/site-packages/asyncpg/connection.py", line 1402, in _execute
result, _ = await self.__execute(
File "/Users/env_file/lib/python3.8/site-packages/asyncpg/connection.py", line 1411, in __execute
return await self._do_execute(query, executor, timeout)
File "/Users/env_file/lib/python3.8/site-packages/asyncpg/connection.py", line 1423, in _do_execute
stmt = await self._get_statement(query, None)
File "/Users/env_file/lib/python3.8/site-packages/asyncpg/connection.py", line 328, in _get_statement
statement = await self._protocol.prepare(stmt_name, query, timeout)
File "asyncpg/protocol/protocol.pyx", line 163, in prepare
asyncpg.exceptions.PostgresSyntaxError: syntax error at or near "$2"
```
NB: it's not because of the value `None`; I tried the query with other values and got the same error.
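A variant worth trying (hypothetical simplification, reusing `my_user_table` and `database` from the snippet above): pass the SQLAlchemy Core object straight to `databases` instead of round-tripping through `text(str(query))`, which removes one layer of re-parsing.
```python
# Hypothetical variant: let databases compile the Core update itself, with the
# NULL comparison written directly instead of via a bound parameter.
async def run_update():
    query = (
        my_user_table.update()
        .values(name="foo")
        .where(my_user_table.c.age.is_(None))
    )
    await database.execute(query=query)
```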
|
open
|
2020-06-16T15:47:20Z
|
2021-07-21T14:03:44Z
|
https://github.com/encode/databases/issues/221
|
[] |
balukrishnans
| 1
|
mckinsey/vizro
|
data-visualization
| 459
|
Relationship analysis chart in demo has no flexibility in Y axis
|
### Description
If we look at the [relationship analysis page of the demo UI](https://vizro.mckinsey.com/relationship-analysis), it provides explicit controls for which variables to put on the axes:
<img width="243" alt="Screenshot 2024-05-06 at 9 58 43 PM" src="https://github.com/mckinsey/vizro/assets/102987839/9dda54f5-d985-4d13-9a2e-28c3cc2d2b25">
Those selectors are not created automatically on the `plotly` side - they are [part of Vizro source code](https://github.com/mckinsey/vizro/blob/6484827364337e418b0539bda2abdff54bdb7967/vizro-core/examples/demo/app.py#L295-L314).
So on one hand there's flexibility in selecting variables to put on axis, but on the other hand, [the range for Y axis is hard-coded](https://github.com/mckinsey/vizro/blob/6484827364337e418b0539bda2abdff54bdb7967/vizro-core/examples/demo/app.py#L134).
This in fact makes the chart empty for all Y axis options except `life_expectancy`, which is where those limits come from. As a result, if a user makes the simplest adjustment, like flipping the X and Y axes, the chart becomes empty:
<img width="1440" alt="Screenshot 2024-05-06 at 10 01 52 PM" src="https://github.com/mckinsey/vizro/assets/102987839/0848b991-3e27-4d6a-976d-6c26f1552808">
Because the range for Y axis is hard coded to be representative of `life_expectancy` values.
### Expected behavior
The page should either not provide those controls, or make them functional so that the chart works with any combination of those without hard coded ranges for any.
### Which package?
vizro
### Package version
0.1.16
### Python version
3.12
### OS
MacOS
### How to Reproduce
Added in the description.
### Output
Added in the description.
### Code of Conduct
- [X] I agree to follow the [Code of Conduct](https://github.com/mckinsey/vizro/blob/main/CODE_OF_CONDUCT.md).
|
closed
|
2024-05-06T20:04:00Z
|
2024-06-04T13:11:17Z
|
https://github.com/mckinsey/vizro/issues/459
|
[] |
yury-fedotov
| 10
|
skypilot-org/skypilot
|
data-science
| 4,922
|
[API server] Helm warning: annotation "kubernetes.io/ingress.class" is deprecated
|
```
W0310 23:08:14.543122 97112 warnings.go:70] annotation "kubernetes.io/ingress.class" is deprecated, please use 'spec.ingressClassName' instead
NAME: skypilot
```
We can choose which field to set based on the server version.
|
open
|
2025-03-10T15:16:26Z
|
2025-03-10T15:16:32Z
|
https://github.com/skypilot-org/skypilot/issues/4922
|
[
"good first issue",
"good starter issues"
] |
aylei
| 0
|
miguelgrinberg/python-socketio
|
asyncio
| 1,077
|
keep trying to reconnect
|
**Describe the bug**
Analyzing the PING/PONG log, the connection has been successful, but the client constantly tries to reconnect. Checking the traceback: `socketio.exceptions.ConnectionError: Already connected`.
**To Reproduce**
Please fill the following code example:
Socket.IO server version: `4.1.z`
*Server*
```js
const { setupWorker } = require("@socket.io/sticky");
const crypto = require("crypto");
const randomId = () => crypto.randomBytes(8).toString("hex");
const { RedisSessionStore } = require("./sessionStore");
const sessionStore = new RedisSessionStore(redisClient, redisClient1);
const { RedisMessageStore } = require("./messageStore");
const messageStore = new RedisMessageStore(redisClient);
io.use(async (socket, next) => {
const sessionID = socket.handshake.auth.sessionID;
if (sessionID) {
const session = await sessionStore.findSession(sessionID);
if (session) {
socket.sessionID = sessionID;
socket.userID = session.userID;
socket.username = session.username;
return next();
}
}
const auth_info = socket.handshake.auth.auth_info;
if (auth_info == 'admin'){
socket.userID = -1
socket.username = 'admin';
socket.room_code = 'admin_-1'
return next();
}
console.log(auth_info,33, auth_info == 'admin')
if (!auth_info) {
return next(new Error("invalid auth_info"));
}
socket.sessionID = randomId();
// socket.userID = randomId();
user_info = await sessionStore.findUserIdForToken(auth_info);
console.log(user_info, auth_info)
if (!user_info){
console.log(64)
return next(new Error("invalid auth_info not found"));
}
const [uid, source] = user_info
socket.userID = uid
socket.username = source;
socket.room_code = source + '_' + uid
next();
});
io.on("connection", async (socket) => {
// persist session
sessionStore.saveSession(socket.sessionID, {
userID: socket.userID,
username: socket.username,
connected: true,
});
```
Socket.IO client version: python-socketio 5.7.2
*Client*
```python
import socketio

sio = socketio.Client(logger=True, engineio_logger=True)

@sio.event
def connect():
    print('Connection successful')

@sio.event
def connect_error(data):
    print(14, data)
    # exit()

@sio.event
def disconnect():
    print('Disconnected')

@sio.on('private message')
def private_message(data):
    print(22, data)

@sio.event
def hello(a, b, c):
    print(a, b, c)

@sio.on('*')
def catch_all(event, data):
    print(11, event, data, 28)

if __name__ == '__main__':
    sio.connect('https://ws.xxxx.com', auth={'auth_info': 'admin'}, wait_timeout=10)
    sio.emit('private message', {'body': '1213123', 'to': 'cx_67506'})
    sio.wait()
```
**code**
```python
def _handle_reconnect(self):
    if self._reconnect_abort is None:  # pragma: no cover
        self._reconnect_abort = self.eio.create_event()
    self._reconnect_abort.clear()
    reconnecting_clients.append(self)
    attempt_count = 0
    current_delay = self.reconnection_delay
    while True:
        delay = current_delay
        current_delay *= 2
        if delay > self.reconnection_delay_max:
            delay = self.reconnection_delay_max
        delay += self.randomization_factor * (2 * random.random() - 1)
        self.logger.info(
            'Connection failed, new attempt in {:.02f} seconds'.format(
                delay))
        if self._reconnect_abort.wait(delay):
            self.logger.info('Reconnect task aborted')
            break
        attempt_count += 1
        try:
            self.connect(self.connection_url,
                         headers=self.connection_headers,
                         auth=self.connection_auth,
                         transports=self.connection_transports,
                         namespaces=self.connection_namespaces,
                         socketio_path=self.socketio_path)
        except (exceptions.ConnectionError, ValueError):
            # traceback.print_exc()
            pass
        else:
            self.logger.info('Reconnection successful')
            self._reconnect_task = None
            break
        if self.reconnection_attempts and \
                attempt_count >= self.reconnection_attempts:
            self.logger.info(
                'Maximum reconnection attempts reached, giving up')
            break
    reconnecting_clients.remove(self)
```
**log**
**Connection failed, new attempt in 5.49 seconds**
Traceback (most recent call last):
File "/Users/dai/Dev/aiguo_python/ven3a-socketio/lib/python3.10/site-packages/socketio/client.py", line 661, in _handle_reconnect
self.connect(self.connection_url,
File "/Users/dai/Dev/aiguo_python/ven3a-socketio/lib/python3.10/site-packages/socketio/client.py", line 307, in connect
raise exceptions.ConnectionError('Already connected')
socketio.exceptions.ConnectionError: Already connected
**Connection failed, new attempt in 5.21 seconds**
Traceback (most recent call last):
File "/Users/dai/Dev/aiguo_python/ven3a-socketio/lib/python3.10/site-packages/socketio/client.py", line 661, in _handle_reconnect
self.connect(self.connection_url,
File "/Users/dai/Dev/aiguo_python/ven3a-socketio/lib/python3.10/site-packages/socketio/client.py", line 307, in connect
raise exceptions.ConnectionError('Already connected')
socketio.exceptions.ConnectionError: Already connected
**Connection failed, new attempt in 5.34 seconds**
Traceback (most recent call last):
File "/Users/dai/Dev/aiguo_python/ven3a-socketio/lib/python3.10/site-packages/socketio/client.py", line 661, in _handle_reconnect
self.connect(self.connection_url,
File "/Users/dai/Dev/aiguo_python/ven3a-socketio/lib/python3.10/site-packages/socketio/client.py", line 307, in connect
raise exceptions.ConnectionError('Already connected')
**socketio.exceptions.ConnectionError: Already connected**
**log2**
```
WebSocket upgrade was successful
Received packet MESSAGE data 0{"sid":"gvr4xtgkcIBMFnAzAAAP"}
Namespace / is connected
Connection successful
Reconnection successful
Reconnection successful
Connection failed, new attempt in 4.22 seconds
Connection failed, new attempt in 4.12 seconds
Connection failed, new attempt in 5.02 seconds
Connection failed, new attempt in 5.34 seconds
Connection failed, new attempt in 5.02 seconds
Connection failed, new attempt in 5.21 seconds
Connection failed, new attempt in 4.81 seconds
Connection failed, new attempt in 5.21 seconds
Connection failed, new attempt in 4.70 seconds
Connection failed, new attempt in 5.24 seconds
Connection failed, new attempt in 5.39 seconds
Received packet PING data
Sending packet PONG data
Connection failed, new attempt in 5.17 seconds
Connection failed, new attempt in 5.15 seconds
Connection failed, new attempt in 5.46 seconds
Connection failed, new attempt in 5.06 seconds
Connection failed, new attempt in 4.51 seconds
Connection failed, new attempt in 4.81 seconds
Connection failed, new attempt in 5.16 seconds
```
**Platform:**
+ client
- Device: mac pro
- OS: macos
+ server
- Device: pc
- OS: ubuntu20.04
|
closed
|
2022-10-27T07:24:18Z
|
2024-01-04T20:08:11Z
|
https://github.com/miguelgrinberg/python-socketio/issues/1077
|
[
"question"
] |
dly667
| 9
|
s3rius/FastAPI-template
|
fastapi
| 199
|
PEP 604 Optional[]
|
Python 3.10+ introduces the | union operator into type hinting, see [PEP 604](https://www.python.org/dev/peps/pep-0604/). Instead of Union[str, int] you can write str | int. In line with other type-hinted languages, the preferred (and more concise) way to denote an optional argument in Python 3.10 and up is now Type | None, e.g. str | None or list | None.
I'm sure many people use version 3.10 and higher.
Also, the `List` type changes to the built-in `list`.
I suggest you bring the code up to PEP 604, or make it an option during installation.
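A short illustration of the requested change (hypothetical example functions, not taken from the template):
```python
from typing import List, Optional


# Current style used on older Python versions:
def old_style(names: Optional[List[str]] = None) -> Optional[int]:
    return len(names) if names is not None else None


# PEP 604 / built-in generics style on Python 3.10+:
def new_style(names: list[str] | None = None) -> int | None:
    return len(names) if names is not None else None
```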
|
open
|
2023-12-13T08:58:02Z
|
2023-12-15T14:05:27Z
|
https://github.com/s3rius/FastAPI-template/issues/199
|
[] |
Spirit412
| 3
|
graphql-python/graphene-sqlalchemy
|
graphql
| 152
|
Serializing native Python enums does not work
|
Currently using SQL enums works well, unless you use an `enum.Enum` as the base for the enum. For example, using this model:
```python
import enum
from sqlalchemy import Column, Enum, Integer
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Hairkind(enum.Enum):
    LONG = 'long'
    SHORT = 'short'

class Pet(Base):
    __tablename__ = 'pets'
    id = Column(Integer(), primary_key=True)
    hair_kind = Column(Enum(Hairkind, name='hair_kind'), nullable=False)
```
will fail badly if you try to query the hairKind field:
```
File "lib/python3.7/site-packages/graphql/execution/executor.py", line 622, in complete_leaf_value
path=path,
graphql.error.base.GraphQLError: Expected a value of type "hair_kind" but received: Hairkind.LONG
```
|
closed
|
2018-07-31T12:47:07Z
|
2023-02-25T06:58:33Z
|
https://github.com/graphql-python/graphene-sqlalchemy/issues/152
|
[] |
wichert
| 6
|
microsoft/unilm
|
nlp
| 712
|
Isn't LayoutXLM-large a public model?
|
Hi, thank you for sharing your work. I'm using the layoutxlm-base model.
I want to check layoutxlm-large too, but I can't find the model on Hugging Face.
Can I try that model?
|
closed
|
2022-05-11T09:05:47Z
|
2022-05-13T00:17:49Z
|
https://github.com/microsoft/unilm/issues/712
|
[] |
yellowjs0304
| 2
|
flaskbb/flaskbb
|
flask
| 548
|
pip install -r requirements.txt doesn't work
|
Last output of `pip install -r requirements.txt`
```
Collecting Mako==1.0.7
Using cached Mako-1.0.7.tar.gz (564 kB)
Collecting MarkupSafe==1.0
Using cached MarkupSafe-1.0.tar.gz (14 kB)
ERROR: Command errored out with exit status 1:
command: /home/<user>/flaskbb/.venv/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-vducm9fe/MarkupSafe/setup.py'"'"'; __file__='"'"'/tmp/pip-install-vducm9fe/MarkupSafe/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-install-vducm9fe/MarkupSafe/pip-egg-info
cwd: /tmp/pip-install-vducm9fe/MarkupSafe/
Complete output (5 lines):
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-install-vducm9fe/MarkupSafe/setup.py", line 6, in <module>
from setuptools import setup, Extension, Feature
ImportError: cannot import name 'Feature'
----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
```
and with `--log log.log`
```
2020-04-17T21:41:07,974 Found link https://files.pythonhosted.org/packages/e0/bf/acc45baeb2d7333ed724c30af188640d9cb0be4b28533edfc3e2ae5aad72/MarkupSafe-2.0.0a1.tar.gz#sha256=beac28ed60c8e838301226a7a85841d0af2068eba2dcb1a58c2d32d6c05e440e (from https://pypi.org/simple/markupsafe/) (requires-python:>=3.6), version: 2.0.0a1
2020-04-17T21:41:07,977 Given no hashes to check 1 links for project 'MarkupSafe': discarding no candidates
2020-04-17T21:41:07,984 Using version 1.0 (newest of versions: 1.0)
2020-04-17T21:41:07,994 Collecting MarkupSafe==1.0
2020-04-17T21:41:07,995 Created temporary directory: /tmp/pip-unpack-gqwjpz49
2020-04-17T21:41:08,009 Using cached MarkupSafe-1.0.tar.gz (14 kB)
2020-04-17T21:41:08,032 Added MarkupSafe==1.0 from https://files.pythonhosted.org/packages/4d/de/32d741db316d8fdb7680822dd37001ef7a448255de9699ab4bfcbdf4172b/MarkupSafe-1.0.tar.gz#sha256=a6be69091dac236ea9c6bc7d012beab42010fa914c459791d627dad4910eb665 (from -r requirements.txt (line 35)) to build tracker '/tmp/pip-req-tracker-ev_qmg96'
2020-04-17T21:41:08,032 Running setup.py (path:/tmp/pip-install-pom40nr4/MarkupSafe/setup.py) egg_info for package MarkupSafe
2020-04-17T21:41:08,032 Running command python setup.py egg_info
2020-04-17T21:41:08,573 Traceback (most recent call last):
2020-04-17T21:41:08,574 File "<string>", line 1, in <module>
2020-04-17T21:41:08,574 File "/tmp/pip-install-pom40nr4/MarkupSafe/setup.py", line 6, in <module>
2020-04-17T21:41:08,574 from setuptools import setup, Extension, Feature
2020-04-17T21:41:08,574 ImportError: cannot import name 'Feature'
2020-04-17T21:41:08,627 Cleaning up...
2020-04-17T21:41:08,628 Removing source in /tmp/pip-install-pom40nr4/alembic
2020-04-17T21:41:08,650 Removing source in /tmp/pip-install-pom40nr4/Flask-Limiter
2020-04-17T21:41:08,656 Removing source in /tmp/pip-install-pom40nr4/flask-whooshee
2020-04-17T21:41:08,659 Removing source in /tmp/pip-install-pom40nr4/itsdangerous
2020-04-17T21:41:08,666 Removing source in /tmp/pip-install-pom40nr4/Mako
2020-04-17T21:41:08,684 Removing source in /tmp/pip-install-pom40nr4/MarkupSafe
2020-04-17T21:41:08,844 Removed MarkupSafe==1.0 from https://files.pythonhosted.org/packages/4d/de/32d741db316d8fdb7680822dd37001ef7a448255de9699ab4bfcbdf4172b/MarkupSafe-1.0.tar.gz#sha256=a6be69091dac236ea9c6bc7d012beab42010fa914c459791d627dad4910eb665 (from -r requirements.txt (line 35)) from build tracker '/tmp/pip-req-tracker-ev_qmg96'
2020-04-17T21:41:08,844 Removed build tracker: '/tmp/pip-req-tracker-ev_qmg96'
2020-04-17T21:41:08,848 ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
2020-04-17T21:41:08,849 Exception information:
2020-04-17T21:41:08,849 Traceback (most recent call last):
2020-04-17T21:41:08,849 File "/home/<user>/flaskbb/.venv/lib/python3.6/site-packages/pip/_internal/cli/base_command.py", line 186, in _main
2020-04-17T21:41:08,849 status = self.run(options, args)
2020-04-17T21:41:08,849 File "/home/<user>/flaskbb/.venv/lib/python3.6/site-packages/pip/_internal/commands/install.py", line 331, in run
2020-04-17T21:41:08,849 resolver.resolve(requirement_set)
2020-04-17T21:41:08,849 File "/home/<user>/flaskbb/.venv/lib/python3.6/site-packages/pip/_internal/legacy_resolve.py", line 177, in resolve
2020-04-17T21:41:08,849 discovered_reqs.extend(self._resolve_one(requirement_set, req))
2020-04-17T21:41:08,849 File "/home/<user>/flaskbb/.venv/lib/python3.6/site-packages/pip/_internal/legacy_resolve.py", line 333, in _resolve_one
2020-04-17T21:41:08,849 abstract_dist = self._get_abstract_dist_for(req_to_install)
2020-04-17T21:41:08,849 File "/home/<user>/flaskbb/.venv/lib/python3.6/site-packages/pip/_internal/legacy_resolve.py", line 282, in _get_abstract_dist_for
2020-04-17T21:41:08,849 abstract_dist = self.preparer.prepare_linked_requirement(req)
2020-04-17T21:41:08,849 File "/home/<user>/flaskbb/.venv/lib/python3.6/site-packages/pip/_internal/operations/prepare.py", line 516, in prepare_linked_requirement
2020-04-17T21:41:08,849 req, self.req_tracker, self.finder, self.build_isolation,
2020-04-17T21:41:08,849 File "/home/<user>/flaskbb/.venv/lib/python3.6/site-packages/pip/_internal/operations/prepare.py", line 95, in _get_prepared_distribution
2020-04-17T21:41:08,849 abstract_dist.prepare_distribution_metadata(finder, build_isolation)
2020-04-17T21:41:08,849 File "/home/<user>/flaskbb/.venv/lib/python3.6/site-packages/pip/_internal/distributions/sdist.py", line 40, in prepare_distribution_metadata
2020-04-17T21:41:08,849 self.req.prepare_metadata()
2020-04-17T21:41:08,849 File "/home/<user>/flaskbb/.venv/lib/python3.6/site-packages/pip/_internal/req/req_install.py", line 564, in prepare_metadata
2020-04-17T21:41:08,849 self.metadata_directory = self._generate_metadata()
2020-04-17T21:41:08,849 File "/home/<user>/flaskbb/.venv/lib/python3.6/site-packages/pip/_internal/req/req_install.py", line 544, in _generate_metadata
2020-04-17T21:41:08,849 details=self.name or "from {}".format(self.link)
2020-04-17T21:41:08,849 File "/home/<user>/flaskbb/.venv/lib/python3.6/site-packages/pip/_internal/operations/build/metadata_legacy.py", line 118, in generate_metadata
2020-04-17T21:41:08,849 command_desc='python setup.py egg_info',
2020-04-17T21:41:08,849 File "/home/<user>/flaskbb/.venv/lib/python3.6/site-packages/pip/_internal/utils/subprocess.py", line 242, in call_subprocess
2020-04-17T21:41:08,849 raise InstallationError(exc_msg)
2020-04-17T21:41:08,849 pip._internal.exceptions.InstallationError: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
```
Output of `python setup.py egg_info` and result code:
```
(.venv) <user>@hosty:~/flaskbb$ python setup.py egg_info
running egg_info
writing FlaskBB.egg-info/PKG-INFO
writing dependency_links to FlaskBB.egg-info/dependency_links.txt
writing entry points to FlaskBB.egg-info/entry_points.txt
writing requirements to FlaskBB.egg-info/requires.txt
writing top-level names to FlaskBB.egg-info/top_level.txt
reading manifest file 'FlaskBB.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching 'pytest.ini'
no previously-included directories found matching 'flaskbb/themes/*/node_modules'
no previously-included directories found matching 'flaskbb/themes/*/.sass-cache'
warning: no previously-included files matching '__pycache__' found anywhere in distribution
warning: no previously-included files matching '*.sw[a-z]' found anywhere in distribution
writing manifest file 'FlaskBB.egg-info/SOURCES.txt'
(.venv) <user>@hosty:~/flaskbb$ echo $?
0
```
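A small probe of the likely culprit (hypothetical snippet): the quoted traceback shows MarkupSafe 1.0's `setup.py` doing `from setuptools import setup, Extension, Feature`, and newer setuptools releases no longer provide `Feature`.
```python
# Hypothetical probe: check whether this setuptools still exports Feature,
# which MarkupSafe 1.0's setup.py imports.
import setuptools

print(setuptools.__version__)
try:
    from setuptools import Feature  # noqa: F401
    print("Feature is still available")
except ImportError:
    print("Feature has been removed from this setuptools version")
```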
|
closed
|
2020-04-17T19:44:40Z
|
2020-06-04T18:29:32Z
|
https://github.com/flaskbb/flaskbb/issues/548
|
[] |
trick2011
| 4
|
pydantic/pydantic-ai
|
pydantic
| 846
|
ollama_example.py not working from docs
|
openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable
|
closed
|
2025-02-04T05:35:14Z
|
2025-02-04T18:11:48Z
|
https://github.com/pydantic/pydantic-ai/issues/846
|
[] |
saipr0
| 2
|
cle-b/httpdbg
|
rest-api
| 155
|
Feature Request: Counter for List of Requests
|
Just learned about httpdbg and have been enjoying using it. This is a small suggestion, but it would be nice to have a counter in the UI displaying the number of recorded requests.
Thanks for the great tool!
|
closed
|
2024-10-31T00:48:49Z
|
2024-11-01T17:50:41Z
|
https://github.com/cle-b/httpdbg/issues/155
|
[] |
erikcw
| 2
|
sunscrapers/djoser
|
rest-api
| 306
|
Allow optional user fields to be set on registration
|
It would be very helpful if optional user fields like `first_name` and `last_name` could be set in `POST /users/`. The available fields would depend on the serializer being used.
|
closed
|
2018-09-14T23:09:47Z
|
2019-01-18T17:48:30Z
|
https://github.com/sunscrapers/djoser/issues/306
|
[] |
ferndot
| 2
|
axnsan12/drf-yasg
|
rest-api
| 260
|
Detect ChoiceField type based on choices
|
### Background
According to the DRF documentation and source code, the `ChoiceField` class supports different value types.
`ChoiceFieldInspector` considers `ChoiceField` to be of string type in all cases (except in the ModelSerializer case).
### Goal
Detect the field type based on the types of the provided choices.
When all choices are integers, set the swagger type to "integer"; otherwise use "string".
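A minimal sketch of that inference rule (hypothetical helper, not drf-yasg's actual inspector code):
```python
# Hypothetical helper illustrating the proposed rule: integer choices -> "integer",
# anything else -> "string".
def infer_choice_type(choices):
    values = list(choices)
    if values and all(isinstance(v, int) and not isinstance(v, bool) for v in values):
        return "integer"
    return "string"


print(infer_choice_type([1, 2, 3]))    # "integer"
print(infer_choice_type(["a", "b"]))   # "string"
```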
### Open questions
1. Are there any other types which can be automatically inferred?
|
closed
|
2018-12-04T08:15:45Z
|
2018-12-07T12:11:14Z
|
https://github.com/axnsan12/drf-yasg/issues/260
|
[] |
mofr
| 5
|
RobertCraigie/prisma-client-py
|
pydantic
| 107
|
Add support for the Unsupported type
|
## Problem
Prisma supports an `Unsupported` type that means types that Prisma does not support yet can still be represented in the schema.
We should support it too.
[https://www.prisma.io/docs/reference/api-reference/prisma-schema-reference#unsupported](https://www.prisma.io/docs/reference/api-reference/prisma-schema-reference#unsupported)
## Suggested solution
As fields of this type are not actually supported in the Client, what we have to do is limit what actions are available in certain situations.
E.g. if a model contains a required `Unsupported` field, then `create()` and `update()` are not available.
|
open
|
2021-11-08T00:16:59Z
|
2022-02-01T15:38:21Z
|
https://github.com/RobertCraigie/prisma-client-py/issues/107
|
[
"topic: types",
"kind/feature",
"level/advanced",
"priority/low"
] |
RobertCraigie
| 0
|
modin-project/modin
|
data-science
| 7,249
|
How to shut down Ray and bring it up again in local mode
|
My program has a memory risk, and part of it seems to come from a memory leak (idle Ray workers holding a big chunk of memory). I have a for loop that independently runs chunks of a CSV file through a series of tasks. I wish to shut down Ray after each iteration to release memory and let Modin bring it up again with fresh Ray workers. However, with the following code:
```py
import pandas
import ray

for df_ in pandas.read_csv('xxx.csv', chunksize=5000):
    df_.to_csv(xxx)
    run_my_tasks(xxx)  # Modin will initialize Ray in the first iteration
    ray.shutdown()
```
I get the error below:
```
File "/home/.../lib/python3.9/site-packages/modin/core/execution/ray/common/deferred_execution.py", line 309, in _deconstruct_chain
output[out_pos] = out_pos
IndexError: list assignment index out of range
```
|
closed
|
2024-05-09T09:06:03Z
|
2024-06-15T21:00:33Z
|
https://github.com/modin-project/modin/issues/7249
|
[
"new feature/request 💬"
] |
SiRumCz
| 14
|
iMerica/dj-rest-auth
|
rest-api
| 499
|
Not getting `id_token` in response: Apple authentication.
|
I am using 3.0.0 and now I am confused about `id_token` for apple authentication. This [issue](https://github.com/iMerica/dj-rest-auth/issues/201#issue-774050426) says to use both `access_token` and `id_token` for login. When I hit the [authorisation url](https://developer.apple.com/documentation/sign_in_with_apple/request_an_authorization_to_the_sign_in_with_apple_server) with the required params, I am getting only `code`. When I use this code for login in API then I get logged in successfully for the first time and I get `access_token` and `refresh_token`. There is no trace of `id_token`.
What am I missing here?
|
open
|
2023-03-30T01:03:34Z
|
2023-05-04T05:19:08Z
|
https://github.com/iMerica/dj-rest-auth/issues/499
|
[] |
haccks
| 1
|
jupyter/nbviewer
|
jupyter
| 673
|
Error 503: GitHub API rate limit exceeded. Try again soon.
|
I am getting this error on a few notebooks, but I can't imagine that I have reached any traffic limits
[http://nbviewer.jupyter.org/github/MaayanLab/single_cell_RNAseq_Visualization/blob/master/Single%20Cell%20RNAseq%20Visualization%20Example.ipynb](http://nbviewer.jupyter.org/github/MaayanLab/single_cell_RNAseq_Visualization/blob/master/Single%20Cell%20RNAseq%20Visualization%20Example.ipynb)
[https://nbviewer.jupyter.org/github/MaayanLab/clustergrammer-widget/blob/master/DataFrame_Example.ipynb](https://nbviewer.jupyter.org/github/MaayanLab/clustergrammer-widget/blob/master/DataFrame_Example.ipynb)
It also seems to be a general issue with nbviewer since I can't reach this notebook also (linked from the front page)
[http://nbviewer.jupyter.org/github/ipython/ipython/blob/4.0.x/examples/IPython%20Kernel/Cell%20Magics.ipynb](http://nbviewer.jupyter.org/github/ipython/ipython/blob/4.0.x/examples/IPython%20Kernel/Cell%20Magics.ipynb)
|
closed
|
2017-02-22T15:50:40Z
|
2019-10-07T17:37:41Z
|
https://github.com/jupyter/nbviewer/issues/673
|
[] |
cornhundred
| 10
|
plotly/dash-bio
|
dash
| 422
|
dash bio installation error in R
|
**Description of the bug**
Error when installing dash-bio in R, problem with dashHtmlComponents.
**To Reproduce**
```
> remotes::install_github("plotly/dash-bio")
Downloading GitHub repo plotly/dash-bio@master
✔ checking for file ‘/tmp/Rtmp3t2YC5/remotes1be9102a356f/plotly-dash-bio-447ebbe/DESCRIPTION’ ...
─ preparing ‘dashBio’:
✔ checking DESCRIPTION meta-information ...
─ cleaning src
─ checking for LF line-endings in source and make files and shell scripts
─ checking for empty or unneeded directories
Removed empty directory ‘dashBio/src’
Removed empty directory ‘dashBio/tests’
─ looking to see if a ‘data/datalist’ file should be added
─ building ‘dashBio_0.1.2.tar.gz’
Installing package into ‘/home/ediman/R/x86_64-pc-linux-gnu-library/3.6’
(as ‘lib’ is unspecified)
* installing *source* package ‘dashBio’ ...
** using staged installation
** R
** data
*** moving datasets to lazyload DB
** inst
** byte-compile and prepare package for lazy loading
Error in loadNamespace(i, c(lib.loc, .libPaths()), versionCheck = vI[[i]]) :
namespace ‘dashHtmlComponents’ 1.0.1 is being loaded, but == 1.0.0 is required
Calls: <Anonymous> ... namespaceImport -> loadNamespace -> namespaceImport -> loadNamespace
Execution halted
ERROR: lazy loading failed for package ‘dashBio’
```
**Expected behavior**
Installation success.
**Session Info**
```
R version 3.6.1 (2019-07-05)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Ubuntu 18.04.3 LTS
Matrix products: default
BLAS: /usr/lib/x86_64-linux-gnu/atlas/libblas.so.3.10.3
LAPACK: /usr/lib/x86_64-linux-gnu/atlas/liblapack.so.3.10.3
locale:
[1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8
[5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 LC_PAPER=en_US.UTF-8 LC_NAME=C
[9] LC_ADDRESS=C LC_TELEPHONE=C LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
attached base packages:
[1] stats graphics grDevices utils datasets methods base
loaded via a namespace (and not attached):
[1] Rcpp_1.0.2 rstudioapi_0.10 magrittr_1.5 usethis_1.5.1 devtools_2.2.1 pkgload_1.0.2
[7] R6_2.4.0 rlang_0.4.0 tools_3.6.1 pkgbuild_1.0.5 sessioninfo_1.1.1 cli_1.1.0
[13] withr_2.1.2 ellipsis_0.3.0 remotes_2.1.0 yaml_2.2.0 assertthat_0.2.1 digest_0.6.21
[19] rprojroot_1.3-2 crayon_1.3.4 processx_3.4.1 callr_3.3.1 fs_1.3.1 ps_1.3.0
[25] curl_4.2 testthat_2.2.1 memoise_1.1.0 glue_1.3.1.9000 compiler_3.6.1 desc_1.2.0
[31] backports_1.1.4 prettyunits_1.0.2
```
|
closed
|
2019-10-01T14:55:42Z
|
2019-10-01T17:18:31Z
|
https://github.com/plotly/dash-bio/issues/422
|
[] |
Ebedthan
| 3
|
keras-team/keras
|
machine-learning
| 20,432
|
TorchModuleWrapper object has no attribute 'train' (Keras3)
|
**Description**
I am trying to integrate a `torch.nn.Module` together with Keras Layers in my neural architecture using the `TorchModuleWrapper` layer. For this, I tried to reproduce the example reported in the [documentation](https://keras.io/api/layers/backend_specific_layers/torch_module_wrapper/).
To make the code run, I have to first change the import from `from keras.src.layers import TorchModuleWrapper` to `from keras.layers import TorchModuleWrapper`.
**Actual Behavior**
I obtain an `AttributeError` when the line `print("# Output shape", model(torch.ones(1, 1, 28, 28).to("cpu")).shape)` (using the cpu rather than gpu as the example shows) is executed:
```
24 def call(self, inputs):
---> 25 x = F.relu(self.conv1(inputs))
26 x = self.pool(x)
27 x = F.relu(self.conv2(x))
AttributeError: Exception encountered when calling TorchModuleWrapper.call().
'TorchModuleWrapper' object has no attribute 'train'
Arguments received by TorchModuleWrapper.call():
• args=('tf.Tensor(shape=(1, 1, 28, 28), dtype=float32)',)
• training=None
• kwargs=<class 'inspect._empty'>
```
**Steps to Reproduce**
1. Copy the code in the documentation
2. Change the import as explained above
3. Execute the code
**Environment**
- keras v3.6.0
- torch v2.5.1
- python 3.10.15
Thank you in advance for any help!
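For context, a condensed sketch of the documented pattern, assuming the torch backend is selected before `keras` is imported (the traceback above shows a `tf.Tensor` reaching the wrapped module, so backend selection may be relevant; the `WrappedClassifier` name and shapes are hypothetical):
```python
# Hypothetical condensed repro, assuming the torch backend; not the docs' exact code.
import os
os.environ["KERAS_BACKEND"] = "torch"  # must be set before importing keras

import torch
import keras
from keras.layers import TorchModuleWrapper


class WrappedClassifier(keras.Model):
    def __init__(self):
        super().__init__()
        self.fc = TorchModuleWrapper(torch.nn.Linear(4, 2))

    def call(self, inputs):
        return self.fc(inputs)


model = WrappedClassifier()
print(model(torch.ones(1, 4)).shape)  # expected torch.Size([1, 2]) if the wrapper initializes
```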
|
closed
|
2024-10-31T13:04:18Z
|
2024-11-01T07:40:06Z
|
https://github.com/keras-team/keras/issues/20432
|
[] |
MicheleCattaneo
| 4
|
chiphuyen/stanford-tensorflow-tutorials
|
nlp
| 143
|
How to change the output of style_transfer?
|
After running the code successfully, it cropped the image, which wasn't desired. How do I change the scale at which I want to do the style transfer? I used different images as input.
I am working on Ubuntu 18.04.
|
open
|
2019-03-08T14:16:55Z
|
2019-03-08T16:41:01Z
|
https://github.com/chiphuyen/stanford-tensorflow-tutorials/issues/143
|
[] |
ghost
| 0
|
ultralytics/ultralytics
|
machine-learning
| 19,099
|
How to deploy YOLO11 detection model on CVAT nuclio?
|
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Any sample .yaml I can follow?
### Additional
_No response_
|
open
|
2025-02-06T11:18:43Z
|
2025-02-06T14:07:29Z
|
https://github.com/ultralytics/ultralytics/issues/19099
|
[
"question",
"detect"
] |
patricklau12
| 4
|
ARM-DOE/pyart
|
data-visualization
| 1,245
|
plot_ppi_map with lat/lon
|
I have some code that I've been using successfully on a Debian 10 machine running pyart v1.11.2. I've tried running it on a Debian 11 machine running pyart v1.12.5 and I get plots but no lat/lon information. See my code and two attached images. Can you explain what I might be missing?
```python
import os
import sys
import matplotlib.pyplot as plt
import pyart
import numpy as np
from datetime import datetime
import cartopy.crs as ccrs
import warnings
warnings.filterwarnings("ignore")
fname = "/home/disk/monsoon/precip/cfradial/spol_ncar/cfradial/rate/sur/20220607/cfrad.20220607_042450.162_to_20220607_043052.364_SPOL_SUR.nc"
outdir_base = '/home/disk/monsoon/precip/radar/spol_test'
# get date and time from filename
filename = os.path.basename(fname)
(prefix,start,junk,junk,junk) = filename.split('.')
start_obj = datetime. strptime(start, '%Y%m%d_%H%M%S')
start_str = start_obj.strftime('%Y/%m/%d %H:%M:%S')
datetime_str = start_obj.strftime('%Y%m%d%H%M%S')
date = start_obj.strftime('%Y%m%d')
# create outdir if necessary
outdir = outdir_base+'/'+date
if not os.path.exists(outdir):
os.makedirs(outdir)
# read input file & get elevation angles
radar = pyart.io.read(fname)
radar.scan_type = 'sur'
els = radar.fixed_angle['data']
# get lowest elevation angle
index = 0
angle = round(els[index],1)
print('lowest angle =',angle)
# check to see if image already exists
file_out = 'research.Radar_SPOL.'+datetime_str+'.ppim_rrate_'+str(angle).replace('.','')+'.png'
if not os.path.isfile(outdir+'/'+file_out):
# remove transition angle flags
subset = radar.extract_sweeps([index])
trans = subset.antenna_transition["data"]
trans[:] = 0
subset.antenna_transition["data"] = trans
# create display
# use 'subset' instead of 'radar' to define display
# then sweep will be 0 for all plots since subset contains only one sweep
display = pyart.graph.RadarMapDisplay(subset)
fig = plt.figure(figsize=(12, 4.5))
fig.tight_layout()
fig.suptitle('SPOL Rain Rates '+start_str+' UTC PPI '+str(angle)+' deg', fontsize=18)
ax = fig.add_subplot(121, projection=ccrs.PlateCarree())
display.plot_ppi_map('RATE_HYBRID', sweep=0, ax=ax, title='',
vmin=0,vmax=50,
width=400000, height=400000,
colorbar_label='RATE_HYBRID (mm/hr)',
cmap='pyart_HomeyerRainbow',
lat_lines = np.arange(22,27,.5),
lon_lines = np.arange(118, 123,1),
#axislabels=('', 'Distance from radar (km)'),
resolution = '10m')
ax = fig.add_subplot(122, projection=ccrs.PlateCarree())
display.plot_ppi_map('RATE_PID', sweep=0, ax=ax, title='',
vmin=0,vmax=50,
width=400000, height=400000,
colorbar_label='RATE_PID (mm/hr)',
#cmap='pyart_Carbone42',
cmap='pyart_HomeyerRainbow',
lat_lines = np.arange(22,27,.5),
lon_lines = np.arange(118, 123,1),
#axislabels=('', 'Distance from radar (km)'),
resolution = '10m')
# save plot
plt.savefig(outdir+'/'+file_out)
`


|
closed
|
2022-08-17T21:07:02Z
|
2024-05-14T18:55:02Z
|
https://github.com/ARM-DOE/pyart/issues/1245
|
[
"Question",
"component: pyart.graph"
] |
srbrodzik
| 27
|
strawberry-graphql/strawberry
|
django
| 3,158
|
Default values for scalar arguments passed as string
|
When declaring an optional argument with a scalar type, its default value is passed as a string in the resulting schema. This makes Strawberry-declared schemas incompatible with externally connected GraphQL consumers with strict schema checkers, such as Hasura.
The following code:
```python
from typing import Optional
import strawberry

@strawberry.type
class Query:
    @strawberry.field
    def example(self,
                baz: int,
                foo: int | None = None,
                bar: int = 10
                ) -> None:
        return None

schema = strawberry.Schema(query=Query)
```
produces the following default values in the schema:
```
...
"defaultValue": null
...
"defaultValue": "null"
...
"defaultValue": "10"
```
<details>
<summary> Schema inspection </summary>
```graphql
{
"data": {
"__schema": {
"queryType": {
"name": "Query"
},
"mutationType": null,
"types": [
{
"kind": "OBJECT",
"name": "Query",
"description": null,
"fields": [
{
"name": "example",
"description": null,
"args": [
{
"name": "baz",
"description": null,
"type": {
"kind": "NON_NULL",
"name": null,
"ofType": {
"kind": "SCALAR",
"name": "Int",
"ofType": null
}
},
"defaultValue": null
},
{
"name": "foo",
"description": null,
"type": {
"kind": "SCALAR",
"name": "Int",
"ofType": null
},
"defaultValue": "null"
},
{
"name": "bar",
"description": null,
"type": {
"kind": "NON_NULL",
"name": null,
"ofType": {
"kind": "SCALAR",
"name": "Int",
"ofType": null
}
},
"defaultValue": "10"
}
```
</details>
## System Information
Python 3.11
Ubuntu 22.10
Strawberry version: 0.209.2
|
closed
|
2023-10-18T09:52:49Z
|
2025-03-20T15:56:26Z
|
https://github.com/strawberry-graphql/strawberry/issues/3158
|
[
"bug"
] |
ichorid
| 7
|
joke2k/django-environ
|
django
| 548
|
ReadTheDocs build is broken
|
https://app.readthedocs.org/projects/django-environ/?utm_source=django-environ&utm_content=flyout
As a result, the updates for v0.12.0 have not been published to ReadTheDocs.
|
closed
|
2025-01-15T21:18:52Z
|
2025-01-16T22:15:58Z
|
https://github.com/joke2k/django-environ/issues/548
|
[] |
dgilmanAIDENTIFIED
| 2
|
donnemartin/system-design-primer
|
python
| 203
|
DNS layer to elect loadbalancer health
|
How about having the DNS layer work as service discovery? Which load balancer should be elected?
The load balancer network may cause problems, just like any other layer of the system can.

The DNS layer can use a simple mechanism to figure out which load balancer IP to resolve, so the client connects to that one.
The single line drawn between client and load balancer is actually more complicated than it looks.
|
open
|
2018-08-19T19:42:11Z
|
2020-01-18T21:01:23Z
|
https://github.com/donnemartin/system-design-primer/issues/203
|
[
"needs-review"
] |
mhf-ir
| 2
|
wkentaro/labelme
|
computer-vision
| 1,029
|
rectangle mouse line cross
|
When drawing a rectangle, I want the mouse cursor to show a crosshair so it is easier for me to position it precisely.
<img width="408" alt="微信截图_20220602200358" src="https://user-images.githubusercontent.com/6490927/171625270-8646ab55-a5d3-44df-a935-279a72cb156a.png">
|
closed
|
2022-06-02T12:05:15Z
|
2022-06-25T04:09:24Z
|
https://github.com/wkentaro/labelme/issues/1029
|
[] |
monkeycc
| 0
|
MycroftAI/mycroft-core
|
nlp
| 2,511
|
Unable to install on latest Ubuntu
|
Hello! Thanks for your time :-)
## software, hardware and version
* master pull of the mycroft-core codebase
## steps that we can use to replicate the Issue
For example:
1. Clone the repo to a machine running the latest version of ubuntu
2. Try to install/run the program with `dev_setup.sh` or `start-mycroft.sh`
3. Wait for a while during compilation, which ultimately fails
## Be as specific as possible about the expected condition, and the deviation from expected condition.
I expect the application to be able to build and start, but it's failing to do so.
## Provide log files or other output to help us see the error
When it finally fails, the last thing I get is:
```
CC src/audio/libttsmimic_la-auclient.lo
CC src/audio/libttsmimic_la-au_command.lo
CC src/audio/libttsmimic_la-audio.lo
CC src/audio/libttsmimic_la-au_none.lo
CCLD libttsmimic.la
CCLD libttsmimic_lang_usenglish.la
CCLD libttsmimic_lang_cmu_grapheme_lang.la
CCLD libttsmimic_lang_cmu_indic_lang.la
CCLD libttsmimic_lang_cmulex.la
CCLD libttsmimic_lang_cmu_grapheme_lex.la
CCLD libttsmimic_lang_cmu_us_kal.la
CCLD libttsmimic_lang_cmu_time_awb.la
CCLD libttsmimic_lang_cmu_us_kal16.la
CCLD libttsmimic_lang_cmu_us_awb.la
CCLD libttsmimic_lang_cmu_us_rms.la
CCLD libttsmimic_lang_cmu_us_slt.la
CCLD compile_regexes
/usr/bin/ld: warning: libicui18n.so.64, needed by ./.libs/libttsmimic.so, not found (try using -rpath or -rpath-link)
/usr/bin/ld: warning: libicuuc.so.64, needed by ./.libs/libttsmimic.so, not found (try using -rpath or -rpath-link)
/usr/bin/ld: ./.libs/libttsmimic.so: undefined reference to `snd_pcm_hw_params_any'
/usr/bin/ld: ./.libs/libttsmimic.so: undefined reference to `snd_pcm_hw_params_sizeof'
/usr/bin/ld: ./.libs/libttsmimic.so: undefined reference to `snd_pcm_hw_params_set_channels'
/usr/bin/ld: ./.libs/libttsmimic.so: undefined reference to `snd_pcm_drop'
/usr/bin/ld: ./.libs/libttsmimic.so: undefined reference to `snd_pcm_writei'
/usr/bin/ld: ./.libs/libttsmimic.so: undefined reference to `utext_openUTF8_64'
/usr/bin/ld: ./.libs/libttsmimic.so: undefined reference to `snd_pcm_close'
/usr/bin/ld: ./.libs/libttsmimic.so: undefined reference to `u_errorName_64'
/usr/bin/ld: ./.libs/libttsmimic.so: undefined reference to `snd_pcm_state'
/usr/bin/ld: ./.libs/libttsmimic.so: undefined reference to `snd_pcm_hw_params'
/usr/bin/ld: ./.libs/libttsmimic.so: undefined reference to `ucasemap_utf8ToLower_64'
/usr/bin/ld: ./.libs/libttsmimic.so: undefined reference to `snd_pcm_drain'
/usr/bin/ld: ./.libs/libttsmimic.so: undefined reference to `snd_config_update_free_global'
/usr/bin/ld: ./.libs/libttsmimic.so: undefined reference to `ucasemap_open_64'
/usr/bin/ld: ./.libs/libttsmimic.so: undefined reference to `ucasemap_utf8ToUpper_64'
/usr/bin/ld: ./.libs/libttsmimic.so: undefined reference to `snd_pcm_hw_params_set_format'
/usr/bin/ld: ./.libs/libttsmimic.so: undefined reference to `uregex_close_64'
/usr/bin/ld: ./.libs/libttsmimic.so: undefined reference to `uregex_setUText_64'
/usr/bin/ld: ./.libs/libttsmimic.so: undefined reference to `uregex_matches_64'
/usr/bin/ld: ./.libs/libttsmimic.so: undefined reference to `snd_pcm_delay'
/usr/bin/ld: ./.libs/libttsmimic.so: undefined reference to `snd_strerror'
/usr/bin/ld: ./.libs/libttsmimic.so: undefined reference to `uregex_openC_64'
/usr/bin/ld: ./.libs/libttsmimic.so: undefined reference to `snd_pcm_hw_params_set_access'
/usr/bin/ld: ./.libs/libttsmimic.so: undefined reference to `snd_pcm_wait'
/usr/bin/ld: ./.libs/libttsmimic.so: undefined reference to `snd_pcm_open'
/usr/bin/ld: ./.libs/libttsmimic.so: undefined reference to `snd_pcm_hw_params_set_rate'
/usr/bin/ld: ./.libs/libttsmimic.so: undefined reference to `snd_pcm_resume'
/usr/bin/ld: ./.libs/libttsmimic.so: undefined reference to `snd_pcm_prepare'
/usr/bin/ld: ./.libs/libttsmimic.so: undefined reference to `ucasemap_close_64'
/usr/bin/ld: ./.libs/libttsmimic.so: undefined reference to `uregex_reset_64'
collect2: error: ld returned 1 exit status
make[1]: *** [Makefile:2705: compile_regexes] Error 1
make[1]: *** Waiting for unfinished jobs....
```
Thanks for your time!
|
closed
|
2020-03-24T21:03:53Z
|
2020-04-27T08:54:34Z
|
https://github.com/MycroftAI/mycroft-core/issues/2511
|
[] |
metasoarous
| 8
|
feder-cr/Jobs_Applier_AI_Agent_AIHawk
|
automation
| 967
|
[FEATURE]: add undetected - chrome driver
|
### Feature summary
adding undetected - chrome driver
### Feature description
https://github.com/ultrafunkamsterdam/undetected-chromedriver
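For reference, a minimal sketch of how undetected-chromedriver is typically used on its own; the option values here are only illustrative and not a proposal for how AIHawk should wire it in:

```python
import undetected_chromedriver as uc

options = uc.ChromeOptions()
options.add_argument("--window-size=1920,1080")

# uc.Chrome uses a patched ChromeDriver, which hides common automation fingerprints.
driver = uc.Chrome(options=options)
driver.get("https://www.example.com")
print(driver.title)
driver.quit()
```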
### Motivation
_No response_
### Alternatives considered
_No response_
### Additional context
_No response_
|
closed
|
2024-11-28T16:02:20Z
|
2024-12-01T15:07:48Z
|
https://github.com/feder-cr/Jobs_Applier_AI_Agent_AIHawk/issues/967
|
[
"enhancement",
"hotfix needed"
] |
surapuramakhil
| 2
|
HIT-SCIR/ltp
|
nlp
| 435
|
dlopen: cannot load any more object with static TLS
|
python 3.6.3
ltp 4.0.9
GCC 4.8.4
```
Traceback (most recent call last):
File "pre_app.py", line 39, in <module>
from wenlp.sentence_analyser import process_long_sentence,sub_pre_comma,del_nonsense,change_negative_is_positive
File "/data/welab/nlp/speech_bot_venv/venv/lib/python3.6/site-packages/webot_nlp-1.0-py3.6.egg/wenlp/sentence_analyser.py", line 4, in <module>
from ltp import LTP
File "/data/welab/nlp/speech_bot_venv/venv/lib/python3.6/site-packages/ltp/__init__.py", line 7, in <module>
from .data import Dataset
File "/data/welab/nlp/speech_bot_venv/venv/lib/python3.6/site-packages/ltp/data/__init__.py", line 5, in <module>
from .vocab import Vocab
File "/data/welab/nlp/speech_bot_venv/venv/lib/python3.6/site-packages/ltp/data/vocab.py", line 9, in <module>
import torch
File "/data/welab/nlp/speech_bot_venv/venv/lib/python3.6/site-packages/torch/__init__.py", line 188, in <module>
_load_global_deps()
File "/data/welab/nlp/speech_bot_venv/venv/lib/python3.6/site-packages/torch/__init__.py", line 141, in _load_global_deps
ctypes.CDLL(lib_path, mode=ctypes.RTLD_GLOBAL)
File "/usr/lib/python3.6/ctypes/__init__.py", line 348, in __init__
self._handle = _dlopen(self._name, mode)
OSError: dlopen: cannot load any more object with static TLS
```
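A workaround I have seen suggested for this class of error (not verified on my side) is to import the heavy native extension first, so it claims its static TLS slots before anything else is loaded, roughly:

```python
# Import torch before any other package that loads large native libraries.
import torch  # noqa: F401

from ltp import LTP

# Usage per the LTP 4.x README, if I recall the API correctly.
ltp = LTP()
seg, hidden = ltp.seg(["他叫汤姆去拿外衣。"])
print(seg)
```

Is this the recommended way to deal with the static TLS limit, or is there a supported fix?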
|
closed
|
2020-11-10T01:24:59Z
|
2020-11-16T01:55:54Z
|
https://github.com/HIT-SCIR/ltp/issues/435
|
[] |
liuchenbaidu
| 1
|
roboflow/supervision
|
machine-learning
| 1,281
|
Machine vision
|
closed
|
2024-06-13T19:07:39Z
|
2024-06-14T13:12:04Z
|
https://github.com/roboflow/supervision/issues/1281
|
[] |
Romu10
| 0
|
|
pydantic/FastUI
|
pydantic
| 339
|
FastUI provides raw json object instead of rendered interface
|
First of all, congratulations on this outstanding framework, you rock! I wrote the small app below in a file `main.py` with routes `/users` and `/users/{id}`. For some reason I cannot figure out, my app displays "Request Error: Response not valid JSON" when I hit the endpoint `/`, and shows the raw JSON on the available routes instead of rendered content. Thanks for any help provided.
```
from datetime import date
from typing import Optional
from faker import Faker
from fastapi import FastAPI, HTTPException, Query
from fastapi.responses import HTMLResponse
from fastui import FastUI, AnyComponent, prebuilt_html, components as c
from fastui.components.display import DisplayMode, DisplayLookup
from fastui.events import GoToEvent
from pydantic import BaseModel, Field
# Determine the range of page numbers to display dynamically
MAX_VISIBLE_PAGES = 5
app = FastAPI()
# Initialize Faker library
fake = Faker()
class UserCursor(BaseModel):
id: int
prev_id: Optional[int] = None
next_id: Optional[int] = None
class UserDetail(BaseModel):
id: int
name: str
dob: date = Field(title='Date of Birth')
email: str
phone: str
address: str
city: str
state: str
country: str
zip_code: str
def page_indexation(
total_elems: int, limit: int, offset: int, num_visible_pages: int
):
# Calculate total number of pages
total_pages = (total_elems + limit - 1) // limit
# Calculate current page number
current_page = (offset // limit) + 1
# Determine start and end page numbers
if total_elems <= num_visible_pages:
start_page = 1
end_page = total_pages
elif current_page <= num_visible_pages // 2 + 1:
start_page = 1
end_page = num_visible_pages
elif current_page >= total_pages - num_visible_pages // 2:
start_page = total_pages - num_visible_pages + 1
end_page = total_pages
else:
start_page = current_page - num_visible_pages // 2
end_page = current_page + num_visible_pages // 2
return start_page, end_page, total_pages
def generate_pagination_buttons(
total_elements: int, limit: int, offset: int, num_visible_pages: int
):
start_page, end_page, total_pages = page_indexation(
total_elems=total_elements,
limit=limit,
offset=offset,
num_visible_pages=num_visible_pages
)
# Generate page number links/buttons
page_buttons = []
# Function to add page link/button
def add_page_button(page_number: int, url: str):
page_number=str(page_number).rjust(len(str(total_pages)), ' ')
page_buttons.append(
c.Button(
text=page_number,
on_click=GoToEvent(url=url),
class_name="page-button"
)
)
# Function to add ellipsis link/button
def add_ellipsis_button(url: str):
page_buttons.append(
c.Button(
text='...',
on_click=GoToEvent(url=url),
class_name="ellipsis-button"
)
)
# Add ellipsis and first page link if necessary
if start_page > 1:
add_page_button(1, f'/?offset=0&limit={limit}')
if start_page > 2:
add_ellipsis_button(
f'/?offset={max(offset - num_visible_pages * limit, 0)}&limit={limit}'
)
# Add page links/buttons for the visible range
for p in range(start_page, end_page + 1):
add_page_button(p, f'/?offset={(p - 1) * limit}&limit={limit}')
# Add ellipsis and last page link if necessary
if end_page < total_pages:
if end_page < total_pages - 1:
add_ellipsis_button(f'/?offset={(end_page + 2) * limit}&limit={limit}')
add_page_button(total_pages, f'/?offset={(total_pages - 1) * limit}&limit={limit}')
return page_buttons
# Given a model, generate a DisplayLookup for each field with respective
# type and title. Provide on_click event mapping to the URL for desired fields
def generate_display_lookups(
model: BaseModel,
on_click: dict[str, str] = {}
) -> list[DisplayLookup]:
lookups = []
for field in model.__fields__.keys():
title = model.__fields__[field].title
if model.__fields__[field].annotation == date:
mode = DisplayMode.date
lookup = DisplayLookup(field=field, title=title, mode=mode)
else:
lookup = DisplayLookup(field=field, title=title)
if field in on_click:
lookup.on_click = GoToEvent(url=on_click[field])
lookups.append(lookup)
return lookups
# Generate random users
# Number of random users to generate
def generate_users(n: int) -> list[UserDetail]:
users = []
users_cursor = {}
for i in range(n):
user = UserDetail(
id=i,
name=fake.name(),
dob=fake.date_of_birth(minimum_age=18, maximum_age=80),
email=fake.email(),
phone=fake.phone_number(),
address=fake.street_address(),
city=fake.city(),
state=fake.state(),
country=fake.country(),
zip_code=fake.zipcode()
)
users.append(user)
id_ = user.id
users_cursor[id_] = UserCursor(id=id_)
# Set up the doubly linked list
if i == 0:
users_cursor[id_].prev_id = users[len(users) - 1].id
if i > 0:
users_cursor[id_].prev_id = users[i-1].id
if i < len(users) - 1:
users_cursor[id_].next_id = users[i+1].id
if i == len(users) - 1:
users_cursor[id_].next_id = users[0].id
return users, users_cursor
def users_lookup() -> dict[int, UserDetail]:
users, _ = generate_users(100)
return {user.id: user for user in users}
def users_cursor_lookup() -> dict[int, UserCursor]:
_, users_cursor = generate_users(100)
return users_cursor
def users_list() -> list[UserDetail]:
return list(users_lookup().values())
users = users_list()
users_cursor = users_cursor_lookup()
# Endpoint for users table with pagination
@app.get("/users", response_model=FastUI, response_model_exclude_none=True)
def users_table(
limit: Optional[int] = Query(10, ge=1, le=len(users)), offset: int = Query(0, ge=0)
) -> list[AnyComponent]:
"""
Show a table of users with pagination.
"""
# Paginate users based on limit and offset
paginated_users = users[offset:offset + limit]
page_buttons=generate_pagination_buttons(
total_elements=len(users), limit=limit, offset=offset,
num_visible_pages=MAX_VISIBLE_PAGES
)
user_lookup_list=generate_display_lookups(
UserDetail, on_click={'name': '/users/{id}'}
)
table_msg=f"Displaying users {offset + 1} to {min(offset + limit, len(users))} of {len(users)}"
components_ = [
c.Heading(text='Users', level=2),
c.Table(data=paginated_users, columns=user_lookup_list),
c.Text(text=table_msg),
c.Div(components=page_buttons, class_name="pagination")
]
pages_ = [ c.Page(components=components_), ]
return pages_
@app.get("/users/{user_id}/", response_model=FastUI, response_model_exclude_none=True)
def user_profile(user_id: int) -> list[AnyComponent]:
"""
User profile page, the frontend will fetch this when the user visits `/user/{id}/`.
"""
try:
user_cursor = next(u for u in users_cursor.values() if u.id == user_id)
except StopIteration:
raise HTTPException(status_code=404, detail="User not found")
user = users_lookup()[user_id]
components_ = [
c.Button(text='< Back to Users', on_click=GoToEvent(url='/')),
c.Heading(text=user.name, level=2),
c.Details(data=user),
]
# Add the "Previous" button if there is a previous user
if user_cursor.prev_id:
button=c.Button(
text='<< Previous', on_click=GoToEvent(url=f'/users/{user_cursor.prev_id}/')
)
components_.append(button)
# Add the "Next" button if there is a next user
if user_cursor.next_id:
button=c.Button(
text='Next >>', on_click=GoToEvent(url=f'/users/{user_cursor.next_id}/')
)
components_.append(button)
page_ = c.Page(components=components_)
return [ page_ ]
# HTML landing page
@app.get('/{path:path}')
async def html_landing() -> HTMLResponse:
"""
Simple HTML page which serves the React app, comes last as it matches all paths.
"""
print(HTMLResponse(prebuilt_html(title='FastUI Demo')))
return HTMLResponse(prebuilt_html(title='FastUI Demo'))
```
|
closed
|
2024-07-13T19:55:31Z
|
2024-07-14T16:05:40Z
|
https://github.com/pydantic/FastUI/issues/339
|
[] |
brunolnetto
| 3
|
ultralytics/ultralytics
|
pytorch
| 19,476
|
YOLOv10 or YOLO11 Pruning, Masking and Fine-Tuning
|
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hello, I have a question. After pruning, I want to fine-tune with the trainer and apply masks so that weight updates do not modify the pruned weights. I applied the code below, but the mAP is very poor. Is there something wrong with my code?
Looking forward to your suggestions!
```python
def create_pruning_mask(self):
    """Recreate the pruning mask based on the zero weights in the model."""
    masks = {}
    for name, module in self.model.named_modules():
        if isinstance(module, torch.nn.Conv2d):
            # Create a mask where weights are non-zero
            mask = (torch.abs(module.weight) > 1e-6).float()
            masks[name] = mask
    return masks

def apply_pruned_mask(self):
    """Apply the masks to block gradients of pruned weights during training."""
    for name, module in self.model.named_modules():
        if isinstance(module, torch.nn.Conv2d) and name in self.masks:
            if module.weight.grad is not None:
                mask = self.masks[name].to(module.weight.device)
                if mask.size() != module.weight.grad.size():
                    raise ValueError(f"Mask and gradient size mismatch in layer {name}: "
                                     f"mask {mask.size()}, grad {module.weight.grad.size()}")
                module.weight.grad *= mask

def optimizer_step(self):
    """Perform a single step of the training optimizer with gradient clipping and EMA update."""
    self.scaler.unscale_(self.optimizer)  # unscale gradients
    self.apply_pruned_mask()  # Apply pruning mask
    torch.nn.utils.clip_grad_norm_(self.model.parameters(), max_norm=10.0)  # clip gradients
    self.scaler.step(self.optimizer)
    self.scaler.update()
    self.optimizer.zero_grad()
    if self.ema:
        self.ema.update(self.model)
```
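For comparison, I also sketched an alternative using PyTorch's built-in pruning utilities, which re-apply the mask on every forward pass. This is only an idea I am considering, not something I know the Ultralytics trainer to require:

```python
import torch
from torch.nn.utils import prune

def freeze_pruned_weights(model: torch.nn.Module, threshold: float = 1e-6) -> None:
    """Register fixed masks so weights that are already (near) zero stay zero."""
    for module in model.modules():
        if isinstance(module, torch.nn.Conv2d):
            mask = (module.weight.detach().abs() > threshold).float()
            # custom_from_mask adds a forward pre-hook that multiplies the weight
            # by this mask, so pruned entries cannot drift during fine-tuning.
            prune.custom_from_mask(module, name="weight", mask=mask)
```

Would that interact badly with the EMA or the AMP scaler in the trainer?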
### Additional
_No response_
|
open
|
2025-02-28T08:23:13Z
|
2025-03-03T02:00:36Z
|
https://github.com/ultralytics/ultralytics/issues/19476
|
[
"enhancement",
"question"
] |
Thaising-Taing
| 6
|
viewflow/viewflow
|
django
| 33
|
Separate declarative and state code
|
closed
|
2014-04-09T10:17:04Z
|
2014-05-01T09:58:12Z
|
https://github.com/viewflow/viewflow/issues/33
|
[
"request/enhancement"
] |
kmmbvnr
| 2
|
|
deepspeedai/DeepSpeed
|
machine-learning
| 6,838
|
nv-ds-chat CI test failure
|
The Nightly CI for https://github.com/microsoft/DeepSpeed/actions/runs/12226557524 failed.
|
closed
|
2024-12-09T00:21:33Z
|
2024-12-11T00:08:34Z
|
https://github.com/deepspeedai/DeepSpeed/issues/6838
|
[
"ci-failure"
] |
github-actions[bot]
| 0
|
quantmind/pulsar
|
asyncio
| 283
|
HTTP Tunneling
|
* **pulsar version**: 2.0
* **python version**: 3.5+
* **platform**: any
## Description
With pulsar 2.0, the HTTP tunnel does not work across different event loops. Since we use uvloop as the de facto default loop, we need to find an implementation that works properly. In the meantime, SSL tunnelling (SSL behind a proxy) is not supported.
## Expected behaviour
To work
## Actual behaviour
It doesn't work, the handshake does not happen.
## Steps to reproduce
The ``http.tunnel`` tests are switched off in CI, but they can still be run on local machine with the following command
```
python setup.py test -f http.tunnel
```
|
closed
|
2017-11-14T22:29:07Z
|
2017-11-16T21:21:37Z
|
https://github.com/quantmind/pulsar/issues/283
|
[
"http",
"bug"
] |
lsbardel
| 0
|
man-group/arctic
|
pandas
| 662
|
VersionStore static method is_serializable
|
#### Arctic Version
```
1.72.0
```
#### Arctic Store
```
VersionStore
```
#### Description of problem and/or code sample that reproduces the issue
Following up the change where "can_write_type" static methods were added to all VersionStore handlers:
https://github.com/manahl/arctic/pull/622
we can now have a "check_serializable(data)" static method in VersionStore which is used to answer these questions:
- detect the handler, based on the type of the supplied 'data', that should be used to write the data
- verify that this handler's can_write() returns True for the data
Users may use "check_serializable(data)" to debug fallback-to-pickling issues and to experiment with serialization as they try to cleanse their data of problematic objects, etc.
|
open
|
2018-11-20T11:09:49Z
|
2018-11-20T11:13:26Z
|
https://github.com/man-group/arctic/issues/662
|
[
"enhancement",
"feature"
] |
dimosped
| 0
|
plotly/dash
|
data-visualization
| 2,891
|
[Feature Request] tabIndex of Div should also accept number type
|
In plain React, the `tabIndex` parameter can accept a number type:

It would be better to add number type support for `tabIndex`:

|
closed
|
2024-06-18T06:42:16Z
|
2024-06-21T14:26:02Z
|
https://github.com/plotly/dash/issues/2891
|
[
"good first issue"
] |
CNFeffery
| 2
|
PaddlePaddle/ERNIE
|
nlp
| 326
|
run_sequence_labeling.py fails to run on multiple GPUs set via CUDA_VISIBLE_DEVICES=0,1,2,3; the code has a bug
|
Running run_sequence_labeling.py with multiple GPUs set via CUDA_VISIBLE_DEVICES=0,1,2,3 does not work; the code has a bug.
|
closed
|
2019-09-18T08:05:38Z
|
2020-05-28T10:52:38Z
|
https://github.com/PaddlePaddle/ERNIE/issues/326
|
[
"wontfix"
] |
hitwangshuai
| 2
|
apache/airflow
|
data-science
| 47,614
|
Databricks Operator sets its own task_key in depends_on instead of the parent task key
|
### Apache Airflow Provider(s)
databricks
### Versions of Apache Airflow Providers
apache-airflow-providers-databricks==7.2.0
### Apache Airflow version
2.10.5
### Operating System
Debian Bookworm
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
When a DatabricksWorkflowTaskGroup contains at least one task that has a child task, the launch task generates the wrong JSON payload. Tasks with parents set their own task_key in the depends_on field.
### What you think should happen instead
Tasks with one or more parents should set the parent task_key instead of their own task_key.
### How to reproduce
Create a DatabricksWorkflowTaskGroup, then add task_A >> task_B.
Task_B will put its own task_key in the "depends_on" field instead of task_A's task_key.
### Anything else
I think the issue is located in file [airflow/providers/databricks/operators/databricks.py.py](https://github.com/apache/airflow/blob/providers-databricks/7.2.0/providers/databricks/src/airflow/providers/databricks/operators/databricks.py)
The first time _generate_databricks_task_key is executed, whether or not task_id is provided, self._databricks_task_key gets set. This means that no matter how many times _generate_databricks_task_key is called, even with different task_id params, it always returns the same value: the one computed in the first call.
```python
def _generate_databricks_task_key(self, task_id: str | None = None) -> str:
    """Create a databricks task key using the hash of dag_id and task_id."""
    if not self._databricks_task_key or len(self._databricks_task_key) > 100:
        self.log.info(
            "databricks_task_key has not be provided or the provided one exceeds 100 characters and will be truncated by the Databricks API. This will cause failure when trying to monitor the task. A task_key will be generated using the hash value of dag_id+task_id"
        )
        task_id = task_id or self.task_id
        task_key = f"{self.dag_id}__{task_id}".encode()
        self._databricks_task_key = hashlib.md5(task_key).hexdigest()
        self.log.info("Generated databricks task_key: %s", self._databricks_task_key)
    return self._databricks_task_key
```
If we check the block of code that converts a task into a Databricks workflow task, it sets "task_key" to self.databricks_task_key. At this point, if _generate_databricks_task_key is called again, with or without the task_id param, it will always return the same value, because self._databricks_task_key is no longer None.
```python
def _convert_to_databricks_workflow_task(
    self, relevant_upstreams: list[BaseOperator], context: Context | None = None
) -> dict[str, object]:
    """Convert the operator to a Databricks workflow task that can be a task in a workflow."""
    base_task_json = self._get_task_base_json()
    result = {
        "task_key": self.databricks_task_key,
        "depends_on": [
            {"task_key": self._generate_databricks_task_key(task_id)}
            for task_id in self.upstream_task_ids
            if task_id in relevant_upstreams
        ],
        **base_task_json,
    }

    if self.existing_cluster_id and self.job_cluster_key:
        raise ValueError(
            "Both existing_cluster_id and job_cluster_key are set. Only one can be set per task."
        )

    if self.existing_cluster_id:
        result["existing_cluster_id"] = self.existing_cluster_id
    elif self.job_cluster_key:
        result["job_cluster_key"] = self.job_cluster_key

    return result
```
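For what it's worth, the direction I had in mind is sketched below. This is my own assumption about a possible fix, not the provider's actual code: compute the key directly when an explicit task_id is passed, and only cache the key of the operator itself.

```python
import hashlib

# Drop-in variant of the method quoted above (same attributes assumed: dag_id,
# task_id, _databricks_task_key).
def _generate_databricks_task_key(self, task_id: str | None = None) -> str:
    """Create a databricks task key using the hash of dag_id and task_id."""
    if task_id is not None:
        # Key requested for another task (e.g. an upstream dependency):
        # compute it directly and leave the cached value untouched.
        return hashlib.md5(f"{self.dag_id}__{task_id}".encode()).hexdigest()
    if not self._databricks_task_key or len(self._databricks_task_key) > 100:
        task_key = f"{self.dag_id}__{self.task_id}".encode()
        self._databricks_task_key = hashlib.md5(task_key).hexdigest()
    return self._databricks_task_key
```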
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
|
open
|
2025-03-11T13:20:57Z
|
2025-03-24T02:01:18Z
|
https://github.com/apache/airflow/issues/47614
|
[
"kind:bug",
"area:providers",
"provider:databricks"
] |
pacmora
| 1
|
KevinMusgrave/pytorch-metric-learning
|
computer-vision
| 378
|
[Question] How to use intra_var_miner for multiclass
|
Firstly, thank you for your great package and modules.
It works very fine, well documented.
Nevertheless, I struggle to determine what kind of loss I can use for binary or multiclass problems.
For example, I'm trying to create a kind of "anomaly" detector through metric learning.
Ideally, in the embedding space I would like the "correct" events to form clusters and the anomalies to sit far from them.
I have 3 classes:
- class A,
- class B and
- class other/anomaly.
There are 2 kinds of correct events (different from each other, and I want to discriminate them) and many anomalies. I tried to define a multi loss in the way shown below and got good but not perfect results.
Ideally, in the embedding space, I just want 2 clusters (A and B) and no constraints on other/anomaly, because it could be the rest of the world.
So I tried to use IntraPairVarianceLoss only on positives; I'm not sure it works here. In binary classification, I understand "pos_margin" and "neg_margin" for the miner, but what about when we want the constraint only on some classes (and not all of them)?
Also, is it coherent to use TripletMargin with ArcFace? It works quite fine, but I'm not confident.
I'm looking for some intuition about how to use losses that I understand very well for binary classification (where they work very well) in the multiclass case.
```python
# MINERS
miner = miners.TripletMarginMiner(
    margin=triplet_miner_margin,
    type_of_triplets='all',
    distance=CosineSimilarity(),
)  # maybe not relevant in multiclass, but seems to work fine

intra_var_miner = miners.PairMarginMiner(
    pos_margin=1.,
    neg_margin=1.,
    distance=CosineSimilarity())  # I would like a minimum of intra_variance but only for my "correct" class.

angular_miner = miners.AngularMiner(angle=20)  # works well in multiclass.

# REDUCER
reducer = reducers.AvgNonZeroReducer()

# LOSSES
var_loss = losses.IntraPairVarianceLoss(reducer=reducer)
arcface_loss = losses.ArcFaceLoss(3, embedding_size, margin=28.6, scale=64, reducer=reducer)
main_loss = losses.TripletMarginLoss(
    margin=triplet_loss_margin,
    distance=CosineSimilarity(),
    embedding_regularizer=regularizers.LpRegularizer(),
    reducer=reducer,
)

loss_func = losses.MultipleLosses(
    [main_loss, var_loss, arcface_loss],
    miners=[miner, intra_var_miner, None],
    # weights=multiple_loss_weights
)
```
|
closed
|
2021-11-04T16:49:55Z
|
2021-11-16T11:05:40Z
|
https://github.com/KevinMusgrave/pytorch-metric-learning/issues/378
|
[
"question"
] |
cfrancois7
| 5
|
holoviz/panel
|
plotly
| 7,554
|
Perspective pane collapses rows when updating data
|
#### ALL software version info
panel 1.5.4
Windows, MacOS, Chrome, Edge
#### Description of expected behavior and the observed behavior
There appears to be a bug with the perspective pane automatically collapsing the hierarchy when the underlying data object is updated - **this only seems to happen _after_ clicking to expand or collapse a row**. This makes it pretty annoying and difficult to use when a periodic callback is used to refresh the data.
I'd expect the depth of the hierarchy to be retained to whatever is currently visible.
#### Complete, minimal, self-contained example code that reproduces the issue
```python
import pandas as pd
import numpy as np
import panel as pn
pn.extension('perspective')
def get_data(n=100):
return pd.DataFrame({'x': np.random.choice(['A', 'B'], size=n), 'y': np.random.choice(['C', 'D'], size=n), 'z': np.random.normal(size=n)})
table = pn.pane.Perspective(get_data(), group_by=['x', 'y'], sizing_mode='stretch_both')
def update_data():
table.object = get_data()
pn.state.add_periodic_callback(update_data)
pn.Row(table, sizing_mode='stretch_width', height=1000)
```

**After collapsing "A", the rest of the levels will collapse and will get stuck**

|
open
|
2024-12-16T23:33:25Z
|
2025-01-20T21:35:52Z
|
https://github.com/holoviz/panel/issues/7554
|
[] |
rob-tay
| 0
|
akfamily/akshare
|
data-science
| 5,415
|
The AKShare get_futures_daily interface errors when fetching data before March 2018 with market='INE'
|
**Currently using Python 3.13.0, AKShare 1.15.45, and AKTools 0.0.89 on 64-bit Windows 10, executed in the terminal:**
```
>python
>>> import akshare as ak
>>> df = ak.get_futures_daily(start_date='20180301', end_date='20180331', market='DCE')
>>> df = ak.get_futures_daily(start_date='20180301', end_date='20180331', market='CFFEX')
>>> df = ak.get_futures_daily(start_date='20180301', end_date='20180331', market='SHFE')
>>> df = ak.get_futures_daily(start_date='20180301', end_date='20180331', market='GFEX')
>>> df = ak.get_futures_daily(start_date='20180301', end_date='20180331', market='CZCE')
>>>
```
**All of the above calls return data normally, but the following call fails. It is not only March 2018; so far, fetching February 2018 data also fails:**
```
>>> df = ak.get_futures_daily(start_date='20180301', end_date='20180331', market='INE')
Traceback (most recent call last):
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python313\Lib\site-packages\pandas\core\indexes\base.py", line 3805, in get_loc
return self._engine.get_loc(casted_key)
~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "index.pyx", line 167, in pandas._libs.index.IndexEngine.get_loc
File "index.pyx", line 196, in pandas._libs.index.IndexEngine.get_loc
File "pandas\\_libs\\hashtable_class_helper.pxi", line 7081, in pandas._libs.hashtable.PyObjectHashTable.get_item
File "pandas\\_libs\\hashtable_class_helper.pxi", line 7089, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 'PRODUCTGROUPID'
```
**Since I cannot attach screenshots, I can only paste part of the error output. For some unknown reason, df = ak.get_futures_daily(start_date='20180301', end_date='20180331', market='INE') also fails in the plain Python environment. It is not only this date range; fetching 2017 data fails as well, but so far only under the condition market='INE'. Can this be fixed? Thanks in advance!**
|
closed
|
2024-12-12T12:17:38Z
|
2024-12-14T09:00:38Z
|
https://github.com/akfamily/akshare/issues/5415
|
[] |
dahong38
| 3
|
svc-develop-team/so-vits-svc
|
pytorch
| 55
|
Training stopped at epoch 10000, but inference has severe noise
|
Training stops after reaching epoch 10000, and inference still produces severe noise, although I can faintly hear the speech.
Should I raise "epochs": 10000 in the config file so it keeps training, or could I have done a step wrong?
I did not use the pre-trained model files G_0.pth and D_0.pth; could that be the reason?
|
closed
|
2023-03-19T01:14:32Z
|
2023-03-31T07:14:03Z
|
https://github.com/svc-develop-team/so-vits-svc/issues/55
|
[] |
Max-Liu
| 7
|
Aeternalis-Ingenium/FastAPI-Backend-Template
|
sqlalchemy
| 30
|
TypeError: MultiHostUrl.__new__() got an unexpected keyword argument 'scheme'
|
```
backend_app | File "/usr/backend/src/api/dependencies/session.py", line 11, in <module>
backend_app | from src.repository.database import async_db
backend_app | File "/usr/backend/src/repository/database.py", line 43, in <module>
backend_app | async_db: AsyncDatabase = AsyncDatabase()
backend_app | ^^^^^^^^^^^^^^^
backend_app | File "/usr/backend/src/repository/database.py", line 15, in __init__
backend_app | self.postgres_uri: pydantic.PostgresDsn = pydantic.PostgresDsn(
backend_app | ^^^^^^^^^^^^^^^^^^^^^
backend_app | File "/usr/local/lib/python3.12/typing.py", line 1133, in __call__
backend_app | result = self.__origin__(*args, **kwargs)
backend_app | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend_app | TypeError: MultiHostUrl.__new__() got an unexpected keyword argument 'scheme'
```
|
open
|
2023-11-28T11:29:43Z
|
2024-11-12T10:25:00Z
|
https://github.com/Aeternalis-Ingenium/FastAPI-Backend-Template/issues/30
|
[] |
eshpilevskiy
| 1
|
tortoise/tortoise-orm
|
asyncio
| 994
|
how to use tortoise-orm in django3
|
Django 3 does not support an asynchronous ORM. How can I use tortoise-orm in Django 3? I can't find an example of that.
Thanks!
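So far the closest I have gotten is initializing Tortoise manually in an async entry point alongside Django, roughly as below; the module path and DB URL are placeholders, and I am not sure this is the intended way:

```python
import asyncio
from tortoise import Tortoise

async def init_tortoise() -> None:
    await Tortoise.init(
        db_url="sqlite://db.sqlite3",                   # placeholder URL
        modules={"models": ["myapp.tortoise_models"]},  # placeholder module path
    )
    await Tortoise.generate_schemas()

async def main() -> None:
    await init_tortoise()
    # ... run queries with the Tortoise models here ...
    await Tortoise.close_connections()

asyncio.run(main())
```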
|
open
|
2021-11-30T10:03:22Z
|
2021-11-30T10:20:39Z
|
https://github.com/tortoise/tortoise-orm/issues/994
|
[] |
lastshusheng
| 2
|
Evil0ctal/Douyin_TikTok_Download_API
|
api
| 210
|
Deployment problem
|
An error occurs when deploying with Docker. Below are the errors reported by Docker and some configuration screenshots. Any help would be appreciated.




|
closed
|
2023-06-08T03:08:52Z
|
2024-03-26T00:00:46Z
|
https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/210
|
[
"BUG",
"enhancement"
] |
TheShyzzcy
| 5
|
horovod/horovod
|
machine-learning
| 3,189
|
Horovod Timeline & Scalability
|
**Environment:**
1. Framework: (TensorFlow, Keras)
2. Framework version: Tensorflow-gpu 2.1.0 / Keras 2.3.1
3. Horovod version: 0.21.3
4. MPI version: OpenMPI 4.1.1 / MPI API 3.1.0
5. CUDA version: 10.1
6. NCCL version: 2.5.7.1
7. Python version: 3.7
8. Spark / PySpark version:
9. Ray version:
10. OS and version: Ubuntu 16.04
11. GCC version: 7.3.0
12. CMake version: 3.16.5
**Checklist:**
1. Did you search issues to find if somebody asked this question before? Yes
2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)? Yes
3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)? Yes
4. Did you check if you question is answered in the [troubleshooting guide](https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)? Yes
**Question:**
Hi, I'm using Horovod for scaling some deep learning project with two servers. (Server1 has three 1080Ti GPUs and Server2 has two 1080Ti GPUs)
This is performance measurement results of [my code](https://github.com/SHEELE41/Benchmarks/blob/03e5505cf77f690c60959d902d6a38a3dabde535/Pilot1/P1B2/p1b2_baseline_keras2.py).

np 1-3 means that the number of GPUs was used only on Server1(Single Node, Multi-GPU), and distributed-np2~5 means that the number of GPUs was used including Server1 and Server2(Multi-Node, Multi-GPU).
I used Infiniband for Node to Node Communication.
As shown in the table, total runtime doesn't decrease proportionally when increasing the number of workers, and I already know that this is mostly due to the constant data loading time.
I increased the number of epochs to reduce the share of data loading in the total time, but the improvement rate was still low.

I assumed the remainder time (Total - Data Loading - Horovod communication) equals the GPU computation time, and it appears to scale down well (roughly 1 / number of GPUs).
So, I have three questions.
1. I saw that each case has the same number (8) of allreduce panes in the Horovod timeline, regardless of the number of GPUs. Isn't the number of panes determined by the number of GPUs? If not, what determines it? (I got the same number of panes with tensorflow2_keras_mnist.py)

2. Each pane has a different total allreduce time, and ALLREDUCE looks like it wraps all of the sub-work (NCCL_ALLREDUCE, QUEUE, MEMCPY_IN_FUSION_BUFFER, etc.). So I estimated the total allreduce time by taking the Total Wall Duration of the first row in the HorovodAllreduce_grads_0 pane (the biggest total allreduce time, not HorovodAllreduce_grads_x_0). Is that the right way to estimate the Horovod communication time?


3. How can I reduce horovod communication time? I'm not sure I use Horovod in the right way.
Thanks for reading.
|
closed
|
2021-10-01T09:55:26Z
|
2022-12-02T17:10:42Z
|
https://github.com/horovod/horovod/issues/3189
|
[
"wontfix"
] |
SHEELE41
| 2
|
healthchecks/healthchecks
|
django
| 939
|
Truncate beginning of long request bodies instead of the end
|
I've started making use of the [feature to send log data to healthchecks](https://healthchecks.io/docs/attaching_logs/). I like that I get a truncated copy of the logs in my notification provider.
Under most circumstances I can imagine, an error in the script would show up at the end, and typically this error is truncated away.
I'm proposing a change to the truncating logic [here](https://github.com/healthchecks/healthchecks/blob/master/hc/api/transports.py#L70), something like this:
```python
if body and maxlen and len(body) > maxlen:
    body = body[-maxlen:] + "\n[truncated]"
```
but I wanted to get feedback first before spending time on this change.
This would ensure that the end of the log message is included in the notification, and the beginning of the log message is omitted.
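A small variation I also considered, on the assumption that the marker reads more naturally where the cut actually happened (at the start):

```python
if body and maxlen and len(body) > maxlen:
    # Keep the tail of the body and flag that the beginning was dropped.
    body = "[truncated]\n" + body[-maxlen:]
```

Happy to go with whichever placement you prefer.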
|
open
|
2024-01-08T00:22:40Z
|
2024-04-11T17:41:29Z
|
https://github.com/healthchecks/healthchecks/issues/939
|
[] |
StevenMassaro
| 5
|
ludwig-ai/ludwig
|
computer-vision
| 3,218
|
[FR] Allow users to set the unmarshalling mode as RAISE for BaseConfig
|
**Is your feature request related to a problem? Please describe.**
Currently, when loading the user config, Ludwig allows unknown attributes to be set. There is a TODO item in the code at `ludwig/schema/utils.py:161`
```
unknown = INCLUDE # TODO: Change to RAISE and update descriptions once we want to enforce strict schemas.
```
While changing this to RAISE by default might impact lots of users, I still believe there is value in being able to run with RAISE. That being said, it would be great if users could override this default via an environment variable.
**Describe the use case**
Allow users to enforce the schema on their config.
**Describe the solution you'd like**
The ability to override the default behaviour of INCLUDE.
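To make the request concrete, something along these lines is roughly what I have in mind (the environment variable name is made up; `INCLUDE`/`RAISE` are the standard marshmallow constants):

```python
import os
from marshmallow import INCLUDE, RAISE

# Hypothetical opt-in: the default stays permissive, strict mode via env var.
unknown = RAISE if os.environ.get("LUDWIG_SCHEMA_VALIDATION_STRICT") == "1" else INCLUDE
```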
**Describe alternatives you've considered**
I've tried monkeypatching ludwig. Although it's easily doable for a local environment, it becomes harder when sharing the monkeypatch with a team that runs everything in AWS Sagemaker.
**Additional context**
--
|
closed
|
2023-03-07T13:14:49Z
|
2023-03-17T21:12:20Z
|
https://github.com/ludwig-ai/ludwig/issues/3218
|
[
"feature"
] |
dragosmc
| 5
|
xinntao/Real-ESRGAN
|
pytorch
| 747
|
The OST dataset has many sub folders, have all the images been used for training?
|
closed
|
2024-02-07T01:21:00Z
|
2024-02-08T01:54:51Z
|
https://github.com/xinntao/Real-ESRGAN/issues/747
|
[] |
Note-Liu
| 0
|
|
PaddlePaddle/models
|
computer-vision
| 5,254
|
The documentation of the M3D-RPN model under 3D detection in the PaddleCV model zoo has problems
|
In the documentation under this directory, in the quick start section, after `cd M3D-RPN`,
running `ln -s /path/to/kitti dataset/kitti` reports:
ln: failed to create symbolic link "dataset/kitti": No such file or directory
Also: this model was developed on 1.8; when will a 2.0 version be provided?
|
open
|
2021-02-01T03:26:28Z
|
2024-02-26T05:09:17Z
|
https://github.com/PaddlePaddle/models/issues/5254
|
[] |
Sqhttwl
| 2
|
plotly/dash
|
jupyter
| 3,208
|
Add support for retrieving `HTMLElement` by Dash component ID in clientside callbacks.
|
This has been [discussed on the forum](https://community.plotly.com/t/how-to-use-document-queryselector-with-non-string-ids-in-clientside-callback/91146). A feature request seems more appropriate.
**Is your feature request related to a problem? Please describe.**
When using clientside callbacks and pattern matching, the idiomatic way of getting access to third-party JavaScript objects seems to be (an example from AG Grid):
```python
clientside_callback(
    """
    (grid_id, html_id, ...) => {
        // works
        const gridApi = dash_ag_grid.getApi(grid_id);
        // doesn't work
        const el = document.querySelector(`div#${html_id}`);
    }
    """,
    Output(MATCH, "id"),
    Input({"type": "ag-grid", "index": MATCH}, "id"),
    Input({"type": "html-element", "index": MATCH}, "id"),
    # ... any other inputs that would be used here to trigger this behavior
)
```
**Describe the solution you'd like**
I would like Dash to expose a method like `dash_clientside.get_element_by_id(id: String | Object<String, String | _WildCard>): HTMLElement` which takes advantage of dash internals correctly to replicate this use case.
**Describe alternatives you've considered**
There is a [forum post](https://community.plotly.com/t/adding-component-via-clientside-callbacks/76535/5?u=ctdunc) describing an implementation of `stringifyId` that can query HTMLElements using `document.querySelector`. However, even the author [acknowledges](https://community.plotly.com/t/how-to-use-document-queryselector-with-non-string-ids-in-clientside-callback/91146/2?u=ctdunc) that the behavior may not be consistent, and runtime checks are necessary to ensure correctness. Plus, if Dash ever changes how object IDs are serialized for use in the DOM, any projects depending on this solution will break in unexpected ways.
|
open
|
2025-03-11T12:59:20Z
|
2025-03-11T13:58:18Z
|
https://github.com/plotly/dash/issues/3208
|
[
"feature",
"P2"
] |
ctdunc
| 1
|
InstaPy/InstaPy
|
automation
| 6,241
|
Already unfollowed 'username'! or a private user that rejected your req
|
<!-- Did you know that we have a Discord channel ? Join us: https://discord.gg/FDETsht -->
<!-- Is this a Feature Request ? Please, check out our Wiki first https://github.com/timgrossmann/InstaPy/wiki -->
## Expected Behavior
## Current Behavior
```
INFO [2021-06-19 17:23:14] [my.account] Ongoing Unfollow [1/426]: now unfollowing 'b'carmen_802020''...
INFO [2021-06-19 17:23:17] [my.account] --> Already unfollowed 'carmen_802020'! or a private user that rejected your req
INFO [2021-06-19 17:23:20] [my.account] Ongoing Unfollow [1/426]: now unfollowing 'b'_sofiabajusova_''...
INFO [2021-06-19 17:23:23] [my.account] --> Already unfollowed '_sofiabajusova_'! or a private user that rejected your req
INFO [2021-06-19 17:23:27] [my.account] Ongoing Unfollow [1/426]: now unfollowing 'b'petr.sakha777''...
INFO [2021-06-19 17:23:31] [my.account] --> Already unfollowed 'petr.sakha777'! or a private user that rejected your req
INFO [2021-06-19 17:23:35] [my.account] Ongoing Unfollow [1/426]: now unfollowing 'b'smirnovindokitaiskii''...
INFO [2021-06-19 17:23:38] [my.account] --> Already unfollowed 'smirnovindokitaiskii'! or a private user that rejected your req
INFO [2021-06-19 17:23:41] [my.account] Ongoing Unfollow [1/426]: now unfollowing 'b'mychailo_jatzkiv''...
```
But I'm still following them.
## Possible Solution (optional)
unfollow no matter what
## InstaPy configuration
Normal config
|
open
|
2021-06-19T15:25:31Z
|
2021-07-21T00:19:18Z
|
https://github.com/InstaPy/InstaPy/issues/6241
|
[
"wontfix"
] |
Tr1pke
| 1
|
vaexio/vaex
|
data-science
| 1,229
|
[BUG-REPORT] Columns with spaces throw error with from_dict
|
Description
There seems to be a problem with columns that have spaces in them. I provided an example below that clearly demonstrates it.
```python
df_pd = {}
df_pd['A'] = [0]
df_pd['SHORT VOLUME'] = ['4563']
df_pd['SHORT_VOLUME'] = ['4563']

df = vaex.from_dict(df_pd)
display(df)
```
Result:
(screenshot: the 'SHORT VOLUME' column comes back empty)
What I would expect:
Both "short volume columns" would have values in them.
Let me know if you have any questions
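A workaround I was about to try (untested, so just an assumption) is to build the frame through pandas instead of from_dict:

```python
import pandas as pd
import vaex

pdf = pd.DataFrame({'A': [0], 'SHORT VOLUME': ['4563'], 'SHORT_VOLUME': ['4563']})
df = vaex.from_pandas(pdf)  # may or may not hit the same column-name issue
print(df)
```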
Software information
Vaex version v4.0.0-alpha.13
Vaex was installed via: pip
OS: Mac OSX
|
open
|
2021-02-25T15:41:53Z
|
2021-03-09T23:52:43Z
|
https://github.com/vaexio/vaex/issues/1229
|
[
"priority: high"
] |
foooooooooooooooobar
| 3
|
StackStorm/st2
|
automation
| 5,908
|
Keystore RBAC Configuration Issues
|
## SUMMARY
Changes to RBAC to incorporate keystore items have created various issues with the config that cannot be corrected other than by assigning users "admin" roles. First, action/workflow permissions granted to a user by the RBAC role config DO NOT apply to keystore operations executed within them. This includes any internal client functions coded in an action and any tasks that call keystore actions within the workflow. Second, there is no way to configure RBAC to work around this limitation: none of the keystore operations can be configured in the global RBAC context, and they can only be applied to individual keys known by name, which does not allow the creation of any new system-level keys (since the name/resource ID is not known until the action is run). As a result, you cannot even work around the issue by creating a config that gives a user "Admin" access to keystore items while limiting their ability to execute actions within the system.
Ideally, RBAC would be updated to allow ALL of the keystore operations
https://github.com/StackStorm/st2/blob/606f42f41ca4fd2ed69da43d6ea124a76ad826a2/st2common/st2common/rbac/types.py#L369
to be defined globally.
https://github.com/StackStorm/st2/blob/606f42f41ca4fd2ed69da43d6ea124a76ad826a2/st2common/st2common/rbac/types.py#L437
Along with the global config options, the RBAC config should incorporate the ability to define keystore resource IDs using a regex filter so it could allow for very granular access to specific (or groups of specific items) in the keystore for each different operation on a per user/role basis.
### STACKSTORM VERSION
3.7 and greater with RBAC enabled
##### OS, environment, install method
Centos 8/Rocky Linux
## Steps to reproduce the problem
Create an RBAC config that allows a user to perform an action that includes the reading or writing of any keystore item and run the workflow as that user.
## Expected Results
Action permission should allow the workflow to be executed.
## Actual Results
Workflow fails at task/action that attempts to perform the keystore action.
|
open
|
2023-02-20T16:39:28Z
|
2023-02-20T20:07:00Z
|
https://github.com/StackStorm/st2/issues/5908
|
[] |
jamesdreid
| 2
|
vitalik/django-ninja
|
rest-api
| 1,290
|
[BUG] JWTAuth() is inconsistent with django authentication?
|
```
@api.get(
path="/hello-user",
response=UserSchema,
auth=[JWTAuth()]
)
def hello_user(request):
return request.user
>>>
"GET - hello_user /api/hello-user"
Unauthorized: /api/hello-user
```
When disabling auth
```
@api.get(
path="/hello-user",
response=UserSchema,
# auth=[JWTAuth()]
)
def hello_user(request):
return request.user
>>>
"GET - hello_user /api/hello-user"
[02/Sep/2024 16:50:14] "GET /api/hello-user HTTP/1.1" 200 113
{"username": "neldivad", "is_authenticated": true, "email": "neldivad@gmail.com"}
# ??? Django says I'm authenticated but Ninja disagrees ???
```
This decorator is frustrating to use: some apps get authenticated and sometimes they don't.
I tried logging out and logging in from the admin page, tried a different browser, and tried incognito. This JWT auth is the one that has been giving me a huge issue.
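For completeness, this is how I am calling the endpoint from outside the browser. My (possibly wrong) understanding is that `JWTAuth()` only looks at the `Authorization: Bearer ...` header and ignores the Django session cookie, which would explain why the browser only appears authenticated when auth is disabled:

```python
import requests

token = "<access token from the JWT token endpoint>"  # placeholder
resp = requests.get(
    "http://localhost:8000/api/hello-user",
    headers={"Authorization": f"Bearer {token}"},
)
print(resp.status_code, resp.json())
```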
|
open
|
2024-09-02T08:53:50Z
|
2024-09-21T06:03:49Z
|
https://github.com/vitalik/django-ninja/issues/1290
|
[] |
neldivad
| 1
|
httpie/cli
|
rest-api
| 960
|
Convert Httpie to other http requests
|
Excuse me, how can I convert an HTTPie invocation into other HTTP request formats, such as cURL, etc.?
|
closed
|
2020-08-20T06:04:06Z
|
2020-08-20T12:11:46Z
|
https://github.com/httpie/cli/issues/960
|
[] |
wnjustdoit
| 1
|
aminalaee/sqladmin
|
asyncio
| 875
|
Add RichTextField to any field, control in ModelView
|
### Checklist
- [x] There are no similar issues or pull requests for this yet.
### Is your feature related to a problem? Please describe.
The current solution for CKEditor is not suitable (or reusable) if I have a model with two fields that require CKEditor, or any other model whose field has a different name than `content`. For each such case, I need to add a separate `edit_template.html` file in order to handle it.
### Describe the solution you would like.
I would like to control it on the code, like
```python
class PostAdmin(ModelView, model=Post):
    rich_fields = {'content'}
```
### Describe alternatives you considered
_No response_
### Additional context
_No response_
|
open
|
2025-01-24T13:20:50Z
|
2025-01-24T13:20:50Z
|
https://github.com/aminalaee/sqladmin/issues/875
|
[] |
mmzeynalli
| 0
|
jupyter-book/jupyter-book
|
jupyter
| 1,787
|
Convert admonitions to html when launching notebook
|
### Context
Problem: a note/tip/admonition is not rendered properly when the notebook is opened with Binder/Colab/...
### Proposal
Automatically convert notes/tips/admonitions to standalone HTML in the notebook (with the corresponding `<style>`).
### Tasks and updates
I can try to do it, but I am not sure on the feasibility and would appreciate guidance.
|
closed
|
2022-07-21T16:30:58Z
|
2022-07-22T17:36:04Z
|
https://github.com/jupyter-book/jupyter-book/issues/1787
|
[
"enhancement"
] |
fortierq
| 0
|
polakowo/vectorbt
|
data-visualization
| 394
|
Having trouble implementing backtesting, pf.stat(),pf.order.records_readable can't generate the result
|
Hi, I encountered some problems while running my strategy. These are the correct results; it worked fine yesterday.


But today, when I try to run the same code again, some errors happen:
pf.order.records_readable -> 'portfolio' object has no attribute 'order'
portfolio.stats() -> unsupported operand type(s) for *: 'int' and 'NoneType'


Also, has the init_cash=100 parameter been removed?
Thanks
|
closed
|
2022-02-23T12:24:04Z
|
2022-03-01T15:26:56Z
|
https://github.com/polakowo/vectorbt/issues/394
|
[] |
npc945702
| 2
|
InstaPy/InstaPy
|
automation
| 6,211
|
session.like_by_tags Instapy Crash
|
<!-- Did you know that we have a Discord channel ? Join us: https://discord.gg/FDETsht -->
<!-- Is this a Feature Request ? Please, check out our Wiki first https://github.com/timgrossmann/InstaPy/wiki -->
## Expected Behavior
No Instapy Crash
## Current Behavior
Follow by tag, Session abort
## Possible Solution (optional)
## InstaPy configuration
During the session abort, I get this message:
```
Traceback (most recent call last):
File "C:\Users\XXXX\Desktop\Instapy\V1.py", line 68, in
session.like_by_tags(like_tags, amount=random.randint(9, 13))
File "C:\Users\XXXX\AppData\Local\Programs\Python\Python39\lib\site-packages\instapy\instapy.py", line 2081, in like_by_tags
follow_state, msg = follow_user(
File "C:\Users\XXXX\AppData\Local\Programs\Python\Python39\lib\site-packages\instapy\unfollow_util.py", line 557, in follow_user
follow_state, msg = verify_action(
File "C:\Users\XXXX\AppData\Local\Programs\Python\Python39\lib\site-packages\instapy\unfollow_util.py", line 1560, in verify_action
button.click()
File "C:\Users\XXXX\AppData\Local\Programs\Python\Python39\lib\site-packages\selenium\webdriver\remote\webelement.py", line 80, in click
self._execute(Command.CLICK_ELEMENT)
File "C:\Users\XXXX\AppData\Local\Programs\Python\Python39\lib\site-packages\selenium\webdriver\remote\webelement.py", line 633, in _execute
return self._parent.execute(command, params)
File "C:\Users\XXXX\AppData\Local\Programs\Python\Python39\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 321, in execute
self.error_handler.check_response(response)
File "C:\Users\XXXX\AppData\Local\Programs\Python\Python39\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.ElementClickInterceptedException: Message: Element is not clickable at point (225,128) because another element
obscures it
```
|
open
|
2021-06-03T13:12:07Z
|
2021-07-21T00:19:38Z
|
https://github.com/InstaPy/InstaPy/issues/6211
|
[
"wontfix"
] |
RainbowTob
| 1
|
skypilot-org/skypilot
|
data-science
| 4,765
|
[Reservations] Automatic purchases of reservations on clouds
|
A user reported that they want SkyPilot to automatically purchase AWS capacity blocks and use them for the jobs to run, so they don't have to go through the manual process of purchasing one and passing it to `sky launch`.
|
open
|
2025-02-20T01:27:37Z
|
2025-02-20T01:27:37Z
|
https://github.com/skypilot-org/skypilot/issues/4765
|
[] |
Michaelvll
| 0
|
Esri/arcgis-python-api
|
jupyter
| 1,385
|
AGOLAdminManager history method fails if data_format parameter is df or raw
|
**Describe the bug**
Using the AGOLAdminManager history method to query the AGOL history API fails with a JSONDecodeError when the data_format parameter is set to 'raw' or 'df'. It only works if data_format is set to 'csv'.
**To Reproduce**
Steps to reproduce the behavior:
```python
from arcgis.gis import GIS
from datetime import datetime
gis = GIS(agol_url, agol_username, agol_pw)
starttime = datetime(2022,11,10,19,0)
endtime = datetime(2022,11,10,23,59)
df=gis.admin.history(starttime, endtime, num=10000, all_events=True, data_format="df")
```
error:
```python
---------------------------------------------------------------------------
JSONDecodeError Traceback (most recent call last)
~\AppData\Local\ESRI\conda\envs\eim2\lib\site-packages\requests\models.py in json(self, **kwargs)
909 try:
--> 910 return complexjson.loads(self.text, **kwargs)
911 except JSONDecodeError as e:
~\AppData\Local\ESRI\conda\envs\eim2\lib\json\__init__.py in loads(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
345 parse_constant is None and object_pairs_hook is None and not kw):
--> 346 return _default_decoder.decode(s)
347 if cls is None:
~\AppData\Local\ESRI\conda\envs\eim2\lib\json\decoder.py in decode(self, s, _w)
336 """
--> 337 obj, end = self.raw_decode(s, idx=_w(s, 0).end())
338 end = _w(s, end).end()
~\AppData\Local\ESRI\conda\envs\eim2\lib\json\decoder.py in raw_decode(self, s, idx)
354 except StopIteration as err:
--> 355 raise JSONDecodeError("Expecting value", s, err.value) from None
356 return obj, end
JSONDecodeError: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
JSONDecodeError Traceback (most recent call last)
~\AppData\Local\ESRI\conda\envs\eim2\lib\site-packages\arcgis\gis\_impl\_con\_connection.py in _handle_response(self, resp, file_name, out_path, try_json, force_bytes, ignore_error_key)
884 try:
--> 885 data = resp.json()
886 except JSONDecodeError:
~\AppData\Local\ESRI\conda\envs\eim2\lib\site-packages\requests\models.py in json(self, **kwargs)
916 else:
--> 917 raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)
918
JSONDecodeError: [Errno Expecting value] <html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>
: 0
During handling of the above exception, another exception occurred:
Exception Traceback (most recent call last)
<ipython-input-148-0c40d9041638> in <module>
----> 1 sdf=gis.admin.history(starttime, endtime, all_events=True, data_format="df")
~\AppData\Local\ESRI\conda\envs\eim2\lib\site-packages\arcgis\gis\admin\agoladmin.py in history(self, start_date, to_date, num, all_events, event_ids, event_types, actors, owners, actions, ips, sort_order, data_format, save_folder)
417 data = []
418
--> 419 res = self._gis._con.post(url, params)
420 data.extend(res["items"])
421 while len(res["items"]) > 0 and res["nextKey"]:
~\AppData\Local\ESRI\conda\envs\eim2\lib\site-packages\arcgis\gis\_impl\_con\_connection.py in post(self, path, params, files, **kwargs)
1405 if return_raw_response:
1406 return resp
-> 1407 return self._handle_response(
1408 resp=resp,
1409 out_path=out_path,
~\AppData\Local\ESRI\conda\envs\eim2\lib\site-packages\arcgis\gis\_impl\_con\_connection.py in _handle_response(self, resp, file_name, out_path, try_json, force_bytes, ignore_error_key)
886 except JSONDecodeError:
887 if resp.text:
--> 888 raise Exception(resp.text)
889 else:
890 raise
Exception: <html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>
```
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Expected behavior**
Should expect to have a data frame returned with the data
**Platform (please complete the following information):**
- OS: [Windows]
- Browser [Chrome]
- Python API Version [2.0.1]
**Additional context**
Add any other context about the problem here, attachments etc.
|
closed
|
2022-11-14T17:28:27Z
|
2022-11-17T16:22:05Z
|
https://github.com/Esri/arcgis-python-api/issues/1385
|
[
"bug"
] |
crackernutter
| 4
|
miguelgrinberg/python-socketio
|
asyncio
| 680
|
socketio.exceptions.ConnectionError: Connection error
|
Hi,
I have checked the version compatibility and everything mentioned in the previous issues, but I am still facing this error.
Here are the versions I have installed:

|
closed
|
2021-05-06T09:36:22Z
|
2021-06-27T19:45:28Z
|
https://github.com/miguelgrinberg/python-socketio/issues/680
|
[
"question"
] |
yathartharora
| 1
|
Miserlou/Zappa
|
flask
| 1,533
|
NameError: name 'LAMBDA_CLIENT' is not defined
|
## Context
Python 3.6
I'm trying to run async tasks with Zappa. The details of my environment are probably causing the issue, but my problem is more with the absence of error messages.
## Expected Behavior
If `boto3` can't initialize a session, I would expect to get a clear error about it.
## Actual Behavior
When `boto3` can't initialize, I get the following error:
```
NameError: name 'LAMBDA_CLIENT' is not defined
File "flask/app.py", line 2292, in wsgi_app
response = self.full_dispatch_request()
File "flask/app.py", line 1815, in full_dispatch_request
rv = self.handle_user_exception(e)
File "flask/app.py", line 1718, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "flask/_compat.py", line 35, in reraise
raise value
File "flask/app.py", line 1813, in full_dispatch_request
rv = self.dispatch_request()
File "flask/app.py", line 1799, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "api/handler/health.py", line 39, in run_task_exception
health.get_exception()
File "zappa/async.py", line 424, in _run_async
capture_response=capture_response).send(task_path, args, kwargs)
File "zappa/async.py", line 137, in __init__
self.client = LAMBDA_CLIENT
```
## Possible Fix
The issue is coming from https://github.com/Miserlou/Zappa/blob/master/zappa/async.py#L105-L115:
```python
# Declare these here so they're kept warm.
try:
    aws_session = boto3.Session()
    LAMBDA_CLIENT = aws_session.client('lambda')
    SNS_CLIENT = aws_session.client('sns')
    STS_CLIENT = aws_session.client('sts')
    DYNAMODB_CLIENT = aws_session.client('dynamodb')
except botocore.exceptions.NoRegionError as e:  # pragma: no cover
    # This can happen while testing on Travis, but it's taken care of
    # during class initialization.
    pass

...

class LambdaAsyncResponse(object):
    """
    Base Response Dispatcher class
    Can be used directly or subclassed if the method to send the message is changed.
    """
    def __init__(self, lambda_function_name=None, aws_region=None, capture_response=False, **kwargs):
        """ """
        if kwargs.get('boto_session'):
            self.client = kwargs.get('boto_session').client('lambda')
        else:  # pragma: no cover
            self.client = LAMBDA_CLIENT
```
In this code, we try to initialize `LAMBDA_CLIENT`, and if that fails, the name is simply left undefined without surfacing any error to the user. Then, in `LambdaAsyncResponse.__init__`, we look for a client in the kwargs and fall back to `LAMBDA_CLIENT`, which might not exist.
There are multiple possible fixes:
1. Always try to create a `boto3` client - the user would have to disable tasks by themselves.
2. Log an error when boto3 initialization fails.
3. Initialize a client inside `__init__` (probably the best one?).
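A minimal sketch of option 3 (this is not the actual Zappa implementation; it only illustrates lazy client creation inside `__init__`):
```python
import boto3


class LambdaAsyncResponse(object):
    def __init__(self, lambda_function_name=None, aws_region=None,
                 capture_response=False, **kwargs):
        if kwargs.get('boto_session'):
            self.client = kwargs['boto_session'].client('lambda')
        else:
            # Creating the session here makes configuration problems
            # (e.g. a missing region) surface as a clear boto3 error
            # instead of a NameError on LAMBDA_CLIENT.
            self.client = boto3.Session(region_name=aws_region).client('lambda')
        self.lambda_function_name = lambda_function_name
        self.capture_response = capture_response
```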
## Steps to Reproduce
I'm not sure this is relevant for this issue - the code could be improved IMO.
## Your Environment
Same.
|
open
|
2018-06-14T13:21:21Z
|
2019-03-21T21:50:52Z
|
https://github.com/Miserlou/Zappa/issues/1533
|
[] |
charlax
| 3
|
PedroBern/django-graphql-auth
|
graphql
| 64
|
[Error!]: Invalid Token in verifyAccount Mutation
|
# Prerequisites
* [x] Is it a bug?
* [ ] Is it a new feature?
* [ ] Is it a question?
* [x] Can you reproduce the problem?
* [x] Are you running the latest version?
* [ ] Did you check for similar issues?
* [ ] Did you perform a cursory search?
For more information, see the [CONTRIBUTING](https://github.com/PedroBern/django-graphql-auth/blob/master/CONTRIBUTING.md) guide.
# Description
The verifyAccount mutation doesn't work
I've followed the [installation](https://django-graphql-auth.readthedocs.io/en/latest/installation/) instructions and checked the [Quickstart](https://github.com/PedroBern/django-graphql-auth/tree/master/quickstart) project to see if I've done anything wrong. The verifyAccount mutation still fails and throws the error: Invalid token

Oooh... the gif is too small... Here is the process:
1. Register
<img width="1680" alt="register" src="https://user-images.githubusercontent.com/1699198/92314635-19825880-efa0-11ea-9481-78e464738e56.png">
2. VerifyToken
<img width="1680" alt="verifyToken" src="https://user-images.githubusercontent.com/1699198/92314648-374fbd80-efa0-11ea-99c4-9e8ae247b4e7.png">
3. VerifyAccount
<img width="1680" alt="verifyAccount" src="https://user-images.githubusercontent.com/1699198/92314651-3f0f6200-efa0-11ea-9f7a-f3aba0ecaad5.png">
|
closed
|
2020-09-05T22:45:22Z
|
2022-03-29T13:19:20Z
|
https://github.com/PedroBern/django-graphql-auth/issues/64
|
[] |
nietzscheson
| 5
|
PokeAPI/pokeapi
|
graphql
| 904
|
Missing data for distortion world location areas
|
Hi ! There is no data for `areas` when calling the API on `https://pokeapi.co/api/v2/location/distortion-world/`. This implies that `giratina-origin` doesn't have any encounter data.
Steps to Reproduce:
1. Go to `https://pokeapi.co/api/v2/location/distortion-world` and see the empty `areas` array
2. Go to `https://pokeapi.co/api/v2/pokemon/giratina-origin/encounters/`, there is no data provided
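The same two calls from Python, for convenience (a plain requests sketch, not part of the original report):
```python
import requests

location = requests.get("https://pokeapi.co/api/v2/location/distortion-world/").json()
print(location["areas"])  # currently an empty list

encounters = requests.get("https://pokeapi.co/api/v2/pokemon/giratina-origin/encounters/").json()
print(encounters)  # currently empty as well
```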
|
closed
|
2023-07-17T13:58:49Z
|
2023-07-17T21:47:05Z
|
https://github.com/PokeAPI/pokeapi/issues/904
|
[] |
truite-codeuse
| 4
|
tensorlayer/TensorLayer
|
tensorflow
| 525
|
tensorlayer output
|
Is there a function to return the probability of the outputs? I searched utils.py and couldn't find what I wanted.
|
closed
|
2018-04-23T12:45:51Z
|
2018-04-26T10:16:18Z
|
https://github.com/tensorlayer/TensorLayer/issues/525
|
[] |
kodayu
| 1
|
cleanlab/cleanlab
|
data-science
| 773
|
Add support for detecting label errors in Instance Segmentation data
|
Many users have requested this functionality.
For now, you should be able to use the existing code for semantic segmentation label error detection by converting your instance segmentation labels & predictions into semantic segmentation labels & predictions. That approach may not capture all possible types of label errors, but I'd guess it will work decently.
If you contribute a new method specifically for instance segmentation, it should definitely work better than simply converting the labels/predictions into semantic segmentation and running existing cleanlab code.
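A rough sketch of the conversion step described above (the shapes and names here are assumptions, not cleanlab's documented API): collapse per-instance masks into a single semantic label map per image, do the same for the model's per-pixel class probabilities, and then run cleanlab's existing semantic-segmentation checks (the `cleanlab.segmentation` module in recent versions) on the results.
```python
import numpy as np


def instances_to_semantic(instance_masks, instance_classes, height, width):
    """Collapse instance masks into one semantic label map.

    instance_masks: (num_instances, H, W) boolean arrays
    instance_classes: (num_instances,) integer class ids (0 reserved for background)
    """
    semantic = np.zeros((height, width), dtype=np.int64)
    for mask, cls in zip(instance_masks, instance_classes):
        semantic[mask] = cls
    return semantic
```
As noted above, this loses instance identity, so overlapping or duplicated instances of the same class will not be flagged; it is only a stop-gap until a dedicated instance-segmentation method exists.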
|
open
|
2023-07-13T22:45:20Z
|
2024-09-03T02:30:23Z
|
https://github.com/cleanlab/cleanlab/issues/773
|
[
"enhancement",
"help-wanted"
] |
jwmueller
| 7
|
amdegroot/ssd.pytorch
|
computer-vision
| 455
|
how can we train on custom VOC dataset??
|
I want to train on a custom dataset in VOC format
|
open
|
2020-01-06T14:08:15Z
|
2021-11-16T12:27:34Z
|
https://github.com/amdegroot/ssd.pytorch/issues/455
|
[] |
ayaelalfy
| 6
|
RasaHQ/rasa
|
machine-learning
| 13,114
|
How to Handle Out of Context Messages and Unclear Messages in Chat
|
Your input -> hello
Hey there! What's your name?
Your input -> my name is john
In which city do you live?
Your input -> im from new york
Can i get your phone number?
Your input -> 0123456789
Hey vishwa,New York is a very beautifull place. Your phone number is 0123456789
Your input -> quit
Your input -> what is chat gpt
I'm here to assist with any questions related to forms. If you need help with our services, features, or how to fill out a form, feel free to ask!
Your input -> sdas
Your input ->
I have two issues:
1. Out-of-context messages
Questions like "what is chat gpt" or "what is the world" are out of context for this bot. For those I need a response like "I'm here to assist with any questions related to forms. If you need help with our services, features, or how to fill out a form, feel free to ask!", because users can ask all sorts of questions while this bot should only focus on form filling.
2. Unclear messages
For gibberish like "sdas" or "asdw", or for values that don't match anything defined in the intent examples, the bot should say "Sorry, I didn't understand that. Can you rephrase?". For example, the training data has "my name is [Jhon](name)", but if I enter "my name is new york" the bot should ask me to rephrase. This should not be limited to the name slot, and I need a solution without hardcoding. What is the best way to do this in Rasa?
How can I handle both scenarios?
|
open
|
2025-03-18T18:14:29Z
|
2025-03-18T18:14:29Z
|
https://github.com/RasaHQ/rasa/issues/13114
|
[] |
Vishwa-ud
| 0
|
xzkostyan/clickhouse-sqlalchemy
|
sqlalchemy
| 274
|
Bug in _reflect_table() support for alembic versions < 1.11.0
|
**Describe the bug**
Versions of alembic lower than 1.11.0 will fail with syntax error when trying to produce a migration script.

```
migration = produce_migrations(mig_ctx, metadata)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/alembic/autogenerate/api.py", line 164, in produce_migrations
compare._populate_migration_script(autogen_context, migration_script)
File "/usr/local/lib/python3.11/site-packages/alembic/autogenerate/compare.py", line 55, in _populate_migration_script
_produce_net_changes(autogen_context, upgrade_ops)
File "/usr/local/lib/python3.11/site-packages/alembic/autogenerate/compare.py", line 89, in _produce_net_changes
comparators.dispatch("schema", autogen_context.dialect.name)(
File "/usr/local/lib/python3.11/site-packages/alembic/util/langhelpers.py", line 267, in go
fn(*arg, **kw)
File "/usr/local/lib/python3.11/site-packages/clickhouse_sqlalchemy/alembic/comparators.py", line 130, in compare_mat_view
_reflect_table(inspector, table)
File "/usr/local/lib/python3.11/site-packages/clickhouse_sqlalchemy/alembic/comparators.py", line 41, in _reflect_table
return _alembic_reflect_table(inspector, table)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: _reflect_table() missing 1 required positional argument: 'include_cols'
```
**To Reproduce**
```python
from alembic.autogenerate import produce_migrations
from alembic.runtime.migration import MigrationContext

con = "some-db-con"  # placeholder for an actual database connection/engine
mig_ctx = MigrationContext.configure(con)
migration = produce_migrations(mig_ctx, metadata)  # metadata: the target MetaData object
```
**Expected behavior**
The [_reflect_table()](https://github.com/xzkostyan/clickhouse-sqlalchemy/blob/master/clickhouse_sqlalchemy/alembic/comparators.py#L37) helper should accept an extra `include_cols` parameter defaulting to `None` and pass it through to `_alembic_reflect_table` when the alembic version is < 1.11.0.
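A hedged sketch of that fix (not the actual patch; it reuses the `_alembic_reflect_table` name already imported in `comparators.py` and derives the version from `alembic.__version__`):
```python
import alembic

_ALEMBIC_VERSION = tuple(int(part) for part in alembic.__version__.split(".")[:2])


def _reflect_table(inspector, table, include_cols=None):
    if _ALEMBIC_VERSION >= (1, 11):
        # alembic >= 1.11 dropped the include_cols argument
        return _alembic_reflect_table(inspector, table)
    # older alembic still requires include_cols as a positional argument
    return _alembic_reflect_table(inspector, table, include_cols)
```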
**Versions**
clickhouse_sqlalchemy >= 0.25
alembic == 1.8.1
python == 3.11.6
|
closed
|
2023-11-08T23:29:44Z
|
2024-03-25T07:22:19Z
|
https://github.com/xzkostyan/clickhouse-sqlalchemy/issues/274
|
[] |
DicksonChi
| 0
|
InstaPy/InstaPy
|
automation
| 6,177
|
Only Like Posts if Their Authors Belong to a Set of Users
|
I was hoping to write a bot that likes posts only by certain users. I noticed there's a function `session.set_ignore_users()` that ignores any user in a given list, and I am wondering if there is a function that behaves in a reversed way, i.e. it ignores any other users not specified by a given list. Thanks in advance.
|
closed
|
2021-05-10T04:36:12Z
|
2021-06-26T19:00:11Z
|
https://github.com/InstaPy/InstaPy/issues/6177
|
[
"wontfix"
] |
georgezywang
| 2
|
dsdanielpark/Bard-API
|
nlp
| 14
|
How to get BARD_API_KEY ?
|
I saw the usage examples, but I don't know how to get my BARD_API_KEY.
|
closed
|
2023-05-17T18:10:30Z
|
2023-05-17T18:51:58Z
|
https://github.com/dsdanielpark/Bard-API/issues/14
|
[] |
SKbarbon
| 1
|
koaning/scikit-lego
|
scikit-learn
| 187
|
[DOCS] duplicate images in docs
|
We now have an issue that is similar to [this](https://github.com/spatialaudio/nbsphinx/issues/162) one. Certain images get overwritten. From the console:
```
reading sources... [100%] preprocessing
/Users/vincent/Development/scikit-lego/doc/preprocessing.ipynb:102: WARNING: Duplicate substitution definition name: "image0".
/Users/vincent/Development/scikit-lego/doc/preprocessing.ipynb:153: WARNING: Duplicate substitution definition name: "image0".
```
|
closed
|
2019-09-08T19:35:50Z
|
2019-09-19T07:00:49Z
|
https://github.com/koaning/scikit-lego/issues/187
|
[
"documentation"
] |
koaning
| 3
|
deepset-ai/haystack
|
pytorch
| 8,925
|
Remove the note "Looking for documentation for Haystack 1.x? Visit the..." from documentation pages
|
closed
|
2025-02-25T10:53:16Z
|
2025-02-26T14:21:58Z
|
https://github.com/deepset-ai/haystack/issues/8925
|
[
"P1"
] |
julian-risch
| 0
|
|
sqlalchemy/alembic
|
sqlalchemy
| 1,090
|
Migration generated always set enum nullable to true
|
**Describe the bug**
I have a field with enum type and I want to set nullable to False, however, the migration generated is always set nullable to True
**Expected behavior**
If the field contains nullable = False, migration generated should set nullable = False
**To Reproduce**
```py
import enum
from sqlmodel import SQLModel, Field, UniqueConstraint, Column, Enum


class Contract_Status(str, enum.Enum):
    CREATED = "Created"
    COMPLETED = "Completed"
    DEPOSITED = "Deposited"
    CANCELLED = "Cancelled"


class Contract(SQLModel, table=True):
    __table_args__ = (UniqueConstraint("ctrt_id"),)

    ctrt_id: str = Field(primary_key=True, nullable=False)
    status: Contract_Status = Field(sa_column=Column(Enum(Contract_Status, values_callable=lambda enum: [e.value for e in enum])), nullable=False)
```
**Error**
```
def upgrade() -> None:
    # ### commands auto generated by Alembic - please adjust! ###
    op.create_table('contract',
        sa.Column('status', sa.Enum('Created', 'Completed', 'Deposited', 'Cancelled', name='contract_status'), nullable=True),
        sa.Column('ctrt_id', sqlmodel.sql.sqltypes.AutoString(), nullable=False),
        sa.PrimaryKeyConstraint('ctrt_id'),
        sa.UniqueConstraint('ctrt_id')
    )
    # ### end Alembic commands ###
```
**Versions.**
- OS: MacOS Monterey 12.6 (Intel Processor)
- Python: 3.10.5
- Alembic: 1.8.1
- SQLAlchemy: 1.4.40
- Database: MySQL 8.0.29
- SQLModel: 0.0.8
|
closed
|
2022-09-23T04:51:44Z
|
2022-10-09T14:25:56Z
|
https://github.com/sqlalchemy/alembic/issues/1090
|
[
"awaiting info",
"cant reproduce"
] |
yixiongngvsys
| 3
|
christabor/flask_jsondash
|
flask
| 5
|
Remove jquery, bootstrap, etc.. from blueprint, put into requirements and example app.
|
The user will likely have them in their main app.
|
closed
|
2016-05-02T19:52:03Z
|
2016-05-03T18:24:08Z
|
https://github.com/christabor/flask_jsondash/issues/5
|
[] |
christabor
| 0
|
flairNLP/flair
|
nlp
| 2,683
|
Wrong Detection - person entity
|
**Describe the bug**
For any person entity, the prediction includes the preceding word "Hey" along with the name
Eg: "Hey Karthick"
**To Reproduce**
Model name - ner-english-fast, ner-english
Steps to reproduce the behavior (e.g. which model did you train? what parameters did you use? etc.).
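A minimal reproduction sketch with the standard flair API (no training involved, just the published model):
```python
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load("flair/ner-english-fast")
sentence = Sentence("Hey Karthick")
tagger.predict(sentence)

for entity in sentence.get_spans("ner"):
    print(entity)  # observed: the PER span covers "Hey Karthick" instead of just "Karthick"
```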
**Expected behavior**
Eg: "Hey Karthick"
Predictions from model: Hey Karthick PER
Expected predictions: Karthick PER
**Additional context**
Add any other context about the problem here.
<img width="701" alt="Screenshot 2022-03-21 at 1 47 22 PM" src="https://user-images.githubusercontent.com/55129453/159224981-41008757-4646-42b4-9814-029fdb235e85.png">
|
closed
|
2022-03-21T08:22:26Z
|
2022-09-09T02:02:36Z
|
https://github.com/flairNLP/flair/issues/2683
|
[
"bug",
"wontfix"
] |
karthicknarasimhan98
| 1
|
autogluon/autogluon
|
scikit-learn
| 3,944
|
Ray error when using preset `good_quality`
|
### Discussed in https://github.com/autogluon/autogluon/discussions/3943
<div type='discussions-op-text'>
<sup>Originally posted by **ArijitSinghEDA** February 22, 2024</sup>
I am using the preset `good_quality` in my `TabularPredictor`, but it gives the following error:
```
2024-02-22 13:57:06,461 WARNING worker.py:1729 -- Failed to set SIGTERM handler, processes mightnot be cleaned up properly on exit.
2024-02-22 13:57:32,647 ERROR services.py:1207 -- Failed to start the dashboard
2024-02-22 13:57:32,648 ERROR services.py:1232 -- Error should be written to 'dashboard.log' or 'dashboard.err'. We are printing the last 20 lines for you. See 'https://docs.ray.io/en/master/ray-observability/ray-logging.html#logging-directory-structure' to find where the log file is.
2024-02-22 13:57:32,648 ERROR services.py:1242 -- Couldn't read dashboard.log file. Error: [Errno 2] No such file or directory: '/tmp/ray/session_2024-02-22_13-57-06_577198_21267/logs/dashboard.log'. It means the dashboard is broken even before it initializes the logger (mostly dependency issues). Reading the dashboard.err file which contains stdout/stderr.
2024-02-22 13:57:32,648 ERROR services.py:1276 -- Failed to read dashboard.err file: cannot mmap an empty file. It is unexpected. Please report an issue to Ray github. https://github.com/ray-project/ray/issues
[2024-02-22 13:58:03,776 E 21267 21372] core_worker.cc:201: Failed to register worker 01000000ffffffffffffffffffffffffffffffffffffffffffffffff to Raylet. IOError: [RayletClient] Unable to register worker with raylet. No such file or directory
```
Versions Used:
Autogluon: 0.8.3
Ray: 2.6.3
grpcio: 1.58.0</div>
*EDIT*
It is an issue with Ray
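For context, a minimal sketch of the call that triggers the preset (the dataset path and label column are placeholders, not from the original report):
```python
from autogluon.tabular import TabularDataset, TabularPredictor

train_data = TabularDataset("train.csv")  # placeholder training data
predictor = TabularPredictor(label="target").fit(train_data, presets="good_quality")
```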
|
closed
|
2024-02-22T08:32:39Z
|
2024-02-22T08:58:10Z
|
https://github.com/autogluon/autogluon/issues/3944
|
[] |
ArijitSinghEDA
| 0
|
zalandoresearch/fashion-mnist
|
computer-vision
| 21
|
Adding Japanese README translation
|
I found a Japanese translation of README.md on http://tensorflow.classcat.com/category/fashion-mnist/
Seems pretty complete to me. I sent an email to the website asking for permission to use it as the official README-jp.md.
Still waiting for their reply.
|
closed
|
2017-08-28T14:45:28Z
|
2017-08-29T09:02:47Z
|
https://github.com/zalandoresearch/fashion-mnist/issues/21
|
[] |
hanxiao
| 0
|
zihangdai/xlnet
|
nlp
| 57
|
Has the data been split into segments for pretraining
|
The paper says
> During the pretraining phase, following BERT, we randomly sample two segments (either from the same context or not) and treat the concatenation of two segments as one sequence to perform permutation language modeling.
I don't really get this, if there is no next sentence prediction what is the point of concatenating segments that do not belong to the same context? Won't that degrade the performance of the model? What is the objective behind using two segments (both from the same context and not)?
|
closed
|
2019-06-25T23:12:02Z
|
2019-07-07T20:14:04Z
|
https://github.com/zihangdai/xlnet/issues/57
|
[] |
rakshanda22
| 3
|
katanaml/sparrow
|
computer-vision
| 1
|
How to save the predicted output from LayoutLM or LayoutLMv2 ?
|
I trained LayoutLM on my dataset and I am getting predictions at the word level. For example, in the image "ALVARO FRANCISCO MONTOYA" is labeled as "party_name_1", but at prediction time "ALVARO" is tagged as "party_name_1", "FRANCISCO" is tagged as "party_name_1", and "MONTOYA" is tagged as "party_name_1". In short, I get a prediction for each word, but how can I save these predictions as a single combined output, i.e. "ALVARO FRANCISCO MONTOYA" as "party_name_1"?
Any help would be appreciated.
Below image is the predicted output image from LayoutLM.

|
closed
|
2022-03-31T11:05:15Z
|
2022-03-31T15:55:52Z
|
https://github.com/katanaml/sparrow/issues/1
|
[] |
karndeepsingh
| 3
|
blacklanternsecurity/bbot
|
automation
| 1,815
|
Excavate IPv6 URLs
|
We should have a test for excavating IPv6 URLs.
originally suggested by @colin-stubbs
|
open
|
2024-10-02T18:38:46Z
|
2025-02-28T15:02:40Z
|
https://github.com/blacklanternsecurity/bbot/issues/1815
|
[
"enhancement"
] |
TheTechromancer
| 0
|
onnx/onnx
|
tensorflow
| 5,885
|
[Feature request] Provide a means to convert to numpy array without byteswapping
|
### System information
ONNX 1.15
### What is the problem that this feature solves?
Issue onnx/tensorflow-onnx#1902 in tf2onnx occurs on big endian systems, and it is my observation that attributes which end up converting to integers are incorrectly byteswapped because the original data resided within a tensor. If `numpy_helper.to_array()` could be updated to optionally not perform byteswapping, then that could help solve this issue.
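A sketch of what the requested option might look like (the keyword name is purely hypothetical; only the existing `from_array`/`to_array` calls are real API):
```python
import numpy as np
from onnx import numpy_helper

tensor = numpy_helper.from_array(np.arange(4, dtype=np.int32), name="example")

# today: to_array always returns the data converted to native byte order
arr = numpy_helper.to_array(tensor)

# requested: allow callers on big-endian hosts to opt out of the byteswap
# (the parameter name below is hypothetical)
# arr_raw = numpy_helper.to_array(tensor, swap_endian=False)
```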
### Alternatives considered
As an alternative, additional logic could be added in tf2onnx to perform byteswapping on the data again, but this seems excessive.
### Describe the feature
I believe this feature is necessary to improve support for big endian systems.
### Will this influence the current api (Y/N)?
_No response_
### Feature Area
converters
### Are you willing to contribute it (Y/N)
Yes
### Notes
_No response_
|
closed
|
2024-01-31T20:58:56Z
|
2024-02-02T16:52:35Z
|
https://github.com/onnx/onnx/issues/5885
|
[
"topic: enhancement"
] |
tehbone
| 4
|
mirumee/ariadne
|
graphql
| 121
|
Support content type "application/graphql"
|
> If the "application/graphql" Content-Type header is present, treat the HTTP POST body contents as the GraphQL query string.
|
closed
|
2019-03-26T19:07:04Z
|
2024-04-03T09:15:39Z
|
https://github.com/mirumee/ariadne/issues/121
|
[] |
rafalp
| 3
|
aleju/imgaug
|
machine-learning
| 102
|
Background Image Processing Hangs If Ungraceful Exit
|
A process using background image processing will hang if runtime is terminated before background image processing is complete.
### System:
- Operating System: Ubuntu 16.04
- Python: 2.7.12
- imgaug: 0.2.5
### Code to Reproduce
```python
import imgaug as ia
import numpy as np

def batch_generator():
    for i in range(10000):
        batch = ia.Batch(images=np.zeros((10, 3, 255, 255)))
        yield batch

bg_loader = ia.BatchLoader(batch_generator)
bg_augmenter = ia.BackgroundAugmenter(bg_loader, ia.augmenters.Noop())
bg_augmenter.get_batch()
bg_augmenter.terminate()
bg_loader.terminate()
```
### Expected Result
Code exits gracefully and you go on with your life
### Actual Result
Parent process becomes zombie process and requires manual termination via PID
### Proposed solution
The queues owned by BatchLoader and BackgroundAugmenter should be closed in their respective terminate() calls.
|
closed
|
2018-03-07T16:15:56Z
|
2018-03-08T19:20:27Z
|
https://github.com/aleju/imgaug/issues/102
|
[] |
AustinDoolittle
| 2
|
dmlc/gluon-nlp
|
numpy
| 724
|
PrefetcherIter with worker_type='process' may hang
|
I noticed that using PrefetcherIter with worker_type='process' may leave the prefetching process hanging. Specifically, if I do
```
kill $PARENT_PID
```
I observe that the parent process exits while the child process keeps running. The child process runs until https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/stream.py#L265-L271 and blocks there because the parent never calls `__next__`.
I'm a bit confused because the process is already marked as daemon=True, but it does not exit.
|
closed
|
2019-05-23T06:46:27Z
|
2020-10-15T20:16:10Z
|
https://github.com/dmlc/gluon-nlp/issues/724
|
[
"bug"
] |
eric-haibin-lin
| 5
|
erdewit/ib_insync
|
asyncio
| 314
|
QEventDispatcherWin32::wakeUp: Failed to post a message (Not enough quota is available to process this command.)
|
I am working on a PyQt5 application with ib_insync. I load a set of open orders and show them in a table once I press the connect button. This worked fine so far.
I increased the number of open orders to around 50-60. Now, once I press the connect button, I get an endless stdout of messages with the text:
`QEventDispatcherWin32::wakeUp: Failed to post a message (Not enough quota is available to process this command.)`
I have found no info on this. It seems the "not enough quota" error happens to some people while copying large amounts of data over a local network, but those cases are not related to Qt5.
Is it possible that Qt is getting saturated by the amount of ib_insync events (or the amount of data received through them), and hence it has not enough quota to update the GUI?
|
closed
|
2020-11-09T13:55:13Z
|
2020-11-11T20:31:37Z
|
https://github.com/erdewit/ib_insync/issues/314
|
[] |
romanrdgz
| 1
|
miguelgrinberg/Flask-SocketIO
|
flask
| 1,131
|
Socket.io not establishing connection with load balancer : timeout error
|
**Your question**
Hi Miguel, We are in the process of moving our cloud instance from Linode to Google Cloud Platform. Our entire web app uses socket.io for most frontend behavior. When we hit the domain on the GCP instance we are experiencing extremely slow speeds. In addition we are experiencing a server disconnect every 3-5 seconds. When we looked at the web server logs we saw the following:
**Logs**
A 2019-12-18T22:27:08.305600469Z [2019-12-18 22:27:08,304] ERROR in kombu_manager: Sleeping 32.0s
A 2019-12-18T22:27:08.305685444Z Traceback (most recent call last):
A 2019-12-18T22:27:08.305692804Z File "/usr/local/lib/python3.5/dist-packages/kombu/utils/functional.py", line 332, in retry_over_time
A 2019-12-18T22:27:08.305698205Z return fun(*args, **kwargs)
A 2019-12-18T22:27:08.305702737Z File "/usr/local/lib/python3.5/dist-packages/kombu/connection.py", line 261, in connect
A 2019-12-18T22:27:08.305707322Z return self.connection
A 2019-12-18T22:27:08.305711636Z File "/usr/local/lib/python3.5/dist-packages/kombu/connection.py", line 802, in connection
A 2019-12-18T22:27:08.305716109Z self._connection = self._establish_connection()
A 2019-12-18T22:27:08.305736993Z File "/usr/local/lib/python3.5/dist-packages/kombu/connection.py", line 757, in _establish_connection
A 2019-12-18T22:27:08.305741614Z conn = self.transport.establish_connection()
A 2019-12-18T22:27:08.305745468Z File "/usr/local/lib/python3.5/dist-packages/kombu/transport/pyamqp.py", line 130, in establish_connection
A 2019-12-18T22:27:08.305749625Z conn.connect()
A 2019-12-18T22:27:08.305753414Z File "/usr/local/lib/python3.5/dist-packages/amqp/connection.py", line 294, in connect
A 2019-12-18T22:27:08.305757408Z self.transport.connect()
A 2019-12-18T22:27:08.305761307Z File "/usr/local/lib/python3.5/dist-packages/amqp/transport.py", line 120, in connect
A 2019-12-18T22:27:08.305765180Z self._connect(self.host, self.port, self.connect_timeout)
A 2019-12-18T22:27:08.305769048Z File "/usr/local/lib/python3.5/dist-packages/amqp/transport.py", line 161, in _connect
A 2019-12-18T22:27:08.305773031Z self.sock.connect(sa)
A 2019-12-18T22:27:08.305776680Z File "/usr/local/lib/python3.5/dist-packages/eventlet/greenio/base.py", line 261, in connect
A 2019-12-18T22:27:08.305780599Z self._trampoline(fd, write=True, timeout=timeout, timeout_exc=_timeout_exc)
A 2019-12-18T22:27:08.305784455Z File "/usr/local/lib/python3.5/dist-packages/eventlet/greenio/base.py", line 208, in _trampoline
A 2019-12-18T22:27:08.305788491Z mark_as_closed=self._mark_as_closed)
A 2019-12-18T22:27:08.305792209Z File "/usr/local/lib/python3.5/dist-packages/eventlet/hubs/__init__.py", line 164, in trampoline
A 2019-12-18T22:27:08.305796128Z return hub.switch()
A 2019-12-18T22:27:08.305799746Z File "/usr/local/lib/python3.5/dist-packages/eventlet/hubs/hub.py", line 297, in switch
A 2019-12-18T22:27:08.305803719Z return self.greenlet.switch()
A 2019-12-18T22:27:08.310863763Z socket.timeout: timed out
**GCP architecture**
Our setup uses RabbitMQ as the broker and Redis as the in-memory store. Prior to setting up the load balancer, with a single web server we had no problems and everything worked fine. Once we set up the load balancer on top of that single web server, the problems began occurring.
|
closed
|
2019-12-18T22:42:45Z
|
2020-06-30T22:52:00Z
|
https://github.com/miguelgrinberg/Flask-SocketIO/issues/1131
|
[
"question"
] |
jtopel
| 3
|