| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
ansible/awx
|
django
| 15,555
|
execution node install failed on rocky 8.10
|
### Please confirm the following
- [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
- [X] I understand that AWX is open source software provided for free and that I might not receive a timely response.
- [X] I am **NOT** reporting a (potential) security vulnerability. (These should be emailed to `security@ansible.com` instead.)
### Bug Summary
Execution node fails to install on rocky 8.10 OS.
### AWX version
24.6.1
### Select the relevant components
- [ ] UI
- [ ] UI (tech preview)
- [X] API
- [ ] Docs
- [ ] Collection
- [X] CLI
- [ ] Other
### Installation method
kubernetes
### Modifications
no
### Ansible version
2.17.3
### Operating system
Rocky Linux release 8.10 (Green Obsidian)
### Web browser
_No response_
### Steps to reproduce
ansible-playbook -i inventory.yml install_receptor.yml -vvv
$ cat inventory.yml
```yaml
---
all:
  hosts:
    remote-execution:
      ansible_host: test-instance
      ansible_user: test  # user provided
      ansible_ssh_private_key_file: ~/.ssh/id_rsa
```
$ cat install_receptor.yml
```yaml
---
- hosts: all
  become: yes
  tasks:
    - name: Create the receptor user
      user:
        name: "{{ receptor_user }}"
        shell: /bin/bash
    - import_role:
        name: ansible.receptor.podman
    - import_role:
        name: ansible.receptor.setup
```
### Expected results
The playbook runs successfully, and the installation of the execution node succeeds.
### Actual results
$ ansible-playbook -i inventory.yml install_receptor.yml -vv
ansible-playbook [core 2.17.3]
config file = /home/OS_kkj/.ansible.cfg
configured module search path = ['/home/OS_kkj/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/OS_kkj/.local/lib/python3.12/site-packages/ansible
ansible collection location = /home/OS_kkj/.ansible/collections:/usr/share/ansible/collections
executable location = /home/OS_kkj/.local/bin/ansible-playbook
python version = 3.12.1 (main, Aug 30 2024, 16:00:05) [GCC 8.5.0 20210514 (Red Hat 8.5.0-22)] (/usr/local/bin/python3.12)
jinja version = 3.1.2
libyaml = True
Using /home/OS_kkj/.ansible.cfg as config file
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: install_receptor.yml ***********************************************************************************************************************************************************
1 plays in install_receptor.yml
PLAY [all] *******************************************************************************************************************************************************************************
TASK [Gathering Facts] *******************************************************************************************************************************************************************
task path: /home/gaia_bot/test-instance_install_bundle/install_receptor.yml:2
[WARNING]: Platform linux on host remote-execution is using the discovered Python interpreter at /usr/bin/python3.12, but future installation of another Python interpreter could change
the meaning of that path. See https://docs.ansible.com/ansible-core/2.17/reference_appendices/interpreter_discovery.html for more information.
ok: [remote-execution]
TASK [Create the receptor user] **********************************************************************************************************************************************************
task path: /home/gaia_bot/test-instance_install_bundle/install_receptor.yml:5
ok: [remote-execution] => {"append": false, "changed": false, "comment": "", "group": 1001, "home": "/home/awx", "move_home": false, "name": "awx", "shell": "/bin/bash", "state": "present", "uid": 1001}
TASK [ansible.receptor.podman : Include variables] ***************************************************************************************************************************************
task path: /home/OS_kkj/.ansible/collections/ansible_collections/ansible/receptor/roles/podman/tasks/main.yml:3
included: /home/OS_kkj/.ansible/collections/ansible_collections/ansible/receptor/roles/podman/tasks/variables.yml for remote-execution
TASK [ansible.receptor.podman : Include OS-specific variables "RedHat"] ******************************************************************************************************************
task path: /home/OS_kkj/.ansible/collections/ansible_collections/ansible/receptor/roles/podman/tasks/variables.yml:2
ok: [remote-execution] => {"ansible_facts": {"__podman_packages": ["podman", "crun"]}, "ansible_included_var_files": ["/home/OS_kkj/.ansible/collections/ansible_collections/ansible/receptor/roles/podman/vars/RedHat.yml"], "changed": false}
TASK [ansible.receptor.podman : Define podman_packages] **********************************************************************************************************************************
task path: /home/OS_kkj/.ansible/collections/ansible_collections/ansible/receptor/roles/podman/tasks/variables.yml:7
ok: [remote-execution] => {"ansible_facts": {"podman_packages": ["podman", "crun"]}, "changed": false}
TASK [ansible.receptor.podman : Run OS-specific tasks] ***********************************************************************************************************************************
task path: /home/OS_kkj/.ansible/collections/ansible_collections/ansible/receptor/roles/podman/tasks/main.yml:7
included: /home/OS_kkj/.ansible/collections/ansible_collections/ansible/receptor/roles/podman/tasks/setup-RedHat.yml for remote-execution
TASK [ansible.receptor.podman : Install podman packages] *********************************************************************************************************************************
task path: /home/OS_kkj/.ansible/collections/ansible_collections/ansible/receptor/roles/podman/tasks/setup-RedHat.yml:2
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: SyntaxError: future feature annotations is not defined
fatal: [remote-execution]: FAILED! => {"changed": false, "module_stderr": "OpenSSH_8.0p1, OpenSSL 1.1.1k FIPS 25 Mar 2021\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: Reading configuration data /etc/ssh/ssh_config.d/05-redhat.conf\r\ndebug2: checking match for 'final all' host test-instance originally test-instance\r\ndebug2: match not found\r\ndebug1: Reading configuration data /etc/crypto-policies/back-ends/openssh.config\r\ndebug1: configuration requests final Match pass\r\ndebug1: re-parsing configuration\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: Reading configuration data /etc/ssh/ssh_config.d/05-redhat.conf\r\ndebug2: checking match for 'final all' host test-instance originally test-instance\r\ndebug2: match found\r\ndebug1: Reading configuration data /etc/crypto-policies/back-ends/openssh.config\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug2: Received exit status from master 1\r\nShared connection to test-instance closed.\r\n", "module_stdout": "Traceback (most recent call last):\r\n File \"<stdin>\", line 12, in <module>\r\n File \"<frozen importlib._bootstrap>\", line 971, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 951, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 894, in _find_spec\r\n File \"<frozen importlib._bootstrap_external>\", line 1157, in find_spec\r\n File \"<frozen importlib._bootstrap_external>\", line 1131, in _get_spec\r\n File \"<frozen importlib._bootstrap_external>\", line 1112, in _legacy_get_spec\r\n File \"<frozen importlib._bootstrap>\", line 441, in spec_from_loader\r\n File \"<frozen importlib._bootstrap_external>\", line 544, in spec_from_file_location\r\n File \"/tmp/ansible_ansible.legacy.dnf_payload_8ttprokq/ansible_ansible.legacy.dnf_payload.zip/ansible/module_utils/basic.py\", line 5\r\nSyntaxError: future feature annotations 
is not defined\r\n", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
PLAY RECAP *******************************************************************************************************************************************************************************
remote-execution : ok=6 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
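The module traceback above fails at line 5 of ansible's `basic.py` on `from __future__ import annotations`, a future feature that only parses on Python 3.7 and newer. One plausible explanation is that the dnf module ran under an older interpreter on the target (Rocky 8 ships platform-python 3.6), despite fact gathering discovering /usr/bin/python3.12. A small sketch of the version dependency (illustrative, not AWX code):

```python
import sys

# ansible-core 2.17 module payloads begin with "from __future__ import
# annotations"; only Python >= 3.7 knows that future feature, so an older
# interpreter on the managed node fails exactly as in the log above.
payload_first_line = "from __future__ import annotations\n"
code = compile(payload_first_line, "<payload>", "exec")  # parses on 3.7+
modern = sys.version_info >= (3, 7)
```

Pinning `ansible_python_interpreter` to a Python 3.7+ path in the inventory is the usual way to rule this out.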
### Additional information
_No response_
|
open
|
2024-09-27T09:50:47Z
|
2024-09-27T09:51:03Z
|
https://github.com/ansible/awx/issues/15555
|
[
"type:bug",
"component:api",
"needs_triage",
"community"
] |
kiju-kang
| 0
|
axnsan12/drf-yasg
|
rest-api
| 649
|
ImportError: cannot import name 'URLPattern' from 'rest_framework.compat'
|
Hi,
I use drf-yasg and it is a very useful tool!
generators.py in drf-yasg currently imports the following:
https://github.com/axnsan12/drf-yasg/blob/9ccf24c27ad46db4f566170553d49d687b6d21d6/src/drf_yasg/generators.py#L11
However, rest_framework.compat no longer exports URLPattern:
https://github.com/encode/django-rest-framework/commit/bb795674f86828fc5f15d6d61501cc781811e053#diff-ce493f71b3679e91b72126c168670399
So drf-yasg can no longer import URLPattern and raises an exception.
```
ImportError: cannot import name 'URLPattern' from 'rest_framework.compat' (/home/circleci/.local/share/virtualenvs/project-zxI9dQ-Q/lib/python3.7/site-packages/rest_framework/compat.py)
```
I don't know the most accurate way to resolve this, but how about adding the following code to generators.py:
```python
from django.urls import ( # noqa
URLPattern,
URLResolver,
)
```
|
closed
|
2020-10-04T04:16:19Z
|
2020-10-25T19:12:57Z
|
https://github.com/axnsan12/drf-yasg/issues/649
|
[] |
miyagin15
| 6
|
Layout-Parser/layout-parser
|
computer-vision
| 41
|
Error element indices when setting `show_element_id` in the visualization
|
**Describe the bug**
When the input sequence is ordered differently from the element ids, the `lp.draw_box` will create inconsistent id annotation in the visualization.
**To Reproduce**
Example:
```python
import layoutparser as lp
from PIL import Image

background = Image.new('RGB', (1000, 1000), color='white')
layout = lp.Layout(
    [
        lp.TextBlock(block=lp.Rectangle(x_1=80, y_1=79.0, x_2=490, y_2=92.0), text=None, id=1, type=None, parent=0, next=None),
        lp.TextBlock(block=lp.Rectangle(x_1=80, y_1=65.0, x_2=488.0, y_2=77.0), text=None, id=0, type=None, parent=0, next=None),
        lp.TextBlock(block=lp.Rectangle(x_1=80.0, y_1=95.0, x_2=490, y_2=107.0), text=None, id=2, type=None, parent=0, next=None),
        lp.TextBlock(block=lp.Rectangle(x_1=80, y_1=110.0, x_2=490, y_2=122.0), text=None, id=3, type=None, parent=0, next=None),
        lp.TextBlock(block=lp.Rectangle(x_1=80.0, y_1=125.0, x_2=490.0, y_2=138.0), text=None, id=4, type=None, parent=0, next=None)
    ]
).scale((1, 2))
lp.draw_box(background, layout, show_element_id=True)
```
Expected output:
<img width="533" alt="image" src="https://user-images.githubusercontent.com/22512825/116839673-3a3b9600-aba1-11eb-90e0-360509feeda3.png">
Actual output:
<img width="489" alt="image" src="https://user-images.githubusercontent.com/22512825/116839680-3d368680-aba1-11eb-945a-0dc3b752af08.png">
Temporary fix:
```python
lp.draw_box(background, [b.set(id=str(b.id)) for b in layout], show_element_id=True)
```
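The mismatch can be shown without layoutparser at all: if the renderer numbers boxes by draw order instead of reading each block's `id`, any input that is not id-sorted gets wrong labels. A plain-Python sketch (dicts stand in for TextBlocks, so this is illustrative only), which also suggests sorting by `id` as another workaround:

```python
# Each dict stands in for a TextBlock; only `id` matters for the ordering bug.
blocks = [{"id": 1}, {"id": 0}, {"id": 2}, {"id": 3}, {"id": 4}]

# Buggy labelling: enumerate() numbers boxes by draw order, ignoring block ids.
buggy = [i for i, _ in enumerate(blocks)]   # 0, 1, 2, ...
actual_ids = [b["id"] for b in blocks]      # 1, 0, 2, ... -- mismatch

# Workaround: sort by id first so draw order and ids agree.
ordered = sorted(blocks, key=lambda b: b["id"])
```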
|
open
|
2021-05-03T03:51:09Z
|
2021-05-03T03:52:20Z
|
https://github.com/Layout-Parser/layout-parser/issues/41
|
[
"bug"
] |
lolipopshock
| 0
|
waditu/tushare
|
pandas
| 1,614
|
Historical data errors for 000905.SH, 000300.SH, 000016.SH, 399932.SZ
|
All index data was fetched via pro.index_daily.
CSI 500 Index (000905.SH):
- the pre_close on 2010-01-26 is wrong; it should be 4481.16
- the close on 2019-06-17 is wrong; it should be 4802.8185
CSI 300 Index (000300.SH):
- the pre_close on 2010-01-26 is wrong; it should be 3328.014
SSE 50 Index (000016.SH):
- the close on 2019-09-27 is wrong; it should be 2929.4659
CSI Consumer Index (399932.SZ):
- the close on 2010-01-25 and the pre_close on 2010-01-26 are wrong; both should be 5823.38
tushareID: 421714
|
open
|
2021-12-21T03:59:35Z
|
2021-12-21T03:59:35Z
|
https://github.com/waditu/tushare/issues/1614
|
[] |
wcw159951
| 0
|
JaidedAI/EasyOCR
|
pytorch
| 846
|
cv2.error: OpenCV(4.5.4) :-1: error: (-5:Bad argument) in function 'getPerspectiveTransform'
|
This error occurs when I run EasyOCR.
I tried several recent EasyOCR and cv2 versions:
```
easyocr 1.5.0, 1.6.1
opencv-python 4.5.4, 4.5.5, 4.6.0
```
but all of them still raise this error.
```
File "C:\Program Files\Python310\Lib\site-packages\easyocr\easyocr.py", line 387, in readtext
result = self.recognize(img_cv_grey, horizontal_list, free_list,\
File "C:\Program Files\Python310\Lib\site-packages\easyocr\easyocr.py", line 324, in recognize
image_list, max_width = get_image_list(h_list, f_list, img_cv_grey, model_height = imgH)
File "C:\Program Files\Python310\Lib\site-packages\easyocr\utils.py", line 543, in get_image_list
transformed_img = four_point_transform(img, rect)
File "C:\Program Files\Python310\Lib\site-packages\easyocr\utils.py", line 401, in four_point_transform
M = cv2.getPerspectiveTransform(rect, dst)
cv2.error: OpenCV(4.5.5) :-1: error: (-5:Bad argument) in function 'getPerspectiveTransform'
> Overload resolution failed:
> - src data type = 23 is not supported
> - Expected Ptr<cv::UMat> for argument 'src'
```
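For reference, `src data type = 23` is NumPy's type number for float16, and `cv2.getPerspectiveTransform` expects 4x2 float32 point arrays, so one plausible cause is a half-precision `rect` reaching the call. A small NumPy-only sketch of the dtype check and cast (illustrative; it does not call OpenCV):

```python
import numpy as np

# NumPy's float16 has type number 23, matching "src data type = 23" above.
rect = np.array([[0, 0], [100, 0], [100, 50], [0, 50]], dtype=np.float16)

# cv2.getPerspectiveTransform accepts float32 points; casting avoids the error.
rect32 = rect.astype(np.float32)
```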
|
open
|
2022-09-09T06:23:12Z
|
2022-09-09T06:26:55Z
|
https://github.com/JaidedAI/EasyOCR/issues/846
|
[] |
VERISBABY
| 0
|
numpy/numpy
|
numpy
| 28,106
|
Building NumPy from source for Windows on ARM using Clang-cl compiler
|
Hello Developers,
- I am facing an issue while trying to build NumPy for Windows on ARM (WoA) using the Clang-cl compiler. Building NumPy from source requires C and C++ compilers with proper intrinsic support.
- Previously, I was able to successfully compile NumPy for WoA using the MSVC-optimized C/C++ CL compiler, enabling CPU baseline features that support ARM.
- However, I encountered limitations with the MSVC C/C++ CL compiler, as it does not support certain CPU dispatcher features like ASIMDHP, ASIMDFHM, and SVE. Is there any specific reason why these CPU dispatch features are not supported for WoA in MSVC?
- Meanwhile, I attempted to compile NumPy for WoA using the clang-cl compiler (both from MSVC and LLVM toolchains) to check if the CPU dispatcher features would be enabled. While I found that, apart from SVE, all other test features—including baseline features—were supported, I ran into compilation errors due to unidentified instructions.
Steps to Reproduce
1. Clone the NumPy source code and check out the latest branch
2. Install the LLVM toolchain / MSVC clang toolset
3. Remove clang and clang++ from the bin directory to avoid conflicts
4. Add the bin path at the top of the PATH environment variable
Compilers used for compilation: (screenshot not preserved)
Error and Workaround:
1. While building the meson_cpu target, I got an error about the invalid operand "fstcw" in the multiarray_tests C source file. Looking through the source, fstcw is an x86 floating-point control instruction, so as a workaround I added an extra condition to check whether this is an ARM64 build. The build then proceeded.
Workaround: guard the fstcw instruction behind an ARM64 architecture check (before/after screenshots not preserved).
Issue:
1. Currently the build fails at 240+ targets while compiling meson_cpu due to unrecognized assembly instructions (error screenshot not preserved).
Can anyone give some suggestions to overcome this issue? I need to enable CPU dispatch support for NumPy on WoA to get a better-optimised version of NumPy.
Thanks!
|
closed
|
2025-01-06T07:14:58Z
|
2025-01-28T03:07:25Z
|
https://github.com/numpy/numpy/issues/28106
|
[
"component: SIMD"
] |
Mugundanmcw
| 21
|
WZMIAOMIAO/deep-learning-for-image-processing
|
deep-learning
| 428
|
In the VGG classification model, the final output is batch_size*num_class but the labels are batch_size*1 — can the loss even be computed? Wouldn't the program raise an error?
|
closed
|
2021-12-11T05:55:06Z
|
2021-12-11T06:14:55Z
|
https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/428
|
[] |
liufeng34
| 1
|
|
sngyai/Sequoia
|
pandas
| 43
|
Any plans to support Docker?
|
As the title says.
|
closed
|
2022-12-09T22:17:23Z
|
2025-03-05T09:48:46Z
|
https://github.com/sngyai/Sequoia/issues/43
|
[] |
Noth1ngTosayLee
| 2
|
scikit-learn/scikit-learn
|
python
| 30,736
|
`randomized_svd` incorrect for complex valued matrices
|
### Describe the bug
The `randomized_svd` utility function accepts complex valued inputs without error, but the result is inconsistent with `scipy.linalg.svd`.
### Steps/Code to Reproduce
```python
import numpy as np
from scipy import linalg
from sklearn.utils.extmath import randomized_svd
rng = np.random.RandomState(42)
X = rng.randn(100, 20) + 1j * rng.randn(100, 20)
_, s, _ = linalg.svd(X)
_, s2, _ = randomized_svd(X, n_components=5)
print("s:", s[:5])
print("s2:", s2[:5])
```
### Expected Results
I expected the singular values to be numerically close.
### Actual Results
```
s: [19.81481515 18.69019042 17.62107998 17.23689681 16.3148512 ]
s2: [11.25690754 9.97157079 9.01542947 8.06160863 7.54068744]
```
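One way to see what consistent complex handling requires: in a basic randomized SVD, the projection step must use the conjugate transpose, not the plain transpose. The following is a minimal single-pass sketch (not scikit-learn's implementation); with a sketch size at least the rank, it recovers the exact singular values of the complex matrix above:

```python
import numpy as np

def rand_svd(X, k, n_oversample=15, seed=0):
    """Minimal randomized SVD sketch; uses .conj().T so complex input works."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    Omega = rng.standard_normal((n, min(n, k + n_oversample)))
    Q, _ = np.linalg.qr(X @ Omega)   # orthonormal basis for the range of X
    B = Q.conj().T @ X               # conjugate transpose, not plain .T
    Ub, s, Vh = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vh[:k]

rng = np.random.RandomState(42)
X = rng.randn(100, 20) + 1j * rng.randn(100, 20)
_, s_exact, _ = np.linalg.svd(X)
_, s_rand, _ = rand_svd(X, k=5)
```

Because k + n_oversample covers all 20 columns here, the sketch spans the full column space and s_rand matches the exact leading singular values.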
### Versions
```shell
System:
python: 3.11.4 (main, Jul 5 2023, 08:40:20) [Clang 14.0.6 ]
executable: /Users/clane/miniconda3/bin/python
machine: macOS-13.7-arm64-arm-64bit
Python dependencies:
sklearn: 1.7.dev0
pip: 25.0
setuptools: 65.5.0
numpy: 2.2.2
scipy: 1.15.1
Cython: 3.0.11
pandas: 2.2.3
matplotlib: 3.10.0
joblib: 1.4.2
threadpoolctl: 3.5.0
Built with OpenMP: True
threadpoolctl info:
user_api: blas
internal_api: openblas
num_threads: 8
prefix: libscipy_openblas
filepath: /Users/clane/Projects/misc/scikit-learn/.venv/lib/python3.11/site-packages/numpy/.dylibs/libscipy_openblas64_.dylib
version: 0.3.28
threading_layer: pthreads
architecture: neoversen1
user_api: blas
internal_api: openblas
num_threads: 8
prefix: libscipy_openblas
filepath: /Users/clane/Projects/misc/scikit-learn/.venv/lib/python3.11/site-packages/scipy/.dylibs/libscipy_openblas.dylib
version: 0.3.28
threading_layer: pthreads
architecture: neoversen1
user_api: openmp
internal_api: openmp
num_threads: 8
prefix: libomp
filepath: /opt/homebrew/Cellar/libomp/19.1.3/lib/libomp.dylib
version: None
```
|
open
|
2025-01-30T01:40:26Z
|
2025-02-04T10:51:40Z
|
https://github.com/scikit-learn/scikit-learn/issues/30736
|
[
"Bug"
] |
clane9
| 0
|
cvat-ai/cvat
|
pytorch
| 8,896
|
CANNOT Upload Annotations
|
### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
1. Create a new project.
2. Add raw labels.
3. Submit & Open.
4. Create a new task under the project.
5. Drag .png files in.
6. Submit & Open to create the task.
7. Export annotations.
8. Run the object detection models with the annotations exported in the previous step; this creates a new annotations file (UpdatedAnnotations.xml).
9. Import UpdatedAnnotations.xml into the same task from step 4.
The upload then fails with this error: Cannot read properties of undefined (reading 'push')
### Expected Behavior
I expect that annotations will be uploaded successfully and replace the existing annotations.
### Possible Solution
I am a bit of a beginner (i tried searching online with no success)
### Context
will need to see results of our object detection/models annotated
### Environment
```Markdown
Ubuntu 22.04.5 LTS
------------------------
Client: Docker Engine - Community
Version: 27.4.1
API version: 1.47
Go version: go1.22.10
Git commit: b9d17ea
Built: Tue Dec 17 15:45:42 2024
OS/Arch: linux/amd64
Context: default
Server: Docker Engine - Community
Engine:
Version: 27.4.1
API version: 1.47 (minimum version 1.24)
Go version: go1.22.10
Git commit: c710b88
Built: Tue Dec 17 15:45:42 2024
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.7.24
GitCommit: 88bf19b2105c8b17560993bee28a01ddc2f97182
runc:
Version: 1.2.2
GitCommit: v1.2.2-0-g7cb3632
docker-init:
Version: 0.19.0
GitCommit: de40ad0
```
|
closed
|
2025-01-02T07:03:37Z
|
2025-01-31T13:54:24Z
|
https://github.com/cvat-ai/cvat/issues/8896
|
[
"bug",
"need info"
] |
kotbiat
| 1
|
tensorflow/datasets
|
numpy
| 5,329
|
tfds build failed
|
**What I need help with / What I was wondering**
I am trying to build a dataset using `tfds build`, but this error occurred: TypeError: Unknown resource path: <class 'importlib_resources.readers.MultiplexedPath'>: MultiplexedPath('/home/language_table_use')
> root@15298479e1f0:/home/language_table_use# tfds build
INFO[build.py]: Loading dataset from path: /home/language_table_use/language_table_use_dataset_builder.py
2024-03-20 10:09:39.534800: I tensorflow/core/util/port.cc:111] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-03-20 10:09:39.577882: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:9342] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-03-20 10:09:39.577940: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-03-20 10:09:39.577973: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-03-20 10:09:39.586478: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
Traceback (most recent call last):
File "/usr/local/bin/tfds", line 8, in <module>
sys.exit(launch_cli())
^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/tensorflow_datasets/scripts/cli/main.py", line 103, in launch_cli
app.run(main, flags_parser=_parse_flags)
File "/usr/local/lib/python3.11/dist-packages/absl/app.py", line 308, in run
_run_main(main, args)
File "/usr/local/lib/python3.11/dist-packages/absl/app.py", line 254, in _run_main
sys.exit(main(argv))
^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/tensorflow_datasets/scripts/cli/main.py", line 98, in main
args.subparser_fn(args)
File "/usr/local/lib/python3.11/dist-packages/tensorflow_datasets/scripts/cli/build.py", line 311, in _build_datasets
for builder in builders:
File "/usr/local/lib/python3.11/dist-packages/tensorflow_datasets/scripts/cli/build.py", line 362, in _make_builders
yield make_builder()
^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/tensorflow_datasets/scripts/cli/build.py", line 477, in _make_builder
builder = builder_cls(**builder_kwargs) # pytype: disable=not-instantiable
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/language_table_use/language_table_use_dataset_builder.py", line 19, in __init__
super().__init__(*args, **kwargs)
File "/usr/local/lib/python3.11/dist-packages/tensorflow_datasets/core/logging/__init__.py", line 288, in decorator
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/tensorflow_datasets/core/dataset_builder.py", line 1319, in __init__
super().__init__(**kwargs)
File "/usr/local/lib/python3.11/dist-packages/tensorflow_datasets/core/logging/__init__.py", line 288, in decorator
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/tensorflow_datasets/core/dataset_builder.py", line 287, in __init__
self.info.initialize_from_bucket()
^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/tensorflow_datasets/core/logging/__init__.py", line 168, in __call__
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/tensorflow_datasets/core/dataset_builder.py", line 476, in info
info = self._info()
^^^^^^^^^^^^
File "/home/language_table_use/language_table_use_dataset_builder.py", line 24, in _info
return self.dataset_info_from_configs(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/tensorflow_datasets/core/dataset_builder.py", line 1098, in dataset_info_from_configs
metadata = self.get_metadata()
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/tensorflow_datasets/core/dataset_builder.py", line 245, in get_metadata
return dataset_metadata.load(cls._get_pkg_dir_path())
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/tensorflow_datasets/core/dataset_builder.py", line 235, in _get_pkg_dir_path
cls.pkg_dir_path = _get_builder_datadir_path(cls)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/tensorflow_datasets/core/dataset_builder.py", line 150, in _get_builder_datadir_path
return epath.resource_path(pkg_names[0]).joinpath(*pkg_names[1:-1])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/etils/epath/resource_utils.py", line 148, in resource_path
raise TypeError(f'Unknown resource path: {type(path)}: {path}')
TypeError: Unknown resource path: <class 'importlib_resources.readers.MultiplexedPath'>: MultiplexedPath('/home/language_table_use')
**Environment information**
(if applicable)
* Operating System: Ubuntu 22.04.2 LTS
* Python version: Python 3.11.0rc1
* `tensorflow-datasets`/`tfds-nightly` version: tensorflow-datasets 4.9.4
* `tensorflow`/`tensorflow-gpu`/`tf-nightly`/`tf-nightly-gpu` version: tensorflow 2.14.0
|
closed
|
2024-03-20T10:22:20Z
|
2024-03-21T05:21:23Z
|
https://github.com/tensorflow/datasets/issues/5329
|
[
"help"
] |
CharlieLi2S
| 2
|
supabase/supabase-py
|
fastapi
| 510
|
httpx.ReadTimeout: The read operation timed out
|
**Describe the bug**
Calling a vector search over about 73k records with the Python supabase client fails with a read timeout. Please add an option so developers can increase or decrease the timeout to suit the situation; otherwise the Python library is hard to use for long-running queries.
**To Reproduce**
The error can be reproduced by running an embedding-based search over a large table of about 73k records.
**Expected behavior**
I need to increase the timeout for the rpc method so the call does not fail.
In JavaScript this works and the call takes 25 to 30 seconds; in Python it fails.
**Screenshots**
Traceback (most recent call last):
File "F:\fielder\textkernelfunc\Lib\site-packages\httpx\_transports\default.py", line 60, in map_httpcore_exceptions
yield
File "F:\fielder\textkernelfunc\Lib\site-packages\httpx\_transports\default.py", line 218, in handle_request
resp = self._pool.handle_request(req)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\fielder\textkernelfunc\Lib\site-packages\httpcore\_sync\connection_pool.py", line 253, in handle_request
raise exc
File "F:\fielder\textkernelfunc\Lib\site-packages\httpcore\_sync\connection_pool.py", line 237, in handle_request
response = connection.handle_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\fielder\textkernelfunc\Lib\site-packages\httpcore\_sync\connection.py", line 90, in handle_request
return self._connection.handle_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\fielder\textkernelfunc\Lib\site-packages\httpcore\_sync\http11.py", line 112, in handle_request
raise exc
File "F:\fielder\textkernelfunc\Lib\site-packages\httpcore\_sync\http11.py", line 91, in handle_request
) = self._receive_response_headers(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\fielder\textkernelfunc\Lib\site-packages\httpcore\_sync\http11.py", line 155, in _receive_response_headers
event = self._receive_event(timeout=timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\fielder\textkernelfunc\Lib\site-packages\httpcore\_sync\http11.py", line 191, in _receive_event
data = self._network_stream.read(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\fielder\textkernelfunc\Lib\site-packages\httpcore\backends\sync.py", line 26, in read
with map_exceptions(exc_map):
File "C:\Users\ishaq\AppData\Local\Programs\Python\Python311\Lib\contextlib.py", line 155, in __exit__
self.gen.throw(typ, value, traceback)
File "F:\fielder\textkernelfunc\Lib\site-packages\httpcore\_exceptions.py", line 14, in map_exceptions
raise to_exc(exc)
httpcore.ReadTimeout: The read operation timed out
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "F:\fielder\textkernelfunc\dbhelper.py", line 249, in <module>
rs = asyncio.run(job.getTopJobsByCV(tk_res['documentText']))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\ishaq\AppData\Local\Programs\Python\Python311\Lib\asyncio\runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "C:\Users\ishaq\AppData\Local\Programs\Python\Python311\Lib\asyncio\runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\ishaq\AppData\Local\Programs\Python\Python311\Lib\asyncio\base_events.py", line 653, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "F:\fielder\textkernelfunc\dbhelper.py", line 239, in getTopJobsByCV
response = self.connection.rpc('match_jobs', rpc_params).execute()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\fielder\textkernelfunc\Lib\site-packages\postgrest\_sync\request_builder.py", line 55, in execute
r = self.session.request(
^^^^^^^^^^^^^^^^^^^^^
File "F:\fielder\textkernelfunc\Lib\site-packages\httpx\_client.py", line 821, in request
return self.send(request, auth=auth, follow_redirects=follow_redirects)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\fielder\textkernelfunc\Lib\site-packages\httpx\_client.py", line 908, in send
response = self._send_handling_auth(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\fielder\textkernelfunc\Lib\site-packages\httpx\_client.py", line 936, in _send_handling_auth
response = self._send_handling_redirects(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\fielder\textkernelfunc\Lib\site-packages\httpx\_client.py", line 973, in _send_handling_redirects
response = self._send_single_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\fielder\textkernelfunc\Lib\site-packages\httpx\_client.py", line 1009, in _send_single_request
response = transport.handle_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\fielder\textkernelfunc\Lib\site-packages\httpx\_transports\default.py", line 217, in handle_request
with map_httpcore_exceptions():
File "C:\Users\ishaq\AppData\Local\Programs\Python\Python311\Lib\contextlib.py", line 155, in __exit__
self.gen.throw(typ, value, traceback)
File "F:\fielder\textkernelfunc\Lib\site-packages\httpx\_transports\default.py", line 77, in map_httpcore_exceptions
raise mapped_exc(message) from exc
httpx.ReadTimeout: The read operation timed out
**Desktop (please complete the following information):**
- Windows 10
- VS Code
- Python 3.11
|
closed
|
2023-07-28T18:06:13Z
|
2024-06-25T07:11:25Z
|
https://github.com/supabase/supabase-py/issues/510
|
[
"documentation"
] |
ishaqmahsud
| 4
|
jackzhenguo/python-small-examples
|
data-science
| 35
|
Fix the images in README.md
|
The images in README.md need fixing.
If they are not needed, could they be removed? They hurt readability.
|
closed
|
2020-04-18T02:47:23Z
|
2020-04-27T09:41:53Z
|
https://github.com/jackzhenguo/python-small-examples/issues/35
|
[] |
tianheg
| 4
|
arogozhnikov/einops
|
numpy
| 343
|
[Feature suggestion] Support of tensordict (and tensorclass)
|
Tensordicts are dicts of tensors with a common batch dimension; tensorclasses are dataclasses of tensors.
https://pytorch.org/tensordict/
Tensordict still seems not fully mature, but it is already quite useful. If I am not mistaken, it supports most operations required to write an einops backend -- einsum is missing.
Before I (or somebody else) do the work to draft an implementation, I am wondering whether this might be included in einops, whether @arogozhnikov would rather wait and see if tensordict gets merged into pytorch proper (https://github.com/pytorch/pytorch/pull/112441), or whether the missing einsum functionality would be a blocker.
For us, it would already be immensely helpful to be able to use rearrange on the batch dimensions of tensordicts.
Cheers
|
open
|
2024-09-28T10:28:26Z
|
2024-12-18T10:42:44Z
|
https://github.com/arogozhnikov/einops/issues/343
|
[
"feature suggestion"
] |
fzimmermann89
| 3
|
onnx/onnx
|
deep-learning
| 6,532
|
[Feature request] aten::_native_multi_head_attention
|
### System information
1.17
### What is the problem that this feature solves?
When trying to export a PyTorch model with MHA to ONNX format, this problem appears. It seems that PyTorch creates an "aten::_native_multi_head_attention" node, which isn't supported by ONNX yet; see [this thread](https://discuss.pytorch.org/t/multiheadattention-export-to-onnx-fails-when-using-torch-no-grad/198843). Although someone in that thread proposed a solution, it doesn't work for me.
### Alternatives considered
_No response_
### Describe the feature
ONNX would support PyTorch's MHA implementation, which is the basis of many attention variants, in its latest version.
### Will this influence the current api (Y/N)?
No
### Feature Area
operators
### Are you willing to contribute it (Y/N)
No
### Notes
_No response_
|
closed
|
2024-11-05T08:05:20Z
|
2024-11-08T02:00:52Z
|
https://github.com/onnx/onnx/issues/6532
|
[
"topic: enhancement"
] |
wcycqjy
| 1
|
jupyter/nbviewer
|
jupyter
| 865
|
Gist support does not work with GHE
|
**Describe the bug**
When configured to integrate with GitHub Enterprise, Gists do not work.
**To Reproduce**
Enter Gist URL, e.g. https://ghe.mycompany.com/gist/c3ca62e42d590f81c3a906824b2528b0, in an nbviewer configured `GITHUB_API_URL=https://ghe.mycompany.com/v3/api/`.
**Expected behavior**
The notebook in the Gist is rendered in nbviewer.
**Additional context**
Related to #850 and #863.
|
closed
|
2019-11-18T03:38:12Z
|
2019-12-08T20:28:24Z
|
https://github.com/jupyter/nbviewer/issues/865
|
[] |
ivan-gomes
| 2
|
python-gino/gino
|
sqlalchemy
| 74
|
Add example of how to create the database tables
|
All of the current examples punt on showing how to create the database tables:
```# You will need to create the database and table manually```
Requiring the user to create the database itself is ok, but having to create all application tables manually would, at least for me, be a show stopper for adopting Gino.
We would need to be able to use something equivalent to SQLAlchemy's MetaData.create_all(). I tried calling Gino().create_all() (as Gino inherits from sa.MetaData), but it complained about not having a database bound to it. Maybe I did something wrong?
Does Gino() support a way to semi-automatically create the database tables from the application-defined Models? This would be used either on application startup or from separate administration scripts. We might even defer to calling non-async methods of SQLAlchemy before the event loop is started. Anything that allows us to use the defined models would be good.
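For reference, the synchronous SQLAlchemy pattern being asked for is `metadata.create_all(engine)`. The sketch below mimics that flow with only the stdlib `sqlite3` so it runs anywhere; the `Metadata` class here is a stand-in for illustration, not Gino's or SQLAlchemy's actual API:

```python
import sqlite3

class Metadata:
    """Collects table definitions so they can all be created in one call,
    mirroring the spirit of SQLAlchemy's MetaData.create_all()."""
    def __init__(self):
        self.tables = {}

    def table(self, name, ddl):
        self.tables[name] = ddl

    def create_all(self, conn):
        # Emit one CREATE TABLE per registered definition.
        for name, ddl in self.tables.items():
            conn.execute(f"CREATE TABLE IF NOT EXISTS {name} ({ddl})")

metadata = Metadata()
metadata.table("users", "id INTEGER PRIMARY KEY, nickname TEXT")

conn = sqlite3.connect(":memory:")
metadata.create_all(conn)
rows = conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'"
).fetchall()
```

With real SQLAlchemy the two calls collapse into declaring models against a `MetaData` and calling `create_all` once, bound to an engine.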
|
closed
|
2017-09-23T05:12:59Z
|
2017-09-23T08:17:10Z
|
https://github.com/python-gino/gino/issues/74
|
[] |
kinware
| 3
|
Lightning-AI/pytorch-lightning
|
deep-learning
| 19,751
|
Validation does not produce any output in PyTorch Lightning using my UNetTestModel
|
### Bug description
I'm trying to validate my model using PyTorch Lightning, but no output or logs are generated during the validation process, despite setting up everything correctly.

And this is my model part:
```python
class UNetTestModel(pl.LightningModule, HyperparametersMixin):
    def __init__(
        self,
        encoder_name='resnet50',
        encoder_weights='imagenet',
        in_channels=1,
        classes=14,
        loss_fn=DiceCELossWithKL(softmax=True, lambda_dice=0.85, lambda_ce=0.15, lambda_kl=2.0, to_onehot_y=True, include_background=True),
        loss_function='DiceCELossWithKL',
        learning_rate=3e-3,
    ):
        super().__init__()
        self.save_hyperparameters()
        self.model = smp.Unet(
            encoder_name=encoder_name,
            encoder_weights=encoder_weights,
            in_channels=in_channels,
            classes=classes,
        )
        self.loss_fn = loss_fn
        self.val_accuracy = torchmetrics.classification.Accuracy(task="multiclass", num_classes=14, average='macro', ignore_index=0)
        self.val_accuracy_classwise = torchmetrics.classification.Accuracy(task="multiclass", num_classes=14, average='none', ignore_index=0)
        self.Dice = torchmetrics.classification.Dice(multiclass=True, num_classes=14, average='macro', ignore_index=0)
        self.F1 = torchmetrics.classification.MulticlassF1Score(num_classes=14, average="macro", ignore_index=0)
        self.Jaccard = torchmetrics.classification.MulticlassJaccardIndex(num_classes=14, average="macro", ignore_index=0)

    def forward(self, x):
        return self.model(x)

    def training_step(self, batch, batch_idx):
        images, labels = batch
        outputs = self.forward(images)
        loss = self.loss_fn(outputs, labels.unsqueeze(1))
        self.log('train_loss', loss, on_step=True, on_epoch=False, logger=True, prog_bar=True)
        return loss

    def validation_step(self, batch, batch_idx):
        images, labels = batch
        outputs = self.forward(images)
        loss = self.loss_fn(outputs, labels.unsqueeze(1))
        accuracy = self.val_accuracy(outputs, labels)
        Dice = self.Dice(outputs, labels)
        F1 = self.F1(outputs, labels)
        Jaccard = self.Jaccard(outputs, labels)
        acc = self.val_accuracy_classwise(outputs, labels)
        self.log('val_loss', loss, on_step=True, on_epoch=False, logger=True, prog_bar=True)
        self.log('val_accuracy', accuracy, on_step=True, on_epoch=False, logger=True, prog_bar=True)
        self.log('val_F1', F1, on_step=True, on_epoch=False, logger=True, prog_bar=True)
        self.log('val_Dice', Dice, on_step=True, on_epoch=False, logger=True, prog_bar=True)
        self.log('val_Jaccard', Jaccard, on_step=True, on_epoch=False, logger=True, prog_bar=True)
        self.log('val_acc_4', acc[4], on_step=True, on_epoch=False, logger=True, prog_bar=True)
        self.log('val_acc_5', acc[5], on_step=True, on_epoch=False, logger=True, prog_bar=True)
        self.log('val_acc_10', acc[10], on_step=True, on_epoch=False, logger=True, prog_bar=True)
        self.log('val_acc_12', acc[12], on_step=True, on_epoch=False, logger=True, prog_bar=True)
        self.log('val_acc_13', acc[13], on_step=True, on_epoch=False, logger=True, prog_bar=True)
        return {"loss": loss, "accuracy": accuracy}

    def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_closure, **kwargs):
        if self.trainer.global_step < 50:
            lr_scale = min(1.0, float(self.trainer.global_step + 1) / 50)
            for pg in optimizer.param_groups:
                pg["lr"] = lr_scale * self.hparams.learning_rate
        optimizer.step(closure=optimizer_closure)

    def configure_optimizers(self):
        optimizer = torch.optim.Adam(self.parameters(), lr=self.hparams.learning_rate)
        scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=5, eta_min=0.000001, last_epoch=-1)
        return {
            'optimizer': optimizer,
            'lr_scheduler': {
                'scheduler': scheduler,
                'interval': 'epoch',
                'frequency': 1,
            }
        }
```
### What version are you seeing the problem on?
v2.2
### How to reproduce the bug
```python
To view the bug, run the Colab notebook cells (not those marked "Opt"); the bug reproduces in the CheckMetrics cell. It is reproducible in both Kaggle and Colab, which makes it really annoying 😡. I would be much obliged if anyone could help me with this.
https://colab.research.google.com/#fileId=https%3A//storage.googleapis.com/kaggle-colab-exported-notebooks/smu-dataset-dl-update-with-new-dataset-5df7b4b9-0565-494d-b22a-c0306ec0418e.ipynb%3FX-Goog-Algorithm%3DGOOG4-RSA-SHA256%26X-Goog-Credential%3Dgcp-kaggle-com%2540kaggle-161607.iam.gserviceaccount.com/20240410/auto/storage/goog4_request%26X-Goog-Date%3D20240410T014637Z%26X-Goog-Expires%3D259200%26X-Goog-SignedHeaders%3Dhost%26X-Goog-Signature%3D0e9d4ad91ecc6e2fae0a51622b97399160be483c4737424d1584ebdcba2a80b870f32feacc256675774b85db2c72329819040ffa6c923e20835b331d995cfea132418460df7cfba6d261e6e3381354d8ca92188ddba7e502fa71fee33c63ed5d5246df0964c3766b7a26c92b559e3e359f4bc4e78b78edf3114d0d52ab54244f7c28b560f6a31a14389b27cb86837fcfb0579c6784958ab181af41a2a915a57eaa6e0e80bc9acc55bca97cbc0311caa0e870004659e568e2acae6de0af29ff8f08bbc9ebea6118b8b9d48aea9d20593a1e3516763105e0c296679a649968501b481f722936008f893bacf0856e288c202e3124902da7cdf635d174169c05b27a&scrollTo=HbUeiVEzr21d&line=4&uniqifier=1
```
### Error messages and logs

### Environment
basic Colab env with pip-qqq-accessible lightning
### More info
_No response_
|
closed
|
2024-04-10T07:10:13Z
|
2024-09-30T12:44:30Z
|
https://github.com/Lightning-AI/pytorch-lightning/issues/19751
|
[
"bug",
"needs triage"
] |
lgy112112
| 0
|
voila-dashboards/voila
|
jupyter
| 1,063
|
Get current kernel's id/name
|
Hi, I'm trying to get the id/name of the kernel that the code is executed on.
Is it possible?
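One common approach (an assumption on my part, not from this thread) is to parse the kernel id out of the Jupyter connection-file path, which is conventionally named `kernel-<id>.json`; in a live kernel that path comes from `ipykernel.get_connection_file()`. A self-contained sketch of just the parsing step:

```python
import os
import re

def kernel_id_from_connection_file(path):
    """Recover the kernel id from a connection-file path.

    Jupyter connection files are conventionally named kernel-<id>.json;
    in a running kernel the path itself would come from
    ipykernel.get_connection_file() (assumption: ipykernel is available).
    """
    m = re.match(r"kernel-(.+)\.json$", os.path.basename(path))
    return m.group(1) if m else None

# Hypothetical path for illustration:
kid = kernel_id_from_connection_file("/run/jupyter/kernel-4f6c3d2e.json")
```

Whether Voilà exposes the same connection file to user code is not something this sketch verifies.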
|
closed
|
2022-01-19T12:34:55Z
|
2022-01-20T09:25:26Z
|
https://github.com/voila-dashboards/voila/issues/1063
|
[] |
riegerben
| 5
|
gevent/gevent
|
asyncio
| 1,180
|
Add pyproject.toml file?
|
pip 10 supports `pyproject.toml` files as per [PEP518](https://www.python.org/dev/peps/pep-0518/). In theory this lets us declare our build-time dependencies (notably on Cython) so that a source install can simply be 'pip install gevent.tar.gz' without having to manually install Cython:
```toml
[build-system]
requires = ["setuptools", "wheel", "Cython >= 0.28.2"]
```
There's a problem with this, however.
Unfortunately, [pip 10](https://pip.pypa.io/en/latest/reference/pip/#pep-518-support) will only install wheels for things listed in `requires`. If you're on a platform without a binary wheel for Cython (e.g., FreeBSD or Python 3.7b3 on any system), then [the installation simply bails](https://github.com/pypa/pip/issues/5244):
```
Could not find a version that satisfies the requirement Cython>=0.28.2 (from versions: )
No matching distribution found for Cython>=0.28.2
```
Because `pyproject.toml` also specifies build isolation, installing Cython ahead of time *also* fails, unless you specify the `--no-build-isolation` flag. So until pip supports installing build deps from source, it's not clear that `pyproject.toml` is a net win.
|
closed
|
2018-04-18T14:27:48Z
|
2019-04-12T22:20:11Z
|
https://github.com/gevent/gevent/issues/1180
|
[
"internal"
] |
jamadden
| 1
|
gradio-app/gradio
|
data-visualization
| 10,604
|
browser environment within gradio for computer use agents
|
- [x] I have searched to see if a similar issue already exists.
**Is your feature request related to a problem? Please describe.**
OmniParser v2 and many other open models would benefit from a browser inside Spaces that could run these computer-use agent models within the Spaces environment using Zero GPU or other GPU Spaces.
**Describe the solution you'd like**
Maybe implement something from Browserbase in Gradio?
@AK391 @abidlabs @gradio-pr-bot @pngwn @aliabd @freddyaboulton @dawoodkhan82 @aliabid94 @hannahblair @omerXfaruq @whitphx
|
open
|
2025-02-17T12:09:48Z
|
2025-02-17T22:10:10Z
|
https://github.com/gradio-app/gradio/issues/10604
|
[
"enhancement"
] |
geeve-research
| 0
|
python-restx/flask-restx
|
flask
| 224
|
ValueError: too many values to unpack (expected 2) for requestparser
|
I was using Flask-RESTPlus and migrated to Flask-RESTX, but when I parse request params I get this error.
It started suddenly, and I was getting the same error with Flask-RESTPlus too.
args = self.parser.parse_args()
File "/dv/virtualenvs-apps/env/lib/python3.6/site-packages/flask_restx/reqparse.py", line 387, in parse_args
value, found = arg.parse(req, self.bundle_errors)
File "/dv/virtualenvs-apps/env/lib/python3.6/site-packages/flask_restx/reqparse.py", line 215, in parse
source = self.source(request)
File "/dv/virtualenvs-apps/env/lib/python3.6/site-packages/flask_restx/reqparse.py", line 153, in source
values.update(value)
File "/dv/virtualenvs-apps/env/lib/python3.6/site-packages/werkzeug/datastructures.py", line 627, in update
for key, value in iter_multi_items(other_dict):
ValueError: too many values to unpack (expected 2)
Any help will be highly appreciated.
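The traceback ends in `for key, value in iter_multi_items(other_dict)`, and that unpacking fails whenever an item being iterated is not a 2-tuple, e.g. a plain string longer than two characters. A minimal, Flask-free sketch of the same failure mode (the function name is a stand-in, not werkzeug's code):

```python
def update_from_items(target, items):
    """Mimic the failing loop: unpack each item into (key, value),
    which assumes every item is a pair."""
    for key, value in items:  # raises ValueError if an item isn't a 2-tuple
        target[key] = value

# Well-formed pairs update cleanly:
ok = {}
update_from_items(ok, [("a", 1), ("b", 2)])

# A raw string such as "a=1" iterates character by character, so unpacking
# it into two names reproduces the exact error from the traceback:
err = None
try:
    update_from_items({}, ["a=1"])
except ValueError as e:
    err = str(e)
```

This suggests checking what type of object ends up in `values.update(value)`, i.e. what the request actually carries in the source the parser reads from.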
|
open
|
2020-09-14T10:31:33Z
|
2021-01-02T17:57:08Z
|
https://github.com/python-restx/flask-restx/issues/224
|
[
"question"
] |
BHAUTIK04
| 3
|
babysor/MockingBird
|
pytorch
| 41
|
Training crashes halfway with "EOFError: Ran out of input" and "PermissionError: [WinError 5] Access is denied" — what is going on?
|

Please take a look and help fix this.
|
closed
|
2021-08-23T09:22:46Z
|
2021-10-01T03:20:37Z
|
https://github.com/babysor/MockingBird/issues/41
|
[] |
wangkewk
| 1
|
miguelgrinberg/flasky
|
flask
| 546
|
Selenium testing failing
|
Hi Miguel,
I'm just about to finish your book, absolutely love it. I've been following along as I read and noticed that the packages are quite outdated (as expected). I decided to code your app using the latest available packages and so far apart from some minor syntax differences it was smooth sailing. This was the case up until the 'End-to-End Testing with Selenium' (15d), it took me 2 days to make it work with `Selenium v4.7.2` and `Unittest` and I just wanted to leave it here in case someone else runs into this problem as well as ask if this is the correct way to do it. It feels more like a hack to me rather than the actual solution so I would really appreciate your input.
Below are the packages I'm using as well as my solution to the problem. By the way I'm also using `ChromeDriver 108.0.5359.71`
I figured out the solution thanks to <https://github.com/pallets/flask/issues/2776>
`requirements/common.txt`
```
alembic==1.8.1
bleach==5.0.1
blinker==1.5
click==8.1.3
colorama==0.4.5
dnspython==2.2.1
dominate==2.7.0
email-validator==1.3.0
Flask==2.2.2
Flask-Bootstrap==3.3.7.1
Flask-HTTPAuth==4.7.0
Flask-Login==0.6.2
Flask-Mail==0.9.1
Flask-Migrate==3.1.0
Flask-Moment==1.0.5
Flask-PageDown==0.4.0
Flask-SQLAlchemy==3.0.2
Flask-WTF==1.0.1
greenlet==2.0.0
idna==3.4
itsdangerous==2.1.2
Jinja2==3.1.2
Mako==1.2.3
Markdown==3.4.1
MarkupSafe==2.1.1
packaging==21.3
pyparsing==3.0.9
python-dateutil==2.8.2
python-dotenv==0.21.0
six==1.16.0
SQLAlchemy==1.4.42
visitor==0.1.3
webencodings==0.5.1
Werkzeug==2.2.2
WTForms==3.0.1
```
`requirements/dev.txt`
```
-r common.txt
charset-normalizer==2.1.1
certifi==2022.9.24
commonmark==0.9.1
coverage==6.5.0
defusedxml==0.7.1
Faker==15.2.0
httpie==3.2.1
multidict==6.0.2
Pygments==2.13.0
PySocks==1.7.1
requests==2.28.1
requests-toolbelt==0.10.1
rich==12.6.0
selenium==4.7.2
urllib3==1.26.12
```
`main/views.py`
```python
[...]
@main.route('/shutdown')
def server_shutdown():
if not current_app.testing:
abort(404)
# request.environ.get('werkzeug.server.shutdown') has been deprecated
# So I used the following instead:
os.kill(os.getpid(), signal.SIGINT)
return 'Shutting down...'
[...]
```
`config.py`
```python
[...]
# I added the following configuration which is the FIX to my problem
class TestingWithSeleniumConfig(TestingConfig):
@staticmethod
def init_app(app):
if os.environ.get('FLASK_RUN_FROM_CLI'):
os.environ.pop('FLASK_RUN_FROM_CLI')
[...]
config = {
[...]
'testing-with-selenium': TestingWithSeleniumConfig,
[...]
}
```
`tests/test_selenium.py`
```python
import re
import threading
import unittest
from selenium import webdriver
from selenium.webdriver.common.by import By
from app import create_app, db, fake
from app.models import Role, User, Post
class SeleniumTestCase(unittest.TestCase):
# I don't like things hardcoded where possible
HOST = 'localhost'
PORT = 5000
# PyCharm complaining without those
client = None
app = None
app_context = None
server_thread = None
@classmethod
def setUpClass(cls):
options = webdriver.ChromeOptions()
options.add_argument('headless')
# This suppresses some jibberish from webdriver
options.add_experimental_option('excludeSwitches', ['enable-logging'])
# noinspection PyBroadException
try:
cls.client = webdriver.Chrome(options=options)
except Exception:
pass
# Skip these tests if the web browser could not be started
if cls.client:
# Create the application
# FIX: making use of 'testing-with-selenium' config
cls.app = create_app('testing-with-selenium')
cls.app_context = cls.app.app_context()
cls.app_context.push()
# Suppress logging to keep unittest output clean
import logging
logger = logging.getLogger('werkzeug')
logger.setLevel('ERROR')
# Create the database and populate with some fake data
db.create_all()
Role.insert_roles()
fake.users(10)
fake.posts(10)
# Add an administrator user
admin_role = Role.query.filter_by(permissions=0xff).first()
admin = User(email='john@example.com', username='john', password='cat', role=admin_role, confirmed=True)
db.session.add(admin)
db.session.commit()
# Start the flask server in a thread
cls.server_thread = threading.Thread(target=cls.app.run, kwargs={
'host': cls.HOST,
'port': cls.PORT,
'debug': False,
'use_reloader': False,
'use_debugger': False
})
cls.server_thread.start()
@classmethod
def tearDownClass(cls):
if cls.client:
# Stop the Flask server and the browser
cls.client.get(f'http://{cls.HOST}:{cls.PORT}/shutdown')
cls.client.quit()
cls.server_thread.join()
# Destroy the database
db.drop_all()
db.session.remove()
# Remove application context
cls.app_context.pop()
def setUp(self):
if not self.client:
self.skipTest('Web browser not available')
def tearDown(self):
pass
def test_admin_home_page(self):
# Navigate to home page
self.client.get(f'http://{self.HOST}:{self.PORT}/')
self.assertTrue(re.search(r'Hello,\s+Stranger!', self.client.page_source))
# Navigate to login page
self.client.find_element(By.LINK_TEXT, 'Log In').click()
self.assertIn('<h1>Login</h1>', self.client.page_source)
# Login
self.client.find_element(By.NAME, 'email').send_keys('john@example.com')
self.client.find_element(By.NAME, 'password').send_keys('cat')
self.client.find_element(By.NAME, 'submit').click()
self.assertTrue(re.search(r'Hello,\s+john!', self.client.page_source))
# Navigate to the user's profile page
self.client.find_element(By.LINK_TEXT, 'Profile').click()
self.assertIn('<h1>john</h1>', self.client.page_source)
```
|
open
|
2022-12-10T21:06:01Z
|
2023-09-02T15:40:25Z
|
https://github.com/miguelgrinberg/flasky/issues/546
|
[] |
sularz-maciej
| 3
|
supabase/supabase-py
|
flask
| 1,026
|
Async client: Unresolved attribute reference 'execute' for class 'BaseFilterRequestBuilder'
|
# Bug report
<!--
⚠️ We receive a lot of bug reports which have already been solved or discussed. If you are looking for help, please try these first:
- Docs: https://docs.supabase.com
- Discussions: https://github.com/supabase/supabase/discussions
- Discord: https://discord.supabase.com
Before opening a bug report, please verify the following:
-->
- [x] I confirm this is a bug with Supabase, not with my own application.
- [x] I confirm I have searched the [Docs](https://docs.supabase.com), GitHub [Discussions](https://github.com/supabase/supabase/discussions), and [Discord](https://discord.supabase.com).
## Describe the bug
> [!NOTE]
> This report was created having in consideration the instruction made by @silentworks in the thread https://discord.com/channels/839993398554656828/1323676367765897297.
When using the async client and calling the `execute` method after an `eq` filter, PyCharm generates an `Unresolved attribute reference 'execute' for class 'BaseFilterRequestBuilder'` warning.
Exploring the source of the `eq` method, I found that the class `BaseFilterRequestBuilder` doesn't have a definition for the `execute` method and doesn't extend any class which defines it. In fact, there is no definition in the whole "base_request_builder.py" file (PyCharm word search).
On the other hand, the `select` method is currently defined in `AsyncRequestBuilder` and returns an instance of `AsyncSelectRequestBuilder` (which extends `AsyncQueryRequestBuilder`). This dependency chain correctly provides the `execute` method definition.
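The typing pattern behind warnings like this can be shown with stand-in classes (these are illustrations, not the real supabase classes): a fluent method declared on a base class hides the subclass's terminal method unless its return type is narrowed back to the caller's type, e.g. with a bound `TypeVar`:

```python
from typing import TypeVar

# Bound TypeVar so eq() is typed as returning the *calling* subclass,
# not the base class it is defined on.
T = TypeVar("T", bound="BaseFilter")

class BaseFilter:
    def eq(self: T, column: str, value: object) -> T:
        # Annotating self and the return with T preserves the subclass
        # type through the fluent call, so checkers resolve .execute().
        return self

class AsyncQuery(BaseFilter):
    def execute(self) -> str:
        return "executed"

result = AsyncQuery().eq("key", 1).execute()
```

Whether supabase-py adopts this or defines `execute` on the base builder is a maintainer decision; this only sketches why the IDE loses track of the method.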
## To Reproduce
Steps to reproduce the behavior, please provide code snippets or a repository:
1. Create an async client
```python
from supabase import create_async_client
from app.env import env_vars
supabase = await create_async_client(
supabase_url=env_vars['SUPABASE_URL'],
supabase_key=env_vars['SUPABASE_KEY'],
)
```
2. Create a select query to any table and schema:
```python
result = await supabase.schema('stuff').table('regs').select().eq('key', key).execute()
```
3. PyCharm generates a warning when calling the `execute()` method
## Expected behavior
Call the `execute` method after `eq` without a warning.
## Screenshots

## System information
- OS: GNU/Linux Fedora 41 KDE Workstation
- Version of supabase: 2.11.0
- Version of Python: 3.13.0
- Version of Miniconda: 24.7.1
- Version of Poetry: 1.8.4 (installed with pipx)
## Additional context
Using PyCharm Community 2024.3.1.1 (243.22562.220)
|
open
|
2025-01-07T20:44:29Z
|
2025-01-14T16:21:28Z
|
https://github.com/supabase/supabase-py/issues/1026
|
[
"bug"
] |
KBeDevel
| 4
|
Teemu/pytest-sugar
|
pytest
| 75
|
tests skipped with pytest.mark.skipif not shown at all
|
If I use pytest.mark.skipif for a test, the test is not shown as
'skipped' with pytest-sugar, while regular pytest does show it.
If I use nose.plugins.skip.SkipTest from a test instead, pytest-sugar
_does_ see it as skipped.
Example code is the following:
```
import pytest
import nose
def test_foo():
assert True
@pytest.mark.skipif(True, reason='testing skipping')
def test_skipped():
assert False
def test_nose_skipped():
raise nose.plugins.skip.SkipTest('nose skipping')
assert False
```
I tried to add some debug statements in pytest_sugar.py, and noticed
that for the skipif test there is a 'setup' trigger where
report.status says 'skipped' but it becomes 'passed' in the 'teardown'
trigger. There is no 'call' trigger for that skipped test.
I have not been able to determine any useful things further.
This is reported against pytest-sugar 0.5.1 and pytest 2.8.7.
I verified that the same problem is present with older versions of pytest-sugar, so it seems this never worked.
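The debugging observation above points at the likely logic bug: a `skipif` test produces a *setup*-phase report with status "skipped" and never gets a *call* phase, so a reporter that lets the teardown report overwrite the status will show it as passed. A hedged sketch of the classification a plugin needs (this is illustrative, not pytest-sugar's actual code):

```python
def final_outcome(phase_reports):
    """phase_reports: ordered list of (when, outcome) pairs for one test.

    A skipif skip yields ('setup', 'skipped') and no 'call' phase at all,
    so a skipped phase must win outright rather than being overwritten by
    a later passing teardown report.
    """
    outcome = "passed"
    for when, result in phase_reports:
        if result == "skipped":
            return "skipped"
        if when == "call":
            outcome = result
    return outcome

# The two shapes described in the report above:
skipif_test = [("setup", "skipped"), ("teardown", "passed")]
normal_test = [("setup", "passed"), ("call", "failed"), ("teardown", "passed")]
```

The nose `SkipTest` variant raises during the call phase instead, which is presumably why it is already handled correctly.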
|
closed
|
2016-02-06T15:04:51Z
|
2016-03-29T11:43:06Z
|
https://github.com/Teemu/pytest-sugar/issues/75
|
[] |
patrickdepinguin
| 4
|
marimo-team/marimo
|
data-visualization
| 3,401
|
support multiline for mo.ui.chat input & bubble
|
### Describe the bug
<img width="500" alt="Image" src="https://github.com/user-attachments/assets/7dd67d3a-35f8-49e1-85f2-00bc6dd8db6c" />
1. The input textbox does not support multiple lines.
Suggestion: make it an autosizing textarea. Some options:
a) Add a new package - https://github.com/Andarist/react-textarea-autosize
b) some css, js hack - https://css-tricks.com/auto-growing-inputs-textareas/#aa-other-ideas
c) same behaviour as chat panel - but I think this isn't an option, that uses codemirror which is overkill.
I tried it, and b) should work; it avoids the useLayoutEffect hooks from a).
2. Shift+enter / cmd+enter to add new line, Enter to submit.
- This is a bit tricky since we need to override the code editor shortcut (shift+enter will go to the next cell)
- One suggestion is we keep the default behaviour of textarea (Enter creates a new line), and we set `Ctrl+Enter` to submit the form
3. The chat bubble will auto format everything to 1 line.
I believe there's a simple solution in `chat-ui.tsx`:
```.tsx
<p className={cn(message.role === "user" && "whitespace-pre-wrap")}>
{renderMessage(message)}
</p>
```
### Environment
<details>
```
{
"marimo": "0.10.12",
"OS": "Darwin",
"OS Version": "23.6.0",
"Processor": "arm",
"Python Version": "3.12.8",
"Binaries": {
"Browser": "131.0.6778.265",
"Node": "v23.2.0"
},
"Dependencies": {
"click": "8.1.3",
"docutils": "0.21.2",
"itsdangerous": "2.2.0",
"jedi": "0.19.2",
"markdown": "3.7",
"narwhals": "1.20.1",
"packaging": "24.2",
"psutil": "6.1.1",
"pygments": "2.18.0",
"pymdown-extensions": "10.13",
"pyyaml": "6.0.2",
"ruff": "0.6.9",
"starlette": "0.45.0",
"tomlkit": "0.13.2",
"typing-extensions": "4.12.2",
"uvicorn": "0.34.0",
"websockets": "14.1"
},
"Optional Dependencies": {
"anywidget": "0.9.13",
"duckdb": "1.1.3",
"ibis-framework": "9.5.0",
"pandas": "2.2.3",
"polars": "1.18.0",
"pyarrow": "17.0.0"
}
}
```
</details>
### Code to reproduce
```.py
mo.ui.chat(
mo.ai.llm.openai(
"gpt-4o-mini",
system_message="You are a helpful chemist. Output your answer in markdown or latex.",
),
show_configuration_controls=True,
)
```
|
closed
|
2025-01-11T17:06:54Z
|
2025-01-13T16:09:41Z
|
https://github.com/marimo-team/marimo/issues/3401
|
[
"bug"
] |
Light2Dark
| 2
|
encode/apistar
|
api
| 589
|
Set correct content-length for 204/304 responses with JSONResponse
|
The HTTP Codes 204 (No-Content) and 304 (Not-Modified) currently return a Content-Length > 0 when using the JSONResponse:
```
http.JSONResponse(None, status_code=204)
```
This will serialize "None" and return a response with content-length of 4:
```
HTTP/1.1 204 No Content
content-length: 4
content-type: application/json
```
The same happens when passing an empty string and empty dict. The first parameter is always serialized, no matter what the status code.
This is very annoying when using a client written in Go, as the built-in http library will complain with errors such as `2018/06/17 23:56:11 Unsolicited response received on idle HTTP channel starting with "null"; err=<nil>`
Sanic had a similar issue which was fixed a few months ago: https://github.com/channelcat/sanic/pull/1113
It would be great if there was an exception for status codes 204/304 and the response would set Content-Length to 0 and not return any body.
|
closed
|
2018-06-17T22:05:40Z
|
2018-06-20T21:37:35Z
|
https://github.com/encode/apistar/issues/589
|
[] |
arthurk
| 2
|
tqdm/tqdm
|
jupyter
| 684
|
How to work with logging? (Screen + File)
|
4.29.1 3.7.2 (default, Dec 29 2018, 00:00:04)
[Clang 4.0.1 (tags/RELEASE_401/final)] darwin
Can tqdm work with logging?
I want to log to both screen and file with the built-in logging standard module.
This link https://github.com/tqdm/tqdm/issues/313#issuecomment-346819396 works partly well, but only covers the screen, not "Screen + File".
And Google/StackOverflow didn't give the answer.
Thanks.
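The usual pattern (an assumption on my part, combining the linked comment with standard logging practice) is one logger with two handlers: the screen handler's `emit` would route messages through `tqdm.tqdm.write()` so progress bars stay intact, while the `FileHandler` needs no change at all since tqdm only draws on the console. A stdlib-only sketch of the two-sink wiring, using `StringIO` stand-ins for stderr and a file:

```python
import io
import logging

screen = io.StringIO()   # stand-in for sys.stderr; a real setup would use a
                         # handler whose emit() calls tqdm.tqdm.write(msg)
logfile = io.StringIO()  # stand-in for logging.FileHandler("run.log")

logger = logging.getLogger("demo")
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler(screen))
logger.addHandler(logging.StreamHandler(logfile))

# One call, two sinks:
logger.info("hello from both sinks")
```

Only the screen-facing handler needs the tqdm-aware `emit`; the file sink is unaffected by the progress bar.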
|
open
|
2019-02-28T15:59:51Z
|
2019-03-04T01:05:20Z
|
https://github.com/tqdm/tqdm/issues/684
|
[
"duplicate 🗐",
"question/docs ‽"
] |
jinyu121
| 1
|
s3rius/FastAPI-template
|
asyncio
| 96
|
Request object isn't passed as argument
|
Thanks for this package. I have created a GraphQL app using the template but I am getting the error below. It seems FastAPI doesn't pass the request object.
```log
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/Users/test/Library/Caches/pypoetry/virtualenvs/fastapi-graphql-practice-1UuEp-7G-py3.10/lib/python3.10/site-packages/uvicorn/protocols/websockets/websockets_impl.py", line 184, in run_asgi
result = await self.app(self.scope, self.asgi_receive, self.asgi_send)
File "/Users/test/Library/Caches/pypoetry/virtualenvs/fastapi-graphql-practice-1UuEp-7G-py3.10/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 75, in __call__
return await self.app(scope, receive, send)
File "/Users/test/Library/Caches/pypoetry/virtualenvs/fastapi-graphql-practice-1UuEp-7G-py3.10/lib/python3.10/site-packages/fastapi/applications.py", line 261, in __call__
await super().__call__(scope, receive, send)
File "/Users/test/Library/Caches/pypoetry/virtualenvs/fastapi-graphql-practice-1UuEp-7G-py3.10/lib/python3.10/site-packages/starlette/applications.py", line 112, in __call__
await self.middleware_stack(scope, receive, send)
File "/Users/test/Library/Caches/pypoetry/virtualenvs/fastapi-graphql-practice-1UuEp-7G-py3.10/lib/python3.10/site-packages/starlette/middleware/errors.py", line 146, in __call__
await self.app(scope, receive, send)
File "/Users/test/Library/Caches/pypoetry/virtualenvs/fastapi-graphql-practice-1UuEp-7G-py3.10/lib/python3.10/site-packages/starlette/exceptions.py", line 58, in __call__
await self.app(scope, receive, send)
File "/Users/test/Library/Caches/pypoetry/virtualenvs/fastapi-graphql-practice-1UuEp-7G-py3.10/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
raise e
File "/Users/test/Library/Caches/pypoetry/virtualenvs/fastapi-graphql-practice-1UuEp-7G-py3.10/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
await self.app(scope, receive, send)
File "/Users/test/Library/Caches/pypoetry/virtualenvs/fastapi-graphql-practice-1UuEp-7G-py3.10/lib/python3.10/site-packages/starlette/routing.py", line 656, in __call__
await route.handle(scope, receive, send)
File "/Users/test/Library/Caches/pypoetry/virtualenvs/fastapi-graphql-practice-1UuEp-7G-py3.10/lib/python3.10/site-packages/starlette/routing.py", line 315, in handle
await self.app(scope, receive, send)
File "/Users/test/Library/Caches/pypoetry/virtualenvs/fastapi-graphql-practice-1UuEp-7G-py3.10/lib/python3.10/site-packages/starlette/routing.py", line 77, in app
await func(session)
File "/Users/test/Library/Caches/pypoetry/virtualenvs/fastapi-graphql-practice-1UuEp-7G-py3.10/lib/python3.10/site-packages/fastapi/routing.py", line 264, in app
solved_result = await solve_dependencies(
File "/Users/test/Library/Caches/pypoetry/virtualenvs/fastapi-graphql-practice-1UuEp-7G-py3.10/lib/python3.10/site-packages/fastapi/dependencies/utils.py", line 498, in solve_dependencies
solved_result = await solve_dependencies(
File "/Users/test/Library/Caches/pypoetry/virtualenvs/fastapi-graphql-practice-1UuEp-7G-py3.10/lib/python3.10/site-packages/fastapi/dependencies/utils.py", line 498, in solve_dependencies
solved_result = await solve_dependencies(
File "/Users/test/Library/Caches/pypoetry/virtualenvs/fastapi-graphql-practice-1UuEp-7G-py3.10/lib/python3.10/site-packages/fastapi/dependencies/utils.py", line 498, in solve_dependencies
solved_result = await solve_dependencies(
File "/Users/test/Library/Caches/pypoetry/virtualenvs/fastapi-graphql-practice-1UuEp-7G-py3.10/lib/python3.10/site-packages/fastapi/dependencies/utils.py", line 523, in solve_dependencies
solved = await solve_generator(
File "/Users/test/Library/Caches/pypoetry/virtualenvs/fastapi-graphql-practice-1UuEp-7G-py3.10/lib/python3.10/site-packages/fastapi/dependencies/utils.py", line 443, in solve_generator
cm = asynccontextmanager(call)(**sub_values)
File "/Users/test/.pyenv/versions/3.10.2/lib/python3.10/contextlib.py", line 314, in helper
return _AsyncGeneratorContextManager(func, args, kwds)
File "/Users/test/.pyenv/versions/3.10.2/lib/python3.10/contextlib.py", line 103, in __init__
self.gen = func(*args, **kwds)
TypeError: get_db_session() missing 1 required positional argument: 'request'
INFO: connection open
INFO: connection closed
```
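The last frame is reproducible without FastAPI at all: `asynccontextmanager` invokes the wrapped generator function eagerly inside `_AsyncGeneratorContextManager.__init__`, so calling the dependency factory without its `request` argument fails exactly as in the log. (One common fix, an assumption on my part rather than from this thread, is that WebSocket routes have no HTTP `Request` to inject, so the dependency's parameter must be a `WebSocket` or made optional.)

```python
from contextlib import asynccontextmanager

@asynccontextmanager
async def get_db_session(request):
    # Stand-in for the template's session dependency.
    yield "session"

# Calling the factory with no arguments reproduces the final TypeError
# from the traceback: the generator function is called immediately,
# before any context is entered.
err = None
try:
    get_db_session()
except TypeError as e:
    err = str(e)
```

So the root cause is the dependency resolver never supplying `request` on the WebSocket path, not the context-manager machinery itself.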
|
closed
|
2022-07-05T07:01:34Z
|
2022-10-13T21:26:26Z
|
https://github.com/s3rius/FastAPI-template/issues/96
|
[] |
devNaresh
| 16
|
lepture/authlib
|
flask
| 456
|
Two tests/jose/test_jwe.py tests failing
|
While packaging this package for openSUSE, we run the test suite during packaging (so that we may catch unexpected failures to build the package correctly), and when running `tests/jose` I got this:
```
[ 11s] + python3.9 -mpytest tests/jose
[ 11s] ============================= test session starts ==============================
[ 11s] platform linux -- Python 3.9.12, pytest-7.1.1, pluggy-1.0.0
[ 11s] rootdir: /home/abuild/rpmbuild/BUILD/authlib-1.0.1, configfile: tox.ini
[ 11s] plugins: anyio-3.5.0, asyncio-0.17.2
[ 11s] asyncio: mode=auto
[ 11s] collected 134 items
[ 11s]
[ 11s] tests/jose/test_jwe.py ..................F.............................. [ 36%]
[ 11s] ............................F [ 58%]
[ 12s] tests/jose/test_jwk.py ....................... [ 75%]
[ 12s] tests/jose/test_jws.py ................ [ 87%]
[ 12s] tests/jose/test_jwt.py ................. [100%]
[ 12s]
[ 12s] =================================== FAILURES ===================================
[ 12s] __________________________ JWETest.test_dir_alg_xc20p __________________________
[ 12s]
[ 12s] self = <tests.jose.test_jwe.JWETest testMethod=test_dir_alg_xc20p>
[ 12s]
[ 12s] def test_dir_alg_xc20p(self):
[ 12s] jwe = JsonWebEncryption()
[ 12s] key = OctKey.generate_key(256, is_private=True)
[ 12s] protected = {'alg': 'dir', 'enc': 'XC20P'}
[ 12s] > data = jwe.serialize_compact(protected, b'hello', key)
[ 12s]
[ 12s] tests/jose/test_jwe.py:2657:
[ 12s] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[ 12s] authlib/jose/rfc7516/jwe.py:80: in serialize_compact
[ 12s] enc = self.get_header_enc(protected)
[ 12s] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[ 12s]
[ 12s] self = <authlib.jose.rfc7516.jwe.JsonWebEncryption object at 0x7fad22f4ac40>
[ 12s] header = {'alg': 'dir', 'enc': 'XC20P'}
[ 12s]
[ 12s] def get_header_enc(self, header):
[ 12s] if 'enc' not in header:
[ 12s] raise MissingEncryptionAlgorithmError()
[ 12s] enc = header['enc']
[ 12s] if self._algorithms and enc not in self._algorithms:
[ 12s] raise UnsupportedEncryptionAlgorithmError()
[ 12s] if enc not in self.ENC_REGISTRY:
[ 12s] > raise UnsupportedEncryptionAlgorithmError()
[ 12s] E authlib.jose.errors.UnsupportedEncryptionAlgorithmError: unsupported_encryption_algorithm: Unsupported "enc" value in header
[ 12s]
[ 12s] authlib/jose/rfc7516/jwe.py:678: UnsupportedEncryptionAlgorithmError
[ 12s] _______________ JWETest.test_xc20p_content_encryption_decryption _______________
[ 12s]
[ 12s] self = <tests.jose.test_jwe.JWETest testMethod=test_xc20p_content_encryption_decryption>
[ 12s]
[ 12s] def test_xc20p_content_encryption_decryption(self):
[ 12s] # https://datatracker.ietf.org/doc/html/draft-irtf-cfrg-xchacha-03#appendix-A.3.1
[ 12s] > enc = JsonWebEncryption.ENC_REGISTRY['XC20P']
[ 12s] E KeyError: 'XC20P'
[ 12s]
[ 12s] tests/jose/test_jwe.py:2672: KeyError
[ 12s] =========================== short test summary info ============================
[ 12s] FAILED tests/jose/test_jwe.py::JWETest::test_dir_alg_xc20p - authlib.jose.err...
[ 12s] FAILED tests/jose/test_jwe.py::JWETest::test_xc20p_content_encryption_decryption
[ 12s] ======================== 2 failed, 132 passed in 1.08s =========================
```
[Complete build log](https://github.com/lepture/authlib/files/8655846/_log.txt) with all details of packages used and steps taken to reproduce.
- OS: openSUSE/Tumbleweed as of 2022-05-10
- Python Version: various versions of Python, this traceback is from 3.9.12, pytest-7.1.1, and pluggy-1.0.0
- Authlib Version: 1.0.1
|
open
|
2022-05-09T23:41:53Z
|
2025-02-21T14:52:18Z
|
https://github.com/lepture/authlib/issues/456
|
[
"bug",
"jose"
] |
mcepl
| 4
|
gee-community/geemap
|
streamlit
| 2,231
|
geemap import fails on GitHub actions
|
### Environment Information
Github runner setup:
```
Current runner version: '2.322.0'
Operating System
Runner Image
Runner Image Provisioner
GITHUB_TOKEN Permissions
Secret source: Actions
Prepare workflow directory
Prepare all required actions
Getting action download info
Download action repository 'actions/checkout@v3' (SHA:f43a0e5ff2bd294095638e18286ca9a3d1956744)
Download action repository 'actions/setup-python@v2' (SHA:e9aba2c848f5ebd159c070c61ea2c4e2b122355e)
Download action repository 'pre-commit/action@v3.0.0' (SHA:646c83fcd040023954eafda54b4db0192ce70507)
Download action repository 'conda-incubator/setup-miniconda@v3' (SHA:505e6394dae86d6a5c7fbb6e3fb8938e3e863830)
Getting action download info
Download action repository 'actions/cache@v3' (SHA:2f8e54208210a422b2efd51efaa6bd6d7ca8920f)
Complete job name: test
```
### Description
It appears that importing `from IPython.core.display import display` should now be `from IPython.display import display`. I believe that this is causing the issue in Github Actions.
### What I Did
Here is the error in Github Actions:
```
ImportError while loading conftest '/home/runner/work/basinscout/basinscout/tests/conftest.py'.
tests/conftest.py:5: in <module>
from .fixtures import *
tests/fixtures/__init__.py:2: in <module>
from .basinscout_fxt import (
tests/fixtures/basinscout_fxt.py:7: in <module>
from basinscout import BasinScout
basinscout/__init__.py:1: in <module>
from .basinscout import BasinScout
basinscout/basinscout.py:34: in <module>
from .features.field import Field
basinscout/features/field.py:26: in <module>
from ..models.sb_irrigate import _get_openet_dataframe, _get_prism_dataframe
basinscout/models/sb_irrigate.py:22: in <module>
from geemap import common as geemap
/usr/share/miniconda/envs/bscout/lib/python3.11/site-packages/geemap/__init__.py:55: in <module>
raise e
/usr/share/miniconda/envs/bscout/lib/python3.11/site-packages/geemap/__init__.py:45: in <module>
from .geemap import *
/usr/share/miniconda/envs/bscout/lib/python3.11/site-packages/geemap/geemap.py:30: in <module>
from . import core
/usr/share/miniconda/envs/bscout/lib/python3.11/site-packages/geemap/core.py:15: in <module>
from . import toolbar
/usr/share/miniconda/envs/bscout/lib/python3.11/site-packages/geemap/toolbar.py:20: in <module>
from IPython.core.display import display
E ImportError: cannot import name 'display' from 'IPython.core.display' (/usr/share/miniconda/envs/bscout/lib/python3.11/site-packages/IPython/core/display.py)
Please restart Jupyter kernel after installation if you encounter any errors when importing geemap.
```
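A common pattern for staying compatible with both import locations is a try/except shim. This is only a sketch (not geemap's actual fix), with a last-resort fallback so it also runs where IPython is absent:

```python
# Prefer the modern import location, fall back for older IPython releases.
# The final `print` fallback is only an assumption for headless environments
# without IPython installed at all.
try:
    from IPython.display import display
except ImportError:
    try:
        from IPython.core.display import display
    except ImportError:
        display = print  # no IPython available: degrade to plain print

display("geemap import shim loaded")
```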
|
closed
|
2025-03-04T23:19:07Z
|
2025-03-05T23:39:40Z
|
https://github.com/gee-community/geemap/issues/2231
|
[
"bug"
] |
dharp
| 10
|
sinaptik-ai/pandas-ai
|
data-science
| 1,224
|
Analisis
|
closed
|
2024-06-11T17:20:51Z
|
2024-09-17T16:06:31Z
|
https://github.com/sinaptik-ai/pandas-ai/issues/1224
|
[] |
ajitetuko
| 0
|
|
jupyter-widgets-contrib/ipycanvas
|
jupyter
| 104
|
image_data Serialization
|
The setup you currently have for image serialization works, so I don't think there's a strong reason to change it, but I figured out a simpler serialization scheme from `canvas` -> `numpy array` that I think would let you drop the Pillow dependency if you wanted, and would reduce the complexity of `toBytes` in `util.ts`. Figured I ought to share in case it's helpful, given how much looking at your source code has helped me.
**`toBytes`**
To get the bytes from a canvas you currently do this
https://github.com/martinRenou/ipycanvas/blob/09ca4e04c3c43bca5df46975cad28ada3d50750e/src/widget.ts#L311
https://github.com/martinRenou/ipycanvas/blob/09ca4e04c3c43bca5df46975cad28ada3d50750e/src/utils.ts#L101-L117
I think you could instead just do
```typescript
const bytes = this.ctx.getImageData(
0,
0,
this.canvas.width,
this.canvas.height
).data.buffer;
```
though I found it nice to also include the width and height in what I pass back to Python ([ref](https://github.com/ianhi/ipysegment/blob/5b57d8c83bca79f40d3f1949420fb63642bba4f5/src/widget.ts#L18))
`return { width: image.width, height: image.height, data: new DataView(image.data.buffer) };`
then on the python side to get the numpy array without PIL the deserializer looks like ([example](https://github.com/ianhi/ipysegment/blob/5b57d8c83bca79f40d3f1949420fb63642bba4f5/ipysegment/segment.py#L21))
```python
_bytes = None if json['data'] is None else json['data'].tobytes()
return np.copy(np.frombuffer(_bytes, dtype=np.uint8).reshape(json['width'], json['height'], 4))
```
the first line is just the function ipywidgets uses for byte deserialization, and the `copy` separates the memory from that used by JavaScript, which makes it a writeable array.
Though I think an even more general solution is to use the serializers and traittypes from https://github.com/vidartf/ipydatawidgets which allow `image_data` to just be directly set and accessed as numpy arrays. For example this widget which displays a 2D RGBA image based on numpy arrays https://github.com/vidartf/ipydatawidgets/blob/master/ipydatawidgets/ndarray/media.py
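To make the proposed deserializer concrete, here is a self-contained sketch (the function name and the all-zeros buffer are illustrative; note that canvas `ImageData` is row-major, so `height` leads the reshape):

```python
import numpy as np

def rgba_from_bytes(buf: bytes, width: int, height: int) -> np.ndarray:
    """Rebuild an (H, W, 4) uint8 RGBA array from raw canvas bytes."""
    # np.copy detaches the array from the read-only transport buffer,
    # making it writeable on the Python side.
    return np.copy(np.frombuffer(buf, dtype=np.uint8).reshape(height, width, 4))

img = rgba_from_bytes(bytes(24), width=2, height=3)  # 2x3 canvas, all zeros
```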
----
I hope this is helpful! If this isn't then no worries and feel free to close this.
|
closed
|
2020-06-29T16:18:29Z
|
2020-09-13T01:18:28Z
|
https://github.com/jupyter-widgets-contrib/ipycanvas/issues/104
|
[] |
ianhi
| 9
|
sgl-project/sglang
|
pytorch
| 4,456
|
[Track] VLM accuracy in MMMU benchmark
|
This issue keeps track of the accuracy of all VLM models on the MMMU benchmark
``` python
python benchmark/mmmu/bench_sglang.py
python benchmark/mmmu/bench_hf.py --model-path model
```
| | sglang | hf |
|--|--|--|
| Qwen2-VL-7B-Instruct | 0.485 | 0.255 |
| Qwen2.5-VL-7B-Instruct | 0.477 | 0.242 |
| MiniCPM-V-2_6 | 0.426 | |
| DeepseekVL2| 0.447 | |
| Deepseek-Janus-Pro-7B| | |
| Llava + Llama| | |
| Llava + qwen| | |
| Llava + Mistral| | |
| Mlama | | |
| Gemma-3-it-4B| 0.409 | 0.403 |
| InternVL2.5-38B | 0.61 | |
|
open
|
2025-03-15T17:09:50Z
|
2025-03-23T15:04:20Z
|
https://github.com/sgl-project/sglang/issues/4456
|
[
"good first issue",
"visIon-LM"
] |
yizhang2077
| 4
|
aio-libs/aiomysql
|
asyncio
| 952
|
Is there any way to release inactive connection?
|
### Is your feature request related to a problem?
_No response_
### Describe the solution you'd like
I want to set a connection lifetime on the client side (not using wait_timeout)
### Describe alternatives you've considered
A connection created during the startup probe is not released for a long time.
I want to release this connection when no more queries are being executed.
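If memory serves, aiomysql's `create_pool` already exposes a `pool_recycle` argument for exactly this; the underlying idea is just age tracking, sketched here in plain Python (no aiomysql involved):

```python
import time

class TimedConnection:
    """Sketch: track connection age so a pool can recycle stale ones."""
    def __init__(self, max_age: float):
        self.created = time.monotonic()
        self.max_age = max_age

    def expired(self) -> bool:
        # once older than max_age, the pool should close and replace it
        return time.monotonic() - self.created > self.max_age

conn = TimedConnection(max_age=0.01)
time.sleep(0.02)
print(conn.expired())  # True
```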
### Additional context
_No response_
### Code of Conduct
- [X] I agree to follow the aio-libs Code of Conduct
|
closed
|
2023-08-11T08:13:17Z
|
2023-08-14T11:12:15Z
|
https://github.com/aio-libs/aiomysql/issues/952
|
[
"enhancement"
] |
hyeongguen-song
| 0
|
thp/urlwatch
|
automation
| 462
|
ignore_* options do not appear to work when added to urlwatch.yaml
|
Per the documentation, I added the following parameters to $HOME/.config/urlwatch/urlwatch.yaml:
```
url:
  ignore_connection_errors: true
  ignore_http_error_codes: 4xx, 5xx
```
I still get connection errors. When I add these two options beneath the URL that is failing with connection errors, I no longer get e-mails about it.
Version being used:
ii urlwatch 2.17-1 all monitors webpages for you
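For comparison, the placement the reporter says does work is per job, beneath the URL in the jobs file rather than in `urlwatch.yaml`; a sketch (the job name and URL are placeholders):

```yaml
# urls.yaml (jobs file): per-job directives
name: "flaky site"
url: "https://example.net/"
ignore_connection_errors: true
ignore_http_error_codes: 4xx, 5xx
```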
|
closed
|
2020-03-11T17:56:24Z
|
2020-07-20T09:53:39Z
|
https://github.com/thp/urlwatch/issues/462
|
[] |
jpiszcz
| 7
|
kubeflow/katib
|
scikit-learn
| 1,670
|
golint has been archived
|
/kind feature
**Describe the solution you'd like**
Currently, we are using [golint](https://github.com/kubeflow/katib/blob/master/hack/verify-golint.sh) as a linter for Go, although the [golint repository](https://github.com/golang/lint) has been archived. So, it might be better to replace golint with another linter (e.g. [golangci-lint](https://github.com/golangci/golangci-lint)).
**Anything else you would like to add:**
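As one possible direction (an assumption, not a decision recorded in this issue), golangci-lint reads a `.golangci.yml`; a minimal sketch enabling revive, a maintained successor to golint:

```yaml
# .golangci.yml (minimal sketch; the linter choice is illustrative)
linters:
  enable:
    - revive   # maintained replacement for the archived golint
    - govet
```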
|
closed
|
2021-09-17T06:01:35Z
|
2021-09-24T06:09:38Z
|
https://github.com/kubeflow/katib/issues/1670
|
[
"kind/feature"
] |
tenzen-y
| 7
|
huggingface/pytorch-image-models
|
pytorch
| 1,610
|
What batch size number other than 1024 have you tried when training a DeiT model?
|
What batch sizes other than 1024 have you tried when training a DeiT or ViT model? In the DeiT paper (https://arxiv.org/abs/2012.12877), they used a batch size of 1024 and mentioned that the learning rate should be scaled according to the batch size.
However, I was wondering if you have any experience successfully training a DeiT model with a batch size even smaller than 512? If yes, what accuracy did you achieve?
This would be helpful for someone training on constrained resources that cannot train on a batch size of 1024.
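For reference, DeiT's linear scaling rule (lr = base_lr * batch_size / 512, with base_lr = 5e-4 in the paper) can be sketched as:

```python
def scaled_lr(base_lr: float, batch_size: int, base_batch: int = 512) -> float:
    """Linear LR scaling: the learning rate grows proportionally with batch size."""
    return base_lr * batch_size / base_batch

# DeiT's reference setting: 5e-4 at an effective batch of 512
print(scaled_lr(5e-4, 1024))  # 0.001
```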
|
closed
|
2023-01-03T18:12:20Z
|
2023-02-01T17:35:08Z
|
https://github.com/huggingface/pytorch-image-models/issues/1610
|
[] |
Phuoc-Hoan-Le
| 2
|
stanford-oval/storm
|
nlp
| 167
|
PGVector and GraphRAG support (Retrieval for RAG and GraphRAG)
|
Hi,
I have my own RAG and GraphRAG systems and I wanted to integrate STORM retrieval with PGVector and GraphRAG.
Can you please integrate support for such Retrievals?
|
open
|
2024-09-12T05:50:57Z
|
2025-01-03T19:49:13Z
|
https://github.com/stanford-oval/storm/issues/167
|
[] |
k2ai
| 2
|
lukas-blecher/LaTeX-OCR
|
pytorch
| 350
|
How to join the contribution?
|
I have experience with model compression and I want to contribute. Where can I get the training dataset and pretrained weights?
|
open
|
2023-12-26T06:40:19Z
|
2023-12-26T06:40:19Z
|
https://github.com/lukas-blecher/LaTeX-OCR/issues/350
|
[] |
MaybeRichard
| 0
|
PaddlePaddle/ERNIE
|
nlp
| 343
|
encode.sh error: The shape of Input(Length) should be [batch_size]
|
Hi, I am using encode.py to compute the embedding of a sentence, as described in the Readme: https://github.com/PaddlePaddle/ERNIE#faq1-how-to-get-sentencetokens-embedding-of-ernie
My encode script is:
```shell
TASK_DATA_PATH=.
MODEL_PATH=/path/to/model
export FLAGS_sync_nccl_allreduce=1
export CUDA_VISIBLE_DEVICES=4
python -u ernie_encoder.py \
--use_cuda true \
--batch_size 3 \
--output_dir "./test" \
--init_pretraining_params ${MODEL_PATH}/trained_chinese/params \
--data_set ${TASK_DATA_PATH}/baidu_input/dev.tsv \
--vocab_path ${MODEL_PATH}/vocab.txt \
--max_seq_len 128 \
--ernie_config_path ${MODEL_PATH}/ernie_config.json
```
where baidu_input/dev.tsv is as follows:
```shell
label\ttext_a\n
0\t你吃了么\n
1\t我吃过了\n
0\t谢谢你啊\n
```
The model fails with:
```shell
Device count: 1
Total num examples: 3
WARNING:root:paddle.fluid.layers.py_reader() may be deprecated in the near future. Please use paddle.fluid.io.PyReader() instead.
Traceback (most recent call last):
File "ernie_encoder.py", line 182, in <module>
main(args)
File "ernie_encoder.py", line 130, in main
args, pyreader_name='reader', ernie_config=ernie_config)
File "ernie_encoder.py", line 77, in create_model
unpad_enc_out = fluid.layers.sequence_unpad(enc_out, length=seq_lens)
File "/home/mapingshuo/paddle_release_home/python-distribute/lib/python2.7/site-packages/paddle/fluid/layers/nn.py", line 4842, in sequence_unpad
outputs={'Out': out})
File "/home/mapingshuo/paddle_release_home/python-distribute/lib/python2.7/site-packages/paddle/fluid/layer_helper.py", line 43, in append_op
return self.main_program.current_block().append_op(*args, **kwargs)
File "/home/mapingshuo/paddle_release_home/python-distribute/lib/python2.7/site-packages/paddle/fluid/framework.py", line 2116, in append_op
attrs=kwargs.get("attrs", None))
File "/home/mapingshuo/paddle_release_home/python-distribute/lib/python2.7/site-packages/paddle/fluid/framework.py", line 1499, in __init__
self.desc.infer_shape(self.block.desc)
paddle.fluid.core_avx.EnforceNotMet:
--------------------------------------------
C++ Call Stacks (More useful to developers):
--------------------------------------------
0 std::string paddle::platform::GetTraceBackString<std::string const&>(std::string const&, char const*, int)
1 paddle::platform::EnforceNotMet::EnforceNotMet(std::string const&, char const*, int)
2 paddle::operators::SequenceUnpadOp::InferShape(paddle::framework::InferShapeContext*) const
3 paddle::framework::OpDesc::InferShape(paddle::framework::BlockDesc const&) const
------------------------------------------
Python Call Stacks (More useful to users):
------------------------------------------
File "/home/mapingshuo/paddle_release_home/python-distribute/lib/python2.7/site-packages/paddle/fluid/framework.py", line 2116, in append_op
attrs=kwargs.get("attrs", None))
File "/home/mapingshuo/paddle_release_home/python-distribute/lib/python2.7/site-packages/paddle/fluid/layer_helper.py", line 43, in append_op
return self.main_program.current_block().append_op(*args, **kwargs)
File "/home/mapingshuo/paddle_release_home/python-distribute/lib/python2.7/site-packages/paddle/fluid/layers/nn.py", line 4842, in sequence_unpad
outputs={'Out': out})
File "ernie_encoder.py", line 77, in create_model
unpad_enc_out = fluid.layers.sequence_unpad(enc_out, length=seq_lens)
File "ernie_encoder.py", line 130, in main
args, pyreader_name='reader', ernie_config=ernie_config)
File "ernie_encoder.py", line 182, in <module>
main(args)
----------------------
Error Message Summary:
----------------------
PaddleCheckError: Expected len_dims.size() == 1, but received len_dims.size():2 != 1:1.
The shape of Input(Length) should be [batch_size]. at [/home/mapingshuo/Paddle/paddle/fluid/operators/sequence_ops/sequence_unpad_op.cc:41]
[operator < sequence_unpad > error]
```
What is the cause of this? Thanks.
|
closed
|
2019-10-15T04:58:46Z
|
2019-11-25T02:54:09Z
|
https://github.com/PaddlePaddle/ERNIE/issues/343
|
[] |
mapingshuo
| 2
|
pydata/xarray
|
numpy
| 9,789
|
Support DataTree in apply_ufunc
|
Sub-issue of https://github.com/pydata/xarray/issues/9106
|
open
|
2024-11-16T21:41:20Z
|
2024-11-16T21:41:32Z
|
https://github.com/pydata/xarray/issues/9789
|
[
"contrib-help-wanted",
"topic-DataTree"
] |
shoyer
| 0
|
521xueweihan/HelloGitHub
|
python
| 2,876
|
[Open-source self-recommendation] 🏎 Nping: a real-time visual terminal ping command written in Rust
|
## Recommended project
- Project URL: https://github.com/hanshuaikang/Nping
- Category: Rust
- Project title: A real-time visual terminal ping command written in Rust
- Project description: Nping (as the name suggests, an "awesome ping") is a ping tool written in Rust using the ICMP protocol. It supports pinging multiple addresses concurrently and visualizes latency, jitter, packet loss, and average latency in real time, with slick live line charts.
- Highlights: concurrent pinging of multiple addresses, real-time chart visualization
Screenshots:


- Planned updates:
  - A fancier UI
  - More network monitoring metrics
|
closed
|
2024-12-30T12:14:51Z
|
2025-02-16T05:31:46Z
|
https://github.com/521xueweihan/HelloGitHub/issues/2876
|
[] |
hanshuaikang
| 0
|
mwaskom/seaborn
|
matplotlib
| 3,748
|
How to hide legend while using seaborn.objects?
|
I really like the flexibility of seaborn.objects. I would like to ask how to hide its legend.
Here is my code:
```python
import pandas as pd
import numpy as np
# Define parameters
num_cages = 20 # Number of cages
num_weeks = 10 # Number of weeks
samples_per_week = 5 # Number of samples per week
# Generate data
np.random.seed(42) # For reproducibility
# Create Week variable (1-10)
weeks = np.tile(np.arange(1, num_weeks + 1), num_cages * samples_per_week)
# Create Cage variable (1-20)
cages = np.repeat(np.arange(1, num_cages + 1), num_weeks * samples_per_week)
# Create Weight variable (randomly generated, assumed mean=50, std=5)
weights = np.random.normal(loc=50, scale=5, size=len(weeks))
# Create DataFrame
total_data = pd.DataFrame({
'Week': weeks,
'Cage': cages,
'Weight': weights
})
(
so.Plot(total_data, x='Week', y="Weight", color='Cage')
.facet('Cage', wrap=cols).label(col="Cage")
.layout(extent=[0.,0.,3.,3.])
.limit(y=(0, None))
.scale(color="Paired")
.add(so.Line(), so.Agg())
.add(so.Dot(), so.Agg('mean'))
.add(so.Band(), so.Est())
)
```

|
closed
|
2024-08-18T02:43:15Z
|
2024-09-15T19:03:50Z
|
https://github.com/mwaskom/seaborn/issues/3748
|
[] |
MarkChenXY163
| 1
|
graphql-python/graphene
|
graphql
| 894
|
Is there any way to name field on the ObjectTypes keyword?
|
I want to define Type as follows.
```
type Edge {
from: Int
to: Int
}
```
But Python gives me a syntax error because `from` is a keyword:
```
class Edge(graphene.ObjectType):
from = graphene.Int()
to = graphene.Int()
```
Can't I define this Type?
|
closed
|
2019-01-18T06:15:57Z
|
2019-01-21T02:20:49Z
|
https://github.com/graphql-python/graphene/issues/894
|
[] |
nkg168
| 2
|
rthalley/dnspython
|
asyncio
| 954
|
Timeout in asynchronous and synchronous DoQ query doesn't work
|
There is a timeout parameter in dns.asyncquery.quic and dns.query.quic; however, when it is set, it doesn't work as expected: when connecting to an unreachable server, both the asynchronous and synchronous quic functions always time out after about 1 minute.
To reproduce:
```
import asyncio
import dns.message
import dns.asyncquery
import dns.rdatatype
async def check_doq(ip, query_name, timeout):
query = dns.message.make_query(query_name, dns.rdatatype.A)
response = await dns.asyncquery.quic(query, ip, timeout=timeout)
print(response)
if __name__ == "__main__":
asyncio.run(check_doq("1.2.3.4", "dnspython.org", 6))
```
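Until the library-level timeout works, one generic workaround is wrapping the call in `asyncio.wait_for`, which enforces its own deadline regardless of what the transport does. A sketch with a stand-in coroutine instead of a real DoQ query:

```python
import asyncio

async def slow_query():
    # stand-in for dns.asyncquery.quic(...) against an unreachable server
    await asyncio.sleep(60)

async def main():
    try:
        # wait_for cancels the inner task once the deadline passes
        await asyncio.wait_for(slow_query(), timeout=0.05)
        return "answered"
    except asyncio.TimeoutError:
        return "timed out"

result = asyncio.run(main())
print(result)  # timed out
```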
System info:
- dnspython version [2.4]
- Python version [3.10.12]
- OS version [CentOS 7.8.2003]
|
closed
|
2023-07-11T03:38:47Z
|
2023-07-13T00:07:48Z
|
https://github.com/rthalley/dnspython/issues/954
|
[
"Bug",
"Fixed"
] |
fleurysun
| 2
|
microsoft/MMdnn
|
tensorflow
| 56
|
Hi @kitstar , When I convert tensorlflow to mxnet. The following error occurs.
|
Platform (like ubuntu 16.04/win10):
Python version:
Source framework with version (like Tensorflow 1.4.1 with GPU):
Destination framework with version (like CNTK 2.3 with GPU):
Pre-trained model path (webpath or webdisk path):
Running scripts:
|
closed
|
2018-01-18T12:31:34Z
|
2018-01-18T12:32:29Z
|
https://github.com/microsoft/MMdnn/issues/56
|
[] |
Albert-Ye
| 0
|
Miserlou/Zappa
|
flask
| 2,047
|
lambda_concurrency setting not adjusting provisioned concurrency
|
## Context
Using zappa 0.5.0 with `"lambda_concurrency": 2` to set https://docs.aws.amazon.com/lambda/latest/dg/configuration-concurrency.html#configuration-concurrency-provisioned
## Expected Behavior
was expecting **Provisioned concurrency configurations** in Lambda to be set to 2 for the version just uploaded
## Actual Behavior
Provisioned concurrency configurations is not set for the newly uploaded version and remains at 2 for my previous version
## Your Environment
* Zappa version used: 0.5.0
|
open
|
2020-02-28T12:29:59Z
|
2021-11-09T21:39:12Z
|
https://github.com/Miserlou/Zappa/issues/2047
|
[
"feature-request"
] |
alecl
| 9
|
deezer/spleeter
|
deep-learning
| 288
|
[Discussion]EnvironmentFileNotFound,help!
|
solved
|
closed
|
2020-03-10T08:21:25Z
|
2020-03-11T01:31:14Z
|
https://github.com/deezer/spleeter/issues/288
|
[
"question"
] |
freejacklee
| 0
|
ScrapeGraphAI/Scrapegraph-ai
|
machine-learning
| 949
|
why are my results so bad here?
|
```python
"""
Example of Search Graph
"""
import os
from dotenv import load_dotenv
from scrapegraphai.graphs import SearchGraph
from china_unis import universities
os.environ.clear()
load_dotenv()
# ************************************************
# Define the configuration for the graph
# ************************************************
openai_key = os.getenv("OPENAI_API_KEY")
graph_config = {
"llm": {
"api_key": openai_key,
"model": "openai/gpt-4o-2024-08-06",
},
"max_results": 2,
"verbose": True,
}
prompt = f"""
Get me the contact email addresses of the following universities:
{universities[:10]}
"""
# ************************************************
# Create the SearchGraph instance and run it
# ************************************************
search_graph = SearchGraph(
prompt=prompt, config=graph_config
)
result = search_graph.run()
print(result)
# Save results to both JSON and TXT formats for flexibility
import json
from pathlib import Path
from datetime import datetime
# Create output directory if it doesn't exist
output_dir = Path("output")
output_dir.mkdir(exist_ok=True)
# Generate timestamp for unique filenames
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
# Save as JSON
json_path = output_dir / f"university_contacts_{timestamp}.json"
with open(json_path, "w", encoding="utf-8") as f:
json.dump(result, f, indent=2, ensure_ascii=False)
# Save as TXT
txt_path = output_dir / f"university_contacts_{timestamp}.txt"
with open(txt_path, "w", encoding="utf-8") as f:
f.write(str(result))
print(f"\nResults saved to:")
print(f"JSON: {json_path}")
print(f"TXT: {txt_path}")
```
input:
```
universities = [
"Beijing Foreign Studies University",
"Beijing Jiaotong University",
"Beijing Language and Culture University",
"Beijing Radio and Television University",
"Beijing University of Chinese Medicine",
"Beijing University of Posts and Telecommunications",
"Central China Normal University",
"Chong Qing University",
"Donghua University",
"East China Normal University",
"Harbin Engineering University",
"Harbin Institute of Technology Shenzhen Graduate School",
"Henan University",
"Hubei University",
"Jiangxi Normal University",
"Jilin University",
"Nanjing University",
"Ningbo University",
"Northeast Normal University",
"Northwest University",
"Northwestern Polytechnical University",
"Ocean University of China",
"Peking University",
"Renmin University of China",
"Shaanxi Normal University",
"Shanghai International Studies University",
"Shanghai Normal University",
"Shanghai University",
"Shanghai University of Traditional Chinese Medicine",
"Sichuan Normal University",
"Sichuan University",
"Sun Yat-sen University",
"The Central Academy of Drama",
"Tianjin University",
"Tianjin University of Finance and Economics",
"Tsinghua University",
"Wuhan University",
"Yanbian University",
"Yangzhou University",
"Zhejiang University",
"Zhongnan University of Economics and Law",
"Zhuhai College of Jilin University",
"Shanghai University",
"Sichuan Normal University",
"Chong Qing University",
"Shanghai University of Finance & Economics",
"Beijing Institute of Technology",
"North China University of Technology",
"Beijing University of Chemical Technology",
"Shantou University",
"China Medical University",
"Chinese Culture University",
"Dharma Drum Buddhist College",
"Feng Chia University",
"Fo Guang University",
"Nanhua University",
"National Central University",
"National Cheng Kung University",
"National Chengchi University",
"National Taipei University",
"National Taipei University of Technology",
"National Taiwan Normal University",
"National Taiwan University",
"Shih Chien University",
"Tatung University",
"Tzu Chi University",
"Chung Yuan Christian University",
"Southern Taiwan University of Science and Technology",
"National Taiwan University",
"National University of Kaohsiung",
"Asia University",
"University of Taipei",
"Lingnan University",
"The Hong Kong Institute of Education"
]
```
gets me only:
```
{
"Beijing Foreign Studies University": [
"summerschool@bfsu.edu.cn",
"study@bfsu.edu.cn"
],
"Beijing Jiaotong University": "NA",
"Beijing Language and Culture University": "NA",
"Beijing Radio and Television University": "NA",
"Beijing University of Chinese Medicine": "NA",
"Beijing University of Posts and Telecommunications": "NA",
"Central China Normal University": "NA",
"Chong Qing University": "NA",
"Donghua University": "NA",
"East China Normal University": "NA",
"sources": [
"https://iss.bfsu.edu.cn/notice_intro.php?id=84",
"https://osao.bfsu.edu.cn/info/1042/2097.htm",
"https://greatyop.com/chinese-universities-agency-no-province/",
"https://freestudyinchina.com/silk-road-scholarship-beijing-jiaotong-university/"
]
}
```
|
open
|
2025-03-14T12:18:01Z
|
2025-03-21T08:21:04Z
|
https://github.com/ScrapeGraphAI/Scrapegraph-ai/issues/949
|
[] |
nyck33
| 2
|
jmcnamara/XlsxWriter
|
pandas
| 424
|
Feature request: Example creating and sending workbooks on-the-fly with Python 3 & Django
|
```python
import xlsxwriter
from io import BytesIO
from django.http import StreamingHttpResponse
from django.views.generic import View
def get_foo_table_data():
"""
Some table data
"""
return [
[1, 2, 3],
[4, 5, 6],
[7, 8, 9],
]
class MyView(View):
def get(self, request):
data = get_foo_table_data()
# create workbook with worksheet
output = BytesIO()
book = xlsxwriter.Workbook(output)
sheet = book.add_worksheet()
# fill worksheet with foo data
for row, columns in enumerate(data):
for column, cell_data in enumerate(columns):
sheet.write(row, column, cell_data)
book.close() # close book and save it in "output"
output.seek(0) # seek stream on begin to retrieve all data from it
# send "output" object to stream with mimetype and filename
response = StreamingHttpResponse(
output, content_type='application/vnd.openxmlformats-officedocument.spreadsheetml.sheet'
)
response['Content-Disposition'] = 'attachment; filename=foo.xlsx'
return response
```
|
closed
|
2017-03-28T15:54:01Z
|
2018-08-16T22:55:56Z
|
https://github.com/jmcnamara/XlsxWriter/issues/424
|
[
"documentation",
"short term"
] |
MrYoda
| 12
|
the0demiurge/ShadowSocksShare
|
flask
| 82
|
Are all the nodes dead?
|
closed
|
2019-09-21T06:02:18Z
|
2019-09-21T06:16:16Z
|
https://github.com/the0demiurge/ShadowSocksShare/issues/82
|
[] |
unl89
| 1
|
|
kynan/nbstripout
|
jupyter
| 122
|
`--dryrun` flag
|
See which files would be affected by `nbstripout`
|
closed
|
2020-03-29T19:23:33Z
|
2020-05-10T17:45:29Z
|
https://github.com/kynan/nbstripout/issues/122
|
[
"type:enhancement",
"resolution:fixed"
] |
VikashKothary
| 2
|
DistrictDataLabs/yellowbrick
|
scikit-learn
| 723
|
ParallelCoordinates have different index
|
**Describe the bug**
The graph showing the output of ParallelCoordinates is incorrect:
the code in the docs produces a different graph.
**To Reproduce**
```python
import os
from yellowbrick.download import download_all
## The path to the test data sets
FIXTURES = os.path.join(os.getcwd(), "data")
## Dataset loading mechanisms
datasets = {
"bikeshare": os.path.join(FIXTURES, "bikeshare", "bikeshare.csv"),
"concrete": os.path.join(FIXTURES, "concrete", "concrete.csv"),
"credit": os.path.join(FIXTURES, "credit", "credit.csv"),
"energy": os.path.join(FIXTURES, "energy", "energy.csv"),
"game": os.path.join(FIXTURES, "game", "game.csv"),
"mushroom": os.path.join(FIXTURES, "mushroom", "mushroom.csv"),
"occupancy": os.path.join(FIXTURES, "occupancy", "occupancy.csv"),
"spam": os.path.join(FIXTURES, "spam", "spam.csv"),
}
import pandas as pd
def load_data(name, download=True):
"""
Loads and wrangles the passed in dataset by name.
If download is specified, this method will download any missing files.
"""
# Get the path from the datasets
path = datasets[name]
# Check if the data exists, otherwise download or raise
if not os.path.exists(path):
if download:
download_all()
else:
raise ValueError((
"'{}' dataset has not been downloaded, "
"use the download.py module to fetch datasets"
).format(name))
# Return the data frame
return pd.read_csv(path)
# Load the classification data set
data = load_data("occupancy")
# Specify the features of interest and the classes of the target
features = [
"temperature", "relative humidity", "light", "C02", "humidity"
]
classes = ["unoccupied", "occupied"]
# Extract the instances and target
X = data[features]
y = data.occupancy
from yellowbrick.features import ParallelCoordinates
# Instantiate the visualizer
visualizer = ParallelCoordinates(
classes=classes, features=features, sample=0.5, shuffle=True
)
# Fit and transform the data to the visualizer
visualizer.fit_transform(X, y)
# Finalize the title and axes then display the visualization
visualizer.poof(outpath='rank1d_graph.pdf')
```
**Dataset**
The occupancy dataset, found in yellowbrick/yellowbrick/datasets/fixtures.
**Expected behavior**
As expected, in the above visualization, the domain of the light feature should be from [0, 1600], far larger than the range of temperature in [50, 96].
**Graph shown is attached below**
[rank1d_graph.pdf](https://github.com/DistrictDataLabs/yellowbrick/files/2827967/rank1d_graph.pdf)
[rank1d_graph.pdf](https://github.com/DistrictDataLabs/yellowbrick/files/2827969/rank1d_graph.pdf)
**Desktop (please complete the following information):**
- OS: Linux18.04
[rank1d_graph.pdf](https://github.com/DistrictDataLabs/yellowbrick/files/2827972/rank1d_graph.pdf)
- Python Version 3.6
- Yellowbrick Version 0.9
|
closed
|
2019-02-04T12:40:49Z
|
2019-02-05T02:55:38Z
|
https://github.com/DistrictDataLabs/yellowbrick/issues/723
|
[
"type: question"
] |
dnabanita7
| 5
|
deezer/spleeter
|
deep-learning
| 417
|
[Feature] Traditional instruments
|
## Description
Can you use traditional instruments like zither, erhu, monochord, flute, etc. to train the program? When I use Spleeter on songs with those instruments, it does not recognize them and produces a bad result. If you need songs with those instruments, just let me know by email and I will send them. By the way, thank you all for your great product, I love it.
## Additional information
My email: ductrungktdn@gmail.com
|
closed
|
2020-06-12T06:07:53Z
|
2020-06-12T12:24:20Z
|
https://github.com/deezer/spleeter/issues/417
|
[
"enhancement",
"feature"
] |
ductrungktdn
| 0
|
RobertCraigie/prisma-client-py
|
asyncio
| 430
|
Do not override already set env variables from `.env`
|
## Problem
Currently the `.env` variables take precedence over the system environment variables. This can cause issues because the Prisma CLI uses the system environment variables instead, which could lead to migrations being applied to a different database if you have two different connection strings set.
## Suggested solution
System environment variables should take priority.
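A minimal sketch of the suggested behavior in plain Python (a hypothetical helper, not Prisma's actual loader): `os.environ.setdefault` skips keys that are already defined, so system variables keep priority over `.env` values.

```python
import os

def load_dotenv_lines(lines):
    """Apply KEY=VALUE lines from a .env file, but let already-set
    system environment variables take priority (setdefault skips
    keys that already exist)."""
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        os.environ.setdefault(key.strip(), value.strip())

# system variable already set -> the .env value must not override it
os.environ["DATABASE_URL"] = "postgres://prod"
load_dotenv_lines(["DATABASE_URL=postgres://local", "NEW_VAR=1"])
print(os.environ["DATABASE_URL"])  # postgres://prod
print(os.environ["NEW_VAR"])       # 1
```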
## Additional context
Mentioned in #420.
|
closed
|
2022-06-18T13:05:51Z
|
2022-06-18T13:50:09Z
|
https://github.com/RobertCraigie/prisma-client-py/issues/430
|
[
"bug/2-confirmed",
"kind/bug",
"process/candidate",
"topic: client",
"level/beginner",
"priority/high"
] |
RobertCraigie
| 0
|
wger-project/wger
|
django
| 1,154
|
Not possible to reset password
|
```
AttributeError at /en/user/password/reset/check/abc/set-password
'NoneType' object has no attribute 'is_bound'
```
|
closed
|
2022-10-17T10:27:26Z
|
2022-11-30T13:22:21Z
|
https://github.com/wger-project/wger/issues/1154
|
[
"bug"
] |
rolandgeider
| 2
|
mherrmann/helium
|
web-scraping
| 7
|
arm64 support
|
Can we get geckodriver for arm64v8?
|
closed
|
2020-03-13T20:23:12Z
|
2020-10-17T13:23:23Z
|
https://github.com/mherrmann/helium/issues/7
|
[] |
kamikaze
| 4
|
dbfixtures/pytest-postgresql
|
pytest
| 332
|
Dependabot couldn't authenticate with https://pypi.python.org/simple/
|
Dependabot couldn't authenticate with https://pypi.python.org/simple/.
You can provide authentication details in your [Dependabot dashboard](https://app.dependabot.com/accounts/ClearcodeHQ) by clicking into the account menu (in the top right) and selecting 'Config variables'.
[View the update logs](https://app.dependabot.com/accounts/ClearcodeHQ/update-logs/48530955).
|
closed
|
2020-09-24T04:35:32Z
|
2020-09-24T07:29:31Z
|
https://github.com/dbfixtures/pytest-postgresql/issues/332
|
[] |
dependabot-preview[bot]
| 0
|
qwj/python-proxy
|
asyncio
| 17
|
Socks5 relay uses a wrong protocol?
|
Hi, I am using pproxy as a socks5 relay server.
according to `proto.py`, `Socks5.connect` first sends `b'\x05\x01\x02\x01' + ...` to the server, but other socks5 clients only send `b'\x05\x02\x00\x01'`
Another proof is running two `pproxy` process, one is:
```bash
pproxy -l socks5://:8080/ -v
```
the other is:
```bash
pproxy -l socks5://:8081/ -v -r socks5://localhost:8080 --test http://ifconfig.co/
```
It results in `Unsupported protocol b'1' from ::1` and `Exception: Unknown remote protocol`.
I am going to figure out the correct protocol then.
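For reference, the RFC 1928 client greeting is just a version byte, a method count, and one byte per offered auth method; here is a minimal sketch based on my reading of the spec (not pproxy's code):

```python
def socks5_greeting(methods):
    """Build the RFC 1928 client greeting: 0x05 (version), the
    number of auth methods offered, then one byte per method
    (0x00 = no auth, 0x01 = GSSAPI, 0x02 = username/password)."""
    return bytes([0x05, len(methods)]) + bytes(methods)

NO_AUTH, GSSAPI, USERPASS = 0x00, 0x01, 0x02

print(socks5_greeting([NO_AUTH, USERPASS]).hex())  # 05020002
```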
Thanks!
|
closed
|
2018-11-30T03:03:25Z
|
2018-11-30T03:57:08Z
|
https://github.com/qwj/python-proxy/issues/17
|
[] |
laohyx
| 5
|
hbldh/bleak
|
asyncio
| 1,189
|
Generic Access Profile not being discovered on MacOS
|
* bleak version: 0.11.0
* Python version: Python 3.9.13
* Operating System: MacOS Ventura 13.0.1 (22A400)
* BlueZ version (`bluetoothctl -v`) in case of Linux: N/A
### Description
I am trying to read the "Device Name" characteristic by connecting to the device and using `read_gatt_char`. The Generic Access Profile is present when I inspect the device with LightBlue.
I am receiving:
bleak.exc.BleakError: Characteristic 00002a00-0000-1000-8000-00805f9b34fb was not found!
And the service is not being discovered:
```
Connected: True
0000180a-0000-1000-8000-00805f9b34fb (Handle: 14): Device Information
00001818-0000-1000-8000-00805f9b34fb (Handle: 37): Cycling Power
00001826-0000-1000-8000-00805f9b34fb (Handle: 51): Fitness Machine
0000181c-0000-1000-8000-00805f9b34fb (Handle: 70): User Data
```
### What I Did
```
async def main(address):
async with BleakClient(address) as client:
print(f"Connected: {client.is_connected}")
for x in client.services:
print(x)
test = await client.read_gatt_char('00002a00-0000-1000-8000-00805f9b34fb')
```
### Logs
```
test = await client.read_gatt_char('00002a00-0000-1000-8000-00805f9b34fb')
File "/usr/local/lib/python3.9/site-packages/bleak/backends/corebluetooth/client.py", line 237, in read_gatt_char
raise BleakError("Characteristic {} was not found!".format(char_specifier))
bleak.exc.BleakError: Characteristic 00002a00-0000-1000-8000-00805f9b34fb was not found!
```
|
closed
|
2023-01-03T16:46:01Z
|
2023-07-19T01:51:56Z
|
https://github.com/hbldh/bleak/issues/1189
|
[
"duplicate",
"3rd party issue",
"Backend: Core Bluetooth"
] |
andrewgrabbs
| 1
|
nteract/papermill
|
jupyter
| 581
|
Papermill fails for notebooks without nb.metadata.kernelspec.language (e.g. from VSCode)
|
I think notebooks created through VSCode do not specify the language in the kernelspec.
So when I try to run Papermill, it fails with the stack trace:
```
Traceback (most recent call last):
File "/Users/kennysong/GDrive/Projects/test/env/lib/python3.7/site-packages/ipython_genutils/ipstruct.py", line 132, in __getattr__
result = self[key]
KeyError: 'language'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/kennysong/GDrive/Projects/test/env/bin/papermill", line 8, in <module>
sys.exit(papermill())
File "/Users/kennysong/GDrive/Projects/test/env/lib/python3.7/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/Users/kennysong/GDrive/Projects/test/env/lib/python3.7/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/Users/kennysong/GDrive/Projects/test/env/lib/python3.7/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/Users/kennysong/GDrive/Projects/test/env/lib/python3.7/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/Users/kennysong/GDrive/Projects/test/env/lib/python3.7/site-packages/click/decorators.py", line 21, in new_func
return f(get_current_context(), *args, **kwargs)
File "/Users/kennysong/GDrive/Projects/test/env/lib/python3.7/site-packages/papermill/cli.py", line 256, in papermill
execution_timeout=execution_timeout,
File "/Users/kennysong/GDrive/Projects/test/env/lib/python3.7/site-packages/papermill/execute.py", line 91, in execute_notebook
nb = parameterize_notebook(nb, parameters, report_mode)
File "/Users/kennysong/GDrive/Projects/test/env/lib/python3.7/site-packages/papermill/parameterize.py", line 77, in parameterize_notebook
language = nb.metadata.kernelspec.language
File "/Users/kennysong/GDrive/Projects/test/env/lib/python3.7/site-packages/ipython_genutils/ipstruct.py", line 134, in __getattr__
raise AttributeError(key)
AttributeError: language
```
You can replicate this by creating a blank `.ipynb` from VSCode, and then trying to run it with Papermill (with at least one parameter).
I think an easy workaround would be to allow the user to specify "language" as a command line flag.
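As a stopgap, the notebook file can be patched before handing it to Papermill; here is a sketch (a hypothetical helper, assuming the notebook's kernel is Python):

```python
import json
import os
import tempfile

def ensure_kernelspec_language(path, language="python"):
    """Fill in metadata.kernelspec.language when a tool such as
    VSCode left it out, so papermill's parameterize step finds it."""
    with open(path) as f:
        nb = json.load(f)
    spec = nb.setdefault("metadata", {}).setdefault("kernelspec", {})
    spec.setdefault("language", language)
    with open(path, "w") as f:
        json.dump(nb, f)

# demo on a minimal notebook that is missing the language key
nb = {"cells": [], "metadata": {"kernelspec": {"name": "python3"}},
      "nbformat": 4, "nbformat_minor": 5}
path = os.path.join(tempfile.mkdtemp(), "blank.ipynb")
with open(path, "w") as f:
    json.dump(nb, f)

ensure_kernelspec_language(path)
with open(path) as f:
    fixed = json.load(f)
print(fixed["metadata"]["kernelspec"]["language"])  # python
```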
|
closed
|
2021-02-27T07:25:59Z
|
2021-03-08T20:55:28Z
|
https://github.com/nteract/papermill/issues/581
|
[
"bug"
] |
kennysong
| 1
|
django-cms/django-cms
|
django
| 7,351
|
[DOC] No changes for version 3.10.1 in CHANGELOG.md
|
https://github.com/django-cms/django-cms/blob/develop/CHANGELOG.rst
|
closed
|
2022-06-28T04:21:25Z
|
2022-06-28T10:06:38Z
|
https://github.com/django-cms/django-cms/issues/7351
|
[
"component: documentation"
] |
DmytroLitvinov
| 3
|
deepspeedai/DeepSpeed
|
pytorch
| 6,525
|
[BUG] pydantic_core._pydantic_core.ValidationError: 1 validation error for DeepSpeedZeroConfig
|
We are using DeepSpeed, Transformers, and Accelerate to fine-tune a Qwen LLM, and hit the issue below.
[rank2]: pydantic_core._pydantic_core.ValidationError: 1 validation error for DeepSpeedZeroConfig
[rank2]: stage3_prefetch_bucket_size
[rank2]: Input should be a valid integer, got a number with a fractional part [type=int_from_float, input_value=15099494.4, input_type=float]
[rank2]: For further information visit https://errors.pydantic.dev/2.9/v/int_from_float
Relevant stack:
[rank2]: File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/trainer.py", line 1539, in train
[rank2]: return inner_training_loop(
[rank2]: File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/trainer.py", line 1690, in _inner_training_loop
[rank2]: model, self.optimizer, self.lr_scheduler = self.accelerator.prepare(
[rank2]: File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/accelerate/accelerator.py", line 1318, in prepare
[rank2]: result = self._prepare_deepspeed(*args)
[rank2]: File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/accelerate/accelerator.py", line 1815, in _prepare_deepspeed
[rank2]: engine, optimizer, _, lr_scheduler = deepspeed.initialize(**kwargs)
[rank2]: File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/deepspeed/__init__.py", line 179, in initialize
[rank2]: config_class = DeepSpeedConfig(config, mpu, mesh_device=mesh_device)
[rank2]: File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/deepspeed/runtime/config.py", line 797, in __init__
[rank2]: self._initialize_params(copy.copy(self._param_dict))
[rank2]: File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/deepspeed/runtime/config.py", line 817, in _initialize_params
[rank2]: self.zero_config = get_zero_config(param_dict)
[rank2]: File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/deepspeed/runtime/zero/config.py", line 71, in get_zero_config
[rank2]: return DeepSpeedZeroConfig(**zero_config_dict)
[rank2]: File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/deepspeed/runtime/config_utils.py", line 57, in __init__
[rank2]: super().__init__(**data)
[rank2]: File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/pydantic/main.py", line 211, in __init__
[rank2]: validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
[rank2]: pydantic_core._pydantic_core.ValidationError: 1 validation error for DeepSpeedZeroConfig
**This was working prior to the Pydantic migration PR** https://github.com/microsoft/DeepSpeed/pull/5167
In our case, the stage3_prefetch_bucket_size parameter in DeepSpeedZeroConfig is calculated as 0.9 * hidden_size * hidden_size as per
https://github.com/huggingface/transformers/blob/main/src/transformers/integrations/deepspeed.py#L244 .
hidden_size is 4096, so stage3_prefetch_bucket_size comes out to 15099494.4.
One possible solution is to convert the stage3_prefetch_bucket_size value to an int in the Transformers library (DeepSpeed integration file):
```
diff --git a/src/transformers/integrations/deepspeed.py b/src/transformers/integrations/deepspeed.py
index aae1204ac..622080d41 100644
--- a/src/transformers/integrations/deepspeed.py
+++ b/src/transformers/integrations/deepspeed.py
@@ -241,7 +241,7 @@ class HfTrainerDeepSpeedConfig(HfDeepSpeedConfig):
# automatically assign the optimal config values based on model config
self.fill_only(
"zero_optimization.stage3_prefetch_bucket_size",
- 0.9 * hidden_size * hidden_size,
+ int(0.9 * hidden_size * hidden_size),
)
self.fill_only(
"zero_optimization.stage3_param_persistence_threshold",
```
I am not sure if this is the right solution; requesting the DeepSpeed team's help here.
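The mismatch is easy to see in isolation: the auto-filled value is a float with a fractional part, which pydantic's strict int field rejects, while truncating it first passes.

```python
hidden_size = 4096  # hidden size from the report above

raw = 0.9 * hidden_size * hidden_size
print(raw)       # 15099494.4 -> rejected: "a number with a fractional part"
print(int(raw))  # 15099494   -> a valid integer for stage3_prefetch_bucket_size
```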
|
closed
|
2024-09-11T21:29:14Z
|
2024-11-04T03:01:54Z
|
https://github.com/deepspeedai/DeepSpeed/issues/6525
|
[
"bug",
"training"
] |
jagadish-amd
| 15
|
aws/aws-sdk-pandas
|
pandas
| 2,441
|
wr.athena.to_iceberg does same account table look up first even though data_source is specified
|
### Describe the bug
When calling:
`wr.athena.to_iceberg`
It first checks whether the table exists by calling (see line #3 below)
```
try:
# Create Iceberg table if it doesn't exist
if not catalog.does_table_exist(database=database, table=table, boto3_session=boto3_session):
_create_iceberg_table(
df=df,
database=database,
table=table,
path=table_location, # type: ignore[arg-type]
wg_config=wg_config,
partition_cols=partition_cols,
additional_table_properties=additional_table_properties,
index=index,
data_source=data_source,
workgroup=workgroup,
encryption=encryption,
kms_key=kms_key,
boto3_session=boto3_session,
dtype=dtype,
)
```
In the
`wr.catalog.does_table_exist()` call, there is no way to specify data_source or catalog_id (even though the underlying API supports catalog_id).
This caused my to_iceberg call to fail because I need to write cross-account. Can this be fixed to allow either catalog_id or data_source to be used?
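As a sketch of what the existence check could accept (a hypothetical helper; `CatalogId` is a real parameter of Glue's `GetTable` API, only the wrapper below is made up):

```python
def get_table_kwargs(database, table, catalog_id=None):
    """Build the arguments for glue_client.get_table(); CatalogId is
    only included when targeting another account's Glue catalog."""
    kwargs = {"DatabaseName": database, "Name": table}
    if catalog_id is not None:
        kwargs["CatalogId"] = catalog_id
    return kwargs

# same-account lookup omits CatalogId; cross-account lookup includes it
print(get_table_kwargs("my_db", "my_table"))
print(get_table_kwargs("my_db", "my_table", "123456789012"))
```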
### How to Reproduce
Just try the to_iceberg call and try to write a table but don't give yourself permission to your same account glue catalog. You will get the following access denied error:
An error occurred (AccessDeniedException) when calling the GetTable operation
### Expected behavior
_No response_
### Your project
_No response_
### Screenshots
_No response_
### OS
amazon linux
### Python version
3.9
### AWS SDK for pandas version
3.3.0
### Additional context
_No response_
|
closed
|
2023-08-25T15:57:01Z
|
2023-08-31T09:18:55Z
|
https://github.com/aws/aws-sdk-pandas/issues/2441
|
[
"bug"
] |
Xiangyu-C
| 5
|
miguelgrinberg/python-socketio
|
asyncio
| 1,256
|
AsyncServer.enter_room() is not awaitable
|
**Describe the bug**
When attempting to call `AsyncServer.enter_room()` with await, a TypeError exception is thrown:
`TypeError: object NoneType can't be used in 'await' expression`
**To Reproduce**
Steps to reproduce the behavior:
1. Implement a basic AsyncServer class
2. On `connect`, attempt to enter SID into a room with `await sio.enter_room(sid, 'testroom')`
3. TypeError will be thrown
Calling the function without an await statement enters the client into the room as expected.
**Expected behavior**
The docs, along with code, label this function as a coroutine, when in fact it does not return any awaitable, making it functionally not a coroutine.
**Additional context**
The `enter_room()` method calls the AsyncServer's manager instance:
https://github.com/miguelgrinberg/python-socketio/blob/d40b3a33fff5c6b896559fc534ccd611ab9cf1f4/src/socketio/async_manager.py#L72
Which calls the synchronous inherited function, `basic_enter_room()`. As such, `None` is returned instead of an awaitable.
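A stripped-down reproduction of why `await` fails here (a simplified stand-in, not the library's actual classes): the room bookkeeping is synchronous and returns `None`, so the call only works without `await`.

```python
import asyncio

def basic_enter_room(sid, room, rooms):
    """Synchronous room bookkeeping, mirroring the manager's
    basic_enter_room (simplified stand-in)."""
    rooms.setdefault(room, set()).add(sid)

async def connect_handler(sid, rooms):
    # The method is synchronous in this version, so call it without
    # await; awaiting its None return raises TypeError.
    basic_enter_room(sid, "testroom", rooms)

rooms = {}
asyncio.run(connect_handler("abc123", rooms))
print(rooms)  # {'testroom': {'abc123'}}
```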
|
closed
|
2023-10-13T21:58:03Z
|
2023-10-15T14:27:19Z
|
https://github.com/miguelgrinberg/python-socketio/issues/1256
|
[
"question"
] |
tt2468
| 3
|
falconry/falcon
|
api
| 2,082
|
Add default `testpaths` to `pytest`'s config
|
Some of our users/packagers have expressed a wish to be able to run `pytest` without any parameters. Currently `pytest` picks up all tests, including tutorials and whatnot.
Add `testpaths = tests` to our configs.
While at it, maybe we should consolidate these configs inside `pyproject.toml`, as opposed to `setup.cfg`?
The `pytest` docs state that
> Usage of `setup.cfg` is not recommended unless for very simple use cases. `.cfg` files use a different parser than `pytest.ini` and `tox.ini` which might cause hard to track down problems. When possible, it is recommended to use the latter files, or `pyproject.toml`, to hold your pytest configuration.
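For reference, the `pyproject.toml` form would look something like this (a sketch):

```toml
[tool.pytest.ini_options]
testpaths = ["tests"]
```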
|
closed
|
2022-06-27T17:52:34Z
|
2022-06-27T19:07:14Z
|
https://github.com/falconry/falcon/issues/2082
|
[
"maintenance",
"community"
] |
vytas7
| 0
|
horovod/horovod
|
deep-learning
| 3,825
|
CI for tf-head: package `tf-nightly-gpu` must be replaced by `tf-nightly`
|
Example: https://github.com/horovod/horovod/actions/runs/4007634282/jobs/6882833168
```
2023-01-25T18:03:55.7944804Z #39 1.522 =========================================================
2023-01-25T18:03:55.7945146Z #39 1.522 The "tf-nightly-gpu" package has been removed!
2023-01-25T18:03:55.7945386Z #39 1.522
2023-01-25T18:03:55.7945680Z #39 1.522 Please install "tf-nightly" instead.
2023-01-25T18:03:55.7945896Z #39 1.522
2023-01-25T18:03:55.7946153Z #39 1.522 Other than the name, the two packages have been identical
2023-01-25T18:03:55.7946550Z #39 1.522 since tf-nightly 2.1, or roughly since Sep 2019. For more
2023-01-25T18:03:55.7947051Z #39 1.522 information, see: pypi.org/project/tf-nightly-gpu
2023-01-25T18:03:55.7947364Z #39 1.522 =========================================================
```
|
closed
|
2023-01-25T18:08:12Z
|
2023-01-26T10:04:22Z
|
https://github.com/horovod/horovod/issues/3825
|
[
"bug"
] |
maxhgerlach
| 0
|
akfamily/akshare
|
data-science
| 5,291
|
Could an interface for real-time Hong Kong stock futures quotes be added?
|
The current futures_global_em interface does not include real-time Hong Kong futures quotes, and there is no Hong Kong futures interface in the documentation either.
|
closed
|
2024-10-30T19:19:52Z
|
2024-10-31T10:26:43Z
|
https://github.com/akfamily/akshare/issues/5291
|
[] |
yong900630
| 1
|
smarie/python-pytest-cases
|
pytest
| 330
|
@fixture incompatible with pytest 8.0.0
|
pytest 8.0.0 just released earlier today, and it looks like there's an incompatibility with `pytest_cases.fixture`.
Minimal example:
```python
from pytest_cases import fixture
@fixture
def a():
return "a"
def test_a(a):
assert a == "a"
```
pytest 7.4.4, pytest-cases 3.8.2, Python 3.11.7 — Passes ✅
pytest 8.0.0, pytest-cases 3.8.2, Python 3.11.7 — Fails ❌
Traceback:
```
.venv/lib/python3.11/site-packages/_pytest/nodes.py:152: in _create
return super().__call__(*k, **kw)
.venv/lib/python3.11/site-packages/_pytest/python.py:1801: in __init__
fixtureinfo = fm.getfixtureinfo(self, self.obj, self.cls)
.venv/lib/python3.11/site-packages/_pytest/fixtures.py:1490: in getfixtureinfo
names_closure, arg2fixturedefs = self.getfixtureclosure(
E TypeError: getfixtureclosure() got an unexpected keyword argument 'initialnames'
During handling of the above exception, another exception occurred:
.venv/lib/python3.11/site-packages/pluggy/_hooks.py:501: in __call__
return self._hookexec(self.name, self._hookimpls.copy(), kwargs, firstresult)
.venv/lib/python3.11/site-packages/pluggy/_manager.py:119: in _hookexec
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
.venv/lib/python3.11/site-packages/_pytest/python.py:277: in pytest_pycollect_makeitem
return list(collector._genfunctions(name, obj))
.venv/lib/python3.11/site-packages/_pytest/python.py:486: in _genfunctions
definition: FunctionDefinition = FunctionDefinition.from_parent(
.venv/lib/python3.11/site-packages/_pytest/python.py:1809: in from_parent
return super().from_parent(parent=parent, **kw)
.venv/lib/python3.11/site-packages/_pytest/nodes.py:275: in from_parent
return cls._create(parent=parent, **kw)
.venv/lib/python3.11/site-packages/_pytest/nodes.py:167: in _create
return super().__call__(*k, **known_kw)
.venv/lib/python3.11/site-packages/_pytest/python.py:1801: in __init__
fixtureinfo = fm.getfixtureinfo(self, self.obj, self.cls)
.venv/lib/python3.11/site-packages/_pytest/fixtures.py:1490: in getfixtureinfo
names_closure, arg2fixturedefs = self.getfixtureclosure(
E TypeError: getfixtureclosure() got an unexpected keyword argument 'initialnames'
```
|
closed
|
2024-01-28T02:56:57Z
|
2024-03-08T21:14:37Z
|
https://github.com/smarie/python-pytest-cases/issues/330
|
[] |
jayqi
| 3
|
healthchecks/healthchecks
|
django
| 410
|
[UI / UX] Show Request Body instead of "HTTP POST from <ip address>" message in Log display of checks dashboard.
|
Hi, Thanks for the great project btw.
I'd like to suggest change of UI / UX in Log display of checks dashboard.
As-is: the log displays the preview message "HTTP POST from <ip address>".
To-be: the log displays a preview of the request body, e.g. "Temperature is now 35c. It looks good.".
In my humble opinion, "HTTP POST from <ip address>" is not very useful.
That message does not help the cron job administrator, because what he wants to see is in the request body.
So what do you think about replacing the preview message with the request body?
Many thanks to healthchecks team.
Kind regards.
|
closed
|
2020-08-06T09:43:10Z
|
2021-08-27T09:45:49Z
|
https://github.com/healthchecks/healthchecks/issues/410
|
[] |
aeharvlee
| 4
|
pbugnion/gmaps
|
jupyter
| 325
|
Use of Different Marker Symbol on maps
|
Hello All,
How can I set a different marker symbol for different data,
like "P" for parking or some sign for restaurants, using gmaps?
Thanks In advance
|
open
|
2019-10-20T21:38:37Z
|
2019-10-20T21:38:37Z
|
https://github.com/pbugnion/gmaps/issues/325
|
[] |
Prapi123
| 0
|
python-restx/flask-restx
|
flask
| 525
|
Using marshal with required fields for a response
|
Hey, I'm just looking for some clarity if I'm understanding marshal wrong.
I have created a model with some fields marked as required.
I have a controller with a GET method annotated with @ns.marshal_with(model).
This method gets some data from another data source; if the data is missing one of the fields marked as required in the model, the marshalled response still returns 200 with said property set to null.
This was my first head-scratcher: I would have expected a required field that was not mapped to throw an error. But then I thought that, since I'm doing the marshalling with an annotation, it is just making sure the returned object has the specified property, not the value. Here is question 1: am I right here? Is this what is happening?
I kept on testing with the assumption above, so I imported 'marshal' and tried to marshal the object before the return:
my_data = marshal(data_source_object, model)
Again, my understanding was that this time a validation would happen when marshalling and it would raise an exception due to the missing field, but again it just added the property with a null value.
Could somebody clear this up for me? Am I doing this wrong, and will marshal not throw errors on missing required fields? And if so, what is a good way to validate this kind of scenario?
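From what I can tell, `required` in a model only affects input validation and the Swagger docs, while `marshal` just projects the model's keys onto the output, filling missing ones with `None`. A tiny pure-Python stand-in of that projection behavior (my approximation, not flask-restx's actual code):

```python
def marshal_like(data, model):
    """Tiny stand-in for flask-restx's marshal(): it projects the
    model's keys onto the data dict and fills missing keys with
    None; 'required' plays no role in output marshalling."""
    return {key: data.get(key) for key in model}

model = {"name": "required", "age": "required"}
print(marshal_like({"name": "Ada"}, model))  # {'name': 'Ada', 'age': None}
```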
Thanks for any help in advance
|
closed
|
2023-03-02T16:36:52Z
|
2023-03-02T20:07:57Z
|
https://github.com/python-restx/flask-restx/issues/525
|
[
"question"
] |
PabloCR
| 2
|
proplot-dev/proplot
|
matplotlib
| 26
|
Matplotlib subplots not working with seaborn distplot
|
#### Code sample, a copy-pastable example if possible
```python
import pandas as pd
import xarray as xr
import numpy as np
def drop_nans_and_flatten(dataArray: xr.DataArray) -> np.ndarray:
"""flatten the array and drop nans from that array. Useful for plotting histograms.
Arguments:
---------
: dataArray (xr.DataArray)
the DataArray of your value you want to flatten
"""
# drop NaNs and flatten
return dataArray.values[~np.isnan(dataArray.values)]
# create dimensions of xarray object
times = pd.date_range(start='1981-01-31', end='2019-04-30', freq='M')
lat = np.linspace(0, 1, 224)
lon = np.linspace(0, 1, 176)
rand_arr = np.random.randn(len(times), len(lat), len(lon))
# create xr.Dataset
coords = {'time': times, 'lat':lat, 'lon':lon}
dims = ['time', 'lat', 'lon']
ds = xr.Dataset({'precip': (dims, rand_arr)}, coords=coords)
ds['month'], ds['year'] = ds['time.month'], ds['time.year']
```
Making plot using `proplot`
```python
import proplot as plot
import calendar
f, axs = plot.subplots(nrows=4, ncols=3, axwidth=1.5, figsize=(8,12), share=2) # share=3, span=1,
axs.format(
xlabel='Precip', ylabel='Density', suptitle='Distribution',
)
month_abbrs = list(calendar.month_abbr)
mean_ds = ds.groupby('time.month').mean(dim='time')
flattened = []
for mth in np.arange(1, 13):
ax = axs[mth - 1]
ax.set_title(month_abbrs[mth])
print(f"Plotting {month_abbrs[mth]}")
flat = drop_nans_and_flatten(mean_ds.sel(month=mth).precip)
flattened.append(flat)
sns.distplot(flat, ax=ax, **{'kde': False})
```
#### Actual result vs. expected result
The proplot returns a plot like follows:
<img width="576" alt="download-25" src="https://user-images.githubusercontent.com/21049064/61491130-72f78480-a9a6-11e9-9781-65aa8b8da98a.png">
It looks empty plot. Also the axes are only sharing the x-axis for each column but I want it to be shared across all subplots.
The `matplotlib` version does what I expect.
```python
fig, axs = plt.subplots(4, 3, figsize=(8, 12), sharex=True, sharey=True)
month_abbrs = [m for m in calendar.month_abbr if m != '']
for mth in range(0, 12):
ax_ix = np.unravel_index(mth, (4, 3))
ax = axs[ax_ix]
mth_str = month_abbrs[mth]
sns.distplot(flattened[mth], ax=ax)
ax.set_title(mth_str)
fig.suptitle('Distribution of Rainfall each Month');
```
<img width="576" alt="download-26" src="https://user-images.githubusercontent.com/21049064/61491139-78ed6580-a9a6-11e9-9f21-c3292aeac8ca.png">
|
closed
|
2019-07-18T20:56:47Z
|
2020-04-09T17:24:04Z
|
https://github.com/proplot-dev/proplot/issues/26
|
[
"bug"
] |
tommylees112
| 5
|
psf/black
|
python
| 3,887
|
E704 and concise formatting for dummy implementations
|
This preview style change https://github.com/psf/black/pull/3796 violates E704 as implemented by flake8 / pycodestyle.
Note that the change itself is PEP 8 compliant. E704 is disabled by default in pycodestyle for this reason. ruff intentionally does not implement E704, instead explicitly deferring to Black on this question.
At a minimum we should update https://black.readthedocs.io/en/stable/the_black_code_style/current_style.html#flake8 , but figured I'd open an issue to discuss.
|
closed
|
2023-09-13T00:00:41Z
|
2024-01-30T18:23:18Z
|
https://github.com/psf/black/issues/3887
|
[
"T: bug",
"F: empty lines",
"C: preview style"
] |
hauntsaninja
| 5
|
scikit-hep/awkward
|
numpy
| 2,551
|
Form and np.dtype inconsistencies
|
### Version of Awkward Array
2.2.4
### Description and code to reproduce
Hi @agoose77 - here are some very minor issues I run into when conversion an `np.dtype` to a string required by a Form. Perhaps, we should to handle this in a more consistent manner... Please, have a look. Thanks.
- [ ] `EmptyArray` can have `parameters`, but:
```python
Issue: EmptyForm cannot contain parameters.
```
- [ ] Should 'int32' and 'i32' be the same?
```python
AssertionError: assert ListOffsetForm('int32', NumpyForm('float64')) == ListOffsetForm('i32', NumpyForm('float64'))
```
- [ ] `ak.index._dtype_to_form` dict has no `bool`:
```python
@property
def form(self):
> return ByteMaskedForm(ak.index._dtype_to_form[self._mask.dtype], self.content.form, self._valid_when, parameters=self._parameters,)
E KeyError: dtype('bool')
```
using `ak.types.numpytype.dtype_to_primitive(self._mask.dtype)`:
```python
> assert builder.form == layout.form
E AssertionError: assert ByteMaskedForm('bool', NumpyForm('float64'), True) == ByteMaskedForm('i8', NumpyForm('float64'), True)
```
|
closed
|
2023-06-29T14:32:19Z
|
2023-06-29T15:22:32Z
|
https://github.com/scikit-hep/awkward/issues/2551
|
[
"bug (unverified)"
] |
ianna
| 3
|
ultralytics/yolov5
|
machine-learning
| 12,645
|
Negative-width bounding boxes when running on M1 (mps) HW
|
### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
I am running a trained YOLOv5x6 model using val.py with the --save_json option. I have a few images where the resulting .json file includes one or more boxes with negative width values (not negative x or y values, which seem normal and are discussed in other issues, but negative _width_ values), but I have only observed this behavior when running on M1 hardware (with the "mps" device). All the boxes I've observed that have this property are low-ish confidence, but neither the confidence nor the absolute width/height are so low that I would call this a "junk bounding box".
I get identical, non-negative boxes processing the same images with the same weights on CUDA or CPU devices. Just in case the default precision is different on M1, I tried this with and without half-precision inference, but could not replicate the negative boxes on CUDA/CPU devices.
So, this is a two-part question:
1. Are negative-width bounding boxes expected, and/or is there an interpretation of these boxes? Should I expect to take the absolute value of the width and add it to the x value as if it were positive to get the right edge of the box, or should still add the (negative) width to the x value to get the other edge of the box?
2. In general, are there common patterns of differences between M1 and CUDA/CPU devices? Are results expected to be identical to reasonable precision?
If the answers are "no, that should never happen" and "no, they should all be identical", I will try to provide a reproducible example. I don't currently have permission to share the offending images, so I'm trying to see whether there's a logical explanation before going down that path.
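If interpretation (1) turns out to apply, a canonicalization like the following would recover a positive-extent box; this is purely my assumption about the intent, not YOLOv5's own handling:

```python
def canonicalize_box(x, y, w, h):
    """Normalize an [x, y, w, h] box so width/height are positive,
    treating a negative extent as running left/up from (x, y).
    Assumption about intent, not YOLOv5's documented behavior."""
    if w < 0:
        x, w = x + w, -w
    if h < 0:
        y, h = y + h, -h
    return x, y, w, h

print(canonicalize_box(100.0, 50.0, -20.0, 30.0))  # (80.0, 50.0, 20.0, 30.0)
```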
Thanks!
-Dan
### Additional
_No response_
|
closed
|
2024-01-18T04:15:23Z
|
2024-12-18T02:36:46Z
|
https://github.com/ultralytics/yolov5/issues/12645
|
[
"question",
"Stale"
] |
agentmorris
| 16
|
tflearn/tflearn
|
data-science
| 272
|
2D convolution on CIFAR10 example shows poor results
|
I posted this in stackoverflow, but it may be more appropriate here.
https://stackoverflow.com/questions/38885226/2d-convolution-in-tflearn-cnn-trashes-learning-for-mnist-and-cifar-10-benchmarks
Essentially I just want to make sure there isn't a bug in the 2D convolution or maxpooling functions. Following the tutorial script here:
https://github.com/tflearn/tflearn/blob/master/examples/images/convnet_cifar10.py
I get completely random classifications (10% validation accuracy), all the way through 50 epochs. Commenting out the 2d convolution and max pooling steps though, it goes up to about 50% validation accuracy. To me this sounds intuitively wrong (as if the convolution function is somehow accidentally shuffling the input data), but perhaps this example is simply a poor classifier. Is this reproducible for others, or is there something wrong with my tflearn or tensorflow builds?
|
closed
|
2016-08-11T16:48:26Z
|
2016-08-16T04:20:50Z
|
https://github.com/tflearn/tflearn/issues/272
|
[] |
scottyler89
| 11
|
jupyter/nbgrader
|
jupyter
| 1,736
|
Failing tests
|
It seems to me that there are two issues causing tests to fail.
1- JupyterLab [announcements](https://jupyterlab.readthedocs.io/en/stable/user/announcements.html) pop-up is interfering with `test_formgrader.spec.ts`, and
2- `node-canvas` is requested for a version that doesn't have a build for the `node` used in the testing environment, and it can't be built. (Related?: https://github.com/jupyterlab/jupyterlab/pull/13722)
I am writing them down in case someone else encounters them too. Maybe I can create a pr as well soon.
|
closed
|
2023-02-24T10:56:44Z
|
2023-03-03T14:36:37Z
|
https://github.com/jupyter/nbgrader/issues/1736
|
[] |
tuncbkose
| 2
|
CorentinJ/Real-Time-Voice-Cloning
|
python
| 860
|
Does quality depend on batch size? I only have 6 GB of memory, but I want results like in the paper.
|
closed
|
2021-10-04T18:16:02Z
|
2021-10-11T16:24:18Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/860
|
[] |
fancat-programer
| 8
|
|
horovod/horovod
|
machine-learning
| 3,938
|
horovod slower than pytorch DDP
|
**Environment:**
1. Framework: (TensorFlow, Keras, PyTorch, MXNet) : pytorch
2. Framework version: 2.0
3. Horovod version: 0.28
4. MPI version:
5. CUDA version:
6. NCCL version:
7. Python version:
8. Spark / PySpark version:
9. Ray version:
10. OS and version:
11. GCC version:
12. CMake version:
**Checklist:**
1. Did you search issues to find if somebody asked this question before?
2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)?
3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)?
4. Did you check if you question is answered in the [troubleshooting guide](https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)?
**Bug report:**
I have two questions
1) I am running [yolox_ddp](https://github.com/Megvii-BaseDetection/YOLOX) on 8 A100 GPUs. I modified the code to use Horovod and also modified the data loader accordingly, as per the [example](https://horovod.readthedocs.io/en/stable/pytorch.html). During training, the Horovod run is about **5x slower** than the PyTorch DDP run. Any idea why it is this much slower?
2) I am trying to run the same code on Habana Gaudi2 using HCCL collective ops, but installing Horovod using pip doesn't offer an option to build with HCCL. How can I build Horovod with HCCL?
Thanks
|
closed
|
2023-06-02T00:01:26Z
|
2023-12-15T04:10:51Z
|
https://github.com/horovod/horovod/issues/3938
|
[
"question",
"wontfix"
] |
purvang3
| 4
|
jmcnamara/XlsxWriter
|
pandas
| 802
|
Installing the package doesn't include the test helper functions
|
Hi,
I am using XlsxWriter to do SOMETHING but it appears to do SOMETHING ELSE.
I am using Python version 3.8.0 and XlsxWriter 1.4.0
It seems to me that installing the package with Poetry (or just pip) does not include the `test` package, which contains some helper functions (e.g. to compare 2 XLSX files).
|
closed
|
2021-05-04T15:13:50Z
|
2021-05-04T16:25:07Z
|
https://github.com/jmcnamara/XlsxWriter/issues/802
|
[] |
StampixSMO
| 1
|
tqdm/tqdm
|
jupyter
| 1,537
|
AttributeError: 'tqdm' object has no attribute 'last_print_t' on Python 3.12
|
- [x] I have marked all applicable categories:
+ [x] exception-raising bug
+ [ ] visual output bug
- [x] I have visited the [source website], and in particular
read the [known issues]
- [x] I have searched through the [issue tracker] for duplicates
- [x] I have mentioned version numbers, operating system and
environment, where applicable:
```python
import tqdm, sys
print(tqdm.__version__, sys.version, sys.platform)
```
[source website]: https://github.com/tqdm/tqdm/
[known issues]: https://github.com/tqdm/tqdm/#faq-and-known-issues
[issue tracker]: https://github.com/tqdm/tqdm/issues?q=
### Environment
x86_64-linux
Python 3.12.1
pytest 7.4.3
pytest-asyncio 0.23.2
pluggy 1.3.0
tqdm 4.66.1
### Description
I'm aware Python 3.12 is not supported according to the trove classifiers, but I hope it will be soon. We're seeing the following exception many times during testing on Python 3.12.1, the tests run fine on 3.11.7.
```pytb
Exception ignored in: <function tqdm.__del__ at 0x7ffff64a2c00>
Traceback (most recent call last):
File "/build/tqdm-4.66.1/tqdm/std.py", line 1149, in __del__
self.close()
File "/build/tqdm-4.66.1/tqdm/std.py", line 1278, in close
if self.last_print_t < self.start_t + self.delay:
^^^^^^^^^^^^^^^^^
AttributeError: 'tqdm' object has no attribute 'last_print_t'
```
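The traceback suggests `__del__` is running on a partially initialised `tqdm` instance, i.e. `__init__` raised or exited early before `last_print_t` was assigned. A minimal plain-Python reproduction of that failure mode, with the defensive `hasattr` guard a `close()` could use (hypothetical sketch, not the upstream fix):

```python
# Reproduces the failure mode without tqdm: close() running on an object
# whose __init__ never finished, so an expected attribute is missing.

class Widget:
    def __init__(self, fail=False):
        if fail:
            raise ValueError("init aborted before attributes were set")
        self.last_print_t = 0.0

    def close(self):
        # Defensive guard mirroring what a close() could check before
        # touching self.last_print_t on a partially initialised instance.
        if not hasattr(self, "last_print_t"):
            return "skipped"
        return "closed"

print(Widget().close())                # closed
print(object.__new__(Widget).close())  # skipped: __init__ never ran
```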
|
open
|
2023-12-13T14:05:07Z
|
2024-11-07T12:29:38Z
|
https://github.com/tqdm/tqdm/issues/1537
|
[] |
mweinelt
| 2
|
keras-team/keras
|
data-science
| 20,370
|
Issue on file_editor
|
If I call the compile method before saving the model, I get the error below.
- keras/src/saving/file_editor_test.py
````
def get_source_model():
inputs = keras.Input((2,))
x = keras.layers.Dense(3, name="mydense")(inputs)
outputs = keras.layers.Dense(3, name="output_layer")(x)
model = keras.Model(inputs, outputs)
model.compile(loss='mse', optimizer='adam')
return model
````
- error
````
self = <h5py._hl.selections2.ScalarReadSelection object at 0x70c6dda64640>
fspace = <h5py.h5s.SpaceID object at 0x70c6dd94c1d0>
args = (slice(None, None, None),)
def __init__(self, fspace, args):
if args == ():
self.mshape = None
elif args == (Ellipsis,):
self.mshape = ()
else:
> raise ValueError("Illegal slicing argument for scalar dataspace")
ValueError: Illegal slicing argument for scalar dataspace
````
Since the compile-related value is stored as a scalar (shapeless) dataset in the H5 store, reading it with "[:]" raises this error.
````
def _extract_weights_from_store(self, data, metadata=None, inner_path=""):
....
else:
result[key] = value[:]
````
I think we could keep only the "layers" entry in `self.weights_dict` and remove the other values.
However, confirmation is needed.
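One possible shape-aware fix in `_extract_weights_from_store` is to read scalar (shapeless) datasets with `[()]` and keep `[:]` for shaped ones. The sketch below uses a hypothetical `FakeDataset` standing in for `h5py.Dataset`, so the logic is runnable without an HDF5 file:

```python
# Sketch of a shape-aware read: scalar datasets raise on [:] but can be
# read with [()]. FakeDataset is a hypothetical stand-in mimicking the
# relevant h5py.Dataset behaviour.

class FakeDataset:
    def __init__(self, data, shape):
        self._data, self.shape = data, shape

    def __getitem__(self, key):
        # Mimic h5py: scalar dataspaces reject slicing arguments.
        if self.shape == () and key != ():
            raise ValueError("Illegal slicing argument for scalar dataspace")
        return self._data

def read_value(value):
    """Read an (h5py-like) dataset whether or not it is scalar."""
    return value[()] if value.shape == () else value[:]

print(read_value(FakeDataset(7.0, ())))       # 7.0, read via [()]
print(read_value(FakeDataset([1, 2], (2,))))  # [1, 2], read via [:]
```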
|
closed
|
2024-10-17T12:18:43Z
|
2024-10-19T11:02:49Z
|
https://github.com/keras-team/keras/issues/20370
|
[] |
shashaka
| 2
|
Kinto/kinto
|
api
| 2,588
|
Dev server returns 409 on POST
|
## Steps to reproduce
```
➜ http POST $SERVER/buckets/main/collections -a mat:123
HTTP/1.1 201 Created
Access-Control-Expose-Headers: Content-Type, Content-Length, Backoff, Retry-After, Alert
Connection: keep-alive
Content-Length: 96
Content-Security-Policy: default-src 'none'; frame-ancestors 'none'; base-uri 'none';
Content-Type: application/json
Date: Tue, 18 Aug 2020 09:53:05 GMT
ETag: "1597744385553"
Last-Modified: Tue, 18 Aug 2020 09:53:05 GMT
Server: nginx
X-Content-Type-Options: nosniff
{
"data": {
"id": "KuX3wy-C",
"last_modified": 1597744385553
},
"permissions": {
"write": [
"account:mat"
]
}
}
➜ http POST $SERVER/buckets/main/collections -a mat:123
HTTP/1.1 201 Created
Access-Control-Expose-Headers: Content-Type, Content-Length, Backoff, Retry-After, Alert
Connection: keep-alive
Content-Length: 96
Content-Security-Policy: default-src 'none'; frame-ancestors 'none'; base-uri 'none';
Content-Type: application/json
Date: Tue, 18 Aug 2020 09:53:08 GMT
ETag: "1597744388379"
Last-Modified: Tue, 18 Aug 2020 09:53:08 GMT
Server: nginx
X-Content-Type-Options: nosniff
{
"data": {
"id": "k4MaRMTU",
"last_modified": 1597744388379
},
"permissions": {
"write": [
"account:mat"
]
}
}
➜ http POST $SERVER/buckets/main/collections/k4MaRMTU/records -a mat:123
HTTP/1.1 201 Created
Access-Control-Expose-Headers: Content-Type, Content-Length, Backoff, Retry-After, Alert
Connection: keep-alive
Content-Length: 124
Content-Security-Policy: default-src 'none'; frame-ancestors 'none'; base-uri 'none';
Content-Type: application/json
Date: Tue, 18 Aug 2020 09:53:22 GMT
ETag: "1597744402010"
Last-Modified: Tue, 18 Aug 2020 09:53:22 GMT
Server: nginx
X-Content-Type-Options: nosniff
{
"data": {
"id": "4a11b2f9-8c12-4348-ab6c-41da4da4ef5c",
"last_modified": 1597744402010
},
"permissions": {
"write": [
"account:mat"
]
}
}
➜ http POST $SERVER/buckets/main/collections/k4MaRMTU/records -a mat:123
HTTP/1.1 409 Conflict
Access-Control-Expose-Headers: Content-Type, Content-Length, Backoff, Retry-After, Alert
Connection: keep-alive
Content-Length: 100
Content-Security-Policy: default-src 'none'; frame-ancestors 'none'; base-uri 'none';
Content-Type: application/json
Date: Tue, 18 Aug 2020 09:53:24 GMT
Retry-After: 3
Server: nginx
X-Content-Type-Options: nosniff
{
"code": 409,
"errno": 122,
"error": "Conflict",
"message": "Integrity constraint violated, please retry."
}
```
See also #2295
|
closed
|
2020-08-18T09:54:33Z
|
2020-08-18T10:17:09Z
|
https://github.com/Kinto/kinto/issues/2588
|
[] |
leplatrem
| 1
|
fastapi-users/fastapi-users
|
fastapi
| 1,391
|
on_after_failed_login and on_before_login (Feature Request)
|
Thanks a lot for the wonderful job done thus far. Your approach is really superb. I am pretty new to FastAPI but much in love with this library.
I would love to request if these 2 events could be added to the authentication (/login).
**on_after_failed_login():** in case of multiple failed attempts, I would love to keep track of these failed attempts and possibly delay/deny future attempts
**on_before_login() -> bool:** based on the number of previous failed login attempts, I may want to decide whether to allow or deny login at this moment. I am thinking this would only be called after every other condition/parameter (the password, is_active, is_verified) has been checked and the jwt/access_token is ready to be generated, so that raising an exception/returning false would deny the user login access
Also, in the situation where the admin created a new user account and forwarded the credentials to the user's email, I may want to force the user to change the default password: abort the login process just before the final stage and redirect the user to the change-password screen before they can proceed
This would be cleaner for further customization
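As an illustration of the policy these two hooks would enable, here is a minimal sketch that counts failed attempts per user and denies login past a threshold. The hook names follow the proposal and are hypothetical, not part of the fastapi-users API:

```python
# Hypothetical lockout policy built on the two proposed hooks. The hook
# names (on_after_failed_login, on_before_login) come from the feature
# request and do not exist in fastapi-users.

from collections import defaultdict

MAX_FAILED = 3

class LoginGuard:
    def __init__(self):
        self.failed = defaultdict(int)

    def on_after_failed_login(self, user_id):
        # Track failed attempts; a real implementation might also persist
        # timestamps to implement delays rather than hard denial.
        self.failed[user_id] += 1

    def on_before_login(self, user_id) -> bool:
        # Called after password/is_active/is_verified checks, just before
        # the access token would be issued.
        return self.failed[user_id] < MAX_FAILED

guard = LoginGuard()
for _ in range(3):
    guard.on_after_failed_login("alice")
print(guard.on_before_login("alice"))  # False: locked out
print(guard.on_before_login("bob"))    # True: no failed attempts
```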
|
closed
|
2024-05-23T14:51:07Z
|
2024-07-14T13:23:48Z
|
https://github.com/fastapi-users/fastapi-users/issues/1391
|
[] |
Pauldic
| 1
|
yzhao062/pyod
|
data-science
| 489
|
Project dependencies may have API risk issues
|
Hi, in **pyod**, inappropriate dependency version constraints can cause risks.
Below are the dependencies and version constraints that the project is using
```
joblib
matplotlib
numpy>=1.19
numba>=0.51
scipy>=1.5.1
six
statsmodels
```
The version constraint **==** introduces the risk of dependency conflicts because the dependency scope is too strict.
The version constraints **No Upper Bound** and **\*** introduce the risk of missing-API errors because the latest versions of the dependencies may remove some APIs.
After further analysis, in this project,
The version constraint of dependency **six** can be changed to *>=1.4.0,<=1.16.0*.
The above suggestion reduces dependency conflicts as much as possible
while allowing the latest compatible versions without raising errors in the project.
The invocation of the current project includes all the following methods.
In version **six-1.3**, the API **six.add_metaclass**, which is used by the current project in **pyod/models/base.py**, is missing.
<img width="460" alt="image" src="https://user-images.githubusercontent.com/109138844/227123724-8fb95b55-b4ac-4ccf-b6ad-13495c4fd39d.png">
<details>
<summary>The calling methods from the six</summary>
<pre>
six.iteritems
six.add_metaclass
</pre>
</details>
<details>
<summary>The calling methods from the all methods</summary>
<pre>
histogram.astype.astype
anomaly_scores.cpu.detach.numpy
self.OCSVM.super.__init__
self.clf_.fit
pyod.models.cof.COF
torch.nn.BatchNorm1d
pyod.models.sos.SOS.fit
self.CD.super.__init__
self._train_autoencoder
self.hist_loss_disc.append
pyod.models.sos.SOS
numpy.where
n_dim.n_samples.np.random.rand.astype
operator.itemgetter
i.X.copy.reshape
self.predict_confidence
tensorflow.keras.backend.square
self._set_small_large_clusters
combo.models.score_comb.aom
pyod.models.cblof.CBLOF.predict
matplotlib.pyplot.subplot.set_ylim
warnings.warn
pyod.models.lscp.LSCP.predict
self.fit_query
numpy.delete
pyod.models.sod.SOD.decision_function
overall_loss.append
sklearn.ensemble.IsolationForest
res.append
i.str.names.compile
numpy.empty
self.discriminator.predict
self.bin_edges_.append
Z.reshape.reshape
self.enc.optimizer.apply_gradients
zip
scipy.spatial.distance.squareform.max
l.rstrip
pythresh.thresholds.moll.MOLL
self.l2_regularizer.l2.self.hidden_activation.hidden_neurons.Dense
pyod.models.copod.COPOD.explain_outlier
numpy.unique
hasattr
X.self.discriminator.predict.ravel
itertools.combinations
six.add_metaclass
numpy.min
self.disc_xz
pyod.utils.check_parameter
self.O.sum
os.path.dirname
data.cuda.float
version_path.open.read
fig.add_subplot.set_ylabel
wcos_list.append
pyod.models.copod.COPOD.predict
_g.sbn_path_index.itemgetter
sklearn.preprocessing.MinMaxScaler.fit.transform
sklearn.linear_model.LinearRegression.fit
self.clustering_estimator_.predict
self._get_param_names
pythresh.thresholds.regr.REGR
pyod.models.lscp.LSCP.fit
utils.utility.get_optimal_n_bins
numpy.random.RandomState.rand
utils.utility.invert_order
__f
self.estimators_features_.append
second.np.array.reshape
self.XGBOD.super.__init__
test_scores.reshape.scaler.transform.ravel.clip
outliers.np.array.ravel
NotImplementedError
self.detector_.decision_function
tensorflow.keras.optimizers.legacy.Adam
matplotlib.pyplot.suptitle
numpy.argsort
numpy.abs
scipy.spatial.distance.cdist
pyod.models.knn.KNN.fit_predict
tensorflow.keras.layers.Dropout
numpy.zeros_like
h5py.File
numpy.max
self.discriminator.train_on_batch
str
pyod.models.suod.SUOD.decision_function
pyod.models.loda.LODA.predict
utils.utility.generate_bagging_indices
pythresh.thresholds.dsn.DSN
potential.np.array.squeeze.append
numpy.bincount
numpy.searchsorted
centerer.transform
estimator.decision_function
tensorflow.keras.models.Model.add_loss
_pairwise_distances_no_broadcast_helper
key.partition
pyod.models.deep_svdd.DeepSVDD
sklearn_base._partition_estimators
numpy.sum
self.fit
scipy.sparse.csr_matrix
getattr
sklearn.preprocessing.RobustScaler
scipy.special.erf.clip
self.combine_model.compile
numpy.digitize
self.HBOS.super.__init__
_calculate_outlier_scores
range
n_dim.n_samples.np.random.randn.epsilon.astype
inner_autoencoder.to.load_state_dict
numpy.cumsum
pyod.models.lof.LOF.fit
gaal_base.create_discriminator
i.beta.copy
statsmodels.distributions.empirical_distribution.ECDF
matplotlib.pyplot.ylim
sklearn.utils.check_consistent_length
numpy.argmax
pythresh.thresholds.vae.VAE
self.neigh.kneighbors
self.InnerAutoencoder.super.__init__
self.loss_fn.backward
disc_zz_tape.gradient
torch.nn.Dropout
angles_scalers2.append
tensorflow.keras.Input
join.split
pyod.models.suod.SUOD.fit
tensorflow.zeros_like
estimator.fit
self.n_jobs.Parallel
self._get_alpha_n
list.append
print
inner_autoencoder.to.zero_grad
numpy.isin
self._generate_new_features
pyod.models.mo_gaal.MO_GAAL.fit
self.generator.compile
pyod.models.alad.ALAD
self._d2a.sum
self.n_features_.self.dropout_rate.Dropout
pyod.models.rgraph.RGraph.predict
self._build_model
numpy.sqrt
combo.models.score_comb.median
self.estimators_.append
scipy.spatial.distance.pdist
criterion.item
sklearn.mixture.GaussianMixture
numpy.get_printoptions
self.active_support_elastic_net
self.decoder.add_module
self.LOF.super.__init__
numpy.diff
numpy.random.choice.astype
torch.float32.dist.torch.tensor.to
multiprocessing.cpu_count
self.threshold_.test_scores.astype
train_scores.reshape
arff.load.keys
self.generator
self._decision_function
self._make_estimator
sklearn.neighbors.KernelDensity
pyod.models.rod.ROD.fit
whiten_data
self.COF.super.__init__
pyod.models.feature_bagging.FeatureBagging.fit
self.n_random_cuts.pred_scores.ravel
self.scaler.transform.astype
self.network.parameters
pythresh.thresholds.qmcd.QMCD
iforest.IForest
combo.models.score_comb.average
tensorflow.keras.backend.random_normal
pythresh.thresholds.decomp.DECOMP
len
ac_dist.append
inner_autoencoder
pyod.models.cblof.CBLOF
divmod
self.activation_hidden_disc.l_dim.Dense.tfa.layers.SpectralNormalization
pyod.models.lunar.LUNAR.fit
sys.path.append
fig.add_subplot.set_xlabel
sys.path.insert
sklearn.neighbors.KDTree
p.n.contam.n.int.n.binom.cdf.np.vectorize
self.contamination.eval
xx.ravel
numpy.linalg.matrix_rank
matplotlib.pyplot.yticks
tensorflow.math.reduce_sum
self._get_decision_scores
self.kde_.score_samples
self.SUOD.super.__init__
self.encoder.add_module
self.detector_._score_samples
pythresh.thresholds.hist.HIST
float
threshold.y_pred.astype
pyod.models.ocsvm.OCSVM
numpy.greater_equal
matplotlib.pyplot.subplots_adjust
subspaces_scores.append
self.detector_.fit
self._fit_fast
abs
scipy.spatial.minkowski_distance
matplotlib.pyplot.title
tensorflow.keras.losses.BinaryCrossentropy
features.append
self.model.load_state_dict
matplotlib.font_manager.FontProperties
multiprocessing.Pool.close
self.disc_xx.optimizer.apply_gradients
sklearn.decomposition.PCA
pyod.models.gmm.GMM.fit
pyod.models.so_gaal.SO_GAAL.decision_function
first_ind.append
numpy.exp
self.decision_function
self._cof_fast
start_ind.self.hist_loss_generator.pd.Series.rolling
sum
pyod.models.mcd.MCD
matplotlib.pyplot.subplot.scatter
np.where
numba.prange
torch.nn.Tanh
utility.precision_n_scores
numpy.fill_diagonal
multiprocessing.Pool.join
torch.device
pyod.models.iforest.IForest.decision_function
numpy.isnan
self.scaler_.transform
torch.nn.Sigmoid
sklearn.preprocessing.MinMaxScaler.fit
hbos.HBOS
torch.no_grad
numpy.random.randn
torch.squeeze
X_train.torch.from_numpy.float.cuda
max_s.min_s.second.np.array.reshape
pyod.models.lof.LOF.decision_function
sklearn.neighbors.BallTree
pythresh.thresholds.all.ALL
numpy.random.seed
pyod.models.alad.ALAD.decision_function
numpy.linalg.norm
numpy.round
pyod.models.auto_encoder_torch.AutoEncoder
pyod.models.copod.COPOD.decision_function
pyod.models.combination.maximization
fig.add_subplot.plot
sklearn.neighbors.DistanceMetric.get_metric
numpy.column_stack
pyod.models.abod.ABOD.keys
encoder
X.np.linalg.pinv.T.X.sum
self.model_.fit
inner_autoencoder.to.parameters
pyod.models.inne.INNE.predict
tensorflow.keras.Model.compile
sklearn.utils.check_X_y
detector.fit
i.str.names.predict
self.l2_regularizer.l2.self.hidden_activation.neurons.Dense
torch.float32.train_y.torch.tensor.to
matplotlib.pyplot.subplots
torch.nn.Sequential
ocsvm.OCSVM
numpy.count_nonzero
self.train_step
pyod.utils.example.data_visualize
pyod.models.loda.LODA
data_cuda.self.model.cpu.numpy
setattr
self._predict_decision_scores
numpy.absolute
x.min
numpy.random.choice
generate_indices
multiprocessing.Pool
joblib.dump
Cooks_dist
sklearn.utils.validation.check_random_state
matplotlib.pyplot.figure.add_subplot
numpy.random.RandomState.shuffle
pyod.utils.stat_models.pearsonr
self._x2d
super
self.loss_fn
sklearn.utils.validation.check_array
self.threshold_.pred_score.astype.ravel
self.SO_GAAL.super.__init__
sklearn.preprocessing.StandardScaler.fit.transform
sklearn.preprocessing.RobustScaler.fit_transform
self.X_train_new_.self.clf_.predict.ravel
pyod.utils.data.evaluate_print
get_losses
model.predict
max
numpy.random.RandomState.seed
self.hidden_activation.neurons.Dense
axs.bar
self.model
self.activation_hidden_gen.l_dim.Dense
numpy.reshape
numpy.less
sklearn.utils.estimator_checks.check_estimator
self.network
self.SOD.super.__init__
decoder
matplotlib.pyplot.subplot.legend
window_smoothening.start_ind.self.hist_loss_generator.pd.Series.rolling.mean
angles_scalers1.append
sklearn.utils.check_random_state.shuffle
pyod.models.cd.CD.fit
num_index.X.astype
numpy.finfo
i.X.reshape
pyod.utils.utility.check_detector
detector.decision_function
self.device.data.to.float.cuda
time_mat.np.mean.tolist
pyod.models.sampling.Sampling.predict
total_loss_generator.numpy
numpy.square
tensorflow.keras.layers.Dense
self._get_competent_detectors
numpy.meshgrid
pyod.models.lmdd.LMDD.predict
params.items
self.network.load_state_dict
sklearn.preprocessing.MinMaxScaler
pyod.models.lmdd.LMDD
count_instances
pyod.models.inne.INNE
sklearn.preprocessing.MinMaxScaler.transform
pyod.utils.data.generate_data_clusters
joblib.load.decision_function
numpy.copy
tensorflow.keras.Model.summary
sklearn.decomposition.sparse_encode
pythresh.thresholds.fwfm.FWFM
torch.utils.data.DataLoader
self._sod
yy.ravel
alpha_list.append
self.latent_dim_G.Dense
pyod.models.iforest.IForest.predict
self._set_cluster_centers
self.__dis
self.elastic_net_subspace_clustering.toarray
numpy.cumsum.tolist
pyod.models.cof.COF.decision_function
joblib.parallel.cpu_count
self.inner_autoencoder.super.__init__
torch.nn.Linear
pyod.models.auto_encoder.AutoEncoder
pyod.models.xgbod.XGBOD.predict
pythresh.thresholds.clust.CLUST
roc_list.pd.DataFrame.transpose
matplotlib.pyplot.xlim
TypeError
pyod.models.loci.LOCI
cross_entropy
pyod.models.xgbod.XGBOD
sklearn.cluster.KMeans
pyod.models.auto_encoder.AutoEncoder.decision_function
pyod.models.so_gaal.SO_GAAL
arff.load
pyod.models.mo_gaal.MO_GAAL
window_smoothening.start_ind.self.hist_loss_disc.pd.Series.rolling.mean
tensorflow.keras.backend.abs
numpy.maximum
self.neigh.fit
y.astype.astype
numpy.identity
pyod.models.ecod.ECOD.decision_function
pyod.models.combination.moa
warnings.filters.pop
X_enc.self.dec.numpy
i.str.names.train_on_batch
numpy.sign
AutoEncoder.fit
sklearn.utils.validation.check_is_fitted
WEIGHT_MODEL
pyod.models.kde.KDE.decision_function
self._snn
tensorflow.keras.backend.shape
inner_autoencoder.to.eval
euclidean_sq.np.sum.np.sqrt.ravel
pyod.models.rgraph.RGraph.decision_function
euclidean
combo.models.score_comb.maximization
_wcos
get_label_n
matplotlib.pyplot.subplot
joblib.load.predict
pyod.models.cof.COF.predict
torch.optim.Adam.zero_grad
self.histograms_.append
sklearn.neighbors.NearestNeighbors.kneighbors
pyod.models.knn.KNN.fit
time_list.pd.DataFrame.transpose
AttributeError
pythresh.thresholds.karch.KARCH
enumerate
self._decision_function_default
numpy.prod
pyod.models.vae.VAE.predict
pyod.models.lunar.LUNAR.predict
X_norm.self.enc.numpy
sklearn_base._pprint
self.COPOD.super.__init__
sklearn.base.clone.set_params
pyod.models.kde.KDE
str.split
matplotlib.pyplot.ylabel
_set_random_states
numpy.percentile
joblib.parallel.delayed
self.WEIGHT_MODEL.super.__init__
tensorflow.keras.models.Sequential
pyod.models.mcd.MCD.fit
numpy.random.shuffle
collections.defaultdict
pyod.models.hbos.HBOS.decision_function
sorted
pyod.models.hbos.HBOS
numpy.nanargmin
numpy.float64
pyod.models.loci.LOCI.predict
potential.np.array.squeeze
pythresh.thresholds.yj.YJ
self.query_model.optimizer.apply_gradients
value.get_params.items
self._validate_estimator
pyod.models.alad.ALAD.fit
InnerAutoencoder
i.str.names
numpy.amax
torch.nn.ReLU.keys
pyod.models.hbos.HBOS.fit
self.l2_regularizer.l2.self.hidden_activation.reversed_neurons.Dense
self.model.to
scipy.stats.skew
key.valid_params.set_params
self._get_dist_by_method.ravel
criterion.backward
n_neighbours.SCORE_MODEL.to
pyod.models.auto_encoder.AutoEncoder.fit
tensorflow.concat
self.vae_loss
fig.add_subplot.set_title
self.MCD.super.__init__
matplotlib.pyplot.figure
numpy.mean
self.device.data.to.float
sklearn.utils.deprecated
numpy.zeros.append
self.RGraph.super.__init__
joblib.Parallel
j.ind.set.intersection
pyod.models.sod.SOD
data.best_model.cpu.numpy
torch.tensor
self.neigh_.kneighbors
matplotlib.pyplot.scatter
pyod.models.kpca.KPCA
threshold.pred_scores.astype
dec_tape.gradient
tensorflow.keras.Model.get_weights
pyod.models.ecod.ECOD.predict
pyod.models.vae.VAE.decision_function
f.read
pyod.models.xgbod.XGBOD.decision_function
self.disc_zz.optimizer.apply_gradients
pyod.models.so_gaal.SO_GAAL.fit
sklearn.base.clone.get_params
getattr.delayed
numpy.vstack
pyod.models.sod.SOD.predict
sklearn.utils.column_or_1d
matplotlib.pyplot.xticks
_add_sub_plot
pyod.utils.data.generate_data_categorical
scores.scores.max.ravel
numpy.dot
tensorflow.math.square
numpy.append
tensorflow.keras.backend.int_shape
random_state.uniform.tolist
pyod.models.sampling.Sampling.decision_function
self.combine_model.train_on_batch
tensorflow.keras.layers.Lambda
self.discriminator
pyod.models.ocsvm.OCSVM.decision_function
self.clustering_estimator_.fit
self.hist_loss_gen.append
pyod.models.iforest.IForest
iter
tensorflow.keras.initializers.VarianceScaling
join
six.iteritems
pythresh.thresholds.clf.CLF
pythresh.thresholds.meta.META
test_scores.reshape.scaler.transform.ravel
self.dist.pairwise
numpy.apply_along_axis
tensorflow.keras.models.Sequential.compile
numpy.sin
self.model.state_dict
numpy.random.RandomState.get_state
pythresh.thresholds.eb.EB
torch.optim.Adam.state_dict
_calculate_outlier_scores_auto
self.loss_fn.item
numpy.flatnonzero
sigmoid
self._process_decision_scores
sklearn.covariance.MinCovDet
self.kde_.fit
list
sklearn.neighbors.LocalOutlierFactor
criterion
data.best_model.cpu
torch.cuda.is_available
disc_tape.gradient
self._cof_memory
pyod.models.gmm.GMM.decision_function
self.SOS.super.__init__
tensorflow.keras.backend.sum
inner_autoencoder.to.state_dict
self.model.zero_grad
pyod.models.sos.SOS.decision_function
kernel
y.astype.astype.ravel.sum
pyod.utils.utility.standardizer
pyod.models.kpca.KPCA.fit
prn_list.pd.DataFrame.transpose
pyod.models.sod.SOD.fit
tensorflow.keras.backend.mean
matplotlib.pyplot.legend
numpy.intersect1d
y_test.reshape
numpy.random.rand
torch.float32.val_dist.torch.tensor.to
set
sklearn.neighbors.NearestNeighbors.fit
clf.decision_function
pyod.models.auto_encoder.AutoEncoder.predict
self.detector_.score_samples
tensorflow.keras.regularizers.l2
train_y.out.criterion.sum.backward
self.dec
self._decision_function.extend
numpy.hstack.append
sklearn.utils.validation.check_random_state.choice
pyod.models.suod.SUOD
axs.plot
self.LUNAR.super.__init__
numpy.average
disc_xx_tape.gradient
self.discriminator.compile
sklearn.datasets.make_blobs
self.LMDD.super.__init__
self._get_dist_by_method
combination.average
math.ceil
self.KNN.super.__init__
scores.append
scipy.stats.pearsonr
_check_params
matplotlib.lines.Line2D
scores.np.isnan.any
numpy.random.RandomState
pythresh.thresholds.aucp.AUCP
scipy.io.loadmat
locals
self.score_scalar_.transform
filter
numpy.asarray
disc_xz_tape.gradient
numpy.nanstd
self.disc_xz.optimizer.apply_gradients
first.np.array.reshape.scaler1.transform.reshape.append
sklearn.utils.multiclass.check_classification_targets
X_train.torch.from_numpy.float
self._d2a
pyod.models.feature_bagging.FeatureBagging
numpy.linspace
self.FeatureBagging.super.__init__
tensorflow.keras.optimizers.legacy.SGD
self.PCA.super.__init__
self.discriminator.optimizer.apply_gradients
self.tree_.query
scipy.io.loadmat.ravel
self.enc.compile
sklearn.preprocessing.StandardScaler
self.final_norm
self.ECOD.super.__init__
_plot
numpy.asarray.sum
second.np.array.reshape.scaler2.transform.reshape.append
pyod.models.auto_encoder_torch.AutoEncoder.predict
np.full
best_model
matplotlib.pyplot.subplot.set_xlabel
numpy.ravel
check_parameter
torch.float32.train_y.torch.tensor.to.cpu
gaal_base.create_generator
pyod.models.loci.LOCI.decision_function
pyod.models.pca.PCA.predict
matplotlib.pyplot.subplot.axis
sklearn.base.clone
pyod.models.gmm.GMM
self.hidden_neurons_.pop
numpy.nonzero
numpy.var
pythresh.thresholds.ocsvm.OCSVM
self._init_c
pythresh.thresholds.fgd.FGD
torch.from_numpy
pyod.utils.example.visualize
sklearn.metrics.euclidean_distances
self.CBLOF.super.__init__
numpy.squeeze
sklearn.metrics.precision_score
self.model_.approximate
key.endswith
numpy.hstack
pandas.Series
j.local_region_list.collections.Counter.items
_offset._r.np.add.n_inliers.np.multiply.np.round.astype
PyODKernelPCA
_get_critical_values
os.path.join
sklearn.utils.check_random_state.uniform
super.__init__
prn_mat.np.mean.tolist
Z.reshape.min
test_local_region.self.training_pseudo_label_.ravel
utils.torch_utility.get_activation_by_name
start_ind.self.hist_loss_discriminator.pd.Series.rolling
pyod.models.cblof.CBLOF.decision_function
pyod.models.inne.INNE.decision_function
ValueError
pyod.models.lof.LOF.predict
pyod.models.knn.KNN
pythresh.thresholds.mad.MAD
self._b2o
torch.optim.Adam.step
subspaces_scores.np.array.T.np.average.reshape
self.dec.optimizer.apply_gradients
standardization_flag_list.append
self.decoder
n_outliers_.append
self.ALAD.super.__init__
warnings.filterwarnings
sklearn.utils.validation.check_X_y
self.output_activation.self.n_features_.Dense
roc_mat.np.mean.tolist
min
RuntimeError
beta_list.append
self.LOCI.super.__init__
sklearn.model_selection.train_test_split
int
pca.transform
X.clf.fit.predict
i.str.names.evaluate
numpy.array.append
warnings.simplefilter
numpy.random.permutation
self.disc_xz.compile
pyod.models.mcd.MCD.predict
idx.tolist.tolist
pythresh.thresholds.iqr.IQR
self._init_detectors
clf.predict
pyod.models.rod.ROD
joblib.delayed
f.read.splitlines
self.detector_.mahalanobis
max_f.min_f.first.np.array.reshape
_snn_imp
self.output_activation.self.latent_dim.Dense
pyod.models.lmdd.LMDD.decision_function
matplotlib.pyplot.plot
self.model.eval
_calculate_wocs
loss_disc.numpy
matplotlib.pyplot.xlabel
first.np.array.reshape
pyod.models.combination.median
clf.fit
sklearn.utils.column_or_1d.ravel
torch.sum
SCORE_MODEL
numpy.nan_to_num
pyod.models.auto_encoder_torch.AutoEncoder.decision_function
tape.gradient
utils.utility._get_sklearn_version
pyod.models.ecod.ECOD.fit
printer
sklearn.svm.OneClassSVM
_generate_data
self.random_state.choice
pyod.models.cd.CD.predict
numpy.put
scale_angles
pythresh.thresholds.gesd.GESD
self.AutoEncoder.super.__init__
tensorflow_addons.layers.SpectralNormalization
dis
_get_perplexity
joblib.load.fit
anomaly_scores.cpu.detach
pyod.models.lscp.LSCP
self.limits_.append
numpy.full.append
self._score_samples
self.enc
numpy.add
self.DeepSVDD.super.__init__
tensorflow.GradientTape
self.n_bins.lower
act_layer_xx.numpy.numpy
numba.njit
pythresh.thresholds.filter.FILTER
pandas.concat.to_csv
tensorflow.ones_like
collections.Counter
self.SCORE_MODEL.super.__init__
data.get_outliers_inliers
self.O.sum.ravel
numpy.multiply
self.model.parameters
numpy.linalg.solve
sklearn.linear_model.LinearRegression
n_row.n_row.np.full.astype
type
pyod.models.copod.COPOD
self._set_n_classes
self.activation_hidden_disc.l_dim.Dense
self.random_state_.get_state
utils.utility.precision_n_scores
combination.maximization.ravel
window_smoothening.start_ind.self.hist_loss_discriminator.pd.Series.rolling.mean
utils.utility.check_detector
tensorflow.keras.Model.add_loss
torch.float32.val_y.torch.tensor.to
window_smoothening.start_ind.self.hist_loss_gen.pd.Series.rolling.mean
i.format.self.train_history.append
self.mean.any
pyod.models.ecod.ECOD
self.scaler_.fit_transform
numpy.random.normal
_parallel_ecdf.delayed
tensorflow.keras.models.Model.compile
os.path.abspath
pyod.models.lof.LOF
numpy.random.uniform
self.MO_GAAL.super.__init__
self.generator.predict
joblib.load
matplotlib.pyplot.tight_layout
torch.is_tensor
data_cuda.self.model.cpu
io.open
loss_gen.numpy
self.pca.fit
self.PyODDataset.super.__init__
pyod.models.cof.COF.fit
sklearn.utils.check_random_state.randint
pyod.models.loda.LODA.fit
self.generator.optimizer.apply_gradients
pyod.utils.utility.precision_n_scores
pyod.models.mo_gaal.MO_GAAL.decision_function
pyod.models.mo_gaal.MO_GAAL.predict
second.np.array.reshape.scaler2.transform.reshape
self.device.data.to.float.to
utils.utility.check_parameter
matplotlib.pyplot.subplot.contourf
pyod.models.deep_svdd.DeepSVDD.decision_function
scipy.special.erf
_get_sampling_N
self.elastic_net_subspace_clustering
self.threshold_.pred_score.astype
pyod.models.auto_encoder_torch.AutoEncoder.fit
self.IForest.super.__init__
numpy.histogram
ModuleNotFoundError
self.ABOD.super.__init__
pyod.models.suod.SUOD.predict
PyODDataset
numpy.full
pyod.models.lunar.LUNAR
math.sqrt
pandas.concat
numpy.array
pyod.utils.utility.generate_bagging_indices
self.ROD.super.__init__
dict
ord
pythresh.thresholds.mtt.MTT
pyod.utils.stat_models.pairwise_distances_no_broadcast
pyod.models.xgbod.XGBOD.fit
self.hidden_activation.self.n_features_.Dense
pyod.models.copod.COPOD.fit
pyod.models.gmm.GMM.predict
numpy.vectorize
sklearn.preprocessing.StandardScaler.fit
self.scaler.transform
pyod.models.loci.LOCI.fit
scipy.spatial.distance.squareform
self.dis_measure_
pyod.models.abod.ABOD.items
self.l2_regularizer.l2.self.output_activation.self.n_features_.Dense
pyod.models.vae.VAE.fit
pyod.models.feature_bagging.FeatureBagging.decision_function
pyod.models.lmdd.LMDD.fit
ground_truth.y_pred.sum
sklearn.neighbors.NearestNeighbors
pyod.models.alad.ALAD.predict
dict.update
self.threshold_.self.decision_scores_.astype.ravel
self._scalar.transform
pyod.models.knn.KNN.predict
sklearn.metrics.roc_auc_score
tensorflow.keras.models.Sequential.add
self._calculate_decision_score
_check_dim
pyod.models.lunar.LUNAR.decision_function
numpy.log2
self.disc_xx
self.network.train
pythresh.thresholds.boot.BOOT
tensorflow.keras.backend.exp
pyod.models.abod.ABOD.decision_function
self.activation_hidden.l_dim.Dense
pyod.utils.utility.argmaxn
tensorflow.math.reduce_mean
tensorflow.keras.models.Model
setuptools.find_packages
pyod.models.loda.LODA.decision_function
tensorflow.keras.initializers.Identity
self.network.cpu
pyod.models.kde.KDE.fit
x.max
n.x.np.vectorize
pyod.models.ocsvm.OCSVM.fit
numpy.nan_to_num.reshape
self.combine_model.evaluate
metric.lower
inspect.signature
tensorflow.keras.Model
cover_radius.np.isnan.all
pyod.models.sos.SOS.predict
chr
pyod.models.rgraph.RGraph.fit
enc_tape.gradient
self.latent_dim.self.sampling.Lambda
self.method.lower
numpy.log
numpy.place
matplotlib.pyplot.subplot.set_xlim
self.l2_regularizer.l2.self.hidden_activation.self.hidden_neurons_.Dense
self.MAD.super.__init__
pythresh.thresholds.zscore.ZSCORE
scipy.stats.binom.cdf
j.ind_arr.tolist
self._fit_default
columns.self.O.max.max
numpy.arccos
self.model_.decision_function
generate_negative_samples
self._check_subset_size
pyod.models.cblof.CBLOF.fit
pyod.models.abod.ABOD.predict
pyod.models.mad.MAD
pyod.models.abod.ABOD
tensorflow.random.set_seed
n_neighbours.WEIGHT_MODEL.to
self.VAE.super.__init__
total_loss_discriminator.numpy
utils.stat_models.pairwise_distances_no_broadcast
pyod.models.rgraph.RGraph
pyod.models.kde.KDE.predict
pyod.models.so_gaal.SO_GAAL.predict
read_arff
sklearn.neighbors.KDTree.query
tensorflow.keras.models.Sequential.summary
sklearn.utils.validation.check_random_state.randint
numpy.quantile
self.loss
self.get_outlier_scores
combo.models.score_comb.moa
numpy.iinfo
numpy.median
open
init_signature.parameters.values
pyod.models.mad.MAD.predict
self.n_bins_.append
self.disc_zz.compile
get_color_codes
self.disc_zz
torch.nn.LayerNorm
geometric_median.append
collections.defaultdict.items
copod.COPOD
erf_score.clip.ravel
scipy.spatial.distance_matrix
pythresh.thresholds.mcst.MCST
pyod.models.iforest.IForest.fit
pyod.models.deep_svdd.DeepSVDD.predict
sklearn.utils.check_array
numpy.ones.astype
start_ind.self.hist_loss_gen.pd.Series.rolling
pyod.models.abod.ABOD.fit
self.encoder
numpy.zeros
numpy.arange
sklearn.utils.column_or_1d.max
list.remove
self._fit
six.add_metaclass
exec
sklearn.utils.random.sample_without_replacement
n_clusters.int._r._r.random_state.uniform.tolist
model
self.threshold_.test_scores.astype.ravel
pyod.models.kpca.KPCA.decision_function
pyod.models.knn.KNN.decision_function
pyod.models.cd.CD.decision_function
self.disc_xx.compile
self.model.fit
train_y.out.criterion.sum
base_dl._get_tensorflow_version
...
</pre>
</details>
@developer
Could you please help me look into this issue?
May I open a pull request to fix it?
Thank you very much.
|
open
|
2023-03-23T06:41:18Z
|
2023-03-23T06:41:18Z
|
https://github.com/yzhao062/pyod/issues/489
|
[] |
PyDeps
| 0
|
miguelgrinberg/Flask-SocketIO
|
flask
| 1,530
|
Can nginx interface with SocketIO using a unix socket
|
I am trying to configure Nginx to communicate with Flask-SocketIO via a Unix socket as outlined in the link below, but the Nginx WebSocket requests to uwsgi always time out. I would like to use this approach, if possible, because it seems it would simplify deploying multiple uwsgi instances: it would not require allocating instance-specific ports as outlined in the 'Using nginx as a WebSocket Reverse Proxy' section of the Flask-SocketIO documentation. I would appreciate it if you could comment on whether Abrahamsen's approach should work, and whether it would be a good approach for deploying multiple uwsgi instances under nginx.
Thank you very much for your great library and the comprehensive documentation.
https://michaelabrahamsen.com/posts/configuring-uwsgi-and-nginx-for-use-with-flask-socketio
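For reference, a minimal sketch of the setup I have in mind (the socket path, upstream name, and location block are my assumptions, adapted from the article above). The detail that seems to matter is that uwsgi must expose the Unix socket with `--http-socket` rather than `--socket`, since nginx's `uwsgi_pass` cannot carry the WebSocket upgrade:
```nginx
# uwsgi side (assumed): uwsgi --http-socket /tmp/socketio.sock --gevent 1000 ...
upstream socketio_app {
    server unix:/tmp/socketio.sock;
}

server {
    listen 80;

    location /socket.io {
        # Pass the WebSocket upgrade through to the app over plain HTTP.
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_pass http://socketio_app;
    }
}
```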
|
closed
|
2021-04-21T21:01:24Z
|
2021-06-27T19:38:16Z
|
https://github.com/miguelgrinberg/Flask-SocketIO/issues/1530
|
[
"question"
] |
eric-gilbertson
| 10
|
keras-team/keras
|
pytorch
| 20,876
|
Scikit-Learn API Wrappers don't work with Model input
|
When using the new Scikit-Learn API Wrappers with a compiled Model as input, the wrapper does not work, running into errors citing that the underlying model isn't compiled. The following code, adapted from the example in the [`SKLearnClassifier` documentation](https://keras.io/api/utils/sklearn_wrappers/#sklearnclassifier-class) to pass in a Model instance rather than a callable, runs into this issue. I also had to fix a couple bugs present in that code for it to work, and those couple fixes are noted in the code:
```python
from keras.src.layers import Dense, Input
from keras.src.models.model import Model # FIX: previously imported from keras.src.layers
def dynamic_model(X, y, loss, layers=[10]):
# Creates a basic MLP model dynamically choosing the input and
# output shapes.
n_features_in = X.shape[1]
inp = Input(shape=(n_features_in,))
hidden = inp
for layer_size in layers:
hidden = Dense(layer_size, activation="relu")(hidden)
n_outputs = y.shape[1] if len(y.shape) > 1 else 1
out = [Dense(n_outputs, activation="softmax")(hidden)]
model = Model(inp, out)
model.compile(loss=loss, optimizer="rmsprop")
return model
from sklearn.datasets import make_classification
from keras.wrappers import SKLearnClassifier
X, y = make_classification(n_samples=1000, n_features=10, n_classes=2) # FIX: n_classes 3 -> 2
est = SKLearnClassifier(
model=dynamic_model(X, y, loss="categorical_crossentropy", layers=[20, 20, 20]) # pass in compiled Model instance instead of callable
)
est.fit(X, y, epochs=5)
```
The error arises when fitting the model in that last line and is reproduced below. I believe this is from the fact that the model is cloned by default in `self._get_model()`, and `clone_model()` does not recompile the model.
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
[<ipython-input-24-c9dcff454e13>](https://localhost:8080/#) in <cell line: 0>()
27 )
28
---> 29 est.fit(X, y, epochs=5)
1 frames
[/usr/local/lib/python3.11/dist-packages/keras/src/wrappers/sklearn_wrapper.py](https://localhost:8080/#) in fit(self, X, y, **kwargs)
162 y = self._process_target(y, reset=True)
163 model = self._get_model(X, y)
--> 164 _check_model(model)
165
166 fit_kwargs = self.fit_kwargs or {}
[/usr/local/lib/python3.11/dist-packages/keras/src/wrappers/utils.py](https://localhost:8080/#) in _check_model(model)
25 # compile model if user gave us an un-compiled model
26 if not model.compiled or not model.loss or not model.optimizer:
---> 27 raise RuntimeError(
28 "Given model needs to be compiled, and have a loss and an "
29 "optimizer."
RuntimeError: Given model needs to be compiled, and have a loss and an optimizer.
```
|
closed
|
2025-02-07T21:21:33Z
|
2025-03-13T02:05:56Z
|
https://github.com/keras-team/keras/issues/20876
|
[
"stat:awaiting response from contributor",
"stale",
"type:Bug"
] |
swbedoya
| 3
|
AutoViML/AutoViz
|
scikit-learn
| 55
|
HTML and Bokeh do not output for all datasets!
|
AutoViz works well with chart_format svg, but with bokeh and html not all datasets work; when running I encounter:
~\anaconda3\lib\site-packages\pandas\core\indexes\base.py in get_loc(self, key, method, tolerance)
3361 return self._engine.get_loc(casted_key)
3362 except KeyError as err:
-> 3363 raise KeyError(key) from err
3364
3365 if is_scalar(key) and isna(key) and not self.hasnans:
KeyError: ''
After searching the internet, it seems the problem is with pandas!
Sorry, but I do not know how to fix it!
Kind regards,
[autoviz_test.zip](https://github.com/AutoViML/AutoViz/files/7698983/autoviz_test.zip)
|
closed
|
2021-12-12T15:17:59Z
|
2021-12-13T00:44:36Z
|
https://github.com/AutoViML/AutoViz/issues/55
|
[] |
nhatvietnam
| 2
|
darrenburns/posting
|
rest-api
| 51
|
Duplicate/clone an API request
|
Thank you for creating a TUI API client and sharing your work with the world.
I have a feature request I'd like to share. When creating API requests it's often easier to clone an existing one and tweak a few things like the request payload. It'd be convenient if posting supported this.
|
closed
|
2024-07-20T09:17:19Z
|
2024-08-02T22:57:50Z
|
https://github.com/darrenburns/posting/issues/51
|
[] |
pieterdd
| 0
|
ansible/ansible
|
python
| 84,433
|
template error while templating string:
|
### Summary
We first do a vault lookup to fetch the password; the password contains the characters "{%!". Below is how the rendered template file looks:
```yaml
db:
create: true
admin_user: "user1"
admin_password: !unsafe "4{%!<----"
users:
"pgsqladmin":
password: !unsafe "4{%!<----"
"applicationscourtorders":
context_user: true
password: "password"
```
during a combine step
```
- name: Combine db variables from item-specific service values
set_fact:
merged_db_vars: "{{ merged_db_vars | combine(specific_service_vars.ansible_facts.db, recursive=True) }}"
when: specific_service_vars.ansible_facts.db is defined
```
we get the following error
" Error was a <class 'ansible.errors.AnsibleError'>, original message: template error while templating string: unexpected char '!' at 3. String: 4{%!<-ka---\n\"
I have tried multiple options using to_yaml, to_json, and quote.
Can someone please help get this problem solved?
### Issue Type
Bug Report
### Component Name
jinja2
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.13]
config file = None
configured module search path = ['/home/azureadmin/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/ansible/lib64/python3.8/site-packages/ansible
ansible collection location = /home/azureadmin/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/ansible/bin/ansible
python version = 3.8.8 (default, Aug 25 2021, 16:13:02) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)]
jinja version = 3.1.4
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
-
```
### OS / Environment
centos8
### Steps to Reproduce
We first do a vault lookup to fetch the password; the password contains the characters "{%!". Below is how the rendered template file looks:
```yaml
db:
create: true
admin_user: "user1"
admin_password: !unsafe "4{%!<----"
users:
"pgsqladmin":
password: !unsafe "4{%!<----"
"applicationscourtorders":
context_user: true
password: "password"
```
during a combine step
```
- name: Combine db variables from item-specific service values
set_fact:
merged_db_vars: "{{ merged_db_vars | combine(specific_service_vars.ansible_facts.db, recursive=True) }}"
when: specific_service_vars.ansible_facts.db is defined
```
### Expected Results
I expected this to combine
### Actual Results
```console
" Error was a <class 'ansible.errors.AnsibleError'>, original message: template error while templating string: unexpected char '!' at 3. String: 4{%!<-ka---\n\"
```
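For what it's worth, the error seems reproducible outside Ansible with plain Jinja2 (a minimal sketch; whether Ansible's templating layer changes anything here is an assumption on my part):

```python
from jinja2 import Environment, exceptions

# "{%" opens a Jinja2 statement block, so the "!" immediately after it is
# rejected by the lexer before any template logic runs.
try:
    Environment().from_string("4{%!<----")
    print("parsed")
except exceptions.TemplateSyntaxError as err:
    print("TemplateSyntaxError:", err.message)
```

This suggests the string is being re-templated somewhere despite the `!unsafe` tag.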
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
closed
|
2024-12-05T16:17:28Z
|
2025-02-18T14:00:11Z
|
https://github.com/ansible/ansible/issues/84433
|
[
"needs_info",
"bug",
"bot_closed",
"affects_2.13"
] |
jyothi-balla
| 6
|
HIT-SCIR/ltp
|
nlp
| 198
|
Semantic role labeling problems in the generated release
|
On Windows, the result of processing with ltp_test is as follows:
sent id="0" cont="特朗普 是 在 接受 福克斯 电视台 “ 福克斯 周日 新闻 ” 栏目 采访 时 做出 上述 表态 的 。"
word id="0" cont="特朗普" pos="nh" ne="S-Nh" parent="1" relate="SBV"
word id="1" cont="是" pos="v" ne="O" parent="-1" relate="HED"
arg id="0" type="ۦ#x0F;" beg="0" end="0"
The semantic role labels in this result are garbled, and compared with the LTP Cloud XML result, the sem elements are missing. I am using the 3.3.1 model files and the latest release. Please advise.
LTP Cloud result:
sent id="0" cont="特朗普是在接受福克斯电视台“福克斯周日新闻”栏目采访时做出上述表态的。"
word id="0" cont="特朗普" pos="nh" ne="S-Nh" parent="1" relate="SBV" semparent="3" semrelate="Agt"
sem id="0" parent="3" relate="Agt"
word id="1" cont="是" pos="v" ne="O" parent="-1" relate="HED" semparent="3" semrelate="mMod"
arg id="0" type="A0" beg="0" end="0"
sem id="0" parent="3" relate="mMod"
|
closed
|
2016-12-14T03:21:43Z
|
2016-12-14T10:59:55Z
|
https://github.com/HIT-SCIR/ltp/issues/198
|
[] |
Icomming
| 4
|
google-research/bert
|
tensorflow
| 885
|
What should "num_tpu_cores" be set to?
|
I am trying to pretrain with **run_pretraining.py** using **tpu-v2-32**.
What should "num_tpu_cores" be set to?
When tested with tpu-v2-8 it worked fine (num_tpu_cores=8).
python3 run_pretraining.py \
--input_file=gs://... \
--output_dir=gs://... \
--do_train=True \
--do_eval=True \
--bert_config_file=/data/workspace/bert/bert_config.json \
--train_batch_size=64 \
--max_seq_length=128 \
--max_predictions_per_seq=19 \
--num_train_steps=100 \
--num_warmup_steps=70 \
--learning_rate=1e-4 \
--use_tpu=True \
--num_tpu_cores=32 \
--tpu_name=grpc://ip:8470 \
--tpu_zone=us-central1-a \
--gcp_project=myproject
**These are the parameters I run with. Is that correct? When I do this, I get an error like this:**
ValueError: TPUConfig.num_shards is not set correctly. According to TPU system metadata for Tensorflow master (grpc://...:8470): num_replicas should be (8), got (32). For non-model-parallelism, num_replicas should be the total num of TPU cores in the system. For model-parallelism, the total number of TPU cores should be num_cores_per_replica * num_replicas. Please set it accordingly or leave it as `None`
**When I set "num_tpu_cores=8", I get the following error:**
I1025 05:22:42.688320 140065835681600 tpu_estimator.py:557] Init TPU system
ERROR:tensorflow:Error recorded from evaluation_loop: From /job:worker/replica:0/task:0:
Cloud TPU: Invalid TPU configuration, ensure ClusterResolver is passed to tpu.RunConfig
[[{{node configure_distributed_tpu/_0}}]]
Am I missing something else? Or which one should I set?
|
closed
|
2019-10-25T05:27:02Z
|
2020-08-08T12:39:57Z
|
https://github.com/google-research/bert/issues/885
|
[] |
jwkim912
| 2
|
oegedijk/explainerdashboard
|
plotly
| 240
|
Retrieving the Original Values from Sklearn Pipeline
|
Hello,
I'm trying to incorporate sklearn pipelines into Explainerdashboard, as below:
```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from explainerdashboard import ExplainerDashboard, ClassifierExplainer
from explainerdashboard.dashboard_components import *
from explainerdashboard.custom import *
from explainerdashboard.datasets import titanic_survive, feature_descriptions
X_train, y_train, X_test, y_test = titanic_survive()
model = RandomForestClassifier(n_estimators=50, max_depth=5) # .fit(X_train, y_train)
pipeline = make_pipeline(StandardScaler(), model)
pipeline.fit(X_train, y_train)
explainer = ClassifierExplainer(
pipeline, # model,
X_test,
y_test,
# cats=["Sex", "Deck", "Embarked"],
labels=["Not Survived", "Survived"],
descriptions=feature_descriptions,
)
ExplainerDashboard(
explainer,
tabs=[
IndividualPredictionsComposite,
],
).run(port=9050, debug=True)
```
I expected to see the pre-scaling data in the dashboard (e.g. sex_male=0 or 1). However, it seems the values I see on the dashboard have already gone through the StandardScaler step (e.g. sex_male=0.7, 1.3).
Is there any way to achieve my goal?
Thank you very much for an incredible open source work!
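Not an explainerdashboard feature as far as I know, but as a hypothetical pre-display step, scaled values can be mapped back with the scaler's `inverse_transform` (the data here is illustrative, standing in for a 0/1 column like sex_male):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Illustrative 0/1 column standing in for something like sex_male.
X = np.array([[0.0], [1.0], [0.0], [1.0]])
scaler = StandardScaler().fit(X)
X_scaled = scaler.transform(X)

# inverse_transform recovers the original (pre-scaling) values.
X_recovered = scaler.inverse_transform(X_scaled)
print(np.allclose(X_recovered, X))  # prints True
```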
|
closed
|
2022-11-30T06:48:59Z
|
2022-11-30T10:26:16Z
|
https://github.com/oegedijk/explainerdashboard/issues/240
|
[] |
woochan-jang
| 1
|
pydata/pandas-datareader
|
pandas
| 568
|
Not able to load AAPL stock data
|
Hi, I am trying to load AAPL stock data using `import pandas_datareader as pdr`,
but I am not able to import the data in my environment.
|
closed
|
2018-08-21T17:48:26Z
|
2023-10-31T16:41:23Z
|
https://github.com/pydata/pandas-datareader/issues/568
|
[] |
Preerahul
| 23
|
seleniumbase/SeleniumBase
|
web-scraping
| 2,793
|
CloudFlare verification not working under VPN
|
The following code works on a direct connection (no verification asked), however when using VPN (or an http proxy, NordVPN in my case) clicking on the verification box doesn't let the verification go through:
```python
URL = 'https://rateyourmusic.com/artist/pink-floyd/'
with SB(uc=True) as sb:
sb.driver.uc_open(URL)
if sb.is_element_visible('iframe[src*="challenge"]'):
iframe = sb.find_element('iframe[src*="challenge"]')
sb.driver.switch_to.frame(iframe)
confirm_input = sb.driver.find_element(By.CSS_SELECTOR, 'input')
confirm_input.click()
sb.sleep(2)
```
I have tried other libraries with undetected capabilities through a VPN, and while most didn't go through, I was able to get past the verification box with https://github.com/kaliiiiiiiiii/Selenium-Driverless.
Anyone else experienced a similar behaviour?
|
closed
|
2024-05-21T16:26:09Z
|
2024-05-22T13:10:15Z
|
https://github.com/seleniumbase/SeleniumBase/issues/2793
|
[
"invalid usage",
"UC Mode / CDP Mode"
] |
bjornkarlsson
| 10
|
AUTOMATIC1111/stable-diffusion-webui
|
deep-learning
| 16,558
|
[Bug]: AttributeError: 'ImageDraw' object has no attribute 'multiline_textsize'
|
### Checklist
- [ ] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
I am using this google colab notebook https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb which install the latest version of your webui.
I try to do img2img => inpaint upload, then I select the X/Y/Z Plot script, set the X type to Prompt S/R from the list, enter a value such as mall,bedroom, and finally click Generate. It generates the image but in the end fails to display it, with this error: AttributeError: 'ImageDraw' object has no attribute 'multiline_textsize'
### Steps to reproduce the problem
1. Go to img2img
2. Select Input Upload and upload files
3. input prompt and negative prompt
4. choose the script X/Y/Z Plot
5. For X Type choose from the list Prompt S/R
6. Input X Type values "mall,bedroom"
7. Click on generate
### What should have happened?
Generated image should be displayed properly and no error
### What browsers do you use to access the UI ?
Google Chrome
### Sysinfo
[sysinfo-2024-10-17-13-23.json](https://github.com/user-attachments/files/17413181/sysinfo-2024-10-17-13-23.json)
### Console logs
```Shell
Loading weights [88967f03f2] from /content/gdrive/MyDrive/sd/stable-diffusion-webui/models/Stable-diffusion/juggernaut_final.safetensors
Creating model from config: /content/gdrive/MyDrive/sd/stable-diffusion-webui/configs/v1-inference.yaml
Running on public URL: https://5427cc1b69b8f33387.gradio.live
✔ Connected
Startup time: 15.1s (import torch: 8.6s, import gradio: 0.8s, setup paths: 0.9s, other imports: 0.5s, load scripts: 0.6s, create ui: 0.8s, gradio launch: 1.6s, add APIs: 1.2s).
Applying attention optimization: xformers... done.
Model loaded in 4.9s (load weights from disk: 1.2s, create model: 0.5s, apply weights to model: 2.1s, load textual inversion embeddings: 0.7s, calculate empty prompt: 0.2s).
100% 20/20 [00:09<00:00, 2.20it/s]
X/Y/Z plot will create 2 images on 1 2x1 grid. (Total steps to process: 40)
100% 20/20 [00:09<00:00, 2.18it/s]
100% 20/20 [00:09<00:00, 2.12it/s]
*** Error completing request
*** Arguments: ('task(n9dp069tm1booho)', <gradio.routes.Request object at 0x7e4bfdc11fc0>, 4, 'Photograph of cinematic photo realistic skin texture, photorealistic, raw portrait photo of 20 year old Ukrainina girl wearing white dress, big breast, neutral, diamond and angular face, grey eyes, straight and high nose, (with long purple pin curly hairstyle:1.5), (blemish pale skin, skin flaws:1.6), (freckles:1.7), 8k, realistic beautiful, gorgeous insanely detailed octane render, 35mgraph, film, bokeh, ultramodern, vibrant, professional, 4k, highly detailed background of mall, front view and side view', '(worst quality, low quality:1.4), (deformed, distorted, disfigured:1.2), poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, (mutated hands and fingers:1.4), disconnected limbs, mutation, mutated, blurry, amputation. tattoo, watermark, text, black and white photo', [], None, None, None, None, None, <PIL.Image.Image image mode=RGB size=1536x768 at 0x7E4BFDD8A1D0>, <PIL.Image.Image image mode=RGBA size=1536x768 at 0x7E4BFDD8AFB0>, 4, 0, 2, 1, 1, 9, 1.5, 0.95, 0.0, 768, 768, 1, 0, 1, 0, 0, '', '', '', [], False, [], '', 'upload', None, 8, False, 1, 0.5, 4, 0, 0.5, 2, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, '* `CFG Scale` should be 2 or lower.', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, 'start', '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 7, 'white dress,sport cloth', [], 0, '', [], 0, '', [], True, False, False, True, False, False, False, 0, False) {}
Traceback (most recent call last):
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/call_queue.py", line 74, in f
res = list(func(*args, **kwargs))
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/call_queue.py", line 53, in f
res = func(*args, **kwargs)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/img2img.py", line 240, in img2img
processed = modules.scripts.scripts_img2img.run(p, *args)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/scripts.py", line 780, in run
processed = script.run(p, *script_args)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/scripts/xyz_grid.py", line 769, in run
processed = draw_xyz_grid(
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/scripts/xyz_grid.py", line 380, in draw_xyz_grid
grid = images.draw_grid_annotations(grid, grid_max_w, grid_max_h, hor_texts, ver_texts, margin_size)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/images.py", line 228, in draw_grid_annotations
draw_texts(d, x, y, hor_texts[col], fnt, fontsize)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/images.py", line 171, in draw_texts
while drawing.multiline_textsize(line.text, font=fnt)[0] > line.allowed_width and fontsize > 0:
AttributeError: 'ImageDraw' object has no attribute 'multiline_textsize'
---
X/Y/Z plot will create 2 images on 1 2x1 grid. (Total steps to process: 40)
100% 20/20 [00:09<00:00, 2.07it/s]
100% 20/20 [00:10<00:00, 2.00it/s]
*** Error completing request
*** Arguments: ('task(fru5mdqk4k0ohne)', <gradio.routes.Request object at 0x7e4bfddd1c90>, 4, 'Photograph of cinematic photo realistic skin texture, photorealistic, raw portrait photo of 20 year old Ukrainina girl wearing white dress, big breast, neutral, diamond and angular face, grey eyes, straight and high nose, (with long purple pin curly hairstyle:1.5), (blemish pale skin, skin flaws:1.6), (freckles:1.7), 8k, realistic beautiful, gorgeous insanely detailed octane render, 35mgraph, film, bokeh, ultramodern, vibrant, professional, 4k, highly detailed background of mall, front view and side view', '(worst quality, low quality:1.4), (deformed, distorted, disfigured:1.2), poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, (mutated hands and fingers:1.4), disconnected limbs, mutation, mutated, blurry, amputation. tattoo, watermark, text, black and white photo', [], None, None, None, None, None, <PIL.Image.Image image mode=RGB size=1536x768 at 0x7E4BFDDD1FC0>, <PIL.Image.Image image mode=RGBA size=1536x768 at 0x7E4BFDDD1720>, 4, 0, 2, 1, 1, 9, 1.5, 0.95, 0.0, 768, 768, 1, 0, 1, 0, 0, '', '', '', [], False, [], '', 'upload', None, 8, False, 1, 0.5, 4, 0, 0.5, 2, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, '* `CFG Scale` should be 2 or lower.', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, 'start', '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 7, 'white dress,sport cloth', [], 0, '', [], 0, '', [], True, False, False, True, False, False, False, 0, False) {}
Traceback (most recent call last):
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/call_queue.py", line 74, in f
res = list(func(*args, **kwargs))
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/call_queue.py", line 53, in f
res = func(*args, **kwargs)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/img2img.py", line 240, in img2img
processed = modules.scripts.scripts_img2img.run(p, *args)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/scripts.py", line 780, in run
processed = script.run(p, *script_args)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/scripts/xyz_grid.py", line 769, in run
processed = draw_xyz_grid(
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/scripts/xyz_grid.py", line 380, in draw_xyz_grid
grid = images.draw_grid_annotations(grid, grid_max_w, grid_max_h, hor_texts, ver_texts, margin_size)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/images.py", line 228, in draw_grid_annotations
draw_texts(d, x, y, hor_texts[col], fnt, fontsize)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/images.py", line 171, in draw_texts
while drawing.multiline_textsize(line.text, font=fnt)[0] > line.allowed_width and fontsize > 0:
AttributeError: 'ImageDraw' object has no attribute 'multiline_textsize'
```
### Additional information
_No response_
|
open
|
2024-10-17T13:28:31Z
|
2024-10-17T21:44:08Z
|
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16558
|
[
"bug-report"
] |
david-chainreactdev
| 1
|
igorbenav/fastcrud
|
sqlalchemy
| 144
|
disable specific crud methods?
|
**Is your feature request related to a problem? Please describe.**
How can I disable `create`/`delete`, for example? I don't see it in the docs.
**Describe the solution you'd like**
In the crud_router constructor I set `create_schema=None` and `delete_schema=None`,
but that doesn't work: the docs still show those methods.
|
closed
|
2024-08-03T13:44:31Z
|
2024-08-04T00:48:20Z
|
https://github.com/igorbenav/fastcrud/issues/144
|
[
"enhancement"
] |
LeiYangGH
| 2
|