| repo_name (string, 9–75 chars) | topic (string, 30 classes) | issue_number (int64, 1–203k) | title (string, 1–976 chars) | body (string, 0–254k chars) | state (string, 2 classes) | created_at (string, 20 chars) | updated_at (string, 20 chars) | url (string, 38–105 chars) | labels (list, 0–9 items) | user_login (string, 1–39 chars) | comments_count (int64, 0–452) |
|---|---|---|---|---|---|---|---|---|---|---|---|
recommenders-team/recommenders
|
data-science
| 1,969
|
[BUG] Wide and Deep model raises an error --- no attribute 'NanLossDuringTrainingError'
|
### Description
### In which platform does it happen?
https://github.com/microsoft/recommenders/blob/main/examples/00_quick_start/wide_deep_movielens.ipynb
Following the notebook here, I changed to my own dataset, but it runs into this error:
```
AttributeError: module 'tensorflow._api.v2.train' has no attribute 'NanLossDuringTrainingError'
```
### How do we replicate the issue?
The dataset is very similar, so I wonder how a different dataset could cause this problem.
### Expected behavior (i.e. solution)
### Other Comments
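A possible lead (an assumption on my part, since no fix is confirmed here): the notebook targets the TF 1.x Estimator API, and in TF 2.x this exception moved from `tf.train` to `tf.estimator`. A minimal compatibility sketch:
```python
# Minimal sketch, assuming the failing code references the TF 1.x location
# tf.train.NanLossDuringTrainingError; TF 2.x exposes the same exception as
# tf.estimator.NanLossDuringTrainingError.
import tensorflow as tf

try:
    NanLossError = tf.train.NanLossDuringTrainingError      # TF 1.x location
except AttributeError:
    NanLossError = tf.estimator.NanLossDuringTrainingError  # TF 2.x location

try:
    pass  # model.train(...) would run here
except NanLossError:
    print("Training diverged: the loss became NaN.")
```
Note that the underlying NaN loss would still need fixing (for example, a lower learning rate or a check of the new dataset).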
|
closed
|
2023-08-16T16:20:12Z
|
2023-08-17T18:17:00Z
|
https://github.com/recommenders-team/recommenders/issues/1969
|
[
"bug"
] |
Lulu20220
| 1
|
xlwings/xlwings
|
automation
| 1,727
|
Executing Python code
|
#### OS: Windows 10
#### Version of xlwings: 0.24.9
#### Describe your issue (incl. Traceback!)
#### Include a minimal code sample to reproduce the issue (and attach a sample workbook if required!)
```python
import xlwings as xw

# def world():
#     wb = xw.Book.caller()
#     wb.sheets[0].range('A11').value = 'Hello World!'

x = 2
y = 45 * x
wb = xw.Book.caller()
wb.sheets[0].range('A13').value = y
```
This is a very basic question, sorry if it doesn't make sense. Following your example, I can run the Python function world() using RunPython "import hello; hello.world()". I have a Python script which, when executed, runs some actions (like turning on an instrument in the lab). How do I run such a script directly from Excel VBA with xlwings? For example, my code above doesn't define the function world(). Can I run this Python script from Excel VBA using xlwings to get a value of 90 in range A13?
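A sketch of one way this could work (the module and function names are assumptions): RunPython always calls a function, so the script body can simply be wrapped in one.
```python
# hello.py - minimal sketch; call it from VBA with:
#   RunPython "import hello; hello.compute()"
import xlwings as xw

def compute():
    x = 2
    y = 45 * x  # 90
    wb = xw.Book.caller()
    wb.sheets[0].range('A13').value = y
```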
|
closed
|
2021-10-05T01:54:49Z
|
2022-02-05T20:08:30Z
|
https://github.com/xlwings/xlwings/issues/1727
|
[] |
leyojoseph
| 1
|
raphaelvallat/pingouin
|
pandas
| 266
|
Unable to install on Python 3.7
|
[lazy_loader](https://pypi.org/project/lazy_loader/)
[pingouin](https://pypi.org/project/pingouin/)
`lazy_loader` requires Python >= 3.8.
Reason: Python 3.7 is not supported by `lazy_loader`.
|
closed
|
2022-05-15T04:54:38Z
|
2022-12-17T23:34:40Z
|
https://github.com/raphaelvallat/pingouin/issues/266
|
[
"invalid :triangular_flag_on_post:"
] |
mochazi
| 1
|
autokey/autokey
|
automation
| 720
|
All custom defined keybindings stopped working in "sticky keys" mode.
|
### Has this issue already been reported?
- [X] I have searched through the existing issues.
### Is this a question rather than an issue?
- [X] This is not a question.
### What type of issue is this?
_No response_
### Which Linux distribution did you use?
Debian.
### Which AutoKey GUI did you use?
_No response_
### Which AutoKey version did you use?
AutoKey-gtk
### How did you install AutoKey?
apt install autokey-gtk
### Can you briefly describe the issue?
All custom defined keybindings stopped working in "sticky keys" mode. For example, I have defined control+f to be right_arrow, but it stops working once I am in "sticky keys" mode, after activating "Treat a sequence of modifier keys as a combination" in Debian's "Accessibility" -> "Typing assistance" tab.
### Can the issue be reproduced?
Always
### What are the steps to reproduce the issue?
Enable "sticky keys:Treat a sequence of modifier keys as a combination" in Debian's "Accessibility" -> "Typing assistance" tab.
Type any keybinding defined in AutoKey.
### What should have happened?
The keybinding should keep working as before.
### What actually happened?
Nothing happens, as if the keybinding wasn't triggered.
### Do you have screenshots?
_No response_
### Can you provide the output of the AutoKey command?
_No response_
### Anything else?
_No response_
|
closed
|
2022-08-04T04:02:09Z
|
2022-08-05T20:54:47Z
|
https://github.com/autokey/autokey/issues/720
|
[
"autokey triggers",
"invalid"
] |
genehwung
| 3
|
AutoGPTQ/AutoGPTQ
|
nlp
| 176
|
[BUG] list index out of range for arch_list when building AutoGPTQ
|
**Describe the bug**
The int4 quantization implementation of [baichuan-7B-GPTQ](https://huggingface.co/TheBloke/baichuan-7B-GPTQ) is based on this project. In an attempt to test that LLM, I tried building a custom container for that purpose, but `AutoGPTQ` failed to compile.
The `Dockerfile`
```
FROM pytorch/pytorch:1.13.1-cuda11.6-cudnn8-devel
RUN apt update && apt install -y build-essential git && rm -rf /var/lib/apt/lists/*
WORKDIR /build
RUN git clone https://github.com/PanQiWei/AutoGPTQ
WORKDIR /build/AutoGPTQ
# RUN GITHUB_ACTIONS=true pip3 install .
RUN GITHUB_ACTIONS=true pip3 install -i https://pypi.tuna.tsinghua.edu.cn/simple .
# RUN pip3 install transformers
WORKDIR /workspace
```
**Hardware details**
**Software version**
Ubuntu 22.04 on WSL2
Docker Desktop 4.20.1
**To Reproduce**
Steps to reproduce the behavior:
1. build the image with `Dockerfile` above: `docker build -t autogptq:latest .`
Full log:
```
#0 16.59 Building wheels for collected packages: auto-gptq
#0 16.59 Building wheel for auto-gptq (setup.py): started
#0 41.98 Building wheel for auto-gptq (setup.py): finished with status 'error'
#0 41.99 error: subprocess-exited-with-error
#0 41.99
#0 41.99 × python setup.py bdist_wheel did not run successfully.
#0 41.99 │ exit code: 1
#0 41.99 ╰─> [123 lines of output]
#0 41.99 No CUDA runtime is found, using CUDA_HOME='/opt/conda'
#0 41.99 running bdist_wheel
#0 41.99 /opt/conda/lib/python3.10/site-packages/torch/utils/cpp_extension.py:476: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja.. Falling back to using the slow distutils backend.
#0 41.99 warnings.warn(msg.format('we could not find ninja.'))
#0 41.99 running build
#0 41.99 running build_py
#0 41.99 creating build
#0 41.99 creating build/lib.linux-x86_64-cpython-310
#0 41.99 creating build/lib.linux-x86_64-cpython-310/auto_gptq
#0 41.99 copying auto_gptq/__init__.py -> build/lib.linux-x86_64-cpython-310/auto_gptq
#0 41.99 creating build/lib.linux-x86_64-cpython-310/auto_gptq/eval_tasks
#0 41.99 copying auto_gptq/eval_tasks/text_summarization_task.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/eval_tasks
#0 41.99 copying auto_gptq/eval_tasks/sequence_classification_task.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/eval_tasks
#0 41.99 copying auto_gptq/eval_tasks/_base.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/eval_tasks
#0 41.99 copying auto_gptq/eval_tasks/__init__.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/eval_tasks
#0 41.99 copying auto_gptq/eval_tasks/language_modeling_task.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/eval_tasks
#0 41.99 creating build/lib.linux-x86_64-cpython-310/auto_gptq/quantization
#0 41.99 copying auto_gptq/quantization/quantizer.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/quantization
#0 41.99 copying auto_gptq/quantization/gptq.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/quantization
#0 41.99 copying auto_gptq/quantization/__init__.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/quantization
#0 41.99 creating build/lib.linux-x86_64-cpython-310/auto_gptq/nn_modules
#0 41.99 copying auto_gptq/nn_modules/fused_llama_mlp.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/nn_modules
#0 41.99 copying auto_gptq/nn_modules/fused_llama_attn.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/nn_modules
#0 41.99 copying auto_gptq/nn_modules/fused_gptj_attn.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/nn_modules
#0 41.99 copying auto_gptq/nn_modules/__init__.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/nn_modules
#0 41.99 copying auto_gptq/nn_modules/_fused_base.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/nn_modules
#0 41.99 creating build/lib.linux-x86_64-cpython-310/auto_gptq/modeling
#0 41.99 copying auto_gptq/modeling/moss.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/modeling
#0 41.99 copying auto_gptq/modeling/gpt_neox.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/modeling
#0 41.99 copying auto_gptq/modeling/gptj.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/modeling
#0 41.99 copying auto_gptq/modeling/baichuan.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/modeling
#0 41.99 copying auto_gptq/modeling/_utils.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/modeling
#0 41.99 copying auto_gptq/modeling/gpt2.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/modeling
#0 41.99 copying auto_gptq/modeling/auto.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/modeling
#0 41.99 copying auto_gptq/modeling/_const.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/modeling
#0 41.99 copying auto_gptq/modeling/codegen.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/modeling
#0 41.99 copying auto_gptq/modeling/opt.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/modeling
#0 41.99 copying auto_gptq/modeling/gpt_bigcode.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/modeling
#0 41.99 copying auto_gptq/modeling/rw.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/modeling
#0 41.99 copying auto_gptq/modeling/_base.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/modeling
#0 41.99 copying auto_gptq/modeling/__init__.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/modeling
#0 41.99 copying auto_gptq/modeling/llama.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/modeling
#0 41.99 copying auto_gptq/modeling/bloom.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/modeling
#0 41.99 creating build/lib.linux-x86_64-cpython-310/auto_gptq/utils
#0 41.99 copying auto_gptq/utils/peft_utils.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/utils
#0 41.99 copying auto_gptq/utils/import_utils.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/utils
#0 41.99 copying auto_gptq/utils/__init__.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/utils
#0 41.99 copying auto_gptq/utils/data_utils.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/utils
#0 41.99 creating build/lib.linux-x86_64-cpython-310/auto_gptq/eval_tasks/_utils
#0 41.99 copying auto_gptq/eval_tasks/_utils/__init__.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/eval_tasks/_utils
#0 41.99 copying auto_gptq/eval_tasks/_utils/classification_utils.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/eval_tasks/_utils
#0 41.99 copying auto_gptq/eval_tasks/_utils/generation_utils.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/eval_tasks/_utils
#0 41.99 creating build/lib.linux-x86_64-cpython-310/auto_gptq/nn_modules/qlinear
#0 41.99 copying auto_gptq/nn_modules/qlinear/qlinear_triton.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/nn_modules/qlinear
#0 41.99 copying auto_gptq/nn_modules/qlinear/qlinear_cuda_old.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/nn_modules/qlinear
#0 41.99 copying auto_gptq/nn_modules/qlinear/__init__.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/nn_modules/qlinear
#0 41.99 copying auto_gptq/nn_modules/qlinear/qlinear_cuda.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/nn_modules/qlinear
#0 41.99 creating build/lib.linux-x86_64-cpython-310/auto_gptq/nn_modules/triton_utils
#0 41.99 copying auto_gptq/nn_modules/triton_utils/custom_autotune.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/nn_modules/triton_utils
#0 41.99 copying auto_gptq/nn_modules/triton_utils/mixin.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/nn_modules/triton_utils
#0 41.99 copying auto_gptq/nn_modules/triton_utils/kernels.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/nn_modules/triton_utils
#0 41.99 copying auto_gptq/nn_modules/triton_utils/__init__.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/nn_modules/triton_utils
#0 41.99 running build_ext
#0 41.99 building 'autogptq_cuda_64' extension
#0 41.99 creating build/temp.linux-x86_64-cpython-310
#0 41.99 creating build/temp.linux-x86_64-cpython-310/autogptq_cuda
#0 41.99 gcc -pthread -B /opt/conda/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /opt/conda/include -fPIC -O2 -isystem /opt/conda/include -fPIC -I/opt/conda/lib/python3.10/site-packages/torch/include -I/opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.10/site-packages/torch/include/TH -I/opt/conda/lib/python3.10/site-packages/torch/include/THC -I/opt/conda/include -Iautogptq_cuda -I/opt/conda/include/python3.10 -c autogptq_cuda/autogptq_cuda_64.cpp -o build/temp.linux-x86_64-cpython-310/autogptq_cuda/autogptq_cuda_64.o -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -DTORCH_EXTENSION_NAME=autogptq_cuda_64 -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
#0 41.99 Traceback (most recent call last):
#0 41.99 File "<string>", line 2, in <module>
#0 41.99 File "<pip-setuptools-caller>", line 34, in <module>
#0 41.99 File "/build/AutoGPTQ/setup.py", line 98, in <module>
#0 41.99 setup(
#0 41.99 File "/opt/conda/lib/python3.10/site-packages/setuptools/__init__.py", line 87, in setup
#0 41.99 return distutils.core.setup(**attrs)
#0 41.99 File "/opt/conda/lib/python3.10/site-packages/setuptools/_distutils/core.py", line 185, in setup
#0 41.99 return run_commands(dist)
#0 41.99 File "/opt/conda/lib/python3.10/site-packages/setuptools/_distutils/core.py", line 201, in run_commands
#0 41.99 dist.run_commands()
#0 41.99 File "/opt/conda/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 968, in run_commands
#0 41.99 self.run_command(cmd)
#0 41.99 File "/opt/conda/lib/python3.10/site-packages/setuptools/dist.py", line 1217, in run_command
#0 41.99 super().run_command(command)
#0 41.99 File "/opt/conda/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 987, in run_command
#0 41.99 cmd_obj.run()
#0 41.99 File "/opt/conda/lib/python3.10/site-packages/wheel/bdist_wheel.py", line 299, in run
#0 41.99 self.run_command('build')
#0 41.99 File "/opt/conda/lib/python3.10/site-packages/setuptools/_distutils/cmd.py", line 319, in run_command
#0 41.99 self.distribution.run_command(command)
#0 41.99 File "/opt/conda/lib/python3.10/site-packages/setuptools/dist.py", line 1217, in run_command
#0 41.99 super().run_command(command)
#0 41.99 File "/opt/conda/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 987, in run_command
#0 41.99 cmd_obj.run()
#0 41.99 File "/opt/conda/lib/python3.10/site-packages/setuptools/_distutils/command/build.py", line 132, in run
#0 41.99 self.run_command(cmd_name)
#0 41.99 File "/opt/conda/lib/python3.10/site-packages/setuptools/_distutils/cmd.py", line 319, in run_command
#0 41.99 self.distribution.run_command(command)
#0 41.99 File "/opt/conda/lib/python3.10/site-packages/setuptools/dist.py", line 1217, in run_command
#0 41.99 super().run_command(command)
#0 41.99 File "/opt/conda/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 987, in run_command
#0 41.99 cmd_obj.run()
#0 41.99 File "/opt/conda/lib/python3.10/site-packages/setuptools/command/build_ext.py", line 84, in run
#0 41.99 _build_ext.run(self)
#0 41.99 File "/opt/conda/lib/python3.10/site-packages/setuptools/_distutils/command/build_ext.py", line 346, in run
#0 41.99 self.build_extensions()
#0 41.99 File "/opt/conda/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 843, in build_extensions
#0 41.99 build_ext.build_extensions(self)
#0 41.99 File "/opt/conda/lib/python3.10/site-packages/setuptools/_distutils/command/build_ext.py", line 466, in build_extensions
#0 41.99 self._build_extensions_serial()
#0 41.99 File "/opt/conda/lib/python3.10/site-packages/setuptools/_distutils/command/build_ext.py", line 492, in _build_extensions_serial
#0 41.99 self.build_extension(ext)
#0 41.99 File "/opt/conda/lib/python3.10/site-packages/setuptools/command/build_ext.py", line 246, in build_extension
#0 41.99 _build_ext.build_extension(self, ext)
#0 41.99 File "/opt/conda/lib/python3.10/site-packages/setuptools/_distutils/command/build_ext.py", line 547, in build_extension
#0 41.99 objects = self.compiler.compile(
#0 41.99 File "/opt/conda/lib/python3.10/site-packages/setuptools/_distutils/ccompiler.py", line 599, in compile
#0 41.99 self._compile(obj, src, ext, cc_args, extra_postargs, pp_opts)
#0 41.99 File "/opt/conda/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 581, in unix_wrap_single_compile
#0 41.99 cflags = unix_cuda_flags(cflags)
#0 41.99 File "/opt/conda/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 548, in unix_cuda_flags
#0 41.99 cflags + _get_cuda_arch_flags(cflags))
#0 41.99 File "/opt/conda/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1780, in _get_cuda_arch_flags
#0 41.99 arch_list[-1] += '+PTX'
#0 41.99 IndexError: list index out of range
#0 41.99 [end of output]
#0 41.99
#0 41.99 note: This error originates from a subprocess, and is likely not a problem with pip.
#0 41.99 ERROR: Failed building wheel for auto-gptq
#0 41.99 Running setup.py clean for auto-gptq
#0 43.35 Failed to build auto-gptq
#0 44.28 Installing collected packages: tokenizers, safetensors, xxhash, tzdata, rouge, regex, python-dateutil, pyarrow, packaging, multidict, fsspec, frozenlist, dill, async-timeout, yarl, pandas, multiprocess, huggingface-hub, aiosignal, accelerate, transformers, aiohttp, peft, datasets, auto-gptq
#0 52.98 Running setup.py install for auto-gptq: started
#0 74.51 Running setup.py install for auto-gptq: finished with status 'error'
#0 74.52 error: subprocess-exited-with-error
#0 74.52
#0 74.52 × Running setup.py install for auto-gptq did not run successfully.
#0 74.52 │ exit code: 1
#0 74.52 ╰─> [127 lines of output]
#0 74.52 No CUDA runtime is found, using CUDA_HOME='/opt/conda'
#0 74.52 running install
#0 74.52 /opt/conda/lib/python3.10/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
#0 74.52 warnings.warn(
#0 74.52 running build
#0 74.52 running build_py
#0 74.52 creating build
#0 74.52 creating build/lib.linux-x86_64-cpython-310
#0 74.52 creating build/lib.linux-x86_64-cpython-310/auto_gptq
#0 74.52 copying auto_gptq/__init__.py -> build/lib.linux-x86_64-cpython-310/auto_gptq
#0 74.52 creating build/lib.linux-x86_64-cpython-310/auto_gptq/eval_tasks
#0 74.52 copying auto_gptq/eval_tasks/text_summarization_task.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/eval_tasks
#0 74.52 copying auto_gptq/eval_tasks/sequence_classification_task.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/eval_tasks
#0 74.52 copying auto_gptq/eval_tasks/_base.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/eval_tasks
#0 74.52 copying auto_gptq/eval_tasks/__init__.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/eval_tasks
#0 74.52 copying auto_gptq/eval_tasks/language_modeling_task.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/eval_tasks
#0 74.52 creating build/lib.linux-x86_64-cpython-310/auto_gptq/quantization
#0 74.52 copying auto_gptq/quantization/quantizer.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/quantization
#0 74.52 copying auto_gptq/quantization/gptq.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/quantization
#0 74.52 copying auto_gptq/quantization/__init__.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/quantization
#0 74.52 creating build/lib.linux-x86_64-cpython-310/auto_gptq/nn_modules
#0 74.52 copying auto_gptq/nn_modules/fused_llama_mlp.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/nn_modules
#0 74.52 copying auto_gptq/nn_modules/fused_llama_attn.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/nn_modules
#0 74.52 copying auto_gptq/nn_modules/fused_gptj_attn.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/nn_modules
#0 74.52 copying auto_gptq/nn_modules/__init__.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/nn_modules
#0 74.52 copying auto_gptq/nn_modules/_fused_base.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/nn_modules
#0 74.52 creating build/lib.linux-x86_64-cpython-310/auto_gptq/modeling
#0 74.52 copying auto_gptq/modeling/moss.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/modeling
#0 74.52 copying auto_gptq/modeling/gpt_neox.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/modeling
#0 74.52 copying auto_gptq/modeling/gptj.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/modeling
#0 74.52 copying auto_gptq/modeling/baichuan.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/modeling
#0 74.52 copying auto_gptq/modeling/_utils.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/modeling
#0 74.52 copying auto_gptq/modeling/gpt2.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/modeling
#0 74.52 copying auto_gptq/modeling/auto.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/modeling
#0 74.52 copying auto_gptq/modeling/_const.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/modeling
#0 74.52 copying auto_gptq/modeling/codegen.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/modeling
#0 74.52 copying auto_gptq/modeling/opt.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/modeling
#0 74.52 copying auto_gptq/modeling/gpt_bigcode.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/modeling
#0 74.52 copying auto_gptq/modeling/rw.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/modeling
#0 74.52 copying auto_gptq/modeling/_base.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/modeling
#0 74.52 copying auto_gptq/modeling/__init__.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/modeling
#0 74.52 copying auto_gptq/modeling/llama.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/modeling
#0 74.52 copying auto_gptq/modeling/bloom.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/modeling
#0 74.52 creating build/lib.linux-x86_64-cpython-310/auto_gptq/utils
#0 74.52 copying auto_gptq/utils/peft_utils.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/utils
#0 74.52 copying auto_gptq/utils/import_utils.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/utils
#0 74.52 copying auto_gptq/utils/__init__.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/utils
#0 74.52 copying auto_gptq/utils/data_utils.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/utils
#0 74.52 creating build/lib.linux-x86_64-cpython-310/auto_gptq/eval_tasks/_utils
#0 74.52 copying auto_gptq/eval_tasks/_utils/__init__.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/eval_tasks/_utils
#0 74.52 copying auto_gptq/eval_tasks/_utils/classification_utils.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/eval_tasks/_utils
#0 74.52 copying auto_gptq/eval_tasks/_utils/generation_utils.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/eval_tasks/_utils
#0 74.52 creating build/lib.linux-x86_64-cpython-310/auto_gptq/nn_modules/qlinear
#0 74.52 copying auto_gptq/nn_modules/qlinear/qlinear_triton.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/nn_modules/qlinear
#0 74.52 copying auto_gptq/nn_modules/qlinear/qlinear_cuda_old.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/nn_modules/qlinear
#0 74.52 copying auto_gptq/nn_modules/qlinear/__init__.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/nn_modules/qlinear
#0 74.52 copying auto_gptq/nn_modules/qlinear/qlinear_cuda.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/nn_modules/qlinear
#0 74.52 creating build/lib.linux-x86_64-cpython-310/auto_gptq/nn_modules/triton_utils
#0 74.52 copying auto_gptq/nn_modules/triton_utils/custom_autotune.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/nn_modules/triton_utils
#0 74.52 copying auto_gptq/nn_modules/triton_utils/mixin.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/nn_modules/triton_utils
#0 74.52 copying auto_gptq/nn_modules/triton_utils/kernels.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/nn_modules/triton_utils
#0 74.52 copying auto_gptq/nn_modules/triton_utils/__init__.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/nn_modules/triton_utils
#0 74.52 running build_ext
#0 74.52 /opt/conda/lib/python3.10/site-packages/torch/utils/cpp_extension.py:476: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja.. Falling back to using the slow distutils backend.
#0 74.52 warnings.warn(msg.format('we could not find ninja.'))
#0 74.52 building 'autogptq_cuda_64' extension
#0 74.52 creating build/temp.linux-x86_64-cpython-310
#0 74.52 creating build/temp.linux-x86_64-cpython-310/autogptq_cuda
#0 74.52 gcc -pthread -B /opt/conda/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /opt/conda/include -fPIC -O2 -isystem /opt/conda/include -fPIC -I/opt/conda/lib/python3.10/site-packages/torch/include -I/opt/conda/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.10/site-packages/torch/include/TH -I/opt/conda/lib/python3.10/site-packages/torch/include/THC -I/opt/conda/include -Iautogptq_cuda -I/opt/conda/include/python3.10 -c autogptq_cuda/autogptq_cuda_64.cpp -o build/temp.linux-x86_64-cpython-310/autogptq_cuda/autogptq_cuda_64.o -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -DTORCH_EXTENSION_NAME=autogptq_cuda_64 -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
#0 74.52 Traceback (most recent call last):
#0 74.52 File "<string>", line 2, in <module>
#0 74.52 File "<pip-setuptools-caller>", line 34, in <module>
#0 74.52 File "/build/AutoGPTQ/setup.py", line 98, in <module>
#0 74.52 setup(
#0 74.52 File "/opt/conda/lib/python3.10/site-packages/setuptools/__init__.py", line 87, in setup
#0 74.52 return distutils.core.setup(**attrs)
#0 74.52 File "/opt/conda/lib/python3.10/site-packages/setuptools/_distutils/core.py", line 185, in setup
#0 74.52 return run_commands(dist)
#0 74.52 File "/opt/conda/lib/python3.10/site-packages/setuptools/_distutils/core.py", line 201, in run_commands
#0 74.52 dist.run_commands()
#0 74.52 File "/opt/conda/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 968, in run_commands
#0 74.52 self.run_command(cmd)
#0 74.52 File "/opt/conda/lib/python3.10/site-packages/setuptools/dist.py", line 1217, in run_command
#0 74.52 super().run_command(command)
#0 74.52 File "/opt/conda/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 987, in run_command
#0 74.52 cmd_obj.run()
#0 74.52 File "/opt/conda/lib/python3.10/site-packages/setuptools/command/install.py", line 68, in run
#0 74.52 return orig.install.run(self)
#0 74.52 File "/opt/conda/lib/python3.10/site-packages/setuptools/_distutils/command/install.py", line 698, in run
#0 74.52 self.run_command('build')
#0 74.52 File "/opt/conda/lib/python3.10/site-packages/setuptools/_distutils/cmd.py", line 319, in run_command
#0 74.52 self.distribution.run_command(command)
#0 74.52 File "/opt/conda/lib/python3.10/site-packages/setuptools/dist.py", line 1217, in run_command
#0 74.52 super().run_command(command)
#0 74.52 File "/opt/conda/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 987, in run_command
#0 74.52 cmd_obj.run()
#0 74.52 File "/opt/conda/lib/python3.10/site-packages/setuptools/_distutils/command/build.py", line 132, in run
#0 74.52 self.run_command(cmd_name)
#0 74.52 File "/opt/conda/lib/python3.10/site-packages/setuptools/_distutils/cmd.py", line 319, in run_command
#0 74.52 self.distribution.run_command(command)
#0 74.52 File "/opt/conda/lib/python3.10/site-packages/setuptools/dist.py", line 1217, in run_command
#0 74.52 super().run_command(command)
#0 74.52 File "/opt/conda/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 987, in run_command
#0 74.52 cmd_obj.run()
#0 74.52 File "/opt/conda/lib/python3.10/site-packages/setuptools/command/build_ext.py", line 84, in run
#0 74.52 _build_ext.run(self)
#0 74.52 File "/opt/conda/lib/python3.10/site-packages/setuptools/_distutils/command/build_ext.py", line 346, in run
#0 74.52 self.build_extensions()
#0 74.52 File "/opt/conda/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 843, in build_extensions
#0 74.52 build_ext.build_extensions(self)
#0 74.52 File "/opt/conda/lib/python3.10/site-packages/setuptools/_distutils/command/build_ext.py", line 466, in build_extensions
#0 74.52 self._build_extensions_serial()
#0 74.52 File "/opt/conda/lib/python3.10/site-packages/setuptools/_distutils/command/build_ext.py", line 492, in _build_extensions_serial
#0 74.52 self.build_extension(ext)
#0 74.52 File "/opt/conda/lib/python3.10/site-packages/setuptools/command/build_ext.py", line 246, in build_extension
#0 74.52 _build_ext.build_extension(self, ext)
#0 74.52 File "/opt/conda/lib/python3.10/site-packages/setuptools/_distutils/command/build_ext.py", line 547, in build_extension
#0 74.52 objects = self.compiler.compile(
#0 74.52 File "/opt/conda/lib/python3.10/site-packages/setuptools/_distutils/ccompiler.py", line 599, in compile
#0 74.52 self._compile(obj, src, ext, cc_args, extra_postargs, pp_opts)
#0 74.52 File "/opt/conda/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 581, in unix_wrap_single_compile
#0 74.52 cflags = unix_cuda_flags(cflags)
#0 74.52 File "/opt/conda/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 548, in unix_cuda_flags
#0 74.52 cflags + _get_cuda_arch_flags(cflags))
#0 74.52 File "/opt/conda/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1780, in _get_cuda_arch_flags
#0 74.52 arch_list[-1] += '+PTX'
#0 74.52 IndexError: list index out of range
#0 74.52 [end of output]
#0 74.52
#0 74.52 note: This error originates from a subprocess, and is likely not a problem with pip.
#0 74.52 error: legacy-install-failure
#0 74.52
#0 74.52 × Encountered error while trying to install package.
#0 74.52 ╰─> auto-gptq
#0 74.52
#0 74.52 note: This is an issue with the package mentioned above, not pip.
#0 74.52 hint: See above for output from the failure.
------
Dockerfile:10
--------------------
8 | WORKDIR /build/AutoGPTQ
9 | # RUN GITHUB_ACTIONS=true pip3 install .
10 | >>> RUN GITHUB_ACTIONS=true pip3 install -i https://pypi.tuna.tsinghua.edu.cn/simple .
11 | # RUN pip3 install transformers
12 | WORKDIR /workspace
--------------------
ERROR: failed to solve: process "/bin/sh -c GITHUB_ACTIONS=true pip3 install -i https://pypi.tuna.tsinghua.edu.cn/simple ." did not complete successfully: exit code: 1
```
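A possible workaround (an assumption: `_get_cuda_arch_flags` sees an empty `arch_list` because no GPU is visible during `docker build` and `TORCH_CUDA_ARCH_LIST` is unset): pin the target CUDA architectures explicitly so the build does not need to detect a GPU. A sketch of the Dockerfile change, with an example architecture list that should be matched to the actual GPUs:
```
# Set the architectures explicitly so torch's cpp_extension does not try
# to detect a GPU at image build time (none is visible to docker build).
ENV TORCH_CUDA_ARCH_LIST="7.0;7.5;8.0;8.6+PTX"
RUN GITHUB_ACTIONS=true pip3 install -i https://pypi.tuna.tsinghua.edu.cn/simple .
```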
|
closed
|
2023-06-25T11:30:38Z
|
2023-06-26T08:19:30Z
|
https://github.com/AutoGPTQ/AutoGPTQ/issues/176
|
[
"bug"
] |
BorisPolonsky
| 1
|
MaartenGr/BERTopic
|
nlp
| 1,290
|
TypeError: cannot unpack non-iterable BERTopic object
|
Greetings MaartenGr,
could you please explain this type of error to me?
```
TypeError: cannot unpack non-iterable BERTopic object
```
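For context, a guess at the usual cause of this message (an assumption, since no code is shown): unpacking the model object itself instead of the pair returned by `fit_transform`. A minimal sketch:
```python
from bertopic import BERTopic

docs = ["first document", "second document"]  # placeholder corpus

topic_model = BERTopic()

# Raises "TypeError: cannot unpack non-iterable BERTopic object",
# because fit() returns the model itself:
# topics, probs = topic_model.fit(docs)

# fit_transform() returns the (topics, probabilities) pair to unpack:
topics, probs = topic_model.fit_transform(docs)
```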
Thanks in advance.
|
closed
|
2023-05-22T22:35:35Z
|
2023-09-27T08:59:37Z
|
https://github.com/MaartenGr/BERTopic/issues/1290
|
[] |
Keamww2021
| 2
|
RobertCraigie/prisma-client-py
|
pydantic
| 19
|
Add support for selecting fields
|
## Problem
A crucial part of modern and performant ORMs is the ability to choose which fields are returned; Prisma Client Python is currently missing this feature.
## Mypy solution
As we have a mypy plugin we can dynamically modify types on the fly, which means we would be able to offer a more ergonomic solution.
```py
from typing import Iterable, Literal, Optional, overload

from pydantic import BaseModel


class Model(BaseModel):
    id: str
    name: str
    points: Optional[int]


class SelectedModel(BaseModel):
    id: Optional[str]
    name: Optional[str]
    points: Optional[int]


ModelSelect = Iterable[Literal['id', 'name', 'points']]


@overload
def action(
    ...
) -> Model:
    ...


@overload
def action(
    ...,
    select: ModelSelect,
) -> SelectedModel:
    ...


model = action(select={'id', 'name'})
```
The mypy plugin would then dynamically remove the `Optional` from the model for every field that is selected; we might also be able to remove the fields that aren't selected, although I don't know if this is possible.
The downside to a solution like this is that unreachable code will not trigger an error when type checking with a type checker other than mypy, e.g.
```py
user = await client.user.find_first(select={'name'})

if user.id is not None:
    print(user.id)
```
This will pass type checks, although the if block will never be run.
EDIT: A potential solution for the above would be to not use optional and instead use our own custom type, e.g. maybe something like `PrismaMaybeUnset`. This has its own downsides though.
EDIT: I also think we may also want to support setting a "default include" value so that relations will always be fetched unless explicitly given `False`. This will not change the generated types and they will still be `Optional[T]`.
## Type checker agnostic solution
After #59 is implemented the query builder should only select the fields that are present on the given `BaseModel`.
This would mean that users could generate partial types and then easily use them to select certain fields.
```py
User.create_partial('UserOnlyName', include={'name'})
```
```py
from prisma.partials import UserOnlyName
user = await UserOnlyName.prisma().find_unique(where={'id': 'abc'})
```
Or create models by themselves
```py
class User(BaseUser):
    name: str

user = await User.prisma().find_unique(where={'id': 'abc'})
```
This will make typing generic functions that process models more difficult; for example, the following function would not accept custom models:
```py
def process_user(user: User) -> None:
    ...
```
It could however be modified to accept objects with the correct properties by using a `Protocol`.
```py
class UserWithID(Protocol):
    id: str

def process_user(user: UserWithID):
    ...
```
|
closed
|
2021-06-14T18:59:57Z
|
2023-03-08T12:41:27Z
|
https://github.com/RobertCraigie/prisma-client-py/issues/19
|
[
"kind/feature",
"level/advanced",
"priority/medium"
] |
RobertCraigie
| 4
|
K3D-tools/K3D-jupyter
|
jupyter
| 17
|
Plot objects persist
|
If one has a plot with e.g. a single object, then it cannot be removed:
```python
from k3d import K3D

plot = K3D()
plot += K3D.text("HEY", (1, 1, 1))
print("Expect one object:", plot.objects)

plot = K3D()
print("Should be an empty list:", plot.objects)
```
|
closed
|
2017-05-04T10:18:59Z
|
2017-05-07T06:44:28Z
|
https://github.com/K3D-tools/K3D-jupyter/issues/17
|
[] |
marcinofulus
| 0
|
nalepae/pandarallel
|
pandas
| 97
|
Bug with the progress bar
|
Hi!
Thanks for the nice tool! :)
I get an error, but only if I use the progress bar:
`TypeError: ("argument of type 'int' is not iterable", 'occurred at index M05218:191:000000000-D7R5H:1:1102:14482:19336')`
It works well without the bar...
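A minimal sketch of the setup (the DataFrame and the applied function here are assumptions, since the real code is not shown):
```python
import pandas as pd
from pandarallel import pandarallel

df = pd.DataFrame({"seq": ["ACGT", "TTGA", "GGCC"]})

# Works fine without the progress bar:
pandarallel.initialize()
df["length"] = df.parallel_apply(lambda row: len(row["seq"]), axis=1)

# Reported to fail with the TypeError above once the bar is enabled:
pandarallel.initialize(progress_bar=True)
df["length"] = df.parallel_apply(lambda row: len(row["seq"]), axis=1)
```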
Any idea?
Cheers,
Mathieu
|
closed
|
2020-06-10T12:22:35Z
|
2022-03-14T20:36:46Z
|
https://github.com/nalepae/pandarallel/issues/97
|
[] |
mbahin
| 1
|
mwaskom/seaborn
|
data-visualization
| 3,268
|
Wrong ylim using pairplot
|
`pairplot` seems not to support `sharey=True` the way `FacetGrid` does. This may cause unnecessary problems when I have to draw a plot with different ylims using `pairplot`. Just take the iris dataset as an example.
```python
import matplotlib.pyplot as plt
import seaborn as sns
iris = sns.load_dataset('iris')
# pairplot() example
g = sns.pairplot(iris, kind='scatter', diag_kind='hist', grid_kws=dict('sepal_length'))
plt.show()
```

Though I pass `'sepal_length'` to the function, the ylim of the histogram is not the value I want. If I use `sns.histplot` to draw the histogram of sepal_length in the iris dataset, the ylim is (0, 25).
```python
sns.histplot(iris, x='sepal_length')
```

|
closed
|
2023-02-19T05:34:46Z
|
2023-02-19T17:31:01Z
|
https://github.com/mwaskom/seaborn/issues/3268
|
[] |
kebuAAA
| 3
|
ymcui/Chinese-LLaMA-Alpaca
|
nlp
| 451
|
How is adapter_config generated when pre-training on the LLaMA base model?
|
*Hint: fill in the [ ] with an x to tick a box. Delete this line when asking, and keep only the applicable options.*
### Detailed problem description
1. The base model is llama-7b.
2. The tokenizer was produced with merge_tokenizers (base model + Chinese sentencepiece model).
3. Training with the pre-training script produced the output directory below (screenshot omitted).
4. In the [wiki](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki/%E9%A2%84%E8%AE%AD%E7%BB%83%E8%84%9A%E6%9C%AC#%E8%AE%AD%E7%BB%83%E5%90%8E%E6%96%87%E4%BB%B6%E6%95%B4%E7%90%86) section on organizing files after training, where is adapter_config generated from?
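For reference, a sketch of where adapter_config.json normally comes from (an assumption on my part: the pre-training script wraps the base model in a PEFT/LoRA adapter, and `save_pretrained` on the wrapped model writes that file; the hyperparameters below are illustrative):
```python
from peft import LoraConfig, get_peft_model

peft_config = LoraConfig(r=8, lora_alpha=32, target_modules=["q_proj", "v_proj"])
model = get_peft_model(base_model, peft_config)  # base_model: the loaded LLaMA model

# ... training ...

model.save_pretrained("output/")  # writes adapter_config.json plus the adapter weights
```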
### Screenshots or logs
(screenshot omitted)
### Required checks (for the first three items, keep only the one you are asking about)
- [x] **Base model**: LLaMA / Alpaca / LLaMA-Plus / Alpaca-Plus
- [ ] **Operating system**: Windows / MacOS / Linux
- [ ] **Issue category**: download / model conversion and merging / model training and fine-tuning / model inference (🤗 transformers) / model quantization and deployment (llama.cpp, text-generation-webui, LlamaChat) / output quality / other
- [x] (Required) Since the dependencies are updated frequently, make sure you have followed the steps in the [Wiki](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki)
- [ ] (Required) I have read the [FAQ](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki/常见问题) and searched the existing issues without finding a similar problem or solution
- [ ] (Required) For issues with third-party tools, e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), or [LlamaChat](https://github.com/alexrozanski/LlamaChat), it is recommended to look for solutions in the corresponding projects
|
closed
|
2023-05-29T09:20:06Z
|
2023-05-29T10:39:35Z
|
https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/451
|
[] |
richardkelly2014
| 5
|
miguelgrinberg/microblog
|
flask
| 228
|
Problem in Chapter 3 using flask-wtf
|
Hi, This is not really an issue but FYI, in working through chapter 3, I had trouble using the current version of flask-wtf or, technically, werkzeug. There was apparently a change to the way werkzeug handled url encoding as documented in [this link](https://github.com/pallets/flask/issues/3481). The solution (albeit temporary I guess) is to downgrade to a prior version of werkzeug as noted in that link.
Here is my stack trace:
[werkzeug_error.txt](https://github.com/miguelgrinberg/microblog/files/4536379/werkzeug_error.txt)
|
closed
|
2020-04-26T20:07:39Z
|
2020-06-30T22:48:50Z
|
https://github.com/miguelgrinberg/microblog/issues/228
|
[
"question"
] |
TriumphTodd
| 1
|
plotly/dash-core-components
|
dash
| 756
|
Mapbox Graph wrapped in dcc.Loading lags one update behind
|
Hi there,
when I wrap a Mapbox scatter plot in a dcc.Loading component, updates to this figure seem to be delayed by one update. The same logic with normal scatter plots works fine. I could reproduce this behavior with Dash Core Components 1.8.0 and previous versions.
Here's a demo (screen recordings omitted), and here's the code:
```python
import dash
import dash_html_components as html
import dash_core_components as dcc
import plotly.graph_objects as go
from dash.dependencies import Output, Input

app = dash.Dash(__name__)

lat = [10, 10, 10, 10]
lon = [10, 20, 30, 40]

app.layout = html.Div([
    html.Div('Select a point on the map:'),
    dcc.Slider(min=0, max=3, step=1, value=0, marks=[0, 1, 2, 3], id='slider'),
    dcc.Loading(dcc.Graph(id='graphContainer'))
])


@app.callback(Output('graphContainer', 'figure'),
              [Input('slider', 'value')])
def UpdateGraph(value):
    return {'data': [go.Scattermapbox(lat=lat, lon=lon, selectedpoints=[value])],
            'layout': go.Layout(mapbox_style='carto-positron')}


app.run_server()
```
|
closed
|
2020-02-15T14:19:45Z
|
2020-05-12T20:57:30Z
|
https://github.com/plotly/dash-core-components/issues/756
|
[] |
ghost
| 6
|
qubvel-org/segmentation_models.pytorch
|
computer-vision
| 550
|
Custom weights
|
Is there a way to pass custom weights for models other than the ones mentioned in the table?
|
closed
|
2022-01-31T05:38:49Z
|
2022-01-31T08:26:31Z
|
https://github.com/qubvel-org/segmentation_models.pytorch/issues/550
|
[] |
pranavsinghps1
| 2
|
aidlearning/AidLearning-FrameWork
|
jupyter
| 204
|
Considerations on network security
|
The cloud_ip feature is an excellent function, but it transmits passwords in plaintext over the LAN HTTP protocol, which is very unsafe on a large LAN such as a campus network. Others can obtain your password (especially a weak password) by capturing packets and gain access to your personal phone's data. My suggestion is to add an IP-access whitelist mechanism that temporarily authorizes the IPs of incoming requests. Please let me know whether this suggestion is feasible or practical, or if there is a better method. A sketch of the idea follows.
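A minimal sketch of the suggested allowlist check (all names are illustrative, not AidLearning's actual code):
```python
# Only serve requests whose client IP has been explicitly authorized.
ALLOWED_IPS: set[str] = set()

def authorize(ip: str) -> None:
    """Temporarily grant access to a client IP, e.g. after user approval."""
    ALLOWED_IPS.add(ip)

def is_authorized(ip: str) -> bool:
    return ip in ALLOWED_IPS
```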
|
closed
|
2022-01-04T02:59:38Z
|
2022-12-05T12:23:46Z
|
https://github.com/aidlearning/AidLearning-FrameWork/issues/204
|
[] |
LY1806620741
| 5
|
ResidentMario/missingno
|
data-visualization
| 2
|
Option to remove the sparkline
|
Hi,
Many thanks for the awesome work! When the number of rows is large, the sparkline becomes less useful: it is harder to visually gauge the number of features available just by looking at it. I am wondering if an option to toggle the sparkline off could be added.
|
closed
|
2016-03-30T05:46:47Z
|
2016-04-08T05:29:41Z
|
https://github.com/ResidentMario/missingno/issues/2
|
[
"enhancement"
] |
nipunbatra
| 3
|
andrew-hossack/dash-tools
|
plotly
| 105
|
https://github.com/badges/shields/issues/8671
|
closed
|
2023-02-02T15:08:15Z
|
2023-10-17T15:51:56Z
|
https://github.com/andrew-hossack/dash-tools/issues/105
|
[] |
andrew-hossack
| 0
|
apache/airflow
|
data-science
| 48,026
|
KPO mapped task failing
|
### Apache Airflow version
3.0.0
### If "Other Airflow 2 version" selected, which one?
_No response_
### What happened?
KPO mapped task failing
**Error**
```
scheduler [2025-03-20T17:28:28.774+0000] {dagrun.py:994} INFO - Marking run <DagRun kpo_override_resource_negative_case @ 2025-03-20 17:15:52.320478+00:00: scheduled__2025-03-20T17:15:52.320478+00:00, state:running, queued_at: 2025-03-20 17:15:57.320782+00:00. run_type: scheduled> successful
scheduler Dag run in success state
scheduler Dag run start:2025-03-20 17:15:57.356238+00:00 end:2025-03-20 17:28:28.775063+00:00
scheduler [2025-03-20T17:28:28.780+0000] {dagrun.py:1041} INFO - DagRun Finished: dag_id=kpo_override_resource_negative_case, logical_date=2025-03-20 17:15:52.320478+00:00, run_id=scheduled__2025-03-20T17:15:52.320478+00:00, run_start_date=2025-03-20 17:15:57.356238+00:00, run_end_date=2025-03-20 17:28:28.775063+00:00, run_duration=751.418825, state=success, run_type=scheduled, data_interval_start=2025-03-20 17:15:52.320478+00:00, data_interval_end=2025-03-20 17:15:52.320478+00:00,
scheduler [2025-03-20T17:28:28.792+0000] {adapter.py:412} WARNING - Failed to emit DAG success event:
scheduler Traceback (most recent call last):
scheduler File "/usr/local/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1910, in _execute_context
scheduler self.dialect.do_execute(
scheduler File "/usr/local/lib/python3.12/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute
scheduler cursor.execute(statement, parameters)
scheduler psycopg2.OperationalError: lost synchronization with server: got message type "r", length 1919509605
scheduler
scheduler
scheduler The above exception was the direct cause of the following exception:
scheduler
scheduler Traceback (most recent call last):
scheduler File "/usr/local/lib/python3.12/site-packages/airflow/providers/openlineage/plugins/adapter.py", line 397, in dag_success
scheduler **get_airflow_state_run_facet(dag_id, run_id, task_ids, dag_run_state),
scheduler ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
scheduler File "/usr/local/lib/python3.12/site-packages/airflow/providers/openlineage/utils/utils.py", line 561, in get_airflow_state_run_facet
scheduler tis = DagRun.fetch_task_instances(dag_id=dag_id, run_id=run_id, task_ids=task_ids)
scheduler ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
scheduler File "/usr/local/lib/python3.12/site-packages/airflow/utils/session.py", line 101, in wrapper
scheduler return func(*args, session=session, **kwargs)
scheduler ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
scheduler File "/usr/local/lib/python3.12/site-packages/airflow/models/dagrun.py", line 708, in fetch_task_instances
scheduler return session.scalars(tis).all()
scheduler ^^^^^^^^^^^^^^^^^^^^
scheduler File "/usr/local/lib/python3.12/site-packages/sqlalchemy/orm/session.py", line 1778, in scalars
scheduler return self.execute(
scheduler ^^^^^^^^^^^^^
scheduler File "/usr/local/lib/python3.12/site-packages/sqlalchemy/orm/session.py", line 1717, in execute
scheduler result = conn._execute_20(statement, params or {}, execution_options)
scheduler ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
scheduler File "/usr/local/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1710, in _execute_20
scheduler return meth(self, args_10style, kwargs_10style, execution_options)
scheduler ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
scheduler File "/usr/local/lib/python3.12/site-packages/sqlalchemy/sql/elements.py", line 334, in _execute_on_connection
scheduler return connection._execute_clauseelement(
scheduler ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
scheduler File "/usr/local/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1577, in _execute_clauseelement
scheduler ret = self._execute_context(
scheduler ^^^^^^^^^^^^^^^^^^^^^^
scheduler File "/usr/local/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1953, in _execute_context
scheduler self._handle_dbapi_exception(
scheduler File "/usr/local/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 2134, in _handle_dbapi_exception
scheduler util.raise_(
scheduler File "/usr/local/lib/python3.12/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
scheduler raise exception
scheduler File "/usr/local/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1910, in _execute_context
scheduler self.dialect.do_execute(
scheduler File "/usr/local/lib/python3.12/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute
scheduler cursor.execute(statement, parameters)
scheduler sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) lost synchronization with server: got message type "r", length 1919509605
scheduler
scheduler [SQL: SELECT task_instance.rendered_map_index, task_instance.task_display_name, task_instance.id, task_instance.task_id, task_instance.dag_id, task_instance.run_id, task_instance.map_index, task_instance.start_date, task_instance.end_date, task_instance.duration, task_instance.state, task_instance.try_id, task_instance.try_number, task_instance.max_tries, task_instance.hostname, task_instance.unixname, task_instance.pool, task_instance.pool_slots, task_instance.queue, task_instance.priority_weight, task_instance.operator, task_instance.custom_operator_name, task_instance.queued_dttm, task_instance.scheduled_dttm, task_instance.queued_by_job_id, task_instance.last_heartbeat_at, task_instance.pid, task_instance.executor, task_instance.executor_config, task_instance.updated_at, task_instance.external_executor_id, task_instance.trigger_id, task_instance.trigger_timeout, task_instance.next_method, task_instance.next_kwargs, task_instance.dag_version_id, dag_run_1.state AS state_1, dag_run_1.id AS id_1, dag_run_1.dag_id AS dag_id_1, dag_run_1.queued_at, dag_run_1.logical_date, dag_run_1.start_date AS start_date_1, dag_run_1.end_date AS end_date_1, dag_run_1.run_id AS run_id_1, dag_run_1.creating_job_id, dag_run_1.run_type, dag_run_1.triggered_by, dag_run_1.conf, dag_run_1.data_interval_start, dag_run_1.data_interval_end, dag_run_1.run_after, dag_run_1.last_scheduling_decision, dag_run_1.log_template_id, dag_run_1.updated_at AS updated_at_1, dag_run_1.clear_number, dag_run_1.backfill_id, dag_run_1.bundle_version
scheduler FROM task_instance JOIN dag_run AS dag_run_1 ON dag_run_1.dag_id = task_instance.dag_id AND dag_run_1.run_id = task_instance.run_id
scheduler WHERE task_instance.dag_id = %(dag_id_2)s AND task_instance.run_id = %(run_id_2)s]
scheduler [parameters: {'dag_id_2': 'kpo_override_resource_negative_case', 'run_id_2': 'scheduled__2025-03-20T17:15:52.320478+00:00'}]
scheduler (Background on this error at: https://sqlalche.me/e/14/e3q8)
scheduler
scheduler-gc Trimming airflow logs to 1 days.
scheduler-gc Trimming airflow logs to 1 days.
scheduler-gc Trimming airflow logs to 1 days.
scheduler-gc Trimming airflow logs to 1 days.
scheduler-gc Trimming airflow logs to 1 days.
```
### What you think should happen instead?
_No response_
### How to reproduce
Try running the DAG below:
```python
from datetime import datetime

from airflow import DAG
from airflow.providers.cncf.kubernetes.operators.pod import (
    KubernetesPodOperator,
)
from airflow.configuration import conf

namespace = conf.get("kubernetes_executor", "NAMESPACE")

with DAG(
    dag_id="kpo_mapped",
    start_date=datetime(1970, 1, 1),
    schedule=None,
    tags=["taskmap"],
    # render_template_as_native_obj=True,
) as dag:
    KubernetesPodOperator(
        task_id="cowsay_static",
        name="cowsay_statc",
        namespace=namespace,
        image="docker.io/rancher/cowsay",
        cmds=["cowsay"],
        arguments=["moo"],
        log_events_on_failure=True,
    )

    KubernetesPodOperator.partial(
        task_id="cowsay_mapped",
        name="cowsay_mapped",
        namespace=namespace,
        image="docker.io/rancher/cowsay",
        cmds=["cowsay"],
        log_events_on_failure=True,
    ).expand(arguments=[["mooooove"], ["cow"], ["get out the way"]])
```
### Operating System
linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else?
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
|
open
|
2025-03-20T17:41:56Z
|
2025-03-24T13:06:03Z
|
https://github.com/apache/airflow/issues/48026
|
[
"kind:bug",
"priority:high",
"area:core",
"area:dynamic-task-mapping",
"provider:openlineage",
"affected_version:3.0.0beta"
] |
vatsrahul1001
| 0
|
blacklanternsecurity/bbot
|
automation
| 1,953
|
Is this a fork of SpiderFoot?
|
**Question**
Is this a fork of SpiderFoot?
|
closed
|
2024-11-11T12:25:48Z
|
2024-11-12T02:34:56Z
|
https://github.com/blacklanternsecurity/bbot/issues/1953
|
[
"enhancement"
] |
izdrail
| 5
|
521xueweihan/HelloGitHub
|
python
| 2,718
|
[Open-source self-recommendation] LunarLink - an API test automation platform that helps test engineers quickly write automated API test cases
|
- Project URL: https://github.com/tahitimoon/LunarLink
- Category: Python
- Project title: A web-based API test automation platform for quickly writing and running automated API test cases
- Project description: An API test automation platform based on HttpRunner + Django + Vue + Element UI, ready for production use.
- Project documentation: https://lunar-link-docs.fun
- Highlights:
  - [x] Syncs with YAPI (and thereby indirectly Swagger, Postman, Har), so there is no need to enter APIs manually
  - [x] Inherits all the features of [Requests](https://requests.readthedocs.io/projects/cn/zh_CN/latest/index.html), easily covering all kinds of HTTP(S) testing needs
  - [x] With driver code (debugtalk.py), request-parameter signing, encryption, and response decryption are easy to implement in test scripts
  - [x] A complete hook mechanism: request pre- and post-processing functions neatly solve token dependencies for a single API and parameter passing across multiple APIs
  - [x] Supports recording HTTP(S) requests; test cases can be generated with a few simple steps
  - [x] crontab-like scheduled tasks, with no extra learning curve
  - [x] Test cases support parameterization and data-driven mechanisms
  - [x] Clear and concise test-result reports with detailed statistics and logs
  - [x] Test reports can be pushed to Feishu, DingTalk, WeCom, etc.
- Screenshots: (four screenshots omitted)
- Planned updates:
  Add operation logs, improve the API debugging page, improve the UX of running test cases in batches, etc.
|
open
|
2024-03-29T01:53:22Z
|
2024-04-24T12:08:18Z
|
https://github.com/521xueweihan/HelloGitHub/issues/2718
|
[
"Python 项目"
] |
tahitimoon
| 0
|
vimalloc/flask-jwt-extended
|
flask
| 281
|
How to call @admin_required like @jwt_required
|
I followed the docs here: [custom_decorators](https://flask-jwt-extended.readthedocs.io/en/latest/custom_decorators.html)
I have a question: how do I call the @admin_required decorator in another file or namespace, just like @jwt_required?
Something like this:
```python
from flask_jwt_extended import (jwt_required, admin_required)
```
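For reference, a sketch of one way to make that import work (assumptions: flask-jwt-extended 3.x, where `get_jwt_claims` exists, and a module named decorators.py): define the decorator once in its own module, since it is not part of flask_jwt_extended itself, and import it from there.
```python
# decorators.py - sketch following the custom-decorators pattern in the docs
from functools import wraps

from flask_jwt_extended import verify_jwt_in_request, get_jwt_claims

def admin_required(fn):
    @wraps(fn)
    def wrapper(*args, **kwargs):
        verify_jwt_in_request()
        claims = get_jwt_claims()
        if claims.get('role') != 'admin':
            return {'msg': 'Admins only!'}, 403
        return fn(*args, **kwargs)
    return wrapper
```
Any other file can then do `from decorators import admin_required`.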
Thanks!
|
closed
|
2019-10-18T10:47:46Z
|
2019-10-18T14:33:34Z
|
https://github.com/vimalloc/flask-jwt-extended/issues/281
|
[] |
tatdatpham
| 1
|
deeppavlov/DeepPavlov
|
nlp
| 1,194
|
Multi-Lingual Sentence Embedding
|
Can someone help me to get an example of multi-lingual sentence embedding? I want to extract the sentence embedding for Hindi using the Multi-lingual sentence embedding model.
Thanks.
|
closed
|
2020-04-30T12:35:52Z
|
2020-05-14T05:07:47Z
|
https://github.com/deeppavlov/DeepPavlov/issues/1194
|
[] |
ashispapu
| 3
|
s3rius/FastAPI-template
|
asyncio
| 172
|
How to use DAO in a websocket router?
|
I have a websocket router like this:
```python
@router.websocket(
    path="/ws",
)
async def websocket(
    websocket: WebSocket,
):
    await websocket.accept()
    ...
```
and I want to use a DAO to save the messages from the websocket, but if I use
```python
async def websocket(
    websocket: WebSocket,
    dao: MessageDAO = Depends(),
):
```
then when the client connects to the websocket, I get an error:
```
File "/Users/xxls/Desktop/Project/db/dependencies.py", line 17, in get_db_session
    session: AsyncSession = request.app.state.db_session_factory()
                            └ <taskiq_dependencies.dependency.Dependency object at 0x111e2bd10>
AttributeError: 'Dependency' object has no attribute 'app'
```
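A sketch of a possible workaround (assumptions: the session factory lives on `app.state.db_session_factory` as the traceback suggests, and `MessageDAO` can be built from a session; the names may differ): construct the DAO from the websocket's own app reference instead of `Depends()`.
```python
from fastapi import WebSocket

@router.websocket(path="/ws")
async def websocket(websocket: WebSocket):
    await websocket.accept()
    # The websocket carries the app reference that the Request-based
    # dependency tried (and failed) to read.
    session = websocket.app.state.db_session_factory()
    dao = MessageDAO(session)  # assumption: the DAO wraps a session
    ...
```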
|
closed
|
2023-06-19T15:56:52Z
|
2024-02-08T17:25:19Z
|
https://github.com/s3rius/FastAPI-template/issues/172
|
[] |
eggb4by
| 15
|
ading2210/poe-api
|
graphql
| 35
|
Using custom bots
|
Can it be implemented?
|
closed
|
2023-04-11T12:09:54Z
|
2023-04-12T00:40:45Z
|
https://github.com/ading2210/poe-api/issues/35
|
[
"invalid"
] |
ghost
| 2
|
zappa/Zappa
|
django
| 509
|
[Migrated] Enhancement request: async execution for a non-defined function
|
Originally from: https://github.com/Miserlou/Zappa/issues/1332 by [michelorengo](https://github.com/michelorengo)
## Context
My use case is to be able to execute a function ("task") using the async execution in a different lambda. That lambda has a different code base than the calling lambda. In other words, the function ("task") to be executed is not defined in the calling lambda.
The async execution lets you specify a remote lambda and remote region but the function (to be executed) has to be defined in the code. The request is to be able to simply provide a function name as a string in the form of <module_name>.<function_name>.
This obviously does not work for the decorator. It works only using "zappa.async.run".
## Expected Behavior
The below should work:
```python
from zappa.async import run

run(func="my_module.my_function", remote_aws_lambda_function_name="my_remote_lambda", remote_aws_region='us-east-1', kwargs=kwargs)
```
## Actual Behavior
The function/task path is retrieved via inspection (hence requires a function type) by "get_func_task_path"
## Possible Fix
This is a bit hackish but it is the least intrusive option. I'll make a PR, but I'm thinking of:
```python
def get_func_task_path(func):
    """
    Format the modular task path for a function via inspection if param is
    a function. If the param is of type string, it will simply return it.
    """
    if isinstance(func, (str, unicode)):
        return func
    module_path = inspect.getmodule(func).__name__
    task_path = '{module_path}.{func_name}'.format(
        module_path=module_path,
        func_name=func.__name__
    )
    return task_path
```
|
closed
|
2021-02-20T09:43:41Z
|
2024-04-13T16:36:46Z
|
https://github.com/zappa/Zappa/issues/509
|
[
"enhancement",
"feature-request",
"good-idea",
"has-pr",
"no-activity",
"auto-closed"
] |
jneves
| 2
|
AutoViML/AutoViz
|
scikit-learn
| 101
|
ValueError
|
Hi,
I stumbled upon autoviz and failed to run even a minimal example.
1) I had to change 'seaborn' to 'seaborn-v0_8' in AutoViz_Class.py and AutoViz_Utils.py
2) After that I got the error below, and I don't know how to proceed
Any suggestions or fixes?
Thanks in advance.
FYI: Here I used Python 3.11.6 and had the same error with Python 3.11.4. I could not even install autoviz on Python 3.12.
`---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[10], line 2
1 import pandas as pd
----> 2 from autoviz.AutoViz_Class import AutoViz_Class
4 get_ipython().run_line_magic('matplotlib', 'inline')
File ~\AppData\Roaming\Python\Python311\site-packages\autoviz\__init__.py:3
1 name = "autoviz"
2 from .__version__ import __version__, __holo_version__
----> 3 from .AutoViz_Class import AutoViz_Class
4 from .AutoViz_Class import data_cleaning_suggestions
5 from .AutoViz_Class import FixDQ
File ~\AppData\Roaming\Python\Python311\site-packages\autoviz\AutoViz_Class.py:61
59 from sklearn.model_selection import train_test_split
60 ##########################################################################################
---> 61 from autoviz.AutoViz_Holo import AutoViz_Holo
62 from autoviz.AutoViz_Utils import save_image_data, save_html_data, analyze_problem_type, draw_pivot_tables, draw_scatters
63 from autoviz.AutoViz_Utils import draw_pair_scatters, plot_fast_average_num_by_cat, draw_barplots, draw_heatmap
File ~\AppData\Roaming\Python\Python311\site-packages\autoviz\AutoViz_Holo.py:5
3 import pandas as pd
4 ############# Import from autoviz.AutoViz_Class the following libraries #######
----> 5 from autoviz.AutoViz_Utils import *
6 ############## make sure you use: conda install -c pyviz hvplot ###############
7 import hvplot.pandas # noqa
File ~\AppData\Roaming\Python\Python311\site-packages\autoviz\AutoViz_Utils.py:61
59 from sklearn.model_selection import train_test_split
60 ######## This is where we import HoloViews related libraries #########
---> 61 import hvplot.pandas
62 import holoviews as hv
63 from holoviews import opts
File ~\AppData\Roaming\Python\Python311\site-packages\hvplot\__init__.py:12
8 import holoviews as _hv
10 from holoviews import Store
---> 12 from .converter import HoloViewsConverter
13 from .util import get_ipy
14 from .utilities import save, show # noqa
File ~\AppData\Roaming\Python\Python311\site-packages\hvplot\converter.py:25
18 from holoviews.core.util import max_range, basestring
19 from holoviews.element import (
20 Curve, Scatter, Area, Bars, BoxWhisker, Dataset, Distribution,
21 Table, HeatMap, Image, HexTiles, QuadMesh, Bivariate, Histogram,
22 Violin, Contours, Polygons, Points, Path, Labels, RGB, ErrorBars,
23 VectorField, Rectangles, Segments
24 )
---> 25 from holoviews.plotting.bokeh import OverlayPlot, colormap_generator
26 from holoviews.plotting.util import process_cmap
27 from holoviews.operation import histogram
File ~\AppData\Roaming\Python\Python311\site-packages\holoviews\plotting\bokeh\__init__.py:40
38 from .graphs import GraphPlot, NodePlot, TriMeshPlot, ChordPlot
39 from .heatmap import HeatMapPlot, RadialHeatMapPlot
---> 40 from .hex_tiles import HexTilesPlot
41 from .path import PathPlot, PolygonPlot, ContourPlot
42 from .plot import GridPlot, LayoutPlot, AdjointLayoutPlot
File ~\AppData\Roaming\Python\Python311\site-packages\holoviews\plotting\bokeh\hex_tiles.py:22
18 from .selection import BokehOverlaySelectionDisplay
19 from .styles import base_properties, line_properties, fill_properties
---> 22 class hex_binning(Operation):
23 """
24 Applies hex binning by computing aggregates on a hexagonal grid.
25
26 Should not be user facing as the returned element is not directly
27 useable.
28 """
30 aggregator = param.ClassSelector(
31 default=np.size, class_=(types.FunctionType, tuple), doc="""
32 Aggregation function or dimension transform used to compute bin
33 values. Defaults to np.size to count the number of values
34 in each bin.""")
File ~\AppData\Roaming\Python\Python311\site-packages\holoviews\plotting\bokeh\hex_tiles.py:30, in hex_binning()
22 class hex_binning(Operation):
23 """
24 Applies hex binning by computing aggregates on a hexagonal grid.
25
26 Should not be user facing as the returned element is not directly
27 useable.
28 """
---> 30 aggregator = param.ClassSelector(
31 default=np.size, class_=(types.FunctionType, tuple), doc="""
32 Aggregation function or dimension transform used to compute bin
33 values. Defaults to np.size to count the number of values
34 in each bin.""")
36 gridsize = param.ClassSelector(default=50, class_=(int, tuple))
38 invert_axes = param.Boolean(default=False)
File ~\AppData\Roaming\Python\Python311\site-packages\param\__init__.py:1367, in ClassSelector.__init__(self, class_, default, instantiate, is_instance, **params)
1365 self.is_instance = is_instance
1366 super(ClassSelector,self).__init__(default=default,instantiate=instantiate,**params)
-> 1367 self._validate(default)
File ~\AppData\Roaming\Python\Python311\site-packages\param\__init__.py:1371, in ClassSelector._validate(self, val)
1369 def _validate(self, val):
1370 super(ClassSelector, self)._validate(val)
-> 1371 self._validate_class_(val, self.class_, self.is_instance)
File ~\AppData\Roaming\Python\Python311\site-packages\param\__init__.py:1383, in ClassSelector._validate_class_(self, val, class_, is_instance)
1381 if is_instance:
1382 if not (isinstance(val, class_)):
-> 1383 raise ValueError(
1384 "%s parameter %r value must be an instance of %s, not %r." %
1385 (param_cls, self.name, class_name, val))
1386 else:
1387 if not (issubclass(val, class_)):
ValueError: ClassSelector parameter None value must be an instance of (function, tuple), not <function size at 0x0000019E7F3B3DF0>.
```
|
closed
|
2023-11-11T12:10:51Z
|
2023-12-24T12:14:44Z
|
https://github.com/AutoViML/AutoViz/issues/101
|
[] |
DirtyStreetCoder
| 5
|
sqlalchemy/alembic
|
sqlalchemy
| 611
|
Allow to run statement before creating `alembic_version` table in Postgres
|
I need to run `SET ROLE writer_role;` before creating a table in Postgres, because the role I use to log in does not have `CREATE` permission in the schema, and it is not possible to log in as `writer_role` directly.
This causes the auto-generation of the initial revision (`alembic revision --autogenerate -m "initial"`) to fail when it tries to create the `alembic_version` table, and it does not generate the initial migration file.
The output of the command mentioned above is:
```
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/engine/base.py", line 1249, in _execute_context
cursor, statement, parameters, context
File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/engine/default.py", line 580, in do_execute
cursor.execute(statement, parameters)
psycopg2.ProgrammingError: permission denied for schema public
LINE 2: CREATE TABLE alembic_version (
^
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/bin/alembic", line 11, in <module>
sys.exit(main())
File "/usr/local/lib/python3.6/dist-packages/alembic/config.py", line 573, in main
CommandLine(prog=prog).main(argv=argv)
File "/usr/local/lib/python3.6/dist-packages/alembic/config.py", line 567, in main
self.run_cmd(cfg, options)
File "/usr/local/lib/python3.6/dist-packages/alembic/config.py", line 547, in run_cmd
**dict((k, getattr(options, k, None)) for k in kwarg)
File "/usr/local/lib/python3.6/dist-packages/alembic/command.py", line 214, in revision
script_directory.run_env()
File "/usr/local/lib/python3.6/dist-packages/alembic/script/base.py", line 489, in run_env
util.load_python_file(self.dir, "env.py")
File "/usr/local/lib/python3.6/dist-packages/alembic/util/pyfiles.py", line 98, in load_python_file
module = load_module_py(module_id, path)
File "/usr/local/lib/python3.6/dist-packages/alembic/util/compat.py", line 173, in load_module_py spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "alembic/env.py", line 89, in <module>
run_migrations_online()
File "alembic/env.py", line 83, in run_migrations_online
context.run_migrations()
File "<string>", line 8, in run_migrations
File "/usr/local/lib/python3.6/dist-packages/alembic/runtime/environment.py", line 846, in run_migrations
self.get_context().run_migrations(**kw)
File "/usr/local/lib/python3.6/dist-packages/alembic/runtime/migration.py", line 499, in run_migrations
self._ensure_version_table()
File "/usr/local/lib/python3.6/dist-packages/alembic/runtime/migration.py", line 440, in _ensure_version_table
self._version.create(self.connection, checkfirst=True)
File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/sql/schema.py", line 870, in create
bind._run_visitor(ddl.SchemaGenerator, self, checkfirst=checkfirst)
File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/engine/base.py", line 1615, in _run_visitor
visitorcallable(self.dialect, self, **kwargs).traverse_single(element)
File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/sql/visitors.py", line 138, in traverse_single
return meth(obj, **kw)
File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/sql/ddl.py", line 826, in visit_table
include_foreign_key_constraints,
File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/engine/base.py", line 988, in execute
return meth(self, multiparams, params)
File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/sql/ddl.py", line 72, in _execute_on_connection
return connection._execute_ddl(self, multiparams, params)
File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/engine/base.py", line 1050, in _execute_ddl
compiled,
File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/engine/base.py", line 1253, in _execute_context
e, statement, parameters, cursor, context
File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/engine/base.py", line 1473, in _handle_dbapi_exception
util.raise_from_cause(sqlalchemy_exception, exc_info)
File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/util/compat.py", line 398, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/util/compat.py", line 152, in reraise
raise value.with_traceback(tb)
File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/engine/base.py", line 1249, in _execute_context
cursor, statement, parameters, context
File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/engine/default.py", line 580, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) permission denied for schema public
LINE 2: CREATE TABLE alembic_version (
^
[SQL:
CREATE TABLE alembic_version (
version_num VARCHAR(32) NOT NULL,
CONSTRAINT alembic_version_pkc PRIMARY KEY (version_num)
)
]
(Background on this error at: http://sqlalche.me/e/f405)
```
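For reference, a minimal `env.py` sketch of the workaround I'm considering (it assumes the `writer_role` above and a standard online-mode setup; untested):
```python
from alembic import context
from sqlalchemy import engine_from_config, pool, text

config = context.config


def run_migrations_online():
    connectable = engine_from_config(
        config.get_section(config.config_ini_section),
        prefix="sqlalchemy.",
        poolclass=pool.NullPool,
    )
    with connectable.connect() as connection:
        # Switch roles on the same connection Alembic uses, before any DDL
        # runs (including the creation of the alembic_version table).
        connection.execute(text("SET ROLE writer_role"))
        context.configure(connection=connection)
        with context.begin_transaction():
            context.run_migrations()


run_migrations_online()
```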
|
closed
|
2019-10-22T11:14:58Z
|
2023-10-18T17:58:59Z
|
https://github.com/sqlalchemy/alembic/issues/611
|
[
"question"
] |
notrev
| 7
|
PrefectHQ/prefect
|
automation
| 16,810
|
Automation not working with basic auth enabled
|
### Bug summary
I have set up automation for canceling long-running jobs and sending out notifications.
I noticed that they no longer work as expected.
When a condition for cancellation is met (a run exceeding 5 minutes), the flow is not cancelled, and I see an error message in the log:
`| WARNING | prefect.server.events.actions - Action failed: "Unexpected status from 'cancel-flow-run' action: 403"`
When I then try to cancel the flow manually, it should trigger the notification automation, but this doesn't work either.
The following message is shown:
`| WARNING | prefect.server.events.actions - Action failed: "Unexpected status from 'send-notification' action: 401"`
I traced this error back to when I enabled basic authentication, so I assume it has something to do with that.
### Version info
```Text
Version: 3.1.13
API version: 0.8.4
Python version: 3.12.8
Git commit: 16e85ce3
Built: Fri, Jan 17, 2025 8:46 AM
OS/Arch: linux/x86_64
Profile: ephemeral
Server type: server
Pydantic version: 2.10.5
```
### Additional context
This is the only line logged on the server even with DEBUG_MODE enabled.
Server is hosted on azure in a docker container
|
closed
|
2025-01-22T10:28:16Z
|
2025-01-23T19:13:51Z
|
https://github.com/PrefectHQ/prefect/issues/16810
|
[
"bug"
] |
dominik-eai
| 1
|
seleniumbase/SeleniumBase
|
web-scraping
| 3,434
|
Headless Mode refactoring for the removal of old Headless Mode
|
## Headless Mode refactoring for the removal of old Headless Mode
As detailed in https://developer.chrome.com/blog/removing-headless-old-from-chrome, Chrome's old headless mode has been removed in Chrome 132. The SeleniumBase `headless1` option mapped to it (`--headless=old`). Before Chrome's newer headless mode appeared, it was just `headless`. After the newer `--headless=new` option appeared, the SeleniumBase `headless2` option mapped to it.
Now that the old headless mode is gone, a few changes will occur to maintain backwards compatibility:
* If the Chrome version is 132 (or newer), `headless1` will automatically remap to `--headless` / `--headless=new`.
* If the Chrome version is less than 132, then `headless1` will continue mapping to `--headless=old`.
So in summary, on Chrome 132 or newer (once this ticket is complete), `headless1` is the same as `headless` is the same as `headless2`. All those will map to the same (new) headless mode on Chrome.
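A minimal sketch of that remapping logic (illustrative only, not the actual SeleniumBase source):
```python
def headless_flag(chrome_major_version: int, option: str) -> str:
    """Map a SeleniumBase headless option to a Chrome CLI flag."""
    if option == "headless1" and chrome_major_version < 132:
        return "--headless=old"
    # On Chrome 132+, headless1/headless/headless2 all mean the new mode.
    return "--headless=new"
```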
In the meantime, attempting to use the old headless mode on Chrome 132 (or newer) causes the following error to occur:
``selenium.common.exceptions.SessionNotCreatedException: Message: session not created: probably user data directory is already in use, please specify a unique value for --user-data-dir argument, or don't use --user-data-dir``
(Not sure where that error is coming from, but the message is misleading because the real cause of the error is the removal of Chrome's old headless mode, and not some problem with the user data directory.)
|
closed
|
2025-01-19T03:57:59Z
|
2025-01-21T23:42:39Z
|
https://github.com/seleniumbase/SeleniumBase/issues/3434
|
[
"enhancement"
] |
mdmintz
| 1
|
docarray/docarray
|
fastapi
| 1,665
|
chore: Copyedit draft release note v0.34.0
|
# Release Note
This release contains 2 breaking changes, 3 new features, 11 bug fixes, and 2 documentation improvements.
## :bomb: Breaking Changes
### Terminate Python 3.7 support
:warning: :warning: DocArray will now require Python 3.8. We can no longer assure compatibility with Python 3.7.
We decided to drop it for two reasons:
* Several dependencies of DocArray require Python 3.8.
* Python [long-term support for 3.7 is ending](https://endoflife.date/python) this week. This means there will no longer
be security updates for Python 3.7, making this a good time for us to change our requirements.
### Changes to `DocVec` Protobuf definition (#1639)
In order to fix a bug in the `DocVec` protobuf serialization described in [#1561](https://github.com/docarray/docarray/issues/1561),
we have changed the `DocVec` .proto definition.
This means that **`DocVec` objects serialized with DocArray v0.33.0 or earlier cannot be deserialized with DocArray
v.0.34.0 or later, and vice versa**.
:warning: :warning: **We strongly recommend** that everyone using Protobuf with `DocVec` upgrade to DocArray v0.34.0 or
later.
## 🆕 Features
### Allow users to check if a Document is already indexed in a DocIndex (#1633)
You can now check if a Document has already been indexed by using the `in` keyword:
```python
from docarray.index import InMemoryExactNNIndex
from docarray import BaseDoc, DocList
from docarray.typing import NdArray
import numpy as np
class MyDoc(BaseDoc):
    text: str
    embedding: NdArray[128]


docs = DocList[MyDoc](
    [MyDoc(text="Example text", embedding=np.random.rand(128))
     for _ in range(2000)])
index = InMemoryExactNNIndex[MyDoc](docs)
assert docs[0] in index
assert MyDoc(text='New text', embedding=np.random.rand(128)) not in index
```
### Support subindexes in `InMemoryExactNNIndex` (#1617)
You can now use the [find_subindex](https://docs.docarray.org/user_guide/storing/docindex/#nested-data-with-subindex)
method with the ExactNNSearch DocIndex.
```python
from docarray.index import InMemoryExactNNIndex
from docarray import BaseDoc, DocList
from docarray.typing import NdArray
import numpy as np


class ChunkDoc(BaseDoc):
    embedding: NdArray[128]


class MyDoc(BaseDoc):
    chunks: DocList[ChunkDoc]


docs = DocList[MyDoc](
    [MyDoc(chunks=[ChunkDoc(embedding=np.random.rand(128)) for _ in range(5)])
     for _ in range(100)])
index = InMemoryExactNNIndex[MyDoc](docs)

# search the nested `chunks` subindex directly
results = index.find_subindex(
    np.random.rand(128), subindex='chunks', search_field='embedding', limit=5)
```
### Flexible tensor types for protobuf deserialization (#1645)
You can deserialize any `DocVec` protobuf message to any tensor type,
by passing the `tensor_type` parameter to `from_protobuf`.
This means that you can choose at deserialization time if you are working with numpy, PyTorch, or TensorFlow tensors.
```python
class MyDoc(BaseDoc):
    tensor: TensorFlowTensor


da = DocVec[MyDoc](...)  # doesn't matter what tensor_type is here
proto = da.to_protobuf()
da_after = DocVec[MyDoc].from_protobuf(proto, tensor_type=TensorFlowTensor)
assert isinstance(da_after.tensor, TensorFlowTensor)
```
## ⚙ Refactoring
### Add `DBConfig` to `InMemoryExactNNIndex`
`InMemoryExactNNIndex` used to take `index_file_path` as a single constructor parameter, unlike the rest of the indexes, which accepted their own `DBConfig`. Now `index_file_path` is part of the `DBConfig`, which allows initializing from it.
This will allow us to extend this config if more parameters are needed.
The parameters of `DBConfig` can be passed at construction time as `**kwargs`, making this change compatible with old usage.
These two initializations are equivalent.
```python
from docarray.index import InMemoryExactNNIndex
db_config = InMemoryExactNNIndex.DBConfig(index_file_path='index.bin')
index = InMemoryExactNNIndex[MyDoc](db_config=db_config)
index = InMemoryExactNNIndex[MyDoc](index_file_path='index.bin')
```
## 🐞 Bug Fixes
### Allow protobuf deserialization of `BaseDoc` with `Union` type (#1655)
Serialization of `BaseDoc` types that have `Union`-typed fields of Python native types is supported.
```python
from typing import Union

from docarray import BaseDoc, DocList


class MyDoc(BaseDoc):
    union_field: Union[int, str]


docs1 = DocList[MyDoc]([MyDoc(union_field="hello")])
docs2 = DocList[MyDoc].from_dataframe(docs1.to_dataframe())
assert docs1 == docs2
```
When these `Union` types involve other `BaseDoc` types, an exception is thrown.
```python
class CustomDoc(BaseDoc):
    ud: Union[TextDoc, ImageDoc] = TextDoc(text='union type')


docs = DocList[CustomDoc]([CustomDoc(ud=TextDoc(text='union type'))])

# raises an Exception
DocList[CustomDoc].from_dataframe(docs.to_dataframe())
```
### Cast limit to integer when passed to `HNSWDocumentIndex` (#1657, #1656)
If you call `find` or `find_batched` on an `HNSWDocumentIndex`, the `limit` parameter will automatically be cast to
`integer`.
### Moved `default_column_config` from `RuntimeConfig` to `DBconfig` (#1648)
`default_column_config` contains specific configuration information about the columns and tables inside the backend's
database. This was previously put inside `RuntimeConfig` which caused an error because this information is required at
initialization time. This information has been moved inside `DBConfig` so you can edit it there.
```python
from docarray.index import HNSWDocumentIndex
import numpy as np
db_config = HNSWDocumentIndex.DBConfig()
db_config.default_column_config.get(np.ndarray).update({'ef': 2500})
index = HNSWDocumentIndex[MyDoc](db_config=db_config)
```
### Fix issue with Protobuf (de)serialization for DocVec (#1639)
This bug caused raw Protobuf objects to be stored as DocVec columns after they were deserialized from Protobuf, making the
data essentially inaccessible. This has now been fixed, and `DocVec` objects are identical before and after (de)serialization.
### Fix order of returned matches when `find` and `filter` combination used in `InMemoryExactNNIndex` (#1642)
Hybrid search (find+filter) for `InMemoryExactNNIndex` was prioritizing low similarities (lower scores) for returned
matches. Fixed by adding an option to sort matches in a reverse order based on their scores.
```python
# prepare a query
q_doc = MyDoc(embedding=np.random.rand(128), text='query')
query = (
    db.build_query()
    .find(query=q_doc, search_field='embedding')
    .filter(filter_query={'text': {'$exists': True}})
    .build()
)
results = db.execute_query(query)
# Before: results was sorted from worst to best matches
# Now: It's sorted in the correct order, showing better matches first
```
### Working with external Qdrant collections (#1632)
When using `QdrandDocumentIndex` to connect to a Qdrant DB initialized outside of `docarray` raised a `KeyError`.
This has been fixed, and now you can use `QdrantDocumentIndex` to connect to externally initialized collections.
## Other bug fixes
- Update text search to match Weaviate client's new sig (#1654)
- Fix `DocVec` equality (#1641, #1663)
- Fix exception when `summary()` is called for `LegacyDocument`. (#1637)
- Fix `DocList` and `DocVec` coercion. (#1568)
- Fix `update()` on `BaseDoc` with tensor fields (#1628)
## 📗 Documentation Improvements
- Enhance DocVec section (#1658)
- Qdrant in memory usage (#1634)
## 🤟 Contributors
We would like to thank all contributors to this release:
- Johannes Messner (@JohannesMessner)
- Nikolas Pitsillos (@npitsillos)
- Shukri (@hsm207)
- Kacper Łukawski (@kacperlukawski)
- Aman Agarwal (@agaraman0)
- maxwelljin (@maxwelljin)
- samsja (@samsja)
- Saba Sturua (@jupyterjazz)
- Joan Fontanals (@JoanFM)
|
closed
|
2023-06-20T18:10:10Z
|
2023-06-21T08:24:54Z
|
https://github.com/docarray/docarray/issues/1665
|
[] |
scott-martens
| 0
|
quokkaproject/quokka
|
flask
| 623
|
cli: ensure and write test for installation (tox)
|
Write tests for cli init
https://github.com/rochacbruno/quokka_ng/issues/48
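A possible starting point, purely as a hypothetical sketch (it assumes a click-based `init` command importable as `quokka.cli.init`; adjust to the real entry point):
```python
from click.testing import CliRunner

from quokka.cli import init  # assumed import path


def test_init_exits_cleanly(tmp_path):
    runner = CliRunner()
    result = runner.invoke(init, [str(tmp_path)])  # assumed argument shape
    assert result.exit_code == 0
```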
|
open
|
2018-02-07T01:51:28Z
|
2018-02-07T01:51:28Z
|
https://github.com/quokkaproject/quokka/issues/623
|
[
"1.0.0",
"hacktoberfest"
] |
rochacbruno
| 0
|
huggingface/datasets
|
pytorch
| 7,305
|
Build Documentation Test Fails Due to "Bad Credentials" Error
|
### Describe the bug
The `Build documentation / build / build_main_documentation (push)` job is consistently failing during the "Syncing repository" step. The error occurs when attempting to determine the default branch name, resulting in "Bad credentials" errors.
### Steps to reproduce the bug
1. Trigger the `build_main_documentation` job.
2. Observe the logs during the "Syncing repository" step.
### Expected behavior
The workflow should be able to retrieve the default branch name without encountering credential issues.
### Environment info
```plaintext
Syncing repository: huggingface/notebooks
Getting Git version info
Temporarily overriding HOME='/home/runner/work/_temp/00e62748-9940-4a4f-bbbc-eb2cda6d7ed6' before making global git config changes
Adding repository directory to the temporary git global config as a safe directory
/usr/bin/git config --global --add safe.directory /home/runner/work/datasets/datasets/notebooks
Initializing the repository
Disabling automatic garbage collection
Setting up auth
Determining the default branch
Retrieving the default branch name
Bad credentials - https://docs.github.com/rest
Waiting 20 seconds before trying again
Retrieving the default branch name
Bad credentials - https://docs.github.com/rest
Waiting 19 seconds before trying again
Retrieving the default branch name
Error: Bad credentials - https://docs.github.com/rest
```
|
open
|
2024-12-03T20:22:54Z
|
2025-01-08T22:38:14Z
|
https://github.com/huggingface/datasets/issues/7305
|
[] |
ruidazeng
| 2
|
ray-project/ray
|
deep-learning
| 51,349
|
Release test map_groups.many_groups (sort_shuffle_pull_based) failed
|
Release test **map_groups.many_groups (sort_shuffle_pull_based)** failed. See https://buildkite.com/ray-project/release/builds/35758#0195916e-c154-473b-9806-e922721e0873 for more details.
Managed by OSS Test Policy
|
closed
|
2025-03-13T22:07:28Z
|
2025-03-18T16:57:56Z
|
https://github.com/ray-project/ray/issues/51349
|
[
"bug",
"P0",
"triage",
"data",
"release-test",
"jailed-test",
"ray-test-bot",
"weekly-release-blocker",
"stability"
] |
can-anyscale
| 1
|
huggingface/peft
|
pytorch
| 1,893
|
LORA finetuning gradients are scaled by a unknown constant factor
|
### System Info
torch: 2.3.0+cu121
transformers: 4.41.2
peft: 0.11.1
datasets: 2.20.0
### Who can help?
@BenjaminBossan @sayakpaul
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [X] My own task or dataset (give details below)
### Reproduction
You can run the following Colab notebook: https://colab.research.google.com/drive/1lgFyKZaZ3ySXWRcfImsry92X7dhrVgZz?usp=sharing
There are two sections in the linked Colab doc.
- "Run finetuning" contains the code to fine-tune for two steps and save the weights and gradients to a file.
- "Check optimizer" loads the saved weights/gradients from file and compares the updated weights with the expected values, printing the constant mismatch factor when there is one.
### Expected behavior
I'm trying to integrate the `peft` library in our framework, but I am running into an unexplained behavior when performing LORA finetuning. I've noticed that an unidentified factor is scaling the gradients before they are used to update the weights in each optimization step.
For example, when using the [SGD optimizer](https://pytorch.org/docs/stable/generated/torch.optim.SGD.html) with parameters `{lr: 1.0, maximize: False, momentum: 0, nesterov: False, weight_decay: 0.0}` and a constant learning rate scheduler, you would expect the weights to be updated as follows at each step:
```
updated_weight = original_weight - lr * weight_gradient
```
However, weights are instead updated as follows (note the `c` constant factor):
```
updated_weight = original_weight - lr * c * weight_gradient
```
Where does `c` come from, and what is its formula? With rank=lora_alpha=16, I'd expect a scaling of $16/16=1.0$. I have already looked through the code, and printed any scaling constants, such as this one: https://github.com/huggingface/peft/blob/main/src/peft/tuners/lora/layer.py#L122, which is always 1.0 as expected. I have also checked, and the learning rate at each optimizer stage is 1.0 as I've set it.
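For context, here is a self-contained sanity check of the expected SGD update on a plain (non-LoRA) module. This is a toy sketch, not my actual fine-tuning code:
```python
import torch

model = torch.nn.Linear(4, 2)
opt = torch.optim.SGD(model.parameters(), lr=1.0, momentum=0.0, weight_decay=0.0)

loss = model(torch.randn(8, 4)).pow(2).mean()
loss.backward()

before = {n: p.detach().clone() for n, p in model.named_parameters()}
grads = {n: p.grad.detach().clone() for n, p in model.named_parameters()}
opt.step()

for n, p in model.named_parameters():
    c = (before[n] - p.detach()) / grads[n]  # observed per-element scaling factor
    print(n, c.mean().item())  # ~1.0 for plain SGD with lr=1.0
```
In my LoRA run, the same computation yields a constant `c != 1.0`, which is the mystery.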
|
closed
|
2024-06-28T05:23:31Z
|
2024-06-29T18:19:22Z
|
https://github.com/huggingface/peft/issues/1893
|
[] |
goliaro
| 2
|
CorentinJ/Real-Time-Voice-Cloning
|
python
| 964
|
Not sure if my directory is wrong or I'm missing something, but I get hit by this. What am I doing wrong? Thanks in advance :)
|
```
ERROR: Command errored out with exit status 1:
command: 'D:\python\python.exe' 'D:\python\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py' prepare_metadata_for_build_wheel 'C:\Users\yasin\AppData\Local\Temp\tmp3hdlg1od'
cwd: C:\Users\yasin\AppData\Local\Temp\pip-install-594l2te2\numpy_6377a46b864645e683fccdd25c76f1f4
Complete output (135 lines):
setup.py:66: RuntimeWarning: NumPy 1.20.3 may not yet support Python 3.10.
warnings.warn(
Running from numpy source directory.
setup.py:485: UserWarning: Unrecognized setuptools command, proceeding with generating Cython sources and expanding templates
run_build = parse_setuppy_commands()
Processing numpy/random\_bounded_integers.pxd.in
Processing numpy/random\bit_generator.pyx
Processing numpy/random\mtrand.pyx
Processing numpy/random\_bounded_integers.pyx.in
Processing numpy/random\_common.pyx
Processing numpy/random\_generator.pyx
Processing numpy/random\_mt19937.pyx
Processing numpy/random\_pcg64.pyx
Processing numpy/random\_philox.pyx
Processing numpy/random\_sfc64.pyx
Cythonizing sources
Could not locate executable g77
Could not locate executable f77
Could not locate executable ifort
Could not locate executable ifl
Could not locate executable f90
Could not locate executable DF
Could not locate executable efl
Could not locate executable gfortran
Could not locate executable f95
Could not locate executable g95
Could not locate executable efort
Could not locate executable efc
Could not locate executable flang
don't know how to compile Fortran code on platform 'nt'
C:\Users\yasin\AppData\Local\Temp\pip-install-594l2te2\numpy_6377a46b864645e683fccdd25c76f1f4\numpy\distutils\system_info.py:1989: UserWarning:
Optimized (vendor) Blas libraries are not found.
Falls back to netlib Blas library which has worse performance.
A better performance should be easily gained by switching
Blas library.
if self._calc_info(blas):
C:\Users\yasin\AppData\Local\Temp\pip-install-594l2te2\numpy_6377a46b864645e683fccdd25c76f1f4\numpy\distutils\system_info.py:1989: UserWarning:
Blas (http://www.netlib.org/blas/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [blas]) or by setting
the BLAS environment variable.
if self._calc_info(blas):
C:\Users\yasin\AppData\Local\Temp\pip-install-594l2te2\numpy_6377a46b864645e683fccdd25c76f1f4\numpy\distutils\system_info.py:1989: UserWarning:
Blas (http://www.netlib.org/blas/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [blas_src]) or by setting
the BLAS_SRC environment variable.
if self._calc_info(blas):
C:\Users\yasin\AppData\Local\Temp\pip-install-594l2te2\numpy_6377a46b864645e683fccdd25c76f1f4\numpy\distutils\system_info.py:1849: UserWarning:
Lapack (http://www.netlib.org/lapack/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [lapack]) or by setting
the LAPACK environment variable.
return getattr(self, '_calc_info_{}'.format(name))()
C:\Users\yasin\AppData\Local\Temp\pip-install-594l2te2\numpy_6377a46b864645e683fccdd25c76f1f4\numpy\distutils\system_info.py:1849: UserWarning:
Lapack (http://www.netlib.org/lapack/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [lapack_src]) or by setting
the LAPACK_SRC environment variable.
return getattr(self, '_calc_info_{}'.format(name))()
C:\Users\yasin\AppData\Local\Temp\pip-build-env-6pkcd6kp\overlay\Lib\site-packages\setuptools\_distutils\dist.py:275: UserWarning: Unknown distribution option: 'define_macros'
warnings.warn(msg)
non-existing path in 'numpy\\distutils': 'site.cfg'
running dist_info
running build_src
creating build
creating build\src.win-amd64-3.10
creating build\src.win-amd64-3.10\numpy
creating build\src.win-amd64-3.10\numpy\distutils
Traceback (most recent call last):
File "D:\python\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py", line 363, in <module>
main()
File "D:\python\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py", line 345, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "D:\python\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py", line 164, in prepare_metadata_for_build_wheel
return hook(metadata_directory, config_settings)
File "C:\Users\yasin\AppData\Local\Temp\pip-build-env-6pkcd6kp\overlay\Lib\site-packages\setuptools\build_meta.py", line 157, in prepare_metadata_for_build_wheel
self.run_setup()
File "C:\Users\yasin\AppData\Local\Temp\pip-build-env-6pkcd6kp\overlay\Lib\site-packages\setuptools\build_meta.py", line 248, in run_setup
super(_BuildMetaLegacyBackend,
File "C:\Users\yasin\AppData\Local\Temp\pip-build-env-6pkcd6kp\overlay\Lib\site-packages\setuptools\build_meta.py", line 142, in run_setup
exec(compile(code, __file__, 'exec'), locals())
File "setup.py", line 513, in <module>
setup_package()
File "setup.py", line 505, in setup_package
setup(**metadata)
File "C:\Users\yasin\AppData\Local\Temp\pip-install-594l2te2\numpy_6377a46b864645e683fccdd25c76f1f4\numpy\distutils\core.py", line 169, in setup
return old_setup(**new_attr)
File "C:\Users\yasin\AppData\Local\Temp\pip-build-env-6pkcd6kp\overlay\Lib\site-packages\setuptools\__init__.py", line 165, in setup
return distutils.core.setup(**attrs)
File "C:\Users\yasin\AppData\Local\Temp\pip-build-env-6pkcd6kp\overlay\Lib\site-packages\setuptools\_distutils\core.py", line 148, in setup
dist.run_commands()
File "C:\Users\yasin\AppData\Local\Temp\pip-build-env-6pkcd6kp\overlay\Lib\site-packages\setuptools\_distutils\dist.py", line 967, in run_commands
self.run_command(cmd)
File "C:\Users\yasin\AppData\Local\Temp\pip-build-env-6pkcd6kp\overlay\Lib\site-packages\setuptools\_distutils\dist.py", line 986, in run_command
cmd_obj.run()
File "C:\Users\yasin\AppData\Local\Temp\pip-build-env-6pkcd6kp\overlay\Lib\site-packages\setuptools\command\dist_info.py", line 31, in run
egg_info.run()
File "C:\Users\yasin\AppData\Local\Temp\pip-install-594l2te2\numpy_6377a46b864645e683fccdd25c76f1f4\numpy\distutils\command\egg_info.py", line 24, in run
self.run_command("build_src")
File "C:\Users\yasin\AppData\Local\Temp\pip-build-env-6pkcd6kp\overlay\Lib\site-packages\setuptools\_distutils\cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "C:\Users\yasin\AppData\Local\Temp\pip-build-env-6pkcd6kp\overlay\Lib\site-packages\setuptools\_distutils\dist.py", line 986, in run_command
cmd_obj.run()
File "C:\Users\yasin\AppData\Local\Temp\pip-install-594l2te2\numpy_6377a46b864645e683fccdd25c76f1f4\numpy\distutils\command\build_src.py", line 144, in run
self.build_sources()
File "C:\Users\yasin\AppData\Local\Temp\pip-install-594l2te2\numpy_6377a46b864645e683fccdd25c76f1f4\numpy\distutils\command\build_src.py", line 155, in build_sources
self.build_library_sources(*libname_info)
File "C:\Users\yasin\AppData\Local\Temp\pip-install-594l2te2\numpy_6377a46b864645e683fccdd25c76f1f4\numpy\distutils\command\build_src.py", line 288, in build_library_sources
sources = self.generate_sources(sources, (lib_name, build_info))
File "C:\Users\yasin\AppData\Local\Temp\pip-install-594l2te2\numpy_6377a46b864645e683fccdd25c76f1f4\numpy\distutils\command\build_src.py", line 378, in generate_sources
source = func(extension, build_dir)
File "numpy\core\setup.py", line 671, in get_mathlib_info
st = config_cmd.try_link('int main(void) { return 0;}')
File "C:\Users\yasin\AppData\Local\Temp\pip-build-env-6pkcd6kp\overlay\Lib\site-packages\setuptools\_distutils\command\config.py", line 243, in try_link
self._link(body, headers, include_dirs,
File "C:\Users\yasin\AppData\Local\Temp\pip-install-594l2te2\numpy_6377a46b864645e683fccdd25c76f1f4\numpy\distutils\command\config.py", line 162, in _link
return self._wrap_method(old_config._link, lang,
File "C:\Users\yasin\AppData\Local\Temp\pip-install-594l2te2\numpy_6377a46b864645e683fccdd25c76f1f4\numpy\distutils\command\config.py", line 96, in _wrap_method
ret = mth(*((self,)+args))
File "C:\Users\yasin\AppData\Local\Temp\pip-build-env-6pkcd6kp\overlay\Lib\site-packages\setuptools\_distutils\command\config.py", line 137, in _link
(src, obj) = self._compile(body, headers, include_dirs, lang)
File "C:\Users\yasin\AppData\Local\Temp\pip-install-594l2te2\numpy_6377a46b864645e683fccdd25c76f1f4\numpy\distutils\command\config.py", line 105, in _compile
src, obj = self._wrap_method(old_config._compile, lang,
File "C:\Users\yasin\AppData\Local\Temp\pip-install-594l2te2\numpy_6377a46b864645e683fccdd25c76f1f4\numpy\distutils\command\config.py", line 96, in _wrap_method
ret = mth(*((self,)+args))
File "C:\Users\yasin\AppData\Local\Temp\pip-build-env-6pkcd6kp\overlay\Lib\site-packages\setuptools\_distutils\command\config.py", line 132, in _compile
self.compiler.compile([src], include_dirs=include_dirs)
File "C:\Users\yasin\AppData\Local\Temp\pip-build-env-6pkcd6kp\overlay\Lib\site-packages\setuptools\_distutils\_msvccompiler.py", line 401, in compile
self.spawn(args)
File "C:\Users\yasin\AppData\Local\Temp\pip-build-env-6pkcd6kp\overlay\Lib\site-packages\setuptools\_distutils\_msvccompiler.py", line 505, in spawn
return super().spawn(cmd, env=env)
File "C:\Users\yasin\AppData\Local\Temp\pip-install-594l2te2\numpy_6377a46b864645e683fccdd25c76f1f4\numpy\distutils\ccompiler.py", line 90, in <lambda>
m = lambda self, *args, **kw: func(self, *args, **kw)
TypeError: CCompiler_spawn() got an unexpected keyword argument 'env'
----------------------------------------
WARNING: Discarding https://files.pythonhosted.org/packages/f3/1f/fe9459e39335e7d0e372b5e5dcd60f4381d3d1b42f0b9c8222102ff29ded/numpy-1.20.3.zip#sha256=e55185e51b18d788e49fe8305fd73ef4470596b33fc2c1ceb304566b99c71a69 (from https://pypi.org/simple/numpy/) (requires-python:>=3.7). Command errored out with exit status 1: 'D:\python\python.exe' 'D:\python\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py' prepare_metadata_for_build_wheel 'C:\Users\yasin\AppData\Local\Temp\tmp3hdlg1od' Check the logs for full command output.
ERROR: Could not find a version that satisfies the requirement numpy==1.20.3 (from versions: 1.3.0, 1.4.1, 1.5.0, 1.5.1, 1.6.0, 1.6.1, 1.6.2, 1.7.0, 1.7.1, 1.7.2, 1.8.0, 1.8.1, 1.8.2, 1.9.0, 1.9.1, 1.9.2, 1.9.3, 1.10.0.post2, 1.10.1, 1.10.2, 1.10.4, 1.11.0, 1.11.1, 1.11.2, 1.11.3, 1.12.0, 1.12.1, 1.13.0rc1, 1.13.0rc2, 1.13.0, 1.13.1, 1.13.3, 1.14.0rc1, 1.14.0, 1.14.1, 1.14.2, 1.14.3, 1.14.4, 1.14.5, 1.14.6, 1.15.0rc1, 1.15.0rc2, 1.15.0, 1.15.1, 1.15.2, 1.15.3, 1.15.4, 1.16.0rc1, 1.16.0rc2, 1.16.0, 1.16.1, 1.16.2, 1.16.3, 1.16.4, 1.16.5, 1.16.6, 1.17.0rc1, 1.17.0rc2, 1.17.0, 1.17.1, 1.17.2, 1.17.3, 1.17.4, 1.17.5, 1.18.0rc1, 1.18.0, 1.18.1, 1.18.2, 1.18.3, 1.18.4, 1.18.5, 1.19.0rc1, 1.19.0rc2, 1.19.0, 1.19.1, 1.19.2, 1.19.3, 1.19.4, 1.19.5, 1.20.0rc1, 1.20.0rc2, 1.20.0, 1.20.1, 1.20.2, 1.20.3, 1.21.0rc1, 1.21.0rc2, 1.21.0, 1.21.1, 1.21.2, 1.21.3, 1.21.4, 1.21.5, 1.22.0rc1, 1.22.0rc2, 1.22.0rc3)
ERROR: No matching distribution found for numpy==1.20.3
```
|
closed
|
2021-12-28T18:35:06Z
|
2021-12-28T19:52:35Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/964
|
[] |
santaonholidays
| 1
|
assafelovic/gpt-researcher
|
automation
| 1,081
|
Cloudflare Protection Blocking Crunchbase Results
|
**Is your feature request related to a problem? Please describe.**
When trying to scrape Crunchbase pages through Tavily, the results are being blocked by Cloudflare's security protection. This affects the quality of company research as Crunchbase is a crucial source for company information.
#### Current Behavior
- Tavily attempts to scrape Crunchbase URLs (e.g., "https://www.crunchbase.com/organization/XXXXX")
- Instead of getting company data, we receive Cloudflare's block page:
```
"please enable cookies. sorry, you have been blocked you are unable to access crunchbase.com..."
```
- This causes our similarity check to fail as the content doesn't contain company information
**Describe the solution you'd like**
- Tavily should be able to bypass Cloudflare protection
- Successfully retrieve company information from Crunchbase pages
**Describe alternatives you've considered**
- Implement headless browser support in Tavily (e.g., Playwright, Puppeteer)
- Add proper headers and cookie management
|
open
|
2025-01-17T20:11:35Z
|
2025-02-01T19:04:24Z
|
https://github.com/assafelovic/gpt-researcher/issues/1081
|
[] |
PatricioCabo
| 1
|
microsoft/nni
|
pytorch
| 5,601
|
invalid syntax
|
**Describe the issue**:
<img width="1116" alt="988a2d62f21801bbc49dafb253a1281c" src="https://github.com/microsoft/nni/assets/24861234/4c0c864c-d6d3-4456-ae7c-9c8f1d6d3e49">
<img width="1115" alt="95fad960da4a0ae0af53bdd5416e5e12" src="https://github.com/microsoft/nni/assets/24861234/34c64ca1-b5a1-4829-9d9f-5dca1ac75d2e">
**Environment**:
- NNI version: 2.10.1
- Training service (local|remote|pai|aml|etc): local
- Client OS: Linux
- Server OS (for remote mode only):
- Python version: 3.7.0
- PyTorch version: 1.12.1
- Is conda/virtualenv/venv used?: virtualenv
- Is running in Docker?: no
**Configuration**:
- Experiment config (remember to remove secrets!):
- Search space:
**Log message**:
- nnimanager.log:
- dispatcher.log:
- nnictl stdout and stderr:
**How to reproduce it?**:
|
closed
|
2023-06-07T09:10:56Z
|
2023-06-28T02:07:01Z
|
https://github.com/microsoft/nni/issues/5601
|
[] |
sunjian2015
| 5
|
gradio-app/gradio
|
data-science
| 10,564
|
Misplaced Chat Avatar While Thinking
|
### Describe the bug
When the chatbot is thinking, the Avatar icon is misplaced. When it is actually inferencing or done inferencing, the avatar is fine.
Similar to https://github.com/gradio-app/gradio/issues/9655 I believe, but a special edge case. Also, I mostly notice the issue with rectangular images.
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
from time import sleep
AVATAR = "./car.png"
# Define a simple chatbot function
def chatbot_response(message, hist):
    sleep(10)
    return "Gradio is pretty cool!"

# Create a chat interface using gr.ChatInterface
chatbot = gr.ChatInterface(
    fn=chatbot_response,
    chatbot=gr.Chatbot(
        label="LLM",
        elem_id="chatbot",
        avatar_images=(
            None,
            AVATAR,
        ),
    ),
)

# Launch the chatbot
chatbot.launch()
```
### Screenshot



### Logs
```shell
```
### System Info
```shell
(base) carter.yancey@Yancy-XPS:~$ gradio environment
Gradio Environment Information:
------------------------------
Operating System: Linux
gradio version: 5.13.1
gradio_client version: 1.6.0
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 3.7.1
audioop-lts is not installed.
fastapi: 0.115.7
ffmpy: 0.3.2
gradio-client==1.6.0 is not installed.
httpx: 0.25.1
huggingface-hub: 0.27.1
jinja2: 3.1.2
markupsafe: 2.1.3
numpy: 1.26.2
orjson: 3.9.10
packaging: 23.2
pandas: 1.5.3
pillow: 10.0.0
pydantic: 2.5.1
pydub: 0.25.1
python-multipart: 0.0.20
pyyaml: 6.0.1
ruff: 0.2.2
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.45.3
tomlkit: 0.12.0
typer: 0.15.1
typing-extensions: 4.8.0
urllib3: 2.3.0
uvicorn: 0.24.0.post1
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2023.10.0
httpx: 0.25.1
huggingface-hub: 0.27.1
packaging: 23.2
typing-extensions: 4.8.0
websockets: 11.0.3
```
### Severity
I can work around it
|
closed
|
2025-02-11T18:31:28Z
|
2025-03-04T21:23:07Z
|
https://github.com/gradio-app/gradio/issues/10564
|
[
"bug",
"💬 Chatbot"
] |
CarterYancey
| 0
|
wandb/wandb
|
data-science
| 8,769
|
[Bug-App]: Histogram render not working
|
### Describe the bug
The web app doesn't display the beautiful histogram plots like it used to and renders rather ugly-looking teal rectangles instead.

OS: macOS Sequoia 15.1
Browser: Safari 18.1 and firefox 131.0.3 (tested both)
|
open
|
2024-11-05T08:08:57Z
|
2025-01-06T14:32:21Z
|
https://github.com/wandb/wandb/issues/8769
|
[
"ty:bug",
"a:app"
] |
ddonatien
| 15
|
scanapi/scanapi
|
rest-api
| 242
|
Publish Sphinx Documentation
|
## Publish Sphinx Documentation
#230 implemented auto-generated code documentation using [sphinx](http://sphinx-doc.org/).
We can run it locally by running
```shell
$ cd documentation
$ make html
```
And we can access it by opening the file `scanapi/documentation/build/html/index.html` in a browser.
This is great, but it would be nice to have this documentation published somewhere else. One option would be to publish it inside our website [scanapi.dev](https://scanapi.dev), repository: https://github.com/scanapi/website
|
closed
|
2020-07-26T16:45:26Z
|
2021-08-01T13:47:19Z
|
https://github.com/scanapi/scanapi/issues/242
|
[
"Documentation"
] |
camilamaia
| 13
|
Farama-Foundation/PettingZoo
|
api
| 524
|
Misc Needed Cooperative Pong Maintenance
|
(I'm only going by documentation for all this, I didn't check the code for these)
-"max_cycles" should obviously be an argument by custom in any butterfly environment
-The -100 reward penalty for the ball going off the screen seems like way too much relative to the other rewards available, to the point where it even makes interpreting learning graphs difficult. It should be like -10 by default, and it should be an environment argument.
-The reward allocation between agents is confusing. According to the docs, "If the ball stays within bounds, both agents receive a combined reward of 100 / max_cycles (default 0.11)." It should per 100 per agent, this isn't how any other environment does it and makes reading the reward curves confusing.
(note to Ben: this is much less important than the pistonball/waterworld fixes)
|
closed
|
2021-10-26T03:08:45Z
|
2021-11-30T06:44:28Z
|
https://github.com/Farama-Foundation/PettingZoo/issues/524
|
[] |
jkterry1
| 0
|
StackStorm/st2
|
automation
| 6,146
|
Custom Scopes for Key-Value Store + RBAC for that Scopes
|
Hi Community,
When a company has different teams sharing a st2 instance, it would be nice if each team could create its own scope. Then all members of Team1 could see all key/values of Team1, and all members of Team2 could see all key/values of Team2, but Team1 could not access Team2's keys and Team2 could not access Team1's keys.
1) Have custom scopes.
2) Allow setting an RBAC rule on a scope.
3) Allow a setting in a rule that selects the key-value scope the system should use for that rule.
What do you think about that idea?
|
open
|
2024-02-21T07:07:15Z
|
2024-02-26T17:28:36Z
|
https://github.com/StackStorm/st2/issues/6146
|
[
"feature"
] |
philipphomberger
| 1
|
mljar/mercury
|
jupyter
| 262
|
Is there OAuth2 support for the Mercury web service?
|
Or, if not, is it easy to integrate one from the user's side? I'm not sure whether Mercury uses a popular Python web server component or a self-made one.
|
closed
|
2023-04-28T02:58:34Z
|
2023-05-01T13:39:03Z
|
https://github.com/mljar/mercury/issues/262
|
[] |
xiamubobby
| 2
|
aio-libs-abandoned/aioredis-py
|
asyncio
| 505
|
Use alternate default "latest ID" for xread_group
|
When skipping the optional `latest_ids` kwarg to the `xread_group` command, the following error gets generated by the Redis server:
> `ERR The $ ID is meaningless in the context of XREADGROUP`
That's because both xread and xread_group share the `_xread` command builder which assumes `$` if latest ID isn't specified:
```py
if latest_ids is None:
    latest_ids = ['$'] * len(streams)
```
A fix could be to make the default latest ID a parameter to the internal _xread method.
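A minimal sketch of that idea (the parameter names besides `latest_ids` are assumptions; note that for `XREADGROUP` the natural default ID is `>`, meaning "entries never delivered to this consumer"):
```py
def _xread(self, streams, timeout=0, count=None, latest_ids=None,
           default_latest_id='$'):
    if latest_ids is None:
        latest_ids = [default_latest_id] * len(streams)
    ...  # build the rest of the command args as before

# xread keeps '$' as its default, while xread_group would call:
#   self._xread(streams, ..., default_latest_id='>')
```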
|
closed
|
2018-12-18T16:18:37Z
|
2021-03-18T23:53:44Z
|
https://github.com/aio-libs-abandoned/aioredis-py/issues/505
|
[
"resolved-via-latest"
] |
dcecile
| 0
|
hzwer/ECCV2022-RIFE
|
computer-vision
| 363
|
No way to specify arbitrary timestep?
|
Looking at the RIFE codebase/examples, it seems the number of interpolated frames created by RIFE is always a power of 2, as it just recursively splits the remaining frame pairs in half as it performs its interpolation. Is this correct? What if I want to interpolate 2 images with 7 intermediate steps between them, instead of just 2/4/8/etc. Is that possible?
|
closed
|
2024-05-28T05:21:40Z
|
2024-05-28T05:33:04Z
|
https://github.com/hzwer/ECCV2022-RIFE/issues/363
|
[] |
tyDiffusion
| 1
|
globaleaks/globaleaks-whistleblowing-software
|
sqlalchemy
| 4,107
|
Implementation of a functionality to integrate KeyCloak IDP provider
|
### Proposal
This ticket is about tracking research and development to integrate Keycloak as an identity provider.
### Motivation and context
The basic idea is to enable support for third-party authentication via Keycloak (and OpenID Connect providers in general), so that it can be used to implement internal user logins based on external corporate policies.
Requirement collected while working with:
- Italian National Authority for Anticorruption (ANAC) that aims at integrating GlobaLeaks with [Keycloak](https://www.keycloak.org/)
- Bank of Italy
- Spanish Ministry of Justice
|
open
|
2024-06-15T05:47:16Z
|
2025-03-24T13:46:25Z
|
https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/4107
|
[
"U: Admin",
"C: Client",
"C: Backend",
"T: Feature"
] |
evilaliv3
| 7
|
ydataai/ydata-profiling
|
data-science
| 1,264
|
Feature request: Add sample head and tail to minimal setting
|
### Missing functionality
It would be good if we could provide the sample head and tail with `minimal=True`.
### Proposed feature
With `minimal=True`, we would still be able to see the head and tail of the data.
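For example, something like this would be the goal (a sketch; whether the `samples` override works alongside `minimal=True` is the assumption here):
```python
import pandas as pd
from ydata_profiling import ProfileReport

df = pd.DataFrame({"a": range(100)})
# minimal mode, but explicitly keep the head/tail sample in the report
report = ProfileReport(df, minimal=True, samples={"head": 10, "tail": 10})
report.to_file("report.html")
```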
### Alternatives considered
_No response_
### Additional context
_No response_
|
closed
|
2023-02-08T13:20:19Z
|
2023-02-11T13:58:57Z
|
https://github.com/ydataai/ydata-profiling/issues/1264
|
[
"question/discussion ❓",
"needs-triage"
] |
monyoudom
| 2
|
dropbox/PyHive
|
sqlalchemy
| 434
|
Project is currently unsupported?
|
Hello,
could someone explain the "Project is currently unsupported" statement? I looked into the issues and the commit message that added that statement, but it is not explanatory.
**Does that mean the project is abandoned and there won't be new releases or fixes?**
I am looking into using the library in a new project, but I want to be sure I did not choose dead technology. If that is the case, could someone please point me to an alternative Hive API for python?
Best, Martin!
|
open
|
2022-03-25T13:36:16Z
|
2022-04-11T05:39:01Z
|
https://github.com/dropbox/PyHive/issues/434
|
[] |
mamiksik
| 1
|
piskvorky/gensim
|
nlp
| 2,961
|
Documentation of strip_punctuation vs strip_punctuation2 in gensim.parsing.preprocessing
|
Thanks for all the hard work on this fantastic library. I found a small quirk today, not really a bug, just a bit of a rough edge:
In `gensim.parsing` [preprocessing.py ](https://github.com/RaRe-Technologies/gensim/blob/e210f73c42c5df5a511ca27166cbc7d10970eab2/gensim/parsing/preprocessing.py#L121) `strip_punctuation2` is defined: `strip_punctuation2 = strip_punctuation`.
In the [documentation](https://radimrehurek.com/gensim/parsing/preprocessing.html) the description of [`strip_punctuation2`](https://radimrehurek.com/gensim/parsing/preprocessing.html#gensim.parsing.preprocessing.strip_punctuation2) is a duplication of [`strip_punctuation`](https://radimrehurek.com/gensim/parsing/preprocessing.html#gensim.parsing.preprocessing.strip_punctuation) rather than a statement of equality.
I noticed this while reading the documentation and, assuming I was missing an obvious distinction, attempted to hand-diff the docs for the two functions. When I gave up and flipped to the source, it became obvious how the two functions are related.
|
closed
|
2020-09-28T12:54:03Z
|
2021-06-29T01:44:31Z
|
https://github.com/piskvorky/gensim/issues/2961
|
[
"documentation"
] |
sciatro
| 4
|
pyg-team/pytorch_geometric
|
pytorch
| 9,437
|
TypeError: 'list' object is not callable
|
### 🐛 Describe the bug
```python
from torch.nn import Linear, ReLU, Dropout
from torch_geometric.nn import Sequential, GCNConv, JumpingKnowledge
from torch_geometric.nn import global_mean_pool
model = Sequential('x, edge_index, batch', [
    (Dropout(p=0.5), 'x -> x'),
    (GCNConv(2, 64), 'x, edge_index -> x1'),
    ReLU(inplace=True),
    (GCNConv(64, 64), 'x1, edge_index -> x2'),
    ReLU(inplace=True),
    (lambda x1, x2: [x1, x2], 'x1, x2 -> xs'),
    (JumpingKnowledge("cat", 64, num_layers=2), 'xs -> x'),
    (global_mean_pool, 'x, batch -> x'),
    Linear(2 * 64, 3),
]).to('cpu')
```
It throws `TypeError: 'list' object is not callable`
### Versions
PyTorch version: 2.2.2+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.27.2
Libc version: glibc-2.35
Python version: 3.9.19 | packaged by conda-forge | (main, Mar 20 2024, 12:50:21) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-6.2.0-39-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090 Ti
GPU 1: NVIDIA GeForce RTX 3090 Ti
Nvidia driver version: 550.54.14
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.6.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn.so.8.5.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.5.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.5.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.5.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.5.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.5.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.5.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper 3960X 24-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU max MHz: 3800.0000
CPU min MHz: 2200.0000
BogoMIPS: 7585.68
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 768 KiB (24 instances)
L1i cache: 768 KiB (24 instances)
L2 cache: 12 MiB (24 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-47
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] fast-pytorch-kmeans==0.2.0.1
[pip3] flake8==7.0.0
[pip3] mypy==1.9.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.4
[pip3] numpydoc==1.6.0
[pip3] paddle2onnx==1.0.6
[pip3] pytorchts==0.6.0
[pip3] reformer-pytorch==1.4.4
[pip3] torch==2.2.2
[pip3] torch_cluster==1.6.3+pt22cu121
[pip3] torch-ema==0.3
[pip3] torch-geometric==2.6.0
[pip3] torch_scatter==2.1.2+pt22cu121
[pip3] torch_sparse==0.6.18+pt22cu121
[pip3] torch_spline_conv==1.2.2+pt22cu121
[pip3] torchaudio==2.2.2
[pip3] torchmetrics==0.10.1
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.17.2
[pip3] triton==2.2.0
[conda] fast-pytorch-kmeans 0.2.0.1 pypi_0 pypi
[conda] nomkl 1.0 h5ca1d4c_0 conda-forge
[conda] numpy 1.24.4 pypi_0 pypi
[conda] numpydoc 1.6.0 pyhd8ed1ab_0 conda-forge
[conda] pytorchts 0.6.0 pypi_0 pypi
[conda] reformer-pytorch 1.4.4 pypi_0 pypi
[conda] torch 2.2.2 pypi_0 pypi
[conda] torch-cluster 1.6.3+pt22cu121 pypi_0 pypi
[conda] torch-ema 0.3 pypi_0 pypi
[conda] torch-geometric 2.6.0 pypi_0 pypi
[conda] torch-scatter 2.1.2+pt22cu121 pypi_0 pypi
[conda] torch-sparse 0.6.18+pt22cu121 pypi_0 pypi
[conda] torch-spline-conv 1.2.2+pt22cu121 pypi_0 pypi
[conda] torchaudio 2.2.2 pypi_0 pypi
[conda] torchmetrics 0.10.1 pypi_0 pypi
[conda] torchsummary 1.5.1 pypi_0 pypi
[conda] torchvision 0.17.2 pypi_0 pypi
[conda] triton 2.2.0 pypi_0 pypi
|
closed
|
2024-06-19T11:58:12Z
|
2024-06-24T12:01:12Z
|
https://github.com/pyg-team/pytorch_geometric/issues/9437
|
[
"bug",
"nn"
] |
nowyouseemejoe
| 1
|
vastsa/FileCodeBox
|
fastapi
| 222
|
File reported as exceeding the maximum retention time after upload
|
Docker latest 4257a45f985e
After the file is uploaded, a message says it has exceeded the maximum retention time.
The upload validation logic needs adjusting: the maximum retention time should be checked before the user clicks to upload, to avoid wasting a large file upload.

|
closed
|
2024-11-27T00:58:37Z
|
2025-03-02T07:27:14Z
|
https://github.com/vastsa/FileCodeBox/issues/222
|
[] |
wuwei-yu
| 2
|
matplotlib/matplotlib
|
data-science
| 29,063
|
[MNT]: Add a default `markevery` for lines in rcParams
|
### Summary
`markevery` is the only property regarding lines for which it is not possible to set a default value in `rcParams`.
Being able to set a default value for this property would be convenient, rather than needing to specify it for each plot.
### Proposed fix
Add `lines.markevery : None` to the default `matplotlibrc`
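With that in place, usage would look something like this (hypothetical, since the rcParam does not exist yet):
```python
import matplotlib.pyplot as plt

plt.rcParams['lines.markevery'] = 10  # proposed default: mark every 10th point
plt.plot(range(100), marker='o')
plt.show()
```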
|
closed
|
2024-11-03T00:28:59Z
|
2024-11-03T02:20:45Z
|
https://github.com/matplotlib/matplotlib/issues/29063
|
[
"status: duplicate",
"Maintenance"
] |
LorenzoPeri17
| 1
|
proplot-dev/proplot
|
data-visualization
| 60
|
Cannot add text to projection axes
|
Currently, the `text` wrapper for `proplot` cannot handle cartopy transforms on geoaxes, making it impossible to accurately place text on a map projection. For `matplotlib`, one just passes, for example, `transform=ccrs.PlateCarree()` to `ax.text()` to interpret the values as coordinates on the map:
```python
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
import cartopy.feature as cfeature
f, ax = plt.subplots(subplot_kw=dict(projection=ccrs.SouthPolarStereo(central_longitude=-70)))
ax.set_extent([-180, 180, -90, -40], ccrs.PlateCarree())
ax.add_feature(cfeature.LAND, color='#d3d3d3')
ax.text(120, -83, 'test', transform=ccrs.PlateCarree())
```
<img width="261" alt="Screen Shot 2019-10-31 at 2 39 36 PM" src="https://user-images.githubusercontent.com/8881170/67984429-4c272c00-fbec-11e9-8b44-00af26912eb2.png">
`proplot` breaks when one passes `ccrs.PlateCarree()` to the `transform` keyword of the text wrapper, and the `data`, `axes`, and `figure` options don't solve the problem. Perhaps an extra keyword such as `geoaxes` could handle this? (A possible interim workaround is sketched after the example below.)
```python
import proplot as plot
f, ax = plot.subplots(proj='spstere', proj_kw={'central_longitude': -70})
ax.text(120, -83, 'test', color='w', fontname='Helvetica')
ax.format(land=True, boundinglat=-40)
```
<img width="241" alt="Screen Shot 2019-10-31 at 2 39 38 PM" src="https://user-images.githubusercontent.com/8881170/67984437-4fbab300-fbec-11e9-836f-04b046a64a90.png">
|
closed
|
2019-10-31T20:39:26Z
|
2019-11-01T14:55:20Z
|
https://github.com/proplot-dev/proplot/issues/60
|
[
"bug"
] |
bradyrx
| 1
|
onnx/onnx
|
tensorflow
| 5,794
|
What is the correct way to make a tensor with a dynamic dimension for a Reshape operator?
|
# Ask a Question
### Question
I am trying to add a Reshape node to a BERT onnx model that works with dynamic shapes. The reshape op should reshape a rank 3 tensor to a rank 2. The input to the reshape is of shape [unk__2,unk__3,768] and I need to collapse the first two dynamic dimensions into one and keep the last fixed dimension such as [[unk__2 * unk__3], 768]. How can I specify a dynamic dimension when making a tensor with the onnx helper?
### Further information
When running the code snippet I provided below, I get the following error:
```
raise TypeError(f"'{value}' is not an accepted attribute value.")
TypeError: 'name: "shape"
type {
tensor_type {
elem_type: 7
shape {
dim {
dim_value: -1
}
dim {
dim_value: 768
}
}
}
}
' is not an accepted attribute value.
```
- Is this issue related to a specific model?
**Model name**: bert-base
**Model opset**: 18
### Notes
Code snippet:
```
# Create a Constant node that contains the target shape
shape_tensor = helper.make_tensor_value_info(name='shape', elem_type=onnx.TensorProto.INT64, shape=(-1,768))
shape_node = helper.make_node(
'Constant',
inputs=[],
outputs=[f'shape_{i}_output'],
value=shape_tensor,
name=f'shape_{i}'
)
# Create a Reshape node
reshape_node = helper.make_node(
'Reshape',
inputs=[mm_node.input[0], f'shape_{i}_output'],
outputs=[f'reshaped_output_{i}'],
name=f'Reshape_{i}'
)
```
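For reference, a likely fix (a sketch, not verified against this exact model): the `value` attribute of a `Constant` node must be a `TensorProto` built with `helper.make_tensor`, not the `ValueInfoProto` returned by `make_tensor_value_info`. The dynamic dimension is expressed by putting `-1` in the tensor's *data*; the shape tensor itself is a 1-D INT64 tensor of length 2:
```python
import onnx
from onnx import helper

# Reshape target as data: -1 collapses the two dynamic dims, 768 stays fixed.
shape_tensor = helper.make_tensor(
    name='shape',
    data_type=onnx.TensorProto.INT64,
    dims=[2],
    vals=[-1, 768],
)
shape_node = helper.make_node(
    'Constant',
    inputs=[],
    outputs=['shape_output'],
    value=shape_tensor,
)
```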
|
closed
|
2023-12-07T06:51:41Z
|
2025-01-02T06:44:37Z
|
https://github.com/onnx/onnx/issues/5794
|
[
"question",
"stale"
] |
ria143
| 1
|
robusta-dev/robusta
|
automation
| 1,176
|
Custom CA certificate cannot be set
|
**Describe the bug**
Once a custom CA certificate is specified with
```
runner:
certificate:
```
The runner container will crash with:
```Matplotlib created a temporary cache directory at /tmp/matplotlib-d2tsy8fk because the default path (/.config/matplotlib) is not a writable directory; it is highly recommended to set the MPLCONFIGDIR environment variable to a writable directory, in particular to speed up the import of Matplotlib and to better support multiprocessing.
setting up colored logging
2023-11-17 22:29:20.541 INFO logger initialized using INFO log level
Traceback (most recent call last):
File "/usr/local/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/local/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/app/src/robusta/runner/main.py", line 56, in <module>
main()
File "/app/src/robusta/runner/main.py", line 26, in main
if add_custom_certificate(ADDITIONAL_CERTIFICATE):
File "/app/src/robusta/runner/ssl_utils.py", line 10, in add_custom_certificate
with open(certifi.where(), "ab") as outfile:
PermissionError: [Errno 13] Permission denied: '/usr/local/lib/python3.9/site-packages/certifi/cacert.pem'
```
**To Reproduce**
Steps to reproduce the behavior:
1. Set `runner.certificate` in values
**Expected behavior**
`prometheus-k8s` connection successful.
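**Other notes**
A possible direction for a fix (a minimal sketch, not the actual robusta code): append the custom CA to a writable copy of certifi's bundle instead of the read-only file in site-packages, and point `requests` at it:
```python
import os
import shutil
import tempfile

import certifi

def add_custom_certificate(custom_ca_pem: bytes) -> None:
    # Copy the bundled CAs to a writable location, then append ours.
    bundle = os.path.join(tempfile.mkdtemp(), "cacert.pem")
    shutil.copy(certifi.where(), bundle)
    with open(bundle, "ab") as outfile:
        outfile.write(custom_ca_pem)
    # requests/certifi pick up the writable bundle from here on.
    os.environ["REQUESTS_CA_BUNDLE"] = bundle
```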
|
closed
|
2023-11-17T22:32:25Z
|
2024-01-08T08:38:42Z
|
https://github.com/robusta-dev/robusta/issues/1176
|
[
"bug"
] |
bear-redhat
| 3
|
pytorch/vision
|
computer-vision
| 8,478
|
Put back MPS builds
|
(this is a follow-up and more up-to-date version of https://github.com/pytorch/vision/issues/8456)
The M1 CI jobs were broken for ~1 week (https://github.com/pytorch/vision/issues/8456) and it turns out the problem was caused by the MPS build. We deactivated the MPS builds in https://github.com/pytorch/vision/pull/8472 and the M1 jobs (all using `macos-m1-stable`) are now green.
We have to put back the MPS build before the release though, otherwise torchvision won't provide MPS-compatible custom ops.
In https://github.com/pytorch/vision/pull/8476 (macos-m1-stable), https://github.com/pytorch/vision/pull/8473 (macos-m1-13) and https://github.com/pytorch/vision/pull/8477 (macos-m1-14) I'm trying to add back those MPS builds, but they all fail with the same error as previously seen back in https://github.com/pytorch/vision/issues/8456:
```
File "/Users/ec2-user/runner/_work/vision/vision/pytorch/vision/test/smoke_test.py", line 7, in <module>
import torchvision
File "/Users/ec2-user/runner/_work/vision/vision/pytorch/vision/torchvision/__init__.py", line 10, in <module>
from torchvision import _meta_registrations, datasets, io, models, ops, transforms, utils # usort:skip
File "/Users/ec2-user/runner/_work/vision/vision/pytorch/vision/torchvision/_meta_registrations.py", line 164, in <module>
def meta_nms(dets, scores, iou_threshold):
File "/opt/homebrew/Caskroom/miniconda/base/envs/ci/lib/python3.10/site-packages/torch/library.py", line 653, in register
use_lib._register_fake(op_name, func, _stacklevel=stacklevel + 1)
File "/opt/homebrew/Caskroom/miniconda/base/envs/ci/lib/python3.10/site-packages/torch/library.py", line 153, in _register_fake
handle = entry.abstract_impl.register(func_to_register, source)
File "/opt/homebrew/Caskroom/miniconda/base/envs/ci/lib/python3.10/site-packages/torch/_library/abstract_impl.py", line 30, in register
if torch._C._dispatch_has_kernel_for_dispatch_key(self.qualname, "Meta"):
RuntimeError: operator torchvision::nms does not exist
```
CC @malfet @huydhn
|
closed
|
2024-06-07T08:39:54Z
|
2024-06-10T13:24:38Z
|
https://github.com/pytorch/vision/issues/8478
|
[] |
NicolasHug
| 6
|
mljar/mercury
|
data-visualization
| 281
|
Don't run the book at all until "Run" is pressed
|
Hi - is there a way you can prevent running the notebook until the green "Run" button is pressed?
The use case is that the notebook runs some expensive queries and we don't want to do that until the user has filled out the parameters and hits the Run button.
I've already set `continuous_update=False`, but that doesn't stop the initial run of the notebook.
There are hacky workarounds, like calling `Stop()` if the input is empty (sketched below), but it would be nice if this were supported out of the box. And possibly, for this use case, the `app` declaration could go on the first line.
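For concreteness, the hacky guard would look roughly like this (a sketch; the widget names are assumptions, not from a real app):
```python
import mercury as mr

app = mr.App(title="Expensive report")
query = mr.Text(label="Query", value="")

# Bail out on the initial run, before any expensive queries,
# until the user has filled in the parameters and hit Run.
if not query.value:
    mr.Stop()
```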
|
closed
|
2023-05-19T19:38:48Z
|
2023-05-22T07:47:55Z
|
https://github.com/mljar/mercury/issues/281
|
[] |
kapily
| 2
|
schemathesis/schemathesis
|
pytest
| 2,613
|
[BUG] '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate in certificate chain (_ssl.c:1006)' with verify=False
|
### Checklist
- [x] I checked the [FAQ section](https://schemathesis.readthedocs.io/en/stable/faq.html#frequently-asked-questions) of the documentation
- [x] I looked for similar issues in the [issue tracker](https://github.com/schemathesis/schemathesis/issues)
- [x] I am using the latest version of Schemathesis
### Describe the bug
I am trying to use schemathesis against a base_url of a real service listening for incoming requests - I need to use the https protocol, but I don't want to / can't fetch the CA cert every time.
In order to proceed with testing I've specified `response = case.call_and_validate(verify=False)` in my test case definition, but in the logs I'm still getting errors related to SSL cert verification.
I've also tried to split the call in two (i.e. `response = case.call(verify=False); case.validate_response(response)`), but I'm seeing the same issue and `validate_response` does not support the `verify` argument.
This is the openAPI spec I'm using: https://raw.githubusercontent.com/kubeflow/model-registry/main/api/openapi/model-registry.yaml
This is the full trace of one such failure:
```
self = <urllib3.connectionpool.HTTPSConnectionPool object at 0x10fe47290>, conn = <urllib3.connection.HTTPSConnection object at 0x10fe44590>, method = 'GET'
url = '/api/model_registry/v1alpha3/serving_environments/0/inference_services?name=entity-name&externalId=10&pageSize=100&orderBy=ID&sortOrder=DESC', body = None
headers = {'User-Agent': 'schemathesis/3.38.9', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'X-Schemathesis-TestCaseId': 'BEwkUo'}
retries = Retry(total=0, connect=None, read=False, redirect=None, status=None), timeout = Timeout(connect=10.0, read=10.0, total=None), chunked = False, response_conn = <urllib3.connection.HTTPSConnection object at 0x10fe44590>
preload_content = False, decode_content = False, enforce_content_length = True
def _make_request(
self,
conn: BaseHTTPConnection,
method: str,
url: str,
body: _TYPE_BODY | None = None,
headers: typing.Mapping[str, str] | None = None,
retries: Retry | None = None,
timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
chunked: bool = False,
response_conn: BaseHTTPConnection | None = None,
preload_content: bool = True,
decode_content: bool = True,
enforce_content_length: bool = True,
) -> BaseHTTPResponse:
"""
Perform a request on a given urllib connection object taken from our
pool.
:param conn:
a connection from one of our connection pools
:param method:
HTTP request method (such as GET, POST, PUT, etc.)
:param url:
The URL to perform the request on.
:param body:
Data to send in the request body, either :class:`str`, :class:`bytes`,
an iterable of :class:`str`/:class:`bytes`, or a file-like object.
:param headers:
Dictionary of custom headers to send, such as User-Agent,
If-None-Match, etc. If None, pool headers are used. If provided,
these headers completely replace any pool-specific headers.
:param retries:
Configure the number of retries to allow before raising a
:class:`~urllib3.exceptions.MaxRetryError` exception.
Pass ``None`` to retry until you receive a response. Pass a
:class:`~urllib3.util.retry.Retry` object for fine-grained control
over different types of retries.
Pass an integer number to retry connection errors that many times,
but no other types of errors. Pass zero to never retry.
If ``False``, then retries are disabled and any exception is raised
immediately. Also, instead of raising a MaxRetryError on redirects,
the redirect response will be returned.
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
:param timeout:
If specified, overrides the default timeout for this one
request. It may be a float (in seconds) or an instance of
:class:`urllib3.util.Timeout`.
:param chunked:
If True, urllib3 will send the body using chunked transfer
encoding. Otherwise, urllib3 will send the body using the standard
content-length form. Defaults to False.
:param response_conn:
Set this to ``None`` if you will handle releasing the connection or
set the connection to have the response release it.
:param preload_content:
If True, the response's body will be preloaded during construction.
:param decode_content:
If True, will attempt to decode the body based on the
'content-encoding' header.
:param enforce_content_length:
Enforce content length checking. Body returned by server must match
value of Content-Length header, if present. Otherwise, raise error.
"""
self.num_requests += 1
timeout_obj = self._get_timeout(timeout)
timeout_obj.start_connect()
conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout)
try:
# Trigger any extra validation we need to do.
try:
> self._validate_conn(conn)
.venv/lib/python3.11/site-packages/urllib3/connectionpool.py:466:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.venv/lib/python3.11/site-packages/urllib3/connectionpool.py:1095: in _validate_conn
conn.connect()
.venv/lib/python3.11/site-packages/urllib3/connection.py:730: in connect
sock_and_verified = _ssl_wrap_socket_and_match_hostname(
.venv/lib/python3.11/site-packages/urllib3/connection.py:909: in _ssl_wrap_socket_and_match_hostname
ssl_sock = ssl_wrap_socket(
.venv/lib/python3.11/site-packages/urllib3/util/ssl_.py:469: in ssl_wrap_socket
ssl_sock = _ssl_wrap_socket_impl(sock, context, tls_in_tls, server_hostname)
.venv/lib/python3.11/site-packages/urllib3/util/ssl_.py:513: in _ssl_wrap_socket_impl
return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ssl.py:517: in wrap_socket
return self.sslsocket_class._create(
/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ssl.py:1104: in _create
self.do_handshake()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <ssl.SSLSocket [closed] fd=-1, family=2, type=1, proto=0>, block = False
@_sslcopydoc
def do_handshake(self, block=False):
self._check_connected()
timeout = self.gettimeout()
try:
if timeout == 0.0 and block:
self.settimeout(None)
> self._sslobj.do_handshake()
E ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate in certificate chain (_ssl.c:1006)
/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ssl.py:1382: SSLCertVerificationError
During handling of the above exception, another exception occurred:
self = <urllib3.connectionpool.HTTPSConnectionPool object at 0x10fe47290>, method = 'GET', url = '/api/model_registry/v1alpha3/serving_environments/0/inference_services?name=entity-name&externalId=10&pageSize=100&orderBy=ID&sortOrder=DESC'
body = None, headers = {'User-Agent': 'schemathesis/3.38.9', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'X-Schemathesis-TestCaseId': 'BEwkUo'}
retries = Retry(total=0, connect=None, read=False, redirect=None, status=None), redirect = False, assert_same_host = False, timeout = Timeout(connect=10.0, read=10.0, total=None), pool_timeout = None, release_conn = False, chunked = False
body_pos = None, preload_content = False, decode_content = False, response_kw = {}
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/api/model_registry/v1alpha3/serving_environments/0/inference_services', query='name=entity-name&externalId=10&pageSize=100&orderBy=ID&sortOrder=DESC', fragment=None)
destination_scheme = None, conn = None, release_this_conn = True, http_tunnel_required = False, err = None, clean_exit = False
def urlopen( # type: ignore[override]
self,
method: str,
url: str,
body: _TYPE_BODY | None = None,
headers: typing.Mapping[str, str] | None = None,
retries: Retry | bool | int | None = None,
redirect: bool = True,
assert_same_host: bool = True,
timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
pool_timeout: int | None = None,
release_conn: bool | None = None,
chunked: bool = False,
body_pos: _TYPE_BODY_POSITION | None = None,
preload_content: bool = True,
decode_content: bool = True,
**response_kw: typing.Any,
) -> BaseHTTPResponse:
"""
Get a connection from the pool and perform an HTTP request. This is the
lowest level call for making a request, so you'll need to specify all
the raw details.
.. note::
More commonly, it's appropriate to use a convenience method
such as :meth:`request`.
.. note::
`release_conn` will only behave as expected if
`preload_content=False` because we want to make
`preload_content=False` the default behaviour someday soon without
breaking backwards compatibility.
:param method:
HTTP request method (such as GET, POST, PUT, etc.)
:param url:
The URL to perform the request on.
:param body:
Data to send in the request body, either :class:`str`, :class:`bytes`,
an iterable of :class:`str`/:class:`bytes`, or a file-like object.
:param headers:
Dictionary of custom headers to send, such as User-Agent,
If-None-Match, etc. If None, pool headers are used. If provided,
these headers completely replace any pool-specific headers.
:param retries:
Configure the number of retries to allow before raising a
:class:`~urllib3.exceptions.MaxRetryError` exception.
If ``None`` (default) will retry 3 times, see ``Retry.DEFAULT``. Pass a
:class:`~urllib3.util.retry.Retry` object for fine-grained control
over different types of retries.
Pass an integer number to retry connection errors that many times,
but no other types of errors. Pass zero to never retry.
If ``False``, then retries are disabled and any exception is raised
immediately. Also, instead of raising a MaxRetryError on redirects,
the redirect response will be returned.
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
:param redirect:
If True, automatically handle redirects (status codes 301, 302,
303, 307, 308). Each redirect counts as a retry. Disabling retries
will disable redirect, too.
:param assert_same_host:
If ``True``, will make sure that the host of the pool requests is
consistent else will raise HostChangedError. When ``False``, you can
use the pool on an HTTP proxy and request foreign hosts.
:param timeout:
If specified, overrides the default timeout for this one
request. It may be a float (in seconds) or an instance of
:class:`urllib3.util.Timeout`.
:param pool_timeout:
If set and the pool is set to block=True, then this method will
block for ``pool_timeout`` seconds and raise EmptyPoolError if no
connection is available within the time period.
:param bool preload_content:
If True, the response's body will be preloaded into memory.
:param bool decode_content:
If True, will attempt to decode the body based on the
'content-encoding' header.
:param release_conn:
If False, then the urlopen call will not release the connection
back into the pool once a response is received (but will release if
you read the entire contents of the response such as when
`preload_content=True`). This is useful if you're not preloading
the response's content immediately. You will need to call
``r.release_conn()`` on the response ``r`` to return the connection
back into the pool. If None, it takes the value of ``preload_content``
which defaults to ``True``.
:param bool chunked:
If True, urllib3 will send the body using chunked transfer
encoding. Otherwise, urllib3 will send the body using the standard
content-length form. Defaults to False.
:param int body_pos:
Position to seek to in file-like body in the event of a retry or
redirect. Typically this won't need to be set because urllib3 will
auto-populate the value when needed.
"""
parsed_url = parse_url(url)
destination_scheme = parsed_url.scheme
if headers is None:
headers = self.headers
if not isinstance(retries, Retry):
retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
if release_conn is None:
release_conn = preload_content
# Check host
if assert_same_host and not self.is_same_host(url):
raise HostChangedError(self, url, retries)
# Ensure that the URL we're connecting to is properly encoded
if url.startswith("/"):
url = to_str(_encode_target(url))
else:
url = to_str(parsed_url.url)
conn = None
# Track whether `conn` needs to be released before
# returning/raising/recursing. Update this variable if necessary, and
# leave `release_conn` constant throughout the function. That way, if
# the function recurses, the original value of `release_conn` will be
# passed down into the recursive call, and its value will be respected.
#
# See issue #651 [1] for details.
#
# [1] <https://github.com/urllib3/urllib3/issues/651>
release_this_conn = release_conn
http_tunnel_required = connection_requires_http_tunnel(
self.proxy, self.proxy_config, destination_scheme
)
# Merge the proxy headers. Only done when not using HTTP CONNECT. We
# have to copy the headers dict so we can safely change it without those
# changes being reflected in anyone else's copy.
if not http_tunnel_required:
headers = headers.copy() # type: ignore[attr-defined]
headers.update(self.proxy_headers) # type: ignore[union-attr]
# Must keep the exception bound to a separate variable or else Python 3
# complains about UnboundLocalError.
err = None
# Keep track of whether we cleanly exited the except block. This
# ensures we do proper cleanup in finally.
clean_exit = False
# Rewind body position, if needed. Record current position
# for future rewinds in the event of a redirect/retry.
body_pos = set_file_position(body, body_pos)
try:
# Request a connection from the queue.
timeout_obj = self._get_timeout(timeout)
conn = self._get_conn(timeout=pool_timeout)
conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment]
# Is this a closed/new connection that requires CONNECT tunnelling?
if self.proxy is not None and http_tunnel_required and conn.is_closed:
try:
self._prepare_proxy(conn)
except (BaseSSLError, OSError, SocketTimeout) as e:
self._raise_timeout(
err=e, url=self.proxy.url, timeout_value=conn.timeout
)
raise
# If we're going to release the connection in ``finally:``, then
# the response doesn't need to know about the connection. Otherwise
# it will also try to release it and we'll have a double-release
# mess.
response_conn = conn if not release_conn else None
# Make the request on the HTTPConnection object
> response = self._make_request(
conn,
method,
url,
timeout=timeout_obj,
body=body,
headers=headers,
chunked=chunked,
retries=retries,
response_conn=response_conn,
preload_content=preload_content,
decode_content=decode_content,
**response_kw,
)
.venv/lib/python3.11/site-packages/urllib3/connectionpool.py:789:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <urllib3.connectionpool.HTTPSConnectionPool object at 0x10fe47290>, conn = <urllib3.connection.HTTPSConnection object at 0x10fe44590>, method = 'GET'
url = '/api/model_registry/v1alpha3/serving_environments/0/inference_services?name=entity-name&externalId=10&pageSize=100&orderBy=ID&sortOrder=DESC', body = None
headers = {'User-Agent': 'schemathesis/3.38.9', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive', 'Content-Type': 'application/json', 'X-Schemathesis-TestCaseId': 'BEwkUo'}
retries = Retry(total=0, connect=None, read=False, redirect=None, status=None), timeout = Timeout(connect=10.0, read=10.0, total=None), chunked = False, response_conn = <urllib3.connection.HTTPSConnection object at 0x10fe44590>
preload_content = False, decode_content = False, enforce_content_length = True
def _make_request(
self,
conn: BaseHTTPConnection,
method: str,
url: str,
body: _TYPE_BODY | None = None,
headers: typing.Mapping[str, str] | None = None,
retries: Retry | None = None,
timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
chunked: bool = False,
response_conn: BaseHTTPConnection | None = None,
preload_content: bool = True,
decode_content: bool = True,
enforce_content_length: bool = True,
) -> BaseHTTPResponse:
"""
Perform a request on a given urllib connection object taken from our
pool.
:param conn:
a connection from one of our connection pools
:param method:
HTTP request method (such as GET, POST, PUT, etc.)
:param url:
The URL to perform the request on.
:param body:
Data to send in the request body, either :class:`str`, :class:`bytes`,
an iterable of :class:`str`/:class:`bytes`, or a file-like object.
:param headers:
Dictionary of custom headers to send, such as User-Agent,
If-None-Match, etc. If None, pool headers are used. If provided,
these headers completely replace any pool-specific headers.
:param retries:
Configure the number of retries to allow before raising a
:class:`~urllib3.exceptions.MaxRetryError` exception.
Pass ``None`` to retry until you receive a response. Pass a
:class:`~urllib3.util.retry.Retry` object for fine-grained control
over different types of retries.
Pass an integer number to retry connection errors that many times,
but no other types of errors. Pass zero to never retry.
If ``False``, then retries are disabled and any exception is raised
immediately. Also, instead of raising a MaxRetryError on redirects,
the redirect response will be returned.
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
:param timeout:
If specified, overrides the default timeout for this one
request. It may be a float (in seconds) or an instance of
:class:`urllib3.util.Timeout`.
:param chunked:
If True, urllib3 will send the body using chunked transfer
encoding. Otherwise, urllib3 will send the body using the standard
content-length form. Defaults to False.
:param response_conn:
Set this to ``None`` if you will handle releasing the connection or
set the connection to have the response release it.
:param preload_content:
If True, the response's body will be preloaded during construction.
:param decode_content:
If True, will attempt to decode the body based on the
'content-encoding' header.
:param enforce_content_length:
Enforce content length checking. Body returned by server must match
value of Content-Length header, if present. Otherwise, raise error.
"""
self.num_requests += 1
timeout_obj = self._get_timeout(timeout)
timeout_obj.start_connect()
conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout)
try:
# Trigger any extra validation we need to do.
try:
self._validate_conn(conn)
except (SocketTimeout, BaseSSLError) as e:
self._raise_timeout(err=e, url=url, timeout_value=conn.timeout)
raise
# _validate_conn() starts the connection to an HTTPS proxy
# so we need to wrap errors with 'ProxyError' here too.
except (
OSError,
NewConnectionError,
TimeoutError,
BaseSSLError,
CertificateError,
SSLError,
) as e:
new_e: Exception = e
if isinstance(e, (BaseSSLError, CertificateError)):
new_e = SSLError(e)
# If the connection didn't successfully connect to it's proxy
# then there
if isinstance(
new_e, (OSError, NewConnectionError, TimeoutError, SSLError)
) and (conn and conn.proxy and not conn.has_connected_to_proxy):
new_e = _wrap_proxy_error(new_e, conn.proxy.scheme)
> raise new_e
E urllib3.exceptions.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate in certificate chain (_ssl.c:1006)
.venv/lib/python3.11/site-packages/urllib3/connectionpool.py:490: SSLError
The above exception was the direct cause of the following exception:
self = <requests.adapters.HTTPAdapter object at 0x10f494410>, request = <PreparedRequest [GET]>, stream = False, timeout = Timeout(connect=10.0, read=10.0, total=None), verify = True, cert = None, proxies = OrderedDict()
def send(
self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
):
"""Sends PreparedRequest object. Returns Response object.
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
:param stream: (optional) Whether to stream the request content.
:param timeout: (optional) How long to wait for the server to send
data before giving up, as a float, or a :ref:`(connect timeout,
read timeout) <timeouts>` tuple.
:type timeout: float or tuple or urllib3 Timeout object
:param verify: (optional) Either a boolean, in which case it controls whether
we verify the server's TLS certificate, or a string, in which case it
must be a path to a CA bundle to use
:param cert: (optional) Any user-provided SSL certificate to be trusted.
:param proxies: (optional) The proxies dictionary to apply to the request.
:rtype: requests.Response
"""
try:
conn = self.get_connection_with_tls_context(
request, verify, proxies=proxies, cert=cert
)
except LocationValueError as e:
raise InvalidURL(e, request=request)
self.cert_verify(conn, request.url, verify, cert)
url = self.request_url(request, proxies)
self.add_headers(
request,
stream=stream,
timeout=timeout,
verify=verify,
cert=cert,
proxies=proxies,
)
chunked = not (request.body is None or "Content-Length" in request.headers)
if isinstance(timeout, tuple):
try:
connect, read = timeout
timeout = TimeoutSauce(connect=connect, read=read)
except ValueError:
raise ValueError(
f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
f"or a single float to set both timeouts to the same value."
)
elif isinstance(timeout, TimeoutSauce):
pass
else:
timeout = TimeoutSauce(connect=timeout, read=timeout)
try:
> resp = conn.urlopen(
method=request.method,
url=url,
body=request.body,
headers=request.headers,
redirect=False,
assert_same_host=False,
preload_content=False,
decode_content=False,
retries=self.max_retries,
timeout=timeout,
chunked=chunked,
)
.venv/lib/python3.11/site-packages/requests/adapters.py:667:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.venv/lib/python3.11/site-packages/urllib3/connectionpool.py:843: in urlopen
retries = retries.increment(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = Retry(total=0, connect=None, read=False, redirect=None, status=None), method = 'GET', url = '/api/model_registry/v1alpha3/serving_environments/0/inference_services?name=entity-name&externalId=10&pageSize=100&orderBy=ID&sortOrder=DESC'
response = None, error = SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate in certificate chain (_ssl.c:1006)'))
_pool = <urllib3.connectionpool.HTTPSConnectionPool object at 0x10fe47290>, _stacktrace = <traceback object at 0x11a243e00>
def increment(
self,
method: str | None = None,
url: str | None = None,
response: BaseHTTPResponse | None = None,
error: Exception | None = None,
_pool: ConnectionPool | None = None,
_stacktrace: TracebackType | None = None,
) -> Self:
"""Return a new Retry object with incremented retry counters.
:param response: A response object, or None, if the server did not
return a response.
:type response: :class:`~urllib3.response.BaseHTTPResponse`
:param Exception error: An error encountered during the request, or
None if the response was received successfully.
:return: A new ``Retry`` object.
"""
if self.total is False and error:
# Disabled, indicate to re-raise the error.
raise reraise(type(error), error, _stacktrace)
total = self.total
if total is not None:
total -= 1
connect = self.connect
read = self.read
redirect = self.redirect
status_count = self.status
other = self.other
cause = "unknown"
status = None
redirect_location = None
if error and self._is_connection_error(error):
# Connect retry?
if connect is False:
raise reraise(type(error), error, _stacktrace)
elif connect is not None:
connect -= 1
elif error and self._is_read_error(error):
# Read retry?
if read is False or method is None or not self._is_method_retryable(method):
raise reraise(type(error), error, _stacktrace)
elif read is not None:
read -= 1
elif error:
# Other retry?
if other is not None:
other -= 1
elif response and response.get_redirect_location():
# Redirect retry?
if redirect is not None:
redirect -= 1
cause = "too many redirects"
response_redirect_location = response.get_redirect_location()
if response_redirect_location:
redirect_location = response_redirect_location
status = response.status
else:
# Incrementing because of a server error like a 500 in
# status_forcelist and the given method is in the allowed_methods
cause = ResponseError.GENERIC_ERROR
if response and response.status:
if status_count is not None:
status_count -= 1
cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status)
status = response.status
history = self.history + (
RequestHistory(method, url, error, status, redirect_location),
)
new_retry = self.new(
total=total,
connect=connect,
read=read,
redirect=redirect,
status=status_count,
other=other,
history=history,
)
if new_retry.is_exhausted():
reason = error or ResponseError(cause)
> raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
E urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='<SNIP>', port=443): Max retries exceeded with url: /api/model_registry/v1alpha3/serving_environments/0/inference_services?name=entity-name&externalId=10&pageSize=100&orderBy=ID&sortOrder=DESC (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate in certificate chain (_ssl.c:1006)')))
.venv/lib/python3.11/site-packages/urllib3/util/retry.py:519: MaxRetryError
During handling of the above exception, another exception occurred:
admin_client_token = '<SNIP>'
@wraps(test)
> def test_function(*args: Any, **kwargs: Any) -> Any:
.venv/lib/python3.11/site-packages/schemathesis/_hypothesis.py:80:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.venv/lib/python3.11/site-packages/hypothesis/core.py:1431: in _raise_to_user
raise the_error_hypothesis_found
tests/model_registry/test_rest_api.py:13: in test_mr_api
response = case.call_and_validate(verify=False)
.venv/lib/python3.11/site-packages/schemathesis/specs/openapi/checks.py:407: in ignored_auth
new_response = case.operation.schema.transport.send(case)
.venv/lib/python3.11/site-packages/schemathesis/transports/__init__.py:169: in send
response = session.request(**data) # type: ignore
.venv/lib/python3.11/site-packages/requests/sessions.py:589: in request
resp = self.send(prep, **send_kwargs)
.venv/lib/python3.11/site-packages/requests/sessions.py:703: in send
r = adapter.send(request, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.adapters.HTTPAdapter object at 0x10f494410>, request = <PreparedRequest [GET]>, stream = False, timeout = Timeout(connect=10.0, read=10.0, total=None), verify = True, cert = None, proxies = OrderedDict()
def send(
self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None
):
"""Sends PreparedRequest object. Returns Response object.
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
:param stream: (optional) Whether to stream the request content.
:param timeout: (optional) How long to wait for the server to send
data before giving up, as a float, or a :ref:`(connect timeout,
read timeout) <timeouts>` tuple.
:type timeout: float or tuple or urllib3 Timeout object
:param verify: (optional) Either a boolean, in which case it controls whether
we verify the server's TLS certificate, or a string, in which case it
must be a path to a CA bundle to use
:param cert: (optional) Any user-provided SSL certificate to be trusted.
:param proxies: (optional) The proxies dictionary to apply to the request.
:rtype: requests.Response
"""
try:
conn = self.get_connection_with_tls_context(
request, verify, proxies=proxies, cert=cert
)
except LocationValueError as e:
raise InvalidURL(e, request=request)
self.cert_verify(conn, request.url, verify, cert)
url = self.request_url(request, proxies)
self.add_headers(
request,
stream=stream,
timeout=timeout,
verify=verify,
cert=cert,
proxies=proxies,
)
chunked = not (request.body is None or "Content-Length" in request.headers)
if isinstance(timeout, tuple):
try:
connect, read = timeout
timeout = TimeoutSauce(connect=connect, read=read)
except ValueError:
raise ValueError(
f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, "
f"or a single float to set both timeouts to the same value."
)
elif isinstance(timeout, TimeoutSauce):
pass
else:
timeout = TimeoutSauce(connect=timeout, read=timeout)
try:
resp = conn.urlopen(
method=request.method,
url=url,
body=request.body,
headers=request.headers,
redirect=False,
assert_same_host=False,
preload_content=False,
decode_content=False,
retries=self.max_retries,
timeout=timeout,
chunked=chunked,
)
except (ProtocolError, OSError) as err:
raise ConnectionError(err, request=request)
except MaxRetryError as e:
if isinstance(e.reason, ConnectTimeoutError):
# TODO: Remove this in 3.0.0: see #2811
if not isinstance(e.reason, NewConnectionError):
raise ConnectTimeout(e, request=request)
if isinstance(e.reason, ResponseError):
raise RetryError(e, request=request)
if isinstance(e.reason, _ProxyError):
raise ProxyError(e, request=request)
if isinstance(e.reason, _SSLError):
# This branch is for urllib3 v1.22 and later.
> raise SSLError(e, request=request)
E requests.exceptions.SSLError: HTTPSConnectionPool(host='<SNIP>', port=443): Max retries exceeded with url: /api/model_registry/v1alpha3/serving_environments/0/inference_services?name=entity-name&externalId=10&pageSize=100&orderBy=ID&sortOrder=DESC (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate in certificate chain (_ssl.c:1006)')))
E Falsifying explicit example: test_mr_api(
E admin_client_token='<SNIP>',
E case=,
E )
.venv/lib/python3.11/site-packages/requests/adapters.py:698: SSLError
```
Something else I notice that might be causing issues is that in some of these calls the url seems to be truncated to only be the API endpoint, rather than base_url+endpoint, e.g.:
```
self = <urllib3.connectionpool.HTTPSConnectionPool object at 0x10fe47290>, method = 'GET', url = '/api/model_registry/v1alpha3/serving_environments/0/inference_services?name=entity-name&externalId=10&pageSize=100&orderBy=ID&sortOrder=DESC'
[...]
parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/api/model_registry/v1alpha3/[...]
```
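A possible stopgap while this is open (a sketch, assuming the `checks` parameter documented for schemathesis 3.x): the traceback shows the failing request comes from the `ignored_auth` check, which sends its own request through `schema.transport` and ignores `verify`. Restricting validation to checks that only inspect the original response avoids that extra request:
```python
from schemathesis.checks import not_a_server_error

# The main call honours verify=False; validation then reuses the
# response instead of letting ignored_auth fire a new request.
response = case.call(verify=False)
case.validate_response(response, checks=(not_a_server_error,))
```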
|
closed
|
2024-12-10T10:18:21Z
|
2024-12-12T17:00:28Z
|
https://github.com/schemathesis/schemathesis/issues/2613
|
[
"Priority: High",
"Type: Bug"
] |
lugi0
| 11
|
databricks/koalas
|
pandas
| 1,285
|
Pandas accessor support
|
Can you extend this API to support the pandas accessor pattern and custom types? (See the sketch below for the pandas pattern being referred to.)
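For reference, this is the pandas accessor pattern in question (a minimal sketch using pandas' public extension API; the `geo` namespace is just an illustration):
```python
import pandas as pd

@pd.api.extensions.register_dataframe_accessor("geo")
class GeoAccessor:
    def __init__(self, pandas_obj):
        self._obj = pandas_obj

    @property
    def center(self):
        # df.geo.center -> mean coordinate of the frame
        return self._obj["lat"].mean(), self._obj["lon"].mean()
```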
|
closed
|
2020-02-16T03:13:39Z
|
2020-09-01T14:04:52Z
|
https://github.com/databricks/koalas/issues/1285
|
[
"discussions"
] |
achapkowski
| 3
|
521xueweihan/HelloGitHub
|
python
| 1,916
|
hellogit
|
## Project recommendation
- Project URL: only open-source projects hosted on GitHub are accepted; please provide the GitHub project URL
- Category: please choose from (C, C#, C++, CSS, Go, Java, JS, Kotlin, Objective-C, PHP, Python, Ruby, Swift, Other, Books, Machine Learning)
- Planned future updates for the project:
- Project description:
- Required: what the project is, what it can be used for, and its highlights or the pain points it solves
- Optional: which scenarios it suits, and what beginners can learn from it
- Description length (excluding sample code): 10 - 256 characters
- Reason for recommendation: what makes it stand out? What pain point does it solve?
- Sample code: (optional) length: 1-20 lines
- Screenshot: (optional) gif/png/jpg
## Tips (please delete this section before submitting)
> Click "Preview" above for easier reading of the content below.
To improve the chances of your project being included:
1. Search for the project URL on the HelloGitHub homepage (https://hellogithub.com) to check whether it has already been recommended.
2. Adjust the project according to the [project review criteria](https://github.com/521xueweihan/HelloGitHub/issues/271)
3. If the project you recommend is included in a HelloGitHub monthly issue, your GitHub account will be shown in the [contributors list](https://github.com/521xueweihan/HelloGitHub/blob/master/content/contributors.md), **and you will be notified in this issue**.
Thanks again for your support of the HelloGitHub project!
|
closed
|
2021-10-03T05:32:22Z
|
2021-10-03T05:32:26Z
|
https://github.com/521xueweihan/HelloGitHub/issues/1916
|
[
"恶意issue"
] |
aeioui
| 1
|
pykaldi/pykaldi
|
numpy
| 260
|
Kaldi-like data augmentation
|
How can we do Kaldi-like data augmentation through the API, on acoustic data only? (A minimal example of one such augmentation is sketched below.)
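For context, one common Kaldi-style augmentation is speed perturbation; a minimal sketch with plain NumPy resampling (not PyKaldi's own API) could look like this:
```python
import numpy as np

def speed_perturb(waveform: np.ndarray, factor: float) -> np.ndarray:
    # Kaldi recipes typically use factors of 0.9, 1.0 and 1.1.
    n_out = int(round(len(waveform) / factor))
    src = np.linspace(0, len(waveform) - 1, n_out)
    return np.interp(src, np.arange(len(waveform)), waveform)
```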
|
open
|
2021-03-10T10:35:37Z
|
2021-03-10T10:35:37Z
|
https://github.com/pykaldi/pykaldi/issues/260
|
[] |
shakeel608
| 0
|
modoboa/modoboa
|
django
| 2,645
|
Cannot complete migrate after running makemigrations
|
Hi, I'm running modoboa 2.0.2 on FreeBSD 11.4 with MySQL 5.7.37.
I have tried to upgrade some extensions manually. I realised that running `pip install extension` does not upgrade to the latest version, so I started by trying to upgrade modoboa-amavis to 1.4.0. I ran the following:-
> pip install modoboa-amavis==1.4.0
Then I ran the following :-
> python manage.py migrate
I then received the red text about running makemigrations :-
```
Operations to perform:
Apply all migrations: admin, auth, authtoken, contenttypes, core, dnstools, lib, limits, maillog, modoboa_amavis, modoboa_dmarc, modoboa_postfix_autoreply, modoboa_radicale, otp_static, otp_totp, relaydomains, reversion, sessions, sites, transport
Running migrations:
No migrations to apply.
Your models in app(s): 'admin', 'core', 'dnstools', 'limits', 'maillog', 'modoboa_dmarc', 'modoboa_postfix_autoreply', 'modoboa_radicale', 'relaydomains', 'transport' have changes that are not yet reflected in a migration, and so won't be applied.
Run 'manage.py makemigrations' to make new migrations, and then re-run 'manage.py migrate' to apply them.
```
So I ran the following:-
> python manage.py makemigrations
Now if I try to complete the migration, the first one succeeds (Applying admin.0021_auto_20221017_1109... OK) but the second fails:-
```
Applying core.0023_auto_20221017_1109...Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "/usr/local/lib/python3.8/site-packages/django/db/backends/mysql/base.py", line 73, in execute
return self.cursor.execute(query, args)
File "/usr/local/lib/python3.8/site-packages/MySQLdb/cursors.py", line 206, in execute
res = self._query(query)
File "/usr/local/lib/python3.8/site-packages/MySQLdb/cursors.py", line 319, in _query
db.query(q)
File "/usr/local/lib/python3.8/site-packages/MySQLdb/connections.py", line 254, in query
_mysql.connection.query(self, query)
MySQLdb._exceptions.OperationalError: (1833, "Cannot change column 'id': used in a foreign key constraint 'modoboa_contacts_category_user_id_4061c4f0_fk_core_user_id' of table 'mail.modoboa_contacts_category'")
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/usr/local/lib/python3.8/site-packages/django/core/management/__init__.py", line 419, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python3.8/site-packages/django/core/management/__init__.py", line 413, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python3.8/site-packages/django/core/management/base.py", line 354, in run_from_argv
self.execute(*args, **cmd_options)
File "/usr/local/lib/python3.8/site-packages/django/core/management/base.py", line 398, in execute
output = self.handle(*args, **options)
File "/usr/local/lib/python3.8/site-packages/django/core/management/base.py", line 89, in wrapped
res = handle_func(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/django/core/management/commands/migrate.py", line 244, in handle
post_migrate_state = executor.migrate(
File "/usr/local/lib/python3.8/site-packages/django/db/migrations/executor.py", line 117, in migrate
state = self._migrate_all_forwards(state, plan, full_plan, fake=fake, fake_initial=fake_initial)
File "/usr/local/lib/python3.8/site-packages/django/db/migrations/executor.py", line 147, in _migrate_all_forwards
state = self.apply_migration(state, migration, fake=fake, fake_initial=fake_initial)
File "/usr/local/lib/python3.8/site-packages/django/db/migrations/executor.py", line 227, in apply_migration
state = migration.apply(state, schema_editor)
File "/usr/local/lib/python3.8/site-packages/django/db/migrations/migration.py", line 126, in apply
operation.database_forwards(self.app_label, schema_editor, old_state, project_state)
File "/usr/local/lib/python3.8/site-packages/django/db/migrations/operations/fields.py", line 244, in database_forwards
schema_editor.alter_field(from_model, from_field, to_field)
File "/usr/local/lib/python3.8/site-packages/django/db/backends/base/schema.py", line 608, in alter_field
self._alter_field(model, old_field, new_field, old_type, new_type,
File "/usr/local/lib/python3.8/site-packages/django/db/backends/base/schema.py", line 765, in _alter_field
self.execute(
File "/usr/local/lib/python3.8/site-packages/django/db/backends/base/schema.py", line 145, in execute
cursor.execute(sql, params)
File "/usr/local/lib/python3.8/site-packages/django/db/backends/utils.py", line 98, in execute
return super().execute(sql, params)
File "/usr/local/lib/python3.8/site-packages/django/db/backends/utils.py", line 66, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "/usr/local/lib/python3.8/site-packages/django/db/backends/utils.py", line 75, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/usr/local/lib/python3.8/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "/usr/local/lib/python3.8/site-packages/django/db/utils.py", line 90, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/usr/local/lib/python3.8/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "/usr/local/lib/python3.8/site-packages/django/db/backends/mysql/base.py", line 73, in execute
return self.cursor.execute(query, args)
File "/usr/local/lib/python3.8/site-packages/MySQLdb/cursors.py", line 206, in execute
res = self._query(query)
File "/usr/local/lib/python3.8/site-packages/MySQLdb/cursors.py", line 319, in _query
db.query(q)
File "/usr/local/lib/python3.8/site-packages/MySQLdb/connections.py", line 254, in query
_mysql.connection.query(self, query)
django.db.utils.OperationalError: (1833, "Cannot change column 'id': used in a foreign key constraint 'modoboa_contacts_category_user_id_4061c4f0_fk_core_user_id' of table 'mail.modoboa_contacts_category'")
```
These are the changes that makemigrations has attempted to create:-
```
Migrations for 'admin':
/usr/local/lib/python3.8/site-packages/modoboa/admin/migrations/0021_auto_20221017_1109.py
- Alter field id on alarm
- Alter field id on alias
- Alter field id on aliasrecipient
- Alter field id on dnsblresult
- Alter field id on domain
- Alter field id on domainalias
- Alter field id on mailbox
- Alter field id on mailboxoperation
- Alter field id on mxrecord
- Alter field id on senderaddress
Migrations for 'core':
/usr/local/lib/python3.8/site-packages/modoboa/core/migrations/0023_auto_20221017_1109.py
- Alter field id on extensionupdatehistory
- Alter field id on localconfig
- Alter field id on log
- Alter field id on objectaccess
- Alter field first_name on user
- Alter field id on user
- Alter field language on user
Migrations for 'dnstools':
/usr/local/lib/python3.8/site-packages/modoboa/dnstools/migrations/0002_alter_dnsrecord_id.py
- Alter field id on dnsrecord
Migrations for 'limits':
/usr/local/lib/python3.8/site-packages/modoboa/limits/migrations/0007_auto_20221017_1109.py
- Alter field id on domainobjectlimit
- Alter field id on userobjectlimit
Migrations for 'maillog':
/usr/local/lib/python3.8/site-packages/modoboa/maillog/migrations/0004_alter_maillog_id.py
- Alter field id on maillog
Migrations for 'modoboa_dmarc':
/usr/local/lib/python3.8/site-packages/modoboa_dmarc/migrations/0004_auto_20221017_1109.py
- Alter field id on record
- Alter field id on report
- Alter field id on reporter
- Alter field id on result
Migrations for 'modoboa_postfix_autoreply':
/usr/local/lib/python3.8/site-packages/modoboa_postfix_autoreply/migrations/0009_auto_20221017_1109.py
- Alter field id on arhistoric
- Alter field id on armessage
Migrations for 'modoboa_radicale':
/usr/local/lib/python3.8/site-packages/modoboa_radicale/migrations/0006_auto_20221017_1109.py
- Alter field id on accessrule
- Alter field id on sharedcalendar
- Alter field id on usercalendar
Migrations for 'relaydomains':
/usr/local/lib/python3.8/site-packages/modoboa/relaydomains/migrations/0010_alter_recipientaccess_id.py
- Alter field id on recipientaccess
Migrations for 'transport':
/usr/local/lib/python3.8/site-packages/modoboa/transport/migrations/0003_alter_transport_id.py
- Alter field id on transport
```
|
closed
|
2022-10-17T11:26:11Z
|
2023-01-04T11:10:39Z
|
https://github.com/modoboa/modoboa/issues/2645
|
[
"bug"
] |
ndoody
| 4
|
public-apis/public-apis
|
api
| 3,891
|
Top Up
|
closed
|
2024-07-05T03:42:51Z
|
2024-07-07T19:29:47Z
|
https://github.com/public-apis/public-apis/issues/3891
|
[] |
MrQueen132
| 0
|
|
Johnserf-Seed/TikTokDownload
|
api
| 678
|
[BUG] Single video download always shows a 'missing' status
|


Issue 1: in the custom config file, both cover and music are set to 'no', yet the cover and music are still downloaded;
Issue 2: the music and cover both download fine, but the video is always in a 'missing' state.
|
open
|
2024-03-13T16:13:16Z
|
2024-03-14T19:53:21Z
|
https://github.com/Johnserf-Seed/TikTokDownload/issues/678
|
[] |
lingyu5219
| 2
|
lepture/authlib
|
django
| 604
|
Allow an instance of ResourceProtector to act as a decorator without an unnecessary call when no arguments are passed; solution provided.
|
For example, on this page of the [documentation](https://docs.authlib.org/en/latest/flask/2/resource-server.html) we can see:
```python
@app.route('/user')
@require_oauth()
def user_profile():
user = current_token.user
return jsonify(user)
# or with None
@app.route('/user')
@require_oauth(None)
def user_profile():
user = current_token.user
return jsonify(user)
```
In terms of code transparency, `@require_oauth()` is not an obvious pattern; the bare `@require_oauth` form is encountered far more often.
Organizing the decorator for both cases — calling with and without attributes — is easy:
```python
def __call__(self, *args, **kwargs):
if args and callable(args[0]):
return super().__call__()(*args, **kwargs)
return super().__call__(*args, **kwargs)
```
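With that change, both forms would work (a sketch mirroring the documentation examples above, so `app`, `require_oauth`, `current_token` and `jsonify` are assumed from that context):
```python
@app.route('/user')
@require_oauth          # bare decorator, no call needed
def user_profile():
    user = current_token.user
    return jsonify(user)

@app.route('/profile')
@require_oauth('profile')   # still works with a scope argument
def profile():
    user = current_token.user
    return jsonify(user)
```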
|
open
|
2023-12-15T23:01:49Z
|
2025-02-20T20:21:45Z
|
https://github.com/lepture/authlib/issues/604
|
[
"good first issue",
"feature request",
"server"
] |
danilovmy
| 0
|
open-mmlab/mmdetection
|
pytorch
| 12,277
|
mmdet on Orin: No module named 'torch._C._distributed_c10d'; 'torch._C' is not a package
|
I tried to deploy the mmdet framework on Orin. After installation, the version output is normal, but when executing the code to initialize inference, the following error occurs. However, it seems that the installation itself is not the issue, since it completed successfully?
----------------------------------------------------------------------
check version:
python -c "import torch, torchvision, mmcv, mmdet; print(f'Torch Version: {torch.__version__}'); print(f'Torch CUDA Version: {torch.version.cuda}'); print(f'Torchvision Version: {torchvision.__version__}'); print(f'MMCV Version: {mmcv.__version__}'); print(f'MMDetection Version: {mmdet.__version__}')"
Torch Version: 2.1.0a0+41361538.nv23.06
Torch CUDA Version: 11.4
Torchvision Version: 0.16.1
MMCV Version: 2.0.0
MMDetection Version: 3.3.0
```
-----------------------------------------------------------------------
error:
```
Traceback (most recent call last):
File "/home/nvidia/zd/wk/devel/lib/viplanner_node/viplanner_node.py", line 15, in <module>
exec(compile(fh.read(), python_script, 'exec'), context)
File "/home/nvidia/zd/wk/src/ros/planner/src/viplanner_node.py", line 41, in <module>
from src.m2f_inference import Mask2FormerInference
File "/home/nvidia/zd/wk/src/ros/planner/src/m2f_inference.py", line 12, in <module>
from mmdet.apis import inference_detector, init_detector
File "/home/nvidia/zd/miniconda3/envs/py3810/lib/python3.8/site-packages/mmdet/apis/__init__.py", line 2, in <module>
from .det_inferencer import DetInferencer
File "/home/nvidia/zd/miniconda3/envs/py3810/lib/python3.8/site-packages/mmdet/apis/det_inferencer.py", line 15, in <module>
from mmengine.infer.infer import BaseInferencer, ModelType
File "/home/nvidia/zd/miniconda3/envs/py3810/lib/python3.8/site-packages/mmengine/infer/__init__.py", line 2, in <module>
from .infer import BaseInferencer
File "/home/nvidia/zd/miniconda3/envs/py3810/lib/python3.8/site-packages/mmengine/infer/infer.py", line 25, in <module>
from mmengine.runner.checkpoint import (_load_checkpoint,
File "/home/nvidia/zd/miniconda3/envs/py3810/lib/python3.8/site-packages/mmengine/runner/__init__.py", line 2, in <module>
from ._flexible_runner import FlexibleRunner
File "/home/nvidia/zd/miniconda3/envs/py3810/lib/python3.8/site-packages/mmengine/runner/_flexible_runner.py", line 14, in <module>
from mmengine._strategy import BaseStrategy
File "/home/nvidia/zd/miniconda3/envs/py3810/lib/python3.8/site-packages/mmengine/_strategy/__init__.py", line 4, in <module>
from .base import BaseStrategy
File "/home/nvidia/zd/miniconda3/envs/py3810/lib/python3.8/site-packages/mmengine/_strategy/base.py", line 19, in <module>
from mmengine.model.wrappers import is_model_wrapper
File "/home/nvidia/zd/miniconda3/envs/py3810/lib/python3.8/site-packages/mmengine/model/__init__.py", line 6, in <module>
from .base_model import BaseDataPreprocessor, BaseModel, ImgDataPreprocessor
File "/home/nvidia/zd/miniconda3/envs/py3810/lib/python3.8/site-packages/mmengine/model/base_model/__init__.py", line 2, in <module>
from .base_model import BaseModel
File "/home/nvidia/zd/miniconda3/envs/py3810/lib/python3.8/site-packages/mmengine/model/base_model/base_model.py", line 12, in <module>
from ..base_module import BaseModule
File "/home/nvidia/zd/miniconda3/envs/py3810/lib/python3.8/site-packages/mmengine/model/base_module.py", line 14, in <module>
from .wrappers.utils import is_model_wrapper
File "/home/nvidia/zd/miniconda3/envs/py3810/lib/python3.8/site-packages/mmengine/model/wrappers/__init__.py", line 14, in <module>
from .fully_sharded_distributed import \
File "/home/nvidia/zd/miniconda3/envs/py3810/lib/python3.8/site-packages/mmengine/model/wrappers/fully_sharded_distributed.py", line 10, in <module>
from torch.distributed.fsdp.api import (FullStateDictConfig,
File "/home/nvidia/.local/lib/python3.8/site-packages/torch/distributed/fsdp/__init__.py", line 1, in <module>
from .flat_param import FlatParameter
File "/home/nvidia/.local/lib/python3.8/site-packages/torch/distributed/fsdp/flat_param.py", line 30, in <module>
from torch.distributed._tensor import DTensor
File "/home/nvidia/.local/lib/python3.8/site-packages/torch/distributed/_tensor/__init__.py", line 6, in <module>
import torch.distributed._tensor.ops
File "/home/nvidia/.local/lib/python3.8/site-packages/torch/distributed/_tensor/ops/__init__.py", line 2, in <module>
from .embedding_ops import * # noqa: F403
File "/home/nvidia/.local/lib/python3.8/site-packages/torch/distributed/_tensor/ops/embedding_ops.py", line 6, in <module>
from torch.distributed._tensor.api import _Partial, DTensorSpec, Replicate, Shard
File "/home/nvidia/.local/lib/python3.8/site-packages/torch/distributed/_tensor/api.py", line 8, in <module>
import torch.distributed._tensor.dispatch as op_dispatch
File "/home/nvidia/.local/lib/python3.8/site-packages/torch/distributed/_tensor/dispatch.py", line 10, in <module>
from torch.distributed._tensor.device_mesh import DeviceMesh
File "/home/nvidia/.local/lib/python3.8/site-packages/torch/distributed/_tensor/device_mesh.py", line 6, in <module>
import torch.distributed._functional_collectives as funcol
File "/home/nvidia/.local/lib/python3.8/site-packages/torch/distributed/_functional_collectives.py", line 7, in <module>
import torch.distributed.distributed_c10d as c10d
File "/home/nvidia/.local/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 17, in <module>
from torch._C._distributed_c10d import (
ModuleNotFoundError: No module named 'torch._C._distributed_c10d'; 'torch._C' is not a package
|
open
|
2024-12-23T09:53:12Z
|
2024-12-23T09:53:26Z
|
https://github.com/open-mmlab/mmdetection/issues/12277
|
[
"reimplementation"
] |
AugWrite
| 0
|
HumanSignal/labelImg
|
deep-learning
| 674
|
Multiple Labels/attributes for a single rectangle
|
Hi! I want to assign several other attributes, for instance color, gender, age, etc., to the rectangle. Can you guide me on it?
- **OS:**
- **PyQt version:**
|
open
|
2020-11-17T12:38:57Z
|
2020-11-17T12:38:57Z
|
https://github.com/HumanSignal/labelImg/issues/674
|
[] |
ZahraAnam
| 0
|
apache/airflow
|
python
| 47,983
|
DatabricksNotebookOperator generating invalid dependency graph
|
### Apache Airflow Provider(s)
databricks
### Versions of Apache Airflow Providers
apache-airflow-providers-common-sql==1.24.0
apache-airflow-providers-databricks==7.2.1
databricks-sql-connector==4.0.0
### Apache Airflow version
2.10.3
### Operating System
MacOs
### Deployment
Docker-Compose
### Deployment details
Testing using [AWS MWAA Local Runner](https://github.com/aws/aws-mwaa-local-runner)
### What happened
When upgraded the databricks provider to version 7.2.1 and tested the DAG with DatabricksNotebookOperator, DAG is failing with the below error:
```airflow.exceptions.AirflowException: Response: {"error_code":"INVALID_PARAMETER_VALUE","message":"Invalid dependency graph, task 'e771895875324e2902a93fdc2ff36326' can not reference itself."}, Status Code: 400```
<img width="1501" alt="Image" src="https://github.com/user-attachments/assets/e90a1526-f426-448f-a547-8319d2fa7369" />
### What you think should happen instead
Ideally, it should generate the correct dependencies between Databricks tasks and deploy the Databricks job. Instead, it is generating a wrong dependency graph for the Databricks tasks (a task referencing itself).
### How to reproduce
Below is the DAG code to regenerate the error:
```
import os
from airflow.models.dag import DAG
from airflow.providers.databricks.operators.databricks import DatabricksNotebookOperator
from airflow.providers.databricks.operators.databricks_workflow import DatabricksWorkflowTaskGroup
from airflow.utils.timezone import datetime
DATABRICKS_CONN_ID = os.getenv("DATABRICKS_CONN_ID", "databricks_default")
job_cluster_spec = [
{
"job_cluster_key": "Shared_job_cluster",
"new_cluster": {
"cluster_name": "",
"spark_version": "11.3.x-scala2.12",
"num_workers": 1,
"spark_conf": {},
"node_type_id": "r3.xlarge",
"ssh_public_keys": [],
"custom_tags": {},
"spark_env_vars": {"PYSPARK_PYTHON": "/databricks/python3/bin/python3"},
"cluster_source": "JOB",
"init_scripts": [],
},
}
]
dag = DAG(
dag_id="example_databricks_workflow",
start_date=datetime(2022, 1, 1),
schedule=None,
catchup=False,
)
with dag:
task_group = DatabricksWorkflowTaskGroup(
group_id=f"test_workflow",
databricks_conn_id=DATABRICKS_CONN_ID,
job_clusters=job_cluster_spec,
)
with task_group:
notebook_1 = DatabricksNotebookOperator(
task_id="workflow_notebook_1",
databricks_conn_id=DATABRICKS_CONN_ID,
notebook_path="/Shared/Notebook_1",
source="WORKSPACE",
job_cluster_key="Shared_job_cluster",
)
notebook_2 = DatabricksNotebookOperator(
task_id="workflow_notebook_2",
databricks_conn_id=DATABRICKS_CONN_ID,
notebook_path="/Shared/Notebook_2",
source="WORKSPACE",
job_cluster_key="Shared_job_cluster",
)
notebook_1 >> notebook_2
```
Code is from Databricks example dags: https://github.com/apache/airflow/blob/providers-databricks/7.2.1/providers/databricks/tests/system/databricks/example_databricks_workflow.py
### Anything else
It was working fine in apache-airflow-providers-databricks==6.12.0; after upgrading to version 7.2.1, I started getting the error.
The issue may be related to this MR: https://github.com/apache/airflow/pull/44960
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
|
open
|
2025-03-20T02:48:47Z
|
2025-03-20T03:32:53Z
|
https://github.com/apache/airflow/issues/47983
|
[
"kind:bug",
"area:providers",
"provider:databricks",
"needs-triage"
] |
dheerajkumar-solanki
| 2
|
mljar/mercury
|
jupyter
| 150
|
make dashboard example with pyecharts
|
pyecharts repo https://github.com/pyecharts
|
closed
|
2022-07-28T13:05:10Z
|
2022-12-12T12:08:06Z
|
https://github.com/mljar/mercury/issues/150
|
[] |
pplonski
| 0
|
deepspeedai/DeepSpeed
|
deep-learning
| 6,917
|
[REQUEST] Deepspeed Inference Supports VL (vision language) model
|
**Is your feature request related to a problem? Please describe.**
We have been using the `deepspeed.init_inference` API to speed up inference for text-only models (e.g. Mistral, Qwen 2.5 series) with success. We were hoping support could be extended to vision-language models as well, e.g. Qwen2-VL, which is currently not supported.
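For reference, this is roughly how we wrap a text-only model today (a minimal sketch; the model name, dtype, and an available CUDA device are illustrative assumptions, not taken from DeepSpeed docs verbatim):
```python
# Minimal sketch of our current text-only usage of deepspeed.init_inference.
# Model name, dtype, and available CUDA device are illustrative assumptions.
import deepspeed
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "mistralai/Mistral-7B-v0.1"  # hypothetical text-only model
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

# Inject DeepSpeed's optimized inference kernels into the module tree.
engine = deepspeed.init_inference(
    model,
    dtype=torch.float16,
    replace_with_kernel_inject=True,
)

inputs = tokenizer("Hello, world", return_tensors="pt").to("cuda")
outputs = engine.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0]))
```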
**Describe the solution you'd like**
- `deepspeed.init_inference` to work for vision-language models (for both the embedding and the generation use cases)
- and also make the tutorial on extending with our own models clearer/cleaner.
**Describe alternatives you've considered**
N/A
**Additional context**
N/A
|
open
|
2024-12-26T17:17:13Z
|
2024-12-26T17:17:13Z
|
https://github.com/deepspeedai/DeepSpeed/issues/6917
|
[
"enhancement"
] |
ethen8181
| 0
|
graphdeco-inria/gaussian-splatting
|
computer-vision
| 705
|
Questions about f_rest property in .ply file
|
Hello, I appreciate your excellent work. I have a few questions that I’d like to understand better. Could you please explain?
Q1) Could you please provide a detailed explanation of the f_rest property in the context of Gaussian Splatting?
Q2) Is it possible to generate or render an image without the f_rest property, and if so, will the quality of the image be affected?
Q3) I’ve noticed that some Gaussian Splatting repositories do not include the f_rest property in their PLY files. Could you explain why this is the case?
Q4) The paper mentions that at “30K iterations reaches about 200–500K Gaussians per scene”. Is there a method to cap the number of Gaussians to a specific limit, such as 150k or 200K?
Q5) Is there a strategy to accelerate the training process without compromising the image resolution and render quality?
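(Editorial aside on Q1/Q3: a minimal sketch to inspect what a trained `.ply` actually stores, assuming the reference exporter's layout, where `f_dc_*` are the 0th-order SH color terms and `f_rest_*` the higher-order SH coefficients (45 of them for SH degree 3).)
```python
# Sketch: list the SH-related properties stored in a trained point cloud.
# The file name is illustrative; the layout assumes the reference exporter.
from plyfile import PlyData

ply = PlyData.read("point_cloud.ply")
names = [p.name for p in ply["vertex"].properties]
print([n for n in names if n.startswith("f_dc")])         # 3 DC color terms
print(len([n for n in names if n.startswith("f_rest")]))  # 45 for degree-3 SH
```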
|
open
|
2024-03-12T05:31:33Z
|
2024-03-13T04:40:34Z
|
https://github.com/graphdeco-inria/gaussian-splatting/issues/705
|
[] |
NithinJangid
| 2
|
davidteather/TikTok-Api
|
api
| 886
|
[FEATURE_REQUEST] - usage
|
In theory, based on this, could you create an account registrar with the ability to interact with live users?
|
closed
|
2022-05-12T10:17:23Z
|
2022-07-03T23:13:03Z
|
https://github.com/davidteather/TikTok-Api/issues/886
|
[
"feature_request"
] |
A4pro
| 4
|
mwaskom/seaborn
|
data-visualization
| 3,227
|
seaborn objects scale with two visualisations with same kwargs?
|
Hello,
I ran into a problem with scales when trying to display two visualizations with color mapped by a column.
I'm trying to create a bar plot with labels on the bars. The position and color of the labels depend on columns of the dataframe. Also, I would like to color the bars by a column.
Here is my question on Stack Overflow: https://stackoverflow.com/questions/75161245/how-to-use-seaborn-objects-scale-with-two-visualisations-with-same-kwargs
Is there a way to do this?
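(Editorial sketch of the general pattern in question, assuming seaborn >= 0.12 objects; the dataframe and column names are hypothetical:)
```python
# Sketch: bars plus per-bar text labels, with color mapped once on the Plot
# so both marks inherit it. df and its column names are hypothetical.
import seaborn.objects as so

p = (
    so.Plot(df, x="category", y="value", color="group")
    .add(so.Bar())                                # bars colored by "group"
    .add(so.Text(valign="bottom"), text="value")  # labels share the mapping
)
p.show()
```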
Thank you for your answer.
|
closed
|
2023-01-19T11:30:20Z
|
2023-01-20T00:30:28Z
|
https://github.com/mwaskom/seaborn/issues/3227
|
[] |
vorel99
| 1
|
collerek/ormar
|
sqlalchemy
| 744
|
model.save() with server_default is failing on refreshing server_default values
|
**Describe the bug**
model.save() is raising `NoMatch` when it tries to refresh a server_default that is a PK. I was able to fix this locally by just setting `self.pk = pk` in https://github.com/collerek/ormar/blob/master/ormar/models/model.py#L85-L96. Happy to open a PR if the issue is valid.
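(Editorial sketch of that local fix; the surrounding lines are illustrative stand-ins for the linked `Model.save()` body, and only the `self.pk = pk` assignment is the actual proposal.)
```python
# Fragment sketch (not standalone): inside Model.save(), after the insert
# returns the generated primary key, assign it back before refreshing
# server defaults so the follow-up self.load() can match the row.
pk = await self.Meta.database.execute(expr)  # illustrative existing line
if pk is not None:
    self.pk = pk  # the proposed one-line fix
```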
**To Reproduce**
Steps to reproduce the behavior:
```py
import asyncio
import uuid
import ormar
import sqlalchemy
from databases import Database
from sqlalchemy import text
database = Database(url="postgresql://postgres@0.0.0.0:5432/postgres", force_rollback=True)
engine = sqlalchemy.create_engine("postgresql://postgres@0.0.0.0:5432/postgres")
metadata = sqlalchemy.MetaData()
class BaseMeta:
metadata = metadata
database = database
class Jimmy(ormar.Model):
class Meta(BaseMeta):
tablename = "jimmy_rus"
id: uuid.UUID = ormar.UUID(
primary_key=True, server_default=text("gen_random_uuid()"), uuid_format="string"
)
async def main():
await database.connect()
metadata.drop_all(bind=engine)
metadata.create_all(bind=engine)
jimmy = Jimmy()
await jimmy.save()
if __name__ == '__main__':
asyncio.run(main())
```
**Expected behavior**
should not raise an exception after the item is already persisted to the db.
**Versions (please complete the following information):**
- Database backend used (mysql/sqlite/postgres): postgres
- Python version 3.9
- `ormar` version 0.11.2
- `pydantic` version 1.9
**Additional context**
I've tracked the offending code down:
https://github.com/collerek/ormar/blob/master/ormar/models/model.py#L85-L96
From what I can tell, the PK is correctly returned via the insert expression; however, `self.pk` never gets set. So when `self.load` is called, it doesn't correctly select the item that was inserted, since `self.pk` is `None`.
|
open
|
2022-07-15T21:12:37Z
|
2022-07-18T16:56:33Z
|
https://github.com/collerek/ormar/issues/744
|
[
"bug"
] |
cmflynn
| 0
|
awesto/django-shop
|
django
| 172
|
don't allow going to CheckoutSelectionView if the cart is empty
|
knowing the url of CheckoutSelectionView might allow you to create empty orders
|
closed
|
2012-09-12T11:28:05Z
|
2012-09-14T15:37:04Z
|
https://github.com/awesto/django-shop/issues/172
|
[] |
alesdotio
| 0
|
Lightning-AI/pytorch-lightning
|
data-science
| 20,490
|
Loading checkpoint before fabric.setup(model) gets abnormal loss when using fabric.init_module()
|
### Bug description
If I init the model with `fabric.init_module(True)` and load the checkpoint **after** `model = fabric.setup(model)`, the training loss is normal:
```
with fabric.init_module(empty_init=(fabric.world_size > 1)):
model = GPT(config)
model = fabric.setup(model)
load_checkpoint(fabric, model, checkpoint_path)
step = 1 | loss train: 0.8448048233985901
step = 2 | loss train: 1.3229767084121704
step = 3 | loss train: 1.2647839784622192
step = 4 | loss train: 1.287076711654663
step = 5 | loss train: 1.0357563495635986
```
but when loading the checkpoint **before** `model = fabric.setup(model)`, the loss is much larger:
```
with fabric.init_module(empty_init=(fabric.world_size > 1)):
model = GPT(config)
load_checkpoint(fabric, model, checkpoint_path)
model = fabric.setup(model)
step = 1 | loss train: 12.027938842773438
step = 2 | loss train: 12.051375389099121
step = 3 | loss train: 12.112957954406738
step = 4 | loss train: 12.08558177947998
step = 5 | loss train: 12.089488983154297
```
Another phenomenon: if I don't use `fabric.init_module()`, I get a normal loss when loading the checkpoint before `fabric.setup(model)`:
```
# with fabric.init_module(empty_init=(fabric.world_size > 1)):
if True:
model = GPT(config)
load_checkpoint(fabric, model, checkpoint_path)
model = fabric.setup(model)
step = 1 | loss train: 0.8447667956352234
step = 2 | loss train: 1.3229438066482544
step = 3 | loss train: 1.2663335800170898
step = 4 | loss train: 1.2902932167053223
step = 5 | loss train: 1.035811185836792
```
So how to load hf models converted by `litgpt.scripts.convert_hf_checkpoint` in a correct way?
### What version are you seeing the problem on?
v2.4
### How to reproduce the bug
```python
from pathlib import Path
import torch
import lightning as L
from lightning.fabric.strategies import FSDPStrategy
from litgpt.args import TrainArgs
from litgpt.config import Config
from litgpt.model import GPT, Block
from litgpt.data import Alpaca2k
from litgpt.tokenizer import Tokenizer
from litgpt.utils import (
chunked_cross_entropy,
load_checkpoint,
num_parameters,
get_default_supported_precision,
)
def get_lr_scheduler(optimizer, warmup_steps: int, max_steps: int):
# linear warmup followed by cosine annealing
scheduler1 = torch.optim.lr_scheduler.LambdaLR(optimizer, lambda step: step / warmup_steps)
scheduler2 = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=(max_steps - warmup_steps))
return torch.optim.lr_scheduler.SequentialLR(optimizer, [scheduler1, scheduler2], milestones=[warmup_steps])
def main(
checkpoint_dir: Path,
devices: int = 8,
num_nodes: int = 1,
precision: str = "bf16-true",
seed: int = 1337,
) -> None:
torch.set_float32_matmul_precision("high")
train_args = TrainArgs(
save_interval = 1000,
log_interval = 1,
global_batch_size = 64,
micro_batch_size = 4,
lr_warmup_steps = 1000,
epochs = 10,
max_steps = 10000,
)
strategy = FSDPStrategy(
auto_wrap_policy={Block},
activation_checkpointing_policy={Block},
state_dict_type="full",
limit_all_gathers=True,
cpu_offload=False,
)
fabric = L.Fabric(
accelerator="cuda",
devices=devices,
num_nodes=num_nodes,
strategy=strategy,
precision=precision,
)
fabric.launch()
fabric.seed_everything(seed) # same seed for every process to init model (FSDP)
dataset = Alpaca2k()
tokenizer = Tokenizer(str(checkpoint_dir))
dataset.connect(tokenizer, batch_size=train_args.micro_batch_size, max_seq_length=512)
with fabric.rank_zero_first():
dataset.prepare_data()
dataset.setup()
dataloader = dataset.train_dataloader()
dataloader = fabric.setup_dataloaders(dataloader)
checkpoint_path = str(checkpoint_dir / "lit_model.pth")
config = Config.from_file(checkpoint_dir / "model_config.yaml")
with fabric.init_module(empty_init=(fabric.world_size > 1)):
model = GPT(config)
fabric.print(f"Number of trainable parameters: {num_parameters(model, requires_grad=True):,}")
# load_checkpoint(fabric, model, checkpoint_path)
model = fabric.setup(model)
load_checkpoint(fabric, model, checkpoint_path)
optimizer = torch.optim.AdamW(model.parameters(), lr=0.0002, weight_decay=0.0, betas=(0.9, 0.95))
optimizer = fabric.setup_optimizers(optimizer)
scheduler = get_lr_scheduler(optimizer, warmup_steps=train_args.lr_warmup_steps, max_steps=train_args.max_steps)
model.train()
for epoch in range(train_args.epochs):
for step, batch in enumerate(dataloader, 1):
input, target = batch["input_ids"], batch["labels"]
logits = model(input)
loss = chunked_cross_entropy(logits[..., :-1, :], target[..., 1:])
fabric.backward(loss)
optimizer.step()
optimizer.zero_grad()
scheduler.step()
fabric.print(f"{step = } | loss train: {loss.detach().item()}")
if __name__ == "__main__":
checkpoint_dir = Path("./Qwen2.5-1.5B/")
main(checkpoint_dir)
```
### Error messages and logs
```
# Error messages and logs here please
```
### Environment
<details>
<summary>Current environment</summary>
```
#- PyTorch Lightning Version (e.g., 2.4.0):
#- PyTorch Version (e.g., 2.4.1):
#- Python version (e.g., 3.10):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:12.1
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source): pip
```
</details>
### More info
_No response_
|
open
|
2024-12-11T03:19:48Z
|
2024-12-12T03:47:08Z
|
https://github.com/Lightning-AI/pytorch-lightning/issues/20490
|
[
"bug",
"ver: 2.4.x"
] |
kobenaxie
| 4
|
fastapi/sqlmodel
|
pydantic
| 405
|
Is it possible to filter data dynamically?
|
### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
query = select(Employee)

names = ["joe", "bob", "mary"]
for name in names:
    # note: `=` was a typo for `==`; this also chains AND conditions, not OR
    query = query.filter(Employee.name == name)
```
## want it to produce this type of SQL
```
SELECT * FROM Employee WHERE name = 'joe' OR name = 'bob' OR name = 'mary'
```
I can't figure out how to do this via an iterator;
it only works if you provide a specific OR statement, i.e.,
```
query = query.filter(or_(Employee.name == "joe", Employee.name == "mary", Employee.name == "bob"))
```
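For reference, a sketch of the dynamic pattern (assuming standard SQLAlchemy `or_`/`in_` semantics; `Employee` is the model from the example above):
```python
# Sketch: build the OR conditions in a loop, then apply them in one where().
from sqlalchemy import or_
from sqlmodel import select

names = ["joe", "bob", "mary"]
conditions = [Employee.name == name for name in names]  # Employee as above
query = select(Employee).where(or_(*conditions))

# Usually simpler for plain membership tests:
query = select(Employee).where(Employee.name.in_(names))
```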
### Description
is it possible to do a query dynamically via a list of search values?
### Operating System
Linux
### Operating System Details
ubuntu 20
python 3.9
### SQLModel Version
0.0.6
### Python Version
3.9
### Additional Context
_No response_
|
closed
|
2022-08-22T21:58:06Z
|
2022-08-23T01:25:41Z
|
https://github.com/fastapi/sqlmodel/issues/405
|
[
"question"
] |
perfecto25
| 1
|
quokkaproject/quokka
|
flask
| 73
|
config, settings and current channel should be available in every context
|
Every context should have:
- `{{config('group', 'key')}}`
- `{{ current_channel }}`
- `{{settings}}`
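(Editorial sketch of how this could be wired with a Flask context processor; the two helper functions are hypothetical stand-ins for Quokka's internals, and only the three template globals above come from this request.)
```python
# Sketch: expose config / current_channel / settings to every template.
from flask import Flask

app = Flask(__name__)

def get_current_channel():
    return None  # hypothetical stand-in for Quokka's channel resolution

def get_settings():
    return {}  # hypothetical stand-in for Quokka's settings lookup

@app.context_processor
def inject_globals():
    def config(group, key):
        # lets templates call {{ config('group', 'key') }}
        return app.config.get(group, {}).get(key)
    return {
        "config": config,
        "current_channel": get_current_channel(),
        "settings": get_settings(),
    }
```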
|
closed
|
2013-11-01T22:10:38Z
|
2015-07-16T02:56:41Z
|
https://github.com/quokkaproject/quokka/issues/73
|
[
"enhancement"
] |
rochacbruno
| 1
|
sebp/scikit-survival
|
scikit-learn
| 235
|
Regularization parameter for ridge regression penalty in CoxPHSurvivalAnalysis
|
Thank you for the awesome package! I ran CoxPHSurvivalAnalysis multiple times with different choices of alpha (0, 0.01, 0.1) and didn't observe any differences in the resulting C-index, so I checked the source code and found that the penalty term is divided by n_samples, which doesn't look right to me. Perhaps you meant to divide it by n_features (which is also not common in my experience)? Apologies if I missed something!
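(Editorial note: written out, the objective as implemented appears to be the following; the division by n_samples in the second term is the point in question.)
```latex
\mathcal{L}(w) \;=\; \underbrace{-\log \mathrm{PL}(w)}_{\text{Cox partial likelihood loss}}
\;+\; \frac{\alpha}{2\, n_{\text{samples}}} \lVert w \rVert_2^2
```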
**Snippet from scikit-survival/sksurv/linear_model/coxph.py, line 188**
```python
class CoxPHOptimizer:
def nlog_likelihood(self, w):
# add regularization term to log-likelihood
return loss + numpy.sum(self.alpha * numpy.square(w)) / (2. * n_samples)
```
**Versions**
```python
import sklearn; sklearn.show_versions()
import sksurv; print("sksurv:", sksurv.__version__)
# import cvxopt; print("cvxopt:", cvxopt.__version__)
# import cvxpy; print("cvxpy:", cvxpy.__version__)
import numexpr; print("numexpr:", numexpr.__version__)
import osqp; print("osqp:", osqp.OSQP().version())
```
System:
python: 3.7.6 | packaged by conda-forge | (default, Mar 23 2020, 23:03:20) [GCC 7.3.0]
executable: /opt/conda/bin/python
machine: Linux-5.10.47-linuxkit-x86_64-with-debian-buster-sid
Python dependencies:
pip: 20.1
setuptools: 46.1.3.post20200325
sklearn: 0.24.1
numpy: 1.18.4
scipy: 1.6.0
Cython: 0.29.17
pandas: 1.3.2
matplotlib: 3.2.1
joblib: 0.14.1
threadpoolctl: 2.1.0
Built with OpenMP: True
sksurv: 0.15.0.post0
numexpr: 2.7.1
osqp: 0.6.2
|
closed
|
2021-11-07T16:05:07Z
|
2021-11-08T13:47:44Z
|
https://github.com/sebp/scikit-survival/issues/235
|
[] |
chang-hu
| 1
|
nschloe/tikzplotlib
|
matplotlib
| 191
|
histogram with log scale has display issues for small values
|
It seems the tikz output cannot properly render histograms when log scale is active for the y-axis and the absolute frequency is smaller than ~2.7, i.e. 1 or 2.
The Python code:
```py
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib2tikz import save as tikz_save
plot_values = pd.DataFrame(
{
'test0': pd.Series([0, 0, 0, .1, .1, .2]),
'test1': pd.Series([0, .1]),
}
)
plot_values.plot.hist(stacked=False) # Should not use stacked for log plot
plt.gcf().set_size_inches(4, 3)
plt.yscale('log')
plt.ylim(ymin=.1)
plt.savefig('/tmp/test_a.png')
tikz_save('/tmp/test_a.tikz')
```
The png looks fine:

However, from the pdf it seems histogram bars are plotted with their lower bound starting at ~2.7 (instead of -inf). This will especially cause issues when the frequency is 1 or 2.

|
open
|
2017-07-02T09:39:57Z
|
2024-01-25T13:55:31Z
|
https://github.com/nschloe/tikzplotlib/issues/191
|
[] |
maflcko
| 3
|
benbusby/whoogle-search
|
flask
| 410
|
[QUESTION] The README advertises "no JavaScript", but the code includes JavaScript
|
According to the README: "No javascript". It reiterates this a few times throughout the README and elsewhere. And yet:
https://github.com/benbusby/whoogle-search/blob/68fdd554825f981a24ba3b3f1d728ec5ef260005/app/templates/display.html#L43-L45
https://github.com/benbusby/whoogle-search/blob/be3714f074c0807983148c6ffa51f1287e5f465d/app/templates/index.html#L20-L21
Perhaps we could make the promise true by removing this and changing the CSP JS policy to `none` instead of `self`.
ref:
https://github.com/benbusby/whoogle-search/blob/9f84a8ad832a130690f6a9524558522665e0c7b8/app/__init__.py#L76
Just wondering; I'm a newbie, so sorry if I'm mistaken!
|
closed
|
2021-09-01T04:17:09Z
|
2021-09-15T21:44:23Z
|
https://github.com/benbusby/whoogle-search/issues/410
|
[
"question"
] |
mariavillosa
| 3
|
Netflix/metaflow
|
data-science
| 1,838
|
Question on Executing Metaflow Workflow from Python Script Without 'run' Argument
|
Hello Metaflow Team,
I am exploring ways to automate Metaflow workflows and have a query regarding the initial execution of these workflows via a Python script. Specifically, I'm interested in whether it is possible to execute a Metaflow workflow directly from a script without explicitly using the run argument for the first time.
Could you provide guidance or confirm if there's a recommended approach for initializing and running workflows programmatically without the run command? Any insights on setting up the environment or script adjustments to handle this use case would be greatly appreciated.
```
from metaflow import FlowSpec, step
class ExampleFlow(FlowSpec):
@step
def start(self):
print("This is the start step.")
self.next(self.end)
@step
def end(self):
print("This is the end step.")
if __name__ == '__main__':
# How to initiate this flow without using 'ExampleFlow().run()'?
# Create a flow instance
flow = ExampleFlow()
graph = flow._graph
current_steps = ['start']
while current_steps:
next_steps = []
for current_step in current_steps:
print(f"Running step: {current_step}")
run_step(flow, current_step)  # run_step: user-defined helper (definition omitted here)
next_steps.extend(step.__name__ for step in flow._next_steps)
print("Next steps:", next_steps)
current_steps = next_steps
```
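(Editorial sketch: absent a public in-process entry point, the conventional workaround is to invoke the CLI `run` command from a script; the file name below is hypothetical.)
```python
# Sketch: trigger the flow the supported way, but from another script.
# "example_flow.py" is the hypothetical file containing ExampleFlow.
import subprocess

subprocess.run(
    ["python", "example_flow.py", "run"],
    check=True,  # raise CalledProcessError if the flow fails
)
```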
Thank you for your assistance!
|
closed
|
2024-05-16T01:05:45Z
|
2024-08-16T06:10:50Z
|
https://github.com/Netflix/metaflow/issues/1838
|
[] |
sungreong
| 4
|
NullArray/AutoSploit
|
automation
| 1,301
|
Unhandled Exception (6f24d3e9f)
|
Autosploit version: `3.1`
OS information: `Linux-3.10.0-1160.31.1.el7.x86_64-x86_64-with-Ubuntu-18.04-bionic`
Running context: `autosploit.py`
Error message: `object of type 'NoneType' has no len()`
Error traceback:
```
Traceback (most recent call):
File "/AutoSploit/autosploit/main.py", line 116, in main
terminal.terminal_main_display(loaded_tokens)
File "/AutoSploit/lib/term/terminal.py", line 494, in terminal_main_display
if len(choice_data_list) < 4:
TypeError: object of type 'NoneType' has no len()
```
Metasploit launched: `False`
|
open
|
2021-07-31T13:53:28Z
|
2021-07-31T13:53:28Z
|
https://github.com/NullArray/AutoSploit/issues/1301
|
[] |
AutosploitReporter
| 0
|
paperless-ngx/paperless-ngx
|
machine-learning
| 7,322
|
[BUG] Filter documents by more than one owner
|
### Description
When I want to add more than one owner to the filter, the added owner is not listed but is recognized by the filter.
If I change e.g. the tag filter, the owners then only filter by the first one.

### Steps to reproduce
Go to documents
Open the permissions filter
Add more than one owner
Change e.g. the tag filter
### Webserver logs
```bash
none
```
### Browser logs
_No response_
### Paperless-ngx version
2.11.0
### Host OS
unraid / docker
### Installation method
Docker - official image
### System status
_No response_
### Browser
_No response_
### Configuration changes
_No response_
### Please confirm the following
- [X] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [X] I have already searched for relevant existing issues and discussions before opening this report.
- [X] I have updated the title field above with a concise description.
|
closed
|
2024-07-25T17:15:45Z
|
2024-08-26T03:05:00Z
|
https://github.com/paperless-ngx/paperless-ngx/issues/7322
|
[
"not a bug"
] |
bjoernpoettker
| 4
|
flairNLP/flair
|
nlp
| 2,979
|
Adding link to result classes
|
This repository is great work, but it would be better to add a description, or a link to one, for the result classes (for NER and POS).
|
closed
|
2022-11-04T16:45:16Z
|
2023-05-21T15:36:47Z
|
https://github.com/flairNLP/flair/issues/2979
|
[
"question",
"wontfix"
] |
11Alexei11
| 3
|
davidsandberg/facenet
|
tensorflow
| 814
|
About pretrained model accuracy on LFW
|
The accuracy for the model 20180402-114759 is about 0.99550. My parameter settings are as follows.

Detection and alignment:
```bash
python align_dataset_mtcnn.py \
    /media/zheng/02FCF89DFCF88BE31/face_dataset/LFW/lfw \
    /media/zheng/02FCF89DFCF88BE31/face_dataset/LFW/aligned_lfw_tf \
    --image_size 160 \
    --margin 32 \
    --random_order \
    --gpu_memory_fraction 0.25
```

Embedding extraction:
```bash
python facenet_tf_extractor.py \
    /media/zheng/02FCF89DFCF88BE31/face_dataset/LFW/aligned_lfw_tf \
    pretrained_models/20180402-114759 \
    --lfw_batch_size 64 \
    --image_size 160 \
    --lfw_pairs ../../pairs.txt \
    --use_flipped_images \
    --subtract_mean \
    --use_fixed_image_standardization
```

Validation result:
```
Accuracy: 0.99550+-0.00342
Validation rate: 0.98600+-0.00952 @ FAR=0.00100
Area Under Curve (AUC): 1.000
Equal Error Rate (EER): 0.004
```

Why can't I reproduce the same result? My TensorFlow version is 1.5; is this the problem?
|
open
|
2018-07-16T05:09:30Z
|
2018-11-19T23:43:09Z
|
https://github.com/davidsandberg/facenet/issues/814
|
[] |
zyt1378
| 3
|
tensorpack/tensorpack
|
tensorflow
| 1,209
|
lmdb.Error There is not enough space on the disk.
|
### 1. What you did:
(3) **If not using examples, tell us what you did:**
I'm trying to create an LMDB file by following the "Efficient DataFlow" tutorial in Tensorpack.
I was given a dataset with a CSV file with columns in [frame, xmin, xmax, ymin, ymax, class_id] for training an object detection model.
Initially, I was using a reduced version of the file with 300 entries (extracted from a large number of entries) for internal development and debugging. But when I tried to create an LMDB file with LMDBSerializer.save(), following the tensorpack tutorial, I got an error saying "lmdb.Error: train_small.lmdb: There is not enough space on the disk".
But I had more than a terabyte of storage left. So I reduced the CSV file entries to only have 10 entries (3 distinct images) but I had the same error.
I will attach the code zip file here.
[wow.zip](https://github.com/tensorpack/tensorpack/files/3214007/wow.zip)
### 2. What you observed:
(1) **Include the ENTIRE logs here:**
```
Traceback (most recent call last):
  File "debug2.py", line 111, in <module>
    LMDBSerializer.save(df, 'train_small.lmdb')
  File "C:\Users\dps42\AppData\Local\Continuum\miniconda3\envs\dps42_dev\lib\site-packages\tensorpack\dataflow\serialize.py", line 52, in save
    meminit=False, map_async=True) # need sync() at the end
lmdb.Error: train_small.lmdb: There is not enough space on the disk.
```
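(Editorial note, hedged: on Windows, LMDB preallocates the full `map_size` on disk, and the serializer opens the environment with a very large default map size, which would explain the error regardless of dataset size. A sketch of opening the environment with a modest map size, values illustrative:)
```python
# Sketch: open the LMDB environment with a small map_size so Windows does
# not try to preallocate ~1 TB up front. Path and size are illustrative.
import lmdb

db = lmdb.open(
    "train_small.lmdb",
    subdir=False,
    map_size=1 << 30,  # 1 GiB instead of the huge default
    readonly=False,
    meminit=False,
    map_async=True,
)
```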
### 3. What you expected, if not obvious.
Since there were only 10 entries in the CSV file and only 3 distinct images, I shouldn't see the message that "There is not enough space on the disk."
### 4. Your environment:
Windows 10. I think no GPU was used at the moment.
|
closed
|
2019-05-23T20:25:21Z
|
2019-05-28T14:59:22Z
|
https://github.com/tensorpack/tensorpack/issues/1209
|
[
"enhancement"
] |
dps42
| 6
|
zihangdai/xlnet
|
nlp
| 259
|
OOM ERROR when using local batch size=128 on TPUv3-8
|
Hi,
I am trying to train XLNet on protein sequences. I am running into an OOM error when running the script train.py on a TPUv3-8 with train_batch_size=128. (I also get an OOM error with train batch sizes 64 and 48, but not with 32 or 16.)
In the paper it is mentioned: "Specifically, we train on 512 TPU v3 chips for 500K steps with an Adam weight decay optimizer, linear learning rate decay, and a batch size of 8192, which takes about 5.5 days."
If I understand this correctly, the local batch size used there is also 128 (= 8192 / (512/8)), so I shouldn't get an OOM error.
For context, I am using a TPUv3-8 (version 1.14.1.dev20190518) and a cloud VM instance, both in us-central1-a, with TensorFlow version 1.13.1.
For the data preprocessing I am using the script data_utils and it runs with no problem.
Here are the commands I am using for both preprocessing and training:

```bash
python xlnet/data_utils.py \
    --use_tpu=True \
    --save_dir=proc_data_bsz128/example \
    --bsz_per_host=128 \
    --num_core_per_host=8 \
    --seq_len=512 \
    --reuse_len=256 \
    --input_glob=testdata_xlnet.txt \
    --num_passes=20 \
    --bi_data=True \
    --sp_path=sp.model \
    --mask_alpha=6 \
    --mask_beta=1 \
    --uncased=False \
    --num_predict=85
```

```bash
python xlnet/train.py \
    --use_tpu=True \
    --tpu=name \
    --record_info_dir=$DATA_DIR \
    --save_steps=1000 \
    --model_dir=$MODEL_DIR \
    --train_batch_size=128 \
    --seq_len=512 \
    --reuse_len=256 \
    --mem_len=384 \
    --perm_size=256 \
    --n_layer=24 \
    --d_model=1024 \
    --d_embed=1024 \
    --n_head=16 \
    --d_head=64 \
    --d_inner=4096 \
    --untie_r=True \
    --mask_alpha=6 \
    --mask_beta=1 \
    --num_predict=85
```
$DATA_DIR and $MODEL_DIR are google bucket directories.
Is there something am missing here?
Thanks for your help in advance.
|
open
|
2020-03-18T15:53:21Z
|
2021-03-02T08:11:25Z
|
https://github.com/zihangdai/xlnet/issues/259
|
[] |
GhaliaRehawi
| 1
|
OpenInterpreter/open-interpreter
|
python
| 1,373
|
Does this service have the ability to read PDFs directly, or what needs to be installed/enabled, if anything, to read them?
|
### Is your feature request related to a problem? Please describe.
_No response_
### Describe the solution you'd like
We need the ability to feed the endpoints with PDF files.
### Describe alternatives you've considered
_No response_
### Additional context
_No response_
|
open
|
2024-07-31T11:02:24Z
|
2024-08-02T11:47:25Z
|
https://github.com/OpenInterpreter/open-interpreter/issues/1373
|
[] |
lion137
| 4
|
apache/airflow
|
data-science
| 47,905
|
Fix mypy-boto3-appflow version
|
### Body
We set a TODO to handle the version limitation:
https://github.com/apache/airflow/blob/9811f1d6d0fe557ab204b20ad5cdf7423926bd22/providers/src/airflow/providers/amazon/provider.yaml#L146-L148
I'm opening this issue for visibility, as it's small in scope and a good task for new contributors.
### Committer
- [x] I acknowledge that I am a maintainer/committer of the Apache Airflow project.
|
closed
|
2025-03-18T11:28:58Z
|
2025-03-19T13:33:43Z
|
https://github.com/apache/airflow/issues/47905
|
[
"provider:amazon",
"area:providers",
"good first issue",
"kind:task"
] |
eladkal
| 2
|
autogluon/autogluon
|
data-science
| 4,271
|
Release on Conda Forge [1.1.1]
|
Release AutoGluon 1.1.1 on Conda Forge
TODO:
- [x] Release AutoGluon 1.1.1 on Conda Forge
- [x] Add instructions on how to perform Conda Forge release: https://github.com/autogluon/autogluon/blob/master/release_instructions/ReleaseInstructions.md#conda-forge-release
- [x] Add instructions on how to perform post-release Conda Forge patching: https://github.com/autogluon/autogluon/blob/master/release_instructions/ReleaseInstructions.md#conda-forge-release
|
closed
|
2024-06-14T22:11:35Z
|
2024-06-27T00:27:42Z
|
https://github.com/autogluon/autogluon/issues/4271
|
[
"install",
"priority: 0"
] |
Innixma
| 0
|
robotframework/robotframework
|
automation
| 5,273
|
What happened to robotdiff.py?
|
I noticed a tool from way back in v2.1.2: https://robotframework.org/robotframework/2.1.2/tools/robotdiff.html
What happened to this tool? Do we have a modern equivalent?
|
closed
|
2024-11-21T00:05:57Z
|
2024-11-21T12:15:07Z
|
https://github.com/robotframework/robotframework/issues/5273
|
[] |
nogjam
| 1
|
davidsandberg/facenet
|
computer-vision
| 932
|
Identification using a video stream
|
Hi,
I want to create a system for face identification, but use only a few frames for both acquisition and test time, in order to reduce errors.
1. I couldn't find any work for comparing 2 populations instead of 2 points. Moreover, I would like to apply a threshold in order to add an 'Unknown' class. I would appreciate if you share any resources regarding those points.
2. Can I assume that the embeddings of the same person would be Normally distributed? If so, why?
3. Are there weights trained on grayscale images, or would it be good enough to duplicate the single channel 3 times?
Thanks,
Lee
|
open
|
2018-12-17T14:12:55Z
|
2021-10-09T17:26:02Z
|
https://github.com/davidsandberg/facenet/issues/932
|
[] |
leetwito
| 1
|
keras-team/keras
|
data-science
| 20,189
|
Different Keras versions produce numerical deviations when using a pretrained model
|
The following code will have output deviations between Keras 3.3.3 and Keras 3.5.0.
```python
#download model
from modelscope import snapshot_download
base_path = 'q935499957/Qwen2-0.5B-Keras'
import os
dir = 'models'
try:
os.mkdir(dir)
except:
pass
model_dir = snapshot_download(base_path,local_dir=dir)
#config
import os
os.environ["KERAS_BACKEND"] = "torch"
import keras
keras.config.set_dtype_policy("bfloat16")
from transformers import AutoTokenizer
import numpy as np
from bert4keras3.models import build_transformer_model,Llama
from bert4keras3.snippets import sequence_padding
base_path = dir+'/'
config_path = base_path+'config.json'
weights_path = base_path+'QWen.weights.h5'  # save path (expand_lm.weights.h5)
dict_path = base_path+'qwen_tokenizer'
tokenizer = AutoTokenizer.from_pretrained(dict_path)
#define a model to print middle tensor
class Llama_print(Llama):
def apply_main_cache_layers(self, inputs, index,self_cache_update_index,
cross_cache_update_index=None,
attention_mask=None,position_bias=None,
):
print(inputs[0][:,:,:8])
print(index)
print(inputs[0].shape)
print('-'*50)
return super().apply_main_cache_layers(inputs, index,self_cache_update_index,
cross_cache_update_index,
attention_mask,position_bias)
Novel = build_transformer_model(
config_path,
keras_weights_path=weights_path,
model=Llama_print,
with_lm=True,
return_keras_model=False,
)
x = np.array([tokenizer.encode('hello,')+[0]])
print(Novel.cache_call([x],input_lengths=[3],
end_token=-1,search_mode='topp',k=1))
```
This is a LLaMA-like pre-trained model. The code above prints the intermediate tensors during the prefill and decode processes.
With the exact same code, the layer inputs during the prefill process are completely different between the two versions. In the decode phase, even when the input is the same, there are significant differences in the outputs as the iterations proceed between the two versions.
keras 3.3.3 print
```
#prefill
tensor([[[ 0.0164, 0.0070, -0.0019, -0.0013, 0.0156, 0.0074, -0.0055,
-0.0139],
[-0.0325, -0.0471, 0.0239, -0.0009, 0.0129, 0.0027, 0.0299,
0.0160],
[-0.0204, -0.0093, 0.0121, 0.0091, -0.0065, -0.0225, 0.0149,
0.0108]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
0
torch.Size([1, 3, 896])
--------------------------------------------------
tensor([[[-0.0459, -0.0967, -0.0270, 0.0452, 0.2500, -0.1387, 0.1094,
-0.1436],
[ 0.0031, -0.0479, 0.0107, -0.0291, -0.0869, 0.0549, 0.0579,
0.0618],
[-0.1099, 0.0183, 0.1309, -0.1406, 0.0204, -0.0154, 0.2656,
0.0669]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
1
torch.Size([1, 3, 896])
--------------------------------------------------
tensor([[[-0.3398, -0.2988, 0.1143, -0.2109, 0.5625, 0.0869, -0.3281,
-0.1465],
[ 0.1895, -0.1562, -0.0292, -0.1348, 0.0283, 0.0452, 0.2734,
0.0396],
[ 0.0127, -0.0498, 0.0388, -0.1484, 0.0791, 0.1118, 0.2578,
0.0879]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
2
torch.Size([1, 3, 896])
--------------------------------------------------
tensor([[[-9.2188e-01, 1.7109e+00, 3.3281e+00, -2.5000e+00, -2.0312e-01,
-4.7070e-01, -7.1250e+00, 3.7891e-01],
[ 6.5918e-02, -3.2031e-01, -2.0312e-01, 1.2207e-01, -1.2598e-01,
1.7090e-03, 9.2773e-02, -1.6699e-01],
[-1.6846e-02, -1.9531e-01, -2.1875e-01, 1.4648e-02, 7.3242e-04,
6.0303e-02, 4.2773e-01, 2.3438e-02]]], device='cuda:0',
dtype=torch.bfloat16, grad_fn=<SliceBackward0>)
3
torch.Size([1, 3, 896])
--------------------------------------------------
tensor([[[-1.1719, 1.4062, 3.2031, -1.6328, -0.8047, -1.0938, -7.9062,
1.2266],
[ 0.3594, -0.1025, 0.0869, 0.3496, -0.0132, 0.0515, 0.2168,
0.1016],
[ 0.0449, -0.2910, -0.2305, 0.0383, 0.1592, -0.1016, 0.6328,
0.0190]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
4
torch.Size([1, 3, 896])
--------------------------------------------------
tensor([[[-2.1562e+00, 2.0156e+00, 4.3125e+00, 9.5312e-01, 2.7344e-01,
-1.8750e+00, -1.3875e+01, 2.4062e+00],
[ 4.5703e-01, -2.6172e-01, -2.4414e-02, 3.6133e-01, 1.6016e-01,
1.1768e-01, 4.1992e-01, -4.5898e-02],
[ 9.6680e-02, -4.1016e-01, -2.8906e-01, 7.9346e-03, -1.5430e-01,
-1.5430e-01, 4.7266e-01, -2.6562e-01]]], device='cuda:0',
dtype=torch.bfloat16, grad_fn=<SliceBackward0>)
5
torch.Size([1, 3, 896])
#decode
--------------------------------------------------
tensor(1, device='cuda:0', dtype=torch.int32)tensor([[[-0.0325, -0.0471, 0.0239, -0.0009, 0.0129, 0.0027, 0.0299,
0.0160]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
0
torch.Size([1, 1, 896])
--------------------------------------------------
tensor([[[ 0.0031, -0.0481, 0.0107, -0.0291, -0.0869, 0.0549, 0.0579,
0.0618]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
1
torch.Size([1, 1, 896])
--------------------------------------------------
tensor([[[ 0.1895, -0.1572, -0.0299, -0.1367, 0.0283, 0.0452, 0.2754,
0.0405]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
2
torch.Size([1, 1, 896])
--------------------------------------------------
tensor([[[ 0.0654, -0.3203, -0.2041, 0.1221, -0.1260, 0.0039, 0.0933,
-0.1660]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
3
torch.Size([1, 1, 896])
--------------------------------------------------
tensor([[[ 0.3574, -0.0986, 0.0898, 0.3516, -0.0137, 0.0518, 0.2158,
0.1064]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
4
torch.Size([1, 1, 896])
--------------------------------------------------
```
keras 3.5.0 print
```
#prefill
tensor([[[-0.0096, 0.0126, -0.0063, 0.0044, 0.0121, 0.0038, 0.0104,
-0.0009],
[-0.0325, -0.0471, 0.0239, -0.0009, 0.0129, 0.0027, 0.0299,
0.0160],
[-0.0204, -0.0093, 0.0121, 0.0091, -0.0065, -0.0225, 0.0149,
0.0108]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
0
torch.Size([1, 3, 896])
--------------------------------------------------
tensor([[[-0.1807, 0.0674, -0.3926, -0.0278, 0.2520, -0.0840, -0.0669,
-0.3047],
[-0.0072, -0.0415, 0.0123, -0.0146, -0.1270, 0.0679, 0.0610,
-0.0205],
[-0.1279, 0.0349, 0.2539, -0.1611, -0.0225, 0.0275, 0.1338,
0.0386]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
1
torch.Size([1, 3, 896])
--------------------------------------------------
tensor([[[ 5.6250e-01, -2.3633e-01, -1.0781e+00, -1.2988e-01, 3.4180e-01,
3.7109e-01, -3.1250e-01, -1.9531e-01],
[ 1.2598e-01, -1.2695e-02, -7.1289e-02, -1.3672e-01, 3.3203e-02,
1.4941e-01, 1.9922e-01, -2.1875e-01],
[-9.5215e-02, -5.9570e-02, 2.0117e-01, -3.2031e-01, 3.6621e-04,
5.8350e-02, 1.6504e-01, -8.9355e-02]]], device='cuda:0',
dtype=torch.bfloat16, grad_fn=<SliceBackward0>)
2
torch.Size([1, 3, 896])
--------------------------------------------------
tensor([[[ 0.4277, 0.6680, 2.3750, -2.8750, -0.5039, 0.0742, -6.5625,
0.4082],
[ 0.2256, -0.3047, -0.0349, -0.0859, 0.1191, 0.2334, 0.3262,
-0.0088],
[-0.1025, -0.0918, 0.3105, -0.2227, -0.0162, 0.2715, 0.4746,
0.0371]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
3
torch.Size([1, 3, 896])
--------------------------------------------------
tensor([[[ 0.1719, 0.3652, 2.2812, -2.0156, -1.0938, -0.5547, -7.3438,
1.2500],
[ 0.5938, -0.3047, -0.0126, -0.0981, 0.2676, 0.0479, 0.0771,
0.1455],
[ 0.2051, -0.2188, 0.0391, -0.2949, 0.2539, 0.0566, 0.4355,
0.0227]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
4
torch.Size([1, 3, 896])
--------------------------------------------------
tensor([[[-8.0469e-01, 9.8438e-01, 3.4062e+00, 6.0938e-01, -7.8125e-03,
-1.3438e+00, -1.3375e+01, 2.4219e+00],
[ 6.3672e-01, -5.7422e-01, 2.8931e-02, -3.1250e-01, 3.2422e-01,
-6.7871e-02, 4.0430e-01, -4.0039e-02],
[ 3.5156e-01, -4.4531e-01, -1.8066e-02, -2.2070e-01, 1.1377e-01,
3.0884e-02, 4.5508e-01, 1.4160e-01]]], device='cuda:0',
dtype=torch.bfloat16, grad_fn=<SliceBackward0>)
#decode
--------------------------------------------------
tensor(1, device='cuda:0', dtype=torch.int32)tensor([[[-0.0325, -0.0471, 0.0239, -0.0009, 0.0129, 0.0027, 0.0299,
0.0160]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
0
torch.Size([1, 1, 896])
--------------------------------------------------
tensor([[[-0.0072, -0.0415, 0.0122, -0.0146, -0.1270, 0.0679, 0.0610,
-0.0205]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
1
torch.Size([1, 1, 896])
--------------------------------------------------
tensor([[[ 0.1230, -0.0083, -0.0713, -0.1260, 0.0293, 0.1436, 0.2051,
-0.2090]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
2
torch.Size([1, 1, 896])
--------------------------------------------------
tensor([[[ 0.2266, -0.3008, -0.0327, -0.0791, 0.1143, 0.2285, 0.3320,
-0.0068]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
3
torch.Size([1, 1, 896])
--------------------------------------------------
tensor([[[ 0.5977, -0.2930, -0.0059, -0.0884, 0.2637, 0.0449, 0.0889,
0.1465]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
4
torch.Size([1, 1, 896])
--------------------------------------------------
tensor([[[ 0.6367, -0.5586, 0.0376, -0.3047, 0.3242, -0.0654, 0.4277,
-0.0312]]], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
5
torch.Size([1, 1, 896])
--------------------------------------------------
```
|
closed
|
2024-08-30T11:21:08Z
|
2024-08-31T07:38:23Z
|
https://github.com/keras-team/keras/issues/20189
|
[] |
pass-lin
| 2
|
PedroBern/django-graphql-auth
|
graphql
| 161
|
Rewrote this Package for Django 4 and Graphene 3+
|
Hello, if you're reading this you're probably interested in using `django-graphql-auth` with the latest versions of Django, graphene, graphene-django and `django-graphql-jwt`.
If so, I might be able to provide a replacement, as I created a Django app highly inspired by this package that works with all the latest updates. If a lot of people are interested, I can create a public repository where it can be battle-tested and packaged 📦 for PyPI.
Let me know!
|
open
|
2023-01-06T15:37:00Z
|
2024-03-03T05:17:17Z
|
https://github.com/PedroBern/django-graphql-auth/issues/161
|
[] |
itzomen
| 13
|
thunlp/OpenPrompt
|
nlp
| 304
|
How to fix equal logits in the LLaMA output
|
This is my code:
```python
from datasets import load_dataset
from transformers import set_seed
from openprompt.data_utils import InputExample
import os
from tqdm import tqdm

device = "cuda"
classes = ["negative", "positive"]
set_seed(1024)

from accelerate import Accelerator
accelerator = Accelerator()

data_path = 'data'
test_path = os.path.join(data_path, 'test.json')
test_dataset = load_dataset('json', data_files=test_path)['train']  # 1 = positive, 0 = negative
y_true = test_dataset['label']

import copy
dataset = []
data = []
copy_test_dataset = copy.deepcopy(test_dataset)
for example in copy_test_dataset:
    temp_data = {"guid": example["label"], "text_a": example["sentence"]}
    data.append(temp_data)
for item in data:
    dataset.append(InputExample(guid=item["guid"], text_a=item["text_a"]))

from openprompt import plms
from openprompt.plms import *
from transformers import LlamaConfig, LlamaForCausalLM, LlamaTokenizer
plms._MODEL_CLASSES["llama"] = ModelClass(**{"config": LlamaConfig, "tokenizer": LlamaTokenizer,
                                             "model": LlamaForCausalLM, "wrapper": LMTokenizerWrapper})
from openprompt.plms import load_plm
plm, tokenizer, model_config, WrapperClass = load_plm("llama", "huggyllama/llama-7b")
tokenizer.pad_token_id = 0

from openprompt.prompts import ManualTemplate
promptTemplate = ManualTemplate(
    text=' {"placeholder":"text_a"} This sentence was {"mask"}',
    tokenizer=tokenizer,
)
from openprompt.prompts import ManualVerbalizer
promptVerbalizer = ManualVerbalizer(
    classes=classes,
    label_words={"negative": ["bad"], "positive": ["good", "wonderful", "great"]},
    tokenizer=tokenizer,
)
from openprompt import PromptForClassification
promptModel = PromptForClassification(template=promptTemplate, plm=plm, verbalizer=promptVerbalizer)

from openprompt import PromptDataLoader
data_loader = PromptDataLoader(dataset=dataset, tokenizer=tokenizer, template=promptTemplate,
                               tokenizer_wrapper_class=WrapperClass, batch_size=1)

import torch
promptModel.eval()
print(promptModel)
promptModel, data_loader = accelerator.prepare(promptModel, data_loader)
promptModel.to(device)
predictions = []
with torch.no_grad():
    for batch in tqdm(data_loader, desc="Processing batches"):
        batch = {k: v.to(device) for k, v in batch.items()}
        print(batch)
        logits = promptModel(batch)
        print(logits)
        exit()  # debugging: stop after the first batch
        preds = torch.argmax(logits, dim=-1)
        for i in preds:
            predictions.append(i.item())

from sklearn.metrics import accuracy_score
accuracy = accuracy_score(y_true, predictions)
print('Accuracy: %.2f' % (accuracy * 100))
```

The output logits are:
```
tensor([[-1.3863, -1.3863]])
```
|
open
|
2023-12-19T07:19:23Z
|
2024-03-21T09:29:19Z
|
https://github.com/thunlp/OpenPrompt/issues/304
|
[] |
shuaizhao95
| 3
|
proplot-dev/proplot
|
matplotlib
| 469
|
why is saving .svg so slow?
|
<!-- Thanks for helping us make proplot a better package! If this is a bug report, please use the template provided below. If this is a feature request, you can delete the template text (just try to be descriptive with your request). -->
### Description
[Description of the bug or feature.]
### Steps to reproduce
A "[Minimal, Complete and Verifiable Example](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports)" will make it much easier for maintainers to help you.
```python
# your code here
# we should be able to copy-paste this into python and exactly reproduce your bug
```
**Expected behavior**: [What you expected to happen]
**Actual behavior**: [What actually happened]
### Equivalent steps in matplotlib
Please try to make sure this bug is related to a proplot-specific feature. If you're not sure, try to replicate it with the [native matplotlib API](https://matplotlib.org/3.1.1/api/index.html). Matplotlib bugs belong on the [matplotlib github page](https://github.com/matplotlib/matplotlib).
```python
# your code here, if applicable
import matplotlib.pyplot as plt
```
### Proplot version
Paste the results of `import matplotlib; print(matplotlib.__version__); import proplot; print(proplot.version)` here.
|
closed
|
2024-11-13T11:53:47Z
|
2024-11-14T01:44:51Z
|
https://github.com/proplot-dev/proplot/issues/469
|
[] |
KingRyu1998
| 2
|
hanwenlu2016/web-ui
|
pytest
| 5
|
django+rest_framework+react: will this part be open-sourced when you have time?
|
closed
|
2021-06-25T01:42:01Z
|
2021-08-18T02:29:32Z
|
https://github.com/hanwenlu2016/web-ui/issues/5
|
[] |
god-pane
| 5
|
|
freqtrade/freqtrade
|
python
| 11,249
|
Address:Port conflict
|
I have managed to install Docker Desktop for Windows and the Freqtrade image. I started Freqtrade in the container and see bot heartbeats update each minute. I can also start from the terminal and see the same. I can't access the UI at localhost:8080 because I already have another app using 127.0.0.1:8080. If I shutdown that app, I can get the FreqtradeUI up. I can't find where to change the address and/or port so that the two apps don't conflict. I looked through the documentation and searched online to no avail. I know this is simple, but I just don't understand this stuff. (in over my head).
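(Editorial sketch, hedged: the UI address and port live in the `api_server` section of freqtrade's `config.json`; below is a hypothetical helper that moves it off 8080. The path and port are illustrative.)
```python
# Sketch: point freqtrade's API server at a free port so it no longer
# collides with the other app on 127.0.0.1:8080. Path/port are illustrative.
import json
from pathlib import Path

cfg_path = Path("user_data/config.json")  # assumed config location
cfg = json.loads(cfg_path.read_text())
api = cfg.setdefault("api_server", {})
api["listen_ip_address"] = "127.0.0.1"
api["listen_port"] = 8081  # any free port
cfg_path.write_text(json.dumps(cfg, indent=4))
```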
I'm running Windows 10 Pro with Python 3.13
Someone please give me the simple answer.
Thank you
|
closed
|
2025-01-17T22:41:46Z
|
2025-01-21T13:03:27Z
|
https://github.com/freqtrade/freqtrade/issues/11249
|
[
"Question",
"Docker"
] |
TheFirstVillageIdiot
| 7
|
plotly/dash
|
plotly
| 2,716
|
dcc.Input selectionStart not functioning as expected
|
**Describe the bug**
The dcc.Input properties selectionStart and selectionEnd are not updating; they return None or the initially set value.
**Expected behavior**
I expect the values of selectionStart (or selectionEnd) to return the offset into the element's text content of the first (or last) selected character.
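(Editorial repro sketch, assuming the standard Dash 2.x API; component ids are arbitrary.)
```python
# Sketch: read selectionStart back in a callback. In practice the callback
# fires with None / the initial value instead of the live caret offset.
from dash import Dash, dcc, html, Input, Output

app = Dash(__name__)
app.layout = html.Div([
    dcc.Input(id="inp", value="hello world", type="text"),
    html.Div(id="out"),
])

@app.callback(Output("out", "children"), Input("inp", "selectionStart"))
def show_selection(start):
    return f"selectionStart = {start}"

if __name__ == "__main__":
    app.run(debug=True)
```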
**Screenshots**
This is reported in the following community posts:
https://community.plotly.com/t/explaining-selectionstart-in-dash-input-component/39023
https://community.plotly.com/t/selectionstart-and-selectionend-doesnt-seem-to-work-for-input-not-available-for-textarea/54746
https://community.plotly.com/t/selection-in-dash-component-not-working/36707
<img width="712" alt="Screen Shot 2023-12-20 at 3 33 41 PM" src="https://github.com/plotly/dash/assets/44043492/1fdb685f-a967-4b68-9d9e-b375a4564ffb">
|
open
|
2023-12-20T20:37:21Z
|
2024-08-13T19:44:12Z
|
https://github.com/plotly/dash/issues/2716
|
[
"bug",
"sev-2",
"P3"
] |
e-wallace
| 0
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.