| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
pykaldi/pykaldi
|
numpy
| 309
|
Could not find Kaldi. Please install Kaldi under the tools directory or set KALDI_DIR environment variable.
|
After following the installation steps in order, I installed everything in sequence, but I still got the same error at the end:
`Please install Kaldi under the tools directory or set `KALDI_DIR` environment variable.`
I checked the `KALDI_DIR` variable, and it does point to the correct Kaldi folder under pykaldi.
Any help would be appreciated.
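Before rebuilding, a quick sanity check of the environment can help narrow this down. This is only a sketch: the `src/` layout test is an assumption about a source checkout of Kaldi, not pykaldi's actual detection logic.

```python
import os
import pathlib

def check_kaldi_dir():
    """Rough sanity check that KALDI_DIR points at a plausible Kaldi tree."""
    kaldi_dir = os.environ.get("KALDI_DIR")
    if not kaldi_dir:
        return "KALDI_DIR is not set"
    root = pathlib.Path(kaldi_dir)
    if not root.is_dir():
        return f"KALDI_DIR does not exist: {root}"
    if not (root / "src").is_dir():
        return f"no src/ under {root}; is this a compiled Kaldi checkout?"
    return "KALDI_DIR looks plausible"

print(check_kaldi_dir())
```

Note that the variable must be exported in the same shell that runs the build, or it will not be visible to the build scripts.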
|
closed
|
2022-11-15T19:18:39Z
|
2023-09-12T14:00:29Z
|
https://github.com/pykaldi/pykaldi/issues/309
|
[] |
shakeel608
| 1
|
facebookresearch/fairseq
|
pytorch
| 4,998
|
fairseq install error
|
## 🐛 Bug
<!-- Hello, I encountered a bug when trying to install fairseq. -->
### To Reproduce
Steps to reproduce the behavior:
1. Run cmd 'pip install --editable ./'
2. See error
```
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple, https://pypi.ngc.nvidia.com, https://pypi.ngc.nvidia.com
Obtaining file:///home/jhwu/ContinualMT/fairseq
Installing build dependencies ... done
Checking if build backend supports build_editable ... done
Getting requirements to build editable ... error
error: subprocess-exited-with-error
× Getting requirements to build editable did not run successfully.
│ exit code: 1
╰─> [34 lines of output]
Traceback (most recent call last):
File "/tmp/pip-build-env-5bn040xc/overlay/lib/python3.8/site-packages/torch/__init__.py", line 172, in _load_global_deps
ctypes.CDLL(lib_path, mode=ctypes.RTLD_GLOBAL)
File "/home/jhwu/anaconda3/envs/CLMT/lib/python3.8/ctypes/__init__.py", line 373, in __init__
self._handle = _dlopen(self._name, mode)
OSError: /tmp/pip-build-env-5bn040xc/overlay/lib/python3.8/site-packages/torch/lib/../../nvidia/cublas/lib/libcublas.so.11: undefined symbol: cublasLtGetStatusString, version libcublasLt.so.11
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/jhwu/anaconda3/envs/CLMT/lib/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 351, in <module>
main()
File "/home/jhwu/anaconda3/envs/CLMT/lib/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 333, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "/home/jhwu/anaconda3/envs/CLMT/lib/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 132, in get_requires_for_build_editable
return hook(config_settings)
File "/tmp/pip-build-env-5bn040xc/overlay/lib/python3.8/site-packages/setuptools/build_meta.py", line 447, in get_requires_for_build_editable
return self.get_requires_for_build_wheel(config_settings)
File "/tmp/pip-build-env-5bn040xc/overlay/lib/python3.8/site-packages/setuptools/build_meta.py", line 338, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=['wheel'])
File "/tmp/pip-build-env-5bn040xc/overlay/lib/python3.8/site-packages/setuptools/build_meta.py", line 320, in _get_build_requires
self.run_setup()
File "/tmp/pip-build-env-5bn040xc/overlay/lib/python3.8/site-packages/setuptools/build_meta.py", line 335, in run_setup
exec(code, locals())
File "<string>", line 12, in <module>
File "/tmp/pip-build-env-5bn040xc/overlay/lib/python3.8/site-packages/torch/__init__.py", line 217, in <module>
_load_global_deps()
File "/tmp/pip-build-env-5bn040xc/overlay/lib/python3.8/site-packages/torch/__init__.py", line 178, in _load_global_deps
_preload_cuda_deps()
File "/tmp/pip-build-env-5bn040xc/overlay/lib/python3.8/site-packages/torch/__init__.py", line 158, in _preload_cuda_deps
ctypes.CDLL(cublas_path)
File "/home/jhwu/anaconda3/envs/CLMT/lib/python3.8/ctypes/__init__.py", line 373, in __init__
self._handle = _dlopen(self._name, mode)
OSError: /tmp/pip-build-env-5bn040xc/overlay/lib/python3.8/site-packages/nvidia/cublas/lib/libcublas.so.11: undefined symbol: cublasLtGetStatusString, version libcublasLt.so.11
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× Getting requirements to build editable did not run successfully.
│ exit code: 1
╰─> See above for output.
```
### Environment
- fairseq Version (e.g., 1.0 or main): main
- PyTorch Version (e.g., 1.0): 1.12.1
- OS (e.g., Linux): Linux
- How you installed fairseq (`pip`, source): source
- Build command you used (if compiling from source): pip install --editable ./
- Python version: 3.8.16
- CUDA/cuDNN version: 11.3
- GPU models and configuration: NVIDIA TITAN Xp
- Any other relevant information:
|
open
|
2023-02-28T12:35:03Z
|
2023-02-28T12:35:25Z
|
https://github.com/facebookresearch/fairseq/issues/4998
|
[
"bug",
"needs triage"
] |
WJMacro
| 0
|
iperov/DeepFaceLab
|
machine-learning
| 5,349
|
Any way to accelerate inference of the SAEHD tf-model?
|
Hello there, I'm working on lightweight deepfakes and I'm wondering how to optimize the pretrained SAEHD model to lower inference time, via approaches such as quantization, ONNX, or TensorRT.
If you have any good ideas or prior experience, please share them in this issue.
|
closed
|
2021-06-11T03:27:42Z
|
2021-07-01T05:39:45Z
|
https://github.com/iperov/DeepFaceLab/issues/5349
|
[] |
ykk648
| 2
|
tensorflow/tensor2tensor
|
machine-learning
| 1,533
|
tf.session
|
### Description
I would like to know in which part of the code the session is created during training. That is, when executing:
```
t2t-trainer --model=transformer --hparams_set=transformer_librispeech_tpu \
    --problem=librispeech_train_full_test_clean \
```
where is the session (`tf.Session`) created? I need to run a test that changes `with tf.Session() as sess:` to `sess = tf.Session()`.
...
Thank you very much.
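As a general Python note (not specific to tensor2tensor), the two forms differ only in session lifetime: the `with` form closes the session automatically on exit, while the plain assignment requires an explicit `sess.close()`. A framework-free sketch of the two patterns, using a stand-in class instead of TensorFlow:

```python
class FakeSession:
    """Stand-in for tf.Session, to show the two usage patterns."""
    def __init__(self):
        self.closed = False
    def __enter__(self):
        return self
    def __exit__(self, *exc):
        self.close()
    def close(self):
        self.closed = True

# Pattern 1: context manager, closed automatically when the block exits.
with FakeSession() as sess:
    pass
auto_closed = sess.closed

# Pattern 2: manual lifetime; the caller is responsible for close().
sess2 = FakeSession()
manual_closed_before = sess2.closed
sess2.close()
```

So replacing the `with` block with a bare assignment is safe only if every exit path eventually calls `close()`.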
|
closed
|
2019-04-08T05:01:15Z
|
2020-02-17T16:15:34Z
|
https://github.com/tensorflow/tensor2tensor/issues/1533
|
[] |
manuel3265
| 1
|
PablocFonseca/streamlit-aggrid
|
streamlit
| 67
|
df resetting after adding a row
|
Good day everyone,
I'm trying to create a new df using AgGrid but every time I add a new row, the df resets without saving the inputs previously typed.
Do you know how to avoid this issue?
|
closed
|
2022-03-02T19:47:04Z
|
2024-04-04T17:53:17Z
|
https://github.com/PablocFonseca/streamlit-aggrid/issues/67
|
[] |
lalo-ap
| 5
|
microsoft/nni
|
machine-learning
| 5,627
|
YOLOv5-s model is slightly larger after pruning with L1NormPruner than before
|
**Describe the bug**:
After speeding up YOLOv5-s with sparsity 0.2 and saving the state dict with `torch.save()`, I found that the model is slightly larger than it was before pruning.

**Environment**:
- NNI version: 3.0rc1
- Training service (local|remote|pai|aml|etc):
- Python version: 3.9
- PyTorch version: 1.12
- Cpu or cuda version: cuda11.3
**Reproduce the problem**
- Code|Example:
```python
import torch
from nni.common.concrete_trace_utils import concrete_trace
from nni.contrib.compression.pruning import L1NormPruner
from nni.contrib.compression.utils import auto_set_denpendency_group_ids
from nni.compression.pytorch.speedup.v2 import ModelSpeedup

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', device='cpu')
model(torch.rand([1, 3, 640, 640]))
config_list = [{
    'sparsity': 0.2,
    'op_types': ['Conv2d']
}, {
    'op_names': ['model.model.model.24.m.0', 'model.model.model.24.m.1', 'model.model.model.24.m.2'],
    'exclude': True
}]
config_list = auto_set_denpendency_group_ids(model, config_list, torch.rand([1, 3, 640, 640]))
pruner = L1NormPruner(model, config_list)
masked_model, masks = pruner.compress()
pruner.unwrap_model()
graph_module = concrete_trace(model, (torch.rand([1, 3, 640, 640]), None, None, None))
ModelSpeedup(model, torch.rand([1, 3, 640, 640]), masks, graph_module=graph_module).speedup_model()
model(torch.rand([1, 3, 640, 640]))
torch.save(model.state_dict(), '/tmp/modelhub/yolo/pruned_yolo.pth')
```
- How to reproduce:
- Run `python yolo_prune.py`
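One way to verify the size difference is to sum the bytes of every tensor in each state dict. A framework-free sketch with numpy arrays standing in for torch tensors (with torch you would sum `p.numel() * p.element_size()` instead); the shapes below are illustrative, not YOLOv5's real ones:

```python
import numpy as np

def state_dict_nbytes(state_dict):
    """Total in-memory bytes of all arrays in a (name -> array) dict."""
    return sum(a.nbytes for a in state_dict.values())

# Hypothetical conv weight before and after channel pruning (16 -> 12 filters).
dense = {"conv.weight": np.zeros((16, 3, 3, 3), dtype=np.float32)}
pruned = {"conv.weight": np.zeros((12, 3, 3, 3), dtype=np.float32)}

print(state_dict_nbytes(dense), state_dict_nbytes(pruned))
```

If the pruned checkpoint on disk is nonetheless larger, it is worth checking whether extra buffers (e.g. leftover masks or wrapper state) were saved along with the weights.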
|
open
|
2023-07-03T11:22:17Z
|
2023-07-05T02:36:12Z
|
https://github.com/microsoft/nni/issues/5627
|
[] |
moonlightian
| 0
|
google-research/bert
|
nlp
| 1,134
|
BERT masked language model: how is the embedding of the [MASK] token calculated?
|
During training of the masked language model, we construct the embedding of the masked token using the embeddings of the contextual words, right? Then with a softmax layer we predict the masked word.
If we construct the masked embedding from the contextual tokens, we would need to calculate the dot product of the query of the masked embedding with the key of each contextual token. My question is: how can we calculate the query of the masked token if we don't know its input embedding (because we masked it intentionally)?
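For what it's worth, `[MASK]` is an ordinary token in BERT's vocabulary with its own learned input embedding, so its attention query is computed from that embedding just like any other token's. A toy numpy sketch of one attention step (random weights, purely illustrative, not BERT's actual parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy hidden size

# Every vocabulary entry, including [MASK], has a learned input embedding.
embeddings = {"the": rng.normal(size=d),
              "[MASK]": rng.normal(size=d),
              "sat": rng.normal(size=d)}
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))

tokens = ["the", "[MASK]", "sat"]
X = np.stack([embeddings[t] for t in tokens])   # (3, d) input embeddings
Q, K, V = X @ W_q, X @ W_k, X @ W_v             # queries, keys, values

scores = Q @ K.T / np.sqrt(d)
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
contextual = weights @ V  # row 1 is the contextualized [MASK] representation
```

The query for `[MASK]` comes from its own (learned) embedding row; the contextualization happens through the attention weights over the other tokens.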
|
open
|
2020-08-07T19:21:20Z
|
2020-08-07T19:23:09Z
|
https://github.com/google-research/bert/issues/1134
|
[] |
NicolasMontes
| 0
|
TencentARC/GFPGAN
|
deep-learning
| 413
|
amo
|
open
|
2023-07-11T17:42:41Z
|
2023-07-11T18:32:15Z
|
https://github.com/TencentARC/GFPGAN/issues/413
|
[] |
jpyaser
| 2
|
|
microsoft/qlib
|
deep-learning
| 1,662
|
ERROR: No matching distribution found for blosc2>=2.2.8 (macos, python 3.8)
|
## 🐛 Bug Description
When following the installation instructions, I get the following error
`ERROR: No matching distribution found for blosc2>=2.2.8`
on macos 14.0, python 3.8.
## To Reproduce
Steps to reproduce the behavior:
```
conda create --yes -n qlib python=3.8
conda activate qlib
pip install pyqlib
```
## Expected Behavior
Installation completes successfully
## Screenshot
```
...
Collecting tables>=3.6.1 (from pyqlib)
Using cached tables-3.9.0.tar.gz (4.7 MB)
Installing build dependencies ... error
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> [20 lines of output]
Collecting setuptools>=61.0.0
Obtaining dependency information for setuptools>=61.0.0 from https://files.pythonhosted.org/packages/bb/26/7945080113158354380a12ce26873dd6c1ebd88d47f5bc24e2c5bb38c16a/setuptools-68.2.2-py3-none-any.whl.metadata
Using cached setuptools-68.2.2-py3-none-any.whl.metadata (6.3 kB)
Collecting wheel
Obtaining dependency information for wheel from https://files.pythonhosted.org/packages/b8/8b/31273bf66016be6ad22bb7345c37ff350276cfd46e389a0c2ac5da9d9073/wheel-0.41.2-py3-none-any.whl.metadata
Using cached wheel-0.41.2-py3-none-any.whl.metadata (2.2 kB)
Collecting oldest-supported-numpy
Obtaining dependency information for oldest-supported-numpy from https://files.pythonhosted.org/packages/94/9a/756fef9346e5ca2289cb70d73990b4c9f25446a885c1186cfb93a85e7da0/oldest_supported_numpy-2023.8.3-py3-none-any.whl.metadata
Using cached oldest_supported_numpy-2023.8.3-py3-none-any.whl.metadata (9.5 kB)
Collecting packaging
Obtaining dependency information for packaging from https://files.pythonhosted.org/packages/ec/1a/610693ac4ee14fcdf2d9bf3c493370e4f2ef7ae2e19217d7a237ff42367d/packaging-23.2-py3-none-any.whl.metadata
Using cached packaging-23.2-py3-none-any.whl.metadata (3.2 kB)
Collecting py-cpuinfo
Using cached py_cpuinfo-9.0.0-py3-none-any.whl (22 kB)
Collecting Cython>=0.29.32
Obtaining dependency information for Cython>=0.29.32 from https://files.pythonhosted.org/packages/5b/de/f57f7dc68629b52a2e6feea0499ebf1324395d2d4f06e643e7052f590d90/Cython-3.0.2-cp38-cp38-macosx_10_9_x86_64.whl.metadata
Using cached Cython-3.0.2-cp38-cp38-macosx_10_9_x86_64.whl.metadata (3.1 kB)
ERROR: Ignored the following versions that require a different python version: 2.2.8 Requires-Python <4,>=3.9
ERROR: Could not find a version that satisfies the requirement blosc2>=2.2.8 (from versions: 0.1.1, 0.1.2, 0.1.3, 0.1.4, 0.1.5, 0.1.6, 0.1.7, 0.1.8, 0.1.9, 0.1.10, 0.2.0, 0.3.0, 0.3.1, 0.3.2, 0.4.0, 0.4.1, 0.5.1, 0.5.2, 0.6.1, 0.6.2, 0.6.3, 0.6.4, 0.6.5, 0.6.6, 2.0.0, 2.1.0, 2.1.1, 2.2.0, 2.2.1, 2.2.2, 2.2.3, 2.2.4, 2.2.5, 2.2.6, 2.2.7)
ERROR: No matching distribution found for blosc2>=2.2.8
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
```
## Environment
**Note**: User could run `cd scripts && python collect_info.py all` under project directory to get system information
and paste them here directly.
I cannot run the above command since I couldn't install qlib in the first place; however, I provide all the information I can.
- Qlib version: N/A
- Python version: 3.8.17
- OS (`Windows`, `Linux`, `MacOS`): macos 14.0
- Commit number (optional, please provide it if you are using the dev version):
## Additional Notes
<!-- Add any other information about the problem here. -->
|
open
|
2023-10-05T11:02:56Z
|
2023-10-05T11:02:56Z
|
https://github.com/microsoft/qlib/issues/1662
|
[
"bug"
] |
fhamborg
| 0
|
charlesq34/pointnet
|
tensorflow
| 32
|
error if batch size =1
|
Hi..
I'm getting the following error while trying to change the batch size to 1:
```
python evaluate.py --visu --batch_size 1
```
```
.....
......
ValueError: Shape must be rank 2 but is rank 3 for 'MatMul_1' (op: 'MatMul') with input shapes: [1024,64], [1,64,64].
```
My final goal is to call the network to predict from data coming from a depth sensor (e.g. Kinect).
I'm trying to call it with only one point cloud and see what it predicts; any idea how to achieve that?
Modifying the code to set the batch size to None, so that it depends on the actual input, results in a different error:
```
TypeError: Failed to convert object of type <class 'list'> to Tensor. Contents: [None, -1]. Consider casting elements to a supported type.
```
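A common way to feed a single cloud to a network trained with a fixed batch size is to keep the batch dimension and pass an array of shape (1, num_points, 3). A numpy sketch of the reshaping (the point count of 1024 is an assumption taken from typical PointNet configs):

```python
import numpy as np

num_points = 1024
single_cloud = np.random.rand(num_points, 3).astype(np.float32)  # one scan

# The network expects (batch, num_points, 3); add a leading batch axis of 1.
batched = np.expand_dims(single_cloud, axis=0)
```

With this shape, running `evaluate.py` with `--batch_size 1` should match the placeholder's rank, avoiding the rank-2/rank-3 mismatch in `MatMul`.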
|
closed
|
2017-08-07T18:45:07Z
|
2017-08-29T09:36:56Z
|
https://github.com/charlesq34/pointnet/issues/32
|
[] |
mzaiady
| 2
|
seleniumbase/SeleniumBase
|
web-scraping
| 2,959
|
Selecting default search engine is needed for Chrome version 127
|
Hi!
The actual problem is described here: https://stackoverflow.com/questions/78787332/selecting-default-search-engine-is-needed-for-chrome-version-127
So you need to make this small fix to work with the new Chrome 127.
```
diff -rNu a/browser_launcher.py b/browser_launcher.py
--- a/browser_launcher.py 2024-07-25 08:26:34.821841698 +0300
+++ b/browser_launcher.py 2024-07-25 08:39:00.743068259 +0300
@@ -1666,6 +1666,7 @@
chrome_options.add_argument("--disable-prompt-on-repost")
chrome_options.add_argument("--dns-prefetch-disable")
chrome_options.add_argument("--disable-translate")
+ chrome_options.add_argument("--disable-search-engine-choice-screen")
if binary_location:
chrome_options.binary_location = binary_location
if not enable_3d_apis and not is_using_uc(undetectable, browser_name):
```
Thanks.
|
closed
|
2024-07-25T05:45:22Z
|
2024-07-25T15:34:32Z
|
https://github.com/seleniumbase/SeleniumBase/issues/2959
|
[
"enhancement"
] |
adron-s
| 3
|
strawberry-graphql/strawberry
|
django
| 3,121
|
Generic model type says `_type_definition` is deprecated but no `__strawberry_definition__` is set
|
When a strawberry type definition is generic, it reports a deprecation warning about `_type_definition`, but the alternative `__strawberry_definition__` is not set.
## Describe the Bug
Reproduction:
```
from typing import Generic, TypeVar
import strawberry
from strawberry.printer import print_schema
T = TypeVar("T")
@strawberry.type
class GenericType(Generic[T]):
@strawberry.field
def type_field(self) -> T:
...
@strawberry.type
class Query:
@strawberry.field
def int_generic(self) -> GenericType[int]:
...
print(print_schema(strawberry.Schema(query=Query)))
print(GenericType[int]._type_definition)
print(GenericType[int].__strawberry_definition__)
```
This prints:
* The schema (which looks as expected)
* A deprecation warning that says to use `__strawberry_definition__` not `_type_definition`
* StrawberryObjectDefinition object (which is set)
* An AttributeError that `__strawberry_definition__` does not exist.
```
type IntGenericType {
typeField: Int!
}
type Query {
intGeneric: IntGenericType!
}
/Users/james/repos/strawberry-example/.venv/lib/python3.11/site-packages/strawberry/utils/deprecations.py:23: UserWarning: _type_definition is deprecated, use __strawberry_definition__ instead
self.warn()
StrawberryObjectDefinition(name='GenericType', is_input=False, is_interface=False, origin=<class '__main__.GenericType'>, description=None, interfaces=[], extend=False, directives=(), is_type_of=None, resolve_type=None, _fields=[Field(name='type_field',type=<strawberry.type.StrawberryTypeVar object at 0x104362d10>,default=<dataclasses._MISSING_TYPE object at 0x1030875d0>,default_factory=<dataclasses._MISSING_TYPE object at 0x1030875d0>,init=False,repr=False,hash=None,compare=False,metadata=mappingproxy({}),kw_only=True,_field_type=_FIELD)], concrete_of=None, type_var_map={})
Traceback (most recent call last):
File "/Users/james/repos/strawberry-example/models.py", line 26, in <module>
print(GenericType[int].__strawberry_definition__)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/james/.pyenv/versions/3.11.5/lib/python3.11/typing.py", line 1295, in __getattr__
raise AttributeError(attr)
AttributeError: __strawberry_definition__
```
I think the expected behaviour is that it should be possible to print just `__strawberry_definition__`, with no deprecation warning. This works if the type is not generic.
## System Information
- Operating system: macos 14
- Strawberry version (if applicable): 0.209.2
|
open
|
2023-09-27T13:13:17Z
|
2025-03-20T15:56:23Z
|
https://github.com/strawberry-graphql/strawberry/issues/3121
|
[
"bug"
] |
jthorniley
| 1
|
deepset-ai/haystack
|
pytorch
| 9,016
|
Add run_async for `AzureOpenAIDocumentEmbedder`
|
We should be able to reuse the implementation once it is made for the `OpenAIDocumentEmbedder`
|
open
|
2025-03-11T11:07:12Z
|
2025-03-23T07:08:53Z
|
https://github.com/deepset-ai/haystack/issues/9016
|
[
"Contributions wanted!",
"P2"
] |
sjrl
| 0
|
psf/requests
|
python
| 6,242
|
setup.py
|
<!-- Summary. -->
## This bug does not affect usage, but it does exist
The buggy code is in the `setup.py` file, at line 78:
```python
with open(os.path.join(here, "requests", "__version__.py"), "r", "utf-8") as f:
exec(f.read(), about)
```
## It's an issue about function open()
I got this from Python Documentation
```python
open(file, mode='r', buffering=- 1, encoding=None, errors=None, newline=None, closefd=True, opener=None)
```
The argument ```"utf-8"``` should be passed by ```keyword```, but it is actually passed by ```position```,
and the third positional argument is ```buffering``` (an int).
So Python raises a ```TypeError```: 'str' object cannot be interpreted as an integer
## Fix
```python
with open(os.path.join(here, "requests", "__version__.py"), "r", encoding="utf-8") as f:
exec(f.read(), about)
```
## Reproduction Steps
Open any file with this code and you will get the TypeError.
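The failure is easy to reproduce with the builtin `open`, whose third positional parameter is `buffering` (an int), so a positional `"utf-8"` raises:

```python
import os

# Builtin open(): open(file, mode='r', buffering=-1, encoding=None, ...).
# Passing "utf-8" as the third positional argument sends it to `buffering`.
try:
    with open(os.devnull, "r", "utf-8") as f:
        f.read()
    raised = False
except TypeError as exc:
    raised = True
    message = str(exc)
```

The fix shown above (passing `encoding="utf-8"` by keyword) avoids this.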
## System Information
TypeError: 'str' object cannot be interpreted as an integer
```python
here = os.path.abspath(os.path.dirname(__file__))
with open(os.path.join(here, "requests", "__version__.py"), "r", "utf-8") as f:
exec(f.read(), about)
```
<!-- This command is only available on Requests v2.16.4 and greater. Otherwise,
please provide some basic information about your system (Python version,
operating system, &c). -->
|
closed
|
2022-09-22T11:55:13Z
|
2023-10-02T00:03:18Z
|
https://github.com/psf/requests/issues/6242
|
[] |
lucas-ccc
| 1
|
hzwer/ECCV2022-RIFE
|
computer-vision
| 244
|
About the PyTorch version
|
Hello, when I train with PyTorch 1.6 or later, the loss is much larger than when training with PyTorch 1.5. Could this be a problem?
|
closed
|
2022-04-08T08:56:00Z
|
2022-07-10T04:18:37Z
|
https://github.com/hzwer/ECCV2022-RIFE/issues/244
|
[] |
dawei03896
| 2
|
nerfstudio-project/nerfstudio
|
computer-vision
| 2,841
|
In the new viewer, I want to import my data
|
Thank you very much for your work. In the new viewer, I want to import my own data and add some controls, such as sliders.
How can I make the viewer in the browser immediately reflect changes when I modify the Python code? (Right now I have to close the viewer with Ctrl+C and then run `ns-viewer --load-config` again, which is too slow.)
|
open
|
2024-01-28T09:48:36Z
|
2024-01-29T02:55:05Z
|
https://github.com/nerfstudio-project/nerfstudio/issues/2841
|
[] |
smart4654154
| 0
|
recommenders-team/recommenders
|
deep-learning
| 1,861
|
[ASK] Error in NCFDataset creation
|
### Description
Hello all,
I'm trying to use the NCF_deep_dive notebook with my own data,
with the following structure:
<html>
<body>
<!--StartFragment-->
| usr_id | code_id | amt_trx | bestelldatum
-- | -- | -- | -- | --
0 | 0 | 35 | 1 | 2022-03-01
1 | 0 | 2 | 1 | 2022-03-01
2 | 0 | 18 | 1 | 2022-03-01
3 | 0 | 9 | 1 | 2022-03-01
4 | 0 | 0 | 1 | 2022-03-01
<!--EndFragment-->
</body>
</html>
When I try to create the dataset, I get the following error:
```python
data = NCFDataset(train_file=train_file,
                  test_file=leave_one_out_test_file,
                  seed=SEED,
                  overwrite_test_file_full=True,
                  col_user='usr_id',
                  col_item='code_id',
                  col_rating='amt_trx',
                  binary=False)
```
```
---------------------------------------------------------------------------
MissingUserException Traceback (most recent call last)
Cell In [39], line 1
----> 1 data = NCFDataset(train_file=train_file,
2 test_file=leave_one_out_test_file,
3 seed=SEED,
4 overwrite_test_file_full=True,
5 col_user='usr_id',
6 col_item='code_id',
7 col_rating='amt_trx',
8 binary=False)
File /anaconda/envs/recsys/lib/python3.8/site-packages/recommenders/models/ncf/dataset.py:376, in Dataset.__init__(self, train_file, test_file, test_file_full, overwrite_test_file_full, n_neg, n_neg_test, col_user, col_item, col_rating, binary, seed, sample_with_replacement, print_warnings)
374 self.test_file_full = os.path.splitext(self.test_file)[0] + "_full.csv"
375 if self.overwrite_test_file_full or not os.path.isfile(self.test_file_full):
--> 376 self._create_test_file()
377 self.test_full_datafile = DataFile(
378 filename=self.test_file_full,
379 col_user=self.col_user,
(...)
383 binary=self.binary,
384 )
385 # set random seed
File /anaconda/envs/recsys/lib/python3.8/site-packages/recommenders/models/ncf/dataset.py:417, in Dataset._create_test_file(self)
415 if user in train_datafile.users:
416 user_test_data = test_datafile.load_data(user)
--> 417 user_train_data = train_datafile.load_data(user)
418 # for leave-one-out evaluation, exclude items seen in both training and test sets
419 # when sampling negatives
420 user_positive_item_pool = set(
421 user_test_data[self.col_item].unique()
422 ).union(user_train_data[self.col_item].unique())
File /anaconda/envs/recsys/lib/python3.8/site-packages/recommenders/models/ncf/dataset.py:194, in DataFile.load_data(self, key, by_user)
192 while (self.line_num == 0) or (self.row[key_col] != key):
193 if self.end_of_file:
--> 194 raise MissingUserException("User {} not in file {}".format(key, self.filename))
195 next(self)
196 # collect user/test batch data
MissingUserException: User 58422 not in file ./train_new.csv
```
I made some checks:
```
print(train.usr_id.nunique())  # output: 81062
print(test.usr_id.nunique())   # output: 81062
print(leave.usr_id.nunique())  # output: 81062
```
I also checked by hand, and user 58422 is in all the files. The types are also the same; I'm using int64 for usr_id, code_id and amt_trx, like the MovieLens dataset.
I can't understand the error; could you help me please?
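For double-checking that a user id really appears in one of the generated CSV files, a small stdlib sketch (the column names are assumed from the table above):

```python
import csv
import io

def user_in_csv(csv_text, user_col, user_id):
    """Check whether user_id appears in column user_col of CSV text."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return any(row[user_col] == str(user_id) for row in reader)

# Tiny inline sample in place of the real train_new.csv.
sample = "usr_id,code_id,amt_trx\n58422,35,1\n7,2,1\n"
```

Note that the dataset reader compares values as read from the file, so an int/str mismatch or a stale `*_full.csv` left over from a previous run can also trigger the MissingUserException.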
### Update
If I remove the parameter **overwrite_test_file_full**, it creates the dataset, but then I can't make predictions because the dataset object didn't create the user2id mapping.
```
data = NCFDataset(train_file=train_file,
test_file=leave_one_out_test_file,
seed=SEED,
col_user='usr_id',
col_item='code_id',
col_rating='amt_trx',
print_warnings=True)
model = NCF (
n_users=data.n_users,
n_items=data.n_items,
model_type="NeuMF",
n_factors=4,
layer_sizes=[16,8,4],
n_epochs=EPOCHS,
batch_size=BATCH_SIZE,
learning_rate=1e-3,
verbose=99,
seed=SEED
)
predictions = [[row.usr_id, row.code_id, model.predict(row.usr_id, row.code_id)]
for (_, row) in test.iterrows()]
predictions = pd.DataFrame(predictions, columns=['usr_id', 'code_id', 'prediction'])
predictions.head()
```
```
AttributeError Traceback (most recent call last)
Cell In [38], line 1
----> 1 predictions = [[row.usr_id, row.code_id, model.predict(row.usr_id, row.code_id)]
2 for (_, row) in test.iterrows()]
5 predictions = pd.DataFrame(predictions, columns=['usr_id', 'code_id', 'prediction'])
6 predictions.head()
Cell In [38], line 1, in <listcomp>(.0)
----> 1 predictions = [[row.usr_id, row.code_id, model.predict(row.usr_id, row.code_id)]
2 for (_, row) in test.iterrows()]
5 predictions = pd.DataFrame(predictions, columns=['usr_id', 'code_id', 'prediction'])
6 predictions.head()
File /anaconda/envs/recsys/lib/python3.8/site-packages/recommenders/models/ncf/ncf_singlenode.py:434, in NCF.predict(self, user_input, item_input, is_list)
431 return list(output.reshape(-1))
433 else:
--> 434 output = self._predict(np.array([user_input]), np.array([item_input]))
435 return float(output.reshape(-1)[0])
File /anaconda/envs/recsys/lib/python3.8/site-packages/recommenders/models/ncf/ncf_singlenode.py:440, in NCF._predict(self, user_input, item_input)
437 def _predict(self, user_input, item_input):
438
439 # index converting
--> 440 user_input = np.array([self.user2id[x] for x in user_input])
441 item_input = np.array([self.item2id[x] for x in item_input])
443 # get feed dict
File /anaconda/envs/recsys/lib/python3.8/site-packages/recommenders/models/ncf/ncf_singlenode.py:440, in <listcomp>(.0)
437 def _predict(self, user_input, item_input):
438
439 # index converting
--> 440 user_input = np.array([self.user2id[x] for x in user_input])
441 item_input = np.array([self.item2id[x] for x in item_input])
443 # get feed dict
AttributeError: 'NCF' object has no attribute 'user2id'
```
|
open
|
2022-11-28T09:13:17Z
|
2022-11-28T14:20:49Z
|
https://github.com/recommenders-team/recommenders/issues/1861
|
[
"help wanted"
] |
mrcmoresi
| 0
|
graphql-python/graphene-django
|
graphql
| 1,031
|
Extending global filter set overrides
|
**Is your feature request related to a problem? Please describe.**
I am generating `DjangoObjectType` from my models dynamically. Some of those models have custom model fields that I have defined in my project. This throws an error such as:
```python
AssertionError: MyModelSet resolved field 'custom' with 'exact' lookup to an unrecognized field type CustomField. Try adding an override to 'Meta.filter_overrides'. See: https://django-filter.readthedocs.io/en/master/ref/filterset.html#customise-filter-generation-with-filter-overrides
```
**Describe the solution you'd like**
I would like to be able to extend global filterset overrides for those custom fields just like graphene-django does with `GRAPHENE_FILTER_SET_OVERRIDES`. Doing this by manually extending `GrapheneFilterSetMixin.FILTER_DEFAULTS` seems to work for my case.
I think this solution does not cause any backward incompatibility nor introduce any complexity. The solution is pretty easy to understand and introduce to the project code base. I hope this will add more flexibility to the project for future users facing similar problems.
```python
class GrapheneFilterSetMixin(BaseFilterSet):
""" A django_filters.filterset.BaseFilterSet with default filter overrides
to handle global IDs """
FILTER_DEFAULTS = dict(
itertools.chain(
FILTER_FOR_DBFIELD_DEFAULTS.items(),
GRAPHENE_FILTER_SET_OVERRIDES.items(),
getattr(settings, 'EXTRA_FILTER_SET_OVERRIDES', {}).items()
)
)
```
**Describe alternatives you've considered**
An alternative I used before developing the above solution was a quick monkey patch; however, that did not seem to work, and it is not a long-term solution in any case.
|
open
|
2020-08-26T10:46:16Z
|
2020-08-26T10:46:16Z
|
https://github.com/graphql-python/graphene-django/issues/1031
|
[
"✨enhancement"
] |
an0o0nym
| 0
|
influxdata/influxdb-client-python
|
jupyter
| 311
|
Make FluxTable and FluxRecord json serializable
|
It would be really handy for debugging if the objects returned from the query_api were JSON-serializable by default (`FluxTable` and `FluxRecord`). This would avoid the need to do things like `json.dumps(record.__dict__)`.
I think you would simply need to supply a `toJSON` function in those classes.
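Until such a helper exists, one workaround is a `json.dumps` default hook that falls back to `__dict__`. A sketch; the stand-in class below only mimics the shape of a record, and the real `FluxRecord` attributes may differ:

```python
import json

def to_json(obj):
    """Serialize nested objects by falling back to their __dict__."""
    return json.dumps(obj, default=lambda o: o.__dict__, sort_keys=True)

class FakeRecord:  # stand-in for FluxRecord (hypothetical attributes)
    def __init__(self):
        self.values = {"_field": "temp", "_value": 21.5}
```

This handles arbitrarily nested objects without each class needing its own `toJSON`.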
|
closed
|
2021-08-18T02:19:26Z
|
2021-08-31T09:31:55Z
|
https://github.com/influxdata/influxdb-client-python/issues/311
|
[
"enhancement"
] |
russorat
| 2
|
seleniumbase/SeleniumBase
|
pytest
| 2,281
|
Add option for setting `--host-resolver-rules=RULES`
|
## Add option for setting `--host-resolver-rules=RULES`
This is quite powerful. Here's some documentation on what it can be used for:
* https://www.chromium.org/developers/design-documents/network-stack/socks-proxy/
* https://www.electronjs.org/docs/latest/api/command-line-switches
```
A comma-separated list of rules that control how hostnames are mapped.
For example:
MAP * 127.0.0.1 | Forces all hostnames to be mapped to 127.0.0.1
MAP *.google.com proxy | Forces all google.com subdomains to be resolved to "proxy".
MAP test.com [::1]:77 | Forces "test.com" to resolve to IPv6 loopback. Will also force the port of the resulting socket address to be 77.
MAP * baz, EXCLUDE www.google.com | Remaps everything to "baz", except for "www.google.com".
```
In simple terms, this option lets you do powerful things such as:
* Blocking analytics software.
* Blocking advertisements.
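If such an option is added, the flag value could be assembled from individual rules like this (a hypothetical sketch; `host_resolver_flag` is not an existing SeleniumBase API, just an illustration of the flag's comma-separated syntax):

```python
def host_resolver_flag(rules):
    """Join rule strings into a single --host-resolver-rules flag value."""
    return "--host-resolver-rules=" + ", ".join(rules)

flag = host_resolver_flag([
    "MAP ads.example.com 127.0.0.1",  # send a (hypothetical) ad host nowhere
    "EXCLUDE www.google.com",
])
```

The resulting string would then be passed to Chrome via `chrome_options.add_argument(flag)`.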
|
closed
|
2023-11-14T22:27:29Z
|
2024-01-25T16:18:48Z
|
https://github.com/seleniumbase/SeleniumBase/issues/2281
|
[
"enhancement",
"SeleniumBase 4"
] |
mdmintz
| 1
|
keras-team/keras
|
tensorflow
| 20,113
|
Conv2D is no longer supporting Masking in TF v2.17.0
|
Dear Keras team,
Conv2D layer no longer supports Masking layer in TensorFlow v2.17.0. I've already raised this issue with TensorFlow. However, they requested that I raise the issue here.
Due the dimensions of our input (i.e. (timesteps, width, channels)), size of the input shape (i.e. (2048, 2000, 3)) and size of the dataset (i.e. over 1 million samples), it is not practical to use LSTM, GRU, RNN or ConvLSTM1D layers, and therefore, Conv2D layers worked sufficiently well in our applications. The gaps in the dataset was handled with the Masking layer, and the masking layer was compatible with the Conv layers (among other layers, such as Cropping and Padding) from all TF versions up to (and including) TF v2.16. However, in TF v2.17.0, we get the following user warning "Layer 'conv2d' (of type Conv2D) was passed an input with a mask attached to it. However, this layer does not support masking and will therefore destroy the mask information. Downstream layers will not see the mask".
Is this a bug in TF v2.17.0?
Or is this feature now depreciated in TF v2.17.0?
Would you be able to reintroduce this feature in future versions?
Best
Kav
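As a possible workaround while Conv2D ignores masks, masked positions can be zeroed out manually before the convolution. A numpy sketch of the idea only, not the Keras implementation:

```python
import numpy as np

mask_value = -1.0
x = np.array([[0.5, mask_value, 0.2]], dtype=np.float32)

# Build a 0/1 mask from the sentinel value and zero the masked entries
# before the data reaches the conv layer.
mask = (x != mask_value).astype(np.float32)
x_clean = x * mask
```

In Keras this corresponds to multiplying the input by the mask (e.g. in a `Lambda` layer) instead of relying on mask propagation, at the cost of the conv treating masked positions as zeros rather than skipping them.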
**LINK TO THE CODE ON COLAB NOTEBOOK:**
https://colab.research.google.com/drive/102k6UNSKb-d03DcmcUtCxmV9Qz9bjZoD?usp=drive_link
**STANDALONE CODE:**
```python
from tensorflow.keras.layers import Conv2D, Masking, Flatten
from tensorflow.keras import Model, Input

batch = 1
timesteps = 10
width = 10
channels = 2
filters = 4
kernel_size = 3
mask_value = -1

x_input = Input(shape=(timesteps, width, channels))
x_masking = Masking(mask_value)(x_input)
x_conv2d = Conv2D(filters, kernel_size)(x_masking)
x_flatten = Flatten()(x_conv2d)
model = Model(x_input, x_flatten)
model.compile(loss='mse')
```
**RELEVANT LOG OUTPUT**
/usr/local/lib/python3.10/dist-packages/keras/src/layers/layer.py:915: UserWarning: Layer 'conv2d' (of type Conv2D) was passed an input with a mask attached to it. However, this layer does not support masking and will therefore destroy the mask information. Downstream layers will not see the mask.
warnings.warn(
**LINK TO THE ORIGINAL RAISED ISSUE ON TENSORFLOW REPO**
https://github.com/tensorflow/tensorflow/issues/73531
|
closed
|
2024-08-12T12:52:32Z
|
2024-08-15T13:36:52Z
|
https://github.com/keras-team/keras/issues/20113
|
[
"type:support",
"stat:awaiting keras-eng"
] |
kavjayawardana
| 4
|
nltk/nltk
|
nlp
| 2,577
|
Averaged perceptron tagger cannot load data from paths with %-encoded characters in their names
|
If you use a non-default data directory that happens to have something that looks like a URL-encoded character in its name, you can't use `PerceptronTagger`, because both in `__init__.py` (for Russian) and in `perceptron.py`, it does
```python
url = "file:" + str(find(NAME_OF_PICKLE))
tagger.load(url)
```
(You can see this pattern in the `_get_tagger()` function on line 100 of `__init__.py`, as well as in the `__init__()` method of `PerceptronTagger` on line 167.)
The problem is that `find()` returns a path, not a URL fragment. For this code to be valid, it needs to url-encode the result of the `find()` call before prepending "file". As it stands, what will happen is that the `load()` call will eventually call `find()` again, which will url-decode the path even though it wasn't actually url-encoded. And then it will fail to find the file, because it won't be using the correct path name any more.
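A minimal sketch of the fix described above: percent-encode the path before prepending `file:`, so that a later url-decode restores the original name (the path below is hypothetical):

```python
from urllib.parse import quote

def path_to_file_url(path: str) -> str:
    # Percent-encode the path so literal "%" sequences in directory
    # names survive a later url-decode; "/" is left intact.
    return "file:" + quote(path)

# A data directory whose name contains something that looks URL-encoded:
url = path_to_file_url("/data/nltk%41data/averaged_perceptron_tagger.pickle")
# "%41" is encoded to "%2541", so decoding recovers the original path.
```

For absolute paths, `pathlib.Path(path).as_uri()` achieves the same thing.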
|
open
|
2020-07-30T13:17:51Z
|
2020-07-30T13:19:09Z
|
https://github.com/nltk/nltk/issues/2577
|
[] |
al45tair
| 0
|
autogluon/autogluon
|
scikit-learn
| 4,505
|
[tabular] Kaggle Feedback
|
From tilii7's very nice kaggle post: https://www.kaggle.com/competitions/playground-series-s4e9/discussion/536980
1. Investigate adding LAMA's NN model to AutoGluon.
2. Investigate if hill climbing leads to performance improvement in ensembling. Can verify via [TabRepo](https://github.com/autogluon/tabrepo).
Both of these tasks are things that could be implemented by members in the community if they want to give it at try.
- For LAMA's NN: Refer to the [custom model tutorial](https://auto.gluon.ai/stable/tutorials/tabular/advanced/tabular-custom-model.html)
- For hill climbing, refer to the experiment running code in TabRepo, you can replace the ensemble selection algorithm with a hill climbing algorithm and see if it improves the simulation score: https://github.com/autogluon/tabrepo/blob/main/scripts/baseline_comparison/evaluate_baselines.py
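For context on task 2, Caruana-style greedy ensemble selection (the algorithm hill climbing would replace) can be sketched in a few lines; this is a toy illustration, not TabRepo's actual API:

```python
import numpy as np

def greedy_ensemble(preds, y, n_iters=10):
    """Caruana-style ensemble selection: repeatedly add (with
    replacement) the model that most reduces validation MSE."""
    counts = np.zeros(len(preds), dtype=int)
    blend = np.zeros_like(y, dtype=float)
    for _ in range(n_iters):
        errs = [np.mean(((blend + p) / (counts.sum() + 1) - y) ** 2)
                for p in preds]
        best = int(np.argmin(errs))
        counts[best] += 1
        blend += preds[best]
    return counts / counts.sum()

# Toy data: model 0 is noisy, model 1 is close to the target.
y = np.array([1.0, 2.0, 3.0])
preds = [np.array([2.0, 0.0, 5.0]), np.array([1.1, 2.1, 2.9])]
weights = greedy_ensemble(preds, y)
```

A hill-climbing variant would start from these weights and keep any single-step weight move that lowers the validation error.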
|
open
|
2024-10-01T01:16:53Z
|
2024-11-25T22:56:52Z
|
https://github.com/autogluon/autogluon/issues/4505
|
[
"enhancement",
"help wanted",
"discussion",
"module: tabular"
] |
Innixma
| 2
|
dpgaspar/Flask-AppBuilder
|
rest-api
| 1,480
|
How to enabled/disable an api from being viewed.
|
Is it possible to restrict which APIs are visible on the Swagger page? For example, create a brand new app, add `FAB_API_SWAGGER_UI = True` to the config, and go to /swagger/v1, where you can see Menu, OpenApi and Security. How do I hide Menu and OpenApi but keep Security there?
Thanks.
|
closed
|
2020-10-03T01:46:24Z
|
2021-01-10T14:20:54Z
|
https://github.com/dpgaspar/Flask-AppBuilder/issues/1480
|
[
"stale"
] |
memjr
| 1
|
httpie/cli
|
rest-api
| 928
|
Some URLs result in badly formatted httpie output
|
Some URLs result in badly formatted HTTPie output, e.g. `https://doi.org/10.1001/archneur.62.9.1459`

Debug info:
```
http --debug
HTTPie 2.1.0
Requests 2.22.0
Pygments 2.6.1
Python 3.7.4 (default, Jul 9 2019, 18:13:23)
[Clang 10.0.1 (clang-1001.0.46.4)]
/usr/local/opt/python/bin/python3.7
Darwin 18.5.0
<Environment {'colors': 256,
'config': {'__meta__': {'about': 'HTTPie configuration file',
'help': 'https://httpie.org/doc#config',
'httpie': '1.0.2'},
'default_options': []},
'config_dir': PosixPath('/Users/k/.httpie'),
'is_windows': False,
'log_error': <function Environment.log_error at 0x1061415f0>,
'program_name': 'http',
'stderr': <_io.TextIOWrapper name='<stderr>' mode='w' encoding='UTF-8'>,
'stderr_isatty': True,
'stdin': <_io.TextIOWrapper name='<stdin>' mode='r' encoding='UTF-8'>,
'stdin_encoding': 'UTF-8',
'stdin_isatty': True,
'stdout': <_io.TextIOWrapper name='<stdout>' mode='w' encoding='UTF-8'>,
'stdout_encoding': 'UTF-8',
'stdout_isatty': True}>
```
|
closed
|
2020-06-10T09:02:26Z
|
2021-01-29T21:27:11Z
|
https://github.com/httpie/cli/issues/928
|
[] |
kennell
| 2
|
pinry/pinry
|
django
| 233
|
[Feature] Allow board to have a password
|
I know that this is a big one, but I would love to share my private boards with "guests" via a private URL or some kind of password protection, so that they are able to see my pins (and only those pins) but not do anything else.
|
open
|
2020-11-01T07:55:19Z
|
2021-03-25T17:45:59Z
|
https://github.com/pinry/pinry/issues/233
|
[
"enhancement"
] |
Avalarion
| 2
|
dropbox/PyHive
|
sqlalchemy
| 149
|
Should not have nextUri if failed with PrestoDB 0.181
|
We have this error with Superset
```
Should not have nextUri
```
|
closed
|
2017-08-09T12:30:49Z
|
2017-09-13T08:36:44Z
|
https://github.com/dropbox/PyHive/issues/149
|
[] |
damiencarol
| 4
|
sqlalchemy/alembic
|
sqlalchemy
| 938
|
Alembic Primary Key issue with mssql ( Azure Synapse SQL DW )
|
**Describe the bug**
I'm trying to apply migrations to Azure Synapse SQL DW using Alembic. I'm facing the following issue while running `alembic upgrade head`:
```
sqlalchemy.exc.ProgrammingError: (pyodbc.ProgrammingError) ('42000', '[42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Enforced unique constraints are not supported. To create an unenforced unique constraint you must include the NOT ENFORCED syntax as part of your statement. (104467) (SQLExecDirectW)')
[SQL:
CREATE TABLE alembic_version (
version_num VARCHAR(32) NOT NULL,
CONSTRAINT alembic_version_pkc PRIMARY KEY (version_num)
)
]
(Background on this error at: https://sqlalche.me/e/14/f405)
```
**Expected behavior**
Version table created normally and migration successful
**To Reproduce**
my `alembic.ini` configuration for mssql:
```
sqlalchemy.url = mssql+pyodbc://{user}:{password}@{host/server}:1433/{db}?autocommit=True&driver=ODBC+Driver+17+for+SQL+Server
```
**Error**
```
sqlalchemy.exc.ProgrammingError: (pyodbc.ProgrammingError) ('42000', '[42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Enforced unique constraints are not supported. To create an unenforced unique constraint you must include the NOT ENFORCED syntax as part of your statement. (104467) (SQLExecDirectW)')
[SQL:
CREATE TABLE alembic_version (
version_num VARCHAR(32) NOT NULL,
CONSTRAINT alembic_version_pkc PRIMARY KEY (version_num)
)
]
(Background on this error at: https://sqlalche.me/e/14/f405)
```
**Versions.**
- OS: Linux Ubuntu 20.04
- Python: 3.9.7
- Alembic: 1.7.3
- SQLAlchemy: 1.4.25
- Database: Azure Synapse SQL DW
- DBAPI:
**Additional context**
<!-- Add any other context about the problem here. -->
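If the blocker is the enforced PRIMARY KEY on `alembic_version`, Alembic's `EnvironmentContext.configure` accepts a `version_table_pk` flag (documented for dialects such as Redshift that reject enforced primary keys); setting it to `False` creates the version table without the constraint. A sketch of the relevant `env.py` fragment (the surrounding names follow the standard template):

```python
# env.py fragment: create alembic_version without a PRIMARY KEY,
# which Synapse rejects as an enforced constraint.
context.configure(
    connection=connection,
    target_metadata=target_metadata,
    version_table_pk=False,
)
```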
**Have a nice day!**
|
closed
|
2021-09-29T04:36:44Z
|
2021-09-29T15:18:49Z
|
https://github.com/sqlalchemy/alembic/issues/938
|
[
"question"
] |
ashishmgofficial
| 2
|
pytorch/vision
|
machine-learning
| 8,327
|
Model instantiation without loading from disk
|
### 🚀 The feature
So far, the only way to use a model from `torchvision` is through loading a `jit` checkpoint from the disk like so:
```c++
#include <torch/script.h>
#include <iostream>
#include <memory>
int main(int argc, const char* argv[]) {
if (argc != 2) {
std::cerr << "usage: example-app <path-to-exported-script-module>\n";
return -1;
}
// Deserialize the ScriptModule from a file using torch::jit::load().
std::shared_ptr<torch::jit::script::Module> module = torch::jit::load(argv[1]);
assert(module != nullptr);
std::cout << "ok\n";
}
```
The feature that I would like to propose is to purge away the need to have a precompiled jit file and integrate a methodology in the C++ PyTorch frontend that can easily instantiate any `torchvision.models` file as easily as in Python. For example:
```c++
#include <torch/script.h>
#include <iostream>
#include <memory>
int main(int argc, const char* argv[]) {
std::shared_ptr<torch::jit::script::Module> module = torch::jit::load(torchvision::models::resnet50);
assert(module != nullptr);
std::cout << "ok\n";
}
```
### Motivation, pitch
There shouldn't be any dependencies between the Python frontend and the C++ frontend. Specifically, some projects use the C++ PyTorch API exclusively, and in that case developers have to invoke a Python script every time, before using their framework, just to create an instance of the desired model from `torchvision.models`. This is a time-consuming process, particularly if the model changes frequently at runtime.
Specific use case:
I am building a framework that is connected with Torch TensorRT and utilizes the NVIDIA NVDLAs of Jetson boards. However, every time I query my framework for some workload, I have to first use Python and compile a jit instance to later load in my framework. This creates a huge overhead, and since disk operations are among the slowest, it defeats the whole purpose of using C++ to accelerate the process.
|
open
|
2024-03-17T07:45:48Z
|
2024-03-18T20:21:12Z
|
https://github.com/pytorch/vision/issues/8327
|
[] |
AndreasKaratzas
| 2
|
aleju/imgaug
|
machine-learning
| 445
|
error thrown when calling _draw_samples_image ia.do_assert(regain_bottom <= crop_bottom)
|
I'm trying to train a Mask RCNN model using imgaug for augmentations. All of my images are 512x512, and I've updated the default config params to:
IMAGE_MAX_DIM = 512
IMAGE_MIN_DIM = 512
IMAGE_RESIZE_MODE = "none" (I've also tried setting this to "square")
I keep encountering this error after several hours of troubleshooting. From the second-to-last error line, I'm guessing it has something to do with the `regain_bottom <= crop_bottom` assertion, but I can't figure out where this condition is being violated.
Matterport's config.py file says (lines 98-101):
```python
# If enabled, resizes instance masks to a smaller size to reduce
# memory load. Recommended when using high-resolution images.
USE_MINI_MASK = True
MINI_MASK_SHAPE = (56, 56)  # (height, width) of the mini-mask
```
But I think the mask shape is sufficiently large to work on 512x512 images.
Here's the error:
> ERROR:root:Error processing image {'id': '2017050507084918155YBL-XR1IO000017.jpg', 'source': 'dental', 'path': 'D:\\PDS projects\\C137\\082819masks\\annotation_images\\2017050507084918155YBL-XR1IO000017.jpg', 'width': 512, 'height': 512, 'polygons': [{'all_points_x': [307, 321, 326, 319, 314, 305], 'all_points_y': [372, 347, 321, 305, 303, 334], 'name': 'polygon'}], 'r_object_name': ['decay']}
>
> Traceback (most recent call last):
> File "< my anaconda env >lib\site-packages\mask_rcnn-2.1-py3.6.egg\mrcnn\model.py", line 1720, in data_generator
> use_mini_mask=config.USE_MINI_MASK)
> File "< my anaconda env >lib\site-packages\mask_rcnn-2.1-py3.6.egg\mrcnn\model.py", line 1264, in load_image_gt
> hooks=imgaug.HooksImages(activator=hook))
> File "< my anaconda env >lib\site-packages\imgaug\augmenters\meta.py", line 323, in augment_image
> return self.augment_images([image], hooks=hooks)[0]
> File "< my anaconda env >lib\site-packages\imgaug\augmenters\meta.py", line 431, in augment_images
> hooks=hooks
> File "< my anaconda env >lib\site-packages\imgaug\augmenters\meta.py", line 1514, in _augment_images
> hooks=hooks
> File "< my anaconda env >lib\site-packages\imgaug\augmenters\meta.py", line 431, in augment_images
> hooks=hooks
> File "< my anaconda env >lib\site-packages\imgaug\augmenters\size.py", line 611, in _augment_images
> crop_top, crop_right, crop_bottom, crop_left, pad_top, pad_right, pad_bottom, pad_left, pad_mode, pad_cval = self._draw_samples_image(seed, height, width)
> File "< my anaconda env >lib\site-packages\imgaug\augmenters\size.py", line 727, in _draw_samples_image
> ia.do_assert(regain_bottom <= crop_bottom)
> File "< my anaconda env >lib\site-packages\imgaug\imgaug.py", line 678, in do_assert
> raise AssertionError(str(message))
> AssertionError: Assertion failed.
Any help would be appreciated. Please let me know if I'm doing something wrong with imgaug or rather if the problem lies with the Mask_RCNN configuration. Thank you
|
open
|
2019-09-26T23:51:20Z
|
2019-09-28T07:09:54Z
|
https://github.com/aleju/imgaug/issues/445
|
[] |
cpoptic
| 3
|
ploomber/ploomber
|
jupyter
| 263
|
Show a warning when using the CLI and the entry point fails to resolve
|
e.g. `ploomber build --entry-point some.entry.point --help` can be used to see which parameters are available to run the pipeline, but if the entry point is invalid, the CLI does not show any warnings; it just doesn't display any extra parameters
|
closed
|
2020-09-30T15:41:50Z
|
2020-10-01T17:55:17Z
|
https://github.com/ploomber/ploomber/issues/263
|
[] |
edublancas
| 0
|
mckinsey/vizro
|
plotly
| 179
|
ModuleNotFoundError: No module named 'vizro.tables'
|
### Question
my version of Vizro: 0.1.0
Here is my code:

I'd like from vizro.tables import dash_data_table
but the terminal will pop up the error: **ModuleNotFoundError: No module named 'vizro.tables'**
### Code/Examples
_No response_
### Other information
_No response_
### vizro version
_No response_
### Python version
_No response_
### OS
_No response_
### Code of Conduct
- [X] I agree to follow the [Code of Conduct](https://github.com/mckinsey/vizro/blob/main/CODE_OF_CONDUCT.md).
|
closed
|
2023-11-24T10:09:37Z
|
2023-12-04T09:21:21Z
|
https://github.com/mckinsey/vizro/issues/179
|
[
"General Question :question:"
] |
bvbvtw
| 1
|
jupyterhub/repo2docker
|
jupyter
| 600
|
Specifying multiple commands when launching a container from the CLI
|
I was expecting to be able to run several commands as the command repo2docker runs but can't work out how.
Something like `repo2docker https://github.com/binder-examples/conda-freeze conda install numpy pandas && conda env export -n root` is what I'd like to do. If you quote the command like so `repo2docker https://github.com/binder-examples/conda-freeze "conda install numpy pandas && conda env export -n root"` you get a "command not found" error.
The more complicated `repo2docker https://github.com/binder-examples/conda-freeze /bin/sh -c 'conda install -y numpy && conda env export -n root'` does work.
I think this is related to #599 and how we pass the command to dockerd/not using a shell to run the command.
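The quoted form fails because the command is passed to the container exec-style, without a shell: the entire string `conda install ... && ...` is looked up as a single executable name. Wrapping it in an explicit shell invocation lets the shell parse the `&&`, which a minimal example demonstrates:

```shell
# Exec-style invocation treats the whole quoted string as one program
# name, e.g. ["echo first && echo second"] -> "command not found".
# Handing the same string to a shell makes it a command line instead:
/bin/sh -c 'echo first && echo second'
```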
|
open
|
2019-03-02T07:51:45Z
|
2021-05-17T20:39:07Z
|
https://github.com/jupyterhub/repo2docker/issues/600
|
[
"documentation",
"needs: discussion"
] |
betatim
| 2
|
CatchTheTornado/text-extract-api
|
api
| 12
|
Pulling model from cli
|
root@DESKTOP-5T7CRRP:/mnt/c/Users/user/pdf-extract-api# python3 client/cli.py llm_pull --model llama3.1
Failed to pull the model: Internal Server Error
On "server side":
```
fastapi_app | INFO: 172.18.0.1:37452 - "POST /llm_pull HTTP/1.1" 500 Internal Server Error
fastapi_app | ERROR: Exception in ASGI application
fastapi_app | Traceback (most recent call last):
fastapi_app | File "/usr/local/lib/python3.10/site-packages/httpx/_transports/default.py", line 72, in map_httpcore_exceptions
fastapi_app | yield
fastapi_app | File "/usr/local/lib/python3.10/site-packages/httpx/_transports/default.py", line 236, in handle_request
fastapi_app | resp = self._pool.handle_request(req)
fastapi_app | File "/usr/local/lib/python3.10/site-packages/httpcore/_sync/connection_pool.py", line 216, in handle_request
fastapi_app | raise exc from None
fastapi_app | File "/usr/local/lib/python3.10/site-packages/httpcore/_sync/connection_pool.py", line 196, in handle_request
fastapi_app | response = connection.handle_request(
fastapi_app | File "/usr/local/lib/python3.10/site-packages/httpcore/_sync/connection.py", line 99, in handle_request
fastapi_app | raise exc
fastapi_app | File "/usr/local/lib/python3.10/site-packages/httpcore/_sync/connection.py", line 76, in handle_request
fastapi_app | stream = self._connect(request)
fastapi_app | File "/usr/local/lib/python3.10/site-packages/httpcore/_sync/connection.py", line 122, in _connect
fastapi_app | stream = self._network_backend.connect_tcp(**kwargs)
fastapi_app | File "/usr/local/lib/python3.10/site-packages/httpcore/_backends/sync.py", line 205, in connect_tcp
fastapi_app | with map_exceptions(exc_map):
fastapi_app | File "/usr/local/lib/python3.10/contextlib.py", line 153, in __exit__
fastapi_app | self.gen.throw(typ, value, traceback)
fastapi_app | File "/usr/local/lib/python3.10/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
fastapi_app | raise to_exc(exc) from exc
fastapi_app | httpcore.ConnectError: [Errno 111] Connection refused
fastapi_app |
fastapi_app | The above exception was the direct cause of the following exception:
fastapi_app |
fastapi_app | Traceback (most recent call last):
fastapi_app | File "/usr/local/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 401, in run_asgi
fastapi_app | result = await app( # type: ignore[func-returns-value]
fastapi_app | File "/usr/local/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 60, in __call__
fastapi_app | return await self.app(scope, receive, send)
fastapi_app | File "/usr/local/lib/python3.10/site-packages/fastapi/applications.py", line 1054, in __call__
fastapi_app | await super().__call__(scope, receive, send)
fastapi_app | File "/usr/local/lib/python3.10/site-packages/starlette/applications.py", line 113, in __call__
fastapi_app | await self.middleware_stack(scope, receive, send)
fastapi_app | File "/usr/local/lib/python3.10/site-packages/starlette/middleware/errors.py", line 187, in __call__
fastapi_app | raise exc
fastapi_app | File "/usr/local/lib/python3.10/site-packages/starlette/middleware/errors.py", line 165, in __call__
fastapi_app | await self.app(scope, receive, _send)
fastapi_app | File "/usr/local/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
fastapi_app | await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
fastapi_app | File "/usr/local/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
fastapi_app | raise exc
fastapi_app | File "/usr/local/lib/python3.10/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
fastapi_app | await app(scope, receive, sender)
fastapi_app | File "/usr/local/lib/python3.10/site-packages/starlette/routing.py", line 715, in __call__
fastapi_app | await self.middleware_stack(scope, receive, send)
fastapi_app | File "/usr/local/lib/python3.10/site-packages/starlette/routing.py", line 735, in app
fastapi_app | await route.handle(scope, receive, send)
fastapi_app | File "/usr/local/lib/python3.10/site-packages/starlette/routing.py", line 288, in handle
fastapi_app | await self.app(scope, receive, send)
fastapi_app | File "/usr/local/lib/python3.10/site-packages/starlette/routing.py", line 76, in app
fastapi_app | await wrap_app_handling_exceptions(app, request)(scope, receive, send)
fastapi_app | File "/usr/local/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
fastapi_app | raise exc
fastapi_app | File "/usr/local/lib/python3.10/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
fastapi_app | await app(scope, receive, sender)
fastapi_app | File "/usr/local/lib/python3.10/site-packages/starlette/routing.py", line 73, in app
fastapi_app | response = await f(request)
fastapi_app | File "/usr/local/lib/python3.10/site-packages/fastapi/routing.py", line 301, in app
fastapi_app | raw_response = await run_endpoint_function(
fastapi_app | File "/usr/local/lib/python3.10/site-packages/fastapi/routing.py", line 212, in run_endpoint_function
fastapi_app | return await dependant.call(**values)
fastapi_app | File "/app/main.py", line 91, in pull_llama
fastapi_app | response = ollama.pull(request.model)
fastapi_app | File "/usr/local/lib/python3.10/site-packages/ollama/_client.py", line 319, in pull
fastapi_app | return self._request_stream(
fastapi_app | File "/usr/local/lib/python3.10/site-packages/ollama/_client.py", line 99, in _request_stream
fastapi_app | return self._stream(*args, **kwargs) if stream else self._request(*args, **kwargs).json()
fastapi_app | File "/usr/local/lib/python3.10/site-packages/ollama/_client.py", line 70, in _request
fastapi_app | response = self._client.request(method, url, **kwargs)
fastapi_app | File "/usr/local/lib/python3.10/site-packages/httpx/_client.py", line 837, in request
fastapi_app | return self.send(request, auth=auth, follow_redirects=follow_redirects)
fastapi_app | File "/usr/local/lib/python3.10/site-packages/httpx/_client.py", line 926, in send
fastapi_app | response = self._send_handling_auth(
fastapi_app | File "/usr/local/lib/python3.10/site-packages/httpx/_client.py", line 954, in _send_handling_auth
fastapi_app | response = self._send_handling_redirects(
fastapi_app | File "/usr/local/lib/python3.10/site-packages/httpx/_client.py", line 991, in _send_handling_redirects
fastapi_app | response = self._send_single_request(request)
fastapi_app | File "/usr/local/lib/python3.10/site-packages/httpx/_client.py", line 1027, in _send_single_request
fastapi_app | response = transport.handle_request(request)
fastapi_app | File "/usr/local/lib/python3.10/site-packages/httpx/_transports/default.py", line 235, in handle_request
fastapi_app | with map_httpcore_exceptions():
fastapi_app | File "/usr/local/lib/python3.10/contextlib.py", line 153, in __exit__
fastapi_app | self.gen.throw(typ, value, traceback)
fastapi_app | File "/usr/local/lib/python3.10/site-packages/httpx/_transports/default.py", line 89, in map_httpcore_exceptions
fastapi_app | raise mapped_exc(message) from exc
fastapi_app | httpx.ConnectError: [Errno 111] Connection refused
```
|
closed
|
2024-11-01T13:15:33Z
|
2024-11-07T13:29:30Z
|
https://github.com/CatchTheTornado/text-extract-api/issues/12
|
[
"bug"
] |
Marcelas751
| 3
|
fastapi-users/fastapi-users
|
fastapi
| 295
|
Websocket support?
|
Hey, is there a way to have websocket support with this, or is there a function we can call inside a route to get the current user? I'm trying to use it in a websocket handler to see if there's a user returned by `get_active_user`, but it gives me an exception every time:
```
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/home/user/dev/Yacht-work/Yacht/backend/venv/lib/python3.8/site-packages/uvicorn/protocols/websockets/websockets_impl.py", line 154, in run_asgi
result = await self.app(self.scope, self.asgi_receive, self.asgi_send)
File "/home/user/dev/Yacht-work/Yacht/backend/venv/lib/python3.8/site-packages/uvicorn/middleware/proxy_headers.py", line 45, in __call__
return await self.app(scope, receive, send)
File "/home/user/dev/Yacht-work/Yacht/backend/venv/lib/python3.8/site-packages/fastapi/applications.py", line 180, in __call__
await super().__call__(scope, receive, send)
File "/home/user/dev/Yacht-work/Yacht/backend/venv/lib/python3.8/site-packages/starlette/applications.py", line 111, in __call__
await self.middleware_stack(scope, receive, send)
File "/home/user/dev/Yacht-work/Yacht/backend/venv/lib/python3.8/site-packages/starlette/middleware/errors.py", line 146, in __call__
await self.app(scope, receive, send)
File "/home/user/dev/Yacht-work/Yacht/backend/venv/lib/python3.8/site-packages/starlette/exceptions.py", line 58, in __call__
await self.app(scope, receive, send)
File "/home/user/dev/Yacht-work/Yacht/backend/venv/lib/python3.8/site-packages/starlette/routing.py", line 566, in __call__
await route.handle(scope, receive, send)
File "/home/user/dev/Yacht-work/Yacht/backend/venv/lib/python3.8/site-packages/starlette/routing.py", line 283, in handle
await self.app(scope, receive, send)
File "/home/user/dev/Yacht-work/Yacht/backend/venv/lib/python3.8/site-packages/starlette/routing.py", line 57, in app
await func(session)
File "/home/user/dev/Yacht-work/Yacht/backend/venv/lib/python3.8/site-packages/fastapi/routing.py", line 242, in app
await dependant.call(**values)
File "./backend/api/routers/apps.py", line 57, in ws
user = await get_active_user()
File "<makefun-gen-4>", line 2, in get_current_active_user
File "/home/user/dev/Yacht-work/Yacht/backend/venv/lib/python3.8/site-packages/fastapi_users/authentication/__init__.py", line 96, in get_current_active_user
raise self._get_credentials_exception()
fastapi.exceptions.HTTPException
```
|
closed
|
2020-08-12T22:42:49Z
|
2023-08-21T00:37:47Z
|
https://github.com/fastapi-users/fastapi-users/issues/295
|
[
"question"
] |
SelfhostedPro
| 10
|
pyjanitor-devs/pyjanitor
|
pandas
| 1,225
|
`summarize`
|
# Brief Description
<!-- Please provide a brief description of what you'd like to propose. -->
I would like to propose a `summarize` function, similar to dplyr's `summarise` function and pandas' `agg` function, but for grouping operations, and more flexible
# Example API
```python
df.summarize(y='sum',n=lambda df: df.nth(1), by='x')
# summarize on multiple columns
df.summarize((['a','b','c'], 'sum'), by = 'x')
# replicate dplyr's across
# https://stackoverflow.com/q/63200530/7175713
# select_columns syntax can fit in nicely here
mtcars.summarize(("*t", "mean"), ("*p", "sum"), by='cyl')
```
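For comparison, the first two proposed calls map onto pandas' existing named-aggregation syntax (a rough equivalence sketch, using a toy frame):

```python
import pandas as pd

df = pd.DataFrame({"x": ["a", "a", "b"], "y": [1, 2, 3], "z": [4, 5, 6]})

# df.summarize(y='sum', by='x') would roughly correspond to:
out = df.groupby("x").agg(y=("y", "sum")).reset_index()

# df.summarize((['y', 'z'], 'sum'), by='x') -- one aggregation
# applied across several columns:
multi = df.groupby("x")[["y", "z"]].sum().reset_index()
```

The proposal's extra flexibility is mainly the `select_columns`-style globbing (`"*t"` etc.), which plain `agg` does not cover.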
|
closed
|
2022-12-18T10:37:03Z
|
2025-03-02T22:27:38Z
|
https://github.com/pyjanitor-devs/pyjanitor/issues/1225
|
[] |
samukweku
| 0
|
autogluon/autogluon
|
scikit-learn
| 4,362
|
Time Series Predictor: Changing the random seed does not change results for all models
|
Hi,
I am trying to affect the results of certain models by changing the random_seed parameter in the Time Series Predictor module. The models I used and the effect are in the table below.
The models that had different results were:
PatchTST, Simplefeedforward, TemporalFusionTransformer, DirectTabular and AutoETS.
The models that had identical results despite changing the seed were:
DeepAR, RecursiveTabular, Dlinear
**From the documentation**
random_seed : int or None, default = 123
If provided, fixes the seed of the random number generator for all models. This guarantees reproducible
results for most models (except those trained on GPU because of the non-determinism of GPU operations).
**Implementation:**
```python
predictor = TimeSeriesPredictor(
    prediction_length=12,
    path=model_dir,
    target="target",
    eval_metric="MAPE",
    quantile_levels=[0.01, 0.05, 0.1, 0.25, 0.75, 0.9, 0.95, 0.99],
)

model_params = {}

predictor.fit(
    train_data,
    hyperparameters={model_type: model_params},
    time_limit=3600,
    random_seed=500,
)
```
Would there be a way to make the results of the three models (DeepAR, RecursiveTabular, Dlinear) change with different random seeds? Thank you!
|
open
|
2024-08-01T16:00:08Z
|
2024-11-26T10:17:56Z
|
https://github.com/autogluon/autogluon/issues/4362
|
[
"enhancement",
"module: timeseries"
] |
ZohrahV
| 0
|
ultrafunkamsterdam/undetected-chromedriver
|
automation
| 1,819
|
Detected on fingerprint.com
|

fingerprint.com is detecting UC driver:
https://fingerprint.com/products/bot-detection/
|
open
|
2024-04-05T09:35:13Z
|
2024-10-21T04:16:07Z
|
https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1819
|
[] |
BersXx
| 13
|
KevinMusgrave/pytorch-metric-learning
|
computer-vision
| 42
|
Make ProxyAnchorLoss extend WeightRegularizerMixin
|
Should be straightforward.
|
closed
|
2020-04-11T09:05:48Z
|
2020-04-11T19:53:27Z
|
https://github.com/KevinMusgrave/pytorch-metric-learning/issues/42
|
[
"enhancement"
] |
KevinMusgrave
| 1
|
stitchfix/hamilton
|
numpy
| 137
|
Metadata emission
|
**Is your feature request related to a problem? Please describe.**
Hamilton encodes a lot of metadata that lives in code. It also creates some at execution time. There are projects such as https://datahubproject.io/, https://openlineage.io/ that capture this metadata across a wide array of tooling to create a central view in a heterogeneous environment. Hamilton should be able to emit metadata/execution information to them.
**Describe the solution you'd like**
A user should be able to specify whether their Hamilton DAG should emit metadata.
This should play nicely with graph adapters, e.g. spark, ray, dask.
UX questions:
1. Should this be something in the graph adapter universe? E.g. a mixin?
2. Or should this be on the driver side, so you change drivers for functionality, but change graph adapters for scale...
# TODO:
- [ ] find motivating use case to develop for
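UX option 1 (a graph-adapter mixin) could look roughly like this; all names here are hypothetical, not Hamilton's actual API:

```python
class MetadataEmitterMixin:
    """Hypothetical mixin that records an event per executed node."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.events = []

    def emit(self, node_name: str, payload: dict) -> None:
        # A real implementation would POST to DataHub/OpenLineage here.
        self.events.append({"node": node_name, **payload})


class BaseAdapter:
    def execute_node(self, node_name: str, value):
        return value


class EmittingAdapter(MetadataEmitterMixin, BaseAdapter):
    def execute_node(self, node_name: str, value):
        result = super().execute_node(node_name, value)
        self.emit(node_name, {"result_type": type(result).__name__})
        return result


adapter = EmittingAdapter()
adapter.execute_node("load_data", 42)
```

The driver-side alternative would instead wrap a whole execution and emit once, trading per-node granularity for a simpler integration surface.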
|
closed
|
2022-06-21T21:57:26Z
|
2023-02-26T17:09:14Z
|
https://github.com/stitchfix/hamilton/issues/137
|
[
"enhancement",
"product idea"
] |
skrawcz
| 3
|
widgetti/solara
|
jupyter
| 685
|
Typo in type hint of `solara.components.tooltip.Tooltip`
|
There is a typo in the definition of the `Tooltip` component:
https://github.com/widgetti/solara/blob/714e41d8950b38bd14435ae4356d9d841c4a278f/solara/components/tooltip.py#L10C5-L10C40
Currently, it's like this:
```python
@solara.component
def Tooltip(
tooltip=Union[str, solara.Element],
children=[],
color: Optional[str] = None,
):
```
but should be
```python
@solara.component
def Tooltip(
tooltip: Union[str, solara.Element],
children=[],
color: Optional[str] = None,
):
```
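The practical difference is easy to demonstrate outside solara (using `int` as a stand-in for `solara.Element`): with `=`, the typing object becomes the parameter's default value rather than its annotation:

```python
from typing import Optional, Union

def tooltip_with_typo(tooltip=Union[str, int], color: Optional[str] = None):
    # "tooltip" has a *default value* of typing.Union[str, int];
    # it is silently optional and unannotated.
    return tooltip

def tooltip_fixed(tooltip: Union[str, int], color: Optional[str] = None):
    # "tooltip" is a required, properly annotated argument.
    return tooltip

default = tooltip_with_typo()  # the typing object, not a tooltip string
```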
|
closed
|
2024-06-14T11:40:01Z
|
2024-06-28T11:09:05Z
|
https://github.com/widgetti/solara/issues/685
|
[] |
MG-MW
| 1
|
ultrafunkamsterdam/undetected-chromedriver
|
automation
| 849
|
Still blocked by nike.com
|
When using UC with Chrome version 86 or 106 to automate nike.com, it blocks and redirects to an error page.
Any suggestions on how to fix this problem?
|
open
|
2022-10-24T13:18:34Z
|
2022-11-16T13:40:11Z
|
https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/849
|
[] |
terrylao
| 4
|
apache/airflow
|
automation
| 47,682
|
Fix attrs 25.2.0 compatibility
|
### Body
attrs just released 25.2.0, which was broken Airflow with exceptions like:
```
self.dag = DAG("test_dag_id", default_args=args)
E TypeError: __init__() got an unexpected keyword argument 'default_args'
```
https://github.com/python-attrs/attrs/releases/tag/25.2.0
Might be this change: https://github.com/python-attrs/attrs/releases/tag/25.2.0
Upstream issue: https://github.com/python-attrs/attrs/issues/1416
### Committer
- [x] I acknowledge that I am a maintainer/committer of the Apache Airflow project.
|
closed
|
2025-03-12T16:03:36Z
|
2025-03-13T18:12:54Z
|
https://github.com/apache/airflow/issues/47682
|
[
"kind:bug",
"kind:meta"
] |
jedcunningham
| 0
|
xuebinqin/U-2-Net
|
computer-vision
| 333
|
A new webapp that generate ID photos for free based on U2NET
|
Thank you for publishing this DL model. It really enables many applications. I personally made one that helps people generate ID photos easily.
https://freeidphoto.com/
|
open
|
2022-09-19T03:25:54Z
|
2022-09-19T03:25:54Z
|
https://github.com/xuebinqin/U-2-Net/issues/333
|
[] |
hckuo2
| 0
|
jpjacobpadilla/Stealth-Requests
|
web-scraping
| 3
|
Can't use requests.Session()
|
AttributeError: module 'stealth_requests' has no attribute 'Session'
|
closed
|
2025-01-23T18:29:13Z
|
2025-01-23T18:48:58Z
|
https://github.com/jpjacobpadilla/Stealth-Requests/issues/3
|
[] |
reillychase
| 2
|
uriyyo/fastapi-pagination
|
fastapi
| 590
|
Different response models
|
Hi. I was wondering if there was support to dynamically return either of two response models similar to how FastAPI allows. I checked the docs and issues and couldn't find it documented. I tried using Union shown by the example below but am given: `RuntimeError: Use params or add_pagination`.
```python
@router.get("/all")
async def get_all(user: User) -> Union[Page[AdminResponseModel], Page[UserResponseModel]]:
if user.role == "admin":
return AdminResponseModel
else:
return UserResponseModel
```
|
closed
|
2023-04-02T21:11:07Z
|
2023-04-09T09:47:46Z
|
https://github.com/uriyyo/fastapi-pagination/issues/590
|
[
"question"
] |
visuxls
| 2
|
PaddlePaddle/models
|
computer-vision
| 5,042
|
module 'paddle' has no attribute 'enable_static'
|
Traceback (most recent call last):
File "train.py", line 252, in <module>
paddle.enable_static()
AttributeError: module 'paddle' has no attribute 'enable_static'
|
closed
|
2020-12-14T03:27:52Z
|
2020-12-22T07:41:49Z
|
https://github.com/PaddlePaddle/models/issues/5042
|
[] |
JonyJiang123
| 1
|
vitalik/django-ninja
|
rest-api
| 973
|
[BUG] When running the server using django-ninja's router, there is an issue of multiple objects with different types being received.
|
**Describe the bug**
A clear and concise description of what the bug is.
**Versions (please complete the following information):**
- Python version: 3.11
- Django version: 4.2.7
- Django-Ninja version: 1.0.1
- Pydantic version: 2.5.2
Hello, developers!
While setting up a new project with django-ninja, the following error occurred. Please check whether this is a bug.
The bug appears at line 385, inside the 'add_router' method.
self._routers.extend(router.build_routers(prefix))
router.set_api_instance(self, parent_router)
The old version of django-ninja - 0.22.2, it seems to work without any issues.

But the newest version 1.0.1,


The "Router" class object and the "NinjaAPI" class object seem to be mixed up, and when checked with the getattr function, the NinjaAPI does not have the "build_routers" method. There appears to be a bug or fault in the code.
Was it originally intended for Python objects of different types to cross over like this?
|
closed
|
2023-12-01T13:04:53Z
|
2023-12-02T01:34:56Z
|
https://github.com/vitalik/django-ninja/issues/973
|
[] |
Gibartes
| 2
|
marshmallow-code/flask-smorest
|
rest-api
| 27
|
Cannot create a second flask application
|
Hi,
I believe that due to recent changes around the Blueprint.doc decorator it's not possible to have 2 flask applications created at the same time. See below a minimal example. Tests should pass with flask-rest-api 0.10 but the test `test_app` fails with 0.11. Is this the expected behavior or it's a bug?
```
### app.py ###
from flask import Flask
from flask.views import MethodView
from flask_rest_api import Api, Blueprint
rest_api = Api()
_ = Blueprint('Test', __name__, url_prefix='/api/test')
@_.route('/')
class TestView(MethodView):
@_.response()
def get(self):
return [1,2,3]
def create_app():
app = Flask(__name__)
rest_api.init_app(app)
rest_api.register_blueprint(_)
return app
```
```
### test_app.py ###
from app import create_app
def test_app():
app = create_app()
# This will fail with flask-rest-api >= 0.11
app2 = create_app()
```
Here's the output from `pytest`
```
$ pytest
============================================================================ test session starts ============================================================================
platform darwin -- Python 3.7.0, pytest-4.0.0, py-1.7.0, pluggy-0.8.0
rootdir: /Users/sancho/Development/test, inifile:
collected 1 item
test_app.py F [100%]
================================================================================= FAILURES ==================================================================================
_________________________________________________________________________________ test_app __________________________________________________________________________________
def test_app():
app = create_app()
> app2 = create_app()
test_app.py:5:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
app.py:21: in create_app
rest_api.register_blueprint(_)
../flask-rest-api/flask_rest_api/__init__.py:81: in register_blueprint
blp.register_views_in_doc(self._app, self.spec)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <flask_rest_api.blueprint.Blueprint object at 0x103a3d470>, app = <Flask 'app'>, spec = <flask_rest_api.spec.APISpec object at 0x10433a400>
def register_views_in_doc(self, app, spec):
"""Register views information in documentation
If a schema in a parameter or a response appears in the spec
`definitions` section, it is replaced by a reference to its definition
in the parameter or response documentation:
"schema":{"$ref": "#/definitions/MySchema"}
"""
# This method uses the documentation information associated with the
# endpoint (in self._docs) to provide documentation for the route to
# the spec object.
for endpoint, doc in self._docs.items():
# doc is a dict of documentation per method for the endpoint
# {'get': documentation, 'post': documentation,...}
# Prepend Blueprint name to endpoint
endpoint = '.'.join((self.name, endpoint))
# Format operations documentation in OpenAPI structure
# Tag all operations with Blueprint name
# Merge manual doc
for key, (auto_doc, manual_doc) in doc.items():
self._prepare_doc(auto_doc, spec.openapi_version)
> auto_doc['tags'] = [self.name]
E TypeError: 'str' object does not support item assignment
../flask-rest-api/flask_rest_api/blueprint.py:161: TypeError
```
|
closed
|
2018-11-19T13:59:09Z
|
2019-06-11T10:10:42Z
|
https://github.com/marshmallow-code/flask-smorest/issues/27
|
[
"enhancement"
] |
svidela
| 8
|
s3rius/FastAPI-template
|
graphql
| 64
|
Question: what is the endpoint link when running from python -m ?
|
Hi, I ran a newly generated template with the dummy model, router, and self-hosted API, but somehow the endpoints (including /health) return 404. I did not change the router settings.
What is the url link?
`http://127.0.0.1:8000/health`
$ python -m main
INFO: Started server process [4505]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO: 127.0.0.1:60642 - "GET / HTTP/1.1" 404 Not Found
INFO: 127.0.0.1:60642 - "GET / HTTP/1.1" 404 Not Found
INFO: 127.0.0.1:60644 - "GET /health HTTP/1.1" 404 Not Found
INFO: 127.0.0.1:60644 - "GET /dummy HTTP/1.1" 404 Not Found
INFO: 127.0.0.1:60646 - "GET /<project>/dummy HTTP/1.1" 404 Not Found
INFO: 127.0.0.1:60648 - "GET /docs HTTP/1.1" 404 Not Found
INFO: 127.0.0.1:60656 - "GET /health HTTP/1.1" 404 Not Found
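One common cause (an assumption here — check the generated router settings): the template mounts its routers under an API prefix, so the bare paths 404. A tiny sketch of the prefix joining:

```python
def mount(prefix: str, routes: list) -> list:
    # Join a router prefix with each route path, e.g. "/api" + "/health".
    return [prefix.rstrip("/") + route for route in routes]
```

If the prefix were, say, `/api`, then `/health` would live at `/api/health` and the docs at `/api/docs`.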
|
closed
|
2022-01-27T20:43:06Z
|
2022-01-27T20:45:58Z
|
https://github.com/s3rius/FastAPI-template/issues/64
|
[] |
am1ru1
| 1
|
predict-idlab/plotly-resampler
|
plotly
| 43
|
On click callback in notebooks.
|
Is it possible to have the plotly on_click() listener work in a Jupyter notebook? If so, could you extend the sample notebook to demonstrate this?
|
closed
|
2022-04-15T09:15:20Z
|
2022-05-06T10:34:27Z
|
https://github.com/predict-idlab/plotly-resampler/issues/43
|
[] |
mcourteaux
| 6
|
microsoft/nni
|
deep-learning
| 5,002
|
Error running quantization_speedup.py
|
**Describe the issue**:
When running quantization_speedup.py in the tutorial file (I did not change anything), I got an error as below.
```
[2022-07-20 01:41:46] Model state_dict saved to ./log/mnist_model.pth
[2022-07-20 01:41:46] Mask dict saved to ./log/mnist_calibration.pth
[07/20/2022-01:41:49] [TRT] [W] DynamicRange(min: -0.424213, max: 2.82149). Dynamic range should be symmetric for better accuracy.
Traceback (most recent call last):
File "quantization_speedup.py", line 114, in <module>
engine.compress()
File "/opt/conda/lib/python3.7/site-packages/nni/compression/pytorch/quantization_speedup/integrated_tensorrt.py", line 298, in compress
context = self._tensorrt_build_withoutcalib(self.onnx_path)
File "/opt/conda/lib/python3.7/site-packages/nni/compression/pytorch/quantization_speedup/integrated_tensorrt.py", line 348, in _tensorrt_build_withoutcalib
engine = build_engine(onnx_path, self.onnx_config, self.extra_layer_bits, self.strict_datatype)
File "/opt/conda/lib/python3.7/site-packages/nni/compression/pytorch/quantization_speedup/integrated_tensorrt.py", line 198, in build_engine
handle_gemm(network, i, config)
File "/opt/conda/lib/python3.7/site-packages/nni/compression/pytorch/quantization_speedup/integrated_tensorrt.py", line 82, in handle_gemm
pre_in_tensor.dynamic_range = (tracked_min_input, tracked_max_input)
AttributeError: 'NoneType' object has no attribute 'dynamic_range'
```
**Environment**:
- NNI version: 2.8
- Training service (local|remote|pai|aml|etc): local
- Client OS: Ubuntu20.04
- Server OS (for remote mode only):
- Python version: 3.7.10
- PyTorch/TensorFlow version: 1.9.0
- Is conda/virtualenv/venv used?: no
- Is running in Docker?: yes
- GPU: 3090
- cuda: 11.1
- Nvidia-tensorrt: 8.4.1.5
**Configuration**:
- Experiment config (remember to remove secrets!):
- Search space:
**Log message**:
- nnimanager.log:
- dispatcher.log:
- nnictl stdout and stderr:
<!--
Where can you find the log files:
LOG: https://github.com/microsoft/nni/blob/master/docs/en_US/Tutorial/HowToDebug.md#experiment-root-director
STDOUT/STDERR: https://nni.readthedocs.io/en/stable/reference/nnictl.html#nnictl-log-stdout
-->
**How to reproduce it?**:
|
open
|
2022-07-20T02:00:08Z
|
2022-08-15T08:56:52Z
|
https://github.com/microsoft/nni/issues/5002
|
[
"model compression",
"support"
] |
Raychen0617
| 2
|
pallets-eco/flask-wtf
|
flask
| 76
|
SESSION_COOKIE_SECURE = True Causing CSRF to always fail
|
Whenever SESSION_COOKIE_SECURE is set to True, CSRF always fails.
Default setting for SESSION_COOKIE_SECURE is False.
|
closed
|
2013-07-18T00:47:44Z
|
2020-10-26T00:28:40Z
|
https://github.com/pallets-eco/flask-wtf/issues/76
|
[] |
owenmead
| 4
|
zwczou/weixin-python
|
flask
| 29
|
Please add support for WeChat Mini Program login
|
closed
|
2018-06-21T06:55:06Z
|
2019-02-25T02:49:39Z
|
https://github.com/zwczou/weixin-python/issues/29
|
[] |
wangying11
| 1
|
waditu/tushare
|
pandas
| 1,099
|
MySQL read error
|
When downloading data and importing it into the database, tables are named with pure digits; querying the data then raises MySQL error 1064.
|
closed
|
2019-07-22T21:38:51Z
|
2019-07-23T03:09:50Z
|
https://github.com/waditu/tushare/issues/1099
|
[] |
PegmanHuang
| 1
|
LAION-AI/Open-Assistant
|
machine-learning
| 2,957
|
Running OA_SFT_Pythia_12B local inference with GPU support
|
Hi, I'm trying to run on a cloud VM the inference setup using the steps on the `inference` folder.
I've been able to run `distilgpt2` fine; now I want to try the `OA_SFT_Pythia_12B` model.
Any guidance on how to modify the `docker-compose.yaml` (inference-worker) to use the GPU on my machine?
|
closed
|
2023-04-28T12:22:48Z
|
2023-04-28T20:52:44Z
|
https://github.com/LAION-AI/Open-Assistant/issues/2957
|
[
"question"
] |
velascoluis
| 1
|
autogluon/autogluon
|
scikit-learn
| 4,839
|
[timeseries] Expose all MLForecast configuration options in DirectTabular and RecursiveTabular models
|
[timeseries module]
In nixtla mlforecast one can specify not just the lags (1, 2, 3, etc.) but also lag transforms such as min, max, mean, rolling, etc.
https://nixtlaverse.nixtla.io/mlforecast/docs/how-to-guides/lag_transforms_guide.html
The idea would be when specifying hyperparameters one could pass in lag_transforms as well.
Mock Example:
```
lag_transforms={
1: [ExpandingStd()],
7: [RollingMean(window_size=7, min_samples=1), RollingMean(window_size=14)]}
hyperparameters = {"DirectTabular": {"lags":[1, 2, 3],
"lag_transforms":lag_transforms
}}
predictor = TimeSeriesPredictor(
prediction_length=6,
path="test",
target="y",
eval_metric="MAE"
)
predictor.fit(train_data,
hyperparameters=hyperparameters
time_limit=300)
```
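For intuition, the rolling mean that `RollingMean(window_size=...)` computes can be sketched in plain Python (partial windows at the start are averaged too, mirroring `min_samples=1`; this is an illustration, not MLForecast's implementation):

```python
def rolling_mean(series, window):
    # Average over the last `window` values, allowing partial windows
    # at the start (mirroring min_samples=1).
    out = []
    for i in range(len(series)):
        chunk = series[max(0, i - window + 1):i + 1]
        out.append(sum(chunk) / len(chunk))
    return out
```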
|
open
|
2025-01-24T21:11:04Z
|
2025-01-28T12:21:07Z
|
https://github.com/autogluon/autogluon/issues/4839
|
[
"enhancement",
"module: timeseries"
] |
breadwall
| 0
|
aimhubio/aim
|
data-visualization
| 2,555
|
AimLogger does not support the new PyTorch Lightning module name
|
## 🐛 Bug
There is a hardcoded check in aim where it checks the package name for pytorch-lightning.
However, they are migrating to a new name 'lightning'. Hence the check fails. See the warning message on PL website.
Location in aim-code where the check fails: `aim/sdk/adapters/pytorch_lightning.py", line 23`

### To reproduce
Create an AimLogger instance when running with pytorch-lightning, installed with `pip install lightning` (instead of `pip install pytorch-lightning`.
### Expected behavior
It doesn't throw an error
### Environment
- Aim Version latest
### Additional context
<!-- Add any other context about the problem here. -->
|
open
|
2023-03-01T21:38:27Z
|
2023-03-03T11:19:09Z
|
https://github.com/aimhubio/aim/issues/2555
|
[
"type / bug",
"help wanted",
"phase / exploring",
"area / integrations"
] |
vanhumbeecka
| 1
|
onnx/onnx
|
tensorflow
| 6,191
|
Importing `onnx==1.16.1` causes a segmentation fault on MacOS 11 (Big Sur)
|
# Bug Report
### Is the issue related to model conversion?
<!-- If the ONNX checker reports issues with this model then this is most probably related to the converter used to convert the original framework model to ONNX. Please create this bug in the appropriate converter's GitHub repo (pytorch, tensorflow-onnx, sklearn-onnx, keras-onnx, onnxmltools) to get the best help. -->
No.
### Describe the bug
<!-- Please describe the bug clearly and concisely -->
I receive a segmentation fault when importing onnx.

### System information
<!--
- GCC/Compiler version (if compiling from source): N/A
- CMake version: N/A
- Protobuf version: N/A
- Visual Studio version (if applicable):-->
- OS Platform and Distribution: macOS Big Sur 11.7.10
- ONNX version: 1.16.1
- Python version: 3.9.19
### Reproduction instructions
<!--
- Describe the code to reproduce the behavior.
```
...
```
- Attach the ONNX model to the issue (where applicable)-->
Spin up a macOS 11 VM. Install `onnx==1.16.1`. Import `onnx`.
```
import onnx
```
### Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
No segfault.
### Notes
<!-- Any additional information -->
- Downgrading to 1.16.0 fixes the issue.
- I was able to reproduce this in a fresh VM, but we first encountered this issue in the wild with [one of our users](https://github.com/ivadomed/canproco/pull/95#issuecomment-2161600569).
|
open
|
2024-06-18T19:32:08Z
|
2024-07-18T13:17:53Z
|
https://github.com/onnx/onnx/issues/6191
|
[
"bug"
] |
joshuacwnewton
| 7
|
CorentinJ/Real-Time-Voice-Cloning
|
tensorflow
| 543
|
Lack of pre-compiled results in lost interest
|
So I know the first thing people are going to say is, this isn't an issue. However, it is. By not having a precompiled version to download, over half the people that find their way to this GitHub are going to lose interest. Honestly, I'm one of them. I attempted to compile it, but then I saw that I had to track down each module for this, which quickly drove me away. All I wanted to do was mess around and see what it can do. Even if the results aren't mind-blowing, the concept interests me. But due to not having a ready-to-use executable, I, like many others I'm sure, have decided it isn't even worth messing with.
|
closed
|
2020-10-04T20:44:15Z
|
2020-10-09T06:08:30Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/543
|
[] |
drats666
| 8
|
graphdeco-inria/gaussian-splatting
|
computer-vision
| 917
|
Question about frustum culling options
|
First and foremost, I would like to thank you for sharing your code and research with the community. It has been incredibly helpful and insightful for my project.
I have a question regarding a specific part of your code in auxiliary.h that I was hoping you could clarify.
in the following line:
`if (p_view.z <= 0.2f)// || ((p_proj.x < -1.3 || p_proj.x > 1.3 || p_proj.y < -1.3 || p_proj.y > 1.3)))`
I noticed that frustum culling for x and y has been commented out. I am curious to understand the reasoning behind excluding x and y frustum culling via comments.
Using the HyperNeRF dataset for testing, I observed that there is not a significant difference in accuracy when the x and y frustum culling is enabled versus when it is disabled. However, the number of Gaussians differs notably, with many Gaussians being skipped during the frustum culling stage when the x and y checks are enabled.
Could you please provide some insights into why the x and y frustum culling was commented out?
Thank you once again for your valuable contribution to the field and for your time in addressing my query. Your work has been a significant asset in advancing my research.
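For reference, the condition from auxiliary.h can be restated in plain Python; with `check_xy=False` it matches the shipped code (near-plane test only), and `check_xy=True` restores the commented-out x/y bounds:

```python
def is_culled(p_view_z, p_proj_x, p_proj_y, check_xy=False):
    # Near-plane test: points closer than z = 0.2 are always culled.
    if p_view_z <= 0.2:
        return True
    if check_xy:
        # The commented-out screen-bounds test, with a 1.3 guard band.
        return (p_proj_x < -1.3 or p_proj_x > 1.3 or
                p_proj_y < -1.3 or p_proj_y > 1.3)
    return False
```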
|
open
|
2024-08-01T06:23:47Z
|
2024-08-01T06:23:47Z
|
https://github.com/graphdeco-inria/gaussian-splatting/issues/917
|
[] |
TheKyu27
| 0
|
mars-project/mars
|
pandas
| 2,342
|
Tileable Progress can be illustrated on Task Detail DAG
|
**Is your feature request related to a problem? Please describe.**
On the Task Detail DAG, we can paint the color of each tileable based on its progress. This will help us to oversee the status of the task.
Right now, all the tileables on the DAG are filled with a yellow color:

but if we paint the color based on the percentage finished, the graph will be more informative. A tileable that has been executed can have a filled color, while a tileable that is 70% executed can have 70% of it filled with yellow and the other 30% remain white. This gives us a more straightforward understanding of the progress of the task overall.
**Describe the solution you'd like**
We can create another API endpoint that takes a task id and returns a list/dict that contains all tileables in the DAG and their percentage finished as a float between 0 and 1, where 1 means this tileable has been executed and 0 means this tileable has not been executed, and a float in between means how much of this tileable has been executed.
A sample JSON returned may look like this:
```
[
  { "tileable_id_1": 1 },
  { "tileable_id_2": 0.7 },
  ...
]
```
Then on the frontend, we can change the color for tileables based on the corresponded float value.
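The frontend could also derive a single overall number from the proposed endpoint; a minimal sketch of that aggregation:

```python
def overall_progress(progress_by_tileable):
    # Average the per-tileable fractions (each between 0 and 1).
    if not progress_by_tileable:
        return 0.0
    return sum(progress_by_tileable.values()) / len(progress_by_tileable)
```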
|
closed
|
2021-08-16T07:51:10Z
|
2021-08-23T10:10:29Z
|
https://github.com/mars-project/mars/issues/2342
|
[
"type: feature",
"mod: web"
] |
RandomY-2
| 1
|
dynaconf/dynaconf
|
flask
| 462
|
Regression detected [was] Case insensitive access of structures inside lists
|
**Describe the bug**
When I access a Box that is stored in a BoxList the access becomes case sensitive. I know about DynaBox, but for some reason the list access returns a vanilla Box and not a DynaBox.
Background: I need to parse more or less complex data from the config (routing stuff) and enrich the data structures with defaults after parsing. Therefore I want to change the settings from within code.
If something like this is out of scope for Dynaconf, could someone recommend an alternative approach? Maybe only store user provided routing settings and all the other general simple configs like logging level in Dynaconf and manage the routing config elsewhere?
**To Reproduce**
Steps to reproduce the behavior:
1. Run the following code placed in `tmp.py` with pytest `pytest tmp.py`:
```python
import pytest
from dynaconf.vendor.box import BoxList, Box, BoxKeyError
from dynaconf.utils.boxing import DynaBox
def test_accessing_dynabox_inside_boxlist_inside_dynabox():
data = DynaBox({"nested": [{"deeper": "nest"}]})
assert data.nested[0].deeper == "nest"
assert data.NESTED[0].deeper == "nest"
with pytest.raises(BoxKeyError):
assert data.NESTED[0].DEEPER == "nest"
data = DynaBox({"nested": [DynaBox({"deeper": "nest"})]})
assert data.nested[0].deeper == "nest"
assert data.NESTED[0].deeper == "nest"
with pytest.raises(BoxKeyError):
assert data.NESTED[0].DEEPER == "nest"
```
Even though I am passing in a DynaBox it gets changed to a Box
Dynaconf 3.1.2
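A minimal sketch of the case-insensitive lookup DynaBox layers on top of Box, assuming keys are stored lowercase (a toy stand-in, not DynaBox itself); the report above is that this wrapper gets dropped when items come back out of a BoxList:

```python
class CIBox(dict):
    # Toy stand-in for DynaBox: lowercase string keys on lookup.
    def __getitem__(self, key):
        if isinstance(key, str):
            key = key.lower()
        return super().__getitem__(key)

    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError as exc:
            raise AttributeError(name) from exc
```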
|
closed
|
2020-10-25T19:31:35Z
|
2021-03-08T18:50:18Z
|
https://github.com/dynaconf/dynaconf/issues/462
|
[
"bug",
"enhancement"
] |
trallnag
| 9
|
miguelgrinberg/python-socketio
|
asyncio
| 612
|
Callbacks lost when reconnecting
|
I am calling the `call` function of the SocketIO client, and the response from the `call` will take a long time to return (about 1 hour). As such, I am passing `timeout=None` to force the connection to stay open and wait this long. However, if the client loses contact during that timeframe and successfully reconnects, the callback is lost and the following warning shows up:
> Unknown callback received, ignoring.
Looking through the code for client.py, it appears that when `_handle_eio_disconnect` is called, it clears the `callbacks` dictionary. However, when the reconnect is established, the contents of the dictionary are never rebuilt.
https://github.com/miguelgrinberg/python-socketio/blob/eabcc4679bc283acdb9f87022ef1e0e82c48497e/socketio/client.py#L626-L639
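One client-side workaround idea (a sketch, not python-socketio API): keep your own registry of pending callbacks and restore it on reconnect. Here `internal_callbacks` stands in for the library's dict that gets cleared on disconnect.

```python
pending = {}             # our own copy, survives the disconnect
internal_callbacks = {}  # stand-in for the dict the client clears

def register(call_id, callback):
    # Track the callback in both places when the call is issued.
    pending[call_id] = callback
    internal_callbacks[call_id] = callback

def on_reconnect():
    # Restore everything the disconnect handler wiped.
    internal_callbacks.update(pending)
```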
|
closed
|
2021-01-13T16:18:59Z
|
2021-01-13T17:37:14Z
|
https://github.com/miguelgrinberg/python-socketio/issues/612
|
[
"question"
] |
jsexauer
| 2
|
vitalik/django-ninja
|
rest-api
| 1,258
|
Is there a way to use swagger and redoc together?
|
I'm currently using Redoc. I'd like to serve Swagger on another endpoint as well. Please let me know if you know how.
```python
from ninja import NinjaAPI, Redoc
from django.contrib.admin.views.decorators import staff_member_required
base_api = NinjaAPI(
title="My API",
version="0.0.1",
docs=Redoc(settings={"disableSearch": True}),
docs_decorator=staff_member_required,
docs_url="/docs/",
)
```
|
closed
|
2024-08-08T17:57:21Z
|
2024-08-08T18:00:56Z
|
https://github.com/vitalik/django-ninja/issues/1258
|
[] |
updaun
| 1
|
vaexio/vaex
|
data-science
| 1,467
|
Memory Error while trying to export data in an infinite while loop [BUG-REPORT]
|
**Description**
Hello everyone,
I was running an infinite loop that fetched data from SQL. I converted this SQL data to a pandas dataframe and finally to a vaex dataframe.
After that, I added a computed column and merged another dataframe into the same dataframe.
Everything ran fine in the infinite loop; even with memory_profiler, everything looked fine.
But then I added the command `df.export_parquet("data.parquet", chunk_size=100000)`, and it always ended in a memory error in the same while loop.
I tried `df.export_hdf5("data.hdf5", chunk_size=100000)` instead, but it resulted in the same error.
Can anyone help?
I'm running the function below in an infinite while loop.
My PC has 16 GB RAM.
If I don't export the data to hdf5 or parquet at the end, the loop runs fine without any memory
error. But when I try to export at the end of the function, I receive a memory error.
I used memory_profiler and found that if the overall memory consumption before exporting is around 1500 MiB it's fine, but at around 2000 MiB or more it shows a Memory Error.
``` python
import os
import vaex
import pandas as pd
import numpy as np
def createFile():
# Consider df2 as the backup of my sql data
if os.path.isfile('./testpdfile.parquet.gzip'):
df2 = pd.read_parquet(r'testpdfile.parquet.gzip')
else:
df2 = pd.DataFrame(np.random.randint(1, 10, size=(4500000, 45)), columns=list("ABCDEFGHIJKLMNOPQRSTUVWXYZ!@#$"
"%^&*()123456789"))
#I don't know sql connection here to show in example with required number of columns in the data and the data
#I'm using can't be share here
# After reading the df2 file i got to know how many rows I'm having right now and I'll fetch data from sql
#starting after the last row of my back up file. Let's create a dummy data for it.
df4 = pd.DataFrame(np.random.randint(1, 10, size=(2, 45)), columns=list("ABCDEFGHIJKLMNOPQRSTUVWXYZ!@#$"
"%^&*()123456789"))
df2 = pd.concat([df2,df4])
df2.sort_values(by='A',inplace=True)
#I have to save this file again to create a backup for again fetching data from sql in the same way after
# every minute
df2.to_parquet('testpdfile.parquet.gzip', compression = 'gzip')
df2 = vaex.from_pandas(df2)
df2['calculatedcolumn'] = df2.A + df2.B
df2['calculatedcolumn2'] = df2.A + df2.B / df2.D
df2['calculatedcolumn3'] = df2.A + df2.R / df2.D
# df1 and df3 are other vaex dataframes defined elsewhere
df2.join(df1, on='A', how="left", inplace=True)
df2.join(df3, on='B', how="left", inplace=True)
df2.export_hdf5('testDataFile.hdf5',chunk_size=100000)
```
Thanks
Balkar Singh
**Software information**
- Vaex version (`import vaex; vaex.__version__)`:
- {'vaex': '4.3.0',
- 'vaex-core': '4.3.0.post1',
- 'vaex-viz': '0.5.0',
- 'vaex-hdf5': '0.8.0',
- 'vaex-server': '0.5.0',
- 'vaex-astro': '0.8.2',
- 'vaex-jupyter': '0.6.0',
- 'vaex-ml': '0.12.0'}
- Vaex was installed via: pip / conda-forge / from source
- pip
- OS:
- Windows 10
**Additional information**
I'm using vaex in PyCharm instead of in Jupyter Notebook, as ultimately i wanna use it with my Dash Application for doing quick calculations for my Dashboard.
|
closed
|
2021-07-15T18:18:31Z
|
2022-08-07T19:42:40Z
|
https://github.com/vaexio/vaex/issues/1467
|
[] |
bsbalkar
| 8
|
babysor/MockingBird
|
deep-learning
| 977
|
On Windows 11, running `python demo_toolbox.py` produces no output and no error
|

After entering `python demo_toolbox.py`, nothing happens and no error is reported.
|
open
|
2023-12-14T13:10:41Z
|
2024-01-11T06:28:52Z
|
https://github.com/babysor/MockingBird/issues/977
|
[] |
gogoner
| 1
|
MolSSI/cookiecutter-cms
|
pytest
| 140
|
Revisit using src/package_name directory instead of package_name directory
|
So around a month ago I read Hynek Schlawack's nice article about [testing and packaging](https://hynek.me/articles/testing-packaging/), which also directed me to [another article](https://blog.ionelmc.ro/2014/05/25/python-packaging/#the-structure) by Ionel MC. The main point of those articles is why it is important to move the source code directory below a src directory instead of keeping it in the project root. They argue that this is better for testing and packaging.
Back then I wanted to share this thought here but was reluctant, as I didn't completely get their point (now I pretty much do). Then, just recently, I ran into a testing problem with my project. It took me many painful hours to figure out what was wrong; basically I was testing the package import using [monkeypatching](https://docs.pytest.org/en/6.2.x/monkeypatch.html), and no matter how I manipulated sys.path and the environment, the package was already imported before the test ran. It took me many hours to understand that pytest traverses the `__init__.py` files above its path and executes them before running the actual test. Then I read the articles by Hynek and Ionel again, and now I can appreciate their approach more.
Then I started restructure my project, move source code to src/project_name, move tests to main directory, update CI.yaml, MANIFEST.in, and setup.cfg. And the problem above is gone.
Before submitting this issue I checked whether there was a similar one, and it turns out @jaimergp had already mentioned it in #78, and I have to say that I disagree with this statement
> Ionel seems to steer more into the general web framework domain which is quite different from our current practices. (@dgasmith)
I believe that this apply to our current practice as well, and I think one of my favorite argument from Hynek is
> Your tests do not run against the package as it will be installed by its users. They run against whatever the situation in your project directory is.
I hope that we can revisit this idea again, to avoid many potential problems in package testing.
|
open
|
2021-09-13T07:05:57Z
|
2021-09-18T16:23:50Z
|
https://github.com/MolSSI/cookiecutter-cms/issues/140
|
[] |
radifar
| 6
|
apify/crawlee-python
|
automation
| 801
|
Is it possible to pass in a custom transport?
|
I'm trying to add response caching via [hishel transports](https://hishel.com/userguide/#using-the-transports), but am not seeing a way to customize the transport used by the Crawlee client as it is [created internally in _get_client()](https://github.com/apify/crawlee-python/blob/13bb4002c75ee906db3539404fa73fff825e83e4/src/crawlee/http_clients/_httpx.py#L213-L237):
```python
def _get_client(self, proxy_url: str | None) -> httpx.AsyncClient:
"""Helper to get a HTTP client for the given proxy URL.
If the client for the given proxy URL doesn't exist, it will be created and stored.
"""
if proxy_url not in self._client_by_proxy_url:
# Prepare a default kwargs for the new client.
kwargs: dict[str, Any] = {
'transport': _HttpxTransport(
proxy=proxy_url,
http1=self._http1,
http2=self._http2,
),
'proxy': proxy_url,
'http1': self._http1,
'http2': self._http2,
}
# Update the default kwargs with any additional user-provided kwargs.
kwargs.update(self._async_client_kwargs)
client = httpx.AsyncClient(**kwargs)
self._client_by_proxy_url[proxy_url] = client
return self._client_by_proxy_url[proxy_url]
```
Is there a way to customize the httpx client transport that I'm not seeing?
Or instead of using a 3rd party library, does Crawlee have a native method for storing long term persistent caches of responses?
Somewhat related question, if its not possible to customize the transport. Is overriding `HttpxHttpClient._get_client()` the recommended way to use a custom httpx client, or is there a cleaner way?
```python
hishel_client = await _create_hishel_client(cache_path)
class HishelCacheClient(HttpxHttpClient):
def _get_client(self, proxy_url: str | None) -> httpx.AsyncClient:
return hishel_client
http_client = HishelCacheClient()
crawler = BeautifulSoupCrawler(
http_client=http_client,
)
```
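Independent of httpx transports, the caching idea itself is simple to sketch framework-free; `fetch` below stands in for whatever call actually performs the request:

```python
def make_cached_fetch(fetch):
    # Memoize responses by URL so repeat requests skip the network.
    cache = {}

    def cached_fetch(url):
        if url not in cache:
            cache[url] = fetch(url)
        return cache[url]

    return cached_fetch
```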
|
closed
|
2024-12-10T13:46:24Z
|
2024-12-12T09:51:30Z
|
https://github.com/apify/crawlee-python/issues/801
|
[
"enhancement",
"t-tooling"
] |
tleyden
| 4
|
svc-develop-team/so-vits-svc
|
deep-learning
| 324
|
[Bug]:
|
### OS platform and version
windows 11
### GPU model
4090
### Python version
3.10
### PyTorch version
n/a
### sovits branch
4.0 (default)
### Dataset source (used to judge dataset quality)
Personal voice recordings
### Step where the problem occurred or command executed
Inference
### Description
After training for 2400 steps, the generated model errors out during preprocessing.
Exception: Given groups=1, weight of size [192, 768, 5], expected input[1, 256, 773] to have 768 channels, but got 256 channels instead
Please troubleshoot and retry
### Log
```python
Traceback (most recent call last):
File "app.py", line 150, in vc_fn
_audio = model.slice_inference(temp_path, sid, vc_transform, slice_db, cluster_ratio, auto_f0, noise_scale,pad_seconds,cl_num,lg_num,lgr_num,F0_mean_pooling,enhancer_adaptive_key,cr_threshold)
File "/root/so-vits-svc4/inference/infer_tool.py", line 285, in slice_inference
out_audio, out_sr = self.infer(spk, tran, raw_path,
File "/root/so-vits-svc4/inference/infer_tool.py", line 210, in infer
audio = self.net_g_ms.infer(c, f0=f0, g=sid, uv=uv, predict_f0=auto_predict_f0, noice_scale=noice_scale)[0,0].data.float()
File "/root/so-vits-svc4/models.py", line 409, in infer
x = self.pre(c) * x_mask + self.emb_uv(uv.long()).transpose(1,2)
File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 313, in forward
return self._conv_forward(input, self.weight, self.bias)
File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 309, in _conv_forward
return F.conv1d(input, weight, bias, self.stride,
RuntimeError: Given groups=1, weight of size [192, 768, 5], expected input[1, 256, 773] to have 768 channels, but got 256 channels instead
```
### Additional notes
_No response_
|
closed
|
2023-07-20T12:54:18Z
|
2023-08-01T09:12:29Z
|
https://github.com/svc-develop-team/so-vits-svc/issues/324
|
[
"bug?"
] |
729533572
| 2
|
python-gitlab/python-gitlab
|
api
| 2,918
|
.play() on job returns 500 even though job is started successfully
|
## Description of the problem, including code/CLI snippet
I'm starting a job via the API and getting a 400 error even though I can see the job getting kicked off and running in the UI. Because of this I have no way of knowing if the error is reliable.
```
job = project.jobs.get(job.id, lazy=True)
job.play()
```

## Expected Behavior
`200 OK - Job played`
Job runs and completes successfully
## Actual Behavior
`400 Bad request - Unplayable Job`
Job runs and completes successfully
## Specifications
- python-gitlab version: 4.7.0
- API version you are using (v3/v4): v4
- Gitlab server version (or gitlab.com): gitlab.com
|
closed
|
2024-07-09T19:36:47Z
|
2024-07-11T08:45:10Z
|
https://github.com/python-gitlab/python-gitlab/issues/2918
|
[
"need info"
] |
wardbox
| 6
|
lepture/authlib
|
django
| 289
|
Ability to configure the authorization and refresh grant type for uncompliant clients [AzureDevops]
|
**Is your feature request related to a problem? Please describe.**
When using authlib with flask to create an [azure devops](https://docs.microsoft.com/en-us/rest/api/azure/devops/?view=azure-devops-rest-6.1) client, we are missing the ability to pass custom arguments to `authlib.oauth2.rfc6749.parameters.prepare_token_request`, both on authorize and on refresh.
Microsoft Azure DevOps requires a custom grant_type different than `authorize_code` and some custom fields (assertion, client_assertion_type, and client_assertion) and we found no way to configure these in authlib
**Describe the solution you'd like**
Be able to configure these different fields to make it work with azure devops. Either as some kwargs dictionary, using a compliance fix, or any other method that doesn't pollute the code
**Describe alternatives you've considered**
We wrote some code that imitates what authlib does until it reaches `authlib/oauth2/client.py _prepare_authorization_code_body`. Then we pass our custom arguments to `prepare_token_request`. This works currently but it depends on the internal structure of the library. And as soon as this changes we will have to update it
**Additional context**
### Our fix for `authorize_access_token`
```python
def _authorize_azure_devops_access_token():
# This was copied from Authlib==0.12.1 and modified to be able to send azure devops's custom fields:
# custom grant_type, assertion, client_assertion and client_assertion_type
# -> authlib/flask/client/oauth.py authorize_access_token
params = _generate_oauth2_access_token_params(oauth.azure_devops.name)
cb_key = "_{}_authlib_callback_".format(oauth.azure_devops.name)
redirect_uri = session.pop(cb_key, None)
# -> authlib/client/oauth_client.py fetch_access_token
token_endpoint = oauth.azure_devops.access_token_url
with oauth.azure_devops._get_session() as azure_devops_oauth_session:
# -> authlib/oauth2/client.py fetch_token
InsecureTransportError.check(token_endpoint)
# -> authlib/oauth2/client.py _prepare_authorization_code_body
#
# ATTENTION: Here is the altered code; we include custom fields and change the grant_type
#
body = prepare_token_request(
grant_type="urn:ietf:params:oauth:grant-type:jwt-bearer",
assertion=params["code"],
state=params.get("state") or azure_devops_oauth_session.state,
redirect_uri=redirect_uri,
client_assertion=oauth.azure_devops.client_secret,
client_assertion_type="urn:ietf:params:oauth:client-assertion-type:jwt-bearer",
)
# -> authlib/oauth2/client.py fetch_token
return azure_devops_oauth_session._fetch_token(
token_endpoint,
body=body,
auth=azure_devops_oauth_session.client_auth,
method="POST",
headers={"Accept": "application/json", "Content-Type": "application/x-www-form-urlencoded;charset=UTF-8"},
)
```
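For reference, the non-standard token request body that Azure DevOps expects can be built with the standard library alone. This is only a sketch mirroring the fields passed to `prepare_token_request` above; the function name and parameters are illustrative:

```python
from urllib.parse import urlencode

def azure_devops_token_body(code: str, redirect_uri: str, client_secret: str) -> str:
    """Build the JWT-bearer token request body Azure DevOps expects,
    mirroring the custom grant_type and assertion fields shown above."""
    return urlencode({
        "grant_type": "urn:ietf:params:oauth:grant-type:jwt-bearer",
        "assertion": code,
        "redirect_uri": redirect_uri,
        "client_assertion": client_secret,
        "client_assertion_type": "urn:ietf:params:oauth:client-assertion-type:jwt-bearer",
    })

body = azure_devops_token_body("auth-code", "https://app.example/callback", "secret-jwt")
```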
|
closed
|
2020-11-03T13:36:43Z
|
2020-11-03T19:34:07Z
|
https://github.com/lepture/authlib/issues/289
|
[] |
angelsenra
| 1
|
huggingface/transformers
|
nlp
| 36,812
|
Not able to trace GPT2DoubleHeadsModel
|
### System Info
Hi, I'm trying to create trace of GPT2DoubleHeadsModel model but I'm facing issue. Here is my code
```
from transformers.utils import fx
from transformers import *
gpt2_config = GPT2Config()
model = GPT2DoubleHeadsModel(gpt2_config)
input_names = model.dummy_inputs.keys()
trace = fx.symbolic_trace(model, input_names)
```
I'm getting below error
```
File "~/venv/lib/python3.12/site-packages/torch/fx/proxy.py", line 327, in __iter__
    raise TraceError('Proxy object cannot be iterated. This can be '
torch.fx.proxy.TraceError: Proxy object cannot be iterated. This can be attempted when the Proxy is used in a loop or as a *args or **kwargs function argument. See the torch.fx docs on pytorch.org for a more detailed explanation of what types of control flow can be traced, and check out the Proxy docstring for help troubleshooting Proxy iteration errors
```
Any help is appreciated, thanks!
### Who can help?
@ArthurZucker @michaelbenayoun
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers.utils import fx
from transformers import *
gpt2_config = GPT2Config()
model = GPT2DoubleHeadsModel(gpt2_config)
input_names = model.dummy_inputs.keys()
trace = fx.symbolic_trace(model, input_names)
```
### Expected behavior
Expecting it to work without error
|
open
|
2025-03-19T06:24:36Z
|
2025-03-19T15:46:52Z
|
https://github.com/huggingface/transformers/issues/36812
|
[
"bug"
] |
levindabhi
| 1
|
serengil/deepface
|
machine-learning
| 963
|
DeepId throws exception in
|
To reproduce the issue, run the command
```shell
cd tests
python visual_tests.py
```
|
closed
|
2024-01-21T13:53:46Z
|
2024-01-21T18:26:21Z
|
https://github.com/serengil/deepface/issues/963
|
[
"bug"
] |
serengil
| 1
|
yeongpin/cursor-free-vip
|
automation
| 23
|
requirements.txt missing
|
This project is missing a requirements.txt file for installing its dependencies.
|
closed
|
2025-01-14T07:22:22Z
|
2025-01-14T08:05:08Z
|
https://github.com/yeongpin/cursor-free-vip/issues/23
|
[] |
muhammedfurkan
| 0
|
allure-framework/allure-python
|
pytest
| 94
|
Teardown function's logs not captured
|
If I have a xunit style setup and teardown methods at function level ,
and if I run my test script without the -s option,
only stdout setup and stdout call are captured. Why is stdout teardown not captured?
<img width="1399" alt="screenshot at jul 10 13-21-13" src="https://user-images.githubusercontent.com/1056793/28039645-f0eefabe-6577-11e7-8bc3-984587178809.png">
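A minimal xunit-style module reproducing the layout in the screenshot (hypothetical test file; the prints correspond to the three capture sections):

```python
# test_capture.py -- xunit-style function-level setup/teardown
def setup_function(function):
    print("in setup")       # shows up under "stdout setup"

def test_something():
    print("in call")        # shows up under "stdout call"

def teardown_function(function):
    print("in teardown")    # expected under "stdout teardown", but missing
```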
#### Please tell us about your environment:
- Test framework: pytest@3.0.7
- Allure adaptor: allure-pytest@1.7.6
|
closed
|
2017-07-10T20:59:13Z
|
2017-07-13T00:41:23Z
|
https://github.com/allure-framework/allure-python/issues/94
|
[] |
shreyashah
| 5
|
pyeve/eve
|
flask
| 821
|
JSON API I/O
|
I'm working with an [API Consumer](http://emberjs.com/api/data/classes/DS.JSONAPISerializer.html) which expects API output to conform to the [JSON API](http://jsonapi.org) specification. While the HATEOAS output from Eve is close, it's not quite compatible. This can be worked around either on the server (using `on_fetched_item` or `on_fetched_resource` to inject fields to the response), or on the consumer-side by providing a mapping of Eve-to-JSON API. However, this seems like too much work being done by either side.
I'm suggesting a new feature which would allow you to configure an API for JSON API-compatible I/O. The proposed feature would work like this:
1. A new configuration keyword `JSON_API` that, when true, enables JSON API requests.
2. When `JSON_API == True`, the following config values are set as defaults:
- ID_FIELD = "id"
- LINKS = "links"
- ITEMS = "data"
- META = "meta"
- ISSUES = "errors"
- STATUS = "status"
3. When an incoming request has the header `Content-Type: application/vnd.api+json`, this activates JSON API handling iff `JSON_API == True`.
4. The following changes to API response objects are made when JSON API handling is activated.
- An item's field data are placed in an `attributes` map inside the resource object, with the exception of "id", which stays at the top level of the item object.
- A new field, "type", at the top level of the item object is added to the response payload. This is set to the resource name.
- The `links` section of output will be made JSON API-compatible. At the moment this just looks like wrapping the "title" key in a "meta" object.
- Any embeddable data relation will be returned in a [relationships](http://jsonapi.org/format/#document-resource-object-relationships) object.
- Endpoints will accept the [include](http://jsonapi.org/format/#fetching-includes) query string key, similar to how embedded works now. However, included objects will be returned in an `included` array at the top level of the response object, instead of being embedded in the items.
- Endpoints will accept the [fields](http://jsonapi.org/format/#fetching-sparse-fieldsets) query string key, similar to how projection works now.
5. For [CRUD](http://jsonapi.org/format/#crud) operations, a similar format to the above will be accepted.
6. Compatible [error responses](http://jsonapi.org/format/#errors).
This doesn't cover the JSON API spec entirely, but I think it's a good start toward supporting the minimal requirements.
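As a rough illustration of point 4, a pure reshaping function (usable from an `on_fetched_item` hook) might look like this — a sketch under assumed Eve field names, not a proposed implementation:

```python
def to_jsonapi_resource(resource_name, item, top_level=("_id", "_links")):
    """Reshape a fetched Eve item into a JSON API resource object:
    'id' and 'links' stay at the top level, everything else moves
    into an 'attributes' map, and 'type' is set to the resource name."""
    resource = {
        "type": resource_name,
        "id": str(item.get("_id")),
        "attributes": {k: v for k, v in item.items() if k not in top_level},
    }
    if "_links" in item:
        resource["links"] = item["_links"]
    return resource

# Sketch of wiring: register a hook that rewrites the payload in place, e.g.
#   app.on_fetched_item += my_hook   # my_hook calls to_jsonapi_resource
```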
|
closed
|
2016-02-09T11:45:07Z
|
2018-05-18T16:19:41Z
|
https://github.com/pyeve/eve/issues/821
|
[
"stale"
] |
anthony-arnold
| 10
|
falconry/falcon
|
api
| 2,364
|
Multipart form parser should not require CRLF after the closing `--`
|
It seems that appending `CRLF` after the closing `--` is not strictly required by [RFC 2046](https://www.rfc-editor.org/rfc/rfc2046) if the client does not include any trailing epilogue, although it is a common convention that nearly all clients follow.
However, as witnessed by my colleague, the Node [Undici](https://undici.nodejs.org/) client, a rather new kid on the block, opts not to append it.
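Concretely, both of the bodies below should be accepted — per RFC 2046 the epilogue (and hence the CRLF after the closing `--`) is optional. The boundary and field name are illustrative:

```python
BOUNDARY = b"xyz"

def multipart_body(trailing_crlf: bool) -> bytes:
    """Build a one-part multipart/form-data body; the closing delimiter
    '--xyz--' may or may not be followed by CRLF."""
    body = (
        b"--" + BOUNDARY + b"\r\n"
        b'Content-Disposition: form-data; name="f"\r\n\r\n'
        b"hello\r\n"
        b"--" + BOUNDARY + b"--"
    )
    return body + b"\r\n" if trailing_crlf else body
```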
|
closed
|
2024-10-10T12:07:26Z
|
2024-10-10T18:58:59Z
|
https://github.com/falconry/falcon/issues/2364
|
[
"bug"
] |
vytas7
| 3
|
flairNLP/fundus
|
web-scraping
| 316
|
[Feature Request]: Easy way to test URLSource
|
### Problem statement
When adding a news source, the first important step is to identify a sitemap/newsmap/RSS feed and point the corresponding `URLSource` instance to it.
The tutorial uses this example:
```python
NewsMap("https://www.latimes.com/news-sitemap.xml")
```
As a contributor, I would like to get a feeling for whether I have pointed the correct object to the correct URL.
For instance, if someone adds a parser for Le Monde, it would be great if there were a printout that lists a few URLs accessed through this source, just to see if this part is set up correctly.
### Solution
I am thinking either a general printout like this that lists some urls:
```python
from fundus import NewsMap
# init URLSource
news_map = NewsMap("https://www.lemonde.fr/sitemap_news.xml")
# printout lists a few URLs found through this source
print(news_map)
```
Or a function like this, where one can request X URLs:
```python
# printout lists a few URLs found through this source
print(news_map.get_urls(number_of_urls=10))
```
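Until such an API exists, a stdlib-only sketch of "peek at the first few URLs of a news sitemap" could look like this (it parses a sitemap document passed in as a string; fetching is left out, and `peek_urls` is a hypothetical helper, not part of Fundus):

```python
import xml.etree.ElementTree as ET

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def peek_urls(sitemap_xml: str, number_of_urls: int = 10) -> list:
    """Return the first few <loc> entries of a sitemap document."""
    root = ET.fromstring(sitemap_xml)
    locs = [el.text for el in root.iterfind("sm:url/sm:loc", NS)]
    return locs[:number_of_urls]

sample = """<?xml version="1.0"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://www.lemonde.fr/a</loc></url>
  <url><loc>https://www.lemonde.fr/b</loc></url>
</urlset>"""
```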
### Additional Context
_No response_
|
closed
|
2023-09-01T11:09:26Z
|
2023-09-11T17:31:20Z
|
https://github.com/flairNLP/fundus/issues/316
|
[
"feature"
] |
alanakbik
| 1
|
coqui-ai/TTS
|
python
| 2,877
|
[Bug] Cant install on MacOS Intel
|
### Describe the bug
Cant install on MacOS Intel
### To Reproduce
` INFO: clang: numpy/core/src/multiarray/flagsobject.c
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
numpy/core/src/multiarray/einsum.c.src:408:32: warning: unknown warning group '-Wmaybe-uninitialized', ignored [-Wunknown-warning-option]
#pragma GCC diagnostic ignored "-Wmaybe-uninitialized"
^
1 warning generated.
1 warning generated.
INFO: clang: numpy/core/src/multiarray/item_selection.c
1 warning generated.
INFO: clang: numpy/core/src/multiarray/dlpack.c
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
INFO: clang: numpy/core/src/multiarray/ctors.c
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
1 warning generated.
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
INFO: clang: build/src.macosx-10.9-universal2-3.11/numpy/core/src/multiarray/einsum_sumprod.c
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
INFO: clang: numpy/core/src/multiarray/getset.c
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
1 warning generated.
1 warning generated.
INFO: clang: numpy/core/src/multiarray/datetime_busday.c
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
INFO: clang: build/src.macosx-10.9-universal2-3.11/numpy/core/src/multiarray/lowlevel_strided_loops.c
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
1 warning generated.
1 warning generated.
INFO: clang: numpy/core/src/multiarray/hashdescr.c
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
1 warning generated.
1 warning generated.
INFO: clang: numpy/core/src/multiarray/multiarraymodule.c
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
1 warning generated.
INFO: clang: numpy/core/src/multiarray/nditer_constr.c
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
INFO: clang: numpy/core/src/multiarray/refcount.c
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
1 warning generated.
INFO: clang: numpy/core/src/multiarray/sequence.c
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
1 warning generated.
INFO: clang: numpy/core/src/multiarray/scalarapi.c
INFO: clang: numpy/core/src/multiarray/shape.c
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
numpy/core/src/multiarray/scalarapi.c:773:37: warning: 'ob_shash' is deprecated [-Wdeprecated-declarations]
((PyBytesObject *)obj)->ob_shash = -1;
^
/Library/Frameworks/Python.framework/Versions/3.11/include/python3.11/cpython/bytesobject.h:7:5: note: 'ob_shash' has been explicitly marked deprecated here
Py_DEPRECATED(3.11) Py_hash_t ob_shash;
^
/Library/Frameworks/Python.framework/Versions/3.11/include/python3.11/pyport.h:336:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
INFO: clang: numpy/core/src/multiarray/iterators.c
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
2 warnings generated.
1 warning generated.
numpy/core/src/multiarray/scalarapi.c:773:37: warning: 'ob_shash' is deprecated [-Wdeprecated-declarations]
((PyBytesObject *)obj)->ob_shash = -1;
^
/Library/Frameworks/Python.framework/Versions/3.11/include/python3.11/cpython/bytesobject.h:7:5: note: 'ob_shash' has been explicitly marked deprecated here
Py_DEPRECATED(3.11) Py_hash_t ob_shash;
^
/Library/Frameworks/Python.framework/Versions/3.11/include/python3.11/pyport.h:336:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
1 warning generated.
1 warning generated.
INFO: clang: build/src.macosx-10.9-universal2-3.11/numpy/core/src/multiarray/scalartypes.c
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
INFO: clang: numpy/core/src/multiarray/temp_elide.c
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
1 warning generated.
1 warning generated.
1 warning generated.
INFO: clang: numpy/core/src/multiarray/typeinfo.c
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
1 warning generated.
INFO: clang: numpy/core/src/multiarray/usertypes.c
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
1 warning generated.
INFO: clang: numpy/core/src/multiarray/legacy_dtype_implementation.c
INFO: clang: numpy/core/src/multiarray/nditer_pywrap.c
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
INFO: clang: numpy/core/src/multiarray/vdot.c
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
1 warning generated.
1 warning generated.
1 warning generated.
INFO: clang: build/src.macosx-10.9-universal2-3.11/numpy/core/src/npysort/timsort.c
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
INFO: clang: build/src.macosx-10.9-universal2-3.11/numpy/core/src/npysort/quicksort.c
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
1 warning generated.
1 warning generated.
INFO: clang: numpy/core/src/multiarray/number.c
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
1 warning generated.
INFO: clang: build/src.macosx-10.9-universal2-3.11/numpy/core/src/multiarray/nditer_templ.c
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
1 warning generated.
INFO: clang: numpy/core/src/multiarray/strfuncs.c
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
INFO: clang: build/src.macosx-10.9-universal2-3.11/numpy/core/src/npysort/binsearch.c
1 warning generated.
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
INFO: clang: build/src.macosx-10.9-universal2-3.11/numpy/core/src/umath/loops.c
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
1 warning generated.
1 warning generated.
1 warning generated.
INFO: clang: numpy/core/src/umath/umathmodule.c
INFO: clang: build/src.macosx-10.9-universal2-3.11/numpy/core/src/npysort/mergesort.c
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
INFO: clang: numpy/core/src/multiarray/nditer_api.c
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
1 warning generated.
INFO: clang: numpy/core/src/umath/reduction.c
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
INFO: clang: numpy/core/src/multiarray/experimental_public_dtype_api.c
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
1 warning generated.
1 warning generated.
INFO: clang: numpy/core/src/umath/legacy_array_method.c
INFO: clang: build/src.macosx-10.9-universal2-3.11/numpy/core/src/umath/scalarmath.c
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
1 warning generated.
1 warning generated.
1 warning generated.
INFO: clang: numpy/core/src/umath/ufunc_object.c
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
1 warning generated.
1 warning generated.
INFO: clang: numpy/core/src/umath/_scaled_float_dtype.c
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
1 warning generated.
INFO: clang: numpy/core/src/common/array_assign.c
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
1 warning generated.
INFO: clang: numpy/core/src/common/mem_overlap.c
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
1 warning generated.
1 warning generated.
INFO: clang: numpy/core/src/common/npy_argparse.c
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
1 warning generated.
INFO: clang: numpy/core/src/common/npy_hashtable.c
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
INFO: clang: numpy/core/src/common/ucsnarrow.c
1 warning generated.
1 warning generated.
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
1 warning generated.
INFO: clang: numpy/core/src/common/npy_longdouble.c
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
INFO: clang: numpy/core/src/common/ufunc_override.c
1 warning generated.
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
1 warning generated.
INFO: clang: build/src.macosx-10.9-universal2-3.11/numpy/core/src/common/npy_cpu_features.c
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
INFO: clang: numpy/core/src/common/numpyos.c
INFO: clang: numpy/core/src/multiarray/array_coercion.c
numpy/core/src/common/npy_cpu_features.c.src:125:1: warning: unused function 'npy__cpu_baseline_fid' [-Wunused-function]
npy__cpu_baseline_fid(const char *feature)
^
numpy/core/src/common/npy_cpu_features.c.src:138:1: warning: unused function 'npy__cpu_dispatch_fid' [-Wunused-function]
npy__cpu_dispatch_fid(const char *feature)
^
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
3 warnings generated.
numpy/core/src/common/npy_cpu_features.c.src:125:1: warning: unused function 'npy__cpu_baseline_fid' [-Wunused-function]
npy__cpu_baseline_fid(const char *feature)
^
numpy/core/src/common/npy_cpu_features.c.src:138:1: warning: unused function 'npy__cpu_dispatch_fid' [-Wunused-function]
npy__cpu_dispatch_fid(const char *feature)
^
2 warnings generated.
INFO: clang: numpy/core/src/common/cblasfuncs.c
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
1 warning generated.
1 warning generated.
1 warning generated.
INFO: clang: numpy/core/src/multiarray/array_method.c
INFO: clang: numpy/core/src/umath/extobj.c
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
INFO: clang: numpy/core/src/common/python_xerbla.c
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
1 warning generated.
1 warning generated.
1 warning generated.
INFO: clang: numpy/core/src/umath/ufunc_type_resolution.c
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
INFO: clang: build/src.macosx-10.9-universal2-3.11/numpy/core/src/npysort/heapsort.c
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
1 warning generated.
INFO: clang: numpy/core/src/umath/override.c
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
1 warning generated.
1 warning generated.
INFO: clang: build/src.macosx-10.9-universal2-3.11/numpy/core/src/npysort/selection.c
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
1 warning generated.
INFO: clang: numpy/core/src/multiarray/mapping.c
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
1 warning generated.
INFO: clang: numpy/core/src/multiarray/methods.c
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
1 warning generated.
1 warning generated.
INFO: clang: build/src.macosx-10.9-universal2-3.11/numpy/core/src/umath/matmul.c
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
1 warning generated.
INFO: clang: numpy/core/src/umath/dispatching.c
warning: overriding currently unsupported use of floating point exceptions on this target [-Wunsupported-floating-point-opt]
1 warning generated.
error: Command "clang -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -arch arm64 -arch x86_64 -g -ftrapping-math -DNPY_INTERNAL_BUILD=1 -DHAVE_NPY_CONFIG_H=1 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -DNO_ATLAS_INFO=3 -DHAVE_CBLAS -Ibuild/src.macosx-10.9-universal2-3.11/numpy/core/src/common -Ibuild/src.macosx-10.9-universal2-3.11/numpy/core/src/umath -Inumpy/core/include -Ibuild/src.macosx-10.9-universal2-3.11/numpy/core/include/numpy -Ibuild/src.macosx-10.9-universal2-3.11/numpy/distutils/include -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -Inumpy/core/src/_simd -I/Users/zap/myenv/include -I/Library/Frameworks/Python.framework/Versions/3.11/include/python3.11 -Ibuild/src.macosx-10.9-universal2-3.11/numpy/core/src/common -Ibuild/src.macosx-10.9-universal2-3.11/numpy/core/src/npymath -c numpy/core/src/multiarray/dragon4.c -o build/temp.macosx-10.9-universal2-3.11/numpy/core/src/multiarray/dragon4.o -MMD -MF build/temp.macosx-10.9-universal2-3.11/numpy/core/src/multiarray/dragon4.o.d -msse3 -I/System/Library/Frameworks/vecLib.framework/Headers" failed with exit status 1
INFO:
########### EXT COMPILER OPTIMIZATION ###########
INFO: Platform :
Architecture: unsupported
Compiler : clang
CPU baseline :
Requested : optimization disabled
Enabled : none
Flags : none
Extra checks: none
Requested : optimization disabled
CPU dispatch :
Enabled : none
Generated : none
INFO: CCompilerOpt.cache_flush[817] : write cache to path -> /private/var/folders/20/0qr8ky2558d47l3llvpglfz00000gn/T/pip-install-k3xzvyzv/numpy_e1d80e0ac53a4ad0b7fecb1268e59842/build/temp.macosx-10.9-universal2-3.11/ccompiler_opt_cache_ext.py
INFO:
########### CLIB COMPILER OPTIMIZATION ###########
INFO: Platform :
Architecture: unsupported
Compiler : clang
CPU baseline :
Requested : optimization disabled
Enabled : none
Flags : none
Extra checks: none
Requested : optimization disabled
CPU dispatch :
Enabled : none
Generated : none
INFO: CCompilerOpt.cache_flush[817] : write cache to path -> /private/var/folders/20/0qr8ky2558d47l3llvpglfz00000gn/T/pip-install-k3xzvyzv/numpy_e1d80e0ac53a4ad0b7fecb1268e59842/build/temp.macosx-10.9-universal2-3.11/ccompiler_opt_cache_clib.py
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for numpy
Failed to build numpy
ERROR: Could not build wheels for numpy, which is required to install pyproject.toml-based projects
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> See above for output.`
### Expected behavior
_No response_
### Logs
_No response_
### Environment
```shell
pip install TTS
```
### Additional context
_No response_
|
closed
|
2023-08-15T18:45:53Z
|
2023-08-26T10:09:23Z
|
https://github.com/coqui-ai/TTS/issues/2877
|
[
"bug"
] |
alxbouchard
| 0
|
recommenders-team/recommenders
|
deep-learning
| 1,929
|
[BUG] News recommendation method: npa_MIND.ipynb cannot run properly!
|
### Description
I'm running `recommenders/examples/00_quick_start/npa_MIND.ipynb`, and I encountered the following error message when I ran `print(model.run_eval(valid_news_file, valid_behaviors_file))`:
> File ~/anaconda3/envs/tf2/lib/python3.8/site-packages/tensorflow/python/client/session.py:1480, in BaseSession._Callable.__call__(self, *args, **kwargs)
> 1478 try:
> 1479 run_metadata_ptr = tf_session.TF_NewBuffer() if run_metadata else None
> -> 1480 ret = tf_session.TF_SessionRunCallable(self._session._session,
> 1481 self._handle, args,
> 1482 run_metadata_ptr)
> 1483 if run_metadata:
> 1484 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)
>
> InvalidArgumentError: indices[0,0] = 173803 is not in [0, 94057)
> [[{{node news_encoder/embedding_1/embedding_lookup}}]]
The reason for this error is that the index is out of bounds. When we try to access an index that does not exist in an array or matrix, this error occurs. In this case, the value of the index [0,0] is 173803, but it should be within the range of [0, 94057). This may be due to incorrect handling or allocation of the index values, or some issues in the dataset.
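The failure mode can be reproduced in isolation: an embedding lookup only accepts indices below the vocabulary size, so out-of-range word IDs (e.g. from a mismatched word dictionary) must be remapped first. A pure-Python sketch with a hypothetical OOV index:

```python
VOCAB_SIZE = 94057  # rows in the embedding matrix, as in the error above
OOV_INDEX = 0       # index reserved for unknown words (assumption)

def remap_indices(indices, vocab_size=VOCAB_SIZE, oov=OOV_INDEX):
    """Map any word ID outside [0, vocab_size) to the OOV index -- the
    usual guard when the data references words missing from the word
    dictionary used to build the embeddings."""
    return [i if 0 <= i < vocab_size else oov for i in indices]
```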
In the MIND_type option, I chose 'small'. I noticed that there is a similar issue at https://github.com/microsoft/recommenders/issues/1291, but there haven't been any recent updates, so I'm not sure whether the issue has been resolved.
### In which platform does it happen?
Tensorflow: 2.6.1
Python: 3.7.11
Linux Ubuntu: 18.04.6
### How do we replicate the issue?
Just run `recommenders/examples/00_quick_start/npa_MIND.ipynb`
### Expected behavior (i.e. solution)
Print evaluation metrics, such as:
{'group_auc': 0.5228, 'mean_mrr': 0.2328, 'ndcg@5': 0.2377, 'ndcg@10': 0.303}
### Other Comments
Thank you to everyone who has helped and contributed to the community.
|
closed
|
2023-05-08T08:06:59Z
|
2023-05-13T09:02:39Z
|
https://github.com/recommenders-team/recommenders/issues/1929
|
[
"bug"
] |
SnowyMeteor
| 1
|
huggingface/datasets
|
pandas
| 7,022
|
There is dead code after we require pyarrow >= 15.0.0
|
There are code lines specific for pyarrow versions < 15.0.0.
However, we require pyarrow >= 15.0.0 since the merge of PR:
- #6892
Those code lines are now dead code and should be removed.
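The dead branches have the following shape — a guard on the pyarrow version that can no longer be false once `pyarrow >= 15.0.0` is required (illustrative sketch, not the actual `datasets` source):

```python
def parse_version(v: str) -> tuple:
    """Minimal 'major.minor.patch' parser, enough for a >= comparison."""
    return tuple(int(part) for part in v.split("."))

# Shape of the now-dead guard (any installed version satisfies >= 15.0.0):
if parse_version("16.1.0") >= parse_version("15.0.0"):
    supports_new_api = True
else:  # unreachable once pyarrow >= 15.0.0 is a hard requirement
    supports_new_api = False
```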
|
closed
|
2024-07-03T08:52:57Z
|
2024-07-03T09:17:36Z
|
https://github.com/huggingface/datasets/issues/7022
|
[
"maintenance"
] |
albertvillanova
| 0
|
pywinauto/pywinauto
|
automation
| 1,121
|
Excuse me, what are the factors that affect running speed
|
Excuse me, I run the same program on two computers, but the page-filling speed differs significantly (one is twice as slow).
The computers have the same hardware except for the monitor resolution. Is there any other factor that would affect the program's running speed?
Please let me know. Thank you.
|
open
|
2021-09-28T09:10:05Z
|
2021-10-03T09:43:25Z
|
https://github.com/pywinauto/pywinauto/issues/1121
|
[
"question"
] |
prcciencecc
| 1
|
PaddlePaddle/PaddleHub
|
nlp
| 1,877
|
Can PaddleHub recognition results be used directly, without saving an image and reading it back?
|
For the results PaddleHub recognizes,
I see there is a built-in box-drawing feature that can return the image with boxes drawn on it.
Can this be used directly,
without saving the image locally and then reading it back from disk?
|
open
|
2022-05-24T03:29:22Z
|
2024-02-26T05:02:05Z
|
https://github.com/PaddlePaddle/PaddleHub/issues/1877
|
[] |
monkeycc
| 2
|
OpenInterpreter/open-interpreter
|
python
| 1,226
|
The interpreter cannot install required dependencies by itself.
|
### Describe the bug
```
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
Cell In[65], line 4
2 import os
3 print('##active_line2##')
----> 4 from rdkit import Chem
5 print('##active_line4##')
6 sdf_files = []
ModuleNotFoundError: No module named 'rdkit'
```
I can see that there are some issues with the code block. It seems like you're missing the RDKit
library, which is required for working with chemical structures.
To fix this, you'll need to install RDKit. You can do this using pip:
```
`
pip install rdkit
Cell In[70], line 1
`
^
SyntaxError: invalid syntax
```
### Reproduce
The interpreter wrote a piece of code according to the plan, but was unable to install the required dependencies.
### Expected behavior
Install the dependencies required for the scripts you need.
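For what it's worth, a minimal sketch of how a missing module can be installed from inside Python (a hypothetical `ensure` helper, not Open Interpreter's actual code) — typing `pip install rdkit` into a Python cell is a SyntaxError, so the install has to go through a subprocess:

```python
import importlib
import subprocess
import sys

def ensure(package, module_name=None):
    """Import a module, installing its package into the current
    interpreter's environment first if it is missing."""
    name = module_name or package
    try:
        return importlib.import_module(name)
    except ModuleNotFoundError:
        # Shelling out to `<this python> -m pip` keeps the install in
        # the same environment the code is running in.
        subprocess.check_call([sys.executable, "-m", "pip", "install", package])
        return importlib.import_module(name)
```

e.g. `Chem = ensure("rdkit").Chem` would then work whether or not RDKit was already installed.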
### Screenshots
_No response_
### Open Interpreter version
Open-interpreter Version: cmd: Open Interpreter 0.2.4 New Computer Update , pkg: 0.2.4
### Python version
Python Version: 3.11.9
### Operating System name and version
Windows-10-10.0.22631-SP0
### Additional context
_No response_
|
open
|
2024-04-23T06:34:09Z
|
2024-04-23T06:34:09Z
|
https://github.com/OpenInterpreter/open-interpreter/issues/1226
|
[] |
stan1233
| 0
|
saleor/saleor
|
graphql
| 17,118
|
Bug: Cannot delete demo account in Dashboard
|
### What are you trying to achieve?
I'm trying to delete a superuser account with email "anna.jenkins@example.com". Nor can I disable this account.
### Steps to reproduce the problem
Go to Staff Members and delete unnecessary accounts.

### What did you expect to happen?
As other demo accounts, I can delete this account.
### Logs
_No response_
### Environment
dashboard v3.20.20
core v3.20.56
OS and version: debian 12
|
open
|
2024-12-02T07:32:48Z
|
2024-12-09T10:01:07Z
|
https://github.com/saleor/saleor/issues/17118
|
[
"bug",
"accepted"
] |
charleslcso
| 3
|
great-expectations/great_expectations
|
data-science
| 10,711
|
Missing Failed Row Identification in Results for expect_column_values_to_be_type and expect_column_values_to_be_in_type_list Expectations
|
**Describe the bug**
The expect_column_values_to_be_type and expect_column_values_to_be_in_type_list expectations do not provide failed row identification in their results.
**Expected behavior**
The documentation ([https://greatexpectations.io/expectations/expect_column_values_to_be_of_type]) documents the result format as:
```
{
  "exception_info": {
    "raised_exception": false,
    "exception_traceback": null,
    "exception_message": null
  },
  "result": {
    "element_count": 3,
    "unexpected_count": 3,
    "unexpected_percent": 100.0,
    "partial_unexpected_list": [
      "12345",
      "abcde",
      "1b3d5"
    ],
    "missing_count": 0,
    "missing_percent": 0.0,
    "unexpected_percent_total": 100.0,
    "unexpected_percent_nonmissing": 100.0
  },
  "meta": {},
  "success": false
}
```
But the actual output is:
```
{
  "success": false,
  "expectation_config": {
    "type": "expect_column_values_to_be_of_type",
    "kwargs": {
      "batch_id": "spark_data_source-my_data_asset",
      "column": "Location",
      "type_": "IntegerType"
    },
    "meta": {}
  },
  "result": {
    "observed_value": "StringType"
  },
  "meta": {},
  "exception_info": {
    "raised_exception": false,
    "exception_traceback": null,
    "exception_message": null
  }
}
```
**Environment (please complete the following information):**
- Operating System: [Linux]
- Great Expectations Version: [1.2.0]
- Data Source: [spark]
|
open
|
2024-11-27T12:42:55Z
|
2025-02-12T21:26:25Z
|
https://github.com/great-expectations/great_expectations/issues/10711
|
[
"bug",
"backlog"
] |
akmahi
| 1
|
quantmind/pulsar
|
asyncio
| 196
|
Deprecate multi_async
|
Replace with `gather`
|
closed
|
2016-01-30T21:08:29Z
|
2017-02-09T21:53:14Z
|
https://github.com/quantmind/pulsar/issues/196
|
[
"design decision",
"enhancement"
] |
lsbardel
| 1
|
hbldh/bleak
|
asyncio
| 1,386
|
Overwhelming amounts of Notifications - any way to handle that?
|
* bleak version: 0.20.2
* Python version: 3.11.0
* Operating System: Windows 10,11 - but also seen on Linux
* BlueZ version (`bluetoothctl -v`) in case of Linux:
### Description
I have a start-notify listening to a Bluetooth multimeter something like this
```
from datetime import datetime
import asyncio
from bleak import BleakClient, BleakScanner
async def ableloop(devicename):
MODEL_NBR_UUID = "0000fff4-0000-1000-8000-00805f9b34fb"
def handle(sender,data):
print(datetime.now(),data)
print("BLE Scanning for",devicename)
devices = await BleakScanner.discover(return_adv=True)
for d, a in devices.values():
if d.name == devicename:
print("BLE Connected")
try:
async with BleakClient(d.address) as client:
await client.start_notify(MODEL_NBR_UUID,handle)
while client.is_connected:
await asyncio.sleep(3)
print("BLE Disconnected")
print("BLE Here")
except Exception as e:
print("Bleak Exception",e,)
print("BLE Ending")
asyncio.run(ableloop("BDM"))
```
It works well for the most part - until it runs on slightly less powerful hardware (say an Intel 6th gen laptop) where it seems to 'fall behind' the actual data being sent.
I've removed ALL processing - "handle" just returns without doing anything - still seems to fall behind (the results you see are from seconds and then even MINUTES earlier)
If you disconnect the device, data stops being received instantly and Python then hangs - all threads stop responding e.g. the code never reaches "BLE Disconnected" (there's a Flask thread which also stops responding)
My guess is the meter is sending WAY too much data but I can't control that - the only thought I have is to disconnect and reconnect constantly but that's a slow process/not really ideal - thoughts?
Is there a way to tell the notification to throw away pending messages or clear its queue, or...?
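For illustration, one pattern sometimes used for this (a standalone sketch, not a bleak API — the device flood is simulated here): push notifications into a bounded `asyncio.Queue` from the callback and drop whatever doesn't fit, so a backlog can never build up:

```python
import asyncio

async def main():
    # Bounded queue: at most 100 pending notifications. The assumption
    # is that losing samples is acceptable when we can't keep up.
    queue: asyncio.Queue = asyncio.Queue(maxsize=100)
    dropped = 0

    def handle(sender, data):
        # Notification callback: never block here, just enqueue or drop.
        nonlocal dropped
        try:
            queue.put_nowait(data)
        except asyncio.QueueFull:
            dropped += 1  # throw away data we cannot keep up with

    # Simulate a device flooding us with 1000 notifications before the
    # consumer gets a chance to drain anything.
    for i in range(1000):
        handle("0000fff4-...", i)

    kept = queue.qsize()
    print(kept, dropped)  # 100 900
    return kept, dropped

kept, dropped = asyncio.run(main())
```

A consumer task would then drain the queue at its own pace; the slow hardware only ever sees a bounded, recent window of data instead of a minutes-old backlog.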
|
closed
|
2023-08-10T12:26:07Z
|
2023-08-10T21:36:49Z
|
https://github.com/hbldh/bleak/issues/1386
|
[] |
shrewdlogarithm
| 5
|
unionai-oss/pandera
|
pandas
| 1,214
|
pyright type annotations for DataFrame[Schema] don't match those generated via mypy
|
First, thanks for the great package!
I noticed that the hints provided in the [Mypy Integration](https://pandera.readthedocs.io/en/stable/mypy_integration.html#mypy-integration) docs don't seem to consistently align with pyright, the type checker included in microsoft's vscode python plugin.
Specifically, looking at this example derived from the docs
#### test.py
```python
from typing import cast
import pandas as pd
import pandera as pa
from pandera.typing import DataFrame, Series
class InputSchema(pa.DataFrameModel):
year: Series[int] = pa.Field(gt=2000, coerce=True)
month: Series[int] = pa.Field(ge=1, le=12, coerce=True)
day: Series[int] = pa.Field(ge=0, le=365, coerce=True)
df = pd.DataFrame(
{
"year": ["2001", "2002", "2003"],
"month": ["3", "6", "12"],
"day": ["200", "156", "365"],
}
)
df1 = DataFrame[InputSchema](df)
reveal_type(df1)
df2 = df.pipe(DataFrame[InputSchema])
reveal_type(df2)
df3 = cast(DataFrame[InputSchema], df)
reveal_type(df3)
```
The output of `mypy test.py` is
```
test.py:23: note: Revealed type is "pandera.typing.pandas.DataFrame[test.InputSchema]"
test.py:26: note: Revealed type is "pandera.typing.pandas.DataFrame[test.InputSchema]"
test.py:29: note: Revealed type is "pandera.typing.pandas.DataFrame[test.InputSchema]"
```
While the same script in vscode gives:
```
Type of "df1" is "DataFrame"
Type of "df2" is "DataFrame"
Type of "df3" is "DataFrame[InputSchema]"
```
As a result, the only type manipulation that seems to satisfy vscode is to explicitly cast dataframes as pandera schemas, which likely doesn't invoke run-time validation / coercion unless you're using the `check_types` decorator.
|
open
|
2023-06-04T19:44:04Z
|
2023-06-04T19:45:19Z
|
https://github.com/unionai-oss/pandera/issues/1214
|
[
"enhancement"
] |
pstjohn
| 0
|
feder-cr/Jobs_Applier_AI_Agent_AIHawk
|
automation
| 791
|
[BUG]: While generating resume through gemini getting error
|
### Describe the bug
While generating a resume through Gemini, it is hitting the OpenAI API.
### Steps to reproduce
Creating a resume using Gemini.
### Expected behavior
create a resume
### Actual behavior
It failed while trying to call the API through OpenAI.
### Branch
None
### Branch name
_No response_
### Python version
_No response_
### LLM Used
Gemini
### Model used
gemini-1.5-flash-latest
### Additional context

|
closed
|
2024-11-09T11:20:55Z
|
2024-11-09T16:52:11Z
|
https://github.com/feder-cr/Jobs_Applier_AI_Agent_AIHawk/issues/791
|
[
"bug"
] |
surajmoolya
| 1
|
ansible/awx
|
django
| 14,912
|
User Details redirects to a resource that is "not found"
|
### Please confirm the following
- [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
- [X] I understand that AWX is open source software provided for free and that I might not receive a timely response.
- [X] I am **NOT** reporting a (potential) security vulnerability. (These should be emailed to `security@ansible.com` instead.)
### Bug Summary
When any user selects the "User Details" button in the upper right-hand corner, they are redirected to a missing resource.
This is the path "http://<Hostname>/users/4/details".
### AWX version
17.1.0
### Select the relevant components
- [X] UI
- [ ] UI (tech preview)
- [ ] API
- [ ] Docs
- [ ] Collection
- [ ] CLI
- [ ] Other
### Installation method
kubernetes
### Modifications
no
### Ansible version
2.14.9
### Operating system
Rocky Linux 9.3
### Web browser
Chrome
### Steps to reproduce
Install AWX following the normal process, add a new user in the GUI, select the dropdown, and the error should occur.
### Expected results
The resource at http://<Hostname>/users/4/details should load the user's details (it should not be missing).
### Actual results
The resource at http://<Hostname>/users/4/details is missing.
### Additional information
Please let me know what further information you may require.
|
open
|
2024-02-21T19:40:02Z
|
2024-02-22T16:52:07Z
|
https://github.com/ansible/awx/issues/14912
|
[
"type:bug",
"component:ui",
"needs_triage",
"community"
] |
daydone
| 1
|
kaarthik108/snowChat
|
streamlit
| 4
|
Installation Build
|
I am facing a problem setting up the environment, mainly in step 4 of the installation guide. Can someone explain specifically what to do there?
|
open
|
2023-05-19T11:53:26Z
|
2023-07-28T03:24:34Z
|
https://github.com/kaarthik108/snowChat/issues/4
|
[] |
Arush04
| 3
|
microsoft/unilm
|
nlp
| 781
|
in expectation of markupxlm
|
**Describe**
Model I am using (MarkupLM ...):
https://github.com/microsoft/unilm/issues/547#issuecomment-981703151
Seven months have passed. What about MarkupXLM? @lockon-n
|
open
|
2022-07-06T09:19:51Z
|
2022-07-06T09:43:36Z
|
https://github.com/microsoft/unilm/issues/781
|
[] |
nlper-77
| 0
|
biolab/orange3
|
numpy
| 6,860
|
Path processing in the Formula widget
|
**What's your use case?**
This would be most useful for _Spectroscopy_ and similar workflows.
Files always come with a path that carries information about how they are organized. In `Formula` it is possible to do plenty of math but not much text processing (beyond the obvious string operations). However, it would be awesome to be able to extract the path and the filename using, for example, the `os` Python module.
**What's your proposed solution?**
<!-- Be specific, clear, and concise. -->
Make path handling functions available in `Formula`.
**Are there any alternative solutions?**
One can use string functions but that can become a bit awkward.
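For illustration, the kind of path handling this would enable (a plain-Python sketch with a made-up path, not an Orange widget):

```python
import os.path

# Hypothetical spectroscopy file path carrying organizational info.
path = "C:/data/spectra/sample_042.dat"

# Split off the directory (organization) and the bare filename.
directory, filename = os.path.split(path)

# Split the filename into its stem and extension.
stem, ext = os.path.splitext(filename)

print(directory)   # C:/data/spectra
print(stem, ext)   # sample_042 .dat
```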
|
open
|
2024-07-23T08:00:43Z
|
2024-07-23T08:00:43Z
|
https://github.com/biolab/orange3/issues/6860
|
[] |
borondics
| 0
|
flairNLP/flair
|
nlp
| 2,864
|
IndexError
|
I was trying to train Flair for NER and I hit this bug:
```
File "FLAIR1.py", line 37, in <module>
trainer.train('resources/taggers/example-ner',learning_rate=0.1,mini_batch_size=16,max_epochs=10,main_evaluation_metric=("macro avg", "f1-score"),embeddings_storage_mode="cpu")
File "flair/trainers/trainer.py", line 624, in train
dev_eval_result = self.model.evaluate(
File "flair/nn/model.py", line 236, in evaluate
loss_and_count = self.predict(
File "flair/models/sequence_tagger_model.py", line 487, in predict
loss = self._calculate_loss(features, gold_labels)
File "flair/models/sequence_tagger_model.py", line 336, in _calculate_loss
[
File "flair/models/sequence_tagger_model.py", line 337, in <listcomp>
self.label_dictionary.get_idx_for_item(label[0])
File "flair/data.py", line 90, in get_idx_for_item
raise IndexError
IndexError
```
Any ideas to solve this please?
|
closed
|
2022-07-15T12:45:09Z
|
2023-04-07T16:35:46Z
|
https://github.com/flairNLP/flair/issues/2864
|
[
"question",
"wontfix"
] |
faz9
| 3
|
collerek/ormar
|
fastapi
| 1,181
|
Failing test: test_weakref_init
|
When I run the test suite one of the tests fails.
```
$ python3 -mpytest -k test_weakref_init
==================================================================================== test session starts ====================================================================================
platform linux -- Python 3.11.5, pytest-7.4.0, pluggy-1.3.0
benchmark: 3.2.2 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: /home/edward/src/2023/vendor/ormar
plugins: benchmark-3.2.2, astropy-header-0.2.2, anyio-3.7.0, flaky-3.7.0, forked-1.6.0, socket-0.5.1, lazy-fixture-0.6.3, openfiles-0.5.0, twisted-1.13.2, arraydiff-0.5.0, kgb-7.1.1, asyncio-0.20.3, doctestplus-1.0.0, repeat-0.9.1, xdist-3.3.1, remotedata-0.4.0, django-4.5.2, timeout-2.1.0, httpx-0.21.3, filter-subpackage-0.1.2, astropy-0.10.0, cov-4.1.0, xprocess-0.22.2, hypothesis-6.82.7, tornasync-0.6.0.post2, trio-0.8.0, requests-mock-1.9.3, mock-3.11.1, pylama-8.4.1
asyncio: mode=Mode.STRICT
collecting ... USED DB: sqlite:///test.db
collected 531 items / 530 deselected / 1 selected
tests/test_relations/test_weakref_checking.py F [100%]
========================================================================================= FAILURES ==========================================================================================
_____________________________________________________________________________________ test_weakref_init _____________________________________________________________________________________
def test_weakref_init():
band = Band(name="Band")
artist1 = Artist(name="Artist 1", band=band)
artist2 = Artist(name="Artist 2", band=band)
artist3 = Artist(name="Artist 3", band=band)
del artist1
Artist(
name="Artist 2", band=band
) # Force it to check for weakly-referenced objects
del artist3
band.artists # Force it to clean
> assert len(band.artists) == 1
E AssertionError: assert 3 == 1
E + where 3 = len([<weakproxy at 0x7f3d34e40130 to Artist at 0x7f3d34e45fd0>, <weakproxy at 0x7f3d34e9fbf0 to Artist at 0x7f3d34e46190>, <weakproxy at 0x7f3d34ee18f0 to Artist at 0x7f3d34e45ef0>])
E + where [<weakproxy at 0x7f3d34e40130 to Artist at 0x7f3d34e45fd0>, <weakproxy at 0x7f3d34e9fbf0 to Artist at 0x7f3d34e46190>, <weakproxy at 0x7f3d34ee18f0 to Artist at 0x7f3d34e45ef0>] = Band({'id': None, 'name': 'Band', 'artists': [<weakproxy at 0x7f3d34e40130 to Artist at 0x7f3d34e45fd0>, <weakproxy at 0x7f3d34e9fbf0 to Artist at 0x7f3d34e46190>, <weakproxy at 0x7f3d34ee18f0 to Artist at 0x7f3d34e45ef0>]}).artists
tests/test_relations/test_weakref_checking.py:51: AssertionError
===================================================================================== warnings summary ======================================================================================
ormar/fields/base.py:51
ormar/fields/base.py:51
/home/edward/src/2023/vendor/ormar/ormar/fields/base.py:51: DeprecationWarning: Parameter `pydantic_only` is deprecated and will be removed in one of the next releases.
You can declare pydantic fields in a normal way.
Check documentation: https://collerek.github.io/ormar/fields/pydantic-fields
warnings.warn(
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
================================================================================== short test summary info ==================================================================================
FAILED tests/test_relations/test_weakref_checking.py::test_weakref_init - AssertionError: assert 3 == 1
======================================================================= 1 failed, 530 deselected, 2 warnings in 1.32s =======================================================================
$
```
**Versions**
* Python 3.11.5
* ormar 0.12.2
* pydantic 1.10.4
|
closed
|
2023-08-31T16:24:55Z
|
2024-06-10T08:48:20Z
|
https://github.com/collerek/ormar/issues/1181
|
[
"bug"
] |
EdwardBetts
| 1
|
ultralytics/yolov5
|
pytorch
| 12,490
|
Putting data into different numpy arrays based on confidence
|
### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Hello! I am currently working on an application that reads out, via TTS, a string containing all of the detected objects, but the sentence right now is very shallow and lacks depth. I want the sentence to say whether it is very sure, quite sure, or pretty sure of what an object is, but I need to classify each object by its confidence in order to do that. How can I separate the results based on confidence?
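For illustration, one way to bucket detections by confidence (a plain-Python sketch — the class names, scores, and thresholds here are made up; in practice you would read the per-detection confidences out of the YOLOv5 results):

```python
# Hypothetical (name, confidence) pairs standing in for real detections.
detections = [("person", 0.95), ("dog", 0.72), ("car", 0.55), ("cat", 0.91)]

def bucket(confidence):
    # Assumed thresholds -- tune to taste.
    if confidence >= 0.9:
        return "very sure"
    if confidence >= 0.7:
        return "quite sure"
    return "pretty sure"

# Group object names by how confident the detector was.
groups = {}
for name, confidence in detections:
    groups.setdefault(bucket(confidence), []).append(name)

print(groups)
```

Each group can then feed its own sentence template ("I am very sure I see a person and a cat, ...") before handing the string to TTS.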
### Additional
_No response_
|
closed
|
2023-12-10T15:13:24Z
|
2024-10-20T19:33:57Z
|
https://github.com/ultralytics/yolov5/issues/12490
|
[
"question",
"Stale"
] |
ThundahPrime
| 5
|
JaidedAI/EasyOCR
|
pytorch
| 569
|
Newbie Question : how to produce Output as TXT file
|
hi, sorry for the very newbie question, and my broken English,
I am not a python programmer, so installing to this "working state" is quite miraculous for me,
for the info, I use Windows 10 and run "easyocr.exe" from cmd; this is the result:
```
([[384, 46], [862, 46], [862, 124], [384, 124]], '元旱安,好久不旯', 0.5539275336640896)
```
(later i learn about "--detail 0")
my question is: "how do I send the result to a TXT file?"
i tried
`
easyocr.exe -l ch_sim -f Source.jpeg > result.txt
`
but the result is error
thanks for the great job, and thank for reading, have a nice day
|
closed
|
2021-10-15T13:23:30Z
|
2024-03-28T12:51:04Z
|
https://github.com/JaidedAI/EasyOCR/issues/569
|
[] |
kucingkembar
| 3
|
dropbox/PyHive
|
sqlalchemy
| 253
|
Possible DB-API fetchall() performance improvements
|
Currently the cursor implementation for [fetchall](https://github.com/dropbox/PyHive/blob/65076bbc8697a423b438dc03e928a6dff86fd2cb/pyhive/common.py#L129) indirectly iterates over the result set via `fetchone` rather than directly iterating over the result set similar to prestodb's python [DB-API](https://github.com/prestodb/presto-python-client/blob/master/prestodb/dbapi.py#L306).
I'm not certain whether it's viable to use this approach, however I mocked up a basic example which shows that the proposed solution is about 10x faster.
```python
import timeit
def current():
class Cursor:
def __init__(self):
self._iterator = iter(range(10000))
def fetchone(self):
try:
return next(self._iterator)
except StopIteration:
return None
def fetchall(self):
return list(iter(self.fetchone, None))
cursor = Cursor()
cursor.fetchall()
def proposed():
class Cursor:
def __init__(self):
self._iterator = iter(range(10000))
def fetchall(self):
return list(self._iterator)
cursor = Cursor()
cursor.fetchall()
print(timeit.timeit("current()", number=1000, setup="from __main__ import current"))
print(timeit.timeit("proposed()", number=1000, setup="from __main__ import proposed"))
```
resulted in 2.999s and 0.205s respectively, i.e., the proposed solution is about 10x faster.
|
open
|
2018-11-15T06:49:13Z
|
2018-11-28T05:53:52Z
|
https://github.com/dropbox/PyHive/issues/253
|
[] |
john-bodley
| 0
|