Dataset schema:

| column | dtype | values / lengths |
|---|---|---|
| repo_name | string | length 9–75 |
| topic | string | 30 classes |
| issue_number | int64 | 1–203k |
| title | string | length 1–976 |
| body | string | length 0–254k |
| state | string | 2 classes (open / closed) |
| created_at | string | length 20 |
| updated_at | string | length 20 |
| url | string | length 38–105 |
| labels | list | length 0–9 |
| user_login | string | length 1–39 |
| comments_count | int64 | 0–452 |
comfyanonymous/ComfyUI
|
pytorch
| 6,331
|
No more real-time preview SamplerCustomAdvanced node after update?
|
### Your question
How do I enable it? I can't seem to find the option.

### Logs
_No response_
### Other
_No response_
|
closed
|
2025-01-03T12:01:54Z
|
2025-01-03T13:06:43Z
|
https://github.com/comfyanonymous/ComfyUI/issues/6331
|
[
"User Support"
] |
Orenji-Tangerine
| 2
|
pydata/xarray
|
numpy
| 9,084
|
Release 2024.06.0
|
### What is your issue?
We should get this out before numpy 2 (June 16).
@keewis, thanks for [volunteering!](https://github.com/pydata/xarray/pull/8946#issuecomment-2158022820)
|
closed
|
2024-06-10T15:52:19Z
|
2024-06-14T10:18:53Z
|
https://github.com/pydata/xarray/issues/9084
|
[
"Release"
] |
dcherian
| 2
|
deepset-ai/haystack
|
machine-learning
| 8,622
|
Update serialization in HuggingFace Generators to include missing parameters
|
**Is your feature request related to a problem? Please describe.**
Certain initialization parameters, such as `chat_template` in `HuggingFaceLocalChatGenerator` and `stop_words` in `HuggingFaceAPIChatGenerator`, are not included in the serialization process. This omission can result in inconsistent behavior when deserializing these components.
**Describe the solution you'd like**
Ensure all relevant initialization parameters are properly serialized and deserialized in the HuggingFace generators.
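The requested behavior can be sketched generically; the class and parameter names below are illustrative assumptions, not Haystack's actual serialization code:

```python
# Hypothetical sketch of the pattern: every __init__ parameter is echoed into
# to_dict so that from_dict can reconstruct an equivalent component.
class ComponentSketch:
    def __init__(self, model, chat_template=None, stop_words=None):
        self.model = model
        self.chat_template = chat_template
        self.stop_words = stop_words

    def to_dict(self):
        return {
            "model": self.model,
            "chat_template": self.chat_template,  # previously omitted -> lossy round-trip
            "stop_words": self.stop_words,
        }

    @classmethod
    def from_dict(cls, data):
        return cls(**data)
```

The point is simply that `from_dict(to_dict())` must round-trip every `__init__` parameter.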
|
closed
|
2024-12-10T16:27:55Z
|
2024-12-19T14:12:14Z
|
https://github.com/deepset-ai/haystack/issues/8622
|
[
"P2",
"2.x"
] |
Amnah199
| 1
|
charlesq34/pointnet
|
tensorflow
| 174
|
The visi
|
closed
|
2019-04-22T06:59:13Z
|
2019-04-22T06:59:59Z
|
https://github.com/charlesq34/pointnet/issues/174
|
[] |
Liu-Feng
| 0
|
|
huggingface/datasets
|
machine-learning
| 7,053
|
Datasets.datafiles resolve_pattern `TypeError: can only concatenate tuple (not "str") to tuple`
|
### Describe the bug
in data_files.py, line 332,
`fs, _, _ = get_fs_token_paths(pattern, storage_options=storage_options)`
If we run the code on AWS, `fs.protocol` will be a tuple like `('file', 'local')`.
So `isinstance(fs.protocol, str) == False`, and
`protocol_prefix = fs.protocol + "://" if fs.protocol != "file" else ""` will raise
`TypeError: can only concatenate tuple (not "str") to tuple`.
### Steps to reproduce the bug
Steps to reproduce:
1. Run on a cloud server like AWS,
2. `import datasets.data_files as datafile`
3. `datafile.resolve_pattern('path/to/dataset', '.')`
4. `TypeError: can only concatenate tuple (not "str") to tuple`
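A minimal sketch of the failure mode described above, with the protocol value assumed from the report:

```python
# fsspec's LocalFileSystem reports a tuple of protocol aliases, not a str.
protocol = ("file", "local")

# The buggy expression concatenates tuple + str:
try:
    prefix = protocol + "://" if protocol != "file" else ""
except TypeError as exc:
    print(exc)  # can only concatenate tuple (not "str") to tuple

# A defensive fix: normalize to the first protocol name before comparing.
proto = protocol[0] if isinstance(protocol, (tuple, list)) else protocol
prefix = proto + "://" if proto != "file" else ""
```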
### Expected behavior
Should return path of the dataset, with fs.protocol at the beginning
### Environment info
- `datasets` version: 2.14.0
- Platform: Linux-3.10.0-1160.119.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.19
- Huggingface_hub version: 0.23.5
- PyArrow version: 16.1.0
- Pandas version: 1.1.5
|
closed
|
2024-07-18T13:42:35Z
|
2024-07-18T15:17:42Z
|
https://github.com/huggingface/datasets/issues/7053
|
[] |
MatthewYZhang
| 2
|
numpy/numpy
|
numpy
| 28,216
|
numpy import issue with python 3.13.1 over nfs [Windows]
|
### Describe the issue:
opening a new bug as I was not able to reopen the previous bug #28207 (numpy 2.0.1 on python 3.12.6 vs 3.13.1).
As mentioned in that bug, I built numpy 2.1.0 with python 3.13.1 but am still getting the same import error -
from numpy._core._multiarray_umath import (
add_docstring, _get_implementing_args, _ArrayFunctionDispatcher)
ImportError: DLL load failed while importing _multiarray_umath: The parameter is incorrect.
I think I missed mentioning that I am invoking python (3.13.1) over NFS on a remote host, where the import fails with the above error.
This issue is not seen with python 3.12.6,
so I was curious whether changes introduced in python 3.13.1 caused this behaviour.
As a workaround, if I set PYTHONHOME=<nfs python path>\python the import works.
Please note that this issue is only seen on Windows; Linux works fine.
### Reproduce the code example:
```python
>>> import numpy
Traceback (most recent call last):
File "C:/temp/2757610094/python/lib/site-packages/numpy/_core/__init__.py", line 23, in <module>
from . import multiarray
File "C:/temp/2757610094/python/lib/site-packages/numpy/_core/multiarray.py", line 10, in <module>
from . import overrides
File "C:/temp/2757610094/python/lib/site-packages/numpy/_core/overrides.py", line 8, in <module>
from numpy._core._multiarray_umath import (
add_docstring, _get_implementing_args, _ArrayFunctionDispatcher)
ImportError: DLL load failed while importing _multiarray_umath: The parameter is incorrect.
```
### Error message:
```shell
```
### Python and NumPy Versions:
python 3.13.1 on windows
numpy 2.1.0
### Runtime Environment:
_No response_
### Context for the issue:
_No response_
|
closed
|
2025-01-22T11:35:13Z
|
2025-01-24T23:39:36Z
|
https://github.com/numpy/numpy/issues/28216
|
[
"00 - Bug"
] |
Ganeshkumhar1
| 9
|
ultralytics/yolov5
|
machine-learning
| 12,677
|
Cant understand how does yolov5 work.
|
Hello. I don't understand some basic principles of how YOLOv5 works and need help. Could you please answer my questions?
As far as I understand, a convolutional neural network always has a fully connected layer at its output to predict the class of the object, but I don't see one in this picture. It seems the YOLOv5 output consists of only 3 conv layers, so I can't see how it predicts the class. Could you please explain how YOLOv5 predicts the class without a fully connected layer? Maybe this diagram is not complete and doesn't show the fully connected layer?
What exactly does the BottleNeckCSP layer do? Is it just a better version of the convolution layer? How do they (BottleNeckCSP and conv layers) interact with each other, and why does YOLOv5 need an ordinary convolution layer if C3 is just better?

I copied this diagram from [here](https://blog.csdn.net/Q1u1NG/article/details/107511465), it is written in Chinese.
Copyright statement: This article is the original article of the blogger and follows the CC 4.0 BY-SA copyright agreement. Please attach the original source link and this statement for reprinting.
Link to this article: https://blog.csdn.net/Q1u1NG/article/details/107511465
|
closed
|
2024-01-28T12:41:51Z
|
2024-10-20T19:38:23Z
|
https://github.com/ultralytics/yolov5/issues/12677
|
[] |
Aflexg
| 4
|
ckan/ckan
|
api
| 7,548
|
CKAN bulk_update_delete not reflecting changes in frontend
|
## CKAN 2.9
## Made a request to bulk_update_delete endpoint, received success:true, but packages are not deleted when I check the site.
### Steps to reproduce
Steps to reproduce the behavior:
import json
import requests

data_dict = {
    "id": f"{id}",
    "organization_id": f"{ORGANIZATION_ID}"
}
url = 'http://your-site/api/3/action/bulk_update_delete'
response = requests.post(url, data=data_dict, headers=HEADERS)
response_dict = response.json()
json_pretty = json.dumps(response_dict, indent=2)
print(json_pretty)
### Expected behavior
There should be a dictionary returned as such (which I receive):
{
  "help": "http://your-site/api/3/action/help_show?name=dataset_purge",
  "success": true,
  "result": null
}
and when I view the site the packages should not appear on the frontend.
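One thing worth checking (an assumption on my part, not something confirmed in this thread): CKAN's `bulk_update_delete` action reads a list of package ids from a `datasets` key plus an `org_id`, so a payload keyed `id`/`organization_id` may be ignored while the action still reports success. A hypothetical payload shape, to be verified against the CKAN action API docs:

```python
# Hypothetical payload for bulk_update_delete; key names are an assumption —
# check them against the CKAN action API documentation before relying on this.
data_dict = {
    "datasets": ["dataset-id-1", "dataset-id-2"],  # list of package ids, not a single "id"
    "org_id": "my-organization-id",                # not "organization_id"
}
```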
### Additional details
There was a previous issue opened on a similar topic and a user said that it was fixed, but it seems to not be working again.
|
closed
|
2023-04-14T19:26:44Z
|
2023-05-27T19:54:49Z
|
https://github.com/ckan/ckan/issues/7548
|
[] |
jamxiao2025
| 1
|
google-research/bert
|
nlp
| 801
|
Intended Behaviour for Impossible (out-of-span) SQuAD 1.1 Features
|
Hello! We have a quick question regarding the featurization for BERT on SQuAD 1.1 specifically.
We noticed a confusing contradiction in your current `run_squad` implementation: regardless of how the `version_2_with_negative` flag is set, you do not discard “impossible” features (chunks of a context). Instead of discarding them, you train on them but with the span start and end indices pointing to the `[CLS]` token. However, this comment in your code indicates that you do intend to discard such features (at least for SQuAD 1.1 we would assume):
```
https://github.com/google-research/bert/blob/master/run_squad.py#L410-L411
# For training, if our document chunk does not contain an annotation
# we throw it out, since there is nothing to predict.
```
We ask since we do not see any reference in your paper to training SQuAD 1.1 with impossible contexts. Should we assume that for SQuAD 1.1 the max_sequence_length was just always longer than all SQuAD contexts, and thus no "impossible" features are ever produced?
Ultimately, we are wondering if this behavior is intentional or not for purely extractive QA (like SQuAD 1.1, as opposed to 2.0)? Are you aware of anyone using “impossible" inputs to train a model for extractive QA without an abstention objective?
Thank you for your time and insights!
|
open
|
2019-08-13T01:55:15Z
|
2019-08-13T01:55:15Z
|
https://github.com/google-research/bert/issues/801
|
[] |
Shayne13
| 0
|
vaexio/vaex
|
data-science
| 2,062
|
[BUG-REPORT] (Major) vaex hdf5 incompatible with numpy 1.22.4
|
**Description**
Vaex cannot export hdf5 files with numpy 1.22.4 arrays
**Software information**
- Vaex version (`import vaex; vaex.__version__)`: `{'vaex-core': '4.9.1', 'vaex-hdf5': '0.12.1'}`
- Vaex was installed via: pip / conda-forge / from source pip
- OS: Mac/linux
**Additional information**
The following works with numpy 1.22.3 but fails with numpy 1.22.4
```python
import vaex
print(vaex.__version__)  # {'vaex-core': '4.9.1', 'vaex-hdf5': '0.12.1'}
df = vaex.example()
df.export("file.hdf5")
```
```
In [3]: df.export("file.hdf5")
---------------------------------------------------------------------------
BufferError Traceback (most recent call last)
<ipython-input-3-c4ce35d9bf95> in <module>
----> 1 df.export("file.hdf5")
~/.pyenv/versions/3.8.11/lib/python3.8/site-packages/vaex/dataframe.py in export(self, path, progress, chunk_size, parallel, fs_options, fs)
6694 self.export_feather(path, parallel=parallel, fs_options=fs_options)
6695 elif naked_path.endswith('.hdf5'):
-> 6696 self.export_hdf5(path, progress=progress, parallel=parallel)
6697 elif naked_path.endswith('.fits'):
6698 self.export_fits(path, progress=progress)
~/.pyenv/versions/3.8.11/lib/python3.8/site-packages/vaex/dataframe.py in export_hdf5(self, path, byteorder, progress, chunk_size, parallel, column_count, writer_threads, group, mode)
6910 with Writer(path=path, group=group, mode=mode, byteorder=byteorder) as writer:
6911 writer.layout(self, progress=progressbar_layout)
-> 6912 writer.write(
6913 self,
6914 chunk_size=chunk_size,
~/.pyenv/versions/3.8.11/lib/python3.8/site-packages/vaex/hdf5/writer.py in __exit__(self, *args)
40
41 def __exit__(self, *args):
---> 42 self.close()
43
44 def layout(self, df, progress=None):
~/.pyenv/versions/3.8.11/lib/python3.8/site-packages/vaex/hdf5/writer.py in close(self)
32 def close(self):
33 if self.mmap is not None:
---> 34 self.mmap.close()
35 self.file.close()
36 self.h5.close()
BufferError: cannot close exported pointers exist
```
|
closed
|
2022-05-24T14:44:17Z
|
2023-02-16T14:54:09Z
|
https://github.com/vaexio/vaex/issues/2062
|
[] |
Ben-Epstein
| 14
|
developmentseed/lonboard
|
data-visualization
| 593
|
support classification schemes
|
Hi folks, thanks for all your work on this; lonboard has been lovely! I'm curious if there is interest in supporting commonly-used map classification schemes (a-la the `scheme` argument in geopandas's plot function) under the hood (particularly in the `viz` API)? We just added [a small function in mapclassify](https://github.com/pysal/mapclassify/blob/main/notebooks/08_manual_coloring.ipynb) that makes it much easier to grab a color set from matplotlib than the one geopandas is using manually at the moment
From a users' perspective, it would be sweet if `viz` had parity with geopandas's `explore`, though of course, it's also easy to just pass the colors in manually, so no worries if it's not in scope :)
|
closed
|
2024-08-13T22:51:57Z
|
2024-08-14T19:09:26Z
|
https://github.com/developmentseed/lonboard/issues/593
|
[] |
knaaptime
| 6
|
raphaelvallat/pingouin
|
pandas
| 298
|
`pairwise_tests` producing array of t values internally
|
I occasionally get mysterious crashes when using the pairwise t-test, e.g:
```
.../pingouin/pairwise.py", line 397, in pairwise_tests
df_ttest = ttest(
.../pingouin/parametric.py", line 310, in ttest
bf = bayesfactor_ttest(tval, nx, ny, paired=paired, alternative=alternative, r=r)
.../pingouin/bayesian.py", line 128, in bayesfactor_ttest
assert isinstance(t, (int, float)), "The T-value must be a int or a float."
AssertionError: The T-value must be a int or a float.
```
This happens when using a fairly innocuous dataframe:
```
between dv
0 2.0 4.00
1 1.0 7.00
2 3.0 13.00
3 2.0 8.00
4 2.0 6.00
5 2.0 5.50
6 1.0 10.00
7 1.0 10.00
8 2.0 10.00
9 1.0 6.00
10 2.0 9.00
11 1.0 13.00
12 2.0 12.00
13 2.0 12.00
14 1.0 7.75
15 1.0 5.50
16 1.0 6.00
17 2.0 9.00
18 1.0 8.50
```
The issue seems to arise from the fact that the `t` variable being checked by the assert is actually a single-element array (e.g. `[0.521]`)
I've been unable to figure out the exact cause of the bug or replicate it, any help with this would be appreciated.
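A hypothetical caller-side workaround (not a fix for the underlying bug) is to coerce the value to a Python scalar before it reaches the assert:

```python
import numpy as np

# Sketch of the failure mode: a single-element ndarray is not an int or float,
# so the isinstance check in bayesfactor_ttest rejects it.
t = np.array([0.521])
assert not isinstance(t, (int, float))

# Coercing to a Python scalar satisfies the check.
t_scalar = float(np.asarray(t).ravel()[0])  # equivalently t.item()
assert isinstance(t_scalar, float)
```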
|
closed
|
2022-08-26T00:04:18Z
|
2022-09-09T22:37:28Z
|
https://github.com/raphaelvallat/pingouin/issues/298
|
[
"bug :boom:"
] |
George3d6
| 4
|
mitmproxy/pdoc
|
api
| 723
|
Support `#:` annotation
|
#### Problem Description
Document variables in a compatible and terse way.
#### Proposal
Generate documentation from comments that start with `#:`
#### Alternatives
Triple-quote on a new line below the variable declaration. Which is rather confusing...
#### Additional context
This code:
```py
class Dog:
    """Woof"""

billy = Dog()  #: Barking dog
```
would generate:

|
closed
|
2024-08-08T12:32:38Z
|
2024-08-09T12:25:57Z
|
https://github.com/mitmproxy/pdoc/issues/723
|
[
"enhancement"
] |
savchenko
| 3
|
axnsan12/drf-yasg
|
django
| 554
|
Missing validation of query parameters in a GET request
|
I’m developing an app with Django 3.
I have an endpoint that I’m trying to use in this way:
`/queryenpoint/?entity=A&id=1`
In my view, that I want to use just for GET requests (at least for now), I have the following configuration for the swagger_auto_schema decorator:
```
@swagger_auto_schema(manual_parameters=[
Parameter('entity', IN_QUERY, type=TYPE_STRING, required=True, enum=possible_entities),
Parameter('id', IN_QUERY, type=TYPE_NUMBER, required=True)])
```
This allowed me to see in my swagger editor the query possibilities as I wanted.
Until here great!
But in my view code, I still have to do the validation of the entity and id values as I’m using them to retrieve different querysets and different serializer classes.
Some colleagues with more experience than me (and not using Django) told me that when they use swagger for this type of endpoint, with query parameters, it does that validation automatically (entity being one of the enum options and id being an integer) and only lets the request pass if the values respect the configuration. If the entity or id values do not respect the requisites, an error message should be shown.
But if I don't do the validation myself in my view code, my app crashes (in the methods `get_queryset` and `get_serializer_class`) when I pass an invalid option for the entity or a string for the id value. I managed to solve the problem by changing my code to perform validation in these methods. But I would like to avoid repeating this type of validation in my code every time I want to use an endpoint like this.
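For comparison, the validation the view currently repeats can be sketched framework-free (the enum values here are assumptions taken from the snippets below):

```python
# Hypothetical, framework-free sketch of the query-parameter validation
# the view currently has to repeat by hand.
POSSIBLE_ENTITIES = {"A", "B"}  # assumed enum values

def validate_query_params(params):
    errors = {}
    if params.get("entity") not in POSSIBLE_ENTITIES:
        errors["entity"] = "must be one of: " + ", ".join(sorted(POSSIBLE_ENTITIES))
    try:
        int(params.get("id", ""))
    except (TypeError, ValueError):
        errors["id"] = "must be an integer"
    return errors
```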
Is there anything I missed from your documentation that solves my problem?
My parameters in swagger are defined in this way:
```
"parameters": [ ...
{
"name": "entity",
"in": "query",
"required": true,
"type": "string",
"enum": [
"A",
"B"
]
}, …
```
My colleague uses a configuration like this:
```
"parameters": [ ... {
    "name": "entity",
    "in": "query",
    "required": true,
    "schema": {
        "type": "string",
        "enum": [
            "A",
            "B"
        ]
    }
}, ...
```
I don’t know if this configuration, defining the type inside a schema, would solve my problem or not.
But is there any way to obtain this with the `@swagger_auto_schema`?
|
open
|
2020-03-06T08:46:27Z
|
2025-03-07T12:15:25Z
|
https://github.com/axnsan12/drf-yasg/issues/554
|
[
"triage"
] |
MartaLopesGomes
| 0
|
NVIDIA/pix2pixHD
|
computer-vision
| 241
|
question about fp16
|
Warning: multi_tensor_applier fused unscale kernel is unavailable, possibly because apex was installed without --cuda_ext --cpp_ext. Using Python fallback. Original ImportError was: ModuleNotFoundError("No module named 'amp_C'")
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 32768.0
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 16384.0
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 8192.0
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 4096.0
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 2048.0
Traceback (most recent call last):
File "train.py", line 85, in <module>
with amp.scale_loss(loss_G, optimizer_G) as scaled_loss: scaled_loss.backward()
File "/home/cvv/anaconda3/envs/pytorch15/lib/python3.7/contextlib.py", line 119, in __exit__
next(self.gen)
File "/home/cvv/anaconda3/envs/pytorch15/lib/python3.7/site-packages/apex/amp/handle.py", line 123, in scale_loss
optimizer._post_amp_backward(loss_scaler)
File "/home/cvv/anaconda3/envs/pytorch15/lib/python3.7/site-packages/apex/amp/_process_optimizer.py", line 249, in post_backward_no_master_weights
post_backward_models_are_masters(scaler, params, stashed_grads)
File "/home/cvv/anaconda3/envs/pytorch15/lib/python3.7/site-packages/apex/amp/_process_optimizer.py", line 135, in post_backward_models_are_masters
scale_override=(grads_have_scale, stashed_have_scale, out_scale))
File "/home/cvv/anaconda3/envs/pytorch15/lib/python3.7/site-packages/apex/amp/scaler.py", line 184, in unscale_with_stashed
out_scale/stashed_have_scale)
File "/home/cvv/anaconda3/envs/pytorch15/lib/python3.7/site-packages/apex/amp/scaler.py", line 148, in unscale_with_stashed_python
self.dynamic)
File "/home/cvv/anaconda3/envs/pytorch15/lib/python3.7/site-packages/apex/amp/scaler.py", line 22, in axpby_check_overflow_python
cpu_sum = float(model_grad.float().sum())
RuntimeError: CUDA error: an illegal memory access was encountered
|
open
|
2021-02-17T01:11:31Z
|
2021-02-25T02:33:42Z
|
https://github.com/NVIDIA/pix2pixHD/issues/241
|
[] |
najingligong1111
| 1
|
ageitgey/face_recognition
|
machine-learning
| 885
|
How to Enrol Face Timeout
|
* face_recognition version:
* Python version: 3
* Operating System: raspian stretch
### Description
I am trying to make a simple face enrolment script: after the user id is entered, the person is asked to look at the camera.
If there is no face detected on the camera after, say, 5 seconds, I want the script to go back to the beginning (i.e. cancel the existing enrolment and ask for the next user id to start over).
### What I Did
I can't figure out how to add a timeout to the face detection portion, so it will not stay in face detection mode indefinitely waiting for a face to be detected.
How do you tackle such a condition?
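A hypothetical sketch of such a timeout loop, with `detect_face` standing in for the real per-frame detection call (an assumption, not the library's API):

```python
import time

# Poll a detection callable until it succeeds or a deadline passes.
def wait_for_face(detect_face, timeout_s=5.0, poll_s=0.0):
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if detect_face():
            return True   # face found within the time budget
        if poll_s:
            time.sleep(poll_s)
    return False          # timed out: caller can cancel enrolment and restart
```

In the real script, `detect_face` would grab a camera frame and run the face detector on it.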
|
open
|
2019-07-12T20:07:41Z
|
2019-07-12T20:09:12Z
|
https://github.com/ageitgey/face_recognition/issues/885
|
[] |
akeilox
| 0
|
ultralytics/yolov5
|
deep-learning
| 13,349
|
Why is the GPU usage low and the CPU usage high when training the model?
|
### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Hello, I train my data with YOLOv5.7 and it works well. But I find that the GPU usage is very low and the CPU usage is high; my training command is like below:
```bash
python train.py --weights xxx --batch_size 8 --imgsz 832 --workers 4 --device 0
```
Well, if I set the workers parameter to 8 or bigger, the CPU usage may reach 100%; the training process looks like below:

What should I do to get higher GPU usage?
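A generic (not YOLOv5-specific) way to check whether data loading is the bottleneck is to time loader waits separately from compute:

```python
import time

# Generic timing sketch: split each iteration into time spent waiting on the
# data loader vs. time spent in the training step.
def profile_loop(batches, step_fn):
    wait, compute = 0.0, 0.0
    it = iter(batches)
    while True:
        t0 = time.monotonic()
        try:
            batch = next(it)       # dataloader wait
        except StopIteration:
            break
        t1 = time.monotonic()
        step_fn(batch)             # forward/backward/step
        t2 = time.monotonic()
        wait += t1 - t0
        compute += t2 - t1
    return wait, compute
```

If `wait` dominates `compute`, the GPU is starved and more workers, caching, or faster storage is the usual remedy.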
### Additional
_No response_
|
open
|
2024-10-09T09:24:22Z
|
2024-10-27T13:30:33Z
|
https://github.com/ultralytics/yolov5/issues/13349
|
[
"question"
] |
Assassintears
| 3
|
dynaconf/dynaconf
|
fastapi
| 491
|
Error with dynaconf+django+pytest
|
I have a Django project with pytest and dynaconf.
In `conftest.py` I force the dynaconf environment as seen in the dynaconf documentation https://www.dynaconf.com/advanced/#a-python-program
```python
@pytest.fixture(scope="session", autouse=True)
def set_test_settings():
from dynaconf import settings
settings.configure(FORCE_ENV_FOR_DYNACONF='pytest')
```
Running tests with pytest results in the following error:
```python
AttributeError: 'Settings' object has no attribute 'DEFAULT_TABLESPACE'
```
This happens with sqlite. In an environment with postgres I get this error:
```
AttributeError: 'Settings' object has no attribute 'MIGRATION_MODULES'
```
It looks like `set_test_settings` messes with Django's default settings. When I remove `set_test_settings` the tests are running.
I have set up a Django project with pytest and dynaconf to reproduce the error: https://github.com/alexander-jacob/django-pytest-dynaconf:
conftest: https://github.com/alexander-jacob/django-pytest-dynaconf/blob/master/app/tests/conftest.py
test: https://github.com/alexander-jacob/django-pytest-dynaconf/blob/master/app/tests/test_app.py
|
closed
|
2020-12-15T09:33:24Z
|
2021-08-20T19:14:20Z
|
https://github.com/dynaconf/dynaconf/issues/491
|
[
"bug",
"django",
"backport3.1.5"
] |
alexander-jacob
| 4
|
dnouri/nolearn
|
scikit-learn
| 271
|
'RuntimeError: maximum recursion depth exceeded' when trying to serialize model
|
When I try
``` python
joblib.dump(model, 'my_model.pkl', compress=9)
```
or to pickle the model, I get:
```
File "/home/moose/.local/lib/python2.7/site-packages/sklearn/externals/joblib/numpy_pickle.py", line 280, in save
return Pickler.save(self, obj)
File "/usr/lib/python2.7/pickle.py", line 331, in save
self.save_reduce(obj=obj, *rv)
File "/usr/lib/python2.7/pickle.py", line 425, in save_reduce
save(state)
File "/home/moose/.local/lib/python2.7/site-packages/sklearn/externals/joblib/numpy_pickle.py", line 280, in save
return Pickler.save(self, obj)
File "/usr/lib/python2.7/pickle.py", line 286, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/lib/python2.7/pickle.py", line 655, in save_dict
self._batch_setitems(obj.iteritems())
File "/usr/lib/python2.7/pickle.py", line 687, in _batch_setitems
save(v)
File "/home/moose/.local/lib/python2.7/site-packages/sklearn/externals/joblib/numpy_pickle.py", line 280, in save
return Pickler.save(self, obj)
File "/usr/lib/python2.7/pickle.py", line 286, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/lib/python2.7/pickle.py", line 606, in save_list
self._batch_appends(iter(obj))
File "/usr/lib/python2.7/pickle.py", line 639, in _batch_appends
save(x)
File "/home/moose/.local/lib/python2.7/site-packages/sklearn/externals/joblib/numpy_pickle.py", line 280, in save
return Pickler.save(self, obj)
File "/usr/lib/python2.7/pickle.py", line 331, in save
self.save_reduce(obj=obj, *rv)
File "/usr/lib/python2.7/pickle.py", line 425, in save_reduce
save(state)
File "/home/moose/.local/lib/python2.7/site-packages/sklearn/externals/joblib/numpy_pickle.py", line 280, in save
return Pickler.save(self, obj)
File "/usr/lib/python2.7/pickle.py", line 286, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/lib/python2.7/pickle.py", line 655, in save_dict
self._batch_setitems(obj.iteritems())
File "/usr/lib/python2.7/pickle.py", line 687, in _batch_setitems
save(v)
File "/home/moose/.local/lib/python2.7/site-packages/sklearn/externals/joblib/numpy_pickle.py", line 280, in save
return Pickler.save(self, obj)
File "/usr/lib/python2.7/pickle.py", line 286, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/lib/python2.7/pickle.py", line 606, in save_list
self._batch_appends(iter(obj))
File "/usr/lib/python2.7/pickle.py", line 639, in _batch_appends
save(x)
File "/home/moose/.local/lib/python2.7/site-packages/sklearn/externals/joblib/numpy_pickle.py", line 280, in save
return Pickler.save(self, obj)
File "/usr/lib/python2.7/pickle.py", line 286, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/lib/python2.7/pickle.py", line 554, in save_tuple
save(element)
File "/home/moose/.local/lib/python2.7/site-packages/sklearn/externals/joblib/numpy_pickle.py", line 280, in save
return Pickler.save(self, obj)
File "/usr/lib/python2.7/pickle.py", line 331, in save
self.save_reduce(obj=obj, *rv)
File "/usr/lib/python2.7/pickle.py", line 425, in save_reduce
save(state)
File "/home/moose/.local/lib/python2.7/site-packages/sklearn/externals/joblib/numpy_pickle.py", line 280, in save
return Pickler.save(self, obj)
File "/usr/lib/python2.7/pickle.py", line 286, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/lib/python2.7/pickle.py", line 655, in save_dict
self._batch_setitems(obj.iteritems())
File "/usr/lib/python2.7/pickle.py", line 686, in _batch_setitems
save(k)
File "/home/moose/.local/lib/python2.7/site-packages/sklearn/externals/joblib/numpy_pickle.py", line 280, in save
return Pickler.save(self, obj)
File "/usr/lib/python2.7/pickle.py", line 271, in save
pid = self.persistent_id(obj)
RuntimeError: maximum recursion depth exceeded
```
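A common first thing to try (a standard Python escape hatch, not something from this thread) is raising the interpreter's recursion limit before pickling:

```python
import sys

# Raise the frame limit (the default is usually 1000) before calling
# joblib.dump; deeply nested model objects can exhaust it during pickling.
sys.setrecursionlimit(10000)
```

If the traceback truly cycles (the same frames repeating forever), this only delays the failure and the object graph itself needs fixing.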
|
closed
|
2016-05-27T09:18:04Z
|
2016-11-02T15:37:09Z
|
https://github.com/dnouri/nolearn/issues/271
|
[] |
MartinThoma
| 6
|
deeppavlov/DeepPavlov
|
nlp
| 907
|
ner_model(['Bob Ross lived in Florida']) is giving error
|
raise type(e)(node_def, op, message)
InvalidArgumentError: Requested more than 0 entries, but params is empty. Params shape: [1,7,0]
|
closed
|
2019-06-28T08:34:45Z
|
2020-05-13T09:47:24Z
|
https://github.com/deeppavlov/DeepPavlov/issues/907
|
[] |
puneetkochar016
| 1
|
ploomber/ploomber
|
jupyter
| 660
|
Pipeline can't be run with env file having .yml suffix
|
## Description
Pipeline can't be run with env.yml file. It seems it won't get read at all since variables stated in the env.yml are unknown to the pipeline run. Renaming file to env.yaml solved the issue for me.
## Replication
Files for replication of minimal example are attached [here](https://github.com/ploomber/projects/files/8303640/test.zip).
## Task
It would be nice to not be dependent on the suffix. Adjust the code, so it treats .yaml and .yml suffixes of env file equally.
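The adjustment the task asks for can be sketched as a suffix-agnostic lookup; the file names and search order here are assumptions, not ploomber's actual code:

```python
from pathlib import Path

# Hypothetical suffix-agnostic env-file lookup.
def find_env_file(directory="."):
    """Return the env file path, accepting both .yaml and .yml suffixes."""
    for name in ("env.yaml", "env.yml"):
        candidate = Path(directory) / name
        if candidate.is_file():
            return candidate
    return None
```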
|
closed
|
2022-03-18T14:42:16Z
|
2022-03-18T14:42:45Z
|
https://github.com/ploomber/ploomber/issues/660
|
[] |
edublancas
| 1
|
ray-project/ray
|
machine-learning
| 51,642
|
[core] Unify `CoreWorker::Exit` and `CoreWorker::Shutdown`
|
### Description
See https://github.com/ray-project/ray/pull/51582#discussion_r2010500080 for more details.
### Use case
_No response_
|
open
|
2025-03-24T16:52:35Z
|
2025-03-24T16:52:44Z
|
https://github.com/ray-project/ray/issues/51642
|
[
"enhancement",
"core"
] |
kevin85421
| 0
|
FlareSolverr/FlareSolverr
|
api
| 802
|
Challenge not detected! - (There is no challenge but the cookies wont work)
|
### Have you checked our README?
- [X] I have checked the README
### Have you followed our Troubleshooting?
- [X] I have followed your Troubleshooting
### Is there already an issue for your problem?
- [X] I have checked older issues, open and closed
### Have you checked the discussions?
- [X] I have read the Discussions
### Environment
```markdown
- FlareSolverr version: 3.2.1
- Last working FlareSolverr version: ?
- Operating system: Windows
- Are you using Docker: no
- FlareSolverr User-Agent (see log traces or / endpoint): none
- Are you using a VPN: no
- Are you using a Proxy: no
- Are you using Captcha Solver: no
- If using captcha solver, which one:
- URL to test this issue: https://www.cyberbackgroundchecks.com/email
```
### Description
Just open the Windows binary, then try to send a request for that page; for some reason it will only return `__cf_bm` and that's all (I mean there is no other Cloudflare cookie), and using that cookie with requests won't work.
I tried with a VPN so that I could trigger a challenge by hand (in the browser); when I get one, the other Cloudflare cookie gets generated (I don't remember the name, but something like clearance).
With that cookie the request works, but without it, it won't, even if I take the cookies from the browser when there was no challenge.
### Logged Error Messages
```text
2023-06-16 02:31:08 INFO Incoming request => POST /v1 body: {'cmd': 'request.get', 'url': 'https://www.cyberbackgroundchecks.com/email', 'maxTimeout': 60000}
2023-06-16 02:31:22 INFO Challenge not detected!
2023-06-16 02:31:22 INFO Response in 13.831 s
2023-06-16 02:31:22 INFO 127.0.0.1 POST http://localhost:8191/v1 200 OK
```
### Screenshots
Well thats everything I guess, If anyone knows anything about this, please help me thanks <3
|
closed
|
2023-06-16T00:42:08Z
|
2023-06-17T04:01:12Z
|
https://github.com/FlareSolverr/FlareSolverr/issues/802
|
[
"more information needed"
] |
Nixitov
| 11
|
streamlit/streamlit
|
streamlit
| 10,815
|
st.html inside an st.tab does not render for the inactive tabs at the moment of rendering
|
### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [x] I added a very descriptive title to this issue.
- [x] I have provided sufficient information below to help reproduce this issue.
### Summary
I have tested it on:
python 3.11.10
streamlit 1.43.1
on Chrome, Safari, and Firefox
You'll notice that the active tab does have content but switching to other tabs achieves nothing. If you go to e.g. Tab 2 and refresh the page, you should have content in Tab 2 but not Tab 1 and 3.
### Reproducible Code Example
```Python
import streamlit as st
tabs = st.tabs(["Tab 1", "Tab 2", "Tab 3"])
for tab in tabs:
tab.html("<h1>Hello</h1><p>This is a test</p>")
```
### Steps To Reproduce
_No response_
### Expected Behavior
That all three tabs have the same content rendered
### Current Behavior
_No response_
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.43.1
- Python version: 3.11.10
- Operating System: MacOS
- Browser: Chrome, Firefox, Safari
### Additional Information
_No response_
|
closed
|
2025-03-18T10:45:22Z
|
2025-03-18T22:13:50Z
|
https://github.com/streamlit/streamlit/issues/10815
|
[
"type:bug",
"status:confirmed",
"priority:P2",
"feature:st.tabs"
] |
colin-kerkhof
| 3
|
giotto-ai/giotto-tda
|
scikit-learn
| 695
|
!! Someone please build python wheels for giotto-tda on aarch64 (Nvidia Jetson)
|
The problem: Can't install giotto-tda, giotto-ph, giotto-time, etc. on aarch64 (NVIDIA Jetson) architectures. I get the error:
ERROR: Could not build wheels for giotto-tda.
The reason: Making giotto-based analysis tools usable on embedded platforms such as the NVIDIA Jetson would spotlight this amazing toolbox as a centerpiece for purpose-built machine learning and AI applications going forward. One of the main problems with the install seems to be the turf C platform adapter used in the required install of giotto-ph: it does not appear to include the aarch64 architecture as an option, and the compile fails. I do not have enough experience to modify the CMake files, however. Please, someone, adapt this amazing toolbox to aarch64!
Thank you!
Here is my specific setup:
NVIDIA Jetson Orin AGX
L4T 34.1.1 [ JetPack 5.0-b114 ]
Ubuntu 20.04.4 LTS
Kernel Version: 5.10.65-tegra
CUDA NOT_INSTALLED
CUDA Architecture: NONE
OpenCV version: 4.5.0
OpenCV Cuda: YES
CUDNN: 8.3.2.49
TensorRT: 8.4.0.11
Vision Works: NOT_INSTALLED
VPI: ii libnvvpi2 2.0.14 arm64 NVIDIA Vision Programming Interface library
Vulcan: 1.3.203
|
open
|
2024-04-10T15:54:28Z
|
2024-05-31T20:59:27Z
|
https://github.com/giotto-ai/giotto-tda/issues/695
|
[
"enhancement"
] |
silent-code
| 3
|
LAION-AI/Open-Assistant
|
python
| 3,223
|
Cannot open other tabs with custom preset
|
Cannot open other tabs with custom preset
https://www.youtube.com/watch?v=jTUiHbFnbP8
|
open
|
2023-05-24T13:03:30Z
|
2024-05-25T13:47:31Z
|
https://github.com/LAION-AI/Open-Assistant/issues/3223
|
[
"bug",
"website"
] |
echo0x22
| 2
|
PokeAPI/pokeapi
|
graphql
| 948
|
Generation 8 BDSP doesn't have a Pokedex associated to the VersionGroup
|
Similar issue to the other bug I reported, only simpler:
BDSP don't have a Pokedex associated to the Version Group
`api/v2/version-group/brilliant-diamond-and-shining-pearl`
Not sure if it should be 'original-sinnoh' or 'extended-sinnoh' or something else

|
closed
|
2023-11-01T03:13:15Z
|
2024-01-11T10:41:41Z
|
https://github.com/PokeAPI/pokeapi/issues/948
|
[] |
TonyCollett
| 1
|
pydantic/pydantic-ai
|
pydantic
| 401
|
Error when Passing Message History Between Agents in Multi-Agent System [ openai.BadRequestError: Error code: 400 ]
|
**Description**:

I am working on a multi-agent system using Pydantic AI, where I have two agents: a `dice_game_agent` and a `record_agent`. The `dice_game_agent` calls the `record_agent` when needed. However, when I try to pass the message history as part of the `run` or `run_sync` call, I encounter the following error.
Here’s a simplified example of the code:
```python
@dice_game_agent.tool
async def record_data(ctx: RunContext[PlayerData]) -> str:
# Debugging the context's messages
await ctx.deps.recorder_agent.run(f"Recorded player {ctx.deps.name} with guess {ctx.deps.guessed_number}.",
deps=ctx.deps,
message_history=ctx.messages
)
```
**Error Stack Trace:**
```plaintext
Traceback (most recent call last):
File "pydantic_ai/playground/result_validate.py", line 79, in <module>
main()
File "pydantic_ai/playground/result_validate.py", line 74, in main
dice_result = dice_game_agent.run_sync(f'My guess is {player_data.guessed_number}', deps=player_data)
...
.....
openai.BadRequestError: Error code: 400 - {'error': {'message': "An assistant message with 'tool_calls' must be followed by tool messages responding to each 'tool_call_id'. The following tool_call_ids did not have response messages: call_j7LGnEiLa7v5FNMywXRE7ojw, call_UqTnrlrCcRK7emzE2d3RcAPx", 'type': 'invalid_request_error', 'param': 'messages.[5].role', 'code': None}}
...
openai.BadRequestError: Error code: 400 - {'error': {'message': "An assistant message with 'tool_calls' must be followed by tool messages responding to each 'tool_call_id'...}}
```
This is because I pass message history during the `run()` method.
**Full Code Output:**
```plaintext
pydantic_ai version: 0.0.13
21:34:31.642 dice_game_agent run prompt=My guess is 4
21:34:31.643 preparing model and tools run_step=1
21:34:31.643 model request
21:34:32.410 handle model response
21:34:32.410 running tools=['get_player_name']
21:34:32.411 preparing model and tools run_step=2
21:34:32.411 model request
21:34:33.411 handle model response
21:34:33.412 running tools=['roll_die', 'record_data']
21:34:33.413 recorder_agent run prompt=Recorded player Alice with guess 4.
21:34:33.414 preparing model and tools run_step=1
21:34:33.414 model request
Traceback (most recent call last):
File "pydantic_ai/playground/result_validate.py", line 79, in <module>
main()
File "pydantic_ai/playground/result_validate.py", line 74, in main
dice_result = dice_game_agent.run_sync(f'My guess is {player_data.guessed_number}', deps=player_data)
File "miniconda3/envs/pydantic_ai/lib/python3.10/site-packages/pydantic_ai/agent.py", line 319, in run_sync
return asyncio.get_event_loop().run_until_complete(
File "miniconda3/envs/pydantic_ai/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
return future.result()
File "miniconda3/envs/pydantic_ai/lib/python3.10/site-packages/pydantic_ai/agent.py", line 259, in run
final_result, tool_responses = await self._handle_model_response(model_response, deps, messages)
File "miniconda3/envs/pydantic_ai/lib/python3.10/site-packages/pydantic_ai/agent.py", line 854, in _handle_model_response
return await self._handle_structured_response(tool_calls, deps, conv_messages)
File "miniconda3/envs/pydantic_ai/lib/python3.10/site-packages/pydantic_ai/agent.py", line 901, in _handle_structured_response
parts += await self._process_function_tools(
File "miniconda3/envs/pydantic_ai/lib/python3.10/site-packages/pydantic_ai/agent.py", line 963, in _process_function_tools
task_results: Sequence[_messages.ModelRequestPart] = await asyncio.gather(*tasks)
File "miniconda3/envs/pydantic_ai/lib/python3.10/site-packages/pydantic_ai/tools.py", line 256, in run
response_content = await function(*args, **kwargs)
File "pydantic_ai/playground/result_validate.py", line 51, in record_data
await ctx.deps.recorder_agent.run(f"Recorded player {ctx.deps.name} with guess {ctx.deps.guessed_number}.",
File "miniconda3/envs/pydantic_ai/lib/python3.10/site-packages/pydantic_ai/agent.py", line 251, in run
model_response, request_cost = await agent_model.request(messages, model_settings)
File "miniconda3/envs/pydantic_ai/lib/python3.10/site-packages/pydantic_ai/models/openai.py", line 151, in request
response = await self._completions_create(messages, False, model_settings)
File "miniconda3/envs/pydantic_ai/lib/python3.10/site-packages/pydantic_ai/models/openai.py", line 189, in _completions_create
return await self.client.chat.completions.create(
File "miniconda3/envs/pydantic_ai/lib/python3.10/site-packages/openai/resources/chat/completions.py", line 1661, in create
return await self._post(
File "miniconda3/envs/pydantic_ai/lib/python3.10/site-packages/openai/_base_client.py", line 1843, in post
return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
File "miniconda3/envs/pydantic_ai/lib/python3.10/site-packages/openai/_base_client.py", line 1537, in request
return await self._request(
File "miniconda3/envs/pydantic_ai/lib/python3.10/site-packages/openai/_base_client.py", line 1638, in _request
raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "An assistant message with 'tool_calls' must be followed by tool messages responding to each 'tool_call_id'. The following tool_call_ids did not have response messages: call_j7LGnEiLa7v5FNMywXRE7ojw, call_UqTnrlrCcRK7emzE2d3RcAPx", 'type': 'invalid_request_error', 'param': 'messages.[5].role', 'code': None}}
...
```
However, when I remove `message_history=ctx.messages`, the code runs without any issues, and I get the expected output.
```bash
pydantic_ai version: 0.0.13
21:33:55.956 dice_game_agent run prompt=My guess is 4
21:33:55.956 preparing model and tools run_step=1
21:33:55.956 model request
21:33:56.703 handle model response
21:33:56.704 running tools=['get_player_name']
21:33:56.704 preparing model and tools run_step=2
21:33:56.705 model request
21:33:57.275 handle model response
21:33:57.276 running tools=['roll_die']
21:33:57.277 preparing model and tools run_step=3
21:33:57.278 model request
21:33:57.701 handle model response
21:33:57.701 running tools=['record_data']
21:33:57.702 recorder_agent run prompt=Recorded player Alice with guess 4.
21:33:57.703 preparing model and tools run_step=1
21:33:57.703 model request
21:33:58.583 handle model response
21:33:58.584 running tools=['record_player_data']
Recorded player Alice with guess 4.
21:33:58.585 preparing model and tools run_step=2
21:33:58.586 model request
21:34:00.325 handle model response
21:34:00.328 preparing model and tools run_step=4
21:34:00.328 model request
21:34:01.266 handle model response
Congratulations, Alice! You guessed the number 4, and that’s exactly what I rolled. You're a winner! 🎉
```
# Value of `ctx.messages`
```python
ctx.messages: [
ModelRequest(
parts=[
SystemPromptPart(
content=(
"You're a dice game, you should roll the die and see if the number you get back matches the us"
"er's guess. If so, tell them they're a winner. Use the player's name in the response.Always r"
"ecord the player's name and the number data they guessed."
),
part_kind='system-prompt',
),
UserPromptPart(
content='My guess is 4',
timestamp=datetime.datetime(2024, 12, 17, 21, 56, 56, 11414, tzinfo=datetime.timezone.utc),
part_kind='user-prompt',
),
],
kind='request',
),
ModelResponse(
parts=[
ToolCallPart(
tool_name='get_player_name',
args=ArgsJson(
args_json='{}',
),
tool_call_id='call_WYBTd4dda3qZndrlpCQwrd8z',
part_kind='tool-call',
),
],
timestamp=datetime.datetime(2024, 12, 17, 21, 56, 56, tzinfo=datetime.timezone.utc),
kind='response',
),
ModelRequest(
parts=[
ToolReturnPart(
tool_name='get_player_name',
content='Alice',
tool_call_id='call_WYBTd4dda3qZndrlpCQwrd8z',
timestamp=datetime.datetime(2024, 12, 17, 21, 56, 56, 831623, tzinfo=datetime.timezone.utc),
part_kind='tool-return',
),
],
kind='request',
),
ModelResponse(
parts=[
ToolCallPart(
tool_name='roll_die',
args=ArgsJson(
args_json='{}',
),
tool_call_id='call_xRqsQ6fjZUX0XaRKGMkYq5sr',
part_kind='tool-call',
),
ToolCallPart(
tool_name='record_data',
args=ArgsJson(
args_json='{}',
),
tool_call_id='call_wEMrLYrJnYw5622glg1UaIJT',
part_kind='tool-call',
),
],
timestamp=datetime.datetime(2024, 12, 17, 21, 56, 56, tzinfo=datetime.timezone.utc),
kind='response',
),
] (list) len=4
```
---
**Question:**
- Is this supposed to work, or am I doing something wrong?
- Is there a better way to pass message history between agents when one agent calls another?
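One workaround I could imagine (my own sketch, not a pydantic-ai API; the dicts below are simplified stand-ins for the real `ModelRequest`/`ModelResponse` objects) is to strip tool-call parts that have no matching tool-return before reusing a history:

```python
# Hypothetical sketch: drop tool calls without a matching tool-return before
# passing a message history on to another agent. The message shapes are
# simplified dicts, not pydantic-ai's actual classes.

def drop_unanswered_tool_calls(messages):
    # Collect the ids of every tool call that already has a response.
    answered = {
        part["tool_call_id"]
        for msg in messages if msg["kind"] == "request"
        for part in msg["parts"] if part.get("part_kind") == "tool-return"
    }
    trimmed = []
    for msg in messages:
        if msg["kind"] == "response":
            parts = [
                p for p in msg["parts"]
                if p.get("part_kind") != "tool-call"
                or p["tool_call_id"] in answered
            ]
            if not parts:  # drop responses left with no parts at all
                continue
            msg = {**msg, "parts": parts}
        trimmed.append(msg)
    return trimmed
```

Applied to the history dumped above, this would drop the final response whose `roll_die`/`record_data` calls have no returns yet, which is exactly what the OpenAI API complains about.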
#### Code :
```python
import logfire
import dotenv
import random
import pydantic_ai
from devtools import debug
from dataclasses import dataclass
from pydantic_ai import Agent, RunContext
dotenv.load_dotenv()
# # 'if-token-present' means nothing will be sent (and the example will work) if you don't have logfire configured
logfire.configure()
@dataclass
class PlayerData:
name: str
guessed_number: int
recorder_agent: Agent
recorder_agent = Agent(
'openai:gpt-4o-mini',
deps_type=PlayerData,
system_prompt=(
"""
You record the player's name and the number they guessed.
"""
),
)
@recorder_agent.tool
async def record_player_data(ctx: RunContext[PlayerData]) -> str:
record_str = f"Recorded player {ctx.deps.name} with guess {ctx.deps.guessed_number}."
print(record_str)
return record_str
dice_game_agent = Agent(
'openai:gpt-4o-mini',
deps_type=PlayerData,
system_prompt=(
"You're a dice game, you should roll the die and see if the number "
"you get back matches the user's guess. If so, tell them they're a winner. "
"Use the player's name in the response."
"Always record the player's name and the number data they guessed."
),
)
@dice_game_agent.tool
async def record_data(ctx: RunContext[PlayerData]) -> str:
# debug(ctx.messages)
await ctx.deps.recorder_agent.run(f"Recorded player {ctx.deps.name} with guess {ctx.deps.guessed_number}.",
deps=ctx.deps,
message_history=ctx.messages
)
@dice_game_agent.tool_plain
async def roll_die() -> str:
"""Roll a six-sided die and return the result."""
return str(random.randint(1, 6))
@dice_game_agent.tool
async def get_player_name(ctx: RunContext[PlayerData]) -> str:
"""Get the player's name."""
return ctx.deps.name
def main():
# Print pydantic_ai's version
print(f"pydantic_ai version: {pydantic_ai.__version__}")
player_data = PlayerData(name="Alice", guessed_number=4, recorder_agent=recorder_agent)
dice_result = dice_game_agent.run_sync(f'My guess is {player_data.guessed_number}', deps=player_data)
print(dice_result.data)
if __name__ == '__main__':
main()
```
---
**Environment:**
- Pydantic AI version: 0.0.13
- Python version: 3.10
- Relevant dependencies: Pydantic AI, OpenAI
|
closed
|
2024-12-17T21:53:28Z
|
2024-12-18T13:00:19Z
|
https://github.com/pydantic/pydantic-ai/issues/401
|
[] |
ishswar
| 1
|
ray-project/ray
|
pytorch
| 50,925
|
[Core] Cleanup ray.init() logging related options
|
1. Currently there are multiple log related options in `ray.init()`, they should be reevaluated and be cleaned up if needed
2. We should also reevaluate the `logging_config` options to see whether it should still be experimental
|
open
|
2025-02-26T20:50:31Z
|
2025-02-26T20:50:31Z
|
https://github.com/ray-project/ray/issues/50925
|
[
"P1",
"core",
"observability"
] |
MengjinYan
| 0
|
iperov/DeepFaceLab
|
deep-learning
| 5,572
|
Cant get SAEHD to work
|
THIS IS NOT TECH SUPPORT FOR NEWBIE FAKERS
POST ONLY ISSUES RELATED TO BUGS OR CODE
## Expected behavior
I'm trying to train with SAEHD. Quick96 works fine, but SAEHD will not work for me at all. I'm using a Ryzen 9 5900x and an Nvidia RTX 3060 Ti.
## Actual behavior
Getting error after error. Sometimes it gets stuck on the sample-loading stage. Sometimes I get a generic crash. Sometimes it says the paging file is too small.
## Steps to reproduce
I tried increasing the paging file to 64 GB and then to 90 GB. I tried using the CPU instead of the GPU; that gives me a paging-file-size error. If I use the GPU, it gets stuck on sample loading. I've tried switching the driver to the Studio version, and I've tried decreasing the batch size to 4. Nothing has fixed it so far.
## Other relevant information
- Windows 11
|
open
|
2022-10-25T17:34:32Z
|
2023-06-08T23:19:03Z
|
https://github.com/iperov/DeepFaceLab/issues/5572
|
[] |
fzrdfl
| 1
|
LibrePhotos/librephotos
|
django
| 826
|
librephotos-proxy wrong filename encoding when downloading file
|
# 🐛 Bug Report
Everything works fine, except for downloading images. I get 404 and this in the librephotos-proxy log :
`2023/04/16 14:55:37 [error] 25#25: *63854 open() "/data//xxxx/2023-01-H�st-Vinter-Jul-Taj-10�r/DSC_0240.JPG" failed (2: No such file or directory), client: 172.23.0.1, server: , request: "GET /media/photos/xxxxxx HTTP/1.1", upstream: "http://172.23.0.5:8001/media/photos/xxxxxxx", host: "127.0.0.1:3000", referrer: "https://xxx.yyy.zzz/"
172.23.0.1 - - [16/Apr/2023:14:55:37 +0000] "GET /media/photos/xxxxx HTTP/1.1" 404 153 "https://xxx.yyy.zzz/" "Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/112.0"
`
Notice the mangled characters in the filename. Should have been "/data//xxx/2023-01-Høst-Vinter-Jul-Taj-10år/"
So something goes awry with the filename encoding somewhere
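The replacement characters suggest bytes in one encoding being decoded as another. A minimal sketch of my guess at the mechanism (an assumption, not LibrePhotos code): a Latin-1 encoded `ø` (byte 0xF8) is invalid UTF-8, so decoding it as UTF-8 with replacement yields exactly the `�` seen in the nginx log:

```python
# Sketch of the suspected mojibake: Latin-1 bytes decoded as UTF-8.
name = "Høst"
raw = name.encode("latin-1")                    # b'H\xf8st' — 0xF8 is not valid UTF-8
garbled = raw.decode("utf-8", errors="replace") # 'H\ufffdst', i.e. 'H�st' as in the log
print(garbled)
```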
* [x] 📁 I've Included a ZIP file containing my librephotos `log` files
* [x] ❌ I have looked for similar issues (including closed ones)
## 📝 Description of issue:
## 🔁 How can we reproduce it:
Create a Samba share with file paths containing Norwegian characters, mount it in LibrePhotos, and verify that everything works except trying to download a file via
https://xxx.yyy.org/media/photos/xxxxx
## Please provide additional information:
- 💻 Operating system: Docker-compose on Linux OpenSuse Leap 15.4
- ⚙ Architecture (x86 or ARM): x86
- 🔢 Librephotos version: latest
- 📸 Librephotos installation method (Docker, Kubernetes, .deb, etc.): Docker-compose
* 🐋 If Docker or Kubernets, provide docker-compose image tag:
- 📁 How is your picture library mounted (Local file system (Type), NFS, SMB, etc.): Mounted SMB share from Synology NAS
- ☁ If you are virtualizing librephotos, Virtualization platform (Proxmox, Xen, HyperV, etc.):
|
closed
|
2023-04-16T15:09:40Z
|
2023-04-17T18:43:51Z
|
https://github.com/LibrePhotos/librephotos/issues/826
|
[
"bug"
] |
nowheretobefound
| 2
|
Evil0ctal/Douyin_TikTok_Download_API
|
fastapi
| 13
|
What's going on?
|
The Douyin shortcut starts by asking for text input, but the final URL is wrong.
|
closed
|
2022-04-05T14:52:14Z
|
2022-04-06T18:50:33Z
|
https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/13
|
[] |
zsjsleep
| 2
|
pytorch/vision
|
computer-vision
| 8,904
|
Setting more than 2 elements to `ratio` argument of `RandomResizedCrop()` works
|
### 🐛 Describe the bug
Setting more than 2 elements for the `ratio` argument of [RandomResizedCrop()](https://pytorch.org/vision/main/generated/torchvision.transforms.v2.RandomResizedCrop.html) works as shown below, even though the `ratio` argument should accept exactly 2 elements:
```python
from torchvision.transforms.v2 import RandomResizedCrop
rrc = RandomResizedCrop(size=100, ratio=[0.1, 0.2, 0.3, 0.4, 0.5])
rrc
# RandomResizedCrop(size=(100, 100),
# scale=(0.08, 1.0),
# ratio=[0.1, 0.2, 0.3, 0.4, 0.5],
# interpolation=InterpolationMode.BILINEAR,
# antialias=True)
```
In addition, setting 0 or 1 element to `ratio` argument of `RandomResizedCrop()` doesn't work as shown below:
```python
from torchvision.transforms.v2 import RandomResizedCrop
rrc = RandomResizedCrop(size=100, ratio=[])
rrc # Error
rrc = RandomResizedCrop(size=100, ratio=[0.1])
rrc # Error
```
> IndexError: list index out of range
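A fix could validate the argument length up front. A hypothetical sketch (the helper name is made up, not the torchvision implementation):

```python
# Hypothetical validation helper: reject sequences that are not length 2,
# which would cover both the >2 case (silently accepted today) and the
# 0/1 case (which currently fails with an unhelpful IndexError).
def check_length_2(value, name):
    if len(value) != 2:
        raise ValueError(
            f"{name} should be a sequence of length 2, got {len(value)}"
        )
    return tuple(float(v) for v in value)
```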
### Versions
```python
import torchvision
torchvision.__version__ # '0.20.1'
```
|
closed
|
2025-02-08T15:49:15Z
|
2025-02-28T11:02:06Z
|
https://github.com/pytorch/vision/issues/8904
|
[] |
hyperkai
| 1
|
plotly/dash
|
dash
| 2,335
|
[BUG] `dash-generate-components` / `react-docgen` do not resolve imported prop types
|
**Describe your context**
```
dash 2.7.0
react-docgen 5.4.3
```
**Describe the bug**
Issue #2096 does not seem to be fixed yet.
Given the code below:
`structure.js`
```javascript
import PropTypes from "prop-types";
export const StructurePropTypes = { test: PropTypes.string.isRequired };
```
`component.jsx`
```javascript
import React from "react";
import PropTypes from "prop-types";
import { StructurePropTypes } from "./structure";
const Component = (props) => {
...
};
Component.propTypes = {
structure: PropTypes.shape(StructurePropTypes).isRequired,
};
```
Running `extract-metadata.js` does not resolve the imported prop-types but stores the import statement:
```
{
structure: {
description: "",
required: true,
type: {
...
name: "shape",
value: 'import { StructurePropTypes } from "./structure";'
}
}
```
When using this metadata object in `component_generator.py` line 136:
```python
components = generate_classes_files(project_shortname, metadata, *generator_methods)
```
and, hence, calling the `generate_class_file(...)` method in line 163 in `_py_components_generation.py` which again calls
```python
class_string = generate_class_string(
typename, props, description, namespace, prop_reorder_exceptions, max_props
)
```
the `collect_nodes` method, called in line 143 and defined in `_collect_nodes.py`, tries to iteratively collect the nodes but fails in line 49 when encountering the value of the metadata object returned from `extract-metadata.js`, since it expects the `value` item to contain a dictionary and not a string ('import { StructurePropTypes } from "./structure";'). This results in an `AttributeError`:
```
AttributeError: 'str' object has no attribute 'items'
```
**Expected behavior**
I expect the imported prop type shapes to be resolved and not that the import command is being used as a value.
|
closed
|
2022-11-23T15:49:01Z
|
2023-03-10T21:11:01Z
|
https://github.com/plotly/dash/issues/2335
|
[] |
rubenthoms
| 1
|
FactoryBoy/factory_boy
|
sqlalchemy
| 264
|
RelatedFactory lazy arguments from parent
|
The dot notation described in the docs for Subfactories, used to get a parent attribute, does not work for `RelatedFactory`.
For example:
``` python
class SearchFactory(ActivityFactory):
"""A complete search field factory for testing search"""
class Meta:
model = models.Activity
exclude = ('title_1', 'title_2', 'description_1', 'description_2',)
# narrative contents, override these when required
title_1 = "title narrative1"
title_2 = "title narrative2"
description_1 = "description narrative1"
description_2 = "description narrative2"
title = RelatedFactory(TitleSearchFactory, 'activity',
content_1=factory.SelfAttribute('..title_1'),
content_2=factory.SelfAttribute('..title_2'))
```
This will result in an error:
target = containers[self.depth - 2]
IndexError: tuple index out of range
I was wondering whether this was desired behaviour or a bug? I am looking for a way to pass arguments to the RelatedFactory.
|
closed
|
2016-01-19T14:33:35Z
|
2018-07-06T18:03:47Z
|
https://github.com/FactoryBoy/factory_boy/issues/264
|
[
"Bug",
"Doc"
] |
bryanph
| 2
|
deeppavlov/DeepPavlov
|
tensorflow
| 1,133
|
Unable to load pre-trained model
|
Hi
I'm trying to use a pre-trained model for NER, and once the model was downloaded, I got the issue below.
```
from deeppavlov import configs, build_model
ner_model = build_model(configs.ner.ner_ontonotes_bert, download=True)
2020-02-18 17:45:34.88 ERROR in 'deeppavlov.core.common.params'['params'] at line 112: Exception in <class 'deeppavlov.models.preprocessors.bert_preprocessor.BertNerPreprocessor'>
Traceback (most recent call last):
File "C:\Users\mchandra\AppData\Local\Continuum\anaconda3\lib\site-packages\deeppavlov\core\common\params.py", line 106, in from_params
component = obj(**dict(config_params, **kwargs))
File "C:\Users\mchandra\AppData\Local\Continuum\anaconda3\lib\site-packages\deeppavlov\models\preprocessors\bert_preprocessor.py", line 119, in __init__
do_lower_case=do_lower_case)
File "C:\Users\mchandra\AppData\Local\Continuum\anaconda3\lib\site-packages\bert_dp\tokenization.py", line 165, in __init__
self.vocab = load_vocab(vocab_file)
File "C:\Users\mchandra\AppData\Local\Continuum\anaconda3\lib\site-packages\bert_dp\tokenization.py", line 127, in load_vocab
token = convert_to_unicode(reader.readline())
File "C:\Users\mchandra\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow\python\lib\io\file_io.py", line 178, in readline
self._preread_check()
File "C:\Users\mchandra\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow\python\lib\io\file_io.py", line 84, in _preread_check
compat.as_bytes(self.__name), 1024 * 512)
tensorflow.python.framework.errors_impl.NotFoundError: NewRandomAccessFile failed to Create/Open: C:\Users\mchandra\.deeppavlov\downloads\bert_models\cased_L-12_H-768_A-12\vocab.txt : The system cannot find the file specified.
; No such file or directory
---------------------------------------------------------------------------
NotFoundError Traceback (most recent call last)
<ipython-input-5-f2cfbde6d24a> in <module>
1 from deeppavlov import configs, build_model
2
----> 3 ner_model = build_model(configs.ner.ner_ontonotes_bert, download=False)
~\AppData\Local\Continuum\anaconda3\lib\site-packages\deeppavlov\core\commands\infer.py in build_model(config, mode, load_trained, download, serialized)
59 component_serialized = None
60
---> 61 component = from_params(component_config, mode=mode, serialized=component_serialized)
62
63 if 'id' in component_config:
~\AppData\Local\Continuum\anaconda3\lib\site-packages\deeppavlov\core\common\params.py in from_params(params, mode, serialized, **kwargs)
104 kwargs['mode'] = mode
105
--> 106 component = obj(**dict(config_params, **kwargs))
107 try:
108 _refs[config_params['id']] = component
~\AppData\Local\Continuum\anaconda3\lib\site-packages\deeppavlov\models\preprocessors\bert_preprocessor.py in __init__(self, vocab_file, do_lower_case, max_seq_length, max_subword_length, token_masking_prob, provide_subword_tags, subword_mask_mode, **kwargs)
117 vocab_file = str(expand_path(vocab_file))
118 self.tokenizer = FullTokenizer(vocab_file=vocab_file,
--> 119 do_lower_case=do_lower_case)
120 self.token_masking_prob = token_masking_prob
121
~\AppData\Local\Continuum\anaconda3\lib\site-packages\bert_dp\tokenization.py in __init__(self, vocab_file, do_lower_case)
163
164 def __init__(self, vocab_file, do_lower_case=True):
--> 165 self.vocab = load_vocab(vocab_file)
166 self.inv_vocab = {v: k for k, v in self.vocab.items()}
167 self.basic_tokenizer = BasicTokenizer(do_lower_case=do_lower_case)
~\AppData\Local\Continuum\anaconda3\lib\site-packages\bert_dp\tokenization.py in load_vocab(vocab_file)
125 with tf.gfile.GFile(vocab_file, "r") as reader:
126 while True:
--> 127 token = convert_to_unicode(reader.readline())
128 if not token:
129 break
~\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow\python\lib\io\file_io.py in readline(self)
176 def readline(self):
177 r"""Reads the next line from the file. Leaves the '\n' at the end."""
--> 178 self._preread_check()
179 return self._prepare_value(self._read_buf.ReadLineAsString())
180
~\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow\python\lib\io\file_io.py in _preread_check(self)
82 "File isn't open for reading")
83 self._read_buf = pywrap_tensorflow.CreateBufferedInputStream(
---> 84 compat.as_bytes(self.__name), 1024 * 512)
85
86 def _prewrite_check(self):
NotFoundError: NewRandomAccessFile failed to Create/Open: C:\Users\mchandra\.deeppavlov\downloads\bert_models\cased_L-12_H-768_A-12\vocab.txt : The system cannot find the file specified.
; No such file or directory
```
Can you please let me know how to fix this? I tried **ner_ontonotes** as well and it works fine; the problem is only with the BERT models.
Thanks in advance.
|
closed
|
2020-02-18T12:21:35Z
|
2020-05-26T20:32:57Z
|
https://github.com/deeppavlov/DeepPavlov/issues/1133
|
[] |
MaheshChandrra
| 6
|
tqdm/tqdm
|
jupyter
| 1,472
|
"screen" command environment is different from base environment
|
Hello, thanks for the great library!
I found something very strange when using the tqdm library. Since I deal with a lot of data, I use the `screen` command. When not running inside `screen`, the output is a normal progress bar.

But when running inside the `screen` environment, many progress bars (17) appear.

I would like to ask what is the reason for this.
I am using tqdm==4.64.1 with python==3.8.16 on linux.
My core bug code module is

```python
from tqdm import tqdm

for label in tqdm(os.listdir("A")):
    ......
```
|
open
|
2023-05-04T07:39:23Z
|
2023-05-04T07:39:23Z
|
https://github.com/tqdm/tqdm/issues/1472
|
[] |
song-cc
| 0
|
mars-project/mars
|
scikit-learn
| 2,414
|
Add support for `label_binarize`.
|
<!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
**Is your feature request related to a problem? Please describe.**
Support for `mars.learn.preprocessing.label_binarize` can be added.
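For reference, the scikit-learn function that `mars.learn` would mirror behaves like this (the example follows scikit-learn's documented behavior; whether the Mars version would also accept Mars tensors is my assumption):

```python
from sklearn.preprocessing import label_binarize

# Binarize labels 1 and 6 against the class set {1, 2, 4, 6}:
# each label becomes a one-hot row over the given classes.
out = label_binarize([1, 6], classes=[1, 2, 4, 6])
print(out)
# [[1 0 0 0]
#  [0 0 0 1]]
```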
|
closed
|
2021-09-02T10:21:25Z
|
2021-09-02T15:24:02Z
|
https://github.com/mars-project/mars/issues/2414
|
[
"type: feature",
"mod: learn"
] |
qinxuye
| 0
|
sanic-org/sanic
|
asyncio
| 2,683
|
Type of the variable "file_path" is PosixPath, but mimetypes.guess_type() needs a string or bytes-like object.
|
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Describe the bug
In the file sanic/mixins/routes.py, the variable "file_path" (line 836) is a PosixPath, but mimetypes.guess_type() needs a string or bytes-like object.
In Python 3.7.2, the signature of guess_type is given below:
```python
def guess_type(url: str,
strict: bool = ...) -> Tuple[Optional[str], Optional[str]]
```
### Code snippet
File "sanic/mixins/routes.py", line 874, in _static_request_handler
or guess_type(file_path)[0]
```python
if "content-type" not in headers:
content_type = (
content_type
or guess_type(file_path)[0]
or DEFAULT_HTTP_CONTENT_TYPE
)
```
### Expected Behavior
In the file sanic/mixins/routes.py, the variable "file_path" (line 836) should be a str or bytes-like object.
I added a line of code to forcefully fix it.
```python
file_path = str(file_path)
```
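On Python 3.7, `mimetypes.guess_type` indeed expects a string, so converting the path first avoids the issue. A quick sketch, independent of Sanic (note that from Python 3.8 onward `guess_type` also accepts path-like objects, so the cast is mainly needed on 3.7):

```python
import mimetypes
from pathlib import PurePosixPath

file_path = PurePosixPath("static/index.html")
# Casting the PosixPath to str keeps guess_type happy on Python 3.7.
content_type = mimetypes.guess_type(str(file_path))[0]
print(content_type)  # 'text/html'
```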
### How do you run Sanic?
As a script (`app.run` or `Sanic.serve`)
### Operating System
Macos Darwin-20.6.0-x86_64-i386-64bit
### Sanic Version
Sanic v22.12.0
### Additional context
packages: sanic-routing==22.8.0
python: 3.7.2
mode: production, single worker
|
closed
|
2023-02-13T07:32:00Z
|
2023-04-02T11:04:23Z
|
https://github.com/sanic-org/sanic/issues/2683
|
[
"bug"
] |
xSandie
| 4
|
dask/dask
|
pandas
| 11,167
|
Improve documentation for `dd.from_map(...)`
|
There are two main ways to load dataframe partitions via custom functions:
* `dd.from_map`
* `dd.from_delayed`
For some reason, users seem to prefer `dd.from_delayed` over `dd.from_map`, even though the latter is much simpler and most often suffices for their use cases. `dd.from_delayed`, on the other hand, is more complex and its implementation has some issues (e.g., https://github.com/dask/dask-expr/issues/1077).
We should improve the documentation for `dd.from_map(...)` and try to funnel most users toward it. `dd.from_delayed(...)` should be depicted as a last resort in case the user's case simply doesn't fit `dd.from_map(...)`.
|
closed
|
2024-06-11T13:19:22Z
|
2024-06-21T12:58:15Z
|
https://github.com/dask/dask/issues/11167
|
[
"documentation",
"enhancement"
] |
hendrikmakait
| 1
|
miguelgrinberg/Flask-SocketIO
|
flask
| 2,095
|
Add support for wildcard events
|
I would like to listen for generic events so I can handle them myself. Wildcard events have been supported by the underlying python-socketio library since version 5.4.1.
I would love it if we could have `"*"` events which would trigger every time no other event handler is registered for the event.
```python
@socketio.on("*")
def catchall(event):
    print(event)
```
When emitting an event from the client that the server hasn't registered elsewhere, we would see something like:
`Client> emit("unregistered_event")`
`Server> "unregistered_event"`
I have considered interfacing directly with the underlying library, but it seems to break some higher-level functionality.
|
closed
|
2024-09-29T05:36:23Z
|
2024-09-30T02:13:24Z
|
https://github.com/miguelgrinberg/Flask-SocketIO/issues/2095
|
[] |
Dubidu1212
| 2
|
electricitymaps/electricitymaps-contrib
|
data-visualization
| 6,957
|
[Data Issue]: Italian data appears to be 100% estimated
|
### When did this happen?
2024-06-22
### What zones are affected?
IT-NO, IT-CNO, IT-CSO, IT-SO, IT-SIC, IT-SAR
### What is the problem?
All the data appears to be estimated since the 22nd of June 2024.
I don't know if Terna updated their transparency section, or whether this project even uses their API (https://developer.terna.it/#en) or gets the data from someone else.
|
closed
|
2024-07-04T16:39:21Z
|
2024-08-06T20:43:30Z
|
https://github.com/electricitymaps/electricitymaps-contrib/issues/6957
|
[
"data",
"external"
] |
Gianfilippo980
| 3
|
pyeve/eve
|
flask
| 549
|
document_link() takes a passed resource, but calls resource_link() which uses the current resource
|
The offending line is here in `document_link(resource, ..)`:
`'href': '%s/%s%s' % (resource_link(), document_id, version_part)}`
This creates a link that is based on the current resource, instead of whichever resource is passed in to `document_link`.
Currently, all uses of `document_link` are called with the resource that is in the current request, so nothing is breaking. Nonetheless this seems incorrect.
|
closed
|
2015-01-19T23:34:59Z
|
2015-01-20T07:59:28Z
|
https://github.com/pyeve/eve/issues/549
|
[
"bug"
] |
mkandalf
| 3
|
akfamily/akshare
|
data-science
| 5,686
|
AKShare Interface Issue Report: the stock_board_concept_hist_em interface also has a problem
|
Version: 1.16.3

|
closed
|
2025-02-18T04:53:51Z
|
2025-02-18T11:07:52Z
|
https://github.com/akfamily/akshare/issues/5686
|
[
"bug"
] |
fweiger
| 2
|
adamerose/PandasGUI
|
pandas
| 232
|
Filter Export/Import Option
|
When using pandasgui, I generally want to use similar or the same filters. Adding an option to export filters to an external file for re-importing when viewing another dataframe would be fantastic for the way I use pandasgui.
|
open
|
2023-06-28T15:15:21Z
|
2023-06-28T15:15:21Z
|
https://github.com/adamerose/PandasGUI/issues/232
|
[
"enhancement"
] |
ggfiorillo
| 0
|
onnx/onnx
|
pytorch
| 6,562
|
ONNX failing to build from source with external Protobuf and Abseil-cpp
|
# Bug Report
### Is the issue related to model conversion?
No
### Describe the bug
When I tried to build ONNX v1.17.0 with external Protobuf v4.25.3 and Abseil 20240116.2, I saw the error below. I had built Protobuf with this version of Abseil, and while trying to build ONNX against it, the build fails with:
```
[ 95%] Building CXX object CMakeFiles/onnx.dir/onnx/version_converter/helper.cc.o
In file included from /home/builder/new_scripts/onnx1/onnx/.setuptools-cmake-build/onnx/onnx-ml.pb.h:13,
from /home/builder/new_scripts/onnx1/onnx/onnx/onnx_pb.h:51,
from /home/builder/new_scripts/onnx1/onnx/onnx/onnx-operators_pb.h:7,
from /home/builder/new_scripts/onnx1/onnx/onnx/defs/attr_proto_util.h:12,
from /home/builder/new_scripts/onnx1/onnx/onnx/defs/attr_proto_util.cc:7:
/home/builder/dev/lib/python3.12/site-packages/libprotobuf/include/google/protobuf/port_def.inc:33:10: fatal error: absl/base/attributes.h: No such file or directory
33 | #include "absl/base/attributes.h"
| ^~~~~~~~~~~~~~~~~~~~~~~~
```
### System information
- OS Platform and Distribution: RHEL 09
- ONNX version: v1.17.0
- Python version: 3.12
- GCC/Compiler version (if compiling from source): GCC 13
- CMake version: 3.31.1
- Protobuf version: v4.25.3
- Visual Studio version (if applicable): N.A.
### Expected behavior
Get ONNX v1.17.0 build with external protobuf v4.25.3 and Abseil-cpp 20240116.2.
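One hedged workaround sketch: make the external protobuf and abseil installs visible to CMake before rebuilding, so the absl headers that `port_def.inc` includes can be found. The install paths below are placeholders for this setup, and `CMAKE_ARGS` is the environment variable ONNX's build honors per its README; adjust to your layout:

```shell
# Placeholder install prefixes; point these at your actual protobuf and
# abseil-cpp installations.
export CMAKE_PREFIX_PATH=/opt/protobuf:/opt/abseil-cpp
export CMAKE_ARGS="-DCMAKE_CXX_STANDARD=17 -DONNX_USE_PROTOBUF_SHARED_LIBS=ON"
echo "building with: $CMAKE_ARGS"
# Then, from the onnx source tree:
# pip install -e . --no-build-isolation
```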
|
closed
|
2024-11-29T14:02:16Z
|
2024-12-03T05:14:01Z
|
https://github.com/onnx/onnx/issues/6562
|
[
"bug"
] |
Aman-Surkar
| 1
|
openapi-generators/openapi-python-client
|
fastapi
| 107
|
Optionally disable validation
|
**Is your feature request related to a problem? Please describe.**
It seems that some OpenAPI specifications aren't always correct, but it might be helpful to allow generation anyway.
For example, I installed the latest version from the `main` branch using:
```bash
poetry add git+https://github.com/triaxtec/openapi-python-client.git@main
pip install importlib_metadata
openapi-python-client generate --url https://api.biocontainers.pro/ga4gh/trs/v2/openapi.json
```
and the warnings I got were:
```
Traceback (most recent call last):
File "/media/michael/Storage2/Programming/BioBackhaul/venv/bin/openapi-python-client", line 33, in <module>
sys.exit(load_entry_point('openapi-python-client', 'console_scripts', 'openapi-python-client')())
File "/media/michael/Storage2/Programming/BioBackhaul/venv/lib/python3.7/site-packages/typer/main.py", line 214, in __call__
return get_command(self)(*args, **kwargs)
File "/media/michael/Storage2/Programming/BioBackhaul/venv/lib/python3.7/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/media/michael/Storage2/Programming/BioBackhaul/venv/lib/python3.7/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/media/michael/Storage2/Programming/BioBackhaul/venv/lib/python3.7/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/media/michael/Storage2/Programming/BioBackhaul/venv/lib/python3.7/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/media/michael/Storage2/Programming/BioBackhaul/venv/lib/python3.7/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/media/michael/Storage2/Programming/BioBackhaul/venv/lib/python3.7/site-packages/typer/main.py", line 497, in wrapper
return callback(**use_params) # type: ignore
File "/media/michael/Storage2/Programming/BioBackhaul/venv/src/openapi-python-client/openapi_python_client/cli.py", line 91, in generate
create_new_client(url=url, path=path)
File "/media/michael/Storage2/Programming/BioBackhaul/venv/src/openapi-python-client/openapi_python_client/__init__.py", line 48, in create_new_client
project = _get_project_for_url_or_path(url=url, path=path)
File "/media/michael/Storage2/Programming/BioBackhaul/venv/src/openapi-python-client/openapi_python_client/__init__.py", line 31, in _get_project_for_url_or_path
openapi = GeneratorData.from_dict(data_dict)
File "/media/michael/Storage2/Programming/BioBackhaul/venv/src/openapi-python-client/openapi_python_client/openapi_parser/openapi.py", line 243, in from_dict
openapi = oai.OpenAPI.parse_obj(d)
File "pydantic/main.py", line 455, in pydantic.main.BaseModel.parse_obj
File "pydantic/main.py", line 346, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 93 validation errors for OpenAPI
paths -> /facets -> get -> responses -> 200 -> content -> application/json -> schema -> $ref
field required (type=value_error.missing)
paths -> /facets -> get -> responses -> 200 -> content -> application/json -> schema -> items
extra fields not permitted (type=value_error.extra)
paths -> /facets -> get -> responses -> 200 -> content -> application/json -> schema -> type
extra fields not permitted (type=value_error.extra)
paths -> /facets -> get -> responses -> 200 -> content -> application/json -> schema -> x-content-type
extra fields not permitted (type=value_error.extra)
paths -> /facets -> get -> responses -> 200 -> content -> application/json -> schema -> x-content-type
extra fields not permitted (type=value_error.extra)
paths -> /facets -> get -> responses -> 200 -> $ref
field required (type=value_error.missing)
paths -> /facets -> get -> responses -> 200 -> content
extra fields not permitted (type=value_error.extra)
paths -> /facets -> get -> responses -> 200 -> description
extra fields not permitted (type=value_error.extra)
paths -> /facets -> get -> x-openapi-router-controller
extra fields not permitted (type=value_error.extra)
paths -> /service-info -> get -> x-openapi-router-controller
extra fields not permitted (type=value_error.extra)
paths -> /stats -> get -> responses -> 200 -> content -> application/json -> schema -> $ref
field required (type=value_error.missing)
paths -> /stats -> get -> responses -> 200 -> content -> application/json -> schema -> items
extra fields not permitted (type=value_error.extra)
paths -> /stats -> get -> responses -> 200 -> content -> application/json -> schema -> type
extra fields not permitted (type=value_error.extra)
paths -> /stats -> get -> responses -> 200 -> content -> application/json -> schema -> x-content-type
extra fields not permitted (type=value_error.extra)
paths -> /stats -> get -> responses -> 200 -> content -> application/json -> schema -> x-content-type
extra fields not permitted (type=value_error.extra)
paths -> /stats -> get -> responses -> 200 -> $ref
field required (type=value_error.missing)
paths -> /stats -> get -> responses -> 200 -> content
extra fields not permitted (type=value_error.extra)
paths -> /stats -> get -> responses -> 200 -> description
extra fields not permitted (type=value_error.extra)
paths -> /stats -> get -> x-openapi-router-controller
extra fields not permitted (type=value_error.extra)
paths -> /toolClasses -> get -> responses -> 200 -> content -> application/json -> schema -> $ref
field required (type=value_error.missing)
paths -> /toolClasses -> get -> responses -> 200 -> content -> application/json -> schema -> items
extra fields not permitted (type=value_error.extra)
paths -> /toolClasses -> get -> responses -> 200 -> content -> application/json -> schema -> type
extra fields not permitted (type=value_error.extra)
paths -> /toolClasses -> get -> responses -> 200 -> content -> application/json -> schema -> x-content-type
extra fields not permitted (type=value_error.extra)
paths -> /toolClasses -> get -> responses -> 200 -> content -> application/json -> schema -> x-content-type
extra fields not permitted (type=value_error.extra)
paths -> /toolClasses -> get -> responses -> 200 -> $ref
field required (type=value_error.missing)
paths -> /toolClasses -> get -> responses -> 200 -> content
extra fields not permitted (type=value_error.extra)
paths -> /toolClasses -> get -> responses -> 200 -> description
extra fields not permitted (type=value_error.extra)
paths -> /toolClasses -> get -> x-openapi-router-controller
extra fields not permitted (type=value_error.extra)
paths -> /tools -> get -> responses -> 200 -> content -> application/json -> schema -> $ref
field required (type=value_error.missing)
paths -> /tools -> get -> responses -> 200 -> content -> application/json -> schema -> items
extra fields not permitted (type=value_error.extra)
paths -> /tools -> get -> responses -> 200 -> content -> application/json -> schema -> type
extra fields not permitted (type=value_error.extra)
paths -> /tools -> get -> responses -> 200 -> content -> application/json -> schema -> x-content-type
extra fields not permitted (type=value_error.extra)
paths -> /tools -> get -> responses -> 200 -> content -> application/json -> schema -> x-content-type
extra fields not permitted (type=value_error.extra)
paths -> /tools -> get -> responses -> 200 -> $ref
field required (type=value_error.missing)
paths -> /tools -> get -> responses -> 200 -> content
extra fields not permitted (type=value_error.extra)
paths -> /tools -> get -> responses -> 200 -> description
extra fields not permitted (type=value_error.extra)
paths -> /tools -> get -> responses -> 200 -> headers
extra fields not permitted (type=value_error.extra)
paths -> /tools -> get -> x-openapi-router-controller
extra fields not permitted (type=value_error.extra)
paths -> /tools/{id} -> get -> x-openapi-router-controller
extra fields not permitted (type=value_error.extra)
paths -> /tools/{id}/similars -> get -> responses -> 200 -> content -> application/json -> schema -> $ref
field required (type=value_error.missing)
paths -> /tools/{id}/similars -> get -> responses -> 200 -> content -> application/json -> schema -> items
extra fields not permitted (type=value_error.extra)
paths -> /tools/{id}/similars -> get -> responses -> 200 -> content -> application/json -> schema -> type
extra fields not permitted (type=value_error.extra)
paths -> /tools/{id}/similars -> get -> responses -> 200 -> content -> application/json -> schema -> x-content-type
extra fields not permitted (type=value_error.extra)
paths -> /tools/{id}/similars -> get -> responses -> 200 -> content -> application/json -> schema -> x-content-type
extra fields not permitted (type=value_error.extra)
paths -> /tools/{id}/similars -> get -> responses -> 200 -> $ref
field required (type=value_error.missing)
paths -> /tools/{id}/similars -> get -> responses -> 200 -> content
extra fields not permitted (type=value_error.extra)
paths -> /tools/{id}/similars -> get -> responses -> 200 -> description
extra fields not permitted (type=value_error.extra)
paths -> /tools/{id}/similars -> get -> x-openapi-router-controller
extra fields not permitted (type=value_error.extra)
paths -> /tools/{id}/versions -> get -> responses -> 200 -> content -> application/json -> schema -> $ref
field required (type=value_error.missing)
paths -> /tools/{id}/versions -> get -> responses -> 200 -> content -> application/json -> schema -> items
extra fields not permitted (type=value_error.extra)
paths -> /tools/{id}/versions -> get -> responses -> 200 -> content -> application/json -> schema -> type
extra fields not permitted (type=value_error.extra)
paths -> /tools/{id}/versions -> get -> responses -> 200 -> content -> application/json -> schema -> x-content-type
extra fields not permitted (type=value_error.extra)
paths -> /tools/{id}/versions -> get -> responses -> 200 -> content -> application/json -> schema -> x-content-type
extra fields not permitted (type=value_error.extra)
paths -> /tools/{id}/versions -> get -> responses -> 200 -> $ref
field required (type=value_error.missing)
paths -> /tools/{id}/versions -> get -> responses -> 200 -> content
extra fields not permitted (type=value_error.extra)
paths -> /tools/{id}/versions -> get -> responses -> 200 -> description
extra fields not permitted (type=value_error.extra)
paths -> /tools/{id}/versions -> get -> x-openapi-router-controller
extra fields not permitted (type=value_error.extra)
paths -> /tools/{id}/versions/{version_id} -> get -> x-openapi-router-controller
extra fields not permitted (type=value_error.extra)
paths -> /tools/{id}/versions/{version_id}/containerfile -> get -> responses -> 200 -> content -> application/json -> schema -> $ref
field required (type=value_error.missing)
paths -> /tools/{id}/versions/{version_id}/containerfile -> get -> responses -> 200 -> content -> application/json -> schema -> items
extra fields not permitted (type=value_error.extra)
paths -> /tools/{id}/versions/{version_id}/containerfile -> get -> responses -> 200 -> content -> application/json -> schema -> type
extra fields not permitted (type=value_error.extra)
paths -> /tools/{id}/versions/{version_id}/containerfile -> get -> responses -> 200 -> content -> application/json -> schema -> x-content-type
extra fields not permitted (type=value_error.extra)
paths -> /tools/{id}/versions/{version_id}/containerfile -> get -> responses -> 200 -> content -> application/json -> schema -> x-content-type
extra fields not permitted (type=value_error.extra)
paths -> /tools/{id}/versions/{version_id}/containerfile -> get -> responses -> 200 -> $ref
field required (type=value_error.missing)
paths -> /tools/{id}/versions/{version_id}/containerfile -> get -> responses -> 200 -> content
extra fields not permitted (type=value_error.extra)
paths -> /tools/{id}/versions/{version_id}/containerfile -> get -> responses -> 200 -> description
extra fields not permitted (type=value_error.extra)
paths -> /tools/{id}/versions/{version_id}/containerfile -> get -> x-openapi-router-controller
extra fields not permitted (type=value_error.extra)
paths -> /tools/{id}/versions/{version_id}/{type}/descriptor -> get -> x-openapi-router-controller
extra fields not permitted (type=value_error.extra)
paths -> /tools/{id}/versions/{version_id}/{type}/descriptor/{relative_path} -> get -> x-openapi-router-controller
extra fields not permitted (type=value_error.extra)
paths -> /tools/{id}/versions/{version_id}/{type}/files -> get -> responses -> 200 -> content -> application/json -> schema -> $ref
field required (type=value_error.missing)
paths -> /tools/{id}/versions/{version_id}/{type}/files -> get -> responses -> 200 -> content -> application/json -> schema -> items
extra fields not permitted (type=value_error.extra)
paths -> /tools/{id}/versions/{version_id}/{type}/files -> get -> responses -> 200 -> content -> application/json -> schema -> type
extra fields not permitted (type=value_error.extra)
paths -> /tools/{id}/versions/{version_id}/{type}/files -> get -> responses -> 200 -> content -> application/json -> schema -> x-content-type
extra fields not permitted (type=value_error.extra)
paths -> /tools/{id}/versions/{version_id}/{type}/files -> get -> responses -> 200 -> content -> application/json -> schema -> x-content-type
extra fields not permitted (type=value_error.extra)
paths -> /tools/{id}/versions/{version_id}/{type}/files -> get -> responses -> 200 -> $ref
field required (type=value_error.missing)
paths -> /tools/{id}/versions/{version_id}/{type}/files -> get -> responses -> 200 -> content
extra fields not permitted (type=value_error.extra)
paths -> /tools/{id}/versions/{version_id}/{type}/files -> get -> responses -> 200 -> description
extra fields not permitted (type=value_error.extra)
paths -> /tools/{id}/versions/{version_id}/{type}/files -> get -> x-openapi-router-controller
extra fields not permitted (type=value_error.extra)
paths -> /tools/{id}/versions/{version_id}/{type}/tests -> get -> responses -> 200 -> content -> application/json -> schema -> $ref
field required (type=value_error.missing)
paths -> /tools/{id}/versions/{version_id}/{type}/tests -> get -> responses -> 200 -> content -> application/json -> schema -> items
extra fields not permitted (type=value_error.extra)
paths -> /tools/{id}/versions/{version_id}/{type}/tests -> get -> responses -> 200 -> content -> application/json -> schema -> type
extra fields not permitted (type=value_error.extra)
paths -> /tools/{id}/versions/{version_id}/{type}/tests -> get -> responses -> 200 -> content -> application/json -> schema -> x-content-type
extra fields not permitted (type=value_error.extra)
paths -> /tools/{id}/versions/{version_id}/{type}/tests -> get -> responses -> 200 -> content -> application/json -> schema -> x-content-type
extra fields not permitted (type=value_error.extra)
paths -> /tools/{id}/versions/{version_id}/{type}/tests -> get -> responses -> 200 -> $ref
field required (type=value_error.missing)
paths -> /tools/{id}/versions/{version_id}/{type}/tests -> get -> responses -> 200 -> content
extra fields not permitted (type=value_error.extra)
paths -> /tools/{id}/versions/{version_id}/{type}/tests -> get -> responses -> 200 -> description
extra fields not permitted (type=value_error.extra)
paths -> /tools/{id}/versions/{version_id}/{type}/tests -> get -> x-openapi-router-controller
extra fields not permitted (type=value_error.extra)
components -> securitySchemes -> BEARER -> x-apikeyInfoFunc
extra fields not permitted (type=value_error.extra)
components -> securitySchemes -> BEARER -> $ref
field required (type=value_error.missing)
components -> securitySchemes -> BEARER -> in
extra fields not permitted (type=value_error.extra)
components -> securitySchemes -> BEARER -> name
extra fields not permitted (type=value_error.extra)
components -> securitySchemes -> BEARER -> type
extra fields not permitted (type=value_error.extra)
components -> securitySchemes -> BEARER -> x-apikeyInfoFunc
extra fields not permitted (type=value_error.extra)
```
**Describe the solution you'd like**
I would like a flag, for example `--skip-validation`, that disables validation when the errors aren't fatal. In the above case, they're simply extra fields, so they shouldn't prevent running the generator.
**Describe alternatives you've considered**
I was able to use [openapi-generator](https://github.com/OpenAPITools/openapi-generator) with the `--skip-validate-spec` flag, which worked, but I would like to be able to do this in pure python.
|
closed
|
2020-07-28T05:58:59Z
|
2023-10-17T15:07:40Z
|
https://github.com/openapi-generators/openapi-python-client/issues/107
|
[
"✨ enhancement"
] |
multimeric
| 5
|
ultralytics/yolov5
|
pytorch
| 13,485
|
Significant Differences in Evaluation Results on the Validation Set Between `train.py` During Training and `test.py` in YOLOv5 5.0
|
### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
# In [YOLOv5 v5.0](https://github.com/ultralytics/yolov5/releases/tag/v5.0), the evaluation results on the validation set during `train.py` training differ significantly from those of `test.py` on the same validation set
Why is this happening? The results in `test.py` are much higher than those obtained during validation after adjusting the number of epochs, and this phenomenon occurs in most epochs. The results from `test.py` are extremely ideal and do not match the actual performance. Below is a portion of the output logs.
## Evaluation Output of `train.py` on the Validation Set at Epoch 115
```bash
2024-12-12 14:37:44,182 - INFO - YOLOv5 🚀 5211d5c torch 2.4.1+cu124 CUDA:0 (NVIDIA GeForce RTX 3090, 24154.375MB)
CUDA:1 (NVIDIA GeForce RTX 3090, 24154.375MB)
CUDA:2 (NVIDIA GeForce RTX 3090, 24154.375MB)
CUDA:3 (NVIDIA GeForce RTX 3090, 24154.375MB)
2024-12-12 14:37:44,192 - INFO - Namespace(adam=False, artifact_alias='latest', batch_size=32, bbox_interval=-1, bucket='', cache_images=True, cfg='', data='data/fankou/EnhancedDataset.yaml', device='0,1,2,3', entity=None, epochs=300, evolve=False, exist_ok=False, global_rank=-1, hyp='data/fankou/hyp.yaml', image_weights=False, img_size=[640, 640], label_smoothing=0.0, linear_lr=False, local_rank=-1, multi_scale=False, name='exp', noautoanchor=False, nosave=False, notest=False, offline=True, project='runs/train', quad=False, rect=False, resume=True, save_dir='runs/train/exp', save_period=1, single_cls=False, sync_bn=False, total_batch_size=32, upload_dataset=False, weights='./runs/train/exp/weights/last.pt', workers=8, world_size=1)
2024-12-12 14:37:44,193 - INFO - [34m[1mtensorboard: [0mStart with 'tensorboard --logdir runs/train', view at http://localhost:6006/
2024-12-12 14:37:44,194 - INFO - [34m[1mhyperparameters: [0mlr0=0.01, lrf=0.2, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=0.05, cls=0.0375, cls_pw=1.0, obj=1.0, obj_pw=1.0, iou_t=0.2, anchor_t=4.0, fl_gamma=0.0, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.0, label_smoothing=0.0
2024-12-12 14:37:47,177 - INFO -
from n params module arguments
2024-12-12 14:37:47,181 - INFO - 0 -1 1 7040 models.common.Focus [3, 64, 3]
2024-12-12 14:37:47,182 - INFO - 1 -1 1 73984 models.common.Conv [64, 128, 3, 2]
2024-12-12 14:37:47,185 - INFO - 2 -1 1 156928 models.common.C3 [128, 128, 3]
2024-12-12 14:37:47,187 - INFO - 3 -1 1 295424 models.common.Conv [128, 256, 3, 2]
2024-12-12 14:37:47,199 - INFO - 4 -1 1 1611264 models.common.C3 [256, 256, 9]
2024-12-12 14:37:47,205 - INFO - 5 -1 1 1180672 models.common.Conv [256, 512, 3, 2]
2024-12-12 14:37:47,248 - INFO - 6 -1 1 6433792 models.common.C3 [512, 512, 9]
2024-12-12 14:37:47,277 - INFO - 7 -1 1 4720640 models.common.Conv [512, 1024, 3, 2]
2024-12-12 14:37:47,296 - INFO - 8 -1 1 2624512 models.common.SPP [1024, 1024, [5, 9, 13]]
2024-12-12 14:37:47,359 - INFO - 9 -1 1 9971712 models.common.C3 [1024, 1024, 3, False]
2024-12-12 14:37:47,363 - INFO - 10 -1 1 525312 models.common.Conv [1024, 512, 1, 1]
2024-12-12 14:37:47,363 - INFO - 11 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
2024-12-12 14:37:47,363 - INFO - 12 [-1, 6] 1 0 models.common.Concat [1]
2024-12-12 14:37:47,382 - INFO - 13 -1 1 2757632 models.common.C3 [1024, 512, 3, False]
2024-12-12 14:37:47,383 - INFO - 14 -1 1 131584 models.common.Conv [512, 256, 1, 1]
2024-12-12 14:37:47,383 - INFO - 15 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
2024-12-12 14:37:47,383 - INFO - 16 [-1, 4] 1 0 models.common.Concat [1]
2024-12-12 14:37:47,390 - INFO - 17 -1 1 690688 models.common.C3 [512, 256, 3, False]
2024-12-12 14:37:47,394 - INFO - 18 -1 1 590336 models.common.Conv [256, 256, 3, 2]
2024-12-12 14:37:47,394 - INFO - 19 [-1, 14] 1 0 models.common.Concat [1]
2024-12-12 14:37:47,411 - INFO - 20 -1 1 2495488 models.common.C3 [512, 512, 3, False]
2024-12-12 14:37:47,426 - INFO - 21 -1 1 2360320 models.common.Conv [512, 512, 3, 2]
2024-12-12 14:37:47,426 - INFO - 22 [-1, 10] 1 0 models.common.Concat [1]
2024-12-12 14:37:47,490 - INFO - 23 -1 1 9971712 models.common.C3 [1024, 1024, 3, False]
2024-12-12 14:37:47,491 - INFO - 24 [17, 20, 23] 1 59235 models.yolo.Detect [6, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [256, 512, 1024]]
2024-12-12 14:37:47,781 - INFO - Model Summary: 499 layers, 46658275 parameters, 46658275 gradients, 114.6 GFLOPS
2024-12-12 14:37:47,781 - INFO -
2024-12-12 14:37:47,890 - INFO - Transferred 650/650 items from ./runs/train/exp/weights/last.pt
2024-12-12 14:37:47,966 - INFO - Scaled weight_decay = 0.0005
2024-12-12 14:37:47,970 - INFO - Optimizer groups: 110 .bias, 110 conv.weight, 107 other
2024-12-12 14:39:57,377 - INFO - Image sizes 640 train, 640 test
Using 8 dataloader workers
Logging results to runs/train/exp
Starting training for 300 epochs...
2024-12-13 00:16:49,333 - INFO -
[34m[1mtest:[0m data: {'train': ['/home/user/yuanjinmin/数据集/模型训练/train', '/home/user/yuanjinmin/dataset/obj_train_data/train_pro'], 'val': ['/home/user/yuanjinmin/数据集/模型训练/val', '/home/user/yuanjinmin/dataset/obj_train_data/val_pro'], 'nc': 6, 'names': ['unhelmet', 'helmet', 'cigarette', 'fire', 'smoke', 'safebelt']}, weight: None, batch_size: 64, imgsz: 640, conf_thres: 0.001, iou_thres: 0.6, save_json: False, single_cls: False, augment: False, verbose: True, dataloader: <utils.datasets.InfiniteDataLoader object at 0x784bc87e5100>, save_dir: runs/train/exp, save_txt: False, save_hybrid: False, save_conf: False, plots: False, wandb_logger: <utils.wandb_logging.wandb_utils.WandbLogger object at 0x784bd86e1610>, compute_loss: <utils.loss.ComputeLoss object at 0x784bd5c4e790>, half_precision: True, is_coco: False
2024-12-13 00:17:12,163 - INFO - Class Images Labels P R mAP@.5 mAP@.5:.95
2024-12-13 00:17:12,163 - INFO - all 3925 22419 0.745 0.731 0.722 0.406
2024-12-13 00:17:12,164 - INFO - unhelmet 3925 9246 0.903 0.939 0.938 0.538
2024-12-13 00:17:12,164 - INFO - helmet 3925 10645 0.86 0.927 0.943 0.754
2024-12-13 00:17:12,164 - INFO - cigarette 3925 761 0.631 0.618 0.594 0.229
2024-12-13 00:17:12,164 - INFO - fire 3925 808 0.57 0.64 0.6 0.325
2024-12-13 00:17:12,164 - INFO - smoke 3925 717 0.602 0.351 0.35 0.135
2024-12-13 00:17:12,164 - INFO - safebelt 3925 242 0.904 0.913 0.91 0.457
```
## Evaluation Output of `test.py` on the Validation Set for the Model at Epoch 115
```bash
2025-01-09 20:15:37,890 - INFO - Namespace(augment=False, batch_size=64, conf_thres=0.001, data='data/fankou/EnhancedDataset.yaml', device='0,1,2,3', exist_ok=False, img_size=640, iou_thres=0.6, name='exp', project='runs/test', save_conf=True, save_hybrid=True, save_json=False, save_txt=True, single_cls=False, task='val', verbose=True, weights='runs/train/exp/weights/epoch_115.pt')
2025-01-09 20:15:39,100 - INFO - Fusing layers...
2025-01-09 20:15:40,267 - INFO - Model Summary: 392 layers, 46627491 parameters, 0 gradients, 114.0 GFLOPS
2025-01-09 20:17:21,449 - INFO - Class Images Labels P R mAP@.5 mAP@.5:.95
2025-01-09 20:17:21,449 - INFO - all 3925 22419 1 1 0.995 0.995
2025-01-09 20:17:21,449 - INFO - unhelmet 3925 9246 1 1 0.996 0.996
2025-01-09 20:17:21,449 - INFO - helmet 3925 10645 1 0.999 0.996 0.996
2025-01-09 20:17:21,449 - INFO - cigarette 3925 761 1 1 0.995 0.995
2025-01-09 20:17:21,450 - INFO - fire 3925 808 1 1 0.995 0.995
2025-01-09 20:17:21,450 - INFO - smoke 3925 717 1 1 0.995 0.995
2025-01-09 20:17:21,450 - INFO - safebelt 3925 242 1 1 0.995 0.995
2025-01-09 20:17:21,450 - INFO - Speed: 3.2/1.8/4.9 ms inference/NMS/total per 640x640 image at batch-size 64
2025-01-09 20:17:22,088 - INFO - Results saved to runs/test/exp
3925 labels saved to runs/test/exp/labels
```
### Additional
_No response_
|
open
|
2025-01-09T12:21:19Z
|
2025-01-10T02:56:45Z
|
https://github.com/ultralytics/yolov5/issues/13485
|
[
"question",
"detect"
] |
3210448723
| 2
|
aimhubio/aim
|
tensorflow
| 3,230
|
Visualize multiple images of the same step
|
## 🚀 Feature
It would be super useful to visualize multiple stored images of the same step in the same view.
### Motivation
Often you need to visually compare the input, ground truth and prediction, and it currently seems impossible to visualize all of these images for a specific step on the same page.
### Pitch
It would be nice to have multiple images from the same step in the same view.
### Alternatives
An ugly workaround: tiling all the images into one very large image in software before tracking.
|
open
|
2024-09-29T10:21:30Z
|
2024-09-29T10:21:30Z
|
https://github.com/aimhubio/aim/issues/3230
|
[
"type / enhancement"
] |
bhack
| 0
|
CorentinJ/Real-Time-Voice-Cloning
|
deep-learning
| 1,040
|
Text 2 Speech
|
closed
|
2022-03-17T17:24:32Z
|
2022-03-17T17:24:43Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1040
|
[] |
domoskanonos
| 0
|
|
widgetti/solara
|
jupyter
| 827
|
Version check crashes with ValueError: invalid literal for int() with base 10: '0a0' for ipykernel 7.0.0a0
|
This line breaks when a pre-release is installed (e.g., `ipykernel==7.0.0a0`):
https://github.com/widgetti/solara/blob/eb8b827a20586a7e1db716ea532b0d686ef94cee/solara/server/kernel.py#L76
```
> ipykernel_version = tuple(map(int, ipykernel.__version__.split(".")))
E ValueError: invalid literal for int() with base 10: '0a0'
```
If you switch to, say, a `packaging.version.Version` check that takes pre- or dev releases into account, it would guard against such breakage.
Thanks.
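A stdlib-only sketch of a more tolerant parse is below (`packaging.version.Version` is the sturdier option if adding a dependency is acceptable); the helper name is mine, not solara's:

```python
import re

def parse_version(version):
    # Keep only the leading numeric components of each dotted piece,
    # so a pre-release like "7.0.0a0" parses as (7, 0, 0) instead of
    # raising ValueError on int("0a0").
    parts = []
    for piece in version.split("."):
        match = re.match(r"\d+", piece)
        if match is None:
            break
        parts.append(int(match.group()))
    return tuple(parts)
```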
|
closed
|
2024-10-22T14:06:14Z
|
2024-12-03T13:35:08Z
|
https://github.com/widgetti/solara/issues/827
|
[] |
pllim
| 5
|
DistrictDataLabs/yellowbrick
|
scikit-learn
| 722
|
data folder not created
|
As said earlier, `python -m yellowbrick.download` downloads datasets to `yellowbrick/yellowbrick/datasets/fixtures` rather than creating a `data` folder in the current directory. The download path is already set. When I type in the following command, `data = pd.read_csv('data/concrete/concrete.csv')`, an error occurs:
```
Python 3.6.8 |Anaconda, Inc.| (default, Dec 30 2018, 01:22:34)
[GCC 7.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pandas as pd
>>> data = pd.read_csv('Data/concrete/concrete.csv')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/chad7/anaconda3/lib/python3.6/site-packages/pandas/io/parsers.py", line 678, in parser_f
    return _read(filepath_or_buffer, kwds)
  File "/home/chad7/anaconda3/lib/python3.6/site-packages/pandas/io/parsers.py", line 440, in _read
    parser = TextFileReader(filepath_or_buffer, **kwds)
  File "/home/chad7/anaconda3/lib/python3.6/site-packages/pandas/io/parsers.py", line 787, in __init__
    self._make_engine(self.engine)
  File "/home/chad7/anaconda3/lib/python3.6/site-packages/pandas/io/parsers.py", line 1014, in _make_engine
    self._engine = CParserWrapper(self.f, **self.options)
  File "/home/chad7/anaconda3/lib/python3.6/site-packages/pandas/io/parsers.py", line 1708, in __init__
    self._reader = parsers.TextReader(src, **kwds)
  File "pandas/_libs/parsers.pyx", line 384, in pandas._libs.parsers.TextReader.__cinit__
  File "pandas/_libs/parsers.pyx", line 695, in pandas._libs.parsers.TextReader._setup_parser_source
FileNotFoundError: File b'Data/concrete/concrete.csv' does not exist
```
As it says, it's in here:
```
chad7@superuser:~/yellowbrick$ python -m yellowbrick.download
Traceback (most recent call last):
  File "/home/chad7/anaconda3/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/chad7/anaconda3/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/chad7/yellowbrick/yellowbrick/download.py", line 94, in <module>
    download_all(data_home=args.data_home, replace=args.overwrite)
  File "/home/chad7/yellowbrick/yellowbrick/download.py", line 43, in download_all
    meta['url'], meta['signature'], data_home=data_home, replace=replace
  File "/home/chad7/yellowbrick/yellowbrick/datasets/download.py", line 88, in download_data
    ).format(archive))
yellowbrick.exceptions.DatasetsError: dataset already exists at /home/chad7/yellowbrick/yellowbrick/datasets/fixtures/bikeshare.zip, set replace=False to overwrite
```
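As a sketch, under the assumption (taken from the traceback above) that the fixtures live inside the cloned repo rather than in `./data`, the read can be pointed at that location instead; the exact `concrete/concrete.csv` layout inside `fixtures` is assumed here:

```python
import os

# Build the path to where the download actually placed the data,
# relative to the directory containing the cloned yellowbrick repo.
fixtures = os.path.join("yellowbrick", "yellowbrick", "datasets", "fixtures")
concrete_csv = os.path.join(fixtures, "concrete", "concrete.csv")
print(concrete_csv)
# data = pd.read_csv(concrete_csv)  # once the path resolves on your machine
```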
|
closed
|
2019-02-03T12:18:58Z
|
2019-02-13T14:40:30Z
|
https://github.com/DistrictDataLabs/yellowbrick/issues/722
|
[
"type: question"
] |
dnabanita7
| 5
|
junyanz/pytorch-CycleGAN-and-pix2pix
|
computer-vision
| 1,167
|
Fine tuning a downloaded pre-trained cyclegan model
|
Hello,
First of all, thanks for this amazing repo!
After going through the tips & tricks and the first few pages of issues, I haven't found out how I can start with one of your pretrained CycleGAN models and then resume training on my own dataset.
Specifically, it seems that for a given pretrained model (here style_monet_pretrained) we can only download the generator Photo -> Monet. By downloading monet2photo, I guess it is possible to access the second generator Monet -> Photo. However, to resume training, D_A and D_B are necessary. Are these available anywhere? Is it possible to resume training with slightly different datasets on any of your available pretrained models?
I'm trying to obtain decent results with style transfer on 360° images for visualization in VR. I've tried just applying your pretrained models with the right preprocessing and it works quite well, but I was thinking of resuming training with 360° images for the photo database, as this might get better results. I'm not sure it's feasible to carry out the training process entirely in a reasonable timeframe, as this is a personal project and I'm running it on my own computer equipped with just one GPU and a relatively small 360° image database.
Many thanks,
|
open
|
2020-10-21T10:31:55Z
|
2020-10-30T09:31:32Z
|
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1167
|
[] |
SebastianPartarrieu
| 0
|
Asabeneh/30-Days-Of-Python
|
numpy
| 177
|
Hi Asabeneh,
|
I have not been able to do the course for some time now due to unforeseen circumstances, but I am now ready and willing to commit. I hope I can still get into this course and do my best and complete it. Please let me know if there are any issues. If not I'm going full steam ahead. I hope you are ready for all my questions 😃
Kind regards,
Aaron Kennedy.
|
open
|
2021-09-26T02:50:01Z
|
2021-09-26T02:50:01Z
|
https://github.com/Asabeneh/30-Days-Of-Python/issues/177
|
[] |
Aaron-1974
| 0
|
google-deepmind/graph_nets
|
tensorflow
| 136
|
Support Apple Silicon
|
Hi, is it possible that this library will one day be usable on the new Apple Silicon (M1) architecture?
I tried to install it in a _conda_ environment (through _miniforge_) after also installing TensorFlow 2.4 for the Mac M1, but it doesn't work. It gives an error related to _bazel_.
|
closed
|
2021-01-28T19:17:59Z
|
2021-07-29T09:28:33Z
|
https://github.com/google-deepmind/graph_nets/issues/136
|
[] |
tribustale
| 6
|
JoeanAmier/TikTokDownloader
|
api
| 66
|
Suggestion: add batch comment collection!
|
Since the tool can batch-collect all posts from a profile page, it must already be crawling every post link. With those links available, adding a batch comment collector should be feasible. 🙆♂️
|
open
|
2023-09-18T04:42:29Z
|
2023-09-18T13:14:23Z
|
https://github.com/JoeanAmier/TikTokDownloader/issues/66
|
[] |
aminggoodboy
| 1
|
google-research/bert
|
tensorflow
| 830
|
Loss keeps going up and down
|

this is my training command:
```
python run_pretraining.py \
  --input_file="data/wikipedia_corpus/final_tfrecords_sharded/tf_examples.tfrecord000*" \
  --output_dir=results/ \
  --do_train=True \
  --do_eval=True \
  --bert_config_file=data/bert_config.json \
  --train_batch_size=32 \
  --max_seq_length=128 \
  --max_predictions_per_seq=20 \
  --num_train_steps=1000000 \
  --num_warmup_steps=40000 \
  --save_checkpoints_steps=5000 \
  --learning_rate=1e-5
```
any ideas?
|
open
|
2019-08-30T05:02:14Z
|
2019-11-15T05:04:26Z
|
https://github.com/google-research/bert/issues/830
|
[] |
AnakTeka
| 1
|
pydantic/pydantic-ai
|
pydantic
| 581
|
Import "pydantic_ai.usage" could not be resolved
|
In the flight booking example:
```python
from pydantic_ai.usage import Usage, UsageLimits
```
|
closed
|
2025-01-01T13:54:56Z
|
2025-01-13T17:20:52Z
|
https://github.com/pydantic/pydantic-ai/issues/581
|
[
"question",
"more info"
] |
HamzaFarhan
| 5
|
pytorch/pytorch
|
numpy
| 149,501
|
Inductor produces significantly different inference results from the original model
|
### 🐛 Describe the bug
```python
import torch

class Model(torch.nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.linear = torch.nn.Linear(3, 3)
        self.linear.weight = torch.nn.Parameter(torch.eye(3))
        self.linear.bias = torch.nn.Parameter(torch.zeros(3))

    def forward(self, x):
        x = self.linear(x)
        x = torch.nn.functional.tanh(x)
        inv_x, info = torch.linalg.inv_ex(x, check_errors=True)
        return inv_x

model = Model()
inputs = torch.tensor([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]])
res = model(inputs)
res2 = model(inputs)
torch.testing.assert_close(res, res2)

compiled_model = torch.compile(model, backend='inductor')
with torch.no_grad():
    compiled_out = compiled_model(inputs)
torch.testing.assert_close(res, compiled_out)
```
### Error logs
Traceback (most recent call last):
File "/data/qshenaf/remote_pc/LLM4Converter/bugs/0319/torch.nn.functional.tanh.py", line 25, in <module>
torch.testing.assert_close(res, compiled_out)
~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^
File "/data/qshenaf/miniconda3/envs/torch/lib/python3.13/site-packages/torch/testing/_comparison.py", line 1519, in assert_close
raise error_metas[0].to_error(msg)
AssertionError: Tensor-likes are not close!
Mismatched elements: 9 / 9 (100.0%)
Greatest absolute difference: 640.109375 at index (1, 1) (up to 1e-05 allowed)
Greatest relative difference: 0.006599230691790581 at index (0, 0) (up to 1.3e-06 allowed)
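A possible contributing factor worth checking (an editorial observation, not part of the original report): the input matrix is exactly singular, and after `tanh` the last two rows both saturate toward 1.0, so the matrix handed to `linalg.inv_ex` is numerically near-singular. Its condition number can be verified with a small NumPy sketch:

```python
import numpy as np

# Recompute the matrix that reaches linalg.inv_ex: tanh saturates rows 2 and 3
# toward 1.0, leaving the matrix nearly rank-deficient.
x = np.tanh(np.array([[1.0, 2.0, 3.0],
                      [4.0, 5.0, 6.0],
                      [7.0, 8.0, 9.0]]))
print(np.linalg.cond(x))  # very large, so inversion amplifies tiny kernel differences
```

With a condition number this large, even bit-level differences between the eager and Inductor `tanh` kernels can grow to the magnitudes shown in the error log, so this may be ill-conditioning of the test input rather than a miscompilation.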
### Versions
PyTorch version: 2.7.0.dev20250308+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: AlmaLinux 9.4 (Seafoam Ocelot) (x86_64)
GCC version: (GCC) 11.4.1 20231218 (Red Hat 11.4.1-3)
Clang version: 17.0.6 (AlmaLinux OS Foundation 17.0.6-5.el9)
CMake version: version 3.26.5
Libc version: glibc-2.34
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.14.0-427.37.1.el9_4.x86_64-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.6.77
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 3975WX 32-Cores
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 81%
CPU max MHz: 4368.1641
CPU min MHz: 2200.0000
BogoMIPS: 7000.16
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Notaffected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.25.1
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250308+cu126
[pip3] torchaudio==2.6.0.dev20250308+cu126
[pip3] torchvision==0.22.0.dev20250308+cu126
[pip3] triton==3.2.0
[conda] numpy 2.2.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.25.1 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git4b3bb1f8 pypi_0 pypi
[conda] torch 2.7.0.dev20250308+cu126 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250308+cu126 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250308+cu126 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
cc @chauhang @penguinwu
|
open
|
2025-03-19T10:42:37Z
|
2025-03-20T08:53:28Z
|
https://github.com/pytorch/pytorch/issues/149501
|
[
"oncall: pt2",
"oncall: cpu inductor"
] |
Cookiee235
| 2
|
browser-use/browser-use
|
python
| 661
|
Requests are sent to the model multiple times
|
### Bug Description
**IMPORTANT!**
I am addressing the developers directly because I'd like to flag this: while developing my solution, I noticed that a separate request is sent to the AI model at each step.
I am using Anthropic Claude 3.5 Sonnet, which is costly, as it supports images.
Why is it sending a request to the model at each step? Is there any solution? It is very alarming; we have used 2.5 million tokens in just one month.
Here is my code:
````
import asyncio
import os
from dotenv import load_dotenv
from langchain_aws import ChatBedrock
from browser_use import Agent, Controller
from browser_use.browser.context import BrowserContext
# Load environment variables
load_dotenv()
response = ""
# Initialize Controller and Browser
controller = Controller()
# Initialize the ChatBedrock model for AI task execution
model = ChatBedrock(
model_id="anthropic.claude-3-5-sonnet-20240620-v1:0",
region_name=os.getenv("AWS_REGION"),
aws_access_key_id=os.getenv("AWS_ACCESS_KEY_ID"),
aws_secret_access_key=os.getenv("AWS_SECRET_ACCESS_KEY"),
max_tokens=200000
)
username = "faizan.hameed@virtual-force.com"
password = "**********"
url = "https://kualitee_defects.kualitee.com/"
testcases = """
Test Case 1: Successful Login with Valid Credentials
Test Case ID: TC_LOGIN_001
Title: Verify that a registered user can log in successfully
Preconditions:
The application is accessible at https://kualitee_defects.kualitee.com.
The user has a valid, pre-registered username and password.
A supported web browser is open.
Test Steps:
Navigate to Login Page: Open the URL https://kualitee_defects.kualitee.com in the browser.
Verify UI Elements: Ensure that the login page displays the "Username" and "Password" input fields along with the "Login" button.
Enter Credentials: Input a valid username into the Username field and a valid password into the Password field.
Submit Login: Click the "Login" button.
Expected Results:
The system validates the credentials.
The user is redirected to the dashboard or home page, indicating a successful login.
Optionally, a welcome message or user-specific information is displayed.
Postconditions:
A session is created and the user is logged into the application.
Test Case 2: Login Attempt with Invalid Credentials
Test Case ID: TC_LOGIN_002
Title: Verify that the system rejects invalid credentials
Preconditions:
The application is accessible at https://kualitee_defects.kualitee.com.
Use of credentials that are not registered or are incorrect.
Test Steps:
Navigate to Login Page: Open the URL in a browser.
Enter Invalid Credentials: Input an incorrect username and/or incorrect password.
Submit Login: Click the "Login" button.
Expected Results:
The system does not grant access and remains on the login page.
An error message (e.g., "Invalid username or password") is displayed, prompting the user to try again.
Postconditions:
No user session is created.
Test Case 3: Login Attempt with Blank Fields
Test Case ID: TC_LOGIN_003
Title: Verify login behavior when username and/or password fields are left blank
Preconditions:
The application is accessible at https://kualitee_defects.kualitee.com.
Test Steps:
Navigate to Login Page: Open the URL in a browser.
Leave Fields Blank: Do not enter any data in the Username and Password fields.
Attempt to Submit: Click the "Login" button (if enabled) or observe if the button remains disabled.
Expected Results:
The system should prompt the user to enter the required fields.
The "Login" button may be disabled until both fields are filled.
An inline error or validation message appears indicating that the fields cannot be empty.
Test Case 4: Verify Password Field Masking
Test Case ID: TC_LOGIN_004
Title: Verify that the password input is masked
Preconditions:
The application is accessible at https://kualitee_defects.kualitee.com.
Test Steps:
Navigate to Login Page: Open the URL in a browser.
Enter Text in Password Field: Type any characters into the Password field.
Expected Results:
The entered characters are masked (e.g., displayed as dots or asterisks), ensuring that the password is not visible on the screen.
Test Case 5: Verify "Forgot Password" Functionality
Test Case ID: TC_LOGIN_005
Title: Verify that the "Forgot Password" link works correctly
Preconditions:
The application is accessible at https://kualitee_defects.kualitee.com.
The login page displays a "Forgot Password" link.
Test Steps:
Navigate to Login Page: Open the URL in a browser.
Click "Forgot Password": Click the "Forgot Password" link provided on the login page.
Password Recovery: On the password recovery page, enter the registered email address and submit the form.
Expected Results:
The user is redirected to a password recovery page.
A confirmation message is displayed indicating that password reset instructions have been sent to the provided email address.
"""
task = f"""Your task is to generate **structured, valid, and executable Java Selenium test scripts** using TestNG for testing the website functionality at **{url}** based on the **provided test cases:
{testcases}**.
### **Strict Instructions:**
1. **DOM Analysis & Locator Selection:**
- **Analyze the DOM carefully** and choose the **most reliable locator strategy** for each element.
   - **Use ID, Name, or XPath, whichever is most reliable and suitable.**
- **Do NOT use tag-based locators (e.g., by TagName or CSS selectors) unless necessary**.
2. **Login Handling:**
- If **{url} requires login**, first **log in** using:
- **Username:** {username}
- **Password:** {password}
- Then navigate to **{url}** and execute the test cases.
- If **{url} itself is a login page**, generate **only login test cases** as per the provided test cases.
- If **no login is required**, directly generate test scripts for the given test cases.
3. **Test Case Adherence:**
- **Strictly follow the provided test cases**—do not create additional test cases.
- Ensure all test scripts **directly align with the given test case scenarios** without unnecessary assumptions.
4. **Comprehensive Coverage:**
- **Form field validations** (valid/invalid inputs, error messages).
- **Navigation tests** (clicking links, verifying correct URLs).
- **Session handling**, including "Remember Me" functionality.
- **Alternative login options** (Google, Azure AD, SAML) if applicable.
- **Forgot Password functionality** if part of the test cases.
- **Error scenarios and edge cases.**
5. **Test Script Structure & Best Practices:**
- Use **TestNG** with **Selenium WebDriver**.
- Follow **modular programming** (e.g., reusable methods for login, navigation, validation).
- Use **explicit waits** where necessary to handle dynamic elements.
- Include **proper assertions** (e.g., `Assert.assertEquals()`).
- **Write each test case as a separate Java function** with meaningful comments.
- Maintain **proper indentation, brackets, and formatting**.
- **Include all required imports** and setup instructions for a fully executable script.
- If any step is skipped or assumed, document it clearly in comments.
**Important Note:**
Ensure accuracy in one step—**take your time** to generate **ready-to-run, structured, and well-annotated Java Selenium test scripts** that adhere **strictly to the provided test cases**.
"""
@controller.action("Append generated Java test scripts to a file")
async def append_test_scripts(result: str):
    # print("Response is \n " + result)
    with open("kuaLogin.java", "a") as java_file:
        java_file.write(result)  # Ensure each test case is on a new line or separated properly
    return "Test script appended to kuaLogin.java"

async def run_agent():
    """
    Function to run the agent and get the test cases.
    """
    agent = Agent(
        task=task,
        llm=model,
        controller=controller,
    )
    # Run the agent to generate the test cases
    result = await agent.run(max_steps=100)
    print(result)
    if result:
        print("Agent successfully generated the test cases.")
        await append_test_scripts(result)
    else:
        print("No result generated by the agent.")

# Run the main function to start the process
async def Kualitee():
    print("Running the agent to generate test cases...")
    await run_agent()

if __name__ == "__main__":
    asyncio.run(Kualitee())
````
### Reproduction Steps
just run the code.
### Code Sample
```python
async def run_agent():
    """
    Function to run the agent and get the test cases.
    """
    agent = Agent(
        task=task,
        llm=model,
        controller=controller,
    )
    # Run the agent to generate the test cases
    result = await agent.run(max_steps=100)
    print(result)
    if result:
        print("Agent successfully generated the test cases.")
        await append_test_scripts(result)
    else:
        print("No result generated by the agent.")

# Run the main function to start the process
async def Kualitee():
    print("Running the agent to generate test cases...")
    await run_agent()

if __name__ == "__main__":
    asyncio.run(Kualitee())
```
### Version
latest
### LLM Model
Claude 3.5 Sonnet
### Operating System
windows 11
### Relevant Log Output
```shell
```
|
open
|
2025-02-11T06:55:05Z
|
2025-02-24T05:05:53Z
|
https://github.com/browser-use/browser-use/issues/661
|
[
"bug"
] |
faizanhameed-vf
| 8
|
FactoryBoy/factory_boy
|
sqlalchemy
| 972
|
bug with django 4.1
|
#### Description
We are using factory_boy with Django unit tests, but when we upgraded from 4.0.4 to 4.1.1 I got the following errors:
```
Traceback (most recent call last):
File "/opt/atlassian/pipelines/agent/build/.venv/lib/python3.9/site-packages/django/db/models/query.py", line 928, in get_or_create
return self.get(**kwargs), False
File "/opt/atlassian/pipelines/agent/build/.venv/lib/python3.9/site-packages/cacheops/query.py", line 353, in get
return qs._no_monkey.get(qs, *args, **kwargs)
File "/opt/atlassian/pipelines/agent/build/.venv/lib/python3.9/site-packages/django/db/models/query.py", line 650, in get
raise self.model.DoesNotExist(
src.core.timezone.models.TimeZone.DoesNotExist: TimeZone matching query does not exist.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/atlassian/pipelines/agent/build/.venv/lib/python3.9/site-packages/django/db/backends/utils.py", line 89, in _execute
return self.cursor.execute(sql, params)
psycopg2.errors.UniqueViolation: duplicate key value violates unique constraint "core_timezone_pkey"
DETAIL: Key (id)=(4) already exists.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/atlassian/pipelines/agent/build/.venv/src/factory-boy/factory/django.py", line 143, in _get_or_create
instance, _created = manager.get_or_create(*args, **key_fields)
File "/opt/atlassian/pipelines/agent/build/.venv/lib/python3.9/site-packages/django/db/models/manager.py", line 85, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/opt/atlassian/pipelines/agent/build/.venv/lib/python3.9/site-packages/django/db/models/query.py", line 935, in get_or_create
return self.create(**params), True
File "/opt/atlassian/pipelines/agent/build/.venv/lib/python3.9/site-packages/django/db/models/query.py", line 671, in create
obj.save(force_insert=True, using=self.db)
File "/opt/atlassian/pipelines/agent/build/.venv/lib/python3.9/site-packages/django/db/models/base.py", line 831, in save
self.save_base(
File "/opt/atlassian/pipelines/agent/build/.venv/lib/python3.9/site-packages/django/db/models/base.py", line 882, in save_base
updated = self._save_table(
File "/opt/atlassian/pipelines/agent/build/.venv/lib/python3.9/site-packages/django/db/models/base.py", line 1025, in _save_table
results = self._do_insert(
File "/opt/atlassian/pipelines/agent/build/.venv/lib/python3.9/site-packages/django/db/models/base.py", line 1066, in _do_insert
return manager._insert(
File "/opt/atlassian/pipelines/agent/build/.venv/lib/python3.9/site-packages/django/db/models/manager.py", line 85, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/opt/atlassian/pipelines/agent/build/.venv/lib/python3.9/site-packages/django/db/models/query.py", line 1790, in _insert
return query.get_compiler(using=using).execute_sql(returning_fields)
File "/opt/atlassian/pipelines/agent/build/.venv/lib/python3.9/site-packages/django/db/models/sql/compiler.py", line 1657, in execute_sql
cursor.execute(sql, params)
File "/opt/atlassian/pipelines/agent/build/.venv/lib/python3.9/site-packages/django/db/backends/utils.py", line 103, in execute
return super().execute(sql, params)
File "/opt/atlassian/pipelines/agent/build/.venv/lib/python3.9/site-packages/cacheops/transaction.py", line 98, in execute
result = self._no_monkey.execute(self, sql, params)
File "/opt/atlassian/pipelines/agent/build/.venv/lib/python3.9/site-packages/django/db/backends/utils.py", line 67, in execute
return self._execute_with_wrappers(
File "/opt/atlassian/pipelines/agent/build/.venv/lib/python3.9/site-packages/django/db/backends/utils.py", line 80, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/opt/atlassian/pipelines/agent/build/.venv/lib/python3.9/site-packages/django/db/backends/utils.py", line 89, in _execute
return self.cursor.execute(sql, params)
File "/opt/atlassian/pipelines/agent/build/.venv/lib/python3.9/site-packages/django/db/utils.py", line 91, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/opt/atlassian/pipelines/agent/build/.venv/lib/python3.9/site-packages/django/db/backends/utils.py", line 89, in _execute
return self.cursor.execute(sql, params)
django.db.utils.IntegrityError: duplicate key value violates unique constraint "core_timezone_pkey"
DETAIL: Key (id)=(4) already exists.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/atlassian/pipelines/agent/build/.venv/lib/python3.9/site-packages/django/test/testcases.py", line 1448, in setUpClass
cls.setUpTestData()
File "/opt/atlassian/pipelines/agent/build/adpp_backend/src/tests.py", line 47, in setUpTestData
cls._admin_user = cls._create_user()
File "/opt/atlassian/pipelines/agent/build/adpp_backend/src/tests.py", line 41, in _create_user
return AdminUserFactory()
File "/opt/atlassian/pipelines/agent/build/.venv/src/factory-boy/factory/base.py", line 40, in __call__
return cls.create(**kwargs)
File "/opt/atlassian/pipelines/agent/build/.venv/src/factory-boy/factory/base.py", line 528, in create
return cls._generate(enums.CREATE_STRATEGY, kwargs)
File "/opt/atlassian/pipelines/agent/build/.venv/src/factory-boy/factory/django.py", line 120, in _generate
return super()._generate(strategy, params)
File "/opt/atlassian/pipelines/agent/build/.venv/src/factory-boy/factory/base.py", line 465, in _generate
return step.build()
File "/opt/atlassian/pipelines/agent/build/.venv/src/factory-boy/factory/builder.py", line 260, in build
step.resolve(pre)
File "/opt/atlassian/pipelines/agent/build/.venv/src/factory-boy/factory/builder.py", line 201, in resolve
self.attributes[field_name] = getattr(self.stub, field_name)
File "/opt/atlassian/pipelines/agent/build/.venv/src/factory-boy/factory/builder.py", line 346, in __getattr__
value = value.evaluate_pre(
File "/opt/atlassian/pipelines/agent/build/.venv/src/factory-boy/factory/declarations.py", line 48, in evaluate_pre
return self.evaluate(instance, step, context)
File "/opt/atlassian/pipelines/agent/build/.venv/src/factory-boy/factory/declarations.py", line 411, in evaluate
return step.recurse(subfactory, extra, force_sequence=force_sequence)
File "/opt/atlassian/pipelines/agent/build/.venv/src/factory-boy/factory/builder.py", line 218, in recurse
return builder.build(parent_step=self, force_sequence=force_sequence)
File "/opt/atlassian/pipelines/agent/build/.venv/src/factory-boy/factory/builder.py", line 260, in build
step.resolve(pre)
File "/opt/atlassian/pipelines/agent/build/.venv/src/factory-boy/factory/builder.py", line 201, in resolve
self.attributes[field_name] = getattr(self.stub, field_name)
File "/opt/atlassian/pipelines/agent/build/.venv/src/factory-boy/factory/builder.py", line 346, in __getattr__
value = value.evaluate_pre(
File "/opt/atlassian/pipelines/agent/build/.venv/src/factory-boy/factory/declarations.py", line 48, in evaluate_pre
return self.evaluate(instance, step, context)
File "/opt/atlassian/pipelines/agent/build/.venv/src/factory-boy/factory/declarations.py", line 411, in evaluate
return step.recurse(subfactory, extra, force_sequence=force_sequence)
File "/opt/atlassian/pipelines/agent/build/.venv/src/factory-boy/factory/builder.py", line 218, in recurse
return builder.build(parent_step=self, force_sequence=force_sequence)
File "/opt/atlassian/pipelines/agent/build/.venv/src/factory-boy/factory/builder.py", line 260, in build
step.resolve(pre)
File "/opt/atlassian/pipelines/agent/build/.venv/src/factory-boy/factory/builder.py", line 201, in resolve
self.attributes[field_name] = getattr(self.stub, field_name)
File "/opt/atlassian/pipelines/agent/build/.venv/src/factory-boy/factory/builder.py", line 346, in __getattr__
value = value.evaluate_pre(
File "/opt/atlassian/pipelines/agent/build/.venv/src/factory-boy/factory/declarations.py", line 48, in evaluate_pre
return self.evaluate(instance, step, context)
File "/opt/atlassian/pipelines/agent/build/.venv/src/factory-boy/factory/declarations.py", line 411, in evaluate
return step.recurse(subfactory, extra, force_sequence=force_sequence)
File "/opt/atlassian/pipelines/agent/build/.venv/src/factory-boy/factory/builder.py", line 218, in recurse
return builder.build(parent_step=self, force_sequence=force_sequence)
File "/opt/atlassian/pipelines/agent/build/.venv/src/factory-boy/factory/builder.py", line 264, in build
instance = self.factory_meta.instantiate(
File "/opt/atlassian/pipelines/agent/build/.venv/src/factory-boy/factory/base.py", line 317, in instantiate
return self.factory._create(model, *args, **kwargs)
File "/opt/atlassian/pipelines/agent/build/.venv/src/factory-boy/factory/django.py", line 166, in _create
return cls._get_or_create(model_class, *args, **kwargs)
File "/opt/atlassian/pipelines/agent/build/.venv/src/factory-boy/factory/django.py", line 147, in _get_or_create
for lookup, value in cls._original_params.items()
AttributeError: type object 'TimeZoneFactory' has no attribute '_original_params'
```
#### To Reproduce
*Share how the bug happened:*
##### Model / Factory code
```python
# Include your factories and models here
import factory
from factory.faker import faker
from .models import TimeZone

fake = faker.Faker()

class TimeZoneFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = TimeZone

    name = factory.Sequence(lambda n: f'{fake.name()}_{n}')

# models
class TimeZone(BaseModel):
    """
    TimeZone
    """
    class Meta:
        db_table = 'core_timezone'
        verbose_name = _('time zone')
        verbose_name_plural = _('time zones')

    name = models.TextField(_('name'), unique=True)
```
##### The issue
This factory is used everywhere in the project; I am not sure why this error appeared after upgrading to Django 4.1.1.
|
closed
|
2022-09-19T06:07:36Z
|
2022-11-16T08:00:40Z
|
https://github.com/FactoryBoy/factory_boy/issues/972
|
[] |
hishamkaram
| 14
|
asacristani/fastapi-rocket-boilerplate
|
pytest
| 27
|
Feature Suggestion: Adapter for multiples autentications
|
A suggested feature would be to add support for adapters that allow integrating various forms of authentication: JWT, OAuth, OpenID Connect (OIDC), and others.
|
open
|
2023-10-11T11:58:41Z
|
2023-10-11T11:58:41Z
|
https://github.com/asacristani/fastapi-rocket-boilerplate/issues/27
|
[] |
dyohan9
| 0
|
microsoft/unilm
|
nlp
| 850
|
BEiT v2 pretrained checkpoints links are not working
|
Hey,
Thanks for releasing the BEiT v2 code. However, the pretrained checkpoints are not working, I am getting an error that says "blob doesn't exist" when I try to download any of the four variants of the pretrained models. Below are the links that are not working:
- BEiT-base: [beitv2_base_patch16_224_pt1k_ft21k](https://conversationhub.blob.core.windows.net/beit-share-public/beitv2/pt_cls_pt_base_early_9_head_2_vqkd_encoder_base_decoder_3x768x12_clip_1600e-6d32a5f7.pth)
- BEiT-large: [beitv2_large_patch16_224_pt1k_ft21k](https://conversationhub.blob.core.windows.net/beit-share-public/beitv2/pt_cls_pt_large_early_21_head_2_vqkd_encoder_base_decoder_3x768x12_clip_1600e-726bb6d5.pth)
- BEiT-base: [beitv2_base_patch16_224_pt1k](https://conversationhub.blob.core.windows.net/beit-share-public/beitv2/pt_cls_pt_base_early_9_head_2_vqkd_encoder_base_decoder_3x768x12_clip_1600e-6d32a5f7.pth)
- BEiT-large: [beitv2_large_patch16_224_pt1k](https://conversationhub.blob.core.windows.net/beit-share-public/beitv2/pt_cls_pt_large_early_21_head_2_vqkd_encoder_base_decoder_3x768x12_clip_1600e-726bb6d5.pth)
Thanks,
Eliahu
|
closed
|
2022-09-04T08:11:43Z
|
2022-09-05T03:13:23Z
|
https://github.com/microsoft/unilm/issues/850
|
[] |
eliahuhorwitz
| 2
|
albumentations-team/albumentations
|
machine-learning
| 1,876
|
affine with fit_output set to true does not scale bounding boxes correctly
|
## Describe the bug
Bounding boxes are not augmented as expected when fit_output=True.
### To Reproduce
perform affine augmentation with fit_output=True and observe the decoupling of bounding boxes and objects
### Expected behavior
Bounding boxes should be augmented such that the final image is correctly described by them.
### Actual behavior
Bounding boxes are not to be trusted
### Screenshots

|
closed
|
2024-08-13T21:45:47Z
|
2024-08-15T23:16:59Z
|
https://github.com/albumentations-team/albumentations/issues/1876
|
[
"bug"
] |
dominicdill
| 1
|
Avaiga/taipy
|
data-visualization
| 2,022
|
I visited the website
|
**Suggested updates**
● Update to new fonts
● Create new features
● Add a white (light) mode
● Add a chatbot
|
closed
|
2024-10-11T12:52:25Z
|
2024-10-11T13:06:09Z
|
https://github.com/Avaiga/taipy/issues/2022
|
[] |
kamali1331
| 0
|
nl8590687/ASRT_SpeechRecognition
|
tensorflow
| 15
|
File-not-found problem
|
Hello. Last time I kept running into file-not-found errors, and I found that the GetData function in readdata22-2.py has no code for reading files when self.type == 'dev', which makes the test code at line 210 of speechmodel.py fail because it cannot find the file. After I commented out line 210, it runs on Windows. But on Linux, when I copied the program directly to the server to run on the GPU, I again got a path error: FileNotFoundError: [Errno 2] No such file or directory: 'dataset/wav/train/A11/A11_183.WAV', even though that audio file does exist under the path. I'd like to ask whether there is anything that needs to be changed between Windows and Linux.
|
closed
|
2018-05-12T14:08:25Z
|
2018-05-27T17:16:41Z
|
https://github.com/nl8590687/ASRT_SpeechRecognition/issues/15
|
[] |
huanzoey
| 3
|
lorien/grab
|
web-scraping
| 218
|
Error installing
|
When I run the installation, it gives this output:
> Requirement already satisfied: grab in /usr/local/lib/python2.7/dist-packages
> Requirement already satisfied: weblib>=0.1.23 in /usr/local/lib/python2.7/dist-packages (from grab)
> Requirement already satisfied: six in /usr/local/lib/python2.7/dist-packages (from grab)
> Requirement already satisfied: user_agent in /usr/local/lib/python2.7/dist-packages (from grab)
> Requirement already satisfied: pytils in /usr/local/lib/python2.7/dist-packages (from weblib>=0.1.23->grab)
> Requirement already satisfied: lxml; platform_system != "Windows" in /usr/local/lib/python2.7/dist-packages (from weblib>=0.1.23->grab)
I installed everything as written in the instructions. What could I be doing wrong?
|
closed
|
2017-01-30T22:33:30Z
|
2017-02-03T08:35:37Z
|
https://github.com/lorien/grab/issues/218
|
[] |
DenoBY
| 7
|
biolab/orange3
|
pandas
| 6,099
|
EntropyMDL._normalize does not account if X is 0's
|
<!--
Thanks for taking the time to report a bug!
If you're raising an issue about an add-on (i.e., installed via Options > Add-ons), raise an issue in the relevant add-on's issue tracker instead. See: https://github.com/biolab?q=orange3
To fix the bug, we need to be able to reproduce it. Please answer the following questions to the best of your ability.
-->
**What's wrong?**
<!-- Be specific, clear, and concise. Include screenshots if relevant. -->
<!-- If you're getting an error message, copy it, and enclose it with three backticks (```). -->
[`EntropyMDL._normalize()`](https://github.com/biolab/orange3/blob/71b790662cfb5c712be16c067060eaddc671b034/Orange/preprocess/discretize.py#L691) divides by the scale which is the sum.
When X (across the axis) is 0's scale is 0 and therefore the denominator is 0 and you get a warning and a nan back.
`python3.9/site-packages/Orange/preprocess/discretize.py:551: RuntimeWarning: invalid value encountered in true_divide
return X / scale`
I'm not sure if we'd want to enforce a warning, or just return 0 for those.
**How can we reproduce the problem?**
<!-- Upload a zip with the .ows file and data. -->
<!-- Describe the steps (open this widget, click there, then add this...) -->
```python
from Orange.preprocess import EntropyMDL
EntropyMDL._normalize( [[1., 0.], [0., 0.]], axis=1)
```
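One possible guard (a sketch of mine, not Orange code) is to replace a zero scale with 1 before dividing, so an all-zero row comes back as zeros instead of NaNs and no warning is raised:

```python
import numpy as np

def normalize_safe(X, axis=1):
    """Sketch of a zero-safe normalization: all-zero rows stay all zeros."""
    X = np.asarray(X, dtype=float)
    scale = X.sum(axis=axis, keepdims=True)
    # Where the sum is 0 the row is all zeros; dividing by 1 keeps it all zeros.
    scale = np.where(scale == 0, 1.0, scale)
    return X / scale

print(normalize_safe([[1.0, 0.0], [0.0, 0.0]]))
```

Whether to silently return zeros or to warn explicitly is exactly the design question raised below.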
**What's your environment?**
<!-- To find your Orange version, see "Help → About → Version" or `Orange.version.full_version` in code -->
- Operating system: Ubuntu/Linux
- Orange version: 3.32.0
- How you installed Orange: mamba/conda
|
closed
|
2022-08-18T22:42:23Z
|
2022-08-22T09:33:10Z
|
https://github.com/biolab/orange3/issues/6099
|
[
"bug report"
] |
davzaman
| 3
|
JaidedAI/EasyOCR
|
machine-learning
| 1,103
|
Japanese Kanji character
|
I found that the Japanese character "絆" is missing from the file "ja_char.txt", and I cannot recognize this kanji using the Japanese model.
|
open
|
2023-08-02T15:20:34Z
|
2023-08-02T15:20:34Z
|
https://github.com/JaidedAI/EasyOCR/issues/1103
|
[] |
98andpeople
| 0
|
google-research/bert
|
nlp
| 799
|
Why estimator.predict didn't really run inference graph?
|
I want to evaluate the inference time of BERT. I found weird behavior when I executed the code below
```python
t_start = time.time()
result = estimator.predict(input_fn=predict_input_fn)
t_consumed = time.time() - t_start
print("Time consumed after 'estimator.predict' is %f" % t_consumed)
output_predict_file = os.path.join(FLAGS.output_dir, "test_results.tsv")
with tf.gfile.GFile(output_predict_file, "w") as writer:
num_written_lines = 0
tf.logging.info("***** Predict results *****")
for (i, prediction) in enumerate(result):
probabilities = prediction["probabilities"]
if i >= num_actual_predict_examples:
break
output_line = "\t".join(
str(class_probability)
for class_probability in probabilities) + "\n"
writer.write(output_line)
num_written_lines += 1
assert num_written_lines == num_actual_predict_examples
t_consumed = time.time() - t_start
print("Time consumed on %d examples is %f" % (len(predict_examples),
t_consumed))
```
Which outputed
```
Time consumed after 'estimator.predict' is 0.000005
Time consumed on 1199 examples is 12.387969
```
It seems that `estimator.predict` didn't run inference at all until Python read the values of `result`. I'm wondering why?
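For context (not from the original issue): `tf.estimator.Estimator.predict` returns a lazy generator, so the graph only executes as the results are iterated. The same effect can be shown with a plain Python generator; `slow_predict` below is a hypothetical stand-in for the estimator:

```python
import time

def slow_predict(examples):
    """Hypothetical stand-in for estimator.predict: a lazy generator."""
    for x in examples:
        time.sleep(0.01)  # simulate per-example inference cost
        yield x * 2

t0 = time.time()
result = slow_predict(range(100))  # returns almost instantly: nothing has run yet
setup_time = time.time() - t0

predictions = list(result)         # iterating is what actually does the work
total_time = time.time() - t0

print(f"after predict(): {setup_time:.4f}s, after iterating: {total_time:.4f}s")
```

Timing should therefore be measured around the iteration loop, not around the `predict()` call itself.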
|
open
|
2019-08-12T05:04:04Z
|
2019-08-14T02:30:44Z
|
https://github.com/google-research/bert/issues/799
|
[] |
DataTerminatorX
| 1
|
keras-team/keras
|
python
| 20,335
|
Floating point exception (core dumped) with onednn opt on tensorflow backend
|
As shown in this [colab](https://colab.research.google.com/drive/1XjoAtDP4SC2qyLWslW8qWzqQusn9eDOu?usp=sharing), the kernel, not the program, crashes if the OneDNN OPT is on and the output tensor shape contains a zero dimension.
As discussed in tensorflow/tensorflow#77131, and also shown in the above colab, we found that the error disappeared after downgrading from Keras 3.0 to Keras 2.0.
Therefore, I think some errors were introduced in the update from Keras 2.0 to Keras 3.0.
|
closed
|
2024-10-09T08:41:45Z
|
2024-11-14T02:01:52Z
|
https://github.com/keras-team/keras/issues/20335
|
[
"type:bug/performance",
"stat:awaiting response from contributor",
"stale",
"backend:tensorflow"
] |
Shuo-Sun20
| 4
|
ExpDev07/coronavirus-tracker-api
|
rest-api
| 235
|
Thank you for your great API!
|
Hi, thank you for your great API
I have developed a simple Android app
https://github.com/kiyosuke/corona-grapher
Thank you :)
|
closed
|
2020-03-30T08:56:43Z
|
2020-04-19T18:26:29Z
|
https://github.com/ExpDev07/coronavirus-tracker-api/issues/235
|
[
"user-created"
] |
kiyosuke
| 0
|
pandas-dev/pandas
|
data-science
| 60,573
|
BUG: NameError: name 'pa' is not defined despite `pyarrow` is installed
|
### Pandas version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [X] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
import pyarrow as pa
data = {
"foo": ["A", "B", "C"],
"prop": [0.5, 0.8, 0.7]
}
df = pd.DataFrame(data)
df["prop"].astype('float64[pyarrow]')
```
### Issue Description
Cannot change a column type to `float64[pyarrow]`. The error I get is:
```
NameError: name 'pa' is not defined
```
### Expected Behavior
I'd expect to get the column converted properly
```
0 0.5
1 0.8
2 0.7
Name: prop, dtype: double[pyarrow]
```
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.12.8
python-bits : 64
OS : Darwin
OS-release : 24.1.0
Version : Darwin Kernel Version 24.1.0: Thu Oct 10 21:02:45 PDT 2024; root:xnu-11215.41.3~2/RELEASE_ARM64_T8112
machine : arm64
processor : arm
byteorder : little
LC_ALL : None
LANG : None
LOCALE : None.UTF-8
pandas : 2.2.3
numpy : 1.26.4
pytz : 2024.1
dateutil : 2.9.0.post0
pip : 24.2
Cython : None
sphinx : None
IPython : 8.27.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.3
blosc : None
bottleneck : 1.4.2
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.4
lxml.etree : None
matplotlib : 3.9.2
numba : None
numexpr : 2.10.1
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 17.0.0
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2023.3
qtpy : None
pyqt5 : None
</details>
|
open
|
2024-12-15T11:15:59Z
|
2025-02-21T02:22:42Z
|
https://github.com/pandas-dev/pandas/issues/60573
|
[
"Bug",
"Arrow"
] |
emmansh
| 10
|
Lightning-AI/pytorch-lightning
|
deep-learning
| 19,696
|
LightningCLI docs example for EarlyStopping missing required args in config file
|
### 📚 Documentation
# Error Description
In the Lightning CLI tutorial (https://lightning.ai/docs/pytorch/stable/cli/lightning_cli_advanced_3.html), a misleading example of how to set callbacks in YAML files leads to an error.
## How to fix?
The original tutorial gives the following simple configuration file that defines two callbacks
```
# wrong sample configuration file in tutorial
trainer:
callbacks:
- class_path: lightning.pytorch.callbacks.EarlyStopping
init_args:
patience: 5
- class_path: lightning.pytorch.callbacks.LearningRateMonitor
init_args:
...
```
However, this example YAML file does not work correctly. The following YAML file produces the correct results.
```
# Fixed config.yaml
trainer:
callbacks:
- class_path: EarlyStopping
init_args:
patience: 5
- class_path: LearningRateMonitor
init_args:
logging_interval: 'epoch'
```
## What version are you seeing the problem on?
lightning v2.1.2 & v2.2.1
## How to reproduce the bug
```
# main.py
from lightning.pytorch.cli import LightningCLI
from lightning.pytorch.demos.boring_classes import DemoModel, BoringDataModule
def cli_main():
cli = LightningCLI(DemoModel, BoringDataModule)
if __name__ == "__main__":
cli_main()
```
```
# config.yaml
trainer:
callbacks:
- class_path: lightning.pytorch.callbacks.EarlyStopping
init_args:
patience: 5
- class_path: lightning.pytorch.callbacks.LearningRateMonitor
init_args:
logging_interval: 'epoch'
```
## Error messages and logs
```
usage: main.py [-h] [-c CONFIG] [--print_config[=flags]] {fit,validate,test,predict} ...
error: Parser key "trainer.callbacks":
Does not validate against any of the Union subtypes
Subtypes: (typing.List[lightning.pytorch.callbacks.callback.Callback], <class 'lightning.pytorch.callbacks.callback.Callback'>, <class 'NoneType'>)
Errors:
- Problem with given class_path 'lightning.pytorch.callbacks.EarlyStopping':
Validation failed: Key "monitor" is required but not included in config object or its value is None.
- Not a valid subclass of Callback
Subclass types expect one of:
- a class path (str)
- a dict with class_path entry
- a dict without class_path but with init_args entry (class path given previously)
- Expected a <class 'NoneType'>
Given value type: <class 'list'>
Given value: [Namespace(class_path='lightning.pytorch.callbacks.EarlyStopping', init_args=Namespace(monitor=None, min_delta=0.0, patience=5, verbose=False, mode='min', strict=True, check_finite=True, stopping_threshold=None, divergence_threshold=None, check_on_train_epoch_end=None, log_rank_zero_only=False)), Namespace(class_path='lightning.pytorch.callbacks.LearningRateMonitor', init_args=Namespace(logging_interval='epoch', log_momentum=False, log_weight_decay=False))]
```
## More Info
I confirmed that I have installed the latest jsonargparse using `pip install "jsonargparse[signatures]"`
This tutorial has existed for a long time. If this error did not occur before, I suspect that jsonargparse may have changed how it parses callback configuration.
cc @borda @carmocca @mauvilsa
|
closed
|
2024-03-25T13:37:37Z
|
2024-03-27T22:16:30Z
|
https://github.com/Lightning-AI/pytorch-lightning/issues/19696
|
[
"help wanted",
"docs",
"lightningcli"
] |
Lunamos
| 2
|
sanic-org/sanic
|
asyncio
| 2,832
|
Outdated docs on http to https redirection
|
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Describe the bug
I needed to set up http-to-https redirection for my app and came across this:
https://sanic.dev/en/guide/how-to/tls.html#redirect-http-to-https-with-certificate-requests-still-over-http
But it fails because `is_running` isn't a member of `Sanic`. I changed it to instead use `app.state.is_running` and then the second app would start OK, but my catch-all route (or any route) couldn't be found. I then replaced the `signalize` and `finalize` lines with:
```py
await server.startup()
```
and now everything works.
### Code snippet
My final runner function that works looks like this:
```py
async def _RunServer(app, server):
app.state.is_running = True
try:
await server.startup()
await server.serve_forever()
finally:
app.state.is_running = False
app.state.is_stopping = True
```
### Expected Behavior
I think the docs need to be updated to something like the above changes, but maybe even this approach is outdated? I found #2348 but am not sure if that's what I should be doing instead. Thanks!
### How do you run Sanic?
Sanic CLI
### Operating System
Linux
### Sanic Version
23.6.0
### Additional context
_No response_
|
open
|
2023-10-05T17:07:00Z
|
2023-10-05T17:07:00Z
|
https://github.com/sanic-org/sanic/issues/2832
|
[
"bug"
] |
dfb
| 0
|
tableau/server-client-python
|
rest-api
| 1,430
|
TSC.SubscriptionItem, Payload is either malformed or incomplete. (0x5CE10192 : Specifying subscription attachments not allowed.)
|
Hello, I'm trying to create a subscription for a user on a view. I used this piece of code:
```python
schedule_id = '60c392fc-d902-44b0-92cf-cfa3b512630f'
user_id = df[0].values[i]
# Create the new SubscriptionItem object with variables from above.
new_sub = TSC.SubscriptionItem('EssaiGD', schedule_id, user_id, target)
# (Optional) Set other fields. Any of these can be added or removed.
new_sub.attach_image = True
new_sub.attach_pdf = True
new_sub.message = "You have an alert!"
new_sub.page_orientation = TSC.PDFRequestOptions.Orientation.Portrait
new_sub.page_size_option = TSC.PDFRequestOptions.PageType.A4
new_sub.send_if_view_empty = True
# Create the new subscription on the site you are logged in to.
new_sub = server.subscriptions.create(new_sub)
```
When I launch it, I get the error 0x5CE10192: Specifying subscription attachments not allowed.
Thanks
Version :
Product version: 2024.2.0
REST API version: 3.23
Build number: 20242.24.0711.1807
Python 3.10.6
|
closed
|
2024-07-29T14:45:30Z
|
2024-10-04T23:35:15Z
|
https://github.com/tableau/server-client-python/issues/1430
|
[
"help wanted"
] |
gdrau
| 11
|
iperov/DeepFaceLab
|
deep-learning
| 883
|
Error when applying XSeg Mask. Help would be appreciated ASAP
|
Full terminal window & error message:
Applying trained XSeg model to aligned/ folder.
Traceback (most recent call last):
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1334, in _do_call
return fn(*args)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1317, in _run_fn
self._extend_graph()
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1352, in _extend_graph
tf_session.ExtendSession(self._session)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot assign a device for operation XSeg/conv01/conv/weight: {{node XSeg/conv01/conv/weight}}was explicitly assigned to /device:GPU:0 but available devices are [ /job:localhost/replica:0/task:0/device:CPU:0 ]. Make sure the device specification refers to a valid device. The requested device appears to be a GPU, but CUDA is not enabled.
[[{{node XSeg/conv01/conv/weight}}]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\main.py", line 324, in <module>
arguments.func(arguments)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\main.py", line 285, in process_xsegapply
XSegUtil.apply_xseg (Path(arguments.input_dir), Path(arguments.model_dir))
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\mainscripts\XSegUtil.py", line 32, in apply_xseg
raise_on_no_model_files=True)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\facelib\XSegNet.py", line 68, in __init__
do_init = not model.load_weights( model_file_path )
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\layers\Saveable.py", line 96, in load_weights
nn.batch_set_value(tuples)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\ops\__init__.py", line 29, in batch_set_value
nn.tf_sess.run(assign_ops, feed_dict=feed_dict)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 929, in run
run_metadata_ptr)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1152, in _run
feed_dict_tensor, options, run_metadata)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1328, in _do_run
run_metadata)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1348, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot assign a device for operation XSeg/conv01/conv/weight: node XSeg/conv01/conv/weight (defined at C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:76) was explicitly assigned to /device:GPU:0 but available devices are [ /job:localhost/replica:0/task:0/device:CPU:0 ]. Make sure the device specification refers to a valid device. The requested device appears to be a GPU, but CUDA is not enabled.
[[node XSeg/conv01/conv/weight (defined at C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:76) ]]
Caused by op 'XSeg/conv01/conv/weight', defined at:
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\main.py", line 324, in <module>
arguments.func(arguments)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\main.py", line 285, in process_xsegapply
XSegUtil.apply_xseg (Path(arguments.input_dir), Path(arguments.model_dir))
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\mainscripts\XSegUtil.py", line 32, in apply_xseg
raise_on_no_model_files=True)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\facelib\XSegNet.py", line 41, in __init__
self.model_weights = self.model.get_weights()
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 77, in get_weights
self.build()
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 65, in build
self._build_sub(v[name],name)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 35, in _build_sub
layer.build()
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 65, in build
self._build_sub(v[name],name)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 33, in _build_sub
layer.build_weights()
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\layers\Conv2D.py", line 76, in build_weights
self.weight = tf.get_variable("weight", (self.kernel_size,self.kernel_size,self.in_ch,self.out_ch), dtype=self.dtype, initializer=kernel_initializer, trainable=self.trainable )
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 1479, in get_variable
aggregation=aggregation)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 1220, in get_variable
aggregation=aggregation)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 547, in get_variable
aggregation=aggregation)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 499, in _true_getter
aggregation=aggregation)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 911, in _get_single_variable
aggregation=aggregation)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variables.py", line 213, in __call__
return cls._variable_v1_call(*args, **kwargs)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variables.py", line 176, in _variable_v1_call
aggregation=aggregation)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variables.py", line 155, in <lambda>
previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 2495, in default_variable_creator
expected_shape=expected_shape, import_scope=import_scope)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variables.py", line 217, in __call__
return super(VariableMetaclass, cls).__call__(*args, **kwargs)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variables.py", line 1395, in __init__
constraint=constraint)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variables.py", line 1509, in _init_from_args
name=name)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\state_ops.py", line 79, in variable_op_v2
shared_name=shared_name)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_state_ops.py", line 1424, in variable_v2
shared_name=shared_name, name=name)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 788, in _apply_op_helper
op_def=op_def)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3300, in create_op
op_def=op_def)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 1801, in __init__
self._traceback = tf_stack.extract_stack()
InvalidArgumentError (see above for traceback): Cannot assign a device for operation XSeg/conv01/conv/weight: node XSeg/conv01/conv/weight (defined at C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:76) was explicitly assigned to /device:GPU:0 but available devices are [ /job:localhost/replica:0/task:0/device:CPU:0 ]. Make sure the device specification refers to a valid device. The requested device appears to be a GPU, but CUDA is not enabled.
[[node XSeg/conv01/conv/weight (defined at C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:76) ]]
Please send help ASAP. I really need to learn how to do a full head deepfake for my English Assessment. Thanks
|
open
|
2020-09-04T06:30:57Z
|
2023-06-08T21:18:14Z
|
https://github.com/iperov/DeepFaceLab/issues/883
|
[] |
Xlectron
| 6
|
graphistry/pygraphistry
|
jupyter
| 546
|
[BUG] gfql e not exported
|
**Describe the bug**
```
import pandas as pd
import graphistry
from graphistry import (
# graph operators
n, e_undirected, e_forward, e_reverse, e,
# attribute predicates
is_in, ge, startswith, contains, match as match_re
)
```
==>
```
ImportError: cannot import name 'e' from 'graphistry' (/opt/conda/envs/rapids/lib/python3.8/site-packages/graphistry/__init__.py)
```
===
pygraphistry 0.33.0
|
closed
|
2024-02-22T23:22:54Z
|
2024-02-25T01:04:50Z
|
https://github.com/graphistry/pygraphistry/issues/546
|
[
"bug",
"p3",
"gfql"
] |
lmeyerov
| 0
|
TencentARC/GFPGAN
|
deep-learning
| 589
|
not running
|
python inference_gfpgan.py -i inputs/whole_imgs -o results -v 1.4 -s 2
Traceback (most recent call last):
File "/home/MKN/GFPGAN/inference_gfpgan.py", line 7, in <module>
from basicsr.utils import imwrite
File "/home/MKN/GFPGAN/myenv/lib/python3.12/site-packages/basicsr-1.4.2-py3.12.egg/basicsr/__init__.py", line 4, in <module>
from .data import *
File "/home/MKN/GFPGAN/myenv/lib/python3.12/site-packages/basicsr-1.4.2-py3.12.egg/basicsr/data/__init__.py", line 22, in <module>
_dataset_modules = [importlib.import_module(f'basicsr.data.{file_name}') for file_name in dataset_filenames]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/importlib/__init__.py", line 90, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/MKN/GFPGAN/myenv/lib/python3.12/site-packages/basicsr-1.4.2-py3.12.egg/basicsr/data/realesrgan_dataset.py", line 11, in <module>
from basicsr.data.degradations import circular_lowpass_kernel, random_mixed_kernels
File "/home/MKN/GFPGAN/myenv/lib/python3.12/site-packages/basicsr-1.4.2-py3.12.egg/basicsr/data/degradations.py", line 8, in <module>
from torchvision.transforms.functional_tensor import rgb_to_grayscale
ModuleNotFoundError: No module named 'torchvision.transforms.functional_tensor'
|
open
|
2024-10-24T14:25:04Z
|
2024-10-30T20:52:07Z
|
https://github.com/TencentARC/GFPGAN/issues/589
|
[] |
mkn1212
| 2
|
junyanz/pytorch-CycleGAN-and-pix2pix
|
pytorch
| 1,262
|
Can you provide the pretrained model in pytorch?
|
I'm sorry. I have tried many times, but none of my attempts succeeded.
These days I want to use CycleGAN to transfer GTA5 images to CityScapes style, but I don't know the hyperparameters, such as epochs, crop size, load size, etc.
By the way, due to insufficient storage space, I only used 7500 GTA5 pictures. Will this have an impact?
Also, I see that you have provided the Torch (Lua) pretrained model, but it cannot be used in PyTorch. Could you provide a PyTorch version?
Thanks a lot!
|
open
|
2021-03-27T15:10:19Z
|
2021-03-27T15:10:19Z
|
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1262
|
[] |
ALEX13679173326
| 0
|
pytest-dev/pytest-xdist
|
pytest
| 855
|
New `LoadScheduling` batch distribution logic doesn't work for long-running tests
|
The PR https://github.com/pytest-dev/pytest-xdist/pull/812 that implemented new batch distribution logic (and that is now released in 3.0.2) disregards long-running tests.
I have several modules with long-running tests and many of these tests are now scheduled on single worker, instead of being distributed evenly across multiple workers. As a result, total runtime of my testsuite more than doubled from ~ 2.5 hours to 5+ hours.
Are there any workarounds to bring back the original round-robin scheduling?
cc @amezin
|
closed
|
2022-12-10T01:15:18Z
|
2023-01-13T11:52:57Z
|
https://github.com/pytest-dev/pytest-xdist/issues/855
|
[] |
mkoura
| 8
|
rougier/numpy-100
|
numpy
| 9
|
Better way of showing the numpy configuration
|
In exercise 2, I think `np.show_config()` is a better way of showing the configuration than `np.__config__.show()`.
|
closed
|
2016-03-09T07:24:06Z
|
2016-03-09T09:31:42Z
|
https://github.com/rougier/numpy-100/issues/9
|
[] |
Dapid
| 1
|
tox-dev/tox
|
automation
| 3,071
|
Stale egg-info used to build packages?
|
First of all, thanks for the great tool!
I'm not sure if this is an issue in tox, something that can be configured, or my misunderstanding of how tox and/or Python packaging work - I would appreciate some clarity.
When an egg-info directory is present, tox seems to use that to build the package, rather than generating afresh. This can mask issues during development (which is how we discovered this behaviour!): a previous working build leaves a 'good' egg-info directory in place; a developer makes some changes which should break the build (e.g. changing pyproject.toml, see MWE below); but tox does not catch these issues because it uses the 'good' egg-info directory rather than the 'bad' source.
Tested with Python version 3.9.16 and tox version 4.6.4. Tox used virtualenv 20.24.2, setuptools 67.8.0, wheel 0.40.0, and pip 23.1.2.
To reproduce:
1. Clone the MWE from https://github.com/njiles/tox-egg-info-demo and run `tox -r`. The test should pass.
2. Edit pyproject.toml to remove the `[tool.setuptools.package-data]` section (or change the "hello.txt" filename therein to something incorrect) and run `tox -r` again.
* We expect the test to fail, as without the correct package-data configuration the hello.txt file should not be part of the package.
* However, the test passes.
3. Remove the src/tox_egg_info_demo.egg-info directory and rerun `tox -r` (keeping the 'broken' pyproject.toml from step 2). Now the test fails with a FileNotFoundError as expected.
The outputs of `tox -rvvv` for steps 1, 2 and 3 are attached (too many characters to include in issue body!)
[step-1-output.txt](https://github.com/tox-dev/tox/files/12186005/step-1-output.txt)
[step-2-output.txt](https://github.com/tox-dev/tox/files/12186006/step-2-output.txt)
[step-3-output.txt](https://github.com/tox-dev/tox/files/12186007/step-3-output.txt)
|
closed
|
2023-07-27T16:59:02Z
|
2023-07-27T19:57:34Z
|
https://github.com/tox-dev/tox/issues/3071
|
[] |
njiles
| 2
|
pywinauto/pywinauto
|
automation
| 504
|
How to retrieve the content of an item in a ComboBox in pywinauto?
|
closed
|
2018-06-04T01:10:51Z
|
2019-05-12T11:58:24Z
|
https://github.com/pywinauto/pywinauto/issues/504
|
[
"invalid",
"question"
] |
hulei545
| 5
|
|
holoviz/panel
|
jupyter
| 7,630
|
Examples on homepage don't work
|
The examples at the top of the homepage https://panel.holoviz.org/ don't work, they return "404 page not found" errors, for example,
https://panel-gallery.holoviz-demo.anaconda.com/portfolio_analyzer
|
closed
|
2025-01-18T12:54:56Z
|
2025-01-20T19:03:59Z
|
https://github.com/holoviz/panel/issues/7630
|
[] |
faysou
| 2
|
feature-engine/feature_engine
|
scikit-learn
| 524
|
reduce running time of tests for feature selection module
|
open
|
2022-09-21T16:21:29Z
|
2023-03-01T12:38:03Z
|
https://github.com/feature-engine/feature_engine/issues/524
|
[] |
solegalli
| 6
|
|
aiortc/aioquic
|
asyncio
| 163
|
WebTransport over HTTP/3 support
|
Hi,
We are developing WebTransport over HTTP/3 which replaces WebTransport over QUIC (a.k.a. QUICTransport). I would like to use aioquic to [implement a test server for web-platform tests](https://github.com/w3c/webtransport/issues/125) (similar to what I did, writing a QuicTransport server on top of aioquic for testing).
https://tools.ietf.org/html/draft-ietf-webtrans-http3-00 is the spec draft.
To write a test server, I would like to implement some features specific to WebTransport over HTTP/3. Specifically,
1. We would like to handle SETTINGS frames.
1. WebTransport over HTTP/3 defines some custom frames (0x54 and 0x41).
To support WebTransport over HTTP/3, we need to...
1. support WebTransport over HTTP/3 directly in H3Connection,
2. make H3Connection a bit more flexible, or
3. re-implement H3Connection out of aioquic.
I would like to avoid the last solution if possible.
Do you have any suggestions? I'm happy to make PRs if needed. Thank you!
cc: @vasilvv
|
closed
|
2021-03-02T09:30:02Z
|
2024-06-23T17:24:04Z
|
https://github.com/aiortc/aioquic/issues/163
|
[] |
yutakahirano
| 19
|
graphql-python/graphene-mongo
|
graphql
| 230
|
outdated version number in package
|
The version of the package specified in \_\_init__.py is still [version 0.1.1](https://github.com/graphql-python/graphene-mongo/blob/a084895d270cb8f9865ee33daab5356fc3ab3ba1/graphene_mongo/__init__.py#L7). While this can be fixed simply, fixing it right now does not change much (though suggested). Rather, there is a need to make sure the version is bumped before each release.
|
closed
|
2024-01-01T17:27:29Z
|
2024-02-24T09:43:57Z
|
https://github.com/graphql-python/graphene-mongo/issues/230
|
[] |
WeepingClown13
| 0
|
Kanaries/pygwalker
|
pandas
| 210
|
I've got 2 warning message
|

## 1st Warning
```
WARNING: parse invoke code failed, This may affect feature of export code.
```
I am unsure about the cause of the first message.
## 2nd Warning
```
WARNING: parse invoke code failed, This may affect feature of export code.
If you are using pygwalker on Jupyter Notebook(version<7) and it can't display properly, please execute code to fix it: `pip install "pygwalker[notebook]" --pre`.(close after 15 seconds)
```
I am using JupyterLab 4.0.5, so maybe it is being misdetected as Notebook version < 7?
I tried installing the `--pre` version, but encountered more errors.
|
closed
|
2023-08-25T09:03:31Z
|
2023-08-28T04:06:39Z
|
https://github.com/Kanaries/pygwalker/issues/210
|
[] |
Erimus-Koo
| 16
|
idealo/image-super-resolution
|
computer-vision
| 118
|
strange error on training
|
I am trying to run the sample training, as per the provided notebook. I use a dataset of 3361 training images (both lr and hr) and 40 validation images (900x900, 3-band RGB).
I declared the trainer as follows:
```
trainer = Trainer(
generator=rrdn,
discriminator=discr,
feature_extractor=f_ext,
lr_train_dir='../lr-images',
hr_train_dir='../hr-images',
lr_valid_dir='../lr-val-images',
hr_valid_dir='../hr-val-images',
loss_weights=loss_weights,
learning_rate=learning_rate,
flatness=flatness,
dataname='spacenet',
log_dirs=log_dirs,
weights_generator=None,
weights_discriminator=None,
n_validation=40
)
```
When I run the sample code I get the following error:
```
File "C:\pix2pix\p2p\lib\site-packages\ISR\utils\datahandler.py", line 44, in _make_img_list
range(len(self.img_list['hr'])), self.n_validation_samples, replace=False
File "mtrand.pyx", line 900, in numpy.random.mtrand.RandomState.choice
ValueError: 'a' cannot be empty unless no samples are taken
```
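The traceback points at `np.random.choice` being called with an empty population: the DataHandler's validation image list comes up empty, most likely because no files in `../lr-val-images` / `../hr-val-images` were matched. The same failure mode can be reproduced with the standard library (a minimal sketch; `pick_validation_samples` is an illustrative stand-in for ISR's internal sampling, not its API):

```python
import random

def pick_validation_samples(img_list, n_validation):
    # Mirrors what ISR's DataHandler does: sample n_validation indices
    # without replacement from the discovered image list. With an empty
    # img_list this raises ValueError, just like
    # numpy.random.choice(..., replace=False) in the traceback above.
    return random.sample(range(len(img_list)), n_validation)

try:
    pick_validation_samples([], 40)  # empty validation list
except ValueError as exc:
    print("validation list is empty:", exc)
```

So the first thing to check is whether the validation directories actually contain images with the expected extensions.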
|
open
|
2020-04-15T10:31:45Z
|
2023-05-27T14:58:56Z
|
https://github.com/idealo/image-super-resolution/issues/118
|
[] |
procton
| 2
|
matplotlib/matplotlib
|
matplotlib
| 29,050
|
[Bug]: LogFormatter(minor_thresholds=...) should count actually drawn ticks, not number of decades spanned by axes
|
### Bug summary
The first minor_threshold (called `subset`, defaulting to 1) in LogFormatter is documented as "If ``numdec > subset`` then no minor ticks will be labeled" (where `numdec`) is the number of decades spanned by the axes. Unfortunately, even if an axes spans more than one decade, there may be a single major tick in it, in which case I think we still want to show some minor ticks.
### Code for reproduction
```Python
subplot(121).set(yscale="log", ylim=(.55, 4.5))
subplot(122).set(yscale="log", ylim=(.45, 5.5))
```
### Actual outcome

### Expected outcome
The second axes (spanning from 0.45 to 5.5, i.e. slightly more than one decade, but with only one major tick) should also have minor ticks drawn.
### Additional information
This can probably(?) be fixed by reinterpreting `subset` as "if there is no more than `subset` major ticks drawn, then label the minor ticks". (The minor tick formatter doesn't actually know about the major tickers, so the logic is more something like `int(log(vmax)/log(b)) - int(log(vmin)/log(b)) <= threshold`, up to off-by-1 errors...)
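The difference between "decades spanned" and "major ticks actually drawn" for the two example axes can be checked numerically (a sketch of the suggested logic; the function names and `b=10` default are mine, not matplotlib API):

```python
import math

def decades_spanned(vmin, vmax, b=10):
    # What LogFormatter currently compares against `subset`.
    return math.log(vmax, b) - math.log(vmin, b)

def major_ticks_drawn(vmin, vmax, b=10):
    # Number of integer powers of b inside [vmin, vmax] --
    # the quantity this issue suggests comparing instead.
    return math.floor(math.log(vmax, b)) - math.ceil(math.log(vmin, b)) + 1

for vmin, vmax in [(0.55, 4.5), (0.45, 5.5)]:
    print(vmin, vmax, decades_spanned(vmin, vmax), major_ticks_drawn(vmin, vmax))
```

Both ranges contain exactly one major tick (at 1), but only the first stays under the one-decade threshold, which reproduces the asymmetry in the screenshot.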
(@efiring wrote the original logic, I think, per https://github.com/matplotlib/matplotlib/pull/7438#issuecomment-260753221.)
### Operating system
macos
### Matplotlib Version
3.9
### Matplotlib Backend
any
### Python version
3.12
### Jupyter version
no
### Installation
git checkout
|
closed
|
2024-10-31T10:58:43Z
|
2024-12-19T03:34:04Z
|
https://github.com/matplotlib/matplotlib/issues/29050
|
[
"topic: ticks axis labels"
] |
anntzer
| 0
|
dynaconf/dynaconf
|
django
| 583
|
Allow disabling the toml parsing [was:Environment variables in JSON format get parsed as TOML]
|
**Describe the bug**
Environment variables that contain valid TOML are parsed with [parse_with_toml()](https://github.com/rochacbruno/dynaconf/blob/6c6fda4950a1a403b71503c8940692aeb3725cdf/dynaconf/utils/parse_conf.py#L244) even if it is not desired.
**To Reproduce**
Steps to reproduce the behavior:
The easiest way to reproduce the issue is to have an environment variable that contains a JSON object with `=` at the end of it's value:
```bash
$ DYNACONF_VALUE='{"key":"value="}' python3 -c "from dynaconf import LazySettings; print(LazySettings().VALUE)"
```
It prints out:
```
{'"key":"value': ''}
```
(that is, the key is `'"key":"value'`, and the value is an empty string).
**Expected behavior**
I would expect environment variables to not be parsed at all. Or at least I would like to have an option to explicitly disable `tomlify` parameter for [load_from_env](https://github.com/rochacbruno/dynaconf/blob/6c6fda4950a1a403b71503c8940692aeb3725cdf/dynaconf/loaders/env_loader.py#L24) function.
**Environment (please complete the following information):**
- Dynaconf Version: 3.1.4
**Additional context**
The issue with `=` in the end of a config value comes from a real application that uses base64-encoded strings. To be more specific, it's a way [Azure service operator](https://github.com/Azure/azure-service-operator) provides secrets to Kubernetes applications.
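Until such an option exists, one workaround worth noting: the value in the example is valid JSON, and dynaconf documents an explicit `@json` cast prefix (`DYNACONF_VALUE='@json {"key": "value="}'`) that bypasses the TOML guess. The underlying difference is easy to see with the standard library (a sketch; the `@json` syntax is dynaconf's, the rest is plain `json`):

```python
import json

raw = '{"key": "value="}'

# Parsed as JSON, the trailing '=' survives intact...
parsed = json.loads(raw)
print(parsed)  # {'key': 'value='}

# ...whereas dynaconf's TOML-first guess splits on '=' and
# produces {'"key":"value': ''}, as shown in the report.
```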
|
open
|
2021-05-11T09:21:42Z
|
2022-09-06T17:45:01Z
|
https://github.com/dynaconf/dynaconf/issues/583
|
[
"RFC"
] |
o-fedorov
| 1
|
huggingface/datasets
|
pytorch
| 6,642
|
Differently dataset object saved than it is loaded.
|
### Describe the bug
Differently sized object is saved than it is loaded.
### Steps to reproduce the bug
Hi, I save dataset in a following way:
```
dataset = load_dataset("json",
data_files={
"train": os.path.join(input_folder, f"{task_meta_type}_{task_type}_train.jsonl"),
"test": os.path.join(input_folder, f"{task_meta_type}_{task_type}_test.jsonl")})
print(os.path.join(output_folder, f"{task_meta_type}_{task_type}"))
print(f"Length of train dataset: {len(dataset['train'])}")
print(f"Length of test dataset: {len(dataset['test'])}")
dataset.save_to_disk(os.path.join(output_folder, f"{task_meta_type}_{task_type}"))
```
this yields output
```
.data/hf_dataset/propaganda_zanr
Length of train dataset: 7642
Length of test dataset: 1000
```
Everything looks fine.
Then I load the dataset
```python
from datasets import load_dataset
dataset_path = ".data/hf_dataset/propaganda_zanr"
dataset = load_dataset(dataset_path)
print(f"Length of train dataset: {len(dataset['train'])}")
print(f"Length of test dataset: {len(dataset['test'])}")
```
this prints
```
Generating train split: 1 examples [00:00, 72.10 examples/s]
Generating test split: 1 examples [00:00, 100.69 examples/s]
Length of train dataset: 1
Length of test dataset: 1
```
I dont' understand :(
### Expected behavior
same object is loaded
### Environment info
datasets==2.16.1
|
closed
|
2024-02-05T17:28:57Z
|
2024-02-06T09:50:19Z
|
https://github.com/huggingface/datasets/issues/6642
|
[] |
MFajcik
| 2
|
davidsandberg/facenet
|
tensorflow
| 452
|
An issue with the accuracy of facenet
|
Hi! I use a pipeline based on MTCNN for face detection and then FaceNet for face recognition (I tested both 20170511-185253.pb and 20170512-110547.pb). I run this process in a while loop for 2 minutes, capturing frames/images with OpenCV's video capture function. Finally, I compare my own images/faces over this period using cosine and Euclidean distances, but the results are not promising at all. Indeed, the error can sometimes reach up to 50% between frames! Any idea why that is and how I can improve it?
1) Do I need to do normalization, or rgb2gray before giving my frames to mtcnn?
2) Does mtcnn already perform alignment?
3) Based on align_dlib.py, I use the following line: scaled = misc.imresize(img, prealigned_scale, interp='bilinear')
rather than using: align.align(image_size, img, landmarkIndices=landmarkIndices, skipMulti=False, scale=scale)
4) What about alignment ( 3d alignment such as fb deep face)?
5) I use the following code in the while loop
```python
input_image_size = 160
ret, frame = video_capture.read()
bounding_boxes, _ = detect_face.detect_face(frame, minsize, pnet, rnet, onet, threshold, factor)
nrof_faces = bounding_boxes.shape[0]
if nrof_faces > 0:
    det = bounding_boxes[:, 0:4]
    img_size = np.asarray(frame.shape)[0:2]
    cropped = []
    scaled = []
    scaled_reshape = []
    bb = np.zeros((nrof_faces, 4), dtype=np.int32)
    for i in range(nrof_faces):
        emb_array = np.zeros((1, embedding_size))
        bb[i][0] = det[i][0]
        bb[i][1] = det[i][1]
        bb[i][2] = det[i][2]
        bb[i][3] = det[i][3]
        cv2.rectangle(frame, (bb[i][0], bb[i][1]), (bb[i][2], bb[i][3]), (0, 255, 0), 2)  # boxing face
        cropped.append(frame[bb[i][1]:bb[i][3], bb[i][0]:bb[i][2], :])
        cropped[i] = facenet.flip(cropped[i], False)
        scaled.append(misc.imresize(cropped[i], (image_size, image_size), interp='bilinear'))
        scaled[i] = cv2.resize(scaled[i], (input_image_size, input_image_size), interpolation=cv2.INTER_CUBIC)
        scaled[i] = facenet.prewhiten(scaled[i])
        scaled_reshape.append(scaled[i].reshape(-1, input_image_size, input_image_size, 3))
        feed_dict = {images_placeholder: scaled_reshape[i], phase_train_placeholder: False}
        emb_array[0, :] = sess.run(embeddings, feed_dict=feed_dict)
        # then I compare (cosine, etc.) with embeddings of my own face pictures
        # (captured with same camera, same background, etc.):
        dist_cosine = distance.cosine(saved_embedding, emb_array[0, :])
        dist_euclidean = distance.euclidean(embedding, emb_array[0, :])
```
|
open
|
2017-09-08T11:00:35Z
|
2018-11-05T22:32:30Z
|
https://github.com/davidsandberg/facenet/issues/452
|
[] |
github4f
| 16
|
hzwer/ECCV2022-RIFE
|
computer-vision
| 340
|
How to avoid interpolating scene changes?
|
Hello, I noticed that sometimes the model attempts to interpolate scene changes, which leads to bad results. This is very noticeable when watching the videos (see the example below). Simply not doing the interpolation in these cases would be enough to greatly improve results. Is there any parameter to avoid interpolation in these cases?
<div float="left">
<img src="https://github.com/nihui/rife-ncnn-vulkan/assets/9905080/44054cc4-74ab-428c-84e3-c6c637e08d47" width="300" />
<img src="https://github.com/nihui/rife-ncnn-vulkan/assets/9905080/6e1218d5-60fa-42d5-a8d3-2420954e9da4" width="300" />
<img src="https://github.com/nihui/rife-ncnn-vulkan/assets/9905080/01a59725-56e6-4b99-b7f0-8b688a030b02" width="300" />
<img src="https://github.com/nihui/rife-ncnn-vulkan/assets/9905080/31f5948d-7609-4b3d-892c-07400f7ccaa9" width="300" />
</div>
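A common workaround — and, if I recall the repo's `inference_video.py` correctly (treat this as an assumption), roughly what its SSIM check already does — is to measure how different two consecutive frames are and, past a threshold, duplicate a frame instead of interpolating. A dependency-free sketch using mean absolute difference on flat grayscale pixel arrays (`is_scene_cut` and the threshold value are illustrative, not RIFE API):

```python
def mean_abs_diff(frame_a, frame_b):
    # frame_a / frame_b: flat sequences of pixel intensities in [0, 255].
    assert len(frame_a) == len(frame_b)
    return sum(abs(a - b) for a, b in zip(frame_a, frame_b)) / len(frame_a)

def is_scene_cut(frame_a, frame_b, threshold=60.0):
    # Past the threshold, skip interpolation and duplicate frame_a instead.
    return mean_abs_diff(frame_a, frame_b) > threshold

smooth_motion = ([10] * 8, [14] * 8)   # small change: safe to interpolate
hard_cut = ([10] * 8, [200] * 8)       # scene change: duplicate instead
print(is_scene_cut(*smooth_motion), is_scene_cut(*hard_cut))  # False True
```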
|
closed
|
2023-10-21T20:33:56Z
|
2023-10-23T21:20:13Z
|
https://github.com/hzwer/ECCV2022-RIFE/issues/340
|
[] |
hsilva664
| 2
|
microsoft/JARVIS
|
deep-learning
| 22
|
Can Nvidia's 40 series graphics card be used for the current project?
|
Can Nvidia's 40 series graphics card be used for the current project?
as I can see that the System Requirements is as follows:
Ubuntu 16.04 LTS
NVIDIA GeForce RTX 3090 * 1
RAM > 24GB
|
closed
|
2023-04-04T11:56:24Z
|
2023-04-06T19:41:52Z
|
https://github.com/microsoft/JARVIS/issues/22
|
[] |
GothicFox
| 5
|
microsoft/hummingbird
|
scikit-learn
| 778
|
backorder message after product into cart
|
hello
In Hummingbird 0.2, when an out-of-stock product is added to the cart, the backorder message is only shown afterwards.
I think that, as with Classic, the over-stock message should be displayed as soon as the quantity is increased beyond stock, not after adding to cart.
|
closed
|
2024-06-09T11:21:39Z
|
2024-06-10T14:47:37Z
|
https://github.com/microsoft/hummingbird/issues/778
|
[] |
hogo20
| 1
|
jadore801120/attention-is-all-you-need-pytorch
|
nlp
| 10
|
TypeError: cat() takes no keyword arguments
|
Traceback (most recent call last):
File "train.py", line 266, in <module>
main()
File "train.py", line 263, in main
train(transformer, training_data, validation_data, crit, optimizer, opt)
File "train.py", line 124, in train
train_loss, train_accu = train_epoch(model, training_data, crit, optimizer)
File "train.py", line 55, in train_epoch
pred = model(src, tgt)
File "/home/sushuting/local/anaconda3/lib/python3.5/site-packages/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "/home/sushuting/workspace/attention-is-all-you-need-pytorch/transformer/Models.py", line 179, in forward
enc_outputs, enc_slf_attns = self.encoder(src_seq, src_pos)
File "/home/sushuting/local/anaconda3/lib/python3.5/site-packages/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "/home/sushuting/workspace/attention-is-all-you-need-pytorch/transformer/Models.py", line 76, in forward
enc_output, slf_attn_mask=enc_slf_attn_mask)
File "/home/sushuting/local/anaconda3/lib/python3.5/site-packages/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "/home/sushuting/workspace/attention-is-all-you-need-pytorch/transformer/Layers.py", line 18, in forward
enc_input, enc_input, enc_input, attn_mask=slf_attn_mask)
File "/home/sushuting/local/anaconda3/lib/python3.5/site-packages/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "/home/sushuting/workspace/attention-is-all-you-need-pytorch/transformer/SubLayers.py", line 62, in forward
outputs = torch.cat(torch.split(outputs, mb_size, dim=0), dim=-1)
TypeError: cat() takes no keyword arguments
|
closed
|
2017-07-05T06:42:02Z
|
2017-08-14T12:29:04Z
|
https://github.com/jadore801120/attention-is-all-you-need-pytorch/issues/10
|
[] |
susht3
| 2
|
Yorko/mlcourse.ai
|
numpy
| 169
|
Missing image on Lesson 3 notebook
|
Hey,
Image _credit_scoring_toy_tree_english.png_ is missing on the topic3_decision_trees_kNN notebook.
|
closed
|
2018-02-19T11:17:20Z
|
2018-08-04T16:08:25Z
|
https://github.com/Yorko/mlcourse.ai/issues/169
|
[
"minor_fix"
] |
henriqueribeiro
| 3
|
dgtlmoon/changedetection.io
|
web-scraping
| 2,909
|
[feature] Comments for steps
|
**Version and OS**
v0.48.5 on docker
**Is your feature request related to a problem? Please describe.**
I would love to attach comments to each step.
**Describe the solution you'd like**
An optional textbox just above the three buttons in each step. It could even be expandable, so it doesn't take up too much space. Happy for it to be limited in length so as not to blow up the db.
**Describe the use-case and give concrete real-world examples**
To provide more context, help me remember the why behind the step or record possible gotchas. Think DNS records' comments on Cloudflare.
**Additional context**

Thanks!
|
open
|
2025-01-17T22:16:48Z
|
2025-01-20T11:38:31Z
|
https://github.com/dgtlmoon/changedetection.io/issues/2909
|
[
"enhancement",
"browser-steps"
] |
hoopyfrood
| 0
|