| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
mlfoundations/open_clip
|
computer-vision
| 434
|
Unused arguments: n_queries and attn_pooler_heads in MultimodalCfg
|
Hi, thanks @gpucce for implementing CoCa! I am reading the code, and under MultimodalCfg (used by CoCa to construct the multimodal decoder) there are n_queries and attn_pooler_heads ([here](https://github.com/mlfoundations/open_clip/blob/7ae3e7a9853b1aa2fe7825e4272f3b169f8e65af/src/open_clip/coca_model.py#L45)), but correct me if I am wrong, I don't see them used anywhere. It seems the cross-attention layers in the multimodal decoder currently attend to an arbitrarily long sequence of image embedding tokens, whereas I believe in lucidrains' implementation they are first reduced to a fixed-length sequence by a set number of queries (e.g. 256). Any clarification would be appreciated! Thanks.
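For context, here is a minimal sketch of the attentional pooling I mean (my own illustration, not open_clip's actual module; the class name and signature are hypothetical): `n_queries` learned queries cross-attend to the image tokens, so the decoder always sees a fixed-length sequence.
```python
import torch
import torch.nn as nn

class AttnPoolerSketch(nn.Module):
    """Hypothetical illustration: pool variable-length image tokens down to n_queries tokens."""
    def __init__(self, d_model: int, n_queries: int = 256, n_heads: int = 8):
        super().__init__()
        self.query = nn.Parameter(torch.randn(n_queries, d_model))
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, img_tokens: torch.Tensor) -> torch.Tensor:
        # img_tokens: (batch, seq_len, d_model); seq_len can be anything
        q = self.query.unsqueeze(0).expand(img_tokens.shape[0], -1, -1)
        pooled, _ = self.attn(q, img_tokens, img_tokens)
        return pooled  # (batch, n_queries, d_model), fixed length
```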
|
open
|
2023-02-18T02:42:12Z
|
2023-02-18T19:28:42Z
|
https://github.com/mlfoundations/open_clip/issues/434
|
[] |
fedshyvana
| 4
|
erdewit/ib_insync
|
asyncio
| 7
|
Example in README.rst doesn't work in read-only mode
|
I've got my API access in IB set to read-only mode. The example script, when run, warned me that the call could not be satisfied in read-only mode.
However, the example appears to be a request for historical data, which as far as I know shouldn't require write access.
|
closed
|
2017-08-12T10:08:18Z
|
2019-10-12T12:15:47Z
|
https://github.com/erdewit/ib_insync/issues/7
|
[] |
perpetualcrayon
| 3
|
marcelo-earth/generative-manim
|
streamlit
| 1
|
The App Starts Rendering with Partial Code
|
Your app is good and, to some extent, brilliant.
However, it seems Streamlit starts rendering having only received partial code, and not the complete code for an animation.
This, obviously, causes the animation not to render.
Here try this for example:
swing a double pendulum
|
closed
|
2023-03-23T22:27:40Z
|
2023-08-26T06:10:01Z
|
https://github.com/marcelo-earth/generative-manim/issues/1
|
[
"bug"
] |
AnonymoZ
| 10
|
pallets-eco/flask-wtf
|
flask
| 166
|
Cannot set language for RECAPTCHA
|
Hi, I'm trying to set the language for the reCAPTCHA widget, but it does not seem to work using the following:
RECAPTCHA_OPTIONS = {'lang': 'fr'}
Any hint please?
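For what it's worth, a config sketch that may work with the v2 widget: Google's API takes the language as the `hl` query parameter, and I believe recent Flask-WTF versions forward `RECAPTCHA_PARAMETERS` to the reCAPTCHA script URL (please double-check against your installed version):
```python
# Hedged suggestion: 'hl' is Google's language parameter; RECAPTCHA_PARAMETERS
# is forwarded to the reCAPTCHA script URL by recent Flask-WTF versions.
RECAPTCHA_PUBLIC_KEY = "your-public-key"    # placeholder
RECAPTCHA_PRIVATE_KEY = "your-private-key"  # placeholder
RECAPTCHA_PARAMETERS = {"hl": "fr"}
```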
|
closed
|
2015-01-21T11:18:55Z
|
2021-05-29T01:15:59Z
|
https://github.com/pallets-eco/flask-wtf/issues/166
|
[] |
truff77
| 7
|
JaidedAI/EasyOCR
|
pytorch
| 1,326
|
How to limit GPU VRAM usage
|
Is there any way to limit GPU VRAM usage?
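EasyOCR itself doesn't expose a VRAM cap as far as I know, but since it runs on PyTorch, one process-level workaround is to cap the CUDA memory fraction before creating the reader (a sketch, assuming the default CUDA device 0):
```python
import torch
import easyocr

# Cap this process's GPU allocations at ~50% of device 0's total VRAM.
# Allocations beyond the cap raise an out-of-memory error instead of growing.
torch.cuda.set_per_process_memory_fraction(0.5, device=0)

reader = easyocr.Reader(['en'], gpu=True)
result = reader.readtext('1.jpg')
```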
|
open
|
2024-10-22T12:21:47Z
|
2024-12-09T11:06:36Z
|
https://github.com/JaidedAI/EasyOCR/issues/1326
|
[] |
Pabloferex
| 2
|
DistrictDataLabs/yellowbrick
|
matplotlib
| 725
|
Repair hotfix on v1.0 merge to master
|
When we released hotfix v0.9.1 we created some potential conflicts in master when merging the v1.0 release that will need to be repaired. Namely:
1. The requirements files must match the `develop` version
2. The xfails in several tests must be removed
The details of these changes can be found in #724 and 3f20db
Also, while here - please also review the classifiers in `setup.py` -- particularly the language classifiers and remove any reference to Python 2 (including and especially the `python_requires`)
|
closed
|
2019-02-06T02:27:00Z
|
2019-08-28T23:03:39Z
|
https://github.com/DistrictDataLabs/yellowbrick/issues/725
|
[
"priority: high",
"type: technical debt"
] |
bbengfort
| 1
|
pydantic/pydantic
|
pydantic
| 10,895
|
model_dump fails to serialize `ipaddress` objects
|
### Initial Checks
- [X] I confirm that I'm using Pydantic V2
### Description
Updating Pydantic to 2.9.2 broke a lot of functionality where objects from `ipaddress`, such as `IPv4Address`, were used.
On version 2.8.0 it was working correctly.
Is this change intended?
### Example Code
```Python
from ipaddress import IPv4Network
from attrs import define
from pydantic import BaseModel
class Foo(BaseModel):
    cidr: IPv4Network

@define
class Bar:
    cidr: IPv4Network
foo = Foo(cidr=IPv4Network("192.168.0.0/24"))
bar = Bar(**foo.model_dump())
print(type(bar.cidr))
# <class 'str'>
# pydantic==2.9.2
# pydantic-settings==2.6.0
# pydantic_core==2.23.4
# <class 'ipaddress.IPv4Network'>
# pydantic==2.8.0
# pydantic-settings==2.6.0
# pydantic_core==2.20.0
```
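Until this is resolved, a workaround sketch on the consuming side (my own suggestion, not an official fix) is to give the attrs class a converter, so it accepts either the `str` that 2.9.2 emits or a real `IPv4Network`:
```Python
from ipaddress import IPv4Network
from attrs import define, field

def _to_network(value):
    # Accept either the str emitted by pydantic 2.9.2 or an IPv4Network.
    return value if isinstance(value, IPv4Network) else IPv4Network(value)

@define
class Bar:
    cidr: IPv4Network = field(converter=_to_network)
```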
### Python, Pydantic & OS Version
```Text
pydantic version: 2.9.2
pydantic-core version: 2.23.4
pydantic-core build: profile=release pgo=false
install path: .venv/lib/python3.11/site-packages/pydantic
python version: 3.11.9 (main, Apr 6 2024, 17:59:24) [GCC 11.4.0]
platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35
related packages: fastapi-0.109.2 typing_extensions-4.12.2 pydantic-extra-types-2.10.0 pydantic-settings-2.6.1 mypy-1.13.0
commit: unknown
```
|
closed
|
2024-11-20T13:36:04Z
|
2024-11-20T15:19:38Z
|
https://github.com/pydantic/pydantic/issues/10895
|
[
"bug V2",
"pending"
] |
veNNNx
| 1
|
pykaldi/pykaldi
|
numpy
| 49
|
Compatibility issues between pykaldi and Kaldi 5.1
|
Calculating GOP requires Kaldi 5.1, and the MFCCs computed by Kaldi 5.1 differ from those computed by the latest Kaldi version,
so I must use Kaldi 5.1 to compute MFCCs. I have tried two methods:
(1) Replace the whole tools/kaldi with Kaldi 5.1.
(2) Only replace tools/kaldi/src/featbins/compute-mfcc-feats with the compute-mfcc-feats from Kaldi 5.1.
But neither of these methods works. Is there any way to make pykaldi compatible with Kaldi 5.1? I only need the MFCC computation module.
Thank you for your help!
|
closed
|
2018-07-24T01:32:13Z
|
2018-07-24T20:35:37Z
|
https://github.com/pykaldi/pykaldi/issues/49
|
[] |
KaiDiamant
| 3
|
paperless-ngx/paperless-ngx
|
machine-learning
| 8,245
|
[BUG] PDF/A-B "archive" files have inconsistent date display presentation formats
|
### Description
Just started with paperless-ngx and while reviewing some "consumed" document information, the metadata for the archived PDF/A-2B has:
`xmp:CreateDate 2024-10-19T15:30:50-04:00`
- which looks like the right date/time when the original pdf was created; and displaying it as my local time with an offset from UTC.
`xmp:ModifyDate 2024-11-10T03:24:01+00:00`
- which looks like the right time (very recent to this post), but it is differently presenting the date/time -- it's UTC showing the +00:00 offset from UTC.
`xmp:MetadataDate 2024-11-10T03:24:01.323978+00:00`
- this is like the ModifyDate; ie, a UTC-like presentation
3 dates and 2 formats --- I was seriously thinking there was a problem with the upstream OCRmyPDF, pikepdf, and the system time zone, but I finally realized all 3 date *values* are very similar and correct. I was thinking the "ModifyDate" and "MetadataDate" fields were 5 hours in the future!
It's the inconsistency of the formats that completely derailed me.
Are the data values stored as strings and paperless is just displaying those strings? (Possibly a case that the upstream tools are also inconsistent.) Regardless, the paperless GUI should be providing a consistent presentation of date/time because the **values** are correct.
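To illustrate that it is purely a presentation issue, normalizing the three values to one zone makes them line up (a quick sketch using the values from above):
```python
from datetime import datetime, timezone

stamps = [
    "2024-10-19T15:30:50-04:00",         # xmp:CreateDate
    "2024-11-10T03:24:01+00:00",         # xmp:ModifyDate
    "2024-11-10T03:24:01.323978+00:00",  # xmp:MetadataDate
]
# Rendering every value in a single zone (UTC here) removes the apparent mismatch.
for s in stamps:
    print(datetime.fromisoformat(s).astimezone(timezone.utc).isoformat())
```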
fyi - the host date / time setting:
```# timedatectl status
Local time: Sat 2024-11-09 22:34:05 EST
Universal time: Sun 2024-11-10 03:34:05 UTC
RTC time: n/a
Time zone: America/New_York (EST, -0500)
System clock synchronized: yes
NTP service: inactive
RTC in local TZ: no
```
### Steps to reproduce
Go to Documents.
Double-click on a document.
Go to the Metadata tab for the document.
If needed, expand the "Archived document metadata" section.
Screenshot of same:

### Webserver logs
```bash
n.a.
```
### Browser logs
```bash
n.a.
```
### Paperless-ngx version
2.13.4
### Host OS
Proxmox helper-script installed as an LXC (linux container)
### Installation method
Other (please describe above)
### System status
```json
{
"pngx_version": "2.13.4",
"server_os": "Linux-6.8.12-3-pve-x86_64-with-glibc2.36",
"install_type": "bare-metal",
"storage": {
"total": 7659816484864,
"available": 4560667148288
},
"database": {
"type": "postgresql",
"url": "paperlessdb",
"status": "OK",
"error": null,
"migration_status": {
"latest_migration": "paperless_mail.0028_alter_mailaccount_password_and_more",
"unapplied_migrations": []
}
},
"tasks": {
"redis_url": "redis://localhost:6379",
"redis_status": "OK",
"redis_error": null,
"celery_status": "OK",
"index_status": "OK",
"index_last_modified": "2024-11-09T22:24:03.510649-05:00",
"index_error": null,
"classifier_status": "OK",
"classifier_last_trained": "2024-11-10T04:05:00.025948Z",
"classifier_error": null
}
}
```
**NICE** even this has inconsistent date/time presentation. ;-p
### Browser
not relevant
### Configuration changes
n.a.
### Please confirm the following
- [X] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [X] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools.
- [X] I have already searched for relevant existing issues and discussions before opening this report.
- [X] I have updated the title field above with a concise description.
|
closed
|
2024-11-10T04:22:09Z
|
2024-12-11T03:17:29Z
|
https://github.com/paperless-ngx/paperless-ngx/issues/8245
|
[
"not a bug"
] |
easyas314
| 2
|
alpacahq/alpaca-trade-api-python
|
rest-api
| 611
|
[Feature Request] api.cancel_all_orders(side='buy')
|
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Is your feature request related to a problem? Please describe.
According to the documentation here:
https://alpaca.markets/docs/api-references/trading-api/orders/
you can only cancel all orders; there is no parameter to specify the side you want to cancel (buy or sell).
The following optional params do not work here:
`api.cancel_all_orders(params={'side':'buy'})`
`api.cancel_all_orders(params={'side':'sell'})`
### Describe the solution you'd like.
We need an easy way to cancel all orders filtered by side (cancel all buy orders or cancel all sell orders).
We need something like these:
`api.cancel_all_orders(side='buy')`
`api.cancel_all_orders(side='sell')`
### Describe an alternate solution.
As a workaround you could perform this task with extra Python code in your program (see the sketch below).
But IMHO, simple is better than complex.
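For reference, a sketch of that workaround using the existing `list_orders`/`cancel_order` REST methods (error handling omitted):
```python
def cancel_all_orders_by_side(api, side: str) -> None:
    # Client-side filter: fetch open orders and cancel only the given side.
    for order in api.list_orders(status="open"):
        if order.side == side:
            api.cancel_order(order.id)

cancel_all_orders_by_side(api, "buy")
```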
### Anything else? (Additional Context)
Hopefully, we can have this feature in alpaca-trade-api 2.0.1 along with the PR https://github.com/alpacahq/alpaca-trade-api-python/pull/610
|
closed
|
2022-04-25T00:12:06Z
|
2022-04-26T01:04:30Z
|
https://github.com/alpacahq/alpaca-trade-api-python/issues/611
|
[
"invalid",
"API issue"
] |
sshcli
| 2
|
15r10nk/inline-snapshot
|
pytest
| 77
|
Issue when parsing explicit tuples for trimming
|
The snapshots, once created, will contain explicit tuples with a single element. It looks like inline-snapshot has trouble parsing them when trimming, because it removes too much code, leaving the test module in a broken state.
```python
import pytest
from inline_snapshot import snapshot, outsource, external
options = {
    "very_very_very_long_option_name": ["value1", "value2"],
    "another_super_extra_long_option_name": [True, False, None],
}

@pytest.mark.parametrize("option1", options["very_very_very_long_option_name"])
@pytest.mark.parametrize("option2", options["another_super_extra_long_option_name"])
def test_combinations(option1, option2):
    final_options = {"option1": option1, "option2": option2}
    result = "whatever, not important here"
    assert outsource(result) == snapshot(tuple(final_options.items()))
```
Run `pytest --inline-snapshot=create`. The snapshots will look something like this:
```python
snapshots = snapshot(
{(
("very_very_very_long_option_name", "value1"),
("another_super_extra_long_option_name", True),
): external("36a1a03a6364*.html"), (
("very_very_very_long_option_name", "value1"),
("another_super_extra_long_option_name", False),
): external("bd14e6a60af2*.html"), (
("very_very_very_long_option_name", "value1"),
("another_super_extra_long_option_name", None),
): external("d552c9951139*.html"), (
("very_very_very_long_option_name", "value2"),
("another_super_extra_long_option_name", True),
): external("36a1a03a6364*.html"), (
("very_very_very_long_option_name", "value2"),
("another_super_extra_long_option_name", False),
): external("bd14e6a60af2*.html"), (
("very_very_very_long_option_name", "value2"),
("another_super_extra_long_option_name", None),
): external("d552c9951139*.html")},
)
```
Optionally format the test module with `ruff`, and run `pytest --inline-snapshot=trim`.
It leaves me with something like this in my own test module:
```python
snapshots = snapshot({(
("annotations_path", "brief"),
("show_signature", True),
("show_signature_annotations", False),
("signature_crossrefs", True),
("separate_signature", True),
("unwrap_annotated", False),
): external("c7fb0a797254*.html"), urce", options["show_source"])
# @pytest.mark.parametrize("preload_modules", options["preload_modules"])
# Heading options.
# @pytest.mark.parametrize("heading_level", options["heading_level"])
# @pytest.mark.parametrize("show_root_heading", options["show_root_heading"])
# @pytest.mark.parametrize("show_root_toc_entry", options["show_root_toc_entry"])
# @pytest.mark.parametrize("show_root_full_path", options["show_root_full_path"])
# @pytest.mark.parametrize("show_, eme's templates.
Parameters:
identifier: Parametrized identifier.
handler: Python handler (fixture).
"""
final_options = {**locals()}
final_options.pop("handler")
, for k, v in final_options.items())
assert outsource(html, suffix=".html") == snapshots[snapshot_key]
, , , , ,
```
I'll try to create a proper reproduction a bit later.
|
closed
|
2024-04-29T18:37:13Z
|
2024-04-30T09:08:49Z
|
https://github.com/15r10nk/inline-snapshot/issues/77
|
[] |
pawamoy
| 7
|
AUTOMATIC1111/stable-diffusion-webui
|
deep-learning
| 16,040
|
I used the same parameters and same safetensor, but my result is much worse than WebUI, please give me some advice! [Bug]:
|
### Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
I used the same parameters and the same safetensors file, but my result (from my code) is much worse than WebUI's. Please give me some advice! Is the way I load the safetensors file wrong? Help! (All the safetensors and pt files were downloaded to my laptop.) Here is the code:
import torch
from diffusers import StableDiffusionPipeline
from transformers import CLIPTextModel, CLIPModel, CLIPProcessor, CLIPTokenizer
from safetensors.torch import load_file  # used to load .safetensors files
import os

pipe = StableDiffusionPipeline.from_single_file("./AI-ModelScope/anyloraCheckpoint_bakedvaeFp16NOT.safetensors", local_files_only=True, use_safetensors=True, load_safety_checker=False)
pipe = pipe.to("cuda")

lora_path = "./Pokemon_LoRA/pokemon_v3_offset.safetensors"
lora_w = 1.0
pipe._lora_scale = lora_w
state_dict, network_alphas = pipe.lora_state_dict(lora_path)
for key in network_alphas:
    network_alphas[key] = network_alphas[key] * lora_w
# network_alpha = network_alpha * lora_w
pipe.load_lora_into_unet(state_dict=state_dict, network_alphas=network_alphas, unet=pipe.unet)
pipe.load_lora_into_text_encoder(state_dict=state_dict, network_alphas=network_alphas, text_encoder=pipe.text_encoder)
pipe.load_textual_inversion("./AI-ModelScope/By bad artist -neg.pt")

# set the random seed
seed = int(3187489596)
generator = torch.Generator("cuda").manual_seed(seed)

# generate the image
poke_prompt = "sugimori ken (style), ghost and ground pokemon (creature), full body, gengar, marowak, solo, grin, half-closed eye, happy, highres, no humans, other focus, pokemon, purple eyes, simple background, smile, solo, standing, teeth, uneven eyes, white background , ((masterpiece))"
tokenizer = CLIPTokenizer.from_pretrained("./AI-ModelScope/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("./AI-ModelScope/clip-vit-large-patch14")
pipe.text_encoder = text_encoder.to('cuda')
pipe.tokenizer = tokenizer

image = pipe(
    prompt=poke_prompt,
    negative_prompt="(painting by bad-artist-anime:0.9), (painting by bad-artist:0.9), watermark, text, error, blurry, jpeg artifacts, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, artist name, (worst quality, low quality:1.4), bad anatomy",
    guidance_scale=9,
    num_inference_steps=200,
    generator=generator,
    sampler="dpm++_sde_karras",
    clip_skip=2,
).images[0]

# save the generated image
output_path = "./out.png"
print(os.path.abspath("./out.png"))
image.save(output_path)
### Steps to reproduce the problem
None
### What should have happened?
None
### What browsers do you use to access the UI ?
_No response_
### Sysinfo
None
### Console logs
```Shell
None
```
### Additional information
_No response_
|
closed
|
2024-06-18T06:19:34Z
|
2024-06-19T19:49:57Z
|
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16040
|
[] |
OMTHSJUHW
| 1
|
ultralytics/ultralytics
|
machine-learning
| 19,195
|
YOLOv11 models score slightly lower than the reported results
|
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hi, thanks for your great work! I have a question: I validated YOLOv11's pretrained models (your released models, not trained by me), and the results are slightly lower than the reported ones. For example, for the YOLOv11-N/S/M/L/X models, I got 39.4/46.9/51.5/53.3/54.6 mAP, respectively (you reported: 39.5/47.0/51.5/53.4/54.7 mAP). Could you please explain this? Thanks!
Here is the code:
```python
from ultralytics import YOLO
# # Load a model
# model = YOLO("yolo11n.pt")
# model = YOLO("yolo11s.pt")
# model = YOLO("yolo11m.pt")
# model = YOLO("yolo11l.pt")
model = YOLO("yolo11x.pt")
model.val(data='coco.yaml', batch=32)
```
### Additional
_No response_
|
open
|
2025-02-12T03:25:53Z
|
2025-02-13T06:50:55Z
|
https://github.com/ultralytics/ultralytics/issues/19195
|
[
"question"
] |
sunsmarterjie
| 7
|
aio-libs/aiomysql
|
sqlalchemy
| 428
|
SSCursor can't close after an error is raised
|
If the server sends an error to SSCursor, SSCursor tries to call self._result._finish_unbuffered_query in its close function:
```python
# SSCursor close function in cursor.py line 604
if self._result is not None and self._result is conn._result:
    await self._result._finish_unbuffered_query()
```
But there is not a byte to read after receiving an error packet, so the loop waits forever and the error can't be raised:
```python
# _finish_unbuffered_query function in connection.py line 1197
while self.unbuffered_active:
    packet = await self.connection._read_packet()
```
I hit the bug after I set a MySQL variable:
```shell
SET GLOBAL MAX_EXECUTION_TIME=3000
```
so a SELECT statement will be aborted if it takes more than 3s.
The connection received an error packet with errno 3024, but the loop keeps waiting for packets to finish the unbuffered query.
|
closed
|
2019-08-15T10:20:15Z
|
2022-04-11T00:09:45Z
|
https://github.com/aio-libs/aiomysql/issues/428
|
[
"bug"
] |
ppd0705
| 17
|
jazzband/django-oauth-toolkit
|
django
| 722
|
oauth2_provider_accesstoken_source_refresh_token_id_key constraint violation
|
Stack trace:
```plaintext
UniqueViolation: duplicate key value violates unique constraint "oauth2_provider_accesstoken_source_refresh_token_id_key"
DETAIL: Key (source_refresh_token_id)=(155) already exists.
File "django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
IntegrityError: duplicate key value violates unique constraint "oauth2_provider_accesstoken_source_refresh_token_id_key"
DETAIL: Key (source_refresh_token_id)=(155) already exists.
File "django/core/handlers/exception.py", line 34, in inner
response = get_response(request)
File "django/core/handlers/base.py", line 115, in _get_response
response = self.process_exception_by_middleware(e, request)
File "django/core/handlers/base.py", line 113, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "django/views/generic/base.py", line 71, in view
return self.dispatch(request, *args, **kwargs)
File "django/utils/decorators.py", line 45, in _wrapper
return bound_method(*args, **kwargs)
File "django/views/decorators/csrf.py", line 54, in wrapped_view
return view_func(*args, **kwargs)
File "django/views/generic/base.py", line 97, in dispatch
return handler(request, *args, **kwargs)
File "appname/apps/auth/views.py", line 45, in post
return super().post(request, *args, **kwargs)
File "django/utils/decorators.py", line 45, in _wrapper
return bound_method(*args, **kwargs)
File "django/views/decorators/debug.py", line 76, in sensitive_post_parameters_wrapper
return view(request, *args, **kwargs)
File "oauth2_provider/views/base.py", line 206, in post
url, headers, body, status = self.create_token_response(request)
File "oauth2_provider/views/mixins.py", line 123, in create_token_response
return core.create_token_response(request)
File "oauth2_provider/oauth2_backends.py", line 138, in create_token_response
headers, extra_credentials)
File "oauthlib/oauth2/rfc6749/endpoints/base.py", line 87, in wrapper
return f(endpoint, uri, *args, **kwargs)
File "oauthlib/oauth2/rfc6749/endpoints/token.py", line 117, in create_token_response
request, self.default_token_type)
File "oauthlib/oauth2/rfc6749/grant_types/refresh_token.py", line 72, in create_token_response
self.request_validator.save_token(token, request)
File "oauthlib/oauth2/rfc6749/request_validator.py", line 334, in save_token
return self.save_bearer_token(token, request, *args, **kwargs)
File "python3.6/contextlib.py", line 52, in inner
return func(*args, **kwds)
File "oauth2_provider/oauth2_validators.py", line 531, in save_bearer_token
source_refresh_token=refresh_token_instance,
File "oauth2_provider/oauth2_validators.py", line 563, in _create_access_token
access_token.save()
File "django/db/models/base.py", line 741, in save
force_update=force_update, update_fields=update_fields)
File "django/db/models/base.py", line 779, in save_base
force_update, using, update_fields,
File "django/db/models/base.py", line 870, in _save_table
result = self._do_insert(cls._base_manager, using, fields, update_pk, raw)
File "django/db/models/base.py", line 908, in _do_insert
using=using, raw=raw)
File "django/db/models/manager.py", line 82, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "django/db/models/query.py", line 1186, in _insert
return query.get_compiler(using=using).execute_sql(return_id)
File "django/db/models/sql/compiler.py", line 1335, in execute_sql
cursor.execute(sql, params)
File "raven/contrib/django/client.py", line 127, in execute
return real_execute(self, sql, params)
File "django/db/backends/utils.py", line 67, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "django/db/backends/utils.py", line 76, in _execute_with_wrappers
return executor(sql, params, many, context)
File "django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "django/db/utils.py", line 89, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
```
Packages:
- django-oauth-toolkit==1.2.0
- oauthlib==3.0.2
|
open
|
2019-07-17T01:31:17Z
|
2020-04-06T15:15:53Z
|
https://github.com/jazzband/django-oauth-toolkit/issues/722
|
[] |
ghost
| 7
|
hankcs/HanLP
|
nlp
| 1,301
|
Custom dictionary with a Chinese filename fails to load
|
## Notes
Please confirm the following:
* I have carefully read the documents below and found no answer:
  - [Home docs](https://github.com/hankcs/HanLP)
  - [wiki](https://github.com/hankcs/HanLP/wiki)
  - [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have searched my question on [Google](https://www.google.com/#newwindow=1&q=HanLP) and with the [issue search](https://github.com/hankcs/HanLP/issues) and found no answer.
* I understand that the open-source community is a voluntary community of enthusiasts that bears no responsibility or obligation. I will be polite and thank everyone who helps me.
* [x] I type x inside these brackets to confirm the items above.
## Version
hanlp-1.7.4-release
The current latest version is: 1.7.4
The version I use is: 1.7.4
## My question
A custom dictionary with a **Chinese filename** fails to load (**see the screenshot for details**).
Dictionaries with **English filenames** load fine; for example, CustomDictionary.txt loads successfully.
After I renamed **现代汉语补充词库.txt** to **hdhybcck.txt**, it also loaded successfully.
OS: CentOS 6.10
System locale: zh_CN.UTF-8
Programming language: Python (pyhanlp)
Java: openjdk version "1.8.0_222"
hanlp.properties:
root=/usr/local/python3/lib/python3.7/site-packages/pyhanlp/static/
CustomDictionaryPath=data/dictionary/custom/CustomDictionary.txt; 现代汉语补充词库.txt; 全国地名大全.txt ns; 人名词典.txt; 机构名词典.txt; 上海地名.txt ns;data/dictionary/person/nrf.txt nrf;
## Reproducing the problem
## Other info
======================
After half a day of digging, the problem is solved.
**The cause: I unzipped data-for-1.7.4.zip locally and then uploaded it to the server.**
|
closed
|
2019-10-15T08:11:49Z
|
2019-10-15T09:07:18Z
|
https://github.com/hankcs/HanLP/issues/1301
|
[] |
e282486518
| 0
|
explosion/spacy-course
|
jupyter
| 21
|
How to remove a component from nlp pipeline? or should I create(maybe load) nlp object with same statistical model for every different pipeline?
|
I am a newbie with spaCy...
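In case a code answer helps future readers: there is no need to reload the statistical model for a different pipeline; a component can be detached from the existing `nlp` object (a sketch, assuming `en_core_web_sm` is installed):
```python
import spacy

nlp = spacy.load("en_core_web_sm")
print(nlp.pipe_names)  # e.g. ['tagger', 'parser', 'ner']

# remove_pipe detaches the component and returns a (name, component) tuple,
# so the same loaded model keeps serving the remaining components.
name, ner = nlp.remove_pipe("ner")
print(nlp.pipe_names)
```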
|
closed
|
2019-04-27T08:09:58Z
|
2019-04-27T11:53:49Z
|
https://github.com/explosion/spacy-course/issues/21
|
[] |
SparkleBo
| 2
|
deepset-ai/haystack
|
machine-learning
| 8,315
|
haystack.utils has issues, can't do imports
|
**Describe the bug**
I just cannot do imports; I don't know what's going on here.
**Error message**
ModuleNotFoundError: No module named 'haystack.utils.auth'
**To Reproduce**
Install Poetry with Python 3.12 and run `poetry add farm-haystack`.
**System:**
- OS: MAC
- GPU/CPU: 8GB
- Haystack version (commit or version number):1.26.3
|
closed
|
2024-09-01T18:27:17Z
|
2024-09-03T04:20:12Z
|
https://github.com/deepset-ai/haystack/issues/8315
|
[] |
pdwytr
| 2
|
xonsh/xonsh
|
data-science
| 4,886
|
xonsh: subprocess mode: permission denied when overriding a PATH command with an alias on Windows
|
## xonfig
<details>
```
$ xonfig
+------------------+----------------------+
| xonsh | 0.13.0.dev4 |
| Git SHA | 0214b878 |
| Commit Date | Jul 14 14:51:40 2022 |
| Python | 3.10.5 |
| PLY | 3.11 |
| have readline | False |
| prompt toolkit | 3.0.29 |
| shell type | prompt_toolkit |
| history backend | json |
| pygments | 2.10.0 |
| on posix | False |
| on linux | False |
| on darwin | False |
| on windows | True |
| on cygwin | False |
| on msys2 | False |
| is superuser | False |
| default encoding | utf-8 |
| xonsh encoding | utf-8 |
| encoding errors | surrogateescape |
| xontrib | [] |
| RC file | [] |
+------------------+----------------------+
```
</details>
## Expected Behavior
Overriding a command in PATH with an alias (such as to add default arguments) should work as it does on Linux and macOS.
```xonsh
$ aliases['python'] = 'python -u'
$ python --help
Usage: [snip]
```
## Current Behavior
```xonsh
$ aliases['python'] = 'python -u'
$ python --help
xonsh: subprocess mode: permission denied
```
## Steps to Reproduce
```xonsh
$ aliases['python'] = 'python -u'
$ python --help
```
## Notes
Interestingly, this *does* work if you append the extension in the alias, e.g. `aliases['python'] = 'python.exe -u'`.
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
|
closed
|
2022-07-20T10:42:06Z
|
2022-11-23T17:45:20Z
|
https://github.com/xonsh/xonsh/issues/4886
|
[
"windows",
"aliases"
] |
Qyriad
| 3
|
yinkaisheng/Python-UIAutomation-for-Windows
|
automation
| 135
|
Error when viewing the NetEase Cloud Music client's page structure
|
**Uiautomation version:** 2.0.7
**Python version:** 3.9.0
**Error log:**
D:\Python3.9.0\Scripts>python automation.py -t 3
UIAutomation 2.0.7 (Python 3.9.0, 64 bit)
please wait for 3 seconds
2020-10-27 20:37:43.552 automation.py[75] main -> Starts, Current Cursor Position: (818, 353)
ControlType: PaneControl ClassName: #32769 AutomationId: Rect: (0,0,1280,800)[1280x800] Name: 桌面 1 Handle: 0x10010(65552) Depth: 0 SupportedPattern: LegacyIAccessiblePattern
ControlType: WindowControl ClassName: OrpheusBrowserHost AutomationId: Rect: (-8,-8,1288,768)[1296x776] Name: 回头太难 - 张学友 Handle: 0x50644(329284) Depth: 1 SupportedPattern: LegacyIAccessiblePattern TransformPattern WindowPattern
ControlType: PaneControl ClassName: CefBrowserWindow AutomationId: Rect: (0,0,1280,760)[1280x760] Name: Handle: 0x7062E(460334) Depth: 2 SupportedPattern: LegacyIAccessiblePattern
ControlType: WindowControl ClassName: Chrome_WidgetWin_0 AutomationId: Rect: (0,0,1280,760)[1280x760] Name: Handle: 0x70638(460344) Depth: 3 SupportedPattern: LegacyIAccessiblePattern
ControlType: CustomControl ClassName: AutomationId: Rect: (0,0,1280,760)[1280x760] Name: Handle: 0x0(0) Depth: 4 SupportedPattern: LegacyIAccessiblePattern
ControlType: DocumentControl ClassName: AutomationId: Rect: (0,0,1022,670)[1022x670] Name: Handle: 0x0(0) Depth: 5 TextPattern.Text: Traceback (most recent call last):
File "D:\Python3.9.0\Scripts\automation.py", line 113, in <module>
main()
File "D:\Python3.9.0\Scripts\automation.py", line 109, in main
auto.EnumAndLogControl(control, depth, showAllName, startDepth=indent)
File "D:\Python3.9.0\lib\site-packages\uiautomation\uiautomation.py", line 7706, in EnumAndLogControl
LogControl(c, d + startDepth, showAllName)
File "D:\Python3.9.0\lib\site-packages\uiautomation\uiautomation.py", line 7690, in LogControl
Logger.Write(pt.DocumentRange.GetText(30), ConsoleColor.DarkGreen)
File "D:\Python3.9.0\lib\site-packages\uiautomation\uiautomation.py", line 4754, in DocumentRange
return TextRange(self.pattern.DocumentRange)
_ctypes.COMError: (-2147467259, '未指定的错误', (None, None, None, 0, None))
**UISpy Structure:**

|
closed
|
2020-10-27T12:55:18Z
|
2021-05-11T05:08:32Z
|
https://github.com/yinkaisheng/Python-UIAutomation-for-Windows/issues/135
|
[] |
yangyechi
| 1
|
coleifer/sqlite-web
|
flask
| 36
|
Content Tab - Attribute Error
|
Hi,
I just installed sqlite-web using pip and started to use it. Most of the functionality seems to work. However, whenever I try to access the Content tab, it throws an error:
```
AttributeError: 'AutoField' object has no attribute 'db_column'
```
I tried it using both Python 2.7 and Python 3.6, and both threw the same error.
This behavior was consistent across all SQLite databases I tried (including the http://www.sqlitetutorial.net/download/sqlite-sample-database/?wpdmdl=94 and https://github.com/jpwhite3/northwind-SQLite3). Any ideas? Thanks.
|
closed
|
2018-02-14T01:03:46Z
|
2018-02-14T02:43:07Z
|
https://github.com/coleifer/sqlite-web/issues/36
|
[] |
ossie-git
| 3
|
pytorch/pytorch
|
deep-learning
| 149,250
|
Implement batching for torch.isin operator
|
### 🚀 The feature, motivation and pitch
Received the following warning while invoking vmap() on a function making use of the `torch.isin` operator.
```
...Temp/ipykernel_20808/3722652185.py#line=49)
: UserWarning: There is a performance drop because we have not yet implemented the batching rule for aten::isin.Tensor_Tensor.
Please file us an issue on GitHub so that we can prioritize its implementation.
(Triggered internally at [C:\actions-runner\_work\pytorch\pytorch\pytorch\aten\src\ATen\functorch\BatchedFallback.cpp:84]
(file:///C:/actions-runner/_work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/BatchedFallback.cpp#line=83).)
return torch.where(torch.isin(uniqs, doublesT), 0, 1)
```
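Until the batching rule is implemented, a workaround that avoids the fallback (my own sketch, with hypothetical shapes) is to express `isin` via broadcasting, which uses only ops vmap already supports:
```python
import torch
from torch.func import vmap

uniqs = torch.randint(0, 10, (4, 6))   # hypothetical batched input
doublesT = torch.tensor([1, 3, 5])

def isin_via_broadcast(x, test_elements):
    # Equivalent to torch.isin for 1-D inputs, built from batching-friendly ops.
    return (x.unsqueeze(-1) == test_elements).any(-1)

out = vmap(lambda u: torch.where(isin_via_broadcast(u, doublesT), 0, 1))(uniqs)
```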
### Alternatives
_No response_
### Additional context
_No response_
cc @zou3519 @Chillee @samdow @kshitij12345
|
open
|
2025-03-15T08:09:33Z
|
2025-03-17T15:26:55Z
|
https://github.com/pytorch/pytorch/issues/149250
|
[
"triaged",
"module: vmap",
"module: functorch"
] |
nikhilgv9
| 0
|
huggingface/datasets
|
pandas
| 6,560
|
Support Video
|
### Feature request
HF datasets are awesome in supporting text and images. Will be great to see such a support in videos :)
### Motivation
Video generation :)
### Your contribution
Will probably be limited to raising this feature request ;)
|
closed
|
2024-01-04T13:10:58Z
|
2024-08-23T09:51:27Z
|
https://github.com/huggingface/datasets/issues/6560
|
[
"duplicate",
"enhancement"
] |
yuvalkirstain
| 1
|
koxudaxi/fastapi-code-generator
|
fastapi
| 54
|
Unable to install fastapi-code-generator with python 3.7
|
I cannot install fastapi-code-generator as described in the readme on `debian 10` with `python 3.7.3`:
```
$ docker run -i -t --rm python:3.7.3 pip install fastapi-code-generator
ERROR: Could not find a version that satisfies the requirement fastapi-code-generator (from versions: none)
ERROR: No matching distribution found for fastapi-code-generator
```
|
closed
|
2020-11-11T14:43:18Z
|
2020-11-25T09:14:44Z
|
https://github.com/koxudaxi/fastapi-code-generator/issues/54
|
[
"released"
] |
bluecatchbird
| 2
|
d2l-ai/d2l-en
|
pytorch
| 2,406
|
CI: CUDA & CUDNN need upgrade
|
The latest JAX release v0.4.1 is not compatible with the CUDA and CUDNN version we have in our CI.
This also goes for PyTorch: the latest release v1.13 has dropped support for the older CUDA 10.2 which we had before.
Our machine right now uses two different versions (CUDA 10.2 and 11.2) which are pretty old given 2023 is here.
Marking this issue to track the degradation on the CI Worker machine. This can be clubbed with the CI migration plan soon.
cc @astonzhang
|
closed
|
2022-12-13T22:38:29Z
|
2023-05-05T19:19:24Z
|
https://github.com/d2l-ai/d2l-en/issues/2406
|
[
"pytorch",
"jax",
"CI"
] |
AnirudhDagar
| 0
|
deeppavlov/DeepPavlov
|
tensorflow
| 909
|
Where deeppavlov/utils/settings/server_config.json?
|
In http://docs.deeppavlov.ai/en/master/devguides/amazon_alexa.html described `deeppavlov/utils/settings/server_config.json`.
Where is it in the repository?
|
closed
|
2019-06-29T10:44:16Z
|
2019-06-29T10:45:27Z
|
https://github.com/deeppavlov/DeepPavlov/issues/909
|
[] |
sld
| 1
|
dynaconf/dynaconf
|
flask
| 1,165
|
Environment variable for array
|
I have this kind of YAML config:
```
default:
  SETTINGS:
    - NAME: test1
      VALUE: value1
    - NAME: test2
      VALUE: value2
```
I'm wondering how I can override the settings "test1", "value1", etc. via environment variables.
Or whether it is possible to add a new setting with an environment variable like default_SETTINGS__0__NEW_SETTING="value".
I checked the documentation and some GitHub issues but couldn't make it work.
Thank you!
|
open
|
2024-07-22T08:12:57Z
|
2024-07-22T12:41:29Z
|
https://github.com/dynaconf/dynaconf/issues/1165
|
[
"question"
] |
molntamas
| 3
|
saulpw/visidata
|
pandas
| 2,223
|
[docs] search-and-replace in a column
|
Also, an example of my confusion between the different "channels" (docs, tuts & manpage) is actually what led me to my [latest question](https://github.com/saulpw/visidata/discussions/2222): I looked at the ["How to" about Columns](https://www.visidata.org/docs/columns/) hoping to find something about search-and-replace in a column, but couldn't find anything there; maybe I just didn't look properly...
_Originally posted by @halloleo in https://github.com/saulpw/visidata/discussions/2221#discussioncomment-8000137_
|
closed
|
2024-01-03T08:13:23Z
|
2024-10-12T04:55:38Z
|
https://github.com/saulpw/visidata/issues/2223
|
[
"documentation"
] |
saulpw
| 1
|
hankcs/HanLP
|
nlp
| 1,442
|
Error loading the dependency parsing model
|
**Describe the bug**
Loading the dependency parsing model from `hanlp.pretrained.dep.CTB7_BIAFFINE_DEP_ZH` fails.
**Code to reproduce the issue**
```python
import hanlp
hanlp.load(hanlp.pretrained.dep.CTB7_BIAFFINE_DEP_ZH)
```
**Describe the current behavior**
Loading the dependency parsing model from `hanlp.pretrained.dep.CTB7_BIAFFINE_DEP_ZH` fails.
**Expected behavior**
The model loads successfully.
**System information**
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
- Python version: Linux xxx 4.15.0-72-generic #81-Ubuntu SMP Tue Nov 26 12:20:02 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
- HanLP version: 2.0.0a39
**Other info / logs**
```bash
Failed to load https://file.hankcs.com/hanlp/dep/biaffine_ctb7_20200109_022431.zip. See stack trace below
Traceback (most recent call last):
File "/data/humeng/projs/extract_spo/.venv/lib/python3.7/site-packages/hanlp/utils/component_util.py", line 48, in load_from_meta_file
obj.load(save_dir, **load_kwargs)
File "/data/humeng/projs/extract_spo/.venv/lib/python3.7/site-packages/hanlp/common/component.py", line 244, in load
self.build(**merge_dict(self.config, training=False, logger=logger, **kwargs, overwrite=True, inplace=True))
File "/data/humeng/projs/extract_spo/.venv/lib/python3.7/site-packages/hanlp/common/component.py", line 269, in build
self.model(sample_inputs)
File "/data/humeng/projs/extract_spo/.venv/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 822, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "/data/humeng/projs/extract_spo/.venv/lib/python3.7/site-packages/hanlp/components/parsers/biaffine/model.py", line 92, in call
x = self.lstm(embed, mask=mask)
File "/data/humeng/projs/extract_spo/.venv/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 822, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "/data/humeng/projs/extract_spo/.venv/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/sequential.py", line 281, in call
outputs = layer(inputs, **kwargs)
File "/data/humeng/projs/extract_spo/.venv/lib/python3.7/site-packages/tensorflow_core/python/keras/layers/wrappers.py", line 543, in __call__
return super(Bidirectional, self).__call__(inputs, **kwargs)
File "/data/humeng/projs/extract_spo/.venv/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 822, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "/data/humeng/projs/extract_spo/.venv/lib/python3.7/site-packages/tensorflow_core/python/keras/layers/wrappers.py", line 657, in call
initial_state=forward_state, **kwargs)
File "/data/humeng/projs/extract_spo/.venv/lib/python3.7/site-packages/tensorflow_core/python/keras/layers/recurrent.py", line 644, in __call__
return super(RNN, self).__call__(inputs, **kwargs)
File "/data/humeng/projs/extract_spo/.venv/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 822, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "/data/humeng/projs/extract_spo/.venv/lib/python3.7/site-packages/tensorflow_core/python/keras/layers/recurrent_v2.py", line 1144, in call
**cudnn_lstm_kwargs)
File "/data/humeng/projs/extract_spo/.venv/lib/python3.7/site-packages/tensorflow_core/python/keras/layers/recurrent_v2.py", line 1362, in cudnn_lstm
time_major=time_major)
File "/data/humeng/projs/extract_spo/.venv/lib/python3.7/site-packages/tensorflow_core/python/ops/gen_cudnn_rnn_ops.py", line 1906, in cudnn_rnnv3
ctx=_ctx)
File "/data/humeng/projs/extract_spo/.venv/lib/python3.7/site-packages/tensorflow_core/python/ops/gen_cudnn_rnn_ops.py", line 2002, in cudnn_rnnv3_eager_fallback
attrs=_attrs, ctx=ctx, name=name)
File "/data/humeng/projs/extract_spo/.venv/lib/python3.7/site-packages/tensorflow_core/python/eager/execute.py", line 67, in quick_execute
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.UnknownError: Fail to find the dnn implementation. [Op:CudnnRNNV3]
https://file.hankcs.com/hanlp/dep/biaffine_ctb7_20200109_022431.zip was created with hanlp-2.0.0, while you are running 2.0.0-alpha.39. Try to upgrade hanlp with
pip install --upgrade hanlp
If the problem persists, please submit an issue to https://github.com/hankcs/HanLP/issues .
```
* [x] I've completed this form and searched the web for solutions.
|
closed
|
2020-03-26T10:25:07Z
|
2020-10-23T13:04:13Z
|
https://github.com/hankcs/HanLP/issues/1442
|
[
"bug"
] |
menghuu
| 3
|
jofpin/trape
|
flask
| 325
|
Process hook
|
Not able to redirect pages after the client connects to the server. The whole process hook is not working.
|
open
|
2021-07-12T07:36:45Z
|
2021-07-12T07:36:45Z
|
https://github.com/jofpin/trape/issues/325
|
[] |
preetkukreja
| 0
|
plotly/plotly.py
|
plotly
| 4,549
|
v5.20.0 New fillgradient not working
|
Hi all,
Been looking forward to this for a long time. The new 5.20.0 version does not work for me:
```
import pandas as pd
import plotly.graph_objects as go
import numpy as np

x = np.linspace(0, 10, 100)
y = np.sin(x)
trace = go.Scatter(
    x=x,
    y=y,
    mode='none',
    # fillcolor="rgba(0, 255, 0, .5)",
    fillgradient=dict(
        colorscale=[
            [0.0, "rgba(0, 0, 255, 1)"],
            [0.5, "rgba(0, 255, 0, 0)"],
            [1.0, "rgba(0, 255, 0, 1)"]
        ],
        start=0,
        stop=1,
        type='horizontal',
    ),
    fill='tonexty',
    hoveron='fills',
)
layout = go.Layout(
    plot_bgcolor='white',
    paper_bgcolor='white',
)
fig = go.Figure(trace, layout)
fig

data = {
    "data": [
        {
            "x": [1, 1, 2, 2],
            "y": [1, 2, 2, 1],
            "type": "scatter",
            "mode": "none",
            "fill": "tonext",
            "hoveron": "points+fills",
            "fillgradient": {
                "type": "horizontal",
                "colorscale": [
                    [0.0, "rgba(0, 255, 0, 1)"],
                    [0.5, "rgba(0, 255, 0, 0)"],
                    [1.0, "rgba(0, 255, 0, 1)"]
                ]
            }
        },
        {
            "x": [0, 0, 3, 3],
            "y": [0, 4, 4, 1],
            "type": "scatter",
            "mode": "none",
            "fill": "tonext",
            "hoveron": "fills+points",
            "fillgradient": {
                "type": "radial",
                "colorscale": [
                    [0.0, "rgba(255, 255, 0, 0.0)"],
                    [0.8, "rgba(255, 0, 0, 0.3)"],
                    [1.0, "rgba(255, 255, 0, 1.0)"]
                ]
            }
        }
    ],
    "layout": {
        "autosize": True,
        "title": {"text": "Scatter traces with radial color gradient fills"},
        "showlegend": True
    }
}
fig = go.Figure(data=data["data"], layout=data["layout"])
```
Any help will be appreciated.
This is my second `fig`; I already checked and I am using v5.20.0.
<img width="992" alt="Screenshot 2024-03-14 at 18 00 57" src="https://github.com/plotly/plotly.py/assets/22829290/fc27e3c5-3bfe-4eb5-8f5c-cae2b8b49852">
|
closed
|
2024-03-14T21:02:32Z
|
2024-03-15T14:35:53Z
|
https://github.com/plotly/plotly.py/issues/4549
|
[] |
etiennecelery
| 6
|
Lightning-AI/pytorch-lightning
|
pytorch
| 20,395
|
Gradient checkpointing and ddp do not work together
|
### Bug description
I am launching a script that trains a model, which works well when trained without DDP but with gradient checkpointing, or with DDP but no gradient checkpointing, using Fabric too. However, when enabling both DDP and gradient checkpointing, activated through Hugging Face's gradient_checkpointing_enable() function, we get the error
```
[rank0]: File "/home/.../v2/lib/python3.10/site-packages/torch/autograd/graph.py", line 744, in _engine_run_backward
[rank0]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[rank0]: RuntimeError: expect_autograd_hooks_ INTERNAL ASSERT FAILED at "../torch/csrc/distributed/c10d/reducer.cpp":1591, please report a bug to PyTorch.
```
Scripts were launched with
```
fabric = Fabric(
    accelerator="gpu",
    loggers=loggers,
    precision=opt.precision,
    strategy=DDPStrategy(process_group_backend="nccl", find_unused_parameters=False, static_graph=True),
)
```
When I launch with the options `strategy=DDPStrategy(process_group_backend="nccl", find_unused_parameters=True, static_graph=False)`, I get this error instead:
```
[rank0]: Parameter at index 560 with name reader.decoder.transformer.h.11.mlp.c_proj.bias has been marked as ready twice. This means that multiple autograd engine hooks have fired for this particular parameter during this iteration.
```
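In case it is useful while triaging: the reentrant checkpointing implementation is known to interact badly with DDP's autograd hooks, so one thing worth trying (assuming a transformers version that accepts `gradient_checkpointing_kwargs`) is the non-reentrant variant:
```python
# Hedged suggestion, not a confirmed fix: `model` stands for the Hugging Face
# model whose checkpointing was enabled above.
model.gradient_checkpointing_enable(
    gradient_checkpointing_kwargs={"use_reentrant": False}
)
```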
Thanks in advance for your help.
### What version are you seeing the problem on?
v2.4
### How to reproduce the bug
_No response_
### Error messages and logs
```
# Error messages and logs here please
```
### Environment
<details>
<summary>Current environment</summary>
```
#- PyTorch Lightning Version (e.g., 2.4.0):
#- PyTorch Version (e.g., 2.4):
#- Python version (e.g., 3.12):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source):
```
</details>
### More info
_No response_
|
open
|
2024-11-04T22:04:43Z
|
2024-11-18T23:34:20Z
|
https://github.com/Lightning-AI/pytorch-lightning/issues/20395
|
[
"bug",
"repro needed",
"ver: 2.4.x"
] |
rubenweitzman
| 1
|
ydataai/ydata-profiling
|
pandas
| 1,059
|
Compatibility issue due to pydantic version upper bound
|
### Current Behaviour
Pandas-Profiling cannot be installed alongside another package that requires the latest version of Pydantic (`~=1.10`).
The requirements of Pandas-Profiling are currently `pydantic>=1.8.1, <1.10`.
Manually upgrading Pydantic doesn't break the first code sample I tried (create a `ProfileReport` from a dataframe, then write this report to HTML), but I suppose other features could be broken by the latest version of pydantic... I haven't been able to locate an issue specifically related to Pydantic 1.10.
### Expected Behaviour
It should be possible to install the latest stable release of pandas-profiling and pydantic together.
### Data Description
https://github.com/ydataai/pandas-profiling/blob/develop/requirements.txt#L5
### Code that reproduces the bug
```Python
"pandas-profiling~=3.3" > requirements.txt
"pydantic~=1.10" >> requirements.txt
pip install -r requirements.txt
```
### pandas-profiling version
v3.3.0
### Dependencies
```Text
pandas-profiling~=3.3
pydantic~=1.10
```
### OS
Windows 11
### Checklist
- [X] There is not yet another bug report for this issue in the [issue tracker](https://github.com/ydataai/pandas-profiling/issues)
- [X] The problem is reproducible from this bug report. [This guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) can help to craft a minimal bug report.
- [X] The issue has not been resolved by the entries listed under [Common Issues](https://pandas-profiling.ydata.ai/docs/master/pages/support_contrib/common_issues.html).
|
closed
|
2022-09-19T18:28:15Z
|
2022-09-19T19:39:27Z
|
https://github.com/ydataai/ydata-profiling/issues/1059
|
[
"needs-triage"
] |
PhilMacKay
| 1
|
Kludex/mangum
|
fastapi
| 217
|
errorMessage: Unable to determine handler from trigger event
|
Hey guys,
I am trying to build a Docker image to deploy on AWS Lambda following the doc and:
https://docs.aws.amazon.com/lambda/latest/dg/images-create.html#images-create-from-base
https://docs.aws.amazon.com/lambda/latest/dg/python-image.html#python-image-base
My Dockerfile:
```
FROM public.ecr.aws/lambda/python:3.8
# Copy function code
COPY app.py ${LAMBDA_TASK_ROOT}
# Install the function's dependencies using file requirements.txt
# from your project folder.
COPY requirements.txt .
RUN pip3 install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"
# Set the CMD to your handler (could also be done as a parameter override outside of the Dockerfile)
CMD [ "app.handler" ]
```
requirements.txt:
```
fastapi == 0.71.*
uvicorn[standard]
mangum == 0.12
```
app.py:
```
from fastapi import FastAPI
from mangum import Mangum

app = FastAPI()

@app.get("/")
def read_root():
    return {"Hello": "World"}

@app.get("/items/{item_id}")
def read_item(item_id: int, q: str = None):
    return {"item_id": item_id, "q": q}

handler = Mangum(app, spec_version=2)
```
then:
```
docker build -t hello-world .
docker run -p 9000:8080 hello-world
```
But when I try to query locally (following the doc):
```
$ curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'
{
"errorMessage": "Unable to determine handler from trigger event",
"errorType": "TypeError",
"stackTrace": [
" File \"/var/task/mangum/adapter.py\", line 86, in __call__\n handler = AbstractHandler.from_trigger(\n",
" File \"/var/task/mangum/handlers/abstract_handler.py\", line 113, in from_trigger\n raise TypeError(\"Unable to determine handler from trigger event\")\n"
]
}
```
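I may be wrong, but judging from the stack trace the empty `-d '{}'` payload is the problem: Mangum picks a handler by inspecting the event's shape, and `{}` matches no trigger. A sketch of a local invocation with a minimal API Gateway HTTP API (payload v2) event (the field set is my guess at the minimum Mangum needs):
```python
import json
import urllib.request

event = {
    "version": "2.0",
    "routeKey": "GET /",
    "rawPath": "/",
    "rawQueryString": "",
    "headers": {"host": "localhost"},
    "requestContext": {
        "http": {"method": "GET", "path": "/", "sourceIp": "127.0.0.1", "protocol": "HTTP/1.1"},
        "stage": "$default",
    },
    "isBase64Encoded": False,
}
req = urllib.request.Request(
    "http://localhost:9000/2015-03-31/functions/function/invocations",
    data=json.dumps(event).encode(),
)
print(urllib.request.urlopen(req).read().decode())
```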
I tried using (found in some discussions)
```
handler = Mangum(app, spec_version=2)
```
and even from another discussion:
```
import uvicorn
from fastapi import FastAPI
from mangum import Mangum
# TODO: Add this line
from starlette.middleware.cors import CORSMiddleware

app = FastAPI(title="FastAPI Mangum Example", version='1.0.0')

# TODO: Add these lines
app.add_middleware(
    CORSMiddleware,
    allow_origins='*',
    allow_credentials=False,
    allow_methods=["GET", "POST", "OPTIONS"],
    allow_headers=["x-apigateway-header", "Content-Type", "X-Amz-Date"],
)

handler = Mangum(app)

@app.get('/', name='Hello World', tags=['Hello'])
def hello_world():
    return {"Hello": "Python"}

if __name__ == '__main__':
    uvicorn.run(app)
```
It never works (same error when deployed to AWS Lambda, not just when run locally).
I tried previous versions but also had issues...
Does anyone have a working example for this?
|
closed
|
2022-01-10T02:53:25Z
|
2022-01-10T05:05:04Z
|
https://github.com/Kludex/mangum/issues/217
|
[] |
VictorBac
| 0
|
ivy-llc/ivy
|
numpy
| 28,075
|
Fix Ivy Failing Test: jax - statistical.cumsum
|
closed
|
2024-01-27T13:14:03Z
|
2024-01-28T19:59:41Z
|
https://github.com/ivy-llc/ivy/issues/28075
|
[
"Sub Task"
] |
samthakur587
| 0
|
|
Avaiga/taipy
|
automation
| 2,083
|
[🐛 BUG] Session state being shared in separate app instances
|
### What went wrong? 🤔
Hi, when you run the app, the first instance works fine and the second also works fine, but when you start the third instance, everything is shared between the second and the third instance. If you create more instances, all instances starting from the 2nd exhibit this behaviour.
### Expected Behavior
Each instance should have a separate state.
### Steps to Reproduce Issue
1. Start the app
2. Open first instance in a browser tab
3. Open second instance
4. Open third and check second and third tab behaviour.
Python Code:
```
import os
from taipy.gui import Gui, State, notify
from dotenv import load_dotenv

# Declare variables at module level
context = None
conversation = None
current_user_message = None
show_feedback_buttons = None
is_processing = None

def on_init(state: State) -> None:
    """
    Initialize the app.
    """
    state.context = ""
    state.conversation = {"Conversation": []}
    state.current_user_message = ""
    state.show_feedback_buttons = False
    state.is_processing = False

def reset_app(state: State) -> None:
    # Reset the app state
    on_init(state)
    notify(state, "info", "App has been reset!")

def response_generator(query: str) -> str:
    print(f"Generating response for query: {query}")
    import time
    time.sleep(2)  # Simulate processing time
    response = f"This is a custom response to: {query}"
    print(f"Response generated: {response}")
    return response

def send_message(state: State) -> None:
    if state.is_processing:
        notify(state, "warning", "Please wait for the current response to be processed.")
        return
    state.show_feedback_buttons = False
    if not state.current_user_message.strip():
        return
    state.is_processing = True
    try:
        user_message = state.current_user_message
        state.conversation["Conversation"].append(user_message)
        state.current_user_message = ""
        state.conversation = state.conversation.copy()
        notify(state, "info", "Waiting for AI response...")
        answer = response_generator(user_message)
        state.context += f"Human: {user_message}\n\nAI: {answer}\n\n"
        state.conversation["Conversation"].append(answer)
        state.conversation = state.conversation.copy()
        state.show_feedback_buttons = True
        notify(state, "success", "Response received!")
    except Exception as e:
        notify(state, "error", f"An error occurred: {e}")
    finally:
        state.is_processing = False

def style_conv(state: State, idx: int, row: int) -> str:
    if idx is None:
        return None
    elif idx % 2 == 0:
        return "user_message"
    else:
        return "gpt_message"

def on_exception(state, function_name: str, ex: Exception) -> None:
    notify(state, "error", f"An error occurred in {function_name}: {ex}")

page = """
<|layout|columns=300px 1|
<|part|class_name=sidebar|
# Taipy **Chat**{: .color-primary} # {: .logo-text}
<|Reset|button|class_name=fullwidth plain mt2|id=reset_app_button|on_action=reset_app|>
|>
<|part|class_name=chat-container|
<|part|class_name=message-list|
<|{conversation}|table|show_all|row_class_name=style_conv|>
|>
<|part|class_name=input-container|
<|{current_user_message}|input|label=Write your message here...|on_action=send_message|class_name=fullwidth|change_delay=-1|disabled={is_processing}|>
|>
|>
|>
"""

if __name__ == "__main__":
    load_dotenv()
    gui = Gui(page=page)
    gui.run(
        debug=False,
        dark_mode=True,
        use_reloader=True,
        title="💬 Taipy Chat",
        port=8503,
        run_server=True,
    )
```
Css:
```
body {
overflow: hidden;
}
.chat-container {
display: flex;
flex-direction: column;
height: calc(100vh - 40px);
}
.message-list {
flex-grow: 1;
overflow-y: auto;
padding: 20px;
margin-bottom: 10px;
}
.input-container {
padding: 15px;
background-color: var(--color-background);
border-top: 1px solid var(--color-paper);
position: sticky;
bottom: 0;
}
.gpt_message td {
margin-left: 30px;
margin-bottom: 20px;
margin-top: 20px;
position: relative;
display: inline-block;
padding: 20px;
background-color: #ff462b;
border-radius: 20px;
max-width: 80%;
box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19);
font-size: large;
}
.user_message td {
margin-right: 30px;
margin-bottom: 20px;
margin-top: 20px;
position: relative;
display: inline-block;
padding: 20px;
background-color: #140a1e;
border-radius: 20px;
max-width: 80%;
float: right;
box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19);
font-size: large;
}
:root {
--element-padding: 1.5rem;
}
#root > .MuiBox-root {
min-height: 100%;
}
main {
flex: 0 0 100%;
}
/* Hide the arrow next to "Conversation" */
.message-list .MuiTableHead-root .MuiTableCell-head .MuiTableSortLabel-icon {
display: none;
}
/* Disable clicking on the "Conversation" header */
.message-list .MuiTableHead-root .MuiTableCell-head {
pointer-events: none;
cursor: default;
}
/* Remove the hover effect on the header */
.message-list .MuiTableHead-root .MuiTableCell-head:hover {
background-color: inherit;
}
/* Remove the focus outline on the header */
.message-list .MuiTableHead-root .MuiTableCell-head:focus {
outline: none;
}
```
### Screenshots

### Acceptance Criteria
- [ ] A unit test reproducing the bug is added.
- [ ] Any new code is covered by a unit tested.
- [ ] Check code coverage is at least 90%.
- [ ] The bug reporter validated the fix.
- [ ] Related issue(s) in taipy-doc are created for documentation and Release Notes are updated.
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [X] I am willing to work on this issue (optional)
|
closed
|
2024-10-17T13:13:16Z
|
2024-10-17T14:23:33Z
|
https://github.com/Avaiga/taipy/issues/2083
|
[
"💥Malfunction"
] |
IshanRattan
| 2
|
pytorch/vision
|
computer-vision
| 8,293
|
Add 3.12 CI jobs
|
The next version should support 3.12 so we should make sure to cover that in our CI jobs.
Let's also merge https://github.com/pytorch/vision/pull/8119 and https://github.com/pytorch/vision/pull/8280
Just delaying this a bit until https://github.com/pytorch/vision/issues/8292 is managed.
Note: 3.8 removal won't happen before 2.4 https://github.com/pytorch/pytorch/issues/120718
|
closed
|
2024-03-04T14:06:14Z
|
2024-03-13T15:00:06Z
|
https://github.com/pytorch/vision/issues/8293
|
[] |
NicolasHug
| 3
|
plotly/dash
|
jupyter
| 2,941
|
Implementing Dual Scrollbars with AG Grid in Dash Application
|
I'm working on a Dash application that uses AG Grid, and I'm trying to implement a dual scrollbar system: one at the top and one at the bottom of the grid. However, I do not know if it is possible. Is there any guidance available on this? I tried replicating the navbar and connecting it through clientside callbacks from Dash, without any success.
|
closed
|
2024-08-06T16:01:40Z
|
2024-08-06T21:39:13Z
|
https://github.com/plotly/dash/issues/2941
|
[] |
jankrans
| 2
|
JaidedAI/EasyOCR
|
machine-learning
| 601
|
Downloading detection model too slow
|
Hi, when I run the code below on Windows, it displays "Downloading detection model, please wait. This may take several minutes depending upon your network connection."
Then it keeps downloading for a long time. Even with a VPN, progress is very slow.
I installed it with `pip install easyocr`.
The code is:
```python
import easyocr

reader = easyocr.Reader(['en'])
result = reader.readtext('1.jpg')
result
```
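In case it helps others: if the automatic download keeps stalling, one possible workaround (a sketch, assuming the model files can first be fetched manually from the EasyOCR model hub) is to point the reader at a local directory and disable downloading:
```python
import easyocr

# Assumes craft_mlt_25k and the English recognizer were downloaded manually
# into /path/to/models; download_enabled=False then skips the network step.
reader = easyocr.Reader(
    ['en'],
    model_storage_directory='/path/to/models',
    download_enabled=False,
)
```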
|
closed
|
2021-11-27T07:12:11Z
|
2022-08-07T05:01:23Z
|
https://github.com/JaidedAI/EasyOCR/issues/601
|
[] |
LiHangBing
| 2
|
google-research/bert
|
nlp
| 452
|
Different accuracy when predicting the dev set with do_predict
|
I trained my dataset (the same format as the CoLA dataset) for a sentence classification model, and after training the evaluation results in eval mode are:
eval_accuracy = 0.79638886
eval_loss = 1.4008523
global_step = 128711
loss = 1.4008523
After that, I copied this dev.tsv into test.tsv, removing the labels and the other columns to match the format of the CoLA test set. After taking the prediction results, I compared the predictions with their labels. When I calculate the accuracy ((TP + TN) / (TP + TN + FP + FN)), it comes out to 0.48138888888888887.
In brief, the accuracy scores differ when I evaluate the same dataset in eval mode versus predicting it as test.tsv with the do_predict option. How is this possible? Thanks for all comments.
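For reference, a minimal sketch of the comparison described above (the column layouts are assumptions: `test_results.tsv` with one probability column per class, and the CoLA-style `dev.tsv` with the label in the second column):
```python
import numpy as np
import pandas as pd

# Predicted class = argmax over the per-class probabilities BERT writes out.
probs = pd.read_csv("test_results.tsv", sep="\t", header=None).values
preds = np.argmax(probs, axis=1)

# CoLA-format dev set: the label is assumed to be in the second column.
labels = pd.read_csv("dev.tsv", sep="\t", header=None)[1].values

# Accuracy = (TP + TN) / (TP + TN + FP + FN)
print((preds == labels).mean())
```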
|
closed
|
2019-02-23T22:27:42Z
|
2019-12-18T18:45:27Z
|
https://github.com/google-research/bert/issues/452
|
[] |
atifemreyuksel
| 5
|
LAION-AI/Open-Assistant
|
python
| 3,292
|
Issue creating my own dataset
|
Hey there,
I just checked Open Assistant. Love the initiative.
I just ran `docker compose --profile ci up --build --attach-dependencies` and got the UI to start locally.
<img width="1668" alt="Screenshot 2023-06-04 at 00 26 58" src="https://github.com/LAION-AI/Open-Assistant/assets/12861981/60048111-16f0-40f6-b8cd-26893a52854b">
I would like to collaborate with some friends to create some prompts for our own usage. However, it seems I can't create a task or anything.
<img width="1663" alt="Screenshot 2023-06-04 at 00 27 51" src="https://github.com/LAION-AI/Open-Assistant/assets/12861981/9ba06bf1-0f5d-4d81-8687-add201cd8f0d">
What are the next steps to get started?
Best,
T.C
|
closed
|
2023-06-03T23:27:35Z
|
2023-06-05T16:04:31Z
|
https://github.com/LAION-AI/Open-Assistant/issues/3292
|
[
"question"
] |
tchaton
| 1
|
davidteather/TikTok-Api
|
api
| 876
|
Accounts creator?
|
Hi,
I am looking for a tool that can create valid tiktok accounts via Emails.
Telegram me at: https://t.me/JosecLee
Thank you,
pr41s3r
|
closed
|
2022-04-18T21:25:18Z
|
2022-07-03T23:15:31Z
|
https://github.com/davidteather/TikTok-Api/issues/876
|
[
"feature_request"
] |
h4ck1th4rd
| 0
|
plotly/dash
|
jupyter
| 2,798
|
[Feature Request] openssf scorecard
|
It would be good to add https://securityscorecards.dev/ to better understand where the next improvements could happen, and to help when evaluating the risk of using a component like this.
```
scorecard --repo=https://github.com/plotly/dash
Starting [Packaging]
Starting [Security-Policy]
Starting [Pinned-Dependencies]
Starting [Signed-Releases]
Starting [Code-Review]
Starting [CI-Tests]
Starting [CII-Best-Practices]
Starting [Token-Permissions]
Starting [License]
Starting [Maintained]
Starting [SAST]
Starting [Binary-Artifacts]
Starting [Branch-Protection]
Starting [Contributors]
Starting [Fuzzing]
Starting [Vulnerabilities]
Starting [Dependency-Update-Tool]
Starting [Dangerous-Workflow]
Finished [Code-Review]
Finished [CI-Tests]
Finished [CII-Best-Practices]
Finished [Token-Permissions]
Finished [Packaging]
Finished [Security-Policy]
Finished [Pinned-Dependencies]
Finished [Signed-Releases]
Finished [SAST]
Finished [Binary-Artifacts]
Finished [License]
Finished [Maintained]
Finished [Branch-Protection]
Finished [Contributors]
Finished [Fuzzing]
Finished [Vulnerabilities]
Finished [Dependency-Update-Tool]
Finished [Dangerous-Workflow]
RESULTS
-------
Aggregate score: 5.4 / 10
Check scores:
|---------|------------------------|--------------------------------|-----------------------------------------------------------------------------------------------------------------------|
| SCORE | NAME | REASON | DOCUMENTATION/REMEDIATION |
|---------|------------------------|--------------------------------|-----------------------------------------------------------------------------------------------------------------------|
| 10 / 10 | Binary-Artifacts | no binaries found in the repo | https://github.com/ossf/scorecard/blob/b6f48b370e61367600eafb257b4ad07feb9578ab/docs/checks.md#binary-artifacts |
|---------|------------------------|--------------------------------|-----------------------------------------------------------------------------------------------------------------------|
| 8 / 10 | Branch-Protection | branch protection is not | https://github.com/ossf/scorecard/blob/b6f48b370e61367600eafb257b4ad07feb9578ab/docs/checks.md#branch-protection |
| | | maximal on development and all | |
| | | release branches | |
|---------|------------------------|--------------------------------|-----------------------------------------------------------------------------------------------------------------------|
| 10 / 10 | CI-Tests | 7 out of 7 merged PRs | https://github.com/ossf/scorecard/blob/b6f48b370e61367600eafb257b4ad07feb9578ab/docs/checks.md#ci-tests |
| | | checked by a CI test -- score | |
| | | normalized to 10 | |
|---------|------------------------|--------------------------------|-----------------------------------------------------------------------------------------------------------------------|
| 0 / 10 | CII-Best-Practices | no effort to earn an OpenSSF | https://github.com/ossf/scorecard/blob/b6f48b370e61367600eafb257b4ad07feb9578ab/docs/checks.md#cii-best-practices |
| | | best practices badge detected | |
|---------|------------------------|--------------------------------|-----------------------------------------------------------------------------------------------------------------------|
| 8 / 10 | Code-Review | found 1 unreviewed changesets | https://github.com/ossf/scorecard/blob/b6f48b370e61367600eafb257b4ad07feb9578ab/docs/checks.md#code-review |
| | | out of 7 -- score normalized | |
| | | to 8 | |
|---------|------------------------|--------------------------------|-----------------------------------------------------------------------------------------------------------------------|
| 10 / 10 | Contributors | 25 different organizations | https://github.com/ossf/scorecard/blob/b6f48b370e61367600eafb257b4ad07feb9578ab/docs/checks.md#contributors |
| | | found -- score normalized to | |
| | | 10 | |
|---------|------------------------|--------------------------------|-----------------------------------------------------------------------------------------------------------------------|
| 10 / 10 | Dangerous-Workflow | no dangerous workflow patterns | https://github.com/ossf/scorecard/blob/b6f48b370e61367600eafb257b4ad07feb9578ab/docs/checks.md#dangerous-workflow |
| | | detected | |
|---------|------------------------|--------------------------------|-----------------------------------------------------------------------------------------------------------------------|
| 10 / 10 | Dependency-Update-Tool | update tool detected | https://github.com/ossf/scorecard/blob/b6f48b370e61367600eafb257b4ad07feb9578ab/docs/checks.md#dependency-update-tool |
|---------|------------------------|--------------------------------|-----------------------------------------------------------------------------------------------------------------------|
| 0 / 10 | Fuzzing | project is not fuzzed | https://github.com/ossf/scorecard/blob/b6f48b370e61367600eafb257b4ad07feb9578ab/docs/checks.md#fuzzing |
|---------|------------------------|--------------------------------|-----------------------------------------------------------------------------------------------------------------------|
| 10 / 10 | License | license file detected | https://github.com/ossf/scorecard/blob/b6f48b370e61367600eafb257b4ad07feb9578ab/docs/checks.md#license |
|---------|------------------------|--------------------------------|-----------------------------------------------------------------------------------------------------------------------|
| 10 / 10 | Maintained | 30 commit(s) out of 30 and 1 | https://github.com/ossf/scorecard/blob/b6f48b370e61367600eafb257b4ad07feb9578ab/docs/checks.md#maintained |
| | | issue activity out of 30 found | |
| | | in the last 90 days -- score | |
| | | normalized to 10 | |
|---------|------------------------|--------------------------------|-----------------------------------------------------------------------------------------------------------------------|
| ? | Packaging | no published package detected | https://github.com/ossf/scorecard/blob/b6f48b370e61367600eafb257b4ad07feb9578ab/docs/checks.md#packaging |
|---------|------------------------|--------------------------------|-----------------------------------------------------------------------------------------------------------------------|
| 4 / 10 | Pinned-Dependencies | dependency not pinned by hash | https://github.com/ossf/scorecard/blob/b6f48b370e61367600eafb257b4ad07feb9578ab/docs/checks.md#pinned-dependencies |
| | | detected -- score normalized | |
| | | to 4 | |
|---------|------------------------|--------------------------------|-----------------------------------------------------------------------------------------------------------------------|
| 0 / 10 | SAST | SAST tool is not run on all | https://github.com/ossf/scorecard/blob/b6f48b370e61367600eafb257b4ad07feb9578ab/docs/checks.md#sast |
| | | commits -- score normalized to | |
| | | 0 | |
|---------|------------------------|--------------------------------|-----------------------------------------------------------------------------------------------------------------------|
| 0 / 10 | Security-Policy | security policy file not | https://github.com/ossf/scorecard/blob/b6f48b370e61367600eafb257b4ad07feb9578ab/docs/checks.md#security-policy |
| | | detected | |
|---------|------------------------|--------------------------------|-----------------------------------------------------------------------------------------------------------------------|
| 0 / 10 | Signed-Releases | 0 out of 1 artifacts are | https://github.com/ossf/scorecard/blob/b6f48b370e61367600eafb257b4ad07feb9578ab/docs/checks.md#signed-releases |
| | | signed or have provenance | |
|---------|------------------------|--------------------------------|-----------------------------------------------------------------------------------------------------------------------|
| 0 / 10 | Token-Permissions | detected GitHub workflow | https://github.com/ossf/scorecard/blob/b6f48b370e61367600eafb257b4ad07feb9578ab/docs/checks.md#token-permissions |
| | | tokens with excessive | |
| | | permissions | |
|---------|------------------------|--------------------------------|-----------------------------------------------------------------------------------------------------------------------|
| 0 / 10 | Vulnerabilities | 46 existing vulnerabilities | https://github.com/ossf/scorecard/blob/b6f48b370e61367600eafb257b4ad07feb9578ab/docs/checks.md#vulnerabilities |
| | | detected | |
|---------|------------------------|--------------------------------|-----------------------------------------------------------------------------------------------------------------------|
```
|
closed
|
2024-03-16T14:04:22Z
|
2024-07-26T13:07:28Z
|
https://github.com/plotly/dash/issues/2798
|
[] |
andy778
| 1
|
plotly/dash
|
data-visualization
| 2,311
|
Deletion of last row containing delete-button which is target of dbc.Tooltip leads to invalid children output
|
**Describe your context**
```
dash_bootstrap_components==1.2.1
dash==2.7.0
```
**Describe the bug**
I have implemented a simple editor, which enables you to add and delete rows using a pattern-matching-callback. Each row contains a delete-button, which is target of a `dbc.Tooltip`.

When deleting the very last row, an error occurs when returning adjusted children for div `selection-area`:
```
An object was provided as `children` instead of a component, string, or number (or list of those). Check the children property that looks something like:
{
"props": {
"children": [
{
"props": {
"children": [
null,
{
"props": {
"is_open": false
}
}
]
}
}
]
}
}
```
* According to the console prints in my code below, the callback is unexpectedly called twice by the delete-button-pattern-input when clicking delete on the very last row.
* The second time the callback is called, the last element of the input `selection_elements` is the object from the above error.
* If we set a breakpoint on the return command at the end of the callback or apply `time.sleep(1)`, the bug doesn't occur.
* The bug doesn't occur when deleting rows from the middle or top.
* The bug doesn't occur if we don't use the delete-button as target of the `dbc.Tooltip`.
Full runnable code for investigation of the bug:
```
import dash
from dash import Dash, dcc, html
from dash.exceptions import PreventUpdate
import dash_bootstrap_components as dbc
from dash.dependencies import Input, Output, State, ALL
import json
app = Dash(__name__, external_stylesheets=[dbc.themes.BOOTSTRAP])
def make_input(i):
return dbc.Row([dbc.Col(
dcc.Input(id={"index": i, "type": "selection-input"},
type='text',
value='',
placeholder='e.g. c1-c0',
className='w-100'), lg=9)])
def make_input_row(i):
elements = (dbc.Col([html.Div(dbc.Button(children=['Delete'], id={"index": i, "type": 'remove-row-button'}), id=f"remove-row-button-tooltip-wrapper-{i}"),
dbc.Tooltip(
'Delete Row',
target=f"remove-row-button-tooltip-wrapper-{i}",
placement='left')
], lg=1),)
elements = elements + (dbc.Col(make_input(i), lg=4),)
return elements
app.layout = html.Div([html.Div(id='selection-area', children=[
dbc.Row(children=[
*make_input_row(0)
], id={"index": 0, "type": "selection-area-row"}, className="mt-2"),
dbc.Row(children=[
*make_input_row(1)
], id={"index": 1, "type": "selection-area-row"}, className="mt-2"),
dbc.Row(children=[
*make_input_row(2)
], id={"index": 2, "type": "selection-area-row"}, className="mt-2"),
dbc.Row(children=[
*make_input_row(3)
], id={"index": 3, "type": "selection-area-row"}, className="mt-2")
]), dbc.Row(children=[
dbc.Col(lg=1),
dbc.Col(dbc.Button('Add', id='add-row-button', outline=True, color='primary', style={'width': '100%'}), lg=3)
], className="mt-2"), ])
def edit_selection_area(remove_click, selection_elements):
trigger_id = dash.callback_context.triggered[0]['prop_id'].split('.')[0]
if trigger_id == 'add-row-button' and len(selection_elements) < 10:
i = max([channel_element['props']['id']['index'] for channel_element in selection_elements]) + 1 if selection_elements else 0
selection_elements.append(dbc.Row(children=[
*make_input_row(i)
], id={"index": i, "type": "selection-area-row"}, className="mt-2"))
elif 'remove-row-button' in trigger_id and len(selection_elements) > 1 and any(remove_click):
i = json.loads(trigger_id)['index']
selection_elements = [selection_element for selection_element in selection_elements if selection_element['props']['id']['index'] != i]
return selection_elements
@app.callback(
Output('selection-area', 'children'),
[Input('add-row-button', 'n_clicks'),
Input({'type': 'remove-row-button', 'index': ALL}, 'n_clicks')],
[State('selection-area', 'children')]
)
def edit_selection_area_callback(add_click, remove_click, selection_elements):
trigger_id = dash.callback_context.triggered[0]['prop_id']
if "remove-row-button" in trigger_id and not any(remove_click):
print("TRIGGERED A SECOND TIME WHEN CLICKING THE LAST ELEMENT:")
print("trigger_id: ", trigger_id)
print("remove_click: ", remove_click)
print(f"input selection elements: {len(selection_elements)} ", selection_elements)
raise PreventUpdate
print("trigger_id: ", trigger_id)
print("remove_click: ", remove_click)
print(f"input PreventUpdate elements: {len(selection_elements)} ", selection_elements)
children = edit_selection_area(remove_click, selection_elements)
print(f"output PreventUpdate elements: {len(children)} ", children)
print()
return children
if __name__ == "__main__":
app.run_server(debug=True)
```
**Expected behavior**
Deletion of the last row containing a delete-button which is target of a `dbc.Tooltip` should behave the same way as deletion of rows from the middle or top.
|
open
|
2022-11-11T15:42:48Z
|
2024-08-13T19:22:30Z
|
https://github.com/plotly/dash/issues/2311
|
[
"bug",
"dash-data-table",
"P3"
] |
timbrnbrr
| 1
|
JaidedAI/EasyOCR
|
pytorch
| 1,324
|
Fine-tuned CRAFT model works much slower on CPU than default one.
|
I fine-tuned CRAFT model according to this guide: https://github.com/JaidedAI/EasyOCR/tree/master/trainer/craft
But this model runs 5 times slower than the default model 'craft_mlt_25k' on some server CPUs (on other CPUs the speeds are the same). What could cause this? Is 'craft_mlt_25k' quantized in some way?
|
open
|
2024-10-18T09:21:37Z
|
2024-12-09T02:27:33Z
|
https://github.com/JaidedAI/EasyOCR/issues/1324
|
[] |
romanvelichkin
| 1
|
tortoise/tortoise-orm
|
asyncio
| 1,206
|
Additional JSON Functions for Postgres
|
While interacting with a Postgres database, I am unable to do most basic queries on JSON objects. The only ones Tortoise allows are `equals`, `not`, `isnull` and `notnull`, which is extremely limiting and somewhat frustrating.
I am hoping that you could add operations like `in`, `gte`, etc. to the JSON functions to extend filtering.
Currently, I've been limited to using raw SQL queries, built with PyPika, when trying to use `in`-style lookups with JSON functions. If you have any other alternatives, I'd love to hear about them.
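For context, a sketch of the kind of filtering being requested; the model and field names are illustrative, and the commented-out lookups show the desired (currently unsupported) syntax by analogy with the ORM's regular field lookups:
```python
from tortoise import fields
from tortoise.models import Model

class Event(Model):
    id = fields.IntField(pk=True)
    payload = fields.JSONField()  # e.g. {"status": "open", "retries": 3}

# Works today: whole-value equality on the JSON field.
# await Event.filter(payload={"status": "open", "retries": 3})

# Desired but unsupported today, hence the raw-SQL workaround:
# await Event.filter(payload__status__in=["open", "pending"])
# await Event.filter(payload__retries__gte=3)
```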
|
closed
|
2022-08-01T08:26:53Z
|
2024-11-28T15:49:14Z
|
https://github.com/tortoise/tortoise-orm/issues/1206
|
[] |
sid-m-1mg
| 1
|
Josh-XT/AGiXT
|
automation
| 1,026
|
Check for kwargs["USE_STREAMLABS_TTS"].lower() doesn't work as bool object does not have lower
|
### Description
```
agixt-agixt-1 | File "/agixt/extensions/voice_chat.py", line 17, in __init__
agixt-agixt-1 | if kwargs["USE_STREAMLABS_TTS"].lower() == "true":
agixt-agixt-1 | AttributeError: 'bool' object has no attribute 'lower'
```
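A minimal defensive fix sketch (an assumption about how it could be patched, not necessarily the maintainers' actual fix) is to coerce the setting to a string before comparing, so both the saved boolean and the string form are handled:
```python
# str(True).lower() == "true" and str("True").lower() == "true",
# so this works whether the saved value is a bool or a string.
use_streamlabs_tts = str(kwargs.get("USE_STREAMLABS_TTS", "false")).lower() == "true"
```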
### Steps to Reproduce the Bug
1. Follow quick start instructions (clone, run shell script, it runs docker containers).
2. Go to the OpenAI Agent to add an API key to the settings.
3. Click save.
4. In future error appears whenever trying to load settings page.
### Expected Behavior
Agent should save normally, error should not appear, perhaps there should be a way to reset this file to default?
### Operating System
- [ ] Linux
- [ ] Microsoft Windows
- [X] Apple MacOS
- [ ] Android
- [ ] iOS
- [ ] Other
### Python Version
- [ ] Python <= 3.9
- [ ] Python 3.10
- [X] Python 3.11
### Environment Type - Connection
- [X] Local - You run AGiXT in your home network
- [ ] Remote - You access AGiXT through the internet
### Runtime environment
- [X] Using docker compose
- [ ] Using local
- [ ] Custom setup (please describe above!)
### Acknowledgements
- [X] I have searched the existing issues to make sure this bug has not been reported yet.
- [X] I am using the latest version of AGiXT.
- [X] I have provided enough information for the maintainers to reproduce and diagnose the issue.
|
closed
|
2023-10-02T16:46:58Z
|
2023-10-03T11:55:40Z
|
https://github.com/Josh-XT/AGiXT/issues/1026
|
[
"type | report | bug",
"needs triage"
] |
bertybuttface
| 1
|
babysor/MockingBird
|
pytorch
| 200
|
Synthesis fails
|
Excuse me, after synthesizing and playing the example sentence once, when I modify the text and synthesize again I get "Exception: Sizes of tensors must match except in dimension 1. Expected size 2 but got size 1 for tensor number in the list". What should I do?
|
closed
|
2021-11-08T10:01:49Z
|
2021-11-08T12:50:07Z
|
https://github.com/babysor/MockingBird/issues/200
|
[] |
JUNQ930809
| 0
|
RobertCraigie/prisma-client-py
|
pydantic
| 461
|
Can't generate prisma client with docker alpine
|
<!--
Thanks for helping us improve Prisma Client Python! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by enabling additional logging output.
See https://prisma-client-py.readthedocs.io/en/stable/reference/logging/ for how to enable additional logging output.
-->
## Bug description
When running prisma generate:
```
Downloading binaries [####################################] 100%
Traceback (most recent call last):
File "/usr/local/bin/prisma", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.10/site-packages/prisma/cli/cli.py", line 39, in main
sys.exit(prisma.run(args[1:]))
File "/usr/local/lib/python3.10/site-packages/prisma/cli/prisma.py", line 76, in run
process = subprocess.run(
File "/usr/local/lib/python3.10/subprocess.py", line 501, in run
with Popen(*popenargs, **kwargs) as process:
File "/usr/local/lib/python3.10/subprocess.py", line 969, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/usr/local/lib/python3.10/subprocess.py", line 1845, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/prisma/binaries/engines/efdf9b1183dddfd4258cd181a72125755215ab7b/prisma-cli-linux'
```
## How to reproduce
Run a python container with base image python:3.10.5-alpine3.16 and run prisma generate
## How to fix
Switch to a Debian-based image, for example:
`FROM python:3.10.5-slim-buster`
|
closed
|
2022-08-09T22:16:30Z
|
2022-12-03T17:03:45Z
|
https://github.com/RobertCraigie/prisma-client-py/issues/461
|
[
"bug/1-repro-available",
"kind/bug",
"priority/low",
"level/unknown",
"topic: binaries"
] |
dualmacops
| 4
|
donnemartin/system-design-primer
|
python
| 410
|
please ignore this and just close it
|
closed
|
2020-05-06T01:21:12Z
|
2020-05-06T06:06:08Z
|
https://github.com/donnemartin/system-design-primer/issues/410
|
[] |
YuanhuiLouis
| 1
|
|
awesto/django-shop
|
django
| 25
|
We need a show_order templatetag
|
To let people get creative with django-SHOP on the template fronts, we should add some mechanism to let them display orders easily.
Since this will be django-SHOP specific, it should go in-tree, and not in a plugin.
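A rough sketch of what such an in-tree mechanism could look like, as an inclusion tag; all names here are illustrative, not an existing django-SHOP API:
```python
# shop/templatetags/shop_tags.py (hypothetical module)
from django import template

register = template.Library()

@register.inclusion_tag('shop/order_detail.html')
def show_order(order):
    """Render an order using a default, overridable template."""
    return {'order': order}
```
Templates could then display an order with `{% load shop_tags %}{% show_order order %}`, while still being able to override `shop/order_detail.html` for custom layouts.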
|
closed
|
2011-04-10T11:45:35Z
|
2011-04-18T21:04:39Z
|
https://github.com/awesto/django-shop/issues/25
|
[] |
chrisglass
| 2
|
plotly/plotly.py
|
plotly
| 4,718
|
Permanent plotly marker text?
|
I have this code. I’m trying to create a map with the plotly_express markers showing a certain text (from the type column, which is included) right on top of the markers, hence the text = 'Type' and textposition = 'top center'. (See code below.) But when I run the code, the text only appears when I hover, which is the opposite of what I’m looking for. How to fix? Is it because I am missing a mapbox token?
```python
fig = px.scatter_mapbox(
    daily_average_exceedances_df,
    lat='Latitude',
    lon='Longitude',
    color='Daily Exceedances',
    color_continuous_scale=px.colors.sequential.Sunset,
    range_color=(0, 30),
    hover_data=['siteId'],
    text=daily_average_exceedances_df['Type'],
).update_layout(
    mapbox={"style": "carto-positron", "zoom": 11}, margin={"l": 0, "r": 0, "t": 0, "b": 0}
)
```
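One possible cause (an assumption, not verified here): scattermapbox traces default to `mode='markers'`, so the per-point text only renders on hover unless the trace mode includes `text`:
```python
# Show the text labels permanently instead of only on hover.
fig.update_traces(mode="markers+text", textposition="top center")
```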
Thanks,
Buford Bufordson
|
closed
|
2024-08-15T22:06:53Z
|
2024-08-16T13:28:50Z
|
https://github.com/plotly/plotly.py/issues/4718
|
[] |
bufordbufordson
| 1
|
gunthercox/ChatterBot
|
machine-learning
| 1,658
|
ImportError: cannot import name 'ChatBot'
|
I am trying to run the code below in Anaconda. However, I am getting an error. Kindly help.
CODE:
```python
from chatterbot import ChatBot
from chatterbot.trainers import ListTrainers
import os

bot = ChatBot('Bot')
bot.set_trainers(ListTrainers)
for files in os.listdir('C:\\Users\\lenovo\\Downloads\\BOT\\chatterbot-corpus-master\\chatterbot_corpus\\data\\english'):
    data = open('C:\\Users\\lenovo\\Downloads\\BOT\\chatterbot-corpus-master\\chatterbot_corpus\\data\\english\\' + files, 'r').readlines()
    bot.train(data)

while True:
    message = input("USER:")
    if message.strip() != 'bye':
        reply = bot.get_response(message)
        print("BOT:" + reply)
    if message.strip() == 'bye':
        print("BOT:BYE :-), hope to see you again")
        break
```

|
closed
|
2019-03-08T01:12:40Z
|
2020-01-03T01:32:53Z
|
https://github.com/gunthercox/ChatterBot/issues/1658
|
[] |
neelabhsriv
| 5
|
reloadware/reloadium
|
django
| 76
|
Reloadium experienced a fatal error and has to quit.
|
Hi Reloadium, I have encountered following issue:
## Describe the bug
I would like to use reloadium in order to debug my remote code. Unfortunately, when I click on debug I get the following error:
<img width="777" alt="image" src="https://user-images.githubusercontent.com/44685678/205290701-89ada685-d320-4ea0-a81b-52d594c113c1.png">
## To Reproduce
Steps to reproduce the behavior:
1. Go to PyCharm and specify a remote SFTP deployment, setting the local and remote paths correctly.
2. Go to the interpreter settings and set the remote SSH interpreter using the deployment previously defined.
3. Install reloadium with the `pip3.8 install reloadium` command.
4. Click on debug to test that your debugging works in general; after a breakpoint is hit, stop it.
5. Click on Reloadium debugging.
## Expected behavior
Debugging works as expected, hot-swap is possible.
|
closed
|
2022-12-02T12:19:02Z
|
2023-05-28T17:04:21Z
|
https://github.com/reloadware/reloadium/issues/76
|
[] |
TS9001
| 12
|
browser-use/browser-use
|
python
| 295
|
[bug] No support for nested iframes
|
I'm trying to automate a page which includes credit card number within (nested!) iframe.
While browser-use correctly sees the field and actually creates a selector, it fails to fill it out:
```
ERROR [browser] Failed to locate element: Locator.element_handle: Timeout 30000ms exceeded.
Call log:
- waiting for locator("html > body > div:nth-of-type(2) > div:nth-of-type(2) > div > div:nth-of-type(2) > iframe[id=\"tokenframe\"][name=\"tokenframe\"][src=\"https://msfc.cardconnect.com/itoke/ajax-tokenizer.html?invalidinputevent=true&css=.error%7Bcolor%3Ared%3Bborder-color%3Ared%3B%7D%3Binput%7Bheight%3A28px%3Bwidth%3A400px%3Bmargin-left%3A-7px%3Bmargin-top%3A-7px%3Bmargin-bottom%3A2px%3Bbox-shadow%3Anone%3Bborder%3Anone%3Bborder-color%3Againsboro%3B%7D\"][title=\"Frame\"]").content_frame.locator("html > body > form > input[id=\"ccnumfield\"][type=\"tel\"][name=\"ccnumfield\"][autocomplete=\"off\"][title=\"Credit Card Number\"][aria-label=\"Credit Card Number\"]")
```
I am not sure why this happens, but I have a guess:
- in this particular case (this is https://direct.sos.state.tx.us/acct/acct-login.asp?spage=login3, but it's behind a password), we have two nested iframes
- they have the same id (but they are scoped, so that's ok)
- in the call log we see that it looks for the inner iframe (based on the selector code)
Here's simplified HTML:
```html
<html lang="en" rp-extension="true"><head>
<title>DIRECT ACCESS SUBSCRIBER LOGIN</title>
<body>
<!-- skipped -->
<iframe id="tokenframe" src="HTTPS://WWW.SNAPPAYGLOBAL.COM/INTEROP/INTEROPREQUEST?REQNO=3A0E722A-C4D4-EF11-88D1-CE5A7CAFDAE9">
<html><body>
<!-- skipped -->
<iframe id="tokenframe" name="tokenframe" src="https://msfc.cardconnect.com/itoke/ajax-tokenizer.html?invalidinputevent=true&css=.error%7Bcolor%3Ared%3Bborder-color%3Ared%3B%7D%3Binput%7Bheight%3A28px%3Bwidth%3A400px%3Bmargin-left%3A-7px%3Bmargin-top%3A-7px%3Bmargin-bottom%3A2px%3Bbox-shadow%3Anone%3Bborder%3Anone%3Bborder-color%3Againsboro%3B%7D" title="Frame" height="35" width="200" frameborder="0" scrolling="no">
<html><body><form><input/></form></body></html>
</iframe>
<!-- skipped -->
</body></html>
</iframe>
<!-- skipped -->
</body>
</html>
```
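For reference, a sketch of how the nesting could be addressed explicitly with Playwright's chained `frame_locator` (the URL and card number are placeholders; the selectors come from the simplified HTML above):
```python
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://example.com/login")  # placeholder URL

    # Each frame_locator call descends one iframe level, so the two
    # same-id iframes stay properly scoped.
    outer = page.frame_locator('iframe#tokenframe')
    inner = outer.frame_locator('iframe[name="tokenframe"]')
    inner.locator('input#ccnumfield').fill("4111111111111111")

    browser.close()
```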
|
closed
|
2025-01-17T11:30:38Z
|
2025-01-19T22:18:01Z
|
https://github.com/browser-use/browser-use/issues/295
|
[] |
neoromantic
| 5
|
davidsandberg/facenet
|
tensorflow
| 338
|
OutOfRangeError (see above for traceback): FIFOQueue '_1_batch_join/fifo_queue' is closed and has insufficient elements (requested 90, current size 0) [[Node: batch_join = QueueDequeueUpToV2[component_types=[DT_FLOAT, DT_INT64], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](batch_join/fifo_queue, _recv_batch_size_0)]]
|
Hi,
I am trying to train a facenet model on my own dataset. My dataset consists of images which were obtained by using a face detector developed at our lab at CMU. There is no problem with the generated crops. I have used the same dataset for training different models in Caffe.
When I change the data_dir path to my own dataset, the training starts and aborts at the third iteration in the first epoch itself. This is the run command that I use:
```
python src/train_softmax.py --logs_base_dir /home/uzair/tensorflow/facenet/logs/ \
  --models_base_dir /home/uzair/tensorflow/facenet/models_base_dir/ \
  --image_width 96 --image_height 112 --model_def models.face-resnet \
  --lfw_dir /home/uzair/Datasets/lfw_mtcnnpy_96_112 --optimizer RMSPROP \
  --learning_rate -1 --max_nrof_epochs 80 --keep_probability 0.8 --random_crop --random_flip \
  --learning_rate_schedule_file /home/uzair/tensorflow/facenet/data/learning_rate_schedule_classifier_casia.txt \
  --weight_decay 5e-5 --center_loss_factor 1e-2 --center_loss_alfa 0.9 \
  --lfw_pairs /home/uzair/tensorflow/facenet/data/pairs.txt --embedding_size 512 \
  --batch_size 90 --epoch_size 100 --data_dir /home/uzair/caffe-face/datasets/CASIA/CAISAdataset_112X96_#2
```
I have looked at other solutions where people suggest reducing the `--epoch_size` value, but I see that in the code the call
```python
index_queue = tf.train.range_input_producer(range_size, num_epochs=None,
                                            shuffle=True, seed=None, capacity=32)
```
does not depend on `num_epochs`, so this is no longer a valid solution. Also, I am using JPEG images in my dataset and I have already changed the line `image = tf.image.decode_png(file_contents)` to `image = tf.image.decode_image(file_contents)`.
I have the exact error message with the stacktrace below:
```
2017-06-20 16:05:33.969081: W tensorflow/core/framework/op_kernel.cc:1152] Out of range: FIFOQueue '_1_batch_join/fifo_queue' is closed and has insufficient elements (requested 90, current size 0)
[[Node: batch_join = QueueDequeueUpToV2[component_types=[DT_FLOAT, DT_INT64], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](batch_join/fifo_queue, _recv_batch_size_0)]]
2017-06-20 16:05:33.969110: W tensorflow/core/framework/op_kernel.cc:1152] Out of range: FIFOQueue '_1_batch_join/fifo_queue' is closed and has insufficient elements (requested 90, current size 0)
[[Node: batch_join = QueueDequeueUpToV2[component_types=[DT_FLOAT, DT_INT64], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](batch_join/fifo_queue, _recv_batch_size_0)]]
2017-06-20 16:05:33.969138: W tensorflow/core/framework/op_kernel.cc:1152] Out of range: FIFOQueue '_1_batch_join/fifo_queue' is closed and has insufficient elements (requested 90, current size 0)
[[Node: batch_join = QueueDequeueUpToV2[component_types=[DT_FLOAT, DT_INT64], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](batch_join/fifo_queue, _recv_batch_size_0)]]
2017-06-20 16:05:33.969152: W tensorflow/core/framework/op_kernel.cc:1152] Out of range: FIFOQueue '_1_batch_join/fifo_queue' is closed and has insufficient elements (requested 90, current size 0)
[[Node: batch_join = QueueDequeueUpToV2[component_types=[DT_FLOAT, DT_INT64], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](batch_join/fifo_queue, _recv_batch_size_0)]]
2017-06-20 16:05:33.969164: W tensorflow/core/framework/op_kernel.cc:1152] Out of range: FIFOQueue '_1_batch_join/fifo_queue' is closed and has insufficient elements (requested 90, current size 0)
[[Node: batch_join = QueueDequeueUpToV2[component_types=[DT_FLOAT, DT_INT64], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](batch_join/fifo_queue, _recv_batch_size_0)]]
2017-06-20 16:05:33.969206: W tensorflow/core/framework/op_kernel.cc:1152] Out of range: FIFOQueue '_1_batch_join/fifo_queue' is closed and has insufficient elements (requested 90, current size 0)
[[Node: batch_join = QueueDequeueUpToV2[component_types=[DT_FLOAT, DT_INT64], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](batch_join/fifo_queue, _recv_batch_size_0)]]
2017-06-20 16:05:33.969219: W tensorflow/core/framework/op_kernel.cc:1152] Out of range: FIFOQueue '_1_batch_join/fifo_queue' is closed and has insufficient elements (requested 90, current size 0)
[[Node: batch_join = QueueDequeueUpToV2[component_types=[DT_FLOAT, DT_INT64], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](batch_join/fifo_queue, _recv_batch_size_0)]]
2017-06-20 16:05:33.969557: W tensorflow/core/framework/op_kernel.cc:1152] Out of range: FIFOQueue '_1_batch_join/fifo_queue' is closed and has insufficient elements (requested 90, current size 0)
[[Node: batch_join = QueueDequeueUpToV2[component_types=[DT_FLOAT, DT_INT64], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](batch_join/fifo_queue, _recv_batch_size_0)]]
2017-06-20 16:05:33.969587: W tensorflow/core/framework/op_kernel.cc:1152] Out of range: FIFOQueue '_1_batch_join/fifo_queue' is closed and has insufficient elements (requested 90, current size 0)
[[Node: batch_join = QueueDequeueUpToV2[component_types=[DT_FLOAT, DT_INT64], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](batch_join/fifo_queue, _recv_batch_size_0)]]
2017-06-20 16:05:33.969610: W tensorflow/core/framework/op_kernel.cc:1152] Out of range: FIFOQueue '_1_batch_join/fifo_queue' is closed and has insufficient elements (requested 90, current size 0)
[[Node: batch_join = QueueDequeueUpToV2[component_types=[DT_FLOAT, DT_INT64], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](batch_join/fifo_queue, _recv_batch_size_0)]]
2017-06-20 16:05:33.969635: W tensorflow/core/framework/op_kernel.cc:1152] Out of range: FIFOQueue '_1_batch_join/fifo_queue' is closed and has insufficient elements (requested 90, current size 0)
[[Node: batch_join = QueueDequeueUpToV2[component_types=[DT_FLOAT, DT_INT64], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](batch_join/fifo_queue, _recv_batch_size_0)]]
2017-06-20 16:05:33.969671: W tensorflow/core/framework/op_kernel.cc:1152] Out of range: FIFOQueue '_1_batch_join/fifo_queue' is closed and has insufficient elements (requested 90, current size 0)
[[Node: batch_join = QueueDequeueUpToV2[component_types=[DT_FLOAT, DT_INT64], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](batch_join/fifo_queue, _recv_batch_size_0)]]
2017-06-20 16:05:33.969713: W tensorflow/core/framework/op_kernel.cc:1152] Out of range: FIFOQueue '_1_batch_join/fifo_queue' is closed and has insufficient elements (requested 90, current size 0)
[[Node: batch_join = QueueDequeueUpToV2[component_types=[DT_FLOAT, DT_INT64], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](batch_join/fifo_queue, _recv_batch_size_0)]]
Traceback (most recent call last):
File "src/train_softmax.py", line 522, in <module>
main(parse_arguments(sys.argv[1:]))
File "src/train_softmax.py", line 259, in main
cross_entropy_mean_backprop,reg_losses_without_ringloss,reg_losses_without_ringloss_backprop)
File "src/train_softmax.py", line 338, in train
err, _, step, reg_loss,
R_val,norm_feat,raw_ring_loss,grad_softmax1,grad_ringloss1,
ringloss_backprop1,total_loss_backprop1
,R_backprop1,cross_entropy_mean_backprop1,reg_losses_without_ringloss1,
reg_losses_without_ringloss_backprop1 = sess.run([loss, train_op, global_step,
regularization_losses,Rval,mean_norm_features,prelogits_center_loss,
grad_softmax,grad_ringloss,ringloss_backprop,total_loss_backprop,
R_backprop,cross_entropy_mean_backprop,reg_losses_without_ringloss,
reg_losses_without_ringloss_backprop], feed_dict=feed_dict)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 778, in run
run_metadata_ptr)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 982, in _run
feed_dict_string, options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1032, in _do_run
target_list, options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1052, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.OutOfRangeError: FIFOQueue '_1_batch_join/fifo_queue'
is closed and has insufficient elements (requested 90, current size 0)
[[Node: batch_join = QueueDequeueUpToV2[component_types=[DT_FLOAT, DT_INT64],
timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](batch_join/fifo_queue,
_recv_batch_size_0)]]
Caused by op u'batch_join', defined at:
File "src/train_softmax.py", line 522, in <module>
main(parse_arguments(sys.argv[1:]))
File "src/train_softmax.py", line 153, in main
allow_smaller_final_batch=True)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/input.py", line 1065, in
batch_join
name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/input.py", line 745, in _batch_join
dequeued = queue.dequeue_up_to(batch_size, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/data_flow_ops.py", line 499, in dequeue_up_to
self._queue_ref, n=n, component_types=self._dtypes, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_data_flow_ops.py", line 1420, in _queue_dequeue_up_to_v2
timeout_ms=timeout_ms, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 768, in apply_op
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2336, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1228, in __init__
self._traceback = _extract_stack()
OutOfRangeError (see above for traceback): FIFOQueue '_1_batch_join/fifo_queue' is closed and
has insufficient elements (requested 90, current size 0)
[[Node: batch_join = QueueDequeueUpToV2[component_types=[DT_FLOAT, DT_INT64],
timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](batch_join/fifo_queue,
_recv_batch_size_0)]]
```
I'd really appreciate any help that I can get. I really need to move past this error so that I can train on the different datasets that are available at my lab.
|
closed
|
2017-06-20T20:38:32Z
|
2021-03-10T11:10:07Z
|
https://github.com/davidsandberg/facenet/issues/338
|
[] |
uzair789
| 26
|
pyro-ppl/numpyro
|
numpy
| 1,324
|
0.9.0 error when sampling posterior predictive via guide
|
Error:
`TypeError: mul got incompatible shapes for broadcasting: (1000,), (3,).`
Works just fine in `0.8.0`
Code to reproduce:
```python
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
import pandas as pd
import arviz as az
import matplotlib.pyplot as plt
import jax.numpy as jnp
from jax import random
import numpyro
import numpyro.distributions as dist
import numpyro.optim as optim
from numpyro.infer import SVI, Trace_ELBO, Predictive
from numpyro.infer.autoguide import AutoLaplaceApproximation, AutoNormal
# INFO: load data for problems from the official repo
data_uri = "https://raw.githubusercontent.com/rmcelreath/rethinking/master/data/Howell1.csv"
df_dev = pd.read_csv(data_uri, sep=";")
df_dev = df_dev[df_dev["age"] >= 18]
X_train = df_dev.loc[:, ["height"]]
y_train = df_dev.loc[:, ["weight"]]
df_test = pd.DataFrame(data={"height": [140, 160, 175], "weight": [None, None, None]})
X_test = df_test.loc[:, ["height"]]
y_test = df_test.loc[:, ["weight"]]
def model(X: pd.DataFrame, y: pd.DataFrame, observed=True):
alpha = numpyro.sample("alpha", dist.Normal(60, 10))
beta = numpyro.sample("beta", dist.LogNormal(0, 1))
sigma = numpyro.sample("sigma", dist.Uniform(0, 10))
# INFO: subtract X_train explicitly to secure non-leaky inference
mu = numpyro.deterministic("mu", alpha + beta * (X["height"] - X_train["height"].mean()).values)
numpyro.sample("W", dist.Normal(mu, sigma), obs=y["weight"].values if observed else None)
# quadratic approximation part
guide = AutoNormal(model)
svi = SVI(model, guide, optim.Adam(1), Trace_ELBO(), X=X_train, y=y_train)
svi_result = svi.run(random.PRNGKey(0), 1000)
params = svi_result.params
# ---
# get posterior predictive samples, approach 1: from samples
num_samples = 1000
samples_posterior = guide.sample_posterior(random.PRNGKey(1), params=params, sample_shape=(num_samples,))
dist_posterior_predictive = Predictive(model=model, guide=guide, params=params, posterior_samples=samples_posterior)
samples_posterior_predictive = dist_posterior_predictive(random.PRNGKey(1), X=X_test, y=None, observed=False)
samples_posterior_predictive
```
|
closed
|
2022-02-05T10:25:37Z
|
2022-02-05T17:31:59Z
|
https://github.com/pyro-ppl/numpyro/issues/1324
|
[
"warnings & errors"
] |
ColdTeapot273K
| 1
|
proplot-dev/proplot
|
data-visualization
| 144
|
scatter colorbar ignores boundaries when outside of plot
|
### Description
When using the `plt.scatter` wrapper, if some dimension of the data is mapped to color and the colorbar is placed externally, it is forced into a discrete colorbar and the boundaries cannot be modified. The colorbar behaves differently if placed *inside* the plot.
1. It would be great for this to generally be fixed.
2. Would it be feasible to add `levels` to the scatter wrapper?
### Steps to reproduce
```python
import numpy as np
import proplot as plot
state = np.random.RandomState(51423)
data = state.rand(2, 100)
f, ax = plot.subplots()
ax.scatter(*data, marker='o', color=data.sum(axis=0), cmap='Marine',
colorbar='l', vmin=0, vmax=2,
colorbar_kw={'boundaries': plot.arange(0, 2, 0.01),
'locator': 0.5})
f, ax = plot.subplots()
ax.scatter(*data, marker='o', color=data.sum(axis=0), cmap='Marine',
colorbar='lr', vmin=0, vmax=2,
colorbar_kw={'boundaries': plot.arange(0, 2, 0.01),
'locator': 0.5})
```
In the first bit, I am using `boundaries` as a hack to get at `levels`. I want to force this to much smaller colorbar steps, just to prove that it's working. It seems like `locator` works, but not `boundaries` for instance.

In the second bit, when `colorbar='lr'`, it works fine. I.e., when colorbars are inside the plot, the `boundaries` keyword works. When placed outside (e.g. `l`, `r`, `b`), the colorbar is funky.

### Equivalent steps in matplotlib
The default exterior colorbar in `matplotlib` looks a lot better (it's continuous without mislabeling of the discrete boundaries), and the `boundaries` keyword works here.
```python
import matplotlib.pyplot as plt
state = np.random.RandomState(51423)
data = state.rand(2, 100)
f, ax = plt.subplots()
p = ax.scatter(*data, marker='o', c=data.sum(axis=0))
f.colorbar(p, boundaries=plot.arange(0, 2, 0.1))
```

### Proplot version
0.5.0.post46
|
closed
|
2020-04-22T20:56:00Z
|
2020-05-12T06:49:00Z
|
https://github.com/proplot-dev/proplot/issues/144
|
[
"bug",
"feature"
] |
bradyrx
| 3
|
open-mmlab/mmdetection
|
pytorch
| 12,053
|
Precision fluctuation of GroundingDINO fine-tuned with industry data vs YOLOv8 in mAP0.5-0.95 (is the fine-tuned accuracy worse than YOLOv8?)
|
I fine-tuned GroundingDINO with my own industry data. In some scenarios the mAP0.5-0.95 is better than YOLOv8's, while in others it is more than 5 percentage points lower. Has anyone encountered such a problem?
|
open
|
2024-11-28T07:42:23Z
|
2024-11-28T07:42:39Z
|
https://github.com/open-mmlab/mmdetection/issues/12053
|
[] |
BruceWang1996
| 0
|
nolar/kopf
|
asyncio
| 299
|
Creating a DAG operator
|
> <a href="https://github.com/seperman"><img align="left" height="50" src="https://avatars2.githubusercontent.com/u/2314797?v=4"></a> An issue by [seperman](https://github.com/seperman) at _2020-01-24 22:33:47+00:00_
> Original URL: https://github.com/zalando-incubator/kopf/issues/299
>
## Question
How would you go about creating a dynamic DAG (Directed acyclic graph) operator using Kopf?
## Checklist
For example [Argo](https://github.com/argoproj/argo) uses operators to create DAGs in Kubernetes.
What I'm thinking about is:
- Having an in-memory mutable version of the DAG in the operator. Every time the DAG gets modified or a node in the DAG is traversed (a pod finished execution), the state of the DAG gets dumped to Kubernetes etcd via a ConfigMap (see the sketch below). This is done for persistence in case the operator restarts.
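A rough sketch of that idea, assuming a hypothetical `dags.example.com` custom resource (every name here is illustrative):
```python
import json
import kopf
import kubernetes

@kopf.on.create('example.com', 'v1', 'dags')
def persist_dag(spec, name, namespace, **_):
    # In-memory DAG state, dumped to etcd via a ConfigMap for crash recovery.
    dag_state = {'nodes': spec.get('nodes', []), 'done': []}
    body = kubernetes.client.V1ConfigMap(
        metadata=kubernetes.client.V1ObjectMeta(name=f'{name}-state'),
        data={'state': json.dumps(dag_state)},
    )
    kubernetes.client.CoreV1Api().create_namespaced_config_map(
        namespace=namespace, body=body,
    )
```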
Any help is appreciated.
Thanks.
## Keywords
Priority Queue, DAG
---
> <a href="https://github.com/nolar"><img align="left" height="30" src="https://avatars0.githubusercontent.com/u/544296?v=4"></a> Commented by [nolar](https://github.com/nolar) at _2020-03-06 11:13:27+00:00_
>
Sorry for the late response. I cannot help here much, as I am not familiar with your domain. But your way of solving it seems sufficient and normal — should work. Are there any specific problems why this does not work?
---
> <a href="https://github.com/seperman"><img align="left" height="30" src="https://avatars2.githubusercontent.com/u/2314797?v=4"></a> Commented by [seperman](https://github.com/seperman) at _2020-03-06 14:14:10+00:00_
>
Hi,
Thanks for responding. I decided to use an RDBMS instead of dumping the state to k8s.
Sep
|
closed
|
2020-08-18T20:03:05Z
|
2020-08-23T20:54:55Z
|
https://github.com/nolar/kopf/issues/299
|
[
"question",
"archive"
] |
kopf-archiver[bot]
| 0
|
facebookresearch/fairseq
|
pytorch
| 5,576
|
MMS TTS tutorial
|
I am not able to run the 2nd cell of the tutorial notebook linked below, and get the error:
`CalledProcessError: Command 'wget https://dl.fbaipublicfiles.com/mms/tts/eng.tar.gz -O ./eng.tar.gz;tar zxvf ./eng.tar.gz' returned non-zero exit status 1.`
https://github.com/facebookresearch/fairseq/blob/main/examples/mms/tts/tutorial/MMS_TTS_Inference_Colab.ipynb
|
open
|
2024-12-28T08:06:46Z
|
2024-12-28T08:06:46Z
|
https://github.com/facebookresearch/fairseq/issues/5576
|
[
"bug",
"needs triage"
] |
ARTHARKING55
| 0
|
biolab/orange3
|
data-visualization
| 6,236
|
CI: LayoutRequest event on half-constructed/deconstructed widget
|
Tests on CI with Ubuntu + PyQt6 randomly failed with the following error, apparently despite https://github.com/biolab/orange3/pull/6136/files#diff-929e35c379b1494f7cb484aaa393313db85bdb58d1ac89d505920722f3800f48. @ales-erjavec?
```
Traceback (most recent call last):
File "/home/runner/work/orange3/orange3/.tox/pyqt6/lib/python3.9/site-packages/Orange/widgets/unsupervised/owhierarchicalclustering.py", line 820, in eventFilter
elif event.type() == QEvent.LayoutRequest and obj is self._main_graphics:
AttributeError: 'OWHierarchicalClustering' object has no attribute '_main_graphics'
```
**What's your environment?**
- Operating system: Ubuntu + PyQt6
- Orange version: master,
- How you installed Orange: Tox
|
closed
|
2022-12-03T16:12:54Z
|
2022-12-23T08:51:49Z
|
https://github.com/biolab/orange3/issues/6236
|
[
"bug"
] |
janezd
| 0
|
Evil0ctal/Douyin_TikTok_Download_API
|
api
| 415
|
Problems deploying the service with Singularity
|
Hello, I ran into problems while trying to deploy with a container. Sorry, I'm not very familiar with containers; I'll describe my steps as best I can, and please let me know if you need any additional information.
The HPC system I'm using doesn't seem to support Docker directly, but it does support Singularity (which can convert Docker images into Singularity images). However, I ran into problems using it. Specifically, I pulled the image with the following command:
```
singularity pull douyin_tiktok_download_api.sif docker://evil0ctal/douyin_tiktok_download_api
```
After pulling the image, I tried to start the container instance and enter an interactive shell with the command below. I used the --bind option to replace the cookie file inside the container and to expose the HPC host's commands (since commands like curl don't seem to be directly usable inside the container):
```
singularity shell --bind config.yaml:/app/crawlers/tiktok/web/config.yaml,/usr douyin_tiktok_download_api.sif
```
However, after entering the container, I found no service on port 80; accessing localhost with curl gives `curl: (7) Failed connect to localhost:80; Connection refused`. I also tried changing the port in /app/config.yaml to 8888 and restarting the container, but there was no service on port 8888 either.
I'd like to know where I can view the service's log files and how to debug this, since the terminal shows no output when I start the container. Could this be because I don't have root privileges, or did I make a mistake somewhere? Looking forward to your answer.
|
closed
|
2024-05-28T03:02:03Z
|
2024-05-29T02:19:58Z
|
https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/415
|
[] |
scn0901
| 2
|
tatsu-lab/stanford_alpaca
|
deep-learning
| 46
|
OOM issue
|
Can this finetuning script fit on an A10, which only has 24 GB of GPU memory? I am trying to fine-tune the model on 4 A10 GPUs using a batch size of 1, but I still get an OOM error.
|
closed
|
2023-03-16T06:31:33Z
|
2023-04-07T12:56:33Z
|
https://github.com/tatsu-lab/stanford_alpaca/issues/46
|
[] |
puyuanliu
| 14
|
explosion/spaCy
|
machine-learning
| 13,649
|
Cannot complete a Docker build using v.3.8.2
|
It looks like this problem still exists in the newest release, @honnibal:
Issue #13606
The Docker image build hangs in the same place...
```
#35 42.62 Collecting spacy (from -r requirements.txt (line 15))
#35 42.64 Downloading spacy-3.8.2.tar.gz (1.3 MB)
#35 42.78 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.3/1.3 MB 11.8 MB/s eta 0:00:00
#35 45.06 Installing build dependencies: started
#35 131.6 Installing build dependencies: still running...
#35 218.1 Installing build dependencies: still running...
#35 376.2 Installing build dependencies: still running...
#35 501.0 Installing build dependencies: still running...
#35 715.7 Installing build dependencies: still running...
#35 1017.0 Installing build dependencies: still running...
#35 1082.7 Installing build dependencies: still running...
#35 1306.2 Installing build dependencies: still running...
```
It looks like the specific test run for Python 3.12.5 on the latest Ubuntu was canceled...here:
https://github.com/explosion/spaCy/actions/runs/11127868506/job/30921252535
## How to reproduce the behaviour
Make sure your project pulls the latest spaCy (3.8.2 as of now), then attmept to build a Docker image. In our case, this happens withing a GitHub workflow, and we build them on latest Ubuntu.
## Your Environment
<!-- Include details of your environment. You can also type `python -m spacy info --markdown` and copy-paste the result here.-->
* Operating System: Ubuntu, latest
* Python Version Used: 3.12.5
* spaCy Version Used: 3.8.2
* Environment Information: Issue seen in a GitHub workflow run
Our workaround will be to revert to a hard pin of spaCy 3.7.5 for now, but we'd certainly like to get back to the latest releases once this issue is sorted. Thanks!
|
open
|
2024-10-03T17:51:15Z
|
2025-01-31T14:28:58Z
|
https://github.com/explosion/spaCy/issues/13649
|
[] |
erikspears
| 5
|
ijl/orjson
|
numpy
| 516
|
orjson.dump() and orjson.load() should be supported
|
Both the official json library and UltraJSON support these two functions, and I think orjson should also support them. I found a related issue #329, but it seems these two methods are still not supported at present. Can they be supported directly, or is there any workaround?
e.g.
```
import json
# load from json file
with open('config/config.json', 'r') as f:
data_dict = json.load(f)
# dump to file
data = {"1": 11, "2": 22, "3": 33, "4": 44}
with open('./test.json', 'w') as f:
json.dump(data, f)
```
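For what it's worth, a minimal workaround mirroring the snippet above is to do the file I/O yourself and use `orjson.loads()`/`orjson.dumps()` (note that `orjson.dumps` returns bytes, hence the binary modes):
```python
import orjson

# load from json file
with open('config/config.json', 'rb') as f:
    data_dict = orjson.loads(f.read())

# dump to file
data = {"1": 11, "2": 22, "3": 33, "4": 44}
with open('./test.json', 'wb') as f:
    f.write(orjson.dumps(data))
```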
|
closed
|
2024-08-26T05:55:54Z
|
2024-09-04T08:02:05Z
|
https://github.com/ijl/orjson/issues/516
|
[
"Stale"
] |
fighterhit
| 0
|
vi3k6i5/flashtext
|
nlp
| 42
|
Is there a C# version?
|
Is there a C# version?
|
closed
|
2017-12-16T08:00:52Z
|
2017-12-18T06:27:56Z
|
https://github.com/vi3k6i5/flashtext/issues/42
|
[
"question"
] |
billjonewbz
| 1
|
cuemacro/chartpy
|
plotly
| 5
|
Fix multiple PEP-8 violations
|
open
|
2017-10-30T00:12:59Z
|
2017-10-30T14:54:27Z
|
https://github.com/cuemacro/chartpy/issues/5
|
[] |
rs2
| 1
|
|
lukasmasuch/streamlit-pydantic
|
pydantic
| 28
|
`Streamlit Settings` playground app is currently broken
|
<!--
Thanks for reporting a bug 🙌 ❤️
Before opening a new issue, please make sure that we do not have any duplicates already open. You can ensure this by searching the issue list for this repository. If there is a duplicate, please close your issue and add a comment to the existing issue instead. Also, be sure to check our documentation first.
-->
**Describe the bug:**
<!-- Describe your issue, but please be descriptive! Thanks again 🙌 ❤️ -->
The `Streamlit Settings` playground is currently failing:

Note: A similar error also occurs when running the playground app locally.
**Expected behaviour:**
<!-- A clear and concise description of what you expected to happen. -->
Not sure!
**Steps to reproduce the issue:**
<!-- include screenshots, logs, code or other info to help explain your problem.
-->
1. Go to the [deployed playground](https://lukasmasuch-streamlit-pydantic-playgroundplayground-app-711bhu.streamlit.app/)
2. Click on the 'Streamlit Settings' demo
3. See error
**Technical details:**
- Host Machine OS (Windows):
- Browser (Firefox):
**Possible Fix:**
<!--- Not obligatory, but suggest a fix or reason for the bug -->
Recent changes to `streamlit-pydantic` or `streamlit` or `pydantic` may have broken this functionality.
**Additional context:**
<!-- Add any other context about the problem here. -->
|
open
|
2023-04-17T11:52:03Z
|
2023-04-17T11:52:03Z
|
https://github.com/lukasmasuch/streamlit-pydantic/issues/28
|
[
"type:bug"
] |
HIL340
| 0
|
modelscope/modelscope
|
nlp
| 498
|
ModuleNotFoundError
|
[i solved, thanks]
Firstly, thanks for this repo.
When I ran this code:
```python
from model_scope.modelscope.pipelines.builder import pipeline
from model_scope.modelscope.utils.constant import Tasks

itn_inference_pipline = pipeline(
    task=Tasks.inverse_text_processing,
    model='damo/speech_inverse_text_processing_fun-text-processing-itn-en')
itn_result = itn_inference_pipline(text_in='on december second, we paid one hundred and twenty three dollars for christmas tree.')
print(itn_result)
```
I got `ModuleNotFoundError: No module named 'model_scope'`.
modelscope is in the model_scope folder: [this](https://github.com/modelscope/modelscope/blob/c08b161ae126dd668445fb63deb7d494fd991ce9/modelscope/pipelines/builder.py#L63)
I don't know how I can fix that.
|
closed
|
2023-08-24T07:46:28Z
|
2023-09-18T11:01:40Z
|
https://github.com/modelscope/modelscope/issues/498
|
[] |
cestcode
| 0
|
bmoscon/cryptofeed
|
asyncio
| 801
|
Is there a way to avoid installing GCC to install?
|
I'm trying to install it on an EC2 instance running Amazon Linux 2, and it is trying to compile with GCC. Is there a way to avoid that?
[scratch_12.txt](https://github.com/bmoscon/cryptofeed/files/8199539/scratch_12.txt)
pip 22.0.4
python 3.7.10
|
closed
|
2022-03-07T17:14:24Z
|
2022-03-07T18:13:42Z
|
https://github.com/bmoscon/cryptofeed/issues/801
|
[
"question"
] |
ideoma
| 2
|
davidsandberg/facenet
|
tensorflow
| 827
|
Problems with --pretrained_model
|
When I run:
```
python train_softmax.py --data_dir /home/han2/facenet/face/CASIA-WebFace_align --image_size 160 --pretrained_model /home/han2/facenet/src/pretrained_models/20180408-102900/model-20180408-102900.ckpt-90
```
I get the error:
```
Assign requires shapes of both tensors to match. lhs shape= [1792,128] rhs shape= [1792,512]
[[Node: save/Assign_21 = Assign[T=DT_FLOAT, _class=["loc:@InceptionResnetV1/Bottleneck/weights"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](InceptionResnetV1/Bottleneck/weights, save/RestoreV2/_841)]]
[[Node: save/RestoreV2/_1554 = _Send[T=DT_FLOAT, client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device_incarnation=1, tensor_name="edge_730_save/RestoreV2", _device="/job:localhost/replica:0/task:0/device:CPU:0"](save/RestoreV2:194)]]
```
I didn't change the code, but it can't run with the provided pretrained model.
|
closed
|
2018-07-29T09:02:06Z
|
2018-11-21T05:32:05Z
|
https://github.com/davidsandberg/facenet/issues/827
|
[] |
Victoria2333
| 1
|
supabase/supabase-py
|
fastapi
| 682
|
Error importing `ClientOptions` from supabase
|
**Describe the bug**
I get this error: `ImportError: cannot import name 'ClientOptions' from 'supabase'`
**To Reproduce**
This is my code:
```Python
from supabase import create_client, Client, ClientOptions
from .config import Settings
settings = Settings()
def get_supabase(jwt: str = None) -> Client:
if jwt:
return create_client(
settings.SUPABASE_URL,
settings.SUPABASE_KEY,
options=ClientOptions(
headers={"Authorization": f"Bearer {jwt}"}
)
)
else:
return create_client(settings.SUPABASE_URL, settings.SUPABASE_KEY)
```
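For anyone hitting this before it's fixed: a possible workaround (an assumption about the 2.3.x package layout, where the class lives in a submodule rather than being re-exported at the top level) is:
```python
from supabase.lib.client_options import ClientOptions
```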
**Expected behavior**
This is using the exact code described [in the docs](https://supabase.com/docs/reference/python/initializing). You can see a screenshot of it below. This import should work as documented.
**Screenshots**
<img width="554" alt="Screenshot 2024-02-04 at 15 09 09" src="https://github.com/supabase-community/supabase-py/assets/1239724/910af9f1-b0cc-4ab1-bd74-e015ebbf49ab">
**Desktop (please complete the following information):**
- OS: macOS Sonoma 14.2.1
- Browser: Safari
- Version:
- supabase==2.3.4
- Python 3.11.7
- Using virtualenv
**Additional context**
I am trying to set the header to the user's JWT auth token so I can implement RLS policies, as this will prevent me from writing a bunch of backend code to authorize access.
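A possible workaround until the re-export lands: in supabase-py 2.3.x the class appears to live in a submodule rather than at the package root (the import path below is an assumption for that version line):
```python
from supabase import create_client, Client
from supabase.lib.client_options import ClientOptions  # assumed 2.3.x location
```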
|
closed
|
2024-02-04T14:13:08Z
|
2024-02-15T10:53:01Z
|
https://github.com/supabase/supabase-py/issues/682
|
[
"bug"
] |
dantheman0207
| 5
|
sqlalchemy/sqlalchemy
|
sqlalchemy
| 10,114
|
Query.from_statement fails due to column_property
|
### Describe the bug
We are seeing issues when using column properties together with `Query.from_statement`, as it appears to fail to resolve the name/label for the property.
It looks like a similar issue was addressed for SQLAlchemy 2 (https://github.com/sqlalchemy/sqlalchemy/issues/9273); any chance of this fix making it into 1.4?
cheers
### Optional link from https://docs.sqlalchemy.org which documents the behavior that is expected
_No response_
### SQLAlchemy Version in Use
1.4.49
### DBAPI (i.e. the database driver)
pysqlite
### Database Vendor and Major Version
SQLite, Mariadb
### Python Version
3.11
### Operating system
Linux
### To Reproduce
```python
import sqlalchemy
import sqlalchemy.orm
Base = sqlalchemy.orm.declarative_base()
class Entity(Base):
__tablename__ = 'entity'
id = sqlalchemy.Column(
sqlalchemy.Integer, primary_key=True
)
c_property = sqlalchemy.orm.column_property(
sqlalchemy.select(
1
).as_scalar()
)
engine = sqlalchemy.create_engine(
'sqlite:///:memory:', echo=True
)
Base.metadata.create_all(engine)
Session = sqlalchemy.orm.sessionmaker(
bind=engine
)
Session().query(Entity).from_statement(sqlalchemy.select(Entity.id)).all()
```
### Error
```
Traceback (most recent call last):
File "../test.py", line 32, in <module>
Session().query(Entity).from_statement(sqlalchemy.select(Entity.id)).all()
File "../.venv/lib64/python3.11/site-packages/sqlalchemy/orm/query.py", line 2773, in all
return self._iter().all()
^^^^^^^^^^^^
File "../.venv/lib64/python3.11/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter
result = self.session.execute(
^^^^^^^^^^^^^^^^^^^^^
File "../.venv/lib64/python3.11/site-packages/sqlalchemy/orm/session.py", line 1720, in execute
result = compile_state_cls.orm_setup_cursor_result(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "../.venv/lib64/python3.11/site-packages/sqlalchemy/orm/context.py", line 349, in orm_setup_cursor_result
return loading.instances(result, querycontext)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "../.venv/lib64/python3.11/site-packages/sqlalchemy/orm/loading.py", line 88, in instances
with util.safe_reraise():
File "../.venv/lib64/python3.11/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
compat.raise_(
File "../.venv/lib64/python3.11/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
raise exception
File "../.venv/lib64/python3.11/site-packages/sqlalchemy/orm/loading.py", line 69, in instances
*[
^
File "../.venv/lib64/python3.11/site-packages/sqlalchemy/orm/loading.py", line 70, in <listcomp>
query_entity.row_processor(context, cursor)
File "../.venv/lib64/python3.11/site-packages/sqlalchemy/orm/context.py", line 2631, in row_processor
_instance = loading._instance_processor(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "../.venv/lib64/python3.11/site-packages/sqlalchemy/orm/loading.py", line 796, in _instance_processor
prop.create_row_processor(
File "../.venv/lib64/python3.11/site-packages/sqlalchemy/orm/interfaces.py", line 658, in create_row_processor
strat.create_row_processor(
File "../.venv/lib64/python3.11/site-packages/sqlalchemy/orm/strategies.py", line 255, in create_row_processor
col = adapter.columns[col]
~~~~~~~~~~~~~~~^^^^^
File "../.venv/lib64/python3.11/site-packages/sqlalchemy/util/_collections.py", line 762, in __missing__
self[key] = val = self.creator(self.weakself(), key)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "../.venv/lib64/python3.11/site-packages/sqlalchemy/sql/util.py", line 1036, in _locate_col
c = ClauseAdapter.traverse(self, col)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "../.venv/lib64/python3.11/site-packages/sqlalchemy/sql/visitors.py", line 619, in traverse
return replacement_traverse(obj, self.__traverse_options__, replace)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "../.venv/lib64/python3.11/site-packages/sqlalchemy/sql/visitors.py", line 848, in replacement_traverse
obj = clone(
^^^^^^
File "../.venv/lib64/python3.11/site-packages/sqlalchemy/sql/visitors.py", line 844, in clone
newelem._copy_internals(clone=clone, **kw)
File "../.venv/lib64/python3.11/site-packages/sqlalchemy/sql/elements.py", line 4634, in _copy_internals
self._element = clone(self._element, **kw)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "../.venv/lib64/python3.11/site-packages/sqlalchemy/sql/visitors.py", line 827, in clone
newelem = replace(elem)
^^^^^^^^^^^^^
File "../.venv/lib64/python3.11/site-packages/sqlalchemy/sql/visitors.py", line 615, in replace
e = v.replace(elem)
^^^^^^^^^^^^^^^
File "../.venv/lib64/python3.11/site-packages/sqlalchemy/sql/util.py", line 914, in replace
return self._corresponding_column(col, True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "../.venv/lib64/python3.11/site-packages/sqlalchemy/sql/util.py", line 852, in _corresponding_column
newcol = self.selectable.exported_columns.get(col.name)
^^^^^^^^
File "../.venv/lib64/python3.11/site-packages/sqlalchemy/sql/elements.py", line 4066, in __getattr__
return getattr(self.element, attr)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'Select' object has no attribute 'name'
```
### Additional context
This appears to work in sqlalchemy 2.0.19.
|
closed
|
2023-07-18T10:39:36Z
|
2023-07-18T12:57:39Z
|
https://github.com/sqlalchemy/sqlalchemy/issues/10114
|
[
"bug",
"duplicate",
"orm",
"great mcve"
] |
torsdag
| 2
|
sqlalchemy/alembic
|
sqlalchemy
| 663
|
Unclear error messages from alembic when base cannot find 'migrations' directory.
|
I have a pull request ready to fix this.
It's a very simple issue: when I run some Flask commands from the wrong directory, calling upgrade() in flask_migrate tries to use alembic to locate the 'migrations' directory. Unfortunately, the error message when it cannot find 'migrations' is pretty unclear if it ends up printing a relative path (in my case the path was simply '.'); you'll see an error message like:
```alembic.util.exc.CommandError: Path doesn't exist: 'migrations'. Please use the 'init' command to create a new scripts folder.```
|
closed
|
2020-02-24T06:21:16Z
|
2022-05-11T11:49:49Z
|
https://github.com/sqlalchemy/alembic/issues/663
|
[
"bug",
"migration environment"
] |
novafacing
| 9
|
vaexio/vaex
|
data-science
| 1,897
|
How to Filter on String on a virtual column
|
Hello.
I applied on my dataframe the next command
```
df['seccion'] = df.pagePath.str.split('/').apply(lambda x: x[1])
```
And this created the column 'seccion'
|pagePath | seccion |
|-----------------------------------|----------------------|
|'/empresas/2021/10/22/tiendas-no-participan-buen'| empresas |
|'/finanzas-personales/2021/10/22/pueden-cobrar-c| finanzas-personales |
|'/finanzas-personales/2021/10/01/autos-mas-vendidos| finanzas-personales |
Now I want to keep only the rows whose `seccion` contains some value, for example 'finanzas-personales'.
I know that in vaex the column "seccion" is a virtual column. So I tried something like:
```
df['seccion'] = df.pagePath.str.split('/').apply(lambda x: x[1])
df2 = df.extract()
df2 = df2[df2['seccion']=='empresas']
```
But this doesn't work.
I also tried:
```
df2.select(df2.seccion == 'empresas')
df2.evaluate(df2.seccion, selection=True)
```
This also doesn't work.
Expected output once the dataframe is filtered:
|pagePath | seccion |
|-----------------------------------|----------------------|
|'/finanzas-personales/2021/10/22/pueden-cobrar-c| finanzas-personales |
|'/finanzas-personales/2021/10/01/autos-mas-vendidos| finanzas-personales |
Any suggestions you have will be greatly appreciated
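One hedged workaround is to sidestep the apply-based virtual column and filter on the original string column directly; `str.contains` is part of vaex's string expression API, so this avoids the lambda entirely:
```python
# Filter on the raw pagePath instead of the apply-derived column.
df2 = df[df.pagePath.str.contains('/finanzas-personales/')]
print(df2[['pagePath']])
```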
|
closed
|
2022-02-09T18:17:06Z
|
2022-02-10T10:21:20Z
|
https://github.com/vaexio/vaex/issues/1897
|
[] |
jgzga
| 4
|
FlareSolverr/FlareSolverr
|
api
| 1,106
|
[exttorrents] (testing) Exception (exttorrents): FlareSolverr was unable to process the request, please check FlareSolverr logs. Message: Error: Error solving the challenge. Timeout after 55.0 seconds.: FlareSolverr was unable to process the request, please check FlareSolverr logs. Message: Error: Error solving the challenge. Timeout after 55.0 seconds.
|
### Have you checked our README?
- [X] I have checked the README
### Have you followed our Troubleshooting?
- [X] I have followed your Troubleshooting
### Is there already an issue for your problem?
- [X] I have checked older issues, open and closed
### Have you checked the discussions?
- [X] I have read the Discussions
### Environment
```markdown
- FlareSolverr version: 3.3.13
- Operating system: Windows
- Are you using Docker: yes
- FlareSolverr User-Agent (see log traces or / endpoint): Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36
- Are you using a VPN: no
- Are you using a Proxy: no
- Are you using Captcha Solver: yes
- If using captcha solver, which one: FlareSolverr
```
### Description
Everytime I test jackett indexers like 1337x and EXTtorrents I have the error:
Error solving the challenge
### Logged Error Messages
```text
Exception (exttorrents): FlareSolverr was unable to process the request, please check FlareSolverr logs. Message: Error: Error solving the challenge. Timeout after 55.0 seconds.: FlareSolverr was unable to process the request, please check FlareSolverr logs. Message: Error: Error solving the challenge. Timeout after 55.0 seconds.
```
### Screenshots
_No response_
|
closed
|
2024-03-01T12:20:04Z
|
2024-03-01T16:00:58Z
|
https://github.com/FlareSolverr/FlareSolverr/issues/1106
|
[] |
ELAD1010
| 1
|
wkentaro/labelme
|
computer-vision
| 911
|
Adding new button with slot
|
Thank you for providing an open-source tool; it is a great help to researchers. I just want to add a new button for my Python function, so it would be very nice if you could briefly explain the overall flow of the labelme code.
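Since labelme is a Qt application, the generic pattern is to create a QAction and connect its `triggered` signal to your function. A minimal sketch follows; `window`, `my_function`, and the toolbar placement are placeholders, and the exact attribute names depend on labelme's MainWindow:
```python
from qtpy import QtWidgets

def add_custom_button(window, my_function):
    # window: assumed to be labelme's MainWindow (a QMainWindow);
    # my_function: any Python callable to run when the button is clicked.
    action = QtWidgets.QAction("My Tool", window)
    action.triggered.connect(my_function)          # signal -> slot hookup
    window.addToolBar("custom").addAction(action)  # generic QMainWindow API
```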
|
closed
|
2021-09-02T02:36:21Z
|
2022-06-25T04:37:55Z
|
https://github.com/wkentaro/labelme/issues/911
|
[] |
Ehteshamciitwah
| 0
|
recommenders-team/recommenders
|
machine-learning
| 2,173
|
[BUG] Issue with installing the recommenders package while building cornac on macOS ARM64 (M2)
|
### Description
I am encountering an issue while trying to install the recommenders package on macOS ARM64 (Apple Silicon). The error occurs during the cornac building process, specifically while compiling the C++ source files. I've followed the instructions in the GitHub repository and tried installing via both pip and conda, but the issue persists.
### In which platform does it happen?
I am using VSCode as my development environment and have tried multiple approaches, including different compilers and package managers.
### My Setup:
macOS Version: macOS 14.3 (Sonoma)
Python Version: 3.10.12 (installed via pyenv and conda) and tried with 3.9 as well
Compiler:
Tried with clang++ and g++
Also tried Homebrew's LLVM and gcc
Package Managers:
Tried both pip and conda based on the instructions in the GitHub repository.
### Error Details

Additionally, if I try to install through a requirements file, it fails while collecting cornac, saying the dependencies are missing even though they are listed in the requirements file.
### Expected behavior (i.e. solution)
recommenders package installed without any issues
|
closed
|
2024-09-27T11:33:11Z
|
2024-12-27T06:28:39Z
|
https://github.com/recommenders-team/recommenders/issues/2173
|
[
"bug"
] |
Harikapl
| 3
|
airtai/faststream
|
asyncio
| 1,263
|
Bug: `AttributeError: 'list' object has no attribute 'items'` with "Discriminated Unions"
|
**Describe the bug**
When using an `Annotated` union with a pydantic `Field(discriminator=...)`, AsyncAPI docs fail to generate due to an `AttributeError: 'list' object has no attribute 'items'`.
https://docs.pydantic.dev/latest/concepts/unions/#discriminated-unions
```
File "/Users/kilo59/.local/share/virtualenvs/fs-unions-sN2W4Mxo/lib/python3.11/site-packages/faststream/asyncapi/generate.py", line 186, in _resolve_msg_payloads
for p_title, p in m.payload.get("oneOf", {}).items():
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'list' object has no attribute 'items'
```
**How to reproduce**
```python
# repro.py
from typing import Annotated, Literal
from pydantic import BaseModel
from pydantic.fields import Field
from faststream import FastStream
from faststream.rabbit import RabbitBroker
broker = RabbitBroker("amqp://guest:guest@localhost:5672/")
app = FastStream(broker, version="0.1.0")
class Hotdog(BaseModel):
type: Literal["hotdog"]
condiments: list[str]
class Sub(BaseModel):
type: Literal["sub"]
condiments: list[str]
bread: Literal["white", "wheat"]
MyTypeUnion = Annotated[ Hotdog | Sub, Field(discriminator="type")]
@broker.subscriber("test_queue")
async def handle_event(msg_body: MyTypeUnion) -> None:
print(msg_body)
# Error is thrown when generating docs
# faststream docs gen repro:app
```
**Expected behavior**
I would expect to see docs generated with an appropriate `anyOf` or `oneOf` union.
Note this works with other union types.
```python
# Removing the discriminator from `Field` causes things to work again.
...
MyTypeUnion = Annotated[ Hotdog | Sub, Field(union="smart")]
@broker.subscriber("test_queue")
async def handle_event(msg_body: MyTypeUnion) -> None:
print(msg_body)
```
**Screenshots**
~If applicable, attach screenshots to help illustrate the problem.~
**Environment**
```
Running FastStream 0.4.4 with CPython 3.11.7 on Darwin
```
**Additional context**
I suspect nesting this field in a higher-level model would also allow me to use the discriminated union but it should be possible without changing the structure of my payload. Especially considering it works for other union types.
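For what it's worth, the nesting workaround hinted at above does change the payload shape but may unblock doc generation. A sketch building on the repro's models (the wrapper name `Meal` and the queue name are hypothetical):
```python
class Meal(BaseModel):
    item: Annotated[Hotdog | Sub, Field(discriminator="type")]

@broker.subscriber("test_queue_wrapped")
async def handle_wrapped(msg_body: Meal) -> None:
    print(msg_body.item)
```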
|
closed
|
2024-02-25T19:44:43Z
|
2024-03-05T16:44:13Z
|
https://github.com/airtai/faststream/issues/1263
|
[
"bug"
] |
Kilo59
| 2
|
predict-idlab/plotly-resampler
|
plotly
| 246
|
Saving plotly-resampler object for faster viewing
|
I am plotting a large time-series array and it takes considerable time to plot. Currently, I am loading the data and passing it to the resampler, which resamples and plots. My understanding is that the resampling code is taking the time.
Is it possible to save the plotly-resampler object (the first time) so that I can load the object and view the plot faster in future sessions?
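One hedged option, assuming your plotly-resampler version supports pickling `FigureResampler` objects (the project's documentation mentions pickle support): serialize the built figure once and reload it later. Note that dynamic re-aggregation on zoom still runs; only the construction cost is saved.
```python
import pickle
import plotly.graph_objects as go
from plotly_resampler import FigureResampler

# Build and resample once; x and y stand in for the large series.
fig = FigureResampler(go.Figure())
fig.add_trace(go.Scattergl(name="signal"), hf_x=x, hf_y=y)

with open("resampled_fig.pkl", "wb") as f:
    pickle.dump(fig, f)

# Later session: reload instead of rebuilding.
with open("resampled_fig.pkl", "rb") as f:
    fig = pickle.load(f)
fig.show_dash(mode="inline")
```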
|
closed
|
2023-07-21T12:16:51Z
|
2023-10-25T22:55:16Z
|
https://github.com/predict-idlab/plotly-resampler/issues/246
|
[] |
vinay-hebb
| 2
|
automagica/automagica
|
automation
| 113
|
Bot Key error
|
File "C:\Users\Dell\Desktop\testing_for_automagica\venv\lib\site-packages\automagica\utilities.py", line 17, in wrapper
return func(*args, **kwargs)
File "C:\Users\Dell\Desktop\testing_for_automagica\venv\lib\site-packages\automagica\activities.py", line 8490, in find_text_on_screen_ocr
api_key = str(local_data['bot_secret']) # Your API key
KeyError: 'bot_secret'
(venv) PS C:\Users\Dell\Desktop\testing_for_automagica> automagica -f .\ocr_testing.py
Traceback (most recent call last):
File "c:\users\dell\appdata\local\programs\python\python37\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "c:\users\dell\appdata\local\programs\python\python37\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\Dell\Desktop\testing_for_automagica\venv\Scripts\automagica.exe\__main__.py", line 7, in <module>
File "c:\users\dell\desktop\testing_for_automagica\venv\lib\site-packages\automagica\cli.py", line 652, in main
app = Automagica()
File "c:\users\dell\desktop\testing_for_automagica\venv\lib\site-packages\automagica\cli.py", line 139, in __init__
exec(script, globals())
File "<string>", line 5, in <module>
File "c:\users\dell\desktop\testing_for_automagica\venv\lib\site-packages\automagica\utilities.py", line 17, in wrapper
return func(*args, **kwargs)
File "c:\users\dell\desktop\testing_for_automagica\venv\lib\site-packages\automagica\activities.py", line 8490, in find_text_on_screen_ocr
api_key = str(local_data['bot_secret']) # Your API key
KeyError: 'bot_secret'
@audieleon @ygxiao @tvturnhout @0xflotus @rtroncosogar
Can you guys help solve this problem? Every time I work on OCR it gives me the above error.
|
closed
|
2020-03-19T10:03:22Z
|
2020-05-05T08:43:20Z
|
https://github.com/automagica/automagica/issues/113
|
[] |
sultan-sheriff
| 4
|
serengil/deepface
|
deep-learning
| 646
|
AttributeError: module 'deepface.commons.functions' has no attribute 'preprocess_face'
|
I was trying to follow the tutorial: https://sefiks.com/2021/04/03/deep-face-recognition-with-neo4j/
And I got the attribute error for preprocess_face. Please kindly advise how I can proceed; many thanks in advance!
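For anyone hitting this later: the helper was renamed in newer deepface releases, so the tutorial's call has a rough equivalent (a sketch assuming deepface >= 0.0.78, where `extract_faces` replaced `preprocess_face`); alternatively, pinning an older deepface that still ships `preprocess_face` keeps the tutorial code working as written.
```python
from deepface.commons import functions

# Newer releases renamed preprocess_face; extract_faces returns a list of
# detected/aligned faces instead of a single preprocessed array.
face_objs = functions.extract_faces(img="img.jpg", target_size=(224, 224))
```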
|
closed
|
2023-01-29T09:18:40Z
|
2023-01-29T11:28:28Z
|
https://github.com/serengil/deepface/issues/646
|
[
"question"
] |
iammaylee123
| 1
|
adbar/trafilatura
|
web-scraping
| 657
|
MemoryError in table conversion
|
`python`: `3.12.0`
`trafilatura`: `1.11.0`
MemoryError happens here https://github.com/adbar/trafilatura/blob/e9921b3724b5fd6219c683b016f89a9b6a79c99c/trafilatura/xml.py#L325
```
# in this case
max_span = 9007199254740991
cell_count = 3
```
```
url = 'https://docs.stripe.com/declines/codes'
resp = trafilatura.fetch_url(url)
content = trafilatura.extract(resp, favor_precision=True, include_images=True, deduplicate=True)
```
In case you can't access this page: https://github.com/Honesty-of-the-Cavernous-Tissue/trafilatura/blob/master/tests/test.html
|
closed
|
2024-07-25T03:54:35Z
|
2024-07-25T12:47:05Z
|
https://github.com/adbar/trafilatura/issues/657
|
[
"bug"
] |
Honesty-of-the-Cavernous-Tissue
| 2
|
FlareSolverr/FlareSolverr
|
api
| 1,124
|
[speedcd] (testing) Exception (speedcd): FlareSolverr was unable to process the request, please check FlareSolverr logs. Message: Error: Error solving the challenge. Timeout after 55.0 seconds.: FlareSolverr was unable to process the request, please check FlareSolverr logs. Message: Error: Error solving the challenge. Timeout after 55.0 seconds.
|
### Have you checked our README?
- [X] I have checked the README
### Have you followed our Troubleshooting?
- [X] I have followed your Troubleshooting
### Is there already an issue for your problem?
- [X] I have checked older issues, open and closed
### Have you checked the discussions?
- [X] I have read the Discussions
### Environment
```markdown
- FlareSolverr version: 3.3.12
- Last working FlareSolverr version: 3.3.12
- Operating system: Ubuntu
- Are you using Docker: [yes/no] yes
- FlareSolverr User-Agent (see log traces or / endpoint): Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36
- Are you using a VPN: [yes/no] no
- Are you using a Proxy: [yes/no] no
- Are you using Captcha Solver: [yes/no] no
- If using captcha solver, which one:
- URL to test this issue: https://speed.cd/browse/
```
### Description
It looks like maybe speed.cd recently added Cloudflare protections? I am unable to configure the cookie and user agent settings in Jackett for this indexer like I am for others. Logging into the site directly, I did see the Cloudflare screen briefly while browsing.
Using the "Test" button in Jackett produces the error for the speed.cd tracker. I tried removing the indexer, restarting the serivce, and re-adding it but still getting the same result.
### Logged Error Messages
```text
2024-03-18 13:36:27 INFO Challenge detected. Title found: Just a moment...
2024-03-18 13:37:20 ERROR Error: Error solving the challenge. Timeout after 55.0 seconds.
2024-03-18 13:37:20 INFO Response in 55.908 s
2024-03-18 13:37:20 INFO XXX.XXX.XXX.XXX POST http://server_url:8191/v1 500 Internal Server Error
2024-03-18 13:37:22 INFO Incoming request => POST /v1 body: {'maxTimeout': 55000, 'cmd': 'request.get', 'url': 'https://speed.cd/browse/52/53/41/55/49/2/norar/q/'}
version_main cannot be converted to an integer
```
### Screenshots
_No response_
|
closed
|
2024-03-18T13:45:30Z
|
2024-03-18T15:09:12Z
|
https://github.com/FlareSolverr/FlareSolverr/issues/1124
|
[
"duplicate"
] |
norbsmaq
| 1
|
huggingface/diffusers
|
deep-learning
| 10,796
|
Docs for HunyuanVideo LoRA?
|
### Describe the bug
As it seems LoRA loading for HunyuanVideo has been implemented, I wonder where I can find the docs on this. Are they missing?
### Reproduction
Search for HunyuanVideo and LoRA
### Logs
```shell
```
### System Info
As it is the online docs...
### Who can help?
@stevhliu @sayakpaul
|
open
|
2025-02-15T04:31:34Z
|
2025-03-17T15:03:10Z
|
https://github.com/huggingface/diffusers/issues/10796
|
[
"bug",
"stale"
] |
tin2tin
| 8
|
mars-project/mars
|
numpy
| 3,350
|
[BUG][Ray] LightGBMError: Machine list file doesn't contain the local machine
|
<!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
**Describe the bug**
A `LightGBMError: Machine list file doesn't contain the local machine` is raised when I run `lightgbm.LGBMClassifier.fit` on a Mars cluster running on Ray.
**To Reproduce**
To help us reproduce this bug, please provide the information below:
1. Your Python version: python 3.7.9
2. The version of Mars you use: 0.10.0
3. Versions of crucial packages, such as numpy, scipy and pandas: numpy 1.21.6, pandas 1.3.5, lightgbm 3.3.2
4. Full stack of the error.
5. Minimized code to reproduce the error.
I launched a Mars cluster on a 4-node Ray cluster: 1 supervisor and 3 workers. The supervisor occupies one node, and the 3 workers are on 3 different nodes.
`Breast_cancer_data.csv` is from https://www.kaggle.com/code/prashant111/lightgbm-classifier-in-python/input
```python
import pandas as pd
import mars.dataframe as md
df = pd.read_csv("./Breast_cancer_data.csv")
mdf = md.DataFrame(data=df, chunk_size=300)
X = mdf[['mean_radius','mean_texture','mean_perimeter','mean_area','mean_smoothness']]
y = mdf['diagnosis']
from mars.learn.contrib import lightgbm as lgb
gbm = lgb.LGBMClassifier(importance_type='gain')
gbm.fit(X, y)
```
The results are as follows:
```
2023-05-25 19:32:31,136 ERROR threading.py:870 -- Got unhandled error when handling message ('run', 0, (<Subtask id=dKPvo4XoC1caISqsBgPdSr1D results=[LGBMTrain(f48c751592621feca43dcee83cb7e6c8_0)]>,), {}) in actor b'oTwzkmb1xDpbruGF6ienLYTb_subtask_processor' at ray://mars_cluster_1685014327/1/3
Traceback (most recent call last):
File "mars/oscar/core.pyx", line 519, in mars.oscar.core._BaseActor.__on_receive__
File "mars/oscar/core.pyx", line 404, in _handle_actor_result
File "mars/oscar/core.pyx", line 447, in mars.oscar.core._BaseActor._run_actor_async_generator
File "mars/oscar/core.pyx", line 448, in mars.oscar.core._BaseActor._run_actor_async_generator
File "mars/oscar/core.pyx", line 453, in mars.oscar.core._BaseActor._run_actor_async_generator
File "/home/admin/ray-pack/tmp/job/9f040080/pyenv/lib/python3.7/site-packages/mars/services/subtask/worker/processor.py", line 641, in run
result = yield self._running_aio_task
File "mars/oscar/core.pyx", line 458, in mars.oscar.core._BaseActor._run_actor_async_generator
File "mars/oscar/core.pyx", line 378, in _handle_actor_result
File "/home/admin/ray-pack/tmp/job/9f040080/pyenv/lib/python3.7/site-packages/mars/services/subtask/worker/processor.py", line 474, in run
await self._execute_graph(chunk_graph)
File "/home/admin/ray-pack/tmp/job/9f040080/pyenv/lib/python3.7/site-packages/mars/services/subtask/worker/processor.py", line 231, in _execute_graph
await to_wait
File "/home/admin/ray-pack/tmp/job/9f040080/pyenv/lib/python3.7/site-packages/mars/lib/aio/_threads.py", line 36, in to_thread
return await loop.run_in_executor(None, func_call)
File "/usr/local/python3/lib/python3.7/concurrent/futures/thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "/home/admin/ray-pack/tmp/job/9f040080/pyenv/lib/python3.7/site-packages/mars/core/mode.py", line 77, in _inner
return func(*args, **kwargs)
File "/home/admin/ray-pack/tmp/job/9f040080/pyenv/lib/python3.7/site-packages/mars/services/subtask/worker/processor.py", line 199, in _execute_operand
raise ExecutionError(ex).with_traceback(ex.__traceback__) from None
File "/home/admin/ray-pack/tmp/job/9f040080/pyenv/lib/python3.7/site-packages/mars/services/subtask/worker/processor.py", line 196, in _execute_operand
return execute(ctx, op)
File "/home/admin/ray-pack/tmp/job/9f040080/pyenv/lib/python3.7/site-packages/mars/core/operand/core.py", line 491, in execute
result = executor(results, op)
File "/home/admin/ray-pack/tmp/job/9f040080/pyenv/lib/python3.7/site-packages/mars/learn/contrib/lightgbm/_train.py", line 390, in execute
**op.kwds,
File "/home/admin/ray-pack/tmp/job/9f040080/pyenv/lib/python3.7/site-packages/lightgbm/sklearn.py", line 972, in fit
callbacks=callbacks, init_model=init_model)
File "/home/admin/ray-pack/tmp/job/9f040080/pyenv/lib/python3.7/site-packages/lightgbm/sklearn.py", line 758, in fit
callbacks=callbacks
File "/home/admin/ray-pack/tmp/job/9f040080/pyenv/lib/python3.7/site-packages/lightgbm/engine.py", line 271, in train
booster = Booster(params=params, train_set=train_set)
File "/home/admin/ray-pack/tmp/job/9f040080/pyenv/lib/python3.7/site-packages/lightgbm/basic.py", line 2602, in __init__
num_machines=params["num_machines"]
File "/home/admin/ray-pack/tmp/job/9f040080/pyenv/lib/python3.7/site-packages/lightgbm/basic.py", line 2745, in set_network
ctypes.c_int(num_machines)))
File "/home/admin/ray-pack/tmp/job/9f040080/pyenv/lib/python3.7/site-packages/lightgbm/basic.py", line 125, in _safe_call
raise LightGBMError(_LIB.LGBM_GetLastError().decode('utf-8'))
mars.core.base.ExecutionError: Machine list file doesn't contain the local machine
2023-05-25 19:32:31,139 ERROR api.py:121 -- Got unhandled error when handling message ('run_subtask', 0, (<Subtask id=dKPvo4XoC1caISqsBgPdSr1D results=[LGBMTrain(f48c751592621feca43dcee83cb7e6c8_0)]>,), {}) in actor b'slot_numa-0_2_subtask_runner' at ray://mars_cluster_1685014327/1/3
Traceback (most recent call last):
File "mars/oscar/core.pyx", line 519, in mars.oscar.core._BaseActor.__on_receive__
File "mars/oscar/core.pyx", line 404, in _handle_actor_result
File "mars/oscar/core.pyx", line 447, in mars.oscar.core._BaseActor._run_actor_async_generator
File "mars/oscar/core.pyx", line 448, in mars.oscar.core._BaseActor._run_actor_async_generator
File "mars/oscar/core.pyx", line 453, in mars.oscar.core._BaseActor._run_actor_async_generator
File "/home/admin/ray-pack/tmp/job/9f040080/pyenv/lib/python3.7/site-packages/mars/services/subtask/worker/runner.py", line 147, in run_subtask
result = yield self._running_processor.run(subtask)
File "mars/oscar/core.pyx", line 458, in mars.oscar.core._BaseActor._run_actor_async_generator
File "mars/oscar/core.pyx", line 378, in _handle_actor_result
File "/home/admin/ray-pack/tmp/job/9f040080/pyenv/lib/python3.7/site-packages/mars/oscar/backends/context.py", line 196, in send
return self._process_result_message(result)
File "/home/admin/ray-pack/tmp/job/9f040080/pyenv/lib/python3.7/site-packages/mars/oscar/backends/context.py", line 76, in _process_result_message
raise message.as_instanceof_cause()
File "/home/admin/ray-pack/tmp/job/9f040080/pyenv/lib/python3.7/site-packages/mars/oscar/backends/pool.py", line 677, in send
result = await self._run_coro(message.message_id, coro)
File "/home/admin/ray-pack/tmp/job/9f040080/pyenv/lib/python3.7/site-packages/mars/oscar/backends/pool.py", line 370, in _run_coro
return await coro
File "/home/admin/ray-pack/tmp/job/9f040080/pyenv/lib/python3.7/site-packages/mars/oscar/api.py", line 121, in __on_receive__
return await super().__on_receive__(message)
File "mars/oscar/core.pyx", line 526, in __on_receive__
File "mars/oscar/core.pyx", line 519, in mars.oscar.core._BaseActor.__on_receive__
File "mars/oscar/core.pyx", line 404, in _handle_actor_result
File "mars/oscar/core.pyx", line 447, in mars.oscar.core._BaseActor._run_actor_async_generator
File "mars/oscar/core.pyx", line 448, in mars.oscar.core._BaseActor._run_actor_async_generator
File "mars/oscar/core.pyx", line 453, in mars.oscar.core._BaseActor._run_actor_async_generator
File "/home/admin/ray-pack/tmp/job/9f040080/pyenv/lib/python3.7/site-packages/mars/services/subtask/worker/processor.py", line 641, in run
result = yield self._running_aio_task
File "mars/oscar/core.pyx", line 458, in mars.oscar.core._BaseActor._run_actor_async_generator
File "mars/oscar/core.pyx", line 378, in _handle_actor_result
File "/home/admin/ray-pack/tmp/job/9f040080/pyenv/lib/python3.7/site-packages/mars/services/subtask/worker/processor.py", line 474, in run
await self._execute_graph(chunk_graph)
File "/home/admin/ray-pack/tmp/job/9f040080/pyenv/lib/python3.7/site-packages/mars/services/subtask/worker/processor.py", line 231, in _execute_graph
await to_wait
File "/home/admin/ray-pack/tmp/job/9f040080/pyenv/lib/python3.7/site-packages/mars/lib/aio/_threads.py", line 36, in to_thread
return await loop.run_in_executor(None, func_call)
File "/usr/local/python3/lib/python3.7/concurrent/futures/thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "/home/admin/ray-pack/tmp/job/9f040080/pyenv/lib/python3.7/site-packages/mars/core/mode.py", line 77, in _inner
return func(*args, **kwargs)
File "/home/admin/ray-pack/tmp/job/9f040080/pyenv/lib/python3.7/site-packages/mars/services/subtask/worker/processor.py", line 199, in _execute_operand
raise ExecutionError(ex).with_traceback(ex.__traceback__) from None
File "/home/admin/ray-pack/tmp/job/9f040080/pyenv/lib/python3.7/site-packages/mars/services/subtask/worker/processor.py", line 196, in _execute_operand
return execute(ctx, op)
File "/home/admin/ray-pack/tmp/job/9f040080/pyenv/lib/python3.7/site-packages/mars/core/operand/core.py", line 491, in execute
result = executor(results, op)
File "/home/admin/ray-pack/tmp/job/9f040080/pyenv/lib/python3.7/site-packages/mars/learn/contrib/lightgbm/_train.py", line 390, in execute
**op.kwds,
File "/home/admin/ray-pack/tmp/job/9f040080/pyenv/lib/python3.7/site-packages/lightgbm/sklearn.py", line 972, in fit
callbacks=callbacks, init_model=init_model)
File "/home/admin/ray-pack/tmp/job/9f040080/pyenv/lib/python3.7/site-packages/lightgbm/sklearn.py", line 758, in fit
callbacks=callbacks
File "/home/admin/ray-pack/tmp/job/9f040080/pyenv/lib/python3.7/site-packages/lightgbm/engine.py", line 271, in train
booster = Booster(params=params, train_set=train_set)
File "/home/admin/ray-pack/tmp/job/9f040080/pyenv/lib/python3.7/site-packages/lightgbm/basic.py", line 2602, in __init__
num_machines=params["num_machines"]
File "/home/admin/ray-pack/tmp/job/9f040080/pyenv/lib/python3.7/site-packages/lightgbm/basic.py", line 2745, in set_network
ctypes.c_int(num_machines)))
File "/home/admin/ray-pack/tmp/job/9f040080/pyenv/lib/python3.7/site-packages/lightgbm/basic.py", line 125, in _safe_call
raise LightGBMError(_LIB.LGBM_GetLastError().decode('utf-8'))
mars.core.base.ExecutionError: [address=ray://mars_cluster_1685014327/1/3, pid=400941] Machine list file doesn't contain the local machine
```
|
closed
|
2023-05-25T11:45:00Z
|
2023-06-02T08:34:03Z
|
https://github.com/mars-project/mars/issues/3350
|
[
"mod: learn",
"mod: ray integration"
] |
zhongchun
| 0
|
flasgger/flasgger
|
api
| 220
|
A way to parse request data and validate schema
|
Hi, I found a way to parse a request's data and validate its schema; not only data in the body can be validated. Take a look at this [gist](https://gist.github.com/strongbugman/2c8e1cd9673ebdc59403a4055e4060c6). Do you have any ideas?
|
closed
|
2018-07-30T05:12:55Z
|
2018-08-27T05:30:06Z
|
https://github.com/flasgger/flasgger/issues/220
|
[] |
strongbugman
| 2
|
pytorch/vision
|
computer-vision
| 8,753
|
Conversion of WEBP to grayscale on read
|
### 🐛 Describe the bug
The `read_image` func ignores `ImageReadMode.GRAY` when reading WEBP images. It produces tensors with 3 color channels instead of 1.
Example image: [here](https://drive.google.com/file/d/1NyDLCoWV_TWOiVwlI8J-98oV-l_KKr-y)
Reproduction code:
```python
from torchvision.io import read_image, ImageReadMode
read_image("webp-image-file-format.webp"), ImageReadMode.GRAY).shape
```
Expected: `torch.Size([1, 576, 1022])`
Actual: `torch.Size([3, 576, 1022])`
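A possible interim workaround is to convert after decoding, since `rgb_to_grayscale` lives in torchvision's functional transforms (this sidesteps the ignored read mode rather than fixing it):
```python
from torchvision.io import read_image
from torchvision.transforms.v2.functional import rgb_to_grayscale

img = read_image("webp-image-file-format.webp")  # decodes as 3 channels
gray = rgb_to_grayscale(img)                     # -> torch.Size([1, 576, 1022])
```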
### Versions
```sh
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.11.10 (main, Oct 3 2024, 07:29:13) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-125-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.5.40
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla P100-PCIE-16GB
Nvidia driver version: 550.120
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 10
On-line CPU(s) list: 0-9
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i5-10600K CPU @ 4.10GHz
CPU family: 6
Model: 165
Thread(s) per core: 1
Core(s) per socket: 10
Socket(s): 1
Stepping: 5
BogoMIPS: 8207.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt xsaveopt xsavec xgetbv1 xsaves arat umip pku ospke md_clear flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 320 KiB (10 instances)
L1i cache: 320 KiB (10 instances)
L2 cache: 40 MiB (10 instances)
L3 cache: 16 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-9
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Unknown: Dependent on hypervisor status
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.0.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.13.1
[pip3] torch==2.5.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] numpy 2.0.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] torchvision 0.20.1 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
```
|
open
|
2024-11-26T13:30:42Z
|
2024-12-09T12:37:16Z
|
https://github.com/pytorch/vision/issues/8753
|
[] |
DLumi
| 3
|
JaidedAI/EasyOCR
|
machine-learning
| 681
|
Accelerate reader.readtext() with OpenMP
|
Hello all, this is more a question than an issue. I know `reader.readtext()` can be accelerated if I have a GPU with CUDA available; I was wondering if there was a flag to accelerate it with multi-threading (OpenMP).
Regards,
Victor
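As far as I know there is no EasyOCR-specific flag, but CPU inference runs through PyTorch, whose intra-op thread pool (OpenMP/MKL-backed on most builds) can be sized explicitly. A sketch:
```python
import torch
import easyocr

# Size PyTorch's intra-op pool; the OMP_NUM_THREADS environment variable
# works as well for OpenMP-backed builds.
torch.set_num_threads(8)

reader = easyocr.Reader(['en'], gpu=False)
print(reader.readtext('example.png'))  # example.png is a placeholder path
```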
|
open
|
2022-03-14T01:31:44Z
|
2022-03-14T01:31:44Z
|
https://github.com/JaidedAI/EasyOCR/issues/681
|
[] |
vkrGitHub
| 0
|
mljar/mercury
|
data-visualization
| 249
|
app hangs for long running notebooks
|
When @adrianblazeusz was running a notebook that needs >10 seconds to execute, he noticed that the web app is unresponsive at the start. This can easily be reproduced with a notebook whose cells need >10 seconds to execute.
Looks like we overwrote the state of the worker before notebook initialization ...
|
closed
|
2023-04-20T11:36:45Z
|
2023-04-20T14:02:12Z
|
https://github.com/mljar/mercury/issues/249
|
[
"bug"
] |
pplonski
| 2
|
PaddlePaddle/models
|
computer-vision
| 4,973
|
The avg losses are not same for training and validation when the same data are used.
|
Hi,
I am training an AttentionCluster for video classification using my own data, following
https://github.com/PaddlePaddle/models/blob/release/1.8/PaddleCV/video/models/attention_cluster/README.md
I used the same data for training, validation, testing, and inference. In the training phase, the avg losses for training and validation are different. The avg loss for training is about 0.002, but that for validation is 0.1.
My config file is something like this:
```yaml
MODEL:
    name: "AttentionCluster"
    dataset: "YouTube-8M"
    bone_network: None
    drop_rate: 0.5
    feature_num: 2
    feature_names: ['rgb', 'audio']
    feature_dims: [1024, 128]
    seg_num: 100
    cluster_nums: [32, 32]
    num_classes: 12
    topk: 20
UNIQUE:
    good: 20
    bad: 30
TRAIN:
    epoch: 100
    learning_rate: 0.001
    pretrain_base: None
    #batch_size: 2048
    batch_size: 128
    use_gpu: True
    #num_gpus: 8
    num_gpus: 1
    filelist: "data/dataset/youtube8m/train.list"
VALID:
    #batch_size: 2048
    batch_size: 128
    filelist: "data/dataset/youtube8m/train.list"
TEST:
    #batch_size: 256
    batch_size: 128
    filelist: "data/dataset/youtube8m/train.list"
INFER:
    batch_size: 1
    filelist: "data/dataset/youtube8m/train.list"
```
Here is a piece of the log from the last epoch:
[INFO: train_utils.py: 46]: ------- learning rate [0.001], learning rate counter [-] -----
[INFO: metrics_util.py: 79]: [TRAIN 2020-11-25 00:59:49] Epoch 99, iter 0, time 0.34092283248901367, , loss = 0.001345, Hit@1 = 0.40, PERR = 1.00, GAP = 1.00
[INFO: metrics_util.py: 79]: [TRAIN 2020-11-25 00:59:49] Epoch 99, iter 1, time 0.18329644203186035, , loss = 0.028296, Hit@1 = 0.41, PERR = 0.99, GAP = 1.00
[INFO: metrics_util.py: 79]: [TRAIN 2020-11-25 00:59:49] Epoch 99, iter 2, time 0.1722731590270996, , loss = 0.001181, Hit@1 = 0.47, PERR = 1.00, GAP = 1.00
[INFO: metrics_util.py: 79]: [TRAIN 2020-11-25 00:59:50] Epoch 99, iter 3, time 0.17226123809814453, , loss = 0.002051, Hit@1 = 0.43, PERR = 1.00, GAP = 1.00
[INFO: metrics_util.py: 79]: [TRAIN 2020-11-25 00:59:50] Epoch 99, iter 4, time 0.17180728912353516, , loss = 0.001143, Hit@1 = 0.42, PERR = 1.00, GAP = 1.00
[INFO: metrics_util.py: 79]: [TRAIN 2020-11-25 00:59:50] Epoch 99, iter 5, time 0.17211627960205078, , loss = 0.002029, Hit@1 = 0.45, PERR = 1.00, GAP = 1.00
[INFO: metrics_util.py: 79]: [TRAIN 2020-11-25 00:59:50] Epoch 99, iter 6, time 0.171461820602417, , loss = 0.002691, Hit@1 = 0.46, PERR = 1.00, GAP = 1.00
[INFO: metrics_util.py: 79]: [TRAIN 2020-11-25 00:59:50] Epoch 99, iter 7, time 0.1721813678741455, , loss = 0.001076, Hit@1 = 0.44, PERR = 1.00, GAP = 1.00
[INFO: train_utils.py: 122]: [TRAIN] Epoch 99 training finished, average time: 0.17362822805132186
[INFO: metrics_util.py: 79]: [TEST] test_iter 0 , loss = 0.474687, Hit@1 = 0.76, PERR = 0.98, GAP = 0.98
[INFO: metrics_util.py: 79]: [TEST] test_iter 1 , loss = 0.015227, Hit@1 = 1.00, PERR = 1.00, GAP = 1.00
[INFO: metrics_util.py: 79]: [TEST] test_iter 2 , loss = 0.113151, Hit@1 = 0.98, PERR = 0.98, GAP = 1.00
[INFO: metrics_util.py: 79]: [TEST] test_iter 3 , loss = 0.117705, Hit@1 = 0.99, PERR = 0.99, GAP = 0.99
[INFO: metrics_util.py: 79]: [TEST] test_iter 4 , loss = 0.105678, Hit@1 = 0.46, PERR = 0.98, GAP = 1.00
[INFO: metrics_util.py: 79]: [TEST] test_iter 5 , loss = 0.105103, Hit@1 = 0.98, PERR = 0.99, GAP = 1.00
[INFO: metrics_util.py: 79]: [TEST] test_iter 6 , loss = 0.004947, Hit@1 = 1.00, PERR = 1.00, GAP = 1.00
[INFO: metrics_util.py: 79]: [TEST] test_iter 7 , loss = 0.013899, Hit@1 = 1.00, PERR = 1.00, GAP = 1.00
[INFO: metrics_util.py: 112]: [TEST] Finish avg_hit_at_one: 0.896331787109375, avg_perr: 0.9912109375, avg_loss :0.1187995703658089, aps: [0.9961657036415631, 0, 0, 0, 0, 0, 1.0000000000000002, 0, 0.9999999999999999, 1.0, 0, 0.9981928818589554], gap:0.9960663751614023
share_vars_from is set, scope is ignored.
|
open
|
2020-11-25T04:20:57Z
|
2024-02-26T05:09:49Z
|
https://github.com/PaddlePaddle/models/issues/4973
|
[] |
p1n0cch10
| 2
|
ultralytics/ultralytics
|
pytorch
| 19,468
|
What is the best practice to set `shuffle=False` in DataLoader?
|
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
I tried to enable the `rect=True` feature in YOLO11 training mode, and it just shows:
```
WARNING `rect=True` is incompatible with DataLoader shuffle, setting shuffle=False
```
Then I found that `shuffle` is not an argument for training configuration. https://docs.ultralytics.com/modes/train/#train-settings
Then I checked the code: `shuffle` is automatically set to `True` in training mode. https://github.com/ultralytics/ultralytics/blob/main/ultralytics/models/yolo/detect/train.py#L50
Then it seems the only way is to define my own DataLoader?
What is the best practice to do so? Is there any code available for reference?
Thanks
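Note that when `rect=True` the trainer already forces `shuffle=False` (that is exactly what the warning reports), so no extra work is needed for that case. To force it independently of `rect`, one hedged sketch is to subclass the trainer; requesting the loader in `"val"` mode keeps shuffle off, at the cost of also disabling train-time augmentation:
```python
from ultralytics.models.yolo.detect import DetectionTrainer

class NoShuffleTrainer(DetectionTrainer):
    # Sketch only: "val" mode builds a sequential (unshuffled) loader,
    # but it also drops training augmentations such as mosaic.
    def get_dataloader(self, dataset_path, batch_size=16, rank=0, mode="train"):
        return super().get_dataloader(dataset_path, batch_size, rank, mode="val")

trainer = NoShuffleTrainer(overrides={"model": "yolo11n.pt", "data": "coco8.yaml", "epochs": 1})
trainer.train()
```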
### Additional
_No response_
|
closed
|
2025-02-27T22:38:23Z
|
2025-02-28T12:31:49Z
|
https://github.com/ultralytics/ultralytics/issues/19468
|
[
"question"
] |
WuZhuoran
| 12
|
marimo-team/marimo
|
data-visualization
| 3,545
|
Cells sometimes switch columns in app view
|
### Describe the bug
Editing a notebook in app view — sometimes cells that are supposed to be in one column are rendered in the other.
Example:
<img width="1390" alt="Image" src="https://github.com/user-attachments/assets/4f391ac1-306d-4b41-8e13-66475c95f5e7" />
Expected: In app view, left column says hello, right column says Title, bye.
Actual:
<img width="1101" alt="Image" src="https://github.com/user-attachments/assets/cfc30b24-6478-4e41-9727-22f2a44860d7" />
### Environment
<details>
```
{
"marimo": "0.10.16",
"OS": "Darwin",
"OS Version": "22.5.0",
"Processor": "arm",
"Python Version": "3.12.4",
"Binaries": {
"Browser": "131.0.6778.265",
"Node": "v21.5.0"
},
"Dependencies": {
"click": "8.1.8",
"docutils": "0.21.2",
"itsdangerous": "2.2.0",
"jedi": "0.19.2",
"markdown": "3.7",
"narwhals": "1.23.0",
"packaging": "24.2",
"psutil": "6.1.1",
"pygments": "2.19.1",
"pymdown-extensions": "10.14.1",
"pyyaml": "6.0.2",
"ruff": "0.9.2",
"starlette": "0.45.2",
"tomlkit": "0.13.2",
"typing-extensions": "4.12.2",
"uvicorn": "0.34.0",
"websockets": "14.2"
},
"Optional Dependencies": {}
```
</details>
### Code to reproduce
```python
import marimo
__generated_with = "0.10.16"
app = marimo.App(width="columns")
@app.cell(column=0, hide_code=True)
def _(mo):
mo.md(
"""
hello
hello
hello
hello
"""
)
return
@app.cell(column=1, hide_code=True)
def _(mo):
mo.md("""# Title""")
return
@app.cell
def _(mo):
mo.md("""bye""")
return
@app.cell
def _():
import marimo as mo
return (mo,)
if __name__ == "__main__":
app.run()
```
|
closed
|
2025-01-23T05:33:14Z
|
2025-01-23T06:14:53Z
|
https://github.com/marimo-team/marimo/issues/3545
|
[
"bug",
"good first issue (typescript)"
] |
akshayka
| 0
|
HIT-SCIR/ltp
|
nlp
| 176
|
Does the current offline version (3.3.2) support the "semantic dependency parsing" (语义依存分析) feature?
|
Does the current offline version (3.3.2) support the "semantic dependency parsing" (语义依存分析) feature?
|
closed
|
2016-07-22T11:24:18Z
|
2016-07-22T12:46:16Z
|
https://github.com/HIT-SCIR/ltp/issues/176
|
[] |
Reneexml
| 1
|
polakowo/vectorbt
|
data-visualization
| 430
|
How to custom order size in from_signals() different size for different order
|
I'm using from_signals() to construct the backtest. However, I want the first order's size to be different from the other orders, i.e. bigger. All order sizes except the first are the same constant value.
I checked the size and size_type parameters, but was not able to find a proper solution.
How can I do this with from_signals()?
Thanks a lot.
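One hedged approach: `size` in from_signals broadcasts against the price shape, so a per-row array lets one row differ from the rest. A sketch assuming `close`, `entries`, and `exits` already exist and the first True in `entries` produces the first order:
```python
import numpy as np
import vectorbt as vbt

size = np.full(close.shape, 10.0)               # constant size for every order
first = np.flatnonzero(np.asarray(entries))[0]  # row of the first entry signal
size[first] = 100.0                             # bigger size for the first order

pf = vbt.Portfolio.from_signals(close, entries, exits, size=size)
```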
|
closed
|
2022-04-06T14:21:46Z
|
2022-04-07T01:56:55Z
|
https://github.com/polakowo/vectorbt/issues/430
|
[] |
mikolaje
| 1
|
JoeanAmier/TikTokDownloader
|
api
| 8
|
KeyError: 55
|
File "D:\PycharmProjects\TikTokDownloader\src\DataDownloader.py", line 469, in run
self.get_info(video, "Video")
File "D:\PycharmProjects\TikTokDownloader\src\DataDownloader.py", line 271, in get_info
type_ = {68: "Image", 0: "Video"}[item["aweme_type"]]
~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^
KeyError: 55
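A hedged defensive patch for unseen aweme_type codes (55 here) would be to fall back to a default instead of indexing the dict directly; whether 55 should map to Image or Video is an assumption:
```python
# DataDownloader.py, get_info(): tolerate unseen aweme_type codes.
type_ = {68: "Image", 0: "Video"}.get(item["aweme_type"], "Video")
```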
|
closed
|
2023-06-28T09:49:22Z
|
2023-06-30T15:30:42Z
|
https://github.com/JoeanAmier/TikTokDownloader/issues/8
|
[
"功能异常(bug)"
] |
jingle1267
| 2
|
AutoGPTQ/AutoGPTQ
|
nlp
| 263
|
[BUG] Unable to generate, model crashes
|
**Describe the bug**
Unable to generate any text. As soon as I input a prompt, the model crashes less than a second later. The following error occurs:
```
Traceback (most recent call last):
File "/home/maxwellj/Development/llama/text-generation-webui/modules/callbacks.py", line 55, in gentask
ret = self.mfunc(callback=_callback, *args, **self.kwargs)
File "/home/maxwellj/Development/llama/text-generation-webui/modules/text_generation.py", line 307, in generate_with_callback
shared.model.generate(**kwargs)
File "/home/maxwellj/Development/llama/installer_files/env/lib/python3.10/site-packages/auto_gptq/modeling/_base.py", line 442, in generate
with torch.inference_mode(), torch.amp.autocast(device_type=self.device.type):
File "/home/maxwellj/Development/llama/installer_files/env/lib/python3.10/site-packages/auto_gptq/modeling/_base.py", line 431, in device
device = [d for d in self.hf_device_map.values() if d not in {'cpu', 'disk'}][0]
IndexError: list index out of range
Output generated in 0.39 seconds (0.00 tokens/s, 0 tokens, context 43, seed 802264682)
```
**Hardware details**
```
System:
Kernel: 6.4.10-200.fc38.x86_64 arch: x86_64 bits: 64 Desktop: Hyprland
Distro: Fedora release 38 (Thirty Eight)
Machine:
Type: Desktop Mobo: ASUSTeK model: EX-A320M-GAMING v: Rev X.0x
UEFI: American Megatrends v: 4023
date: 08/20/2018
CPU:
Info: 8-core model: AMD Ryzen 7 2700X bits: 64 type: MT MCP cache: L2: 4 MiB
Speed (MHz): avg: 2264 min/max: 2200/3700 cores: 1: 2200 2: 2200 3: 2200
4: 3700 5: 2200 6: 2200 7: 2200 8: 2200 9: 2053 10: 2200 11: 2200 12: 1884
13: 2196 14: 2200 15: 2200 16: 2200
Graphics:
Device-1: AMD Navi 10 [Radeon RX 5600 OEM/5600 XT / 5700/5700 XT]
driver: amdgpu v: kernel
Display: wayland server: X.org v: 1.20.14 with: Xwayland v: 22.1.9
compositor: Hyprland driver: X: loaded: amdgpu
unloaded: fbdev,modesetting,radeon,vesa dri: radeonsi gpu: amdgpu
resolution: 1: 1920x1080~144Hz 2: 1920x1080~144Hz
API: OpenGL v: 4.6 Mesa 23.1.5 renderer: AMD Radeon RX 5600 XT (navi10
LLVM 16.0.6 DRM 3.52 6.4.10-200.fc38.x86_64)
Swap:
ID-1: swap-1 type: zram size: 8 GiB used: 1.21 GiB (15.1%) dev: /dev/zram0
```
**Software version**
Using oobabooga's text-generation-webui (`300219b`). The model I tried is https://huggingface.co/jondurbin/airoboros-7b-gpt4-1.1
**To Reproduce**
Steps to reproduce the behavior:
1. Load airoboros-7b-gpt4-1.1
2. Input a prompt
3. Model crashes
**Expected behavior**
Model generates a response
**Additional context**
I am using CPU only. I have other (seemingly unrelated) issues with the GPU that prevent oobabooga's web UI from functioning.
|
closed
|
2023-08-16T18:07:26Z
|
2023-08-17T10:33:09Z
|
https://github.com/AutoGPTQ/AutoGPTQ/issues/263
|
[
"bug"
] |
maxwelljens
| 2
|
MagicStack/asyncpg
|
asyncio
| 512
|
The custom uuid implementation of asyncpg causes errors in my api
|
* **asyncpg version**:
asyncpg==0.20.0
* **PostgreSQL version**:
PostgreSQL 11.5 on x86_64-pc-linux-musl, compiled by gcc (Alpine 8.3.0) 8.3.0, 64-bit
* **Do you use a PostgreSQL SaaS? If so, which? Can you reproduce
the issue with a local PostgreSQL install?**:
no
* **Python version**:
3.8.0
* **Platform**:
linux
* **Do you use pgbouncer?**:
no
* **Did you install asyncpg with pip?**:
yes
* **If you built asyncpg locally, which version of Cython did you use?**:
* **Can the issue be reproduced under both asyncio and
[uvloop](https://github.com/magicstack/uvloop)?**:
yes
I am using asyncpg to build a simple api using the library [fastapi](https://github.com/tiangolo/fastapi), the [asyncpg implementation of uuid](https://github.com/MagicStack/py-pgproto/blob/484e3520d8cb0514b7596a8f9eaa80f3f7b79d0c/uuid.pyx) causes an error when I read a uuid from the database.
The uuid implementation of asyncpg is not properly picked up by fastapi because they use type(obj) instead of isinstance(obj, uuid.UUID). Basically there is a defined list of types with their jsonification methods and asyncpg.pgproto.pgproto.UUID is not in that list. This piece of code demonstrates the issue:
```python
import uuid
from asyncpg.pgproto import pgproto
def demonstrate_uuid_issue():
asyncpg_uuid = pgproto.UUID("a10ff360-3b1e-4984-a26f-d3ab460bdb51")
regular_uuid = uuid.UUID("a10ff360-3b1e-4984-a26f-d3ab460bdb51")
print(regular_uuid == asyncpg_uuid) # True
print(isinstance(asyncpg_uuid, type(regular_uuid))) # True
print(type(asyncpg_uuid) == type(regular_uuid)) # False
```
The pgproto.UUID implementation does a pretty good job hiding its true colors, but unfortunately it is still of a different type than the regular uuid.UUID. This basically throws an error in my api and forces me to use the regular uuid type like so:
```python
await connection.set_type_codec('uuid',
encoder=str,
decoder=uuid.UUID,
schema='pg_catalog')
```
Someone already suggested to push your new uuid implementation upstream:
https://github.com/MagicStack/py-pgproto/blob/484e3520d8cb0514b7596a8f9eaa80f3f7b79d0c/uuid.pyx#L324-L328
I think that would be an ideal solution to the problem. Are there any other steps I can take to resolve this issue on the asyncpg side of things?
|
closed
|
2019-11-29T12:41:07Z
|
2024-08-10T23:45:23Z
|
https://github.com/MagicStack/asyncpg/issues/512
|
[] |
RmStorm
| 14
|
aio-libs/aiopg
|
sqlalchemy
| 222
|
Connection remains open if InternalError happened during connect()
|
Imagine following situation:
```python
if conn.closed:
try:
conn = yield from aiopg.connect(dsn)
except psycopg2.InternalError as e:
print("reconnect InternalError ", str(e), type(e))
except psycopg2.Error as e:
print("reconnect error ", str(e), type(e))
```
When connect() raises InternalError (in my case this happens during an hstore select request), conn still points to the old connection, but the new connection remains open and after some time will emit a ResourceWarning about deleting a connection object with an open _conn.
|
open
|
2016-11-29T13:24:16Z
|
2017-10-17T19:28:40Z
|
https://github.com/aio-libs/aiopg/issues/222
|
[] |
kelvich
| 2
|