| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
hzwer/ECCV2022-RIFE
|
computer-vision
| 221
|
About LiteFlowNet's pre-trained model as the overpowered teacher in the leakage distillation
|
Hello, I have a question I would like to ask you: how do I add LiteFlowNet's pre-trained model to the code as the overpowered teacher? I could not find any call to the LiteFlowNet pre-trained model in the code.
|
closed
|
2021-12-11T08:50:41Z
|
2021-12-17T11:57:51Z
|
https://github.com/hzwer/ECCV2022-RIFE/issues/221
|
[] |
Heroandzhang
| 4
|
huggingface/diffusers
|
pytorch
| 10,972
|
Loading LoRA weights fails for OneTrainer Flux LoRAs
|
### Describe the bug
Loading [OneTrainer](https://github.com/Nerogar/OneTrainer) style LoRAs, using diffusers commit #[dcd77ce22273708294b7b9c2f7f0a4e45d7a9f33](https://github.com/huggingface/diffusers/commit/dcd77ce22273708294b7b9c2f7f0a4e45d7a9f33), fails with error:
```
Traceback (most recent call last):
File "/+DEV/diffusers-edge/src/diffusers/loaders/lora_pipeline.py", line 1527, in load_lora_weights
state_dict, network_alphas = self.lora_state_dict(
File "/+DEVTOOL/miniconda3/envs/flux/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "/+DEV/diffusers-edge/src/diffusers/loaders/lora_pipeline.py", line 1450, in lora_state_dict
state_dict = _convert_kohya_flux_lora_to_diffusers(state_dict)
File "/+DEV/diffusers-edge/src/diffusers/loaders/lora_conversion_utils.py", line 687, in _convert_kohya_flux_lora_to_diffusers
return _convert_mixture_state_dict_to_diffusers(state_dict)
File "/+DEV/diffusers-edge/src/diffusers/loaders/lora_conversion_utils.py", line 659, in _convert_mixture_state_dict_to_diffusers
if remaining_all_unet:
UnboundLocalError: local variable 'remaining_all_unet' referenced before assignment
```
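For context, this crash is the standard Python pattern of a name bound only inside a conditional branch; a minimal self-contained illustration (hypothetical converter names, not the actual diffusers code) and the usual defensive fix:
```python
# Minimal illustration of the failure mode (hypothetical names, not diffusers code):
def convert_broken(state_dict):
    for key in state_dict:
        if key.startswith("lora_unet_"):
            remaining_all_unet = True  # bound only if this branch ever runs
    if remaining_all_unet:  # UnboundLocalError when no key matched
        return "unet-style"

# Defensive fix: bind the flag before the loop.
def convert_fixed(state_dict):
    remaining_all_unet = False
    for key in state_dict:
        if key.startswith("lora_unet_"):
            remaining_all_unet = True
    if remaining_all_unet:
        return "unet-style"

convert_fixed({})      # fine
# convert_broken({})   # raises UnboundLocalError
```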
Basic OneTrainer LoRA structure:
```
"onetrainer": {
"transformer_name": "lora_transformer_",
"double_block_name": "transformer_blocks_",
"single_block_name": "single_transformer_blocks_",
"double_module_names": (
"_attn_to_out_0",
("_attn_to_q", "_attn_to_k", "_attn_to_v"),
"_ff_net_0_proj",
"_ff_net_2",
"_norm1_linear",
"_attn_to_add_out",
("_attn_add_q_proj", "_attn_add_k_proj", "_attn_add_v_proj"),
"_ff_context_net_0_proj", "_ff_context_net_2",
"_norm1_context_linear"
),
"single_module_names": (
("_attn_to_q", "_attn_to_k", "_attn_to_v", "_proj_mlp"),
"_proj_out",
"_norm_linear",
),
"param_names": (".lora_down.weight", ".lora_up.weight", ".alpha"),
"dora_param_name": ".dora_scale",
"text_encoder_names": ("lora_te1_", "lora_te2_"),
"unique_meta": ("ot_branch", "ot_revision", "ot_config"),
"comment": "kohya-diffusers mix-ish, supports modelspec, yay"
},
```
Example LoRAs:
https://civitai.com/models/767016?modelVersionId=857899
https://civitai.com/models/794095?modelVersionId=887953
https://civitai.com/models/754969?modelVersionId=884632
https://civitai.com/models/991928?modelVersionId=1111315
https://civitai.com/models/825919?modelVersionId=923640
https://civitai.com/models/1226276?modelVersionId=1381683
Somewhat related: https://github.com/huggingface/diffusers/issues/10954
### Reproduction
```
from pathlib import Path
import torch
from diffusers import FluxTransformer2DModel, TorchAoConfig, FluxPipeline
from transformers import T5EncoderModel
repo_id = "black-forest-labs/FLUX.1-dev"
dtype = torch.bfloat16
quantization_config = TorchAoConfig("int8_weight_only")
transformer = FluxTransformer2DModel.from_pretrained(
repo_id,
subfolder="transformer",
quantization_config=quantization_config,
torch_dtype=dtype,
)
text_encoder_2 = T5EncoderModel.from_pretrained(
repo_id,
subfolder="text_encoder_2",
quantization_config=quantization_config,
torch_dtype=dtype,
)
pipe = FluxPipeline.from_pretrained(
repo_id,
transformer=transformer,
text_encoder_2=text_encoder_2,
torch_dtype=dtype,
)
lora_path = Path("/-LoRAs/Flux/charcoal3000.safetensors")
pipe.load_lora_weights(lora_path, adapter_name=lora_path.stem)
```
### Logs
```shell
```
### System Info
diffusers dcd77ce22273708294b7b9c2f7f0a4e45d7a9f33, Linux like everyone, and python3.10
### Who can help?
Calling LoRA ambassador Mr. @sayakpaul
|
closed
|
2025-03-05T13:07:40Z
|
2025-03-06T08:33:34Z
|
https://github.com/huggingface/diffusers/issues/10972
|
[
"bug"
] |
spezialspezial
| 2
|
junyanz/pytorch-CycleGAN-and-pix2pix
|
deep-learning
| 1,635
|
Hello,
|
Thank you
|
closed
|
2024-03-13T09:02:26Z
|
2024-03-21T00:30:07Z
|
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1635
|
[] |
czh886
| 1
|
modin-project/modin
|
data-science
| 6,712
|
Copy `_shape_hint` in `query_compiler.copy` function
|
closed
|
2023-11-06T18:02:39Z
|
2023-11-07T09:46:00Z
|
https://github.com/modin-project/modin/issues/6712
|
[
"Performance 🚀"
] |
anmyachev
| 0
|
|
graphql-python/gql
|
graphql
| 207
|
gql tests are failing with graphql-core 3.1.5 (cosmetic)
|
With graphql-core version 3.1.5:
* [print_ast() break arguments over multiple lines ](https://github.com/graphql-python/graphql-core/commit/ae923bb15ce58c7059e7e9f352e079ba8b23d3f9)
* [the check for the source argument was changed](https://github.com/graphql-python/graphql-core/commit/a9ae0d90fc25565dada6e363464ddc2f8eb712b3)
Some gql tests are failing now because of this.
Calling gql with an int instead of a string will generate a TypeError with `object of type 'int' has no len()` instead of `body must be a string`, as shown below.
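A minimal sketch of the changed behavior, assuming gql's public `gql()` helper:
```python
from gql import gql

try:
    gql(123)  # intentionally not a string
except TypeError as exc:
    # graphql-core 3.1.5: "object of type 'int' has no len()"
    # graphql-core <= 3.1.4: "body must be a string"
    print(exc)
```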
|
closed
|
2021-05-11T11:29:52Z
|
2021-05-22T21:41:45Z
|
https://github.com/graphql-python/gql/issues/207
|
[
"type: tests"
] |
leszekhanusz
| 0
|
Lightning-AI/LitServe
|
fastapi
| 366
|
Info route
|
<!--
⚠️ BEFORE SUBMITTING, READ:
We're excited for your request! However, here are things we are not interested in:
- Decorators.
- Doing the same thing in multiple ways.
- Adding more layers of abstraction... tree-depth should be 1 at most.
- Features that over-engineer or complicate the code internals.
- Linters, and crud that complicates projects.
-->
----
## 🚀 Feature
<!-- A clear and concise description of the feature proposal -->
Add a `/info` route that returns
- Litserver configuration
- Model metadata
for example:
```
{
"model": {
"name": "my-awesome-model",
"version": "v1.1.0"
},
"litserver": {
"num_workers": 2,
"devices": ["cpu"],
"workers_per_device": 2,
"max_batch_size": 4,
"batch_timeout": "100ms"
}
}
```
### Motivation
<!--
Please outline the motivation for the proposal.
Is your feature request related to a problem? e.g., I'm always frustrated when [...].
If this is related to another GitHub issue, please link here too...
-->
It is useful to have a fast way to inspect the server and loaded-model configuration; for example, a backend could call this API to show in a dedicated UI which model is currently deployed in production.
### Pitch
<!-- A clear and concise description of what you want to happen. -->
Model metadata will be passed to the `LitServer` class as a custom JSON-serializable object:
```
server = ls.LitServer(ExampleLitAPI(), model_metadata={"name": "my-awesome-model", "version": "v1.1.0"})
```
then in the `register_endpoint` method the route will be added to the FastAPI app, as sketched below.
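A rough FastAPI sketch of what that route could look like (hypothetical helper and attribute names; LitServe internals may differ):
```python
from fastapi import FastAPI

def register_info_endpoint(app: FastAPI, server_config: dict, model_metadata: dict) -> None:
    """Attach a read-only /info route exposing server and model configuration."""
    @app.get("/info")
    def info() -> dict:
        return {"model": model_metadata, "litserver": server_config}

# Usage with the payload proposed above:
app = FastAPI()
register_info_endpoint(
    app,
    server_config={"num_workers": 2, "devices": ["cpu"], "max_batch_size": 4},
    model_metadata={"name": "my-awesome-model", "version": "v1.1.0"},
)
```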
|
closed
|
2024-11-21T10:29:09Z
|
2024-11-27T17:31:26Z
|
https://github.com/Lightning-AI/LitServe/issues/366
|
[
"enhancement"
] |
lorenzomassimiani
| 2
|
amdegroot/ssd.pytorch
|
computer-vision
| 78
|
in config.py, did min_sizes and max_sizes mean scale?
|
Nice work, thanks very much. But I have a little question:
```python
'min_sizes' : [30, 60, 111, 162, 213, 264],
'max_sizes' : [60, 111, 162, 213, 264, 315],
```
Does this mean the scale of the default boxes in SSD? Why did you set it this way? Why is it different from the 0.2-0.95 range in the original Caffe implementation?
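For what it's worth, these exact numbers fall out of the original Caffe SSD300 recipe, which works in integer percent of the 300px input (min_ratio 20, max_ratio 90, plus a special 10% scale for conv4_3) rather than the 0.2-0.9(5) floats from the paper; a quick sketch reproducing them, assuming that convention:
```python
# Reproduce min_sizes/max_sizes from the Caffe SSD300 ratio scheme.
input_size = 300
min_ratio, max_ratio = 20, 90
step = (max_ratio - min_ratio) // 4          # 4 intervals over the 5 upper layers -> 17

min_sizes = [input_size * 10 // 100]         # conv4_3 gets the special 10% scale
max_sizes = [input_size * 20 // 100]
for ratio in range(min_ratio, max_ratio + 1, step):   # 20, 37, 54, 71, 88
    min_sizes.append(input_size * ratio // 100)
    max_sizes.append(input_size * (ratio + step) // 100)

print(min_sizes)  # [30, 60, 111, 162, 213, 264]
print(max_sizes)  # [60, 111, 162, 213, 264, 315]
```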
|
open
|
2017-11-24T12:37:34Z
|
2019-06-12T11:25:24Z
|
https://github.com/amdegroot/ssd.pytorch/issues/78
|
[] |
squirrel16
| 5
|
ultralytics/ultralytics
|
computer-vision
| 19,232
|
I found that training on dual GPU will load pretraining weights, while in single GPU mode it seems that pretraining weights will not be loaded
|
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
### question1:
1. dual GPU mode (device)

My training script is as follows:
```
from ultralytics import YOLO
model = YOLO("yolo11x-cls.yaml")
results = model.train(data=r"/home/ps/train/YK/task/datasets/google-gt4/YOLO-classify/dataset1/20250120-new_jitai/",
epochs=3000,
imgsz=640,
batch=4,
workers=8,
patience=800,
task='classify',
pretrained=r"/home/ps/train/YK/task/ultralytics/runs/classify/train26/weights/best.pt",
seed=6,
cos_lr=True,
device="0,1",
augment=True,
)
```
2. single GPU mode (No logs similar to 'Transfer from pretrained weights' have appeared)
My training script is as follows:
```
from ultralytics import YOLO
model = YOLO("yolo11x-cls.yaml")
results = model.train(data=r"/home/ps/train/YK/task/datasets/google-gt4/YOLO-classify/dataset1/20250120-new_jitai/",
epochs=3000,
imgsz=640,
batch=4,
workers=8,
patience=800,
task='classify',
pretrained=r"/home/ps/train/YK/task/ultralytics/runs/classify/train26/weights/best.pt",
seed=6,
cos_lr=True,
device="1",
augment=True,
)
```
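For reference, when the model is constructed from a YAML, the documented way to attach weights explicitly is to chain `.load()` onto the constructor (a sketch assuming the standard Ultralytics API), independent of the `pretrained=` training argument:
```python
from ultralytics import YOLO

# Build the architecture from YAML, then explicitly transfer matching weights into it.
model = YOLO("yolo11x-cls.yaml").load(
    "/home/ps/train/YK/task/ultralytics/runs/classify/train26/weights/best.pt"
)
```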
### question2:
Debugging seems to be limited in dual GPU mode; for example, if the following breakpoint is inserted in the _do_train function of BaseTrainer (as below), it cannot be debugged there, but in single GPU mode it can.
Breakpoint position

I believe this is due to the conditional check on the variable world_size

### Additional
_No response_
|
closed
|
2025-02-13T11:43:01Z
|
2025-02-13T11:45:51Z
|
https://github.com/ultralytics/ultralytics/issues/19232
|
[
"question",
"classify"
] |
yuan-kai-design
| 1
|
mckinsey/vizro
|
data-visualization
| 681
|
Setting a default columnSize value for dash_ag_grid
|
Reference link: https://dash.plotly.com/dash-ag-grid/column-sizing#size-to-fit-and-responsive-size-to-fit
Should Vizro set `columnSize="responsiveSizeToFit"` for the `dash_ag_grid` figure function by default?
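For comparison, opting in explicitly today would look something like this (assuming `columnSize` is forwarded as a keyword argument to the underlying `dag.AgGrid` component):
```python
import pandas as pd
from vizro.tables import dash_ag_grid

df = pd.DataFrame({"city": ["Berlin", "Paris"], "population": [3_600_000, 2_100_000]})

# Explicit opt-in; the question above is whether this should become the default.
grid = dash_ag_grid(data_frame=df, columnSize="responsiveSizeToFit")
```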
|
open
|
2024-09-04T15:01:27Z
|
2024-09-23T10:08:00Z
|
https://github.com/mckinsey/vizro/issues/681
|
[] |
petar-qb
| 3
|
huggingface/transformers
|
deep-learning
| 36,836
|
GOT-OCR2 docs indicate model can produce markdown, but it only produces LaTeX.
|
Stated [here](https://huggingface.co/docs/transformers/en/model_doc/got_ocr2#formatted-text-inference)
Returning formatted text is toggled via the `format` boolean:
```python
inputs = processor(image, return_tensors="pt", format=True).to(device)
```
It only returns LaTeX. Can the model somehow return markdown or are the docs mistaken?
|
open
|
2025-03-19T21:14:46Z
|
2025-03-20T12:40:46Z
|
https://github.com/huggingface/transformers/issues/36836
|
[] |
piercelamb
| 1
|
psf/black
|
python
| 4,476
|
Report error when processing folders on the command line
|
<!--
Please make sure that the bug is not already fixed either in newer versions or the
current development version. To confirm this, you have three options:
1. Update Black's version if a newer release exists: `pip install -U black`
2. Use the online formatter at <https://black.vercel.app/?version=main>, which will use
the latest main branch. Note that the online formatter currently runs on
an older version of Python and may not support newer syntax, such as the
extended f-string syntax added in Python 3.12.
3. Or run _Black_ on your machine:
- create a new virtualenv (make sure it's the same Python version);
- clone this repository;
- run `pip install -e .[d]`;
- run `pip install -r test_requirements.txt`
- make sure it's sane by running `python -m pytest`; and
- run `black` like you did last time.
-->
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
Error when formatting a folder with multiple files using the command line
**To Reproduce**
<!--
Minimal steps to reproduce the behavior with source code and Black's configuration.
-->
For example:
Create a new folder ```src``` and create 2 empty files in ```src```
```shell
mkdir src
type nul > ./src/t1.py
type nul > ./src/t2.py
black ./src
```
The resulting error is:
```python
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "C:\Users\root\AppData\Local\pypoetry\Cache\virtualenvs\test-3OFd-K4v-py3.12\Scripts\black.exe\__main__.py", line 7, in <module>
File "src\black\__init__.py", line 1588, in patched_main
File "C:\Users\root\AppData\Local\pypoetry\Cache\virtualenvs\test-3OFd-K4v-py3.12\Lib\site-packages\click\core.py", line 1157, in __call__
return self.main(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\root\AppData\Local\pypoetry\Cache\virtualenvs\test-3OFd-K4v-py3.12\Lib\site-packages\click\core.py", line 1078, in main
rv = self.invoke(ctx)
^^^^^^^^^^^^^^^^
File "C:\Users\root\AppData\Local\pypoetry\Cache\virtualenvs\test-3OFd-K4v-py3.12\Lib\site-packages\click\core.py", line 1434, in invoke
return ctx.invoke(self.callback, **ctx.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\root\AppData\Local\pypoetry\Cache\virtualenvs\test-3OFd-K4v-py3.12\Lib\site-packages\click\core.py", line 783, in invoke
return __callback(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\root\AppData\Local\pypoetry\Cache\virtualenvs\test-3OFd-K4v-py3.12\Lib\site-packages\click\decorators.py", line 33, in new_func
return f(get_current_context(), *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "src\black\__init__.py", line 711, in main
File "C:\Users\root\AppData\Local\pypoetry\Cache\virtualenvs\test-3OFd-K4v-py3.12\Lib\site-packages\black\concurrency.py", line 98, in reformat_many
loop = asyncio.new_event_loop()
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\USER_PROGRAMS\python\Lib\asyncio\events.py", line 823, in new_event_loop
return get_event_loop_policy().new_event_loop()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\USER_PROGRAMS\python\Lib\asyncio\events.py", line 720, in new_event_loop
return self._loop_factory()
^^^^^^^^^^^^^^^^^^^^
File "C:\USER_PROGRAMS\python\Lib\asyncio\windows_events.py", line 316, in __init__
super().__init__(proactor)
File "C:\USER_PROGRAMS\python\Lib\asyncio\proactor_events.py", line 640, in __init__
self._make_self_pipe()
File "C:\USER_PROGRAMS\python\Lib\asyncio\proactor_events.py", line 787, in _make_self_pipe
self._ssock, self._csock = socket.socketpair()
^^^^^^^^^^^^^^^^^^^
File "C:\USER_PROGRAMS\python\Lib\socket.py", line 642, in _fallback_socketpair
raise ConnectionError("Unexpected peer connection")
ConnectionError: Unexpected peer connection
Exception ignored in: <function BaseEventLoop.__del__ at 0x0000000003B8B380>
Traceback (most recent call last):
File "C:\USER_PROGRAMS\python\Lib\asyncio\base_events.py", line 728, in __del__
File "C:\USER_PROGRAMS\python\Lib\asyncio\proactor_events.py", line 697, in close
File "C:\USER_PROGRAMS\python\Lib\asyncio\proactor_events.py", line 779, in _close_self_pipe
AttributeError: 'ProactorEventLoop' object has no attribute '_ssock'
```
**Expected behavior**
It should not report any errors.
<!-- A clear and concise description of what you expected to happen. -->
**Environment**
<!-- Please complete the following information: -->
- Black version: 24.10.0
- OS and Python version: Windows11 23H2/ Python 3.12.6
**Additional context**
<!-- Add any other context about the problem here. -->
|
closed
|
2024-10-10T15:40:16Z
|
2024-10-10T15:56:45Z
|
https://github.com/psf/black/issues/4476
|
[
"T: bug"
] |
wevsty
| 2
|
skypilot-org/skypilot
|
data-science
| 4,506
|
[bug] Task name is required when running `sky launch --docker`
|
<!-- Describe the bug report / feature request here -->
The documentation at https://docs.skypilot.co/en/latest/reference/yaml-spec.html#task-yaml states that the task name is optional; however, using the localdocker backend will result in an error if it is not specified.
```yaml
# Task name (optional), used for display purposes.
# name: my-task
resources:
# Optional; if left out, automatically pick the cheapest cloud.
cloud: kubernetes
# Working directory (optional) containing the project codebase.
# Its contents are synced to ~/sky_workdir/ on the cluster.
workdir: .
# Typical use: pip install -r requirements.txt
# Invoked under the workdir (i.e., can use its files).
setup: |
echo "Running setup."
# Typical use: make use of resources, such as running training.
# Invoked under the workdir (i.e., can use its files).
run: |
echo "Hello, SkyPilot!"
conda env list
```
```
(sky) ➜ hello-sky git:(f0ebf13b) ✗ sky launch ./hello-world.yaml --docker
Task from YAML spec: ./hello-world.yaml
Considered resources (1 node):
---------------------------------------------------------------------------------------------
CLOUD INSTANCE vCPUs Mem(GB) ACCELERATORS REGION/ZONE COST ($) CHOSEN
---------------------------------------------------------------------------------------------
Kubernetes 2CPU--2GB 2 2 - xxxxxxx 0.00 ✔
---------------------------------------------------------------------------------------------
Launching a new cluster 'sky-4242-gcgg'. Proceed? [Y/n]: y
Traceback (most recent call last):
File "/home/gcgg/applications/miniconda3/bin/sky", line 33, in <module>
sys.exit(load_entry_point('skypilot', 'console_scripts', 'sky')())
File "/home/gcgg/applications/miniconda3/lib/python3.8/site-packages/click/core.py", line 1137, in __call__
return self.main(*args, **kwargs)
File "/home/gcgg/applications/miniconda3/lib/python3.8/site-packages/click/core.py", line 1062, in main
rv = self.invoke(ctx)
File "/home/gcgg/code/skypilot/sky/utils/common_utils.py", line 366, in _record
return f(*args, **kwargs)
File "/home/gcgg/code/skypilot/sky/cli.py", line 838, in invoke
return super().invoke(ctx)
File "/home/gcgg/applications/miniconda3/lib/python3.8/site-packages/click/core.py", line 1668, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/gcgg/applications/miniconda3/lib/python3.8/site-packages/click/core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/gcgg/applications/miniconda3/lib/python3.8/site-packages/click/core.py", line 763, in invoke
return __callback(*args, **kwargs)
File "/home/gcgg/code/skypilot/sky/utils/common_utils.py", line 386, in _record
return f(*args, **kwargs)
File "/home/gcgg/code/skypilot/sky/cli.py", line 1159, in launch
_launch_with_confirm(task,
File "/home/gcgg/code/skypilot/sky/cli.py", line 628, in _launch_with_confirm
sky.launch(
File "/home/gcgg/code/skypilot/sky/utils/common_utils.py", line 386, in _record
return f(*args, **kwargs)
File "/home/gcgg/code/skypilot/sky/utils/common_utils.py", line 386, in _record
return f(*args, **kwargs)
File "/home/gcgg/code/skypilot/sky/execution.py", line 529, in launch
return _execute(
File "/home/gcgg/code/skypilot/sky/execution.py", line 302, in _execute
handle = backend.provision(
File "/home/gcgg/code/skypilot/sky/utils/common_utils.py", line 386, in _record
return f(*args, **kwargs)
File "/home/gcgg/code/skypilot/sky/utils/common_utils.py", line 366, in _record
return f(*args, **kwargs)
File "/home/gcgg/code/skypilot/sky/backends/backend.py", line 84, in provision
return self._provision(task, to_provision, dryrun, stream_logs,
File "/home/gcgg/code/skypilot/sky/backends/local_docker_backend.py", line 149, in _provision
assert task.name is not None, ('Task name cannot be None - have you '
AssertionError: Task name cannot be None - have you specified a task name?
```
<!-- If relevant, fill in versioning info to help us troubleshoot -->
_Version & Commit info:_
* `sky -v`: PLEASE_FILL_IN
* `sky -c`: PLEASE_FILL_IN
|
closed
|
2024-12-25T14:03:33Z
|
2024-12-26T00:15:09Z
|
https://github.com/skypilot-org/skypilot/issues/4506
|
[] |
gaocegege
| 2
|
MaartenGr/BERTopic
|
nlp
| 1,897
|
Home page get_topic_info() function not understood
|


Why don't the two functions return the same topic name for the same label? In fact, I think the first screenshot is out of order: the name of the topic labeled 0 should not be 0_..., should it?
|
open
|
2024-03-31T10:14:58Z
|
2024-04-03T08:11:23Z
|
https://github.com/MaartenGr/BERTopic/issues/1897
|
[] |
EricIrving-chs
| 5
|
dmlc/gluon-nlp
|
numpy
| 1,552
|
Operator npx.broadcast_like
|
## Description
Currently, pr #1551 and pr #1545 are blocked by operator npx.broadcast_like. This will be fixed in https://github.com/apache/incubator-mxnet/pull/20169
|
closed
|
2021-04-15T04:26:31Z
|
2021-06-03T17:44:45Z
|
https://github.com/dmlc/gluon-nlp/issues/1552
|
[
"bug"
] |
barry-jin
| 3
|
cvat-ai/cvat
|
computer-vision
| 8,576
|
Grafana is not restarting as other containers (using docker)
|
### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
1. Install and start cvat on a new VM using docker (not Kubernetes)
2. On CVAT UI, go to "Analytics" tab (it's working)
3. Reboot VM
4. On CVAT UI, try to go to "Analytics" tab (it's not working)
5. Check if grafana container is running. It's not running.
### Expected Behavior
Analytics tab is working, and Grafana container is restarted with other containers.
### Possible Solution
Use docker restart policy. I'm working on it in a PR.
### Context
_No response_
### Environment
```Markdown
- CVAT v2.21.1
- Docker version 27.3.1
- Docker used (not Kubernetes)
- Linux
```
|
closed
|
2024-10-22T09:02:56Z
|
2024-10-22T11:27:30Z
|
https://github.com/cvat-ai/cvat/issues/8576
|
[
"bug"
] |
Gui-U
| 0
|
huggingface/datasets
|
pytorch
| 6,810
|
Allow deleting a subset/config from a no-script dataset
|
As proposed by @BramVanroy, it would be neat to have this functionality through the API.
|
closed
|
2024-04-15T07:53:26Z
|
2025-01-11T18:40:40Z
|
https://github.com/huggingface/datasets/issues/6810
|
[
"enhancement"
] |
albertvillanova
| 3
|
ivy-llc/ivy
|
pytorch
| 28,717
|
Fix Frontend Failing Test: tensorflow - logic.paddle.equal_all
|
To-do List: https://github.com/unifyai/ivy/issues/27499
|
closed
|
2024-04-01T13:15:46Z
|
2024-04-09T04:31:30Z
|
https://github.com/ivy-llc/ivy/issues/28717
|
[
"Sub Task"
] |
ZJay07
| 0
|
pytest-dev/pytest-mock
|
pytest
| 123
|
Patches not stopped between tests.
|
Hi,
My patches seem to be leaking from one test into the next. Can you suggest why this is?
Tests below:
```python
import asyncio
import logging
import sys
import time
import pytest
import websockets
from asynctest import CoroutineMock, MagicMock
from pyskyq.status import Status
from .asynccontextmanagermock import AsyncContextManagerMock
from .mock_constants import WS_STATUS_MOCK
logformat = "[%(asctime)s] %(levelname)s:%(name)s:%(message)s"
logging.basicConfig(level=logging.WARNING, stream=sys.stdout,
format=logformat) # datefmt="%Y-%m-%d %H:%M:%S"
# THIS TEST WORKS FINE
def test_status(mocker):
a = mocker.patch('websockets.connect', new_callable=AsyncContextManagerMock)
a.return_value.__aenter__.return_value.recv = CoroutineMock(return_value=WS_STATUS_MOCK)
stat = Status('some_host')
stat.create_event_listener()
time.sleep(1) # allow time for awaiting, etc.
assert stat.standby is True
mocker.stopall()
def wait_beyond_timeout_then_serve_json():
time.sleep(3)
raise websockets.exceptions.ConnectionClosed
#return WS_STATUS_MOCK
# THIS TEST HAS THE MOCK FROM THE PREVIOUS TEST STILL IN PLACE
def test_status_timeout(mocker):
mocker.stopall()
b = mocker.patch('websockets.connect', new_callable=AsyncContextManagerMock)
b.return_value.__aenter__.return_value.recv = CoroutineMock(side_effect=wait_beyond_timeout_then_serve_json)
b.start()
stat = Status('timeout_host', ws_timeout=2)
logging.getLogger().setLevel(logging.DEBUG)
time.sleep(1)
with pytest.raises(asyncio.TimeoutError):
stat.create_event_listener()
b.stop()
```
|
closed
|
2018-09-29T15:47:40Z
|
2018-10-13T11:57:20Z
|
https://github.com/pytest-dev/pytest-mock/issues/123
|
[
"question"
] |
bradwood
| 6
|
keras-team/keras
|
python
| 20,574
|
MeanIoU differs from custom IoU metric implementation
|
Hi,
I am running a segmentation training process and am using the following function as a custom IoU metric:
```
@keras.saving.register_keras_serializable(package="glass_segm", name="custom_iou_metric")
def custom_iou_metric(y_true, y_pred, num_classes=3):
y_pred = tf.argmax(y_pred, axis=-1)
y_true = tf.cast(tf.reshape(y_true, [-1]), tf.int32)
y_pred = tf.cast(tf.reshape(y_pred, [-1]), tf.int32)
iou = tf.constant(0.0)
for i in range(num_classes):
true_mask = tf.cast(tf.equal(y_true, i), tf.float32)
pred_mask = tf.cast(tf.equal(y_pred, i), tf.float32)
intersection = tf.reduce_sum(true_mask * pred_mask)
union = tf.reduce_sum(true_mask) + tf.reduce_sum(pred_mask) - intersection
class_iou = tf.cond(
tf.equal(union, 0), lambda: tf.constant(1.0), lambda: intersection / union
)
iou += class_iou
iou /= tf.cast(num_classes, tf.float32)
return iou
```
I was expecting that your MeanIoU would be the IoU mean across the classes and **also** the mean over all the training and validation set batches, but it does not seem to work like that.
example:
given this:
```
y_pred = np.array([[[0.8, 0.1, 0.1], [0.7, 0.2, 0.1], [0.1, 0.7, 0.2], [0.2, 0.6, 0.2]],
[[0.6, 0.3, 0.1], [0.7, 0.2, 0.1], [0.2, 0.5, 0.3], [0.3, 0.4, 0.3]],
[[0.7, 0.2, 0.1], [0.3, 0.6, 0.1], [0.4, 0.4, 0.2], [0.3, 0.3, 0.4]],
[[0.5, 0.4, 0.1], [0.3, 0.5, 0.2], [0.3, 0.3, 0.4], [0.4, 0.5, 0.1]]]
)
y_true = np.array([[0, 0, 2, 1],
[1, 0, 1, 2],
[1, 2, 2, 0],
[1, 2, 0, 0]]
)
```
If I run:
```
m = keras.metrics.MeanIoU(num_classes=3,sparse_y_pred=False)
for i in range(1):
y_true[0][0] = i % 2
y_true[1][0] = i % 2
m.update_state(y_true, y_pred)
m.result()
```
and then:
```
import numpy as np
ll = []
for i in range(1):
y_true[0][0] = i % 2
y_true[1][0] = i % 2
ll.append(custom_iou_metric(y_true,y_pred))
np.mean(ll)
```
the result is the same.
But if I increase the range, they diverge a bit. What is the intuition behind summing confusion matrices as you do? That is not exactly the average.
In my example, if you increase the range they diverge, but not by much; however, I see big differences during training:
```
model.compile(
optimizer=optimizer,
loss=focal_loss(),
metrics=[
"accuracy",
custom_iou_metric,
keras.metrics.MeanIoU(num_classes=3,sparse_y_pred=False)
],
)
```
I am using a custom data generator, if that matters.
Thanks for the clarifications.
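For anyone comparing the two: if `MeanIoU` accumulates a single confusion matrix across all `update_state` calls and computes IoU from that total (which is how a running confusion-matrix metric typically works), the result is not the mean of per-batch IoUs; a small numpy sketch of the difference, under that assumption:
```python
import numpy as np

def iou_from_confusion(cm):
    # Per-class IoU = TP / (TP + FP + FN), averaged over classes.
    tp = np.diag(cm)
    union = cm.sum(axis=0) + cm.sum(axis=1) - tp
    return np.mean(tp / np.maximum(union, 1))

# Two batches with different class compositions (2 classes):
cm1 = np.array([[8, 2], [0, 0]])   # batch 1: class 1 never occurs
cm2 = np.array([[1, 0], [3, 6]])   # batch 2

print(np.mean([iou_from_confusion(cm1), iou_from_confusion(cm2)]))  # ~0.429, mean of per-batch IoUs
print(iou_from_confusion(cm1 + cm2))                                # ~0.594, IoU of the summed matrix
```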
|
closed
|
2024-12-01T17:26:27Z
|
2024-12-10T09:52:54Z
|
https://github.com/keras-team/keras/issues/20574
|
[
"type:Bug"
] |
edge7
| 12
|
ymcui/Chinese-LLaMA-Alpaca
|
nlp
| 859
|
Expanding the vocabulary
|
### Required checks before submitting
- [X] Make sure you are using the latest code from the repository (git pull); some issues have already been resolved and fixed.
- [X] Since the related dependencies are updated frequently, make sure you have followed the relevant steps in the [Wiki](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki)
- [X] I have read the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki/常见问题) and searched the issues for this problem without finding a similar issue or solution
- [X] Third-party plugin issues: e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), [LlamaChat](https://github.com/alexrozanski/LlamaChat); it is also recommended to look for solutions in the corresponding projects
- [X] Model correctness check: be sure to verify the model's [SHA256.md](https://github.com/ymcui/Chinese-LLaMA-Alpaca/blob/main/SHA256.md); with an incorrect model, correct behavior and results cannot be guaranteed
### Issue type
None
### Base model
None
### Operating system
None
### Detailed description
Hello, I am using llama2, whose vocabulary size is 32000; after expansion my own vocabulary has 49013 tokens, but fine-tuning fails with `ValueError: Trying to set a tensor of shape torch.Size([32000, 4096]) in "weight" (which has shape torch.Size([49013, 4096])), this look incorrect.` How can I solve this?
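One common cause (an assumption, since the full traceback isn't shown) is that the checkpoint and the model disagree on the embedding size; the usual fix is to resize the embedding to the expanded tokenizer before loading fine-tuning state, roughly:
```python
from transformers import LlamaForCausalLM, LlamaTokenizer

# Hypothetical paths, for illustration only.
model = LlamaForCausalLM.from_pretrained("path/to/llama2-base")           # 32000-row embedding
tokenizer = LlamaTokenizer.from_pretrained("path/to/expanded-tokenizer")  # 49013 tokens

# Resize embedding and lm_head to the expanded vocabulary so shapes match.
model.resize_token_embeddings(len(tokenizer))
```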
### Dependencies (required for code-related issues)
_No response_
### Logs or screenshots
_No response_
|
closed
|
2023-10-24T01:33:04Z
|
2023-11-13T22:02:12Z
|
https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/859
|
[
"stale"
] |
clclclaiggg
| 6
|
openapi-generators/openapi-python-client
|
fastapi
| 651
|
Why does required: [] generate an error?
|
```json
{
...
"components": {
"schemas": {
"ABC": {
"required": [],
...
},
...
}
}
}
```
this generates an error:
```
components -> schemas -> ABC -> required
ensure this value has at least 1 items (type=value_error.list.min_items; limit_value=1)
```
`"required": null` do not generates an error
expected:
- required not set
- required = null
- required = []
are the same and not generates an error
|
closed
|
2022-08-11T02:52:56Z
|
2024-10-27T18:52:24Z
|
https://github.com/openapi-generators/openapi-python-client/issues/651
|
[
"🐞bug"
] |
erdnax123
| 3
|
HumanSignal/labelImg
|
deep-learning
| 143
|
Can you tell me how to modify your code?
|
Hello !
I need to save other attributes, such as whether a person wears glasses or not; I have added these attributes with QCheckBox. Can you tell me how to save these attributes? I don't know how to append these checkboxes to shapes.
Thank you for your help!

|
open
|
2017-08-10T09:20:23Z
|
2017-11-28T07:31:58Z
|
https://github.com/HumanSignal/labelImg/issues/143
|
[] |
shiwenhao
| 3
|
yt-dlp/yt-dlp
|
python
| 11,847
|
Metadata embedding being very slow
|
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm asking a question and **not** reporting a bug or requesting a feature
- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Please make sure the question is worded well enough to be understood
Is it normal for --embed-metadata to take multiple minutes? I have tried it with just that option, but it always takes a long time. I did check CPU usage, but it's very minimal, so that most likely isn't the issue.
### Provide verbose output that clearly demonstrates the problem
- [ ] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [ ] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
_No response_
|
closed
|
2024-12-17T19:52:45Z
|
2024-12-18T03:55:41Z
|
https://github.com/yt-dlp/yt-dlp/issues/11847
|
[
"question"
] |
kenbaird13
| 5
|
inducer/pudb
|
pytest
| 205
|
Support for pyc files
|
I found this issue in the mailing list:
https://lists.tiker.net/pipermail/pudb/2015-August/000247.html
Are there any plans to be able to inspect the .py files that were used to compile the .pyc files? Is there any workaround? Thanks.
|
open
|
2016-10-28T19:19:24Z
|
2016-10-28T20:08:08Z
|
https://github.com/inducer/pudb/issues/205
|
[] |
ghost
| 1
|
jina-ai/serve
|
deep-learning
| 6,109
|
How to import logic from other modules in executor.py ?
|
**Describe your proposal/problem**
Cannot import my custom logic from other modules in executor.py.
Here's my executor.py:
```
from jina import Executor, requests
from docarray import DocList
from docarray.documents import TextDoc
from fake_agent import predict
class MyExecutor(Executor):
@requests(on='/prompt')
def promt(self, docs: DocList[TextDoc], **kwargs) -> DocList[TextDoc]:
print(docs.text)
docs.text = [predict(a) for a in docs.text]
# docs[1].text = 'goodbye, world!'
return docs
```
This is the project structure:
```
Project
| client.py
| deployment.yml
| tree.txt
|
+---executor1
| | config.yml
| | executor.py
| | fake_agent.py
| | requirements.txt
| |
| \---__pycache__
| | executor.cpython-39.pyc
|
\---__pycache__
| client.cpython-39.pyc
```
I'm serving the executor using `jina deployment --uses deployment.yml`
But I'm getting ImportError:
```
ImportError('can not import module from
C:\\Users\\Lenovo\\Desktop\\jina-exp\\executor1\\executor.py') during 'WorkerRuntime'
initialization
add "--quiet-error" to suppress the exception details
Traceback (most recent call last):
File "C:\programs\anaconda\envs\jina_llm\lib\site-packages\jina\importer.py", line 149, in _path_import
spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 790, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "C:\Users\Lenovo\Desktop\jina-exp\executor1\executor.py", line 4, in <module>
from fake_agent import predict
ModuleNotFoundError: No module named 'fake_agent'
```
which eventually is causing
```
jina.excepts.RuntimeFailToStart
```
What am I doing wrong?
Please guide me to the appropriate docs or issues on this, if they exist.
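In case it is the same root cause for others: Jina loads executor.py by file path, so sibling modules are not automatically importable; one workaround (a sketch, independent of Jina's config mechanics) is to put the executor's directory on `sys.path` before the local import:
```python
# At the top of executor.py, before `from fake_agent import predict`:
import os
import sys

# Make sibling modules importable when executor.py is loaded by path.
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))

from fake_agent import predict
```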
---
<!-- Optional, but really help us locate the problem faster -->
**Environment**
<!-- Run `jina --version-full` and copy paste the output here -->
- jina 3.22.4
- docarray 0.39.1
- jcloud 0.3
- jina-hubble-sdk 0.39.0
- jina-proto 0.1.27
- protobuf 4.25.0
- proto-backend upb
- grpcio 1.47.5
- pyyaml 6.0.1
- python 3.9.0
- platform Windows
- platform-release 10
- platform-version 10.0.19041
- architecture AMD64
- processor Intel64 Family 6 Model 165 Stepping 2, GenuineIntel
- uid 273085306477006
- session-id be030012-83e9-11ee-bc64-f85ea0af99ce
- uptime 2023-11-16T00:34:02.782899
- ci-vendor (unset)
- internal False
* JINA_DEFAULT_HOST (unset)
* JINA_DEFAULT_TIMEOUT_CTRL (unset)
* JINA_DEPLOYMENT_NAME (unset)
* JINA_DISABLE_UVLOOP (unset)
* JINA_EARLY_STOP (unset)
* JINA_FULL_CLI (unset)
* JINA_GATEWAY_IMAGE (unset)
* JINA_GRPC_RECV_BYTES (unset)
* JINA_GRPC_SEND_BYTES (unset)
* JINA_HUB_NO_IMAGE_REBUILD (unset)
* JINA_LOG_CONFIG (unset)
* JINA_LOG_LEVEL (unset)
* JINA_LOG_NO_COLOR (unset)
* JINA_MP_START_METHOD (unset)
* JINA_OPTOUT_TELEMETRY (unset)
* JINA_RANDOM_PORT_MAX (unset)
* JINA_RANDOM_PORT_MIN (unset)
* JINA_LOCKS_ROOT (unset)
* JINA_K8S_ACCESS_MODES (unset)
* JINA_K8S_STORAGE_CLASS_NAME (unset)
* JINA_K8S_STORAGE_CAPACITY (unset)
* JINA_STREAMER_ARGS (unset)
|
closed
|
2023-11-15T19:20:49Z
|
2023-11-16T07:11:05Z
|
https://github.com/jina-ai/serve/issues/6109
|
[] |
that-rahul-guy
| 2
|
laurentS/slowapi
|
fastapi
| 2
|
Limit rate issue
|
I've tested the rate limit locally and it works fine.
After I deployed the application on AWS, rate limiting didn't work at all until I set redis as storage.
But even with redis, the rate limit seems to be broken.
The limit is exceeded only after roughly the 10th attempt, even though I've set the limit to 5.
I've checked the redis value for the key inserted by the limiter, and I think it did not count every attempt.
I'm using FastAPI 0.45.0 and slowapi 0.1.1
|
closed
|
2020-04-23T12:32:22Z
|
2020-05-26T15:38:41Z
|
https://github.com/laurentS/slowapi/issues/2
|
[] |
ghost
| 12
|
ageitgey/face_recognition
|
machine-learning
| 1,022
|
face verification
|
* face_recognition version: 1.2.3
* Python version:3.6
* Operating System: centos7
### Description
This work is very helpful for checking whether two images come from the same person. But in my case I have a pool of faces of the same person. How can I improve accuracy by comparing my probe image against the pool rather than against a randomly chosen one? Can I get a single embedding from a pool of face images of the same person?
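For reference, a common approach is to compare the probe against every encoding in the pool and take the best distance, or to average the pool into a single template; a sketch with the library's documented helpers (hypothetical file names):
```python
import numpy as np
import face_recognition

pool = [
    face_recognition.face_encodings(face_recognition.load_image_file(f))[0]
    for f in ["person_1.jpg", "person_2.jpg", "person_3.jpg"]
]
probe = face_recognition.face_encodings(face_recognition.load_image_file("probe.jpg"))[0]

# Option 1: best match against the whole pool.
print(face_recognition.face_distance(pool, probe).min())

# Option 2: a single "template" embedding as the mean of the pool.
template = np.mean(pool, axis=0)
print(np.linalg.norm(template - probe))
```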
|
closed
|
2020-01-10T03:18:50Z
|
2021-08-09T10:27:02Z
|
https://github.com/ageitgey/face_recognition/issues/1022
|
[] |
flyingmrwang
| 7
|
wkentaro/labelme
|
deep-learning
| 807
|
[QUESTION]
|
I have this in Python and it works fine:
```python
subprocess.Popen(['labelme_json_to_dataset', json_path, '-o', out_path], stdout=subprocess.PIPE)
```
Is there a way to run labelme_json_to_dataset from C#?
|
closed
|
2020-11-30T22:33:02Z
|
2020-12-07T10:43:56Z
|
https://github.com/wkentaro/labelme/issues/807
|
[
"issue::bug"
] |
Dzsepetto
| 1
|
thtrieu/darkflow
|
tensorflow
| 357
|
Color change after 'resize_input' function
|
In the 'resize_input' function in darkflow/net/yolo/predict.py,
there is one line, `imsz = imsz[:,:,::-1]`.
It seems like the image loses its 'red' color after this line.
Could anyone explain why it is required to remove the 'red' color from the image?
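For what it's worth, `imsz[:,:,::-1]` does not drop a channel; it reverses the channel axis, which is the usual BGR (OpenCV) to RGB conversion; a quick numpy check:
```python
import numpy as np

bgr = np.zeros((2, 2, 3), dtype=np.uint8)
bgr[..., 2] = 255        # in BGR order, index 2 is red

rgb = bgr[:, :, ::-1]    # reverse the last axis: BGR -> RGB
print(rgb[0, 0])         # [255, 0, 0]: red moved to index 0, not removed
```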
|
open
|
2017-07-27T04:20:42Z
|
2017-07-27T04:20:42Z
|
https://github.com/thtrieu/darkflow/issues/357
|
[] |
nuitvolgit
| 0
|
AUTOMATIC1111/stable-diffusion-webui
|
pytorch
| 15,434
|
[Bug]: pytorch rocm 6.0 with 7600xt = HSA_STATUS_ERROR_INVALID_ISA
|
### Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
I have a 7600 XT now, and an error comes up.
The error occurs while starting the stable-diffusion-webui 1.8.0 with pytorch 2.1.2 rocm 6.0.
It is the same with pytorch 2.4.0 rocm 6.0,
and even with pytorch 2.3.0.
The error is
` rocdevice.cpp :2728: 0225198107 us: [pid:4807 tid:0x7d4fc8fff640] Callback: Queue 0x7d4de2e00000 aborting with error : HSA_STATUS_ERROR_INVALID_ISA: The instruction set architecture is invalid. code: 0x100f `
but it is fine with pytorch 2.2.0 with rocm 5.7.
Of course, I have set
export HSA_OVERRIDE_GFX_VERSION=11.0.0
### Steps to reproduce the problem
With a 7600 XT or 7600:
install rocm and pytorch with rocm 6.0 in a python venv environment,
and run
`HSA_OVERRIDE_GFX_VERSION=11.0.0 webui.sh`
### What should have happened?
Please support the RX 7600 XT with the pytorch rocm 6 version.
### What browsers do you use to access the UI ?
Mozilla Firefox
### Sysinfo
I will upload the sysinfo later.
For now, here is the rocminfo output:
```
=====================
HSA System Attributes
=====================
Runtime Version: 1.1
System Timestamp Freq.: 1000.000000MHz
Sig. Max Wait Duration: 18446744073709551615 (0xFFFFFFFFFFFFFFFF) (timestamp count)
Machine Model: LARGE
System Endianness: LITTLE
Mwaitx: DISABLED
DMAbuf Support: YES
==========
HSA Agents
==========
*******
Agent 1
*******
Name: AMD Ryzen 5 5600 6-Core Processor
Uuid: CPU-XX
Marketing Name: AMD Ryzen 5 5600 6-Core Processor
Vendor Name: CPU
Feature: None specified
Profile: FULL_PROFILE
Float Round Mode: NEAR
Max Queue Number: 0(0x0)
Queue Min Size: 0(0x0)
Queue Max Size: 0(0x0)
Queue Type: MULTI
Node: 0
Device Type: CPU
Cache Info:
L1: 32768(0x8000) KB
Chip ID: 0(0x0)
ASIC Revision: 0(0x0)
Cacheline Size: 64(0x40)
Max Clock Freq. (MHz): 3500
BDFID: 0
Internal Node ID: 0
Compute Unit: 12
SIMDs per CU: 0
Shader Engines: 0
Shader Arrs. per Eng.: 0
WatchPts on Addr. Ranges:1
Features: None
Pool Info:
Pool 1
Segment: GLOBAL; FLAGS: FINE GRAINED
Size: 32775240(0x1f41c48) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
Pool 2
Segment: GLOBAL; FLAGS: KERNARG, FINE GRAINED
Size: 32775240(0x1f41c48) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
Pool 3
Segment: GLOBAL; FLAGS: COARSE GRAINED
Size: 32775240(0x1f41c48) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
ISA Info:
*******
Agent 2
*******
Name: gfx1102
Uuid: GPU-XX
Marketing Name: AMD Radeon™ RX 7600 XT
Vendor Name: AMD
Feature: KERNEL_DISPATCH
Profile: BASE_PROFILE
Float Round Mode: NEAR
Max Queue Number: 128(0x80)
Queue Min Size: 64(0x40)
Queue Max Size: 131072(0x20000)
Queue Type: MULTI
Node: 1
Device Type: GPU
Cache Info:
L1: 32(0x20) KB
L2: 2048(0x800) KB
Chip ID: 29824(0x7480)
ASIC Revision: 0(0x0)
Cacheline Size: 64(0x40)
Max Clock Freq. (MHz): 2539
BDFID: 10240
Internal Node ID: 1
Compute Unit: 32
SIMDs per CU: 2
Shader Engines: 2
Shader Arrs. per Eng.: 2
WatchPts on Addr. Ranges:4
Coherent Host Access: FALSE
Features: KERNEL_DISPATCH
Fast F16 Operation: TRUE
Wavefront Size: 32(0x20)
Workgroup Max Size: 1024(0x400)
Workgroup Max Size per Dimension:
x 1024(0x400)
y 1024(0x400)
z 1024(0x400)
Max Waves Per CU: 32(0x20)
Max Work-item Per CU: 1024(0x400)
Grid Max Size: 4294967295(0xffffffff)
Grid Max Size per Dimension:
x 4294967295(0xffffffff)
y 4294967295(0xffffffff)
z 4294967295(0xffffffff)
Max fbarriers/Workgrp: 32
Packet Processor uCode:: 52
SDMA engine uCode:: 16
IOMMU Support:: None
Pool Info:
Pool 1
Segment: GLOBAL; FLAGS: COARSE GRAINED
Size: 16760832(0xffc000) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Alignment: 4KB
Accessible by all: FALSE
Pool 2
Segment: GLOBAL; FLAGS: EXTENDED FINE GRAINED
Size: 16760832(0xffc000) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Alignment: 4KB
Accessible by all: FALSE
Pool 3
Segment: GROUP
Size: 64(0x40) KB
Allocatable: FALSE
Alloc Granule: 0KB
Alloc Alignment: 0KB
Accessible by all: FALSE
ISA Info:
ISA 1
Name: amdgcn-amd-amdhsa--gfx1102
Machine Models: HSA_MACHINE_MODEL_LARGE
Profiles: HSA_PROFILE_BASE
Default Rounding Mode: NEAR
Default Rounding Mode: NEAR
Fast f16: TRUE
Workgroup Max Size: 1024(0x400)
Workgroup Max Size per Dimension:
x 1024(0x400)
y 1024(0x400)
z 1024(0x400)
Grid Max Size: 4294967295(0xffffffff)
Grid Max Size per Dimension:
x 4294967295(0xffffffff)
y 4294967295(0xffffffff)
z 4294967295(0xffffffff)
FBarrier Max Size: 32
*** Done ***
```
### Console logs
```Shell
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Loading weights [6ce0161689] from /stable_diffusion/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Creating model from config: /stable_diffusion/stable-diffusion-webui/configs/v1-inference.yaml
Startup time: 11.0s (prepare environment: 3.4s, import torch: 4.0s, import gradio: 0.8s, setup paths: 1.1s, other imports: 0.5s, load scripts: 0.2s, create ui: 0.4s, gradio launch: 0.6s).
Applying attention optimization: Doggettx... done.
Model loaded in 13.5s (load weights from disk: 0.8s, create model: 0.3s, apply weights to model: 11.8s, calculate empty prompt: 0.4s).
:0:rocdevice.cpp :2728: 0960471930 us: [pid:7379 tid:0x7932b7dff640] Callback: Queue 0x793124600000 aborting with error : HSA_STATUS_ERROR_INVALID_ISA: The instruction set architecture is invalid. code: 0x100f
./webui.sh: 292: 7379 (core dumped) "${python_cmd}" -u "${LAUNCH_SCRIPT}" "$@"
```
### Additional information
_No response_
|
closed
|
2024-04-03T03:59:29Z
|
2025-02-24T14:28:38Z
|
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15434
|
[
"bug-report"
] |
neem693
| 15
|
zwczou/weixin-python
|
flask
| 54
|
basestring does not exist in Python 3
|
Environment: Python 3.7
WeChat message push raises
`NameError: name 'basestring' is not defined`
My current workaround is to add one line inside msg.py:
```python
basestring = (str, bytes)
```
|
closed
|
2019-10-19T11:51:15Z
|
2019-10-20T09:51:53Z
|
https://github.com/zwczou/weixin-python/issues/54
|
[] |
vaakian
| 2
|
modelscope/modelscope
|
nlp
| 838
|
Qwen1.5 self-cognition fine-tuning: running the official tutorial raises ValueError: malformed node
|
Running it raises an error.
[Qwen1.5 self-cognition fine-tuning official tutorial](https://modelscope.cn/docs/Qwen1.5%E5%85%A8%E6%B5%81%E7%A8%8B%E6%9C%80%E4%BD%B3%E5%AE%9E%E8%B7%B5)
ValueError: malformed node or string


It feels like a problem with the JSON data format.
|
closed
|
2024-04-22T10:42:29Z
|
2024-05-29T01:51:11Z
|
https://github.com/modelscope/modelscope/issues/838
|
[
"Stale"
] |
alexhmyang
| 4
|
mckinsey/vizro
|
data-visualization
| 175
|
How do I get Vizro to run in a Google Colab Notebook?
|
### Question
I'm using a Google Colab notebook because I need to access a BigQuery instance that I can only reach by logging in with a Google account.
I'm not sure how to make it work; the following snippet doesn't work as expected:
Vizro().build(dashboard=dashboard).run()
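One thing that may be worth trying, since Vizro builds a Dash app under the hood, is Dash's notebook modes; whether `run()` forwards this keyword depends on the Vizro/Dash versions, so this is only an assumption:
```python
# "external" prints a clickable link; "inline" embeds the app in the cell output.
Vizro().build(dashboard=dashboard).run(jupyter_mode="external")
```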
### Code/Examples
_No response_
### Other information
_No response_
### vizro version
_No response_
### Python version
_No response_
### OS
_No response_
### Code of Conduct
- [X] I agree to follow the [Code of Conduct](https://github.com/mckinsey/vizro/blob/main/CODE_OF_CONDUCT.md).
|
closed
|
2023-11-16T12:40:02Z
|
2024-10-30T13:17:07Z
|
https://github.com/mckinsey/vizro/issues/175
|
[
"General Question :question:"
] |
gbabeleda
| 5
|
Anjok07/ultimatevocalremovergui
|
pytorch
| 1,098
|
Bug
|
I can't process this piece of music. Why? Please fix it.
|
open
|
2024-01-10T15:10:57Z
|
2024-01-11T18:30:16Z
|
https://github.com/Anjok07/ultimatevocalremovergui/issues/1098
|
[] |
Plsdonthackmy
| 1
|
netbox-community/netbox
|
django
| 18,328
|
'GenericRel' object has no attribute 'verbose_name' error when trying to search something on Netbox
|
### Deployment Type
Self-hosted
### Triage priority
N/A
### NetBox Version
v4.2.0
### Python Version
3.10
### Steps to Reproduce
1. Upgrade Netbox
2. Connect as a user
3. Try to search for something (like an IP address) in the search bar
### Expected Behavior
Not having the issue below
### Observed Behavior
This error occurs when you try to search for something using the search bar:
```
<class 'AttributeError'>
'GenericRel' object has no attribute 'verbose_name'
Version Python: 3.10.12
Version NetBox: 4.2.0
Plug-ins: None installed
```
Screenshot of the error :

|
closed
|
2025-01-07T15:24:37Z
|
2025-01-07T15:30:19Z
|
https://github.com/netbox-community/netbox/issues/18328
|
[
"type: bug",
"status: duplicate"
] |
TheGuardianLight
| 1
|
jupyterhub/repo2docker
|
jupyter
| 1,041
|
Current RStudio version does not support R 4.1.0 graphics engine
|
### Bug description
[RStudio v1.2.5001](https://github.com/jupyterhub/repo2docker/blob/81e1e39/repo2docker/buildpacks/_r_base.py#L7-L10) does not support the R graphics engine v14 that comes with R v4.1.0 ("Camp Pontanezen") (see [release notes](https://stat.ethz.ch/pipermail/r-announce/2021/000670.html)).
#### Expected behaviour
No warning message.
#### Actual behaviour
```
Warning message:
R graphics engine version 14 is not supported by this version of RStudio. The Plots tab will be disabled until a newer version of RStudio is installed.
```
### How to reproduce
1. [](https://gke.mybinder.org/v2/gl/fkohrt%2FRMarkdown-sandbox/f5d8582b?urlpath=rstudio) ([source repo](https://gitlab.com/fkohrt/RMarkdown-sandbox/-/tree/f5d8582b))
2. Enter `R.version.string` (printing `R version 4.1.0 (2021-05-18)`)
3. Get warning message
### Your personal set up
Using `mybinder.org`.
### Related information
See also rstudio/rstudio#8383.
|
closed
|
2021-05-21T11:04:43Z
|
2022-01-25T18:03:07Z
|
https://github.com/jupyterhub/repo2docker/issues/1041
|
[] |
fkohrt
| 12
|
Evil0ctal/Douyin_TikTok_Download_API
|
fastapi
| 270
|
Failed to fetch Douyin video data! Reason: SyntaxError: missing ';'
|
Installed the parsing library: pip install douyin-tiktok-scraper
Using the sample code, a local test on Win10 runs successfully, but the same script on Windows Server 2019 and Windows Server 2008 errors out:
Fetching Douyin video data...
Failed to fetch Douyin video data! Reason: SyntaxError: missing ';'
Python version on Win10: Python 3.11.0
pip packages installed: Successfully installed Brotli-1.1.0 PyExecJS-1.5.1 douyin-tiktok-scraper-1.2.8 orjson-3.9.7
Python version on Windows Server 2019: Python 3.11.3
pip packages installed: Successfully installed douyin_tiktok_scraper-1.2.8
On Windows Server 2019 I also tried switching to Python 3.8 and still got: Failed to fetch Douyin video data! Reason: SyntaxError: missing ';'
Running on Windows Server 2008 also errors out: Failed to fetch Douyin video data! Reason: SyntaxError: missing ';'


|
closed
|
2023-09-11T15:41:22Z
|
2023-09-12T01:47:38Z
|
https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/270
|
[
"BUG"
] |
shimxx
| 1
|
chainer/chainer
|
numpy
| 8,210
|
Apply pairwise parameterization selectively in tests
|
#8164 applied pairwise parameterization in tests unconditionally in `chainer_tests` and `chainerx_tests`, but I think it's too dangerous. Some tests might be designed carefully in a way that omitting some of their combinations would degrade the tests.
Ideally we should apply pairwise testing only selectively in those tests that are
* taking large amount of time, and
* their parameterization are not easily reduced by hand.
We should also provide a method to switch whether to enable pairwise parameterization, and only enable it in Travis CI (otherwise we don't have timeout issue).
However it would take too much time to check existing tests manually. We have an immediate timeout issue in Travis CI.
So I suggest at first adding a method to switch the mode as mentioned above.
Initially choices would be "always" and "never" (default), and later we would add another mode "selective" (or whatever name) in which mode pairwise parameterization would be applied only in designated tests.
|
closed
|
2019-10-01T14:11:22Z
|
2020-02-05T07:39:44Z
|
https://github.com/chainer/chainer/issues/8210
|
[
"cat:test",
"stale",
"prio:low"
] |
niboshi
| 2
|
huggingface/peft
|
pytorch
| 1,988
|
TypeError: WhisperForConditionalGeneration.forward() got an unexpected keyword argument 'input_ids'
|
### System Info
peft version = '0.12.0'
transformers version = '4.41.2'
accelerate version = '0.30.1'
bitsandbytes version = '0.43.3'
pytorch version = '2.1.2'
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import torch
from transformers import WhisperForConditionalGeneration, BitsAndBytesConfig
from peft import get_peft_model, LoraConfig, TaskType, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(load_in_8bit=True)
whisper_model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny", quantization_config=bnb_config, device_map='auto')
quantized_model = prepare_model_for_kbit_training(whisper_model)
peft_config = LoraConfig(task_type=TaskType.SEQ_2_SEQ_LM, inference_mode=False, target_modules=["q_proj", "v_proj"], r=32, lora_alpha=64, lora_dropout=0.1)
final_model = get_peft_model(quantized_model, peft_config)
final_model(torch.zeros([1, 80, 3000]))  # torch.zeros is just used as a dummy input
```
### Expected behavior
This TypeError appears to be caused by recent changes in the peft library. I found that similar code runs fine, but whenever I try to run the same code it raises this error.
A potential workaround is to create a custom WhisperForConditionalGeneration subclass, modifying the forward method to accept "input_ids" and rename it to "input_features" inside the body of the forward method, as sketched below. However, this solution seems inefficient.
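To make the described workaround concrete, a sketch of such a subclass (hypothetical class name):
```python
from transformers import WhisperForConditionalGeneration

class PatchedWhisperForConditionalGeneration(WhisperForConditionalGeneration):
    """Accept `input_ids` and forward it to the parent as `input_features`."""

    def forward(self, input_ids=None, **kwargs):
        if input_ids is not None and kwargs.get("input_features") is None:
            kwargs["input_features"] = input_ids
        return super().forward(**kwargs)
```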
|
closed
|
2024-08-02T17:48:46Z
|
2025-03-11T15:50:14Z
|
https://github.com/huggingface/peft/issues/1988
|
[] |
YOUSEFNANIS
| 6
|
OFA-Sys/Chinese-CLIP
|
nlp
| 35
|
Cannot reproduce clip_cn_vit-b-16 results with the provided weights
|
1. Using the dataset you provide (Flickr30k-CN) and the pretrain weights, running the code does not produce the expected results.
{"score": 74.3, "mean_recall": 74.3, "r1": 54.42, "r5": 80.82000000000001, "r10": 87.66000000000001}}
<img width="434" alt="image" src="https://user-images.githubusercontent.com/19340566/209259689-5d4f7b4f-715c-4a4b-a1ea-1f7ad610a7b0.png">
<img width="1920" alt="image" src="https://user-images.githubusercontent.com/19340566/209259632-e1465c76-af4d-4587-bb7d-a4eaf2bd4f6e.png">
The launch script is:
<img width="640" alt="image" src="https://user-images.githubusercontent.com/19340566/209259764-dcb4b5df-b2ff-479d-a225-145094c5af13.png">
Is some hyperparameter wrong?
|
closed
|
2022-12-23T02:37:36Z
|
2022-12-23T02:48:27Z
|
https://github.com/OFA-Sys/Chinese-CLIP/issues/35
|
[] |
Maycbj
| 2
|
nl8590687/ASRT_SpeechRecognition
|
tensorflow
| 191
|
Training error: specified in either feed_devices or fetch_devices was not found in the Graph
|

```
2020-05-27 01:02:04.419862: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10869 MB memory) -> physical GPU (device: 0, name: Tesla K80, pci bus id: 0001:00:00.0, compute capability: 3.7)
Traceback (most recent call last):
File "train_mspeech.py", line 48, in <module>
ms.TrainModel(datapath, epoch = 50, batch_size = 16, save_step = 500)
File "/root/ASRT_SpeechRecognition/SpeechModel251.py", line 187, in TrainModel
self.TestModel(self.datapath, str_dataset='train', data_count = 4)
File "/root/ASRT_SpeechRecognition/SpeechModel251.py", line 250, in TestModel
pre = self.Predict(data_input, data_input.shape[0] // 8)
File "/root/ASRT_SpeechRecognition/SpeechModel251.py", line 305, in Predict
base_pred = self.base_model.predict(x = x_in)
File "/anaconda/envs/python36/lib/python3.6/site-packages/keras/engine/training.py", line 1462, in predict
callbacks=callbacks)
File "/anaconda/envs/python36/lib/python3.6/site-packages/keras/engine/training_arrays.py", line 324, in predict_loop
batch_outs = f(ins_batch)
File "/anaconda/envs/python36/lib/python3.6/site-packages/tensorflow_core/python/keras/backend.py", line 3473, in __call__
self._make_callable(feed_arrays, feed_symbols, symbol_vals, session)
File "/anaconda/envs/python36/lib/python3.6/site-packages/tensorflow_core/python/keras/backend.py", line 3410, in _make_callable
callable_fn = session._make_callable_from_options(callable_opts)
File "/anaconda/envs/python36/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1505, in _make_callable_from_options
return BaseSession._Callable(self, callable_options)
File "/anaconda/envs/python36/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1460, in __init__
session._session, options_ptr)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Tensor the_input:0, specified in either feed_devices or fetch_devices was not found in the Graph
```
|
closed
|
2020-05-27T01:15:55Z
|
2020-05-27T04:22:35Z
|
https://github.com/nl8590687/ASRT_SpeechRecognition/issues/191
|
[] |
JIANG3330
| 1
|
microsoft/unilm
|
nlp
| 1,261
|
DeltaLM: the model contains only the weights; where is the model's config?
|
I am trying to use the model for inference with fairseq like this:
```python
import torch
from deltalm.models.deltalm import DeltaLMModel

model = DeltaLMModel.from_pretrained(
    model_dir,
    checkpoint_file=model_name,
    bpe='sentencepiece',
    sentencepiece_model=spm)
```
and I get this error:
```
RuntimeError: Neither args nor cfg exist in state keys = dict_keys(['weights'])
```
The deltalm-base.pt contains only the model's weights. How can I use the model properly for inference?
|
open
|
2023-08-21T04:31:02Z
|
2023-08-27T03:39:10Z
|
https://github.com/microsoft/unilm/issues/1261
|
[] |
Khaled-Elsaka
| 2
|
modin-project/modin
|
data-science
| 6,767
|
Provide the ability to use experimental functionality when experimental mode is not enabled globally via an environment variable.
|
Example where it can be useful:
```python
import modin.pandas as pd
df = pd.DataFrame([1,2,3,4])
# [some code]
with modin.utils.enable_exp_mode():
# this import has side effects that will need to be removed when leaving the context
# for example:
# 1. `IsExperimental.put(True)`
# 2. `setattr(DataFrame, "to_pickle_distributed", to_pickle_distributed)`
# 3. Modification of internal factory and IO classes
from modin.experimental.pandas import read_pickle_distributed, to_pickle_distributed
to_pickle_distributed(df, "test_file*.pkl")
# [some code]
new_df = read_pickle_distributed("test_file*.pkl")
```
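A rough sketch of the shape such a context manager could take (hypothetical flag, just to make the enter/exit side-effect handling concrete):
```python
from contextlib import contextmanager

# Stand-in for Modin's global experimental flag and patched attributes.
_config = {"is_experimental": False}

@contextmanager
def enable_exp_mode():
    """Temporarily flip experimental mode, undoing the side effect on exit."""
    previous = _config["is_experimental"]
    _config["is_experimental"] = True
    try:
        yield
    finally:
        _config["is_experimental"] = previous

with enable_exp_mode():
    assert _config["is_experimental"]
assert not _config["is_experimental"]
```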
|
closed
|
2023-11-23T16:07:58Z
|
2023-12-08T16:31:15Z
|
https://github.com/modin-project/modin/issues/6767
|
[
"new feature/request 💬",
"P1"
] |
anmyachev
| 0
|
graphql-python/graphene-django
|
django
| 1,000
|
v2.11.0: "lookup_required" with django_filters.LookupChoiceFilter()
|
v2.11.0 seems to have a problem with `django_filters.LookupChoiceFilter()` in a `django_filters.FilterSet` class.
A valid filter value in a query will result in a `ValidationError: ['{"FooBar": [{"message": "Select a lookup.", "code": "lookup_required"}]}']`
I updated graphene-django from v2.10.1 to v2.11.0 in our project. This breaks existing tests.
This happens with django-filter v2.3.0 and v2.2
|
open
|
2020-07-07T08:40:06Z
|
2020-07-07T08:40:06Z
|
https://github.com/graphql-python/graphene-django/issues/1000
|
[] |
jedie
| 0
|
streamlit/streamlit
|
machine-learning
| 10,160
|
Expose OAuth errors during `st.login`
|
### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [X] I added a descriptive title and summary to this issue.
### Summary
We're soon launching native authentication in Streamlit (see #8518). One thing we left out for now is handling errors that appear during the OAuth flow. It would be great to either handle them automatically (e.g. by showing a dialog or toast) or expose them programmatically.
### Why?
These errors should be very rare in Streamlit because many errors are handled directly in the OAuth flow by the identity provider and [most possible errors that are propagated back to the app](https://www.oauth.com/oauth2-servers/server-side-apps/possible-errors/) are due to a) wrong configuration (which we usually catch before even initiating the OAuth flow), b) wrong implementation (which we control), or c) the server of the identity provider being down (which shouldn't happen often for the major providers).
But errors can still happen – the most prominent example we found during testing is when the user clicks "Cancel" on the consent screen shown when logging in for the first time. And there might be others we didn't think about yet.
### How?
Two possible ways:
1. Automatically show a dialog or toast with the error code and potentially error description and error URI. Note that OAuth recommends showing a custom error message to the user instead of showing the error code and error description directly. But I think in our case (where these errors are very rare), it might be fine to just show that and not require the developer to implement it themselves. We should probably have a parameter on `st.login` to disable this automatic notification in case the developer wants to handle the error themselves (see 2).
2. Expose the error details programmatically. One way would be to put it into `st.user` as keys `error`, `error_description` (optional), and `error_uri` (optional). In that case, we should automatically clear these items on the next rerun, otherwise it becomes very hard to only show the error when it happens. Another possible approach would be to have an `on_error` callback on `st.login`. But a) we'd need to pass the error details to this callback, which would make it work a bit differently than our callbacks (currently) work and b) it's a bit more cumbersome to work with this in practice because you often have to stick the error message into `st.session_state` if you want to show it somewhere within the app.
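A hypothetical usage sketch for option 2 (the `error` keys on `st.user` are a proposal here, not an existing API):
```python
import streamlit as st

if not st.user.is_logged_in:
    st.button("Log in", on_click=st.login)

# Proposed behaviour: OAuth error details appear in st.user for exactly
# one rerun and are cleared automatically afterwards.
if st.user.get("error"):
    st.error(f"Login failed ({st.user['error']}): {st.user.get('error_description', '')}")
```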
### Additional Context
_No response_
|
open
|
2025-01-10T23:32:58Z
|
2025-01-10T23:33:46Z
|
https://github.com/streamlit/streamlit/issues/10160
|
[
"type:enhancement",
"feature:st.user",
"feature:st.login"
] |
jrieke
| 1
|
ipython/ipython
|
jupyter
| 14,489
|
Do not use mixed units in timeit output
|
`%timeit` reports the mean and std. dev. in different units in some cases, e.g. when the mean is in a different order of magnitude than the std.
```
In [1]: from time import sleep
In [2]: %timeit sleep(0.001)
1.08 ms ± 11.4 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
```
This is pretty hard to parse and violates best practices of reporting value and uncertainty in the same units.
The preferred output would be:
```
In [1]: from time import sleep
In [2]: %timeit sleep(0.001)
1.080 ± 0.011 ms per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
```
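A small helper illustrating the desired formatting (just a sketch, not IPython's actual implementation):
```python
def format_mean_std(mean, std):
    """Render mean ± std of a timing (in seconds) in one shared unit."""
    units = [("s", 1.0), ("ms", 1e-3), ("μs", 1e-6), ("ns", 1e-9)]
    # Pick the unit from the mean, then reuse it for the std.
    for name, scale in units:
        if mean >= scale:
            break
    return f"{mean / scale:.3f} ± {std / scale:.3f} {name}"

# >>> format_mean_std(1.08e-3, 11.4e-6)
# '1.080 ± 0.011 ms'
```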
|
open
|
2024-07-25T09:29:21Z
|
2024-10-16T02:04:51Z
|
https://github.com/ipython/ipython/issues/14489
|
[] |
maxnoe
| 3
|
matterport/Mask_RCNN
|
tensorflow
| 2,623
|
Training for grayscale input | All layers training getting stopped
|
I am trying to run an experiment with grayscale input, and I have already made the required changes for that. The problem is that training runs for the "heads" epochs but stops for the all-layers training: after printing the layers' names there is no further output, there is no error, and GPU memory gets freed, which means the experiment stops when all-layers training starts.
I am not able to understand what is causing that, and how I can fix it.
Let me know if I need to share anything to make my question clearer. Any help would be really appreciated. Thanks!
|
open
|
2021-07-06T18:02:17Z
|
2021-07-06T18:02:17Z
|
https://github.com/matterport/Mask_RCNN/issues/2623
|
[] |
Dartum08
| 0
|
rgerum/pylustrator
|
matplotlib
| 17
|
Missing community guidelines
|
As part of the JOSS review I could not find the community guidelines for contributing. The docs do say how to report bugs (although the text refers to Bitbucket but links to GitHub!).
|
closed
|
2020-02-04T01:51:24Z
|
2020-02-04T08:51:43Z
|
https://github.com/rgerum/pylustrator/issues/17
|
[] |
tacaswell
| 1
|
ivy-llc/ivy
|
numpy
| 28,334
|
Fix Frontend Failing Test: tensorflow - math.tensorflow.math.is_strictly_increasing
|
To-do List: https://github.com/unifyai/ivy/issues/27499
|
closed
|
2024-02-19T17:27:12Z
|
2024-02-20T09:26:02Z
|
https://github.com/ivy-llc/ivy/issues/28334
|
[
"Sub Task"
] |
Sai-Suraj-27
| 0
|
Johnserf-Seed/TikTokDownload
|
api
| 291
|
Running example.py directly throws an error
|
```
[ 💻 ]: Windows platform
[ 🗻 ]: Fetching the latest version number!
[ 🚩 ]: Version 13043 is already the latest
[ Warning ]: No command detected, falling back to batch download via the config file!
[ Hint ]: Finished reading the local config!
[ Hint ]: Downloading multiple videos for you!
[ Hint ]: The user's sec_id=MS4wLjABAAAA3nckmLU8MKXB4Aao7ZOOLaHIRCJG5AzKMDRh_6WMkU4
[ Hint ]: Failed to fetch the user's nickname! Please check whether the account has published any works; rerun this program after publishing!
[2023-01-19 20:25:29,460] - Log.py] - ERROR: [ Hint ]: Failed to fetch the user's nickname! Please check whether the account has published any works; rerun this program after publishing!
[2023-01-19 20:25:29,460] - Log.py] - ERROR: list index out of range
```
|
closed
|
2023-01-19T12:27:32Z
|
2023-01-22T09:06:22Z
|
https://github.com/Johnserf-Seed/TikTokDownload/issues/291
|
[
"故障(bug)",
"额外求助(help wanted)",
"无效(invalid)"
] |
liu-runsen
| 2
|
microsoft/nni
|
pytorch
| 5,663
|
Does the trial_command in the config file support placeholders?
|
I have a problem: I try to run my program with the NNI command `nnictl create --config config.yml --port 60001 --timestamp 88`, and the trial command in config.yml is `python framework.py --timestrap %timestrap%`.
|
open
|
2023-08-16T03:00:36Z
|
2023-08-16T03:00:36Z
|
https://github.com/microsoft/nni/issues/5663
|
[] |
Nnnaqooooo
| 0
|
sebp/scikit-survival
|
scikit-learn
| 307
|
LASSO Cox differences between packages
|
I have a list of genes and I have performed a penalized Cox LASSO model (`cox_lasso = CoxnetSurvivalAnalysis(l1_ratio=1.0, alpha_min_ratio=0.01)`) to select the most useful prognostic genes, to be included later in a multivariate Cox regression to generate a risk score.
I have also used the R package glmnet (with the parameter `family = "cox"`) to perform LASSO, but the list of genes I obtained there is different from the one I obtained with this package.
What are the differences between the two LASSO models? Which one is best for what I want to do?
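For reference, this is how I inspect the selected genes on the scikit-survival side (a sketch; it assumes `X` is a genes DataFrame and `y` a structured survival array):
```python
import numpy as np
from sksurv.linear_model import CoxnetSurvivalAnalysis

cox_lasso = CoxnetSurvivalAnalysis(l1_ratio=1.0, alpha_min_ratio=0.01)
cox_lasso.fit(X, y)  # X: (n_samples, n_genes), y: structured (event, time) array

# coef_ has one column per alpha on the regularization path;
# pick an alpha and list the genes with non-zero coefficients.
coefs = cox_lasso.coef_[:, -1]  # e.g. the smallest alpha on the path
selected = np.asarray(X.columns)[coefs != 0]
print(selected)
```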
Thanks in advance.
|
closed
|
2022-09-20T21:07:04Z
|
2022-09-21T15:22:19Z
|
https://github.com/sebp/scikit-survival/issues/307
|
[] |
alberto-mora
| 0
|
qubvel-org/segmentation_models.pytorch
|
computer-vision
| 782
|
smp.utils module is deprecated
|
I am following the example [cars segmentation](https://github.com/qubvel/segmentation_models.pytorch/blob/master/examples/cars%20segmentation%20(camvid).ipynb)
In order to train on my custom data, I have written a train.py:
```python
import os

import torch
import segmentation_models_pytorch as smp
from torch.utils.data import DataLoader

# Dataset, get_training_augmentation, get_validation_augmentation and
# get_preprocessing are taken from the cars segmentation example notebook.

if __name__ == '__main__':
    ENCODER = 'resnet34'
    ENCODER_WEIGHTS = 'imagenet'
    CLASSES = ['object']
    ACTIVATION = 'sigmoid'  # could be None for logits or 'softmax2d' for multiclass segmentation
    DEVICE = 'cuda'

    # create segmentation model with pretrained encoder
    model = smp.UnetPlusPlus(
        encoder_name=ENCODER,
        encoder_weights=ENCODER_WEIGHTS,
        classes=len(CLASSES),
        activation=ACTIVATION,
    )

    preprocessing_fn = smp.encoders.get_preprocessing_fn(ENCODER, ENCODER_WEIGHTS)

    DATA_DIR = 'data/MGD/'
    x_train_dir = os.path.join(DATA_DIR, 'train')
    y_train_dir = os.path.join(DATA_DIR, 'trainannot')
    x_valid_dir = os.path.join(DATA_DIR, 'val')
    y_valid_dir = os.path.join(DATA_DIR, 'valannot')

    train_dataset = Dataset(
        x_train_dir,
        y_train_dir,
        augmentation=get_training_augmentation(),
        preprocessing=get_preprocessing(preprocessing_fn),
        classes=CLASSES,
    )
    valid_dataset = Dataset(
        x_valid_dir,
        y_valid_dir,
        augmentation=get_validation_augmentation(),
        preprocessing=get_preprocessing(preprocessing_fn),
        classes=CLASSES,
    )
    train_loader = DataLoader(train_dataset, batch_size=2, shuffle=True)
    valid_loader = DataLoader(valid_dataset, batch_size=1, shuffle=False)

    loss = smp.utils.losses.DiceLoss()
    metrics = [
        smp.utils.metrics.IoU(threshold=0.5),
    ]
    optimizer = torch.optim.Adam([
        dict(params=model.parameters(), lr=0.0001),
    ])

    train_epoch = smp.utils.train.TrainEpoch(
        model,
        loss=loss,
        metrics=metrics,
        optimizer=optimizer,
        device=DEVICE,
        verbose=True,
    )
    valid_epoch = smp.utils.train.ValidEpoch(
        model,
        loss=loss,
        metrics=metrics,
        device=DEVICE,
        verbose=True,
    )

    # train model for 40 epochs
    max_score = 0
    for i in range(0, 40):
        print('\nEpoch: {}'.format(i))
        train_logs = train_epoch.run(train_loader)
        # valid_logs = valid_epoch.run(valid_loader)

        # do something (save model, change lr, etc.)
        if max_score < train_logs['iou_score']:
            max_score = train_logs['iou_score']
            torch.save(model, 'checkpoints/best_model.pth')
            print('Model saved!')
        if i == 25:
            optimizer.param_groups[0]['lr'] = 1e-5
            print('Decrease decoder learning rate to 1e-5!')
```
However, it shows that the `smp.utils` module is deprecated.


How can I use the latest module to avoid this warning? Maybe you can update the Jupyter notebook.
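From the deprecation warning I guess the replacement looks roughly like this, but I have not verified it (assuming segmentation_models.pytorch >= 0.3, where the loss lives in `smp.losses`, the metrics in `smp.metrics`, and the epoch runners have to be written by hand; this reuses the model, loader and optimizer defined above):
```python
import segmentation_models_pytorch as smp
import torch

# replaces smp.utils.losses.DiceLoss(); expects raw logits by default
loss_fn = smp.losses.DiceLoss(mode="binary")

model.train()
for images, masks in train_loader:
    images, masks = images.to(DEVICE), masks.to(DEVICE)
    optimizer.zero_grad()
    logits = model(images)          # assumes the model was built with activation=None
    loss = loss_fn(logits, masks)
    loss.backward()
    optimizer.step()

    # replaces smp.utils.metrics.IoU(threshold=0.5)
    tp, fp, fn, tn = smp.metrics.get_stats(
        logits.sigmoid(), masks.long(), mode="binary", threshold=0.5
    )
    iou = smp.metrics.iou_score(tp, fp, fn, tn, reduction="micro")
```
Is that the intended replacement?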
Thank you for your attention.
|
closed
|
2023-06-13T02:40:48Z
|
2023-10-02T01:49:15Z
|
https://github.com/qubvel-org/segmentation_models.pytorch/issues/782
|
[
"Stale"
] |
ningmenghongcha
| 5
|
ipython/ipython
|
data-science
| 14,157
|
Magics with toplevel await
|
Hi,
It would be nice to be able to use top level await with magics, specifically `%time` and `%%time`.
For example:
```
async def foo():
return 'bar'
# This works
await foo()
# This fails with SyntaxError: 'await' outside function
%time await foo()
```
|
closed
|
2023-09-07T22:12:03Z
|
2023-12-27T13:00:05Z
|
https://github.com/ipython/ipython/issues/14157
|
[] |
mlucool
| 6
|
sunscrapers/djoser
|
rest-api
| 813
|
serializer for /users/me/ PATCH
|
Based on the [docs](https://djoser.readthedocs.io/en/latest/base_endpoints.html#user), we can make a `PATCH` request to `/users/me/` by giving `{{ User.FIELDS_TO_UPDATE }}`. I have `FIELDS_TO_UPDATE` defined in my custom user model:
```python
class CustomUser(AbstractBaseUser):
email = models.EmailField(
verbose_name="email address",
max_length=255,
unique=True,
)
first_name = models.CharField(max_length=150)
last_name = models.CharField(max_length=150)
# ...
objects = CustomUserManager()
USERNAME_FIELD = "email"
REQUIRED_FIELDS = ["first_name", "last_name"]
FIELDS_TO_UPDATE = ["first_name", "last_name"]
```
i have also set the `serilizers` like this:
```python
DJOSER = {
# ...
"SERIALIZERS": {
"current_user": "customauth.serializers.UserSerializer",
"user": "customauth.serializers.UserSerializer",
"user_update": "customauth.serializers.UserSerializer",
},
```
```python
class UserSerializer(BaseUserSerializer):
class Meta(BaseUserSerializer.Meta):
fields = ["id", "first_name", "last_name", "email"]
```
but serializer for `PATCH` request on `/users/me/` is not working properly and not showing `HTML form`. But for `PUT` it shows `HTML form` with `REQUIRED_FIELDS`.
`PATCH`:

`PUT`:

- Is there a serializer that needs to be set for [SERIALIZERS](https://djoser.readthedocs.io/en/latest/settings.html#serializers) setting in `settings.py` to enable `HTML form` in `/users/me/` when using `PATCH`?
- How should i set `FIELDS_TO_UPDATE`?
|
closed
|
2024-04-12T14:22:07Z
|
2024-04-13T10:45:51Z
|
https://github.com/sunscrapers/djoser/issues/813
|
[] |
andypal333
| 1
|
exaloop/codon
|
numpy
| 642
|
Unable to get FastAPI working
|
I tried to get FastAPI working with Codon; I like the idea of getting quite a bit of speed out of Python. See the discussion here:
https://github.com/fastapi/fastapi/discussions/10096
This is probably user error, but I was not able to get their sample hello world running with Codon. It might be a dependency issue?
Here is the simplest code I could come up with:
```python
from python import fastapi as fast
from typing import Union
#from fastapi import FastAPI
app = fast.FastAPI()
@app.get("/")
def read_root():
return {"Hello": "World"}
```
I also tried without the decorator with the same error below:
```python
from python import fastapi as fast
from typing import Union
app = fast.FastAPI()
def read_root(request):
return {"Hello": "World"}
app.router.add_api_route("/", read_root, methods=["GET"])
```
Error:
```
codon run main.py
PyError: ((), ()) is not a callable object
Raised from: pyobj.exc_check:0
/root/.codon/lib/codon/stdlib/internal/python.codon:1198:13
Backtrace:
[0x7f37143126b7] pyobj.exc_check:0.882 at /root/.codon/lib/codon/stdlib/internal/python.codon:1198:13
[0x7f37143126f0] pyobj.exc_wrap:0[Ptr[byte],Ptr[byte]].883 at /root/.codon/lib/codon/stdlib/internal/python.codon:1201:9
[0x7f3714313a60] pyobj:pyobj.__call__:0[pyobj,Tuple[Partial.N.read_root:0[read_root:0,Tuple,KwTuple.N0]],KwTuple.N0].1033 at /root/.codon/lib/codon/stdlib/internal/python.codon:1225:68
[0x7f37143153d2] main.0 at /workspace/fast-api/main.py:11:1
Aborted
```
I made sure `export CODON_PYTHON=/usr/lib/python3.12/config-3.12-x86_64-linux-gnu/libpython3.12.so` is set, and other Codon examples from the main site work on the machine. I'm using the ubuntu:latest Docker container but was able to reproduce on a bare-metal Ubuntu machine as well.
Any ideas?
|
open
|
2025-03-20T16:39:54Z
|
2025-03-21T16:27:49Z
|
https://github.com/exaloop/codon/issues/642
|
[] |
michaelachrisco
| 1
|
AUTOMATIC1111/stable-diffusion-webui
|
pytorch
| 16,365
|
[Bug]: protobuf==3.20.0 requirement breaks several extensions and offline mode
|
### Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [X] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
If you install any of these extensions (or had them installed prior to the last stable WebUI update):
adetailer
sd-webui-controlnet
stable-diffusion-webui-wd14-tagger
the WebUI won't be able to start offline, since those extensions download and install protobuf > 3.20.0 (currently protobuf 4.25.4).
Uninstalling protobuf 4.25.4 and installing protobuf 3.20.0 will break the extensions mentioned above.
(It looks like mediapipe, which is required by all of the above extensions, causes this compatibility problem.)
```
python -m pip check
mediapipe 0.10.14 has requirement protobuf<5,>=4.25.3, but you have protobuf 3.20.0.
onnx 1.16.1 has requirement protobuf>=3.20.2, but you have protobuf 3.20.0.
tensorflow-intel 2.17.0 has requirement protobuf!=4.21.0,!=4.21.1,!=4.21.2,!=4.21.3,!=4.21.4,!=4.21.5,<5.0.0dev,>=3.20.3, but you have protobuf 3.20.0.
```
When those extensions (and thus protobuf 4.25.4) are installed, the WebUI will not run in offline mode.
With internet access, "Installing requirements" is displayed on every run and the WebUI starts.
When run in offline mode, the WebUI tries to download protobuf==3.20.0 every time, fails, and will not start:
```
venv "E:\WebUI\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.10 (tags/v3.10.10:aad5f6a, Feb 7 2023, 17:20:36) [MSC v.1929 64 bit (AMD64)]
Version: v1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2
Installing requirements
Traceback (most recent call last):
File "E:\WebUI\stable-diffusion-webui\launch.py", line 48, in <module>
main()
File "E:\WebUI\stable-diffusion-webui\launch.py", line 39, in main
prepare_environment()
File "E:\WebUI\stable-diffusion-webui\modules\launch_utils.py", line 423, in prepare_environment
run_pip(f"install -r \"{requirements_file}\"", "requirements")
File "E:\WebUI\stable-diffusion-webui\modules\launch_utils.py", line 144, in run_pip
return run(f'"{python}" -m pip {command} --prefer-binary{index_url_line}', desc=f"Installing {desc}", errdesc=f"Couldn't install {desc}", live=live)
File "E:\WebUI\stable-diffusion-webui\modules\launch_utils.py", line 116, in run
raise RuntimeError("\n".join(error_bits))
RuntimeError: Couldn't install requirements.
Command: "E:\WebUI\stable-diffusion-webui\venv\Scripts\python.exe" -m pip install -r "requirements_versions.txt" --prefer-binary
Error code: 1
stdout: Requirement already satisfied: setuptools==69.5.1 in E:\WebUI\stable-diffusion-webui\venv\lib\site-packages (from -r requirements_versions.txt (line 1)) (69.5.1)
Requirement already satisfied: GitPython==3.1.32 in E:\WebUI\stable-diffusion-webui\venv\lib\site-packages (from -r requirements_versions.txt (line 2)) (3.1.32)
Requirement already satisfied: Pillow==9.5.0 in E:\WebUI\stable-diffusion-webui\venv\lib\site-packages (from -r requirements_versions.txt (line 3)) (9.5.0)
Requirement already satisfied: accelerate==0.21.0 in E:\WebUI\stable-diffusion-webui\venv\lib\site-packages (from -r requirements_versions.txt (line 4)) (0.21.0)
Requirement already satisfied: blendmodes==2022 in E:\WebUI\stable-diffusion-webui\venv\lib\site-packages (from -r requirements_versions.txt (line 5)) (2022)
Requirement already satisfied: clean-fid==0.1.35 in E:\WebUI\stable-diffusion-webui\venv\lib\site-packages (from -r requirements_versions.txt (line 6)) (0.1.35)
Requirement already satisfied: diskcache==5.6.3 in E:\WebUI\stable-diffusion-webui\venv\lib\site-packages (from -r requirements_versions.txt (line 7)) (5.6.3)
Requirement already satisfied: einops==0.4.1 in E:\WebUI\stable-diffusion-webui\venv\lib\site-packages (from -r requirements_versions.txt (line 8)) (0.4.1)
Requirement already satisfied: facexlib==0.3.0 in E:\WebUI\stable-diffusion-webui\venv\lib\site-packages (from -r requirements_versions.txt (line 9)) (0.3.0)
Requirement already satisfied: fastapi==0.94.0 in E:\WebUI\stable-diffusion-webui\venv\lib\site-packages (from -r requirements_versions.txt (line 10)) (0.94.0)
Requirement already satisfied: gradio==3.41.2 in E:\WebUI\stable-diffusion-webui\venv\lib\site-packages (from -r requirements_versions.txt (line 11)) (3.41.2)
Requirement already satisfied: httpcore==0.15 in E:\WebUI\stable-diffusion-webui\venv\lib\site-packages (from -r requirements_versions.txt (line 12)) (0.15.0)
Requirement already satisfied: inflection==0.5.1 in E:\WebUI\stable-diffusion-webui\venv\lib\site-packages (from -r requirements_versions.txt (line 13)) (0.5.1)
Requirement already satisfied: jsonmerge==1.8.0 in E:\WebUI\stable-diffusion-webui\venv\lib\site-packages (from -r requirements_versions.txt (line 14)) (1.8.0)
Requirement already satisfied: kornia==0.6.7 in E:\WebUI\stable-diffusion-webui\venv\lib\site-packages (from -r requirements_versions.txt (line 15)) (0.6.7)
Requirement already satisfied: lark==1.1.2 in E:\WebUI\stable-diffusion-webui\venv\lib\site-packages (from -r requirements_versions.txt (line 16)) (1.1.2)
Requirement already satisfied: numpy==1.26.2 in E:\WebUI\stable-diffusion-webui\venv\lib\site-packages (from -r requirements_versions.txt (line 17)) (1.26.2)
Requirement already satisfied: omegaconf==2.2.3 in E:\WebUI\stable-diffusion-webui\venv\lib\site-packages (from -r requirements_versions.txt (line 18)) (2.2.3)
Requirement already satisfied: open-clip-torch==2.20.0 in E:\WebUI\stable-diffusion-webui\venv\lib\site-packages (from -r requirements_versions.txt (line 19)) (2.20.0)
Requirement already satisfied: piexif==1.1.3 in E:\WebUI\stable-diffusion-webui\venv\lib\site-packages (from -r requirements_versions.txt (line 20)) (1.1.3)
stderr: WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x000001767A88C160>: Failed to establish a new connection: [WinError 10013] An attempt was made to access a socket in a way forbidden by its access permissions')': /simple/protobuf/
WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x000001767A88C490>: Failed to establish a new connection: [WinError 10013] An attempt was made to access a socket in a way forbidden by its access permissions')': /simple/protobuf/
WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x000001767A88C640>: Failed to establish a new connection: [WinError 10013] An attempt was made to access a socket in a way forbidden by its access permissions')': /simple/protobuf/
WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x000001767A88C7F0>: Failed to establish a new connection: [WinError 10013] An attempt was made to access a socket in a way forbidden by its access permissions')': /simple/protobuf/
WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x000001767A88C9A0>: Failed to establish a new connection: [WinError 10013] An attempt was made to access a socket in a way forbidden by its access permissions')': /simple/protobuf/
ERROR: Could not find a version that satisfies the requirement protobuf==3.20.0 (from versions: none)
ERROR: No matching distribution found for protobuf==3.20.0
```
Currently I had to uninstall all of the above extensions and run:
```
pip uninstall -y protobuf mediapipe onnxruntime onnxruntime-gpu open-clip-torch tensorboard open-clip-torch tensorflow-intel onnx insightface tensorflow open-clip-torch
```
After that the WebUI needs to download its requirements again, but once they are installed, offline mode works.
### Steps to reproduce the problem
1. Install any of the extensions above
2. Start the WebUI online so it can download the requirements
3. Close the WebUI
4. Start the WebUI offline
5. The WebUI won't start
### What should have happened?
The WebUI should be able to start offline with the following extensions:
adetailer
sd-webui-controlnet
stable-diffusion-webui-wd14-tagger
### What browsers do you use to access the UI ?
Mozilla Firefox
### Sysinfo
```
{
"Platform": "Windows-10-10.0.19045-SP0",
"Python": "3.10.10",
"Version": "v1.10.1",
"Commit": "82a973c04367123ae98bd9abdf80d9eda9b910e2",
"Git status": "On branch master\nYour branch is up to date with 'origin/master'.\n\nUntracked files:\n (use \"git add <file>...\" to include in what will be committed)\n\t-extensions-break-webui/\n\txSTARTME.bat\n\tGitBash.bat\n\tztestTorch.py\n\nnothing added to commit but untracked files present (use \"git add\" to track)",
"Script path": "E:\\WebUI\\stable-diffusion-webui",
"Data path": "E:\\WebUI\\stable-diffusion-webui",
"Extensions dir": "E:\\WebUI\\stable-diffusion-webui\\extensions",
"Checksum": "d43551809fb043246971204f163c2a4208e5c7321330b2423569f924d7b70354",
"Commandline": [
"launch.py",
"--no-half-vae",
"--opt-sdp-no-mem-attention",
"--opt-channelslast",
"--theme",
"dark",
"--ckpt-dir",
"E:\\WebUI\\RES\\model",
"--embeddings-dir",
"E:\\WebUI\\RES\\embedding",
"--hypernetwork-dir",
"E:\\WebUI\\RES\\hypernetwork",
"--lora-dir",
"E:\\WebUI\\RES\\lora",
"--vae-dir",
"E:\\WebUI\\RES\\vae",
"--bsrgan-models-path",
"E:\\WebUI\\RES\\upscalers\\BSRGAN",
"--codeformer-models-path",
"E:\\WebUI\\RES\\upscalers\\Codeformer",
"--esrgan-models-path",
"E:\\WebUI\\RES\\upscalers\\ESRGAN",
"--gfpgan-models-path",
"E:\\WebUI\\RES\\upscalers\\GFPGAN",
"--ldsr-models-path",
"E:\\WebUI\\RES\\upscalers\\LDSR",
"--realesrgan-models-path",
"E:\\WebUI\\RES\\upscalers\\RealESRGAN",
"--scunet-models-path",
"E:\\WebUI\\RES\\upscalers\\ScuNET",
"--swinir-models-path",
"E:\\WebUI\\RES\\upscalers\\SwinIR",
"--dat-models-path",
"E:\\WebUI\\RES\\upscalers\\DAT"
],
"Torch env info": {
"torch_version": "2.1.2+cu121",
"is_debug_build": "False",
"cuda_compiled_version": "12.1",
"gcc_version": null,
"clang_version": null,
"cmake_version": null,
"os": "Microsoft Windows 10 Pro",
"libc_version": "N/A",
"python_version": "3.10.10 (tags/v3.10.10:aad5f6a, Feb 7 2023, 17:20:36) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)",
"python_platform": "Windows-10-10.0.19045-SP0",
"is_cuda_available": "True",
"cuda_runtime_version": null,
"cuda_module_loading": "LAZY",
"nvidia_driver_version": "556.12",
"nvidia_gpu_models": "GPU 0: NVIDIA GeForce RTX 3090 Ti",
"cudnn_version": null,
"pip_version": "pip3",
"pip_packages": [
"numpy==1.26.2",
"open-clip-torch==2.20.0",
"pytorch-lightning==1.9.4",
"torch==2.1.2+cu121",
"torchdiffeq==0.2.3",
"torchmetrics==1.4.0.post0",
"torchsde==0.2.6",
"torchvision==0.16.2+cu121"
],
"conda_packages": null,
"hip_compiled_version": "N/A",
"hip_runtime_version": "N/A",
"miopen_runtime_version": "N/A",
"caching_allocator_config": "",
"is_xnnpack_available": "True",
"cpu_info": [
"Architecture=9",
"CurrentClockSpeed=3501",
"DeviceID=CPU0",
"Family=107",
"L2CacheSize=8192",
"L2CacheSpeed=",
"Manufacturer=AuthenticAMD",
"MaxClockSpeed=3501",
"Name=AMD Ryzen 9 3950X 16-Core Processor ",
"ProcessorType=3",
"Revision=28928"
]
},
"Exceptions": [],
"CPU": {
"model": "AMD64 Family 23 Model 113 Stepping 0, AuthenticAMD",
"count logical": 32,
"count physical": 16
},
"RAM": {
"total": "64GB",
"used": "23GB",
"free": "41GB"
},
"Extensions": [
{
"name": "a1111-sd-webui-tagcomplete",
"path": "E:\\WebUI\\stable-diffusion-webui\\extensions\\a1111-sd-webui-tagcomplete",
"commit": "1c6bba2a3d0a8d2dec0c180873e5090dee654ada",
"branch": "main",
"remote": "https://github.com/DominikDoom/a1111-sd-webui-tagcomplete"
},
{
"name": "adetailer",
"path": "E:\\WebUI\\stable-diffusion-webui\\extensions\\adetailer",
"commit": "25e7509fe018de8aa063a5f1902598f5eda0c06c",
"branch": "main",
"remote": "https://github.com/Bing-su/adetailer.git"
},
{
"name": "diffusion-noise-alternatives-webui",
"path": "E:\\WebUI\\stable-diffusion-webui\\extensions\\diffusion-noise-alternatives-webui",
"commit": "7a3f0a6c6c25be46590dc66e67801322fb59ad9f",
"branch": "main",
"remote": "https://github.com/Seshelle/diffusion-noise-alternatives-webui"
},
{
"name": "sd-webui-controlnet",
"path": "E:\\WebUI\\stable-diffusion-webui\\extensions\\sd-webui-controlnet",
"commit": "56cec5b2958edf3b1807b7e7b2b1b5186dbd2f81",
"branch": "main",
"remote": "https://github.com/Mikubill/sd-webui-controlnet"
},
{
"name": "sd-webui-freeu",
"path": "E:\\WebUI\\stable-diffusion-webui\\extensions\\sd-webui-freeu",
"commit": "c618bb7f269c8428f4b6cc47fcac67084e050d19",
"branch": "main",
"remote": "https://github.com/ljleb/sd-webui-freeu"
},
{
"name": "sd-webui-vectorscope-cc",
"path": "E:\\WebUI\\stable-diffusion-webui\\extensions\\sd-webui-vectorscope-cc",
"commit": "54720821c873d58c8898eeed8a9bb42f04d0249d",
"branch": "main",
"remote": "https://github.com/Haoming02/sd-webui-vectorscope-cc"
},
{
"name": "sdweb-merge-block-weighted-gui",
"path": "E:\\WebUI\\stable-diffusion-webui\\extensions\\sdweb-merge-block-weighted-gui",
"commit": "8a62a753e791a75273863dd04958753f0df7532f",
"branch": "master",
"remote": "https://github.com/bbc-mc/sdweb-merge-block-weighted-gui.git"
},
{
"name": "stable-diffusion-webui-wd14-tagger",
"path": "E:\\WebUI\\stable-diffusion-webui\\extensions\\stable-diffusion-webui-wd14-tagger",
"commit": "f4b56ef07bc3c9c1a59f7d67fbf8479ffab2ab68",
"branch": "master",
"remote": "https://github.com/67372a/stable-diffusion-webui-wd14-tagger"
}
],
"Inactive extensions": [
{
"name": "sd-webui-aspect-ratio-helper",
"path": "E:\\WebUI\\stable-diffusion-webui\\extensions\\sd-webui-aspect-ratio-helper",
"commit": "99fcf9b0a4e3f8c8cac07b12d17b66f12297b828",
"branch": "main",
"remote": "https://github.com/thomasasfk/sd-webui-aspect-ratio-helper.git"
},
{
"name": "sd-webui-model-converter",
"path": "E:\\WebUI\\stable-diffusion-webui\\extensions\\sd-webui-model-converter",
"commit": "e5488193d255a37216a31b9b99dd11a85dfd2ad9",
"branch": "main",
"remote": "https://github.com/Akegarasu/sd-webui-model-converter.git"
},
{
"name": "stable-diffusion-webui-daam",
"path": "E:\\WebUI\\stable-diffusion-webui\\extensions\\stable-diffusion-webui-daam",
"commit": "0906c850fb70d7e4b296f9449763d48fa8d1e687",
"branch": "master",
"remote": "https://github.com/toriato/stable-diffusion-webui-daam.git"
},
{
"name": "stable-diffusion-webui-tokenizer",
"path": "E:\\WebUI\\stable-diffusion-webui\\extensions\\stable-diffusion-webui-tokenizer",
"commit": "ac6d541c7032e9f9c69c8ead2ed201302b06a4fe",
"branch": "master",
"remote": "https://github.com/AUTOMATIC1111/stable-diffusion-webui-tokenizer.git"
}
],
"Environment": {
"COMMANDLINE_ARGS": " --no-half-vae --opt-sdp-no-mem-attention --opt-channelslast --theme dark --ckpt-dir \"E:\\WebUI\\RES\\model\" --embeddings-dir \"E:\\WebUI\\RES\\embedding\" --hypernetwork-dir \"E:\\WebUI\\RES\\hypernetwork\" --lora-dir \"E:\\WebUI\\RES\\lora\" --vae-dir \"E:\\WebUI\\RES\\vae\" --bsrgan-models-path \"E:\\WebUI\\RES\\upscalers\\BSRGAN\" --codeformer-models-path \"E:\\WebUI\\RES\\upscalers\\Codeformer\" --esrgan-models-path \"E:\\WebUI\\RES\\upscalers\\ESRGAN\" --gfpgan-models-path \"E:\\WebUI\\RES\\upscalers\\GFPGAN\" --ldsr-models-path \"E:\\WebUI\\RES\\upscalers\\LDSR\" --realesrgan-models-path \"E:\\WebUI\\RES\\upscalers\\RealESRGAN\" --scunet-models-path \"E:\\WebUI\\RES\\upscalers\\ScuNET\" --swinir-models-path \"E:\\WebUI\\RES\\upscalers\\SwinIR\" --dat-models-path \"E:\\WebUI\\RES\\upscalers\\DAT\"",
"GRADIO_ANALYTICS_ENABLED": "False"
},
"Config": {
"samples_save": false,
"samples_format": "jpg",
"samples_filename_pattern": "[datetime<%Y%m%d_%H%M%S>]",
"save_images_add_number": false,
"grid_save": true,
"grid_format": "jpg",
"grid_extended_filename": true,
"grid_only_if_multiple": true,
"grid_prevent_empty_spots": false,
"n_rows": -1,
"enable_pnginfo": true,
"save_txt": false,
"save_images_before_face_restoration": false,
"save_images_before_highres_fix": false,
"save_images_before_color_correction": false,
"jpeg_quality": 95,
"export_for_4chan": true,
"use_original_name_batch": true,
"use_upscaler_name_as_suffix": false,
"save_selected_only": true,
"do_not_add_watermark": true,
"temp_dir": "T:/Temp",
"clean_temp_dir_at_start": true,
"outdir_samples": "",
"outdir_txt2img_samples": "T:\\_SD_GENS\\txt2img-images",
"outdir_img2img_samples": "T:\\_SD_GENS\\img2img-images",
"outdir_extras_samples": "T:\\_SD_GENS\\extras-images",
"outdir_grids": "",
"outdir_txt2img_grids": "E:\\WebUI\\RES\\img\\SD-out\\_SD_GENS\\x\\txt2img-grids",
"outdir_img2img_grids": "E:\\WebUI\\RES\\img\\SD-out\\_SD_GENS\\x\\img2img-grids",
"outdir_save": "E:\\WebUI\\RES\\img\\SD-out\\_SD_GENS",
"save_to_dirs": true,
"grid_save_to_dirs": true,
"use_save_to_dirs_for_ui": false,
"directories_filename_pattern": "W[width]xH[height]",
"directories_max_prompt_words": 8,
"ESRGAN_tile": 192,
"ESRGAN_tile_overlap": 8,
"realesrgan_enabled_models": [
"R-ESRGAN 4x+",
"R-ESRGAN 4x+ Anime6B",
"R-ESRGAN General 4xV3",
"R-ESRGAN General WDN 4xV3",
"R-ESRGAN AnimeVideo",
"R-ESRGAN 2x+"
],
"upscaler_for_img2img": "ESRGAN_4x",
"face_restoration_model": "CodeFormer",
"code_former_weight": 0.5,
"face_restoration_unload": false,
"show_warnings": false,
"memmon_poll_rate": 8,
"samples_log_stdout": false,
"multiple_tqdm": true,
"print_hypernet_extra": false,
"unload_models_when_training": false,
"pin_memory": false,
"save_optimizer_state": false,
"save_training_settings_to_txt": true,
"dataset_filename_word_regex": "",
"dataset_filename_join_string": " ",
"training_image_repeats_per_epoch": 1,
"training_write_csv_every": 500,
"training_xattention_optimizations": false,
"training_enable_tensorboard": false,
"training_tensorboard_save_images": false,
"training_tensorboard_flush_every": 120,
"sd_model_checkpoint": "XXXXXXXXXXXXXXX-CUT-XXXXXXXXXXXXXXXXXXXXX",
"sd_checkpoint_cache": 0,
"sd_vae_checkpoint_cache": 0,
"sd_vae": "sdxl_vae.safetensors",
"sd_vae_as_default": true,
"inpainting_mask_weight": 1.0,
"initial_noise_multiplier": 1.0,
"img2img_color_correction": false,
"img2img_fix_steps": false,
"img2img_background_color": "#ffffff",
"enable_quantization": false,
"enable_emphasis": true,
"enable_batch_seeds": true,
"comma_padding_backtrack": 20,
"CLIP_stop_at_last_layers": 1,
"upcast_attn": true,
"use_old_emphasis_implementation": false,
"use_old_karras_scheduler_sigmas": false,
"use_old_hires_fix_width_height": false,
"interrogate_keep_models_in_memory": false,
"interrogate_return_ranks": false,
"interrogate_clip_num_beams": 1,
"interrogate_clip_min_length": 24,
"interrogate_clip_max_length": 48,
"interrogate_clip_dict_limit": 1500,
"interrogate_clip_skip_categories": [],
"interrogate_deepbooru_score_threshold": 0.5,
"deepbooru_sort_alpha": true,
"deepbooru_use_spaces": true,
"deepbooru_escape": true,
"deepbooru_filter_tags": "",
"extra_networks_default_view": "cards",
"extra_networks_default_multiplier": 0.8,
"sd_hypernetwork": "None",
"return_grid": true,
"do_not_show_images": false,
"add_model_hash_to_info": true,
"add_model_name_to_info": true,
"disable_weights_auto_swap": false,
"send_seed": true,
"send_size": true,
"font": "",
"js_modal_lightbox": true,
"js_modal_lightbox_initially_zoomed": true,
"show_progress_in_title": true,
"samplers_in_dropdown": true,
"dimensions_and_batch_together": true,
"keyedit_precision_attention": 0.1,
"keyedit_precision_extra": 0.05,
"quicksettings": "sd_model_checkpoint,CLIP_stop_at_last_layers,eta_noise_seed_delta,sd_vae,extra_networks_default_multiplier,lora_apply_to_outputs",
"ui_reorder": "override_settings, inpaint, sampler, checkboxes, hires_fix, dimensions, cfg, seed, batch, scripts",
"ui_extra_networks_tab_reorder": "",
"localization": "None",
"show_progressbar": true,
"live_previews_enable": true,
"show_progress_grid": true,
"show_progress_every_n_steps": 10,
"show_progress_type": "Full",
"live_preview_content": "Combined",
"live_preview_refresh_period": 1000,
"hide_samplers": [],
"eta_ddim": 0.0,
"eta_ancestral": 1.0,
"ddim_discretize": "uniform",
"s_churn": 0.0,
"s_tmin": 0.0,
"s_noise": 1.0,
"eta_noise_seed_delta": 31337,
"always_discard_next_to_last_sigma": false,
"postprocessing_enable_in_main_ui": [],
"postprocessing_operation_order": [],
"upscaling_max_images_in_cache": 5,
"disabled_extensions": [
"openpose-editor",
"sd-webui-aspect-ratio-helper",
"sd-webui-model-converter",
"stable-diffusion-webui-daam",
"stable-diffusion-webui-tokenizer",
"ultimate-upscale-for-automatic1111"
],
"sd_checkpoint_hash": "45a5febab2aac097e4af608a3ed2a23d3c34b547144a8872f6f9a06616c1a2a8",
"ldsr_steps": 100,
"ldsr_cached": false,
"SWIN_tile": 192,
"SWIN_tile_overlap": 8,
"sd_lora": "None",
"lora_apply_to_outputs": false,
"tac_tagFile": "danbooru.csv",
"tac_active": true,
"tac_activeIn.txt2img": true,
"tac_activeIn.img2img": true,
"tac_activeIn.negativePrompts": true,
"tac_activeIn.thirdParty": true,
"tac_activeIn.modelList": "",
"tac_activeIn.modelListMode": "Blacklist",
"tac_maxResults": 30.0,
"tac_showAllResults": true,
"tac_resultStepLength": 100.0,
"tac_delayTime": 100.0,
"tac_useWildcards": true,
"tac_useEmbeddings": true,
"tac_useHypernetworks": true,
"tac_useLoras": true,
"tac_showWikiLinks": true,
"tac_replaceUnderscores": true,
"tac_escapeParentheses": true,
"tac_appendComma": true,
"tac_alias.searchByAlias": true,
"tac_alias.onlyShowAlias": false,
"tac_translation.translationFile": "None",
"tac_translation.oldFormat": false,
"tac_translation.searchByTranslation": true,
"tac_extra.extraFile": "None",
"tac_extra.onlyAliasExtraFile": false,
"additional_networks_sort_models_by": "path name",
"additional_networks_reverse_sort_order": false,
"additional_networks_model_name_filter": "",
"additional_networks_xy_grid_model_metadata": "",
"additional_networks_hash_thread_count": 1.0,
"additional_networks_back_up_model_when_saving": true,
"additional_networks_show_only_safetensors": false,
"additional_networks_show_only_models_with_metadata": "disabled",
"additional_networks_max_top_tags": 20.0,
"images_history_preload": false,
"images_record_paths": true,
"images_delete_message": true,
"images_history_page_columns": 6.0,
"images_history_page_rows": 6.0,
"images_history_pages_perload": 20.0,
"promptgen_names": "AUTOMATIC/promptgen-lexart, AUTOMATIC/promptgen-majinai-safe, AUTOMATIC/promptgen-majinai-unsafe",
"promptgen_device": "gpu",
"tac_extra.addMode": "Insert before",
"additional_networks_max_dataset_folders": 20.0,
"control_net_model_config": "E:\\WebUI\\stable-diffusion-webui\\extensions\\sd-webui-controlnet\\models\\cldm_v15.yaml",
"control_net_models_path": "",
"control_net_control_transfer": false,
"control_net_no_detectmap": false,
"control_net_only_midctrl_hires": true,
"control_net_allow_script_control": false,
"img_downscale_threshold": 4.0,
"target_side_length": 4000.0,
"no_dpmpp_sde_batch_determinism": false,
"control_net_model_adapter_config": "E:\\WebUI\\stable-diffusion-webui\\extensions\\sd-webui-controlnet\\models\\sketch_adapter_v14.yaml",
"control_net_detectedmap_dir": "E:\\WebUI\\RES\\img\\SD-out\\_SD_GENS\\detected_maps",
"control_net_detectmap_autosaving": false,
"control_net_skip_img2img_processing": false,
"control_net_only_mid_control": false,
"control_net_max_models_num": 3,
"control_net_model_cache_size": 2,
"control_net_monocular_depth_optim": false,
"control_net_cfg_based_guidance": false,
"tac_slidingPopup": true,
"webp_lossless": false,
"img_max_size_mp": 200.0,
"extra_networks_add_text_separator": " ",
"hidden_tabs": [],
"uni_pc_variant": "bh1",
"uni_pc_skip_type": "time_uniform",
"uni_pc_order": 3,
"uni_pc_lower_order_final": true,
"control_net_sync_field_args": false,
"arh_expand_by_default": false,
"arh_ui_component_order_key": "MaxDimensionScaler, PredefinedAspectRatioButtons, PredefinedPercentageButtons",
"arh_show_max_width_or_height": true,
"arh_max_width_or_height": 1728,
"arh_show_predefined_aspect_ratios": true,
"arh_predefined_aspect_ratio_use_max_dim": false,
"arh_predefined_aspect_ratios": "1:1, 2:3, 3:5 , 4:3, 16:9, 9:16, 21:9",
"arh_show_predefined_percentages": true,
"arh_predefined_percentages": "25, 50, 75, 125, 150, 175, 200",
"arh_predefined_percentages_display_key": "Incremental/decremental percentage (-50%, +50%)",
"save_mask": false,
"save_mask_composite": false,
"extra_networks_card_width": 0.0,
"extra_networks_card_height": 0.0,
"return_mask": false,
"return_mask_composite": false,
"arh_javascript_aspect_ratio_show": false,
"arh_javascript_aspect_ratio": "1:1, 3:2, 4:3, 5:4, 16:9, 1.85:1, 2.35:1, 2.39:1, 2.40:1, 21:9, 1.375:1, 1.66:1, 1.75:1",
"arh_hide_accordion_by_default": false,
"disable_all_extensions": "none",
"tac_useLycos": true,
"tac_keymap": "{\n \"MoveUp\": \"ArrowUp\",\n \"MoveDown\": \"ArrowDown\",\n \"JumpUp\": \"PageUp\",\n \"JumpDown\": \"PageDown\",\n \"JumpToStart\": \"Home\",\n \"JumpToEnd\": \"End\",\n \"ChooseSelected\": \"Enter\",\n \"ChooseFirstOrSelected\": \"Tab\",\n \"Close\": \"Escape\"\n}",
"tac_colormap": "{\n \"danbooru\": {\n \"-1\": [\"red\", \"maroon\"],\n \"0\": [\"lightblue\", \"dodgerblue\"],\n \"1\": [\"indianred\", \"firebrick\"],\n \"3\": [\"violet\", \"darkorchid\"],\n \"4\": [\"lightgreen\", \"darkgreen\"],\n \"5\": [\"orange\", \"darkorange\"]\n },\n \"e621\": {\n \"-1\": [\"red\", \"maroon\"],\n \"0\": [\"lightblue\", \"dodgerblue\"],\n \"1\": [\"gold\", \"goldenrod\"],\n \"3\": [\"violet\", \"darkorchid\"],\n \"4\": [\"lightgreen\", \"darkgreen\"],\n \"5\": [\"tomato\", \"darksalmon\"],\n \"6\": [\"red\", \"maroon\"],\n \"7\": [\"whitesmoke\", \"black\"],\n \"8\": [\"seagreen\", \"darkseagreen\"]\n }\n}",
"arh_ui_javascript_selection_method": "Aspect Ratios Dropdown",
"control_net_modules_path": "",
"control_net_high_res_only_mid": false,
"restore_config_state_file": "",
"save_init_img": false,
"outdir_init_images": "E:\\WebUI\\RES\\img\\SD-out\\_SD_GENS\\init-images",
"SCUNET_tile": 256,
"SCUNET_tile_overlap": 8,
"randn_source": "GPU",
"dont_fix_second_order_samplers_schedule": false,
"sd_lyco": "None",
"keyedit_delimiters": ".,\\/!?%^*;:{}=`~()",
"gradio_theme": "Default",
"s_min_uncond": 0,
"controlnet_show_batch_images_in_ui": false,
"controlnet_increment_seed_during_batch": false,
"quicksettings_list": [
"sd_model_checkpoint",
"CLIP_stop_at_last_layers",
"eta_noise_seed_delta",
"sd_vae",
"extra_networks_default_multiplier",
"lora_apply_to_outputs",
"token_merging_ratio",
"token_merging_ratio_hr",
"img2img_extra_noise"
],
"lora_functional": false,
"lora_preferred_name": "Filename",
"js_modal_lightbox_gamepad": true,
"js_modal_lightbox_gamepad_repeat": 250.0,
"add_version_to_infotext": true,
"tac_translation.liveTranslation": false,
"tac_chantFile": "demo-chants.json",
"ad_max_models": 4,
"ad_save_previews": false,
"ad_save_images_before": false,
"ad_only_seleted_scripts": true,
"ad_script_names": "dynamic_prompting,dynamic_thresholding,wildcards,wildcard_recursive",
"ad_bbox_sortby": "None",
"list_hidden_files": true,
"cross_attention_optimization": "Automatic",
"token_merging_ratio": 0,
"token_merging_ratio_img2img": 0,
"token_merging_ratio_hr": 0,
"extra_networks_show_hidden_directories": true,
"extra_networks_hidden_models": "When searched",
"lora_add_hashes_to_infotext": true,
"img2img_editor_height": 720,
"ui_tab_order": [],
"hires_fix_show_sampler": false,
"hires_fix_show_prompts": true,
"live_previews_image_format": "jpeg",
"tac_refreshTempFiles": "Refresh TAC temp files",
"controlnet_disable_control_type": false,
"tac_wildcardCompletionMode": "To next folder level",
"arh_show_min_width_or_height": false,
"arh_min_width_or_height": 1024,
"controlnet_disable_openpose_edit": false,
"ui_reorder_list": [
"override_settings"
],
"grid_zip_filename_pattern": "",
"sd_unet": "Automatic",
"pad_cond_uncond": false,
"experimental_persistent_cond_cache": false,
"hires_fix_use_firstpass_conds": false,
"disable_token_counters": false,
"extra_options": [],
"extra_options_accordion": false,
"infotext_styles": "Apply if any",
"k_sched_type": "Automatic",
"sigma_min": 0.0,
"sigma_max": 0.0,
"rho": 0.0,
"canvas_hotkey_zoom": "Alt",
"canvas_hotkey_adjust": "Ctrl",
"canvas_hotkey_move": "F",
"canvas_hotkey_fullscreen": "S",
"canvas_hotkey_reset": "R",
"canvas_hotkey_overlap": "O",
"canvas_show_tooltip": true,
"canvas_disabled_functions": [
"Overlap"
],
"tac_appendSpace": true,
"tac_alwaysSpaceAtEnd": true,
"tac_modelKeywordCompletion": "Never",
"control_net_inpaint_blur_sigma": 7,
"grid_text_active_color": "#000000",
"grid_text_inactive_color": "#999999",
"grid_background_color": "#ffffff",
"disable_mmap_load_safetensors": false,
"auto_vae_precision": true,
"sdxl_crop_top": 0.0,
"sdxl_crop_left": 0.0,
"sdxl_refiner_low_aesthetic_score": 2.5,
"sdxl_refiner_high_aesthetic_score": 6.0,
"extra_networks_card_text_scale": 1,
"extra_networks_card_show_desc": true,
"textual_inversion_print_at_load": false,
"textual_inversion_add_hashes_to_infotext": true,
"lora_show_all": true,
"lora_hide_unknown_for_versions": [],
"keyedit_move": true,
"add_user_name_to_info": false,
"canvas_blur_prompt": false,
"control_net_no_high_res_fix": false,
"tac_sortWildcardResults": true,
"tac_showExtraNetworkPreviews": true,
"controlnet_ignore_noninpaint_mask": false,
"sd_vae_overrides_per_model_preferences": false,
"save_incomplete_images": false,
"face_restoration": false,
"auto_launch_browser": "Disable",
"show_gradio_deprecation_warnings": true,
"hide_ldm_prints": true,
"api_enable_requests": true,
"api_forbid_local_requests": true,
"api_useragent": "",
"sd_checkpoints_limit": 2,
"sd_checkpoints_keep_in_cpu": true,
"tiling": false,
"hires_fix_refiner_pass": "second pass",
"sd_vae_encode_method": "Full",
"sd_vae_decode_method": "Full",
"img2img_extra_noise": 0.03,
"img2img_sketch_default_brush_color": "#ffffff",
"img2img_inpaint_mask_brush_color": "#ffffff",
"img2img_inpaint_sketch_default_brush_color": "#ffffff",
"persistent_cond_cache": true,
"batch_cond_uncond": true,
"use_old_scheduling": false,
"lora_in_memory_limit": 0,
"gradio_themes_cache": true,
"gallery_height": "",
"extra_options_txt2img": [],
"extra_options_img2img": [],
"extra_options_cols": 1,
"live_preview_allow_lowvram_full": false,
"live_preview_fast_interrupt": true,
"s_tmax": 0,
"sgm_noise_multiplier": false,
"canvas_auto_expand": true,
"tagger_out_filename_fmt": "[name].[output_extension]",
"tagger_count_threshold": 100,
"tagger_batch_recursive": true,
"tagger_auto_serde_json": true,
"tagger_store_images": false,
"tagger_weighted_tags_files": false,
"tagger_verbose": false,
"tagger_repl_us": true,
"tagger_repl_us_excl": "0_0, (o)_(o), +_+, +_-, ._., <o>_<o>, <|>_<|>, =_=, >_<, 3_3, 6_9, >_o, @_@, ^_^, o_o, u_u, x_x, |_|, ||_||",
"tagger_escape": false,
"tagger_batch_size": 1024.0,
"tagger_hf_cache_dir": "E:\\WebUI\\stable-diffusion-webui\\models\\interrogators",
"tac_includeEmbeddingsInNormalResults": false,
"tac_modelKeywordLocation": "Start of prompt",
"control_net_unit_count": 3,
"tac_modelSortOrder": "Name",
"ad_extra_models_dir": "",
"ad_same_seed_for_each_tap": false,
"hypertile_enable_unet": false,
"hypertile_enable_unet_secondpass": false,
"hypertile_max_depth_unet": 3,
"hypertile_max_tile_unet": 256,
"hypertile_swap_size_unet": 3,
"hypertile_enable_vae": false,
"hypertile_max_depth_vae": 3,
"hypertile_max_tile_vae": 128,
"hypertile_swap_size_vae": 3,
"tac_wildcardExclusionList": "",
"tac_skipWildcardRefresh": false,
"save_images_replace_action": "Replace",
"notification_audio": true,
"notification_volume": 100,
"extra_networks_dir_button_function": false,
"extra_networks_card_order_field": "Path",
"extra_networks_card_order": "Ascending",
"img2img_batch_show_results_limit": 32,
"add_vae_name_to_info": true,
"add_vae_hash_to_info": true,
"infotext_skip_pasting": [],
"js_live_preview_in_modal_lightbox": false,
"keyedit_delimiters_whitespace": [
"Tab",
"Carriage Return",
"Line Feed"
],
"compact_prompt_box": false,
"sd_checkpoint_dropdown_use_short": false,
"txt2img_settings_accordion": false,
"img2img_settings_accordion": false,
"enable_console_prompts": false,
"dump_stacks_on_signal": false,
"postprocessing_existing_caption_action": "Ignore",
"tac_useLoraPrefixForLycos": true,
"controlnet_crop_upscale_script_only": false,
"controlnet_disable_photopea_edit": false,
"controlnet_photopea_warning": true,
"controlnet_clip_detector_on_cpu": false,
"tac_useStyleVars": false,
"SWIN_torch_compile": false,
"tac_frequencySort": true,
"tac_frequencyFunction": "Logarithmic (weak)",
"tac_frequencyMinCount": 3,
"tac_frequencyMaxAge": 30,
"tac_frequencyRecommendCap": 10,
"tac_frequencyIncludeAlias": false,
"cc_metadata": true,
"auto_backcompat": true,
"use_downcasted_alpha_bar": true,
"refiner_switch_by_sample_steps": false,
"extra_networks_card_description_is_html": false,
"extra_networks_tree_view_style": "Dirs",
"extra_networks_tree_view_default_enabled": true,
"extra_networks_tree_view_default_width": 180.0,
"lora_not_found_warning_console": false,
"lora_not_found_gradio_warning": false,
"pad_cond_uncond_v0": false,
"fp8_storage": "Disable",
"cache_fp16_weight": false,
"sd_noise_schedule": "Default",
"emphasis": "Original",
"enable_prompt_comments": true,
"auto_vae_precision_bfloat16": false,
"overlay_inpaint": true,
"sd_webui_modal_lightbox_icon_opacity": 1,
"sd_webui_modal_lightbox_toolbar_opacity": 0.9,
"open_dir_button_choice": "Subdirectory",
"include_styles_into_token_counters": true,
"interrupt_after_current": true,
"enable_reloading_ui_scripts": false,
"prioritized_callbacks_app_started": [],
"prioritized_callbacks_model_loaded": [],
"prioritized_callbacks_ui_tabs": [],
"prioritized_callbacks_ui_settings": [],
"prioritized_callbacks_after_component": [],
"prioritized_callbacks_infotext_pasted": [],
"prioritized_callbacks_script_unloaded": [],
"prioritized_callbacks_before_ui": [],
"prioritized_callbacks_on_reload": [],
"prioritized_callbacks_list_optimizers": [],
"prioritized_callbacks_before_token_counter": [],
"prioritized_callbacks_script_before_process": [],
"prioritized_callbacks_script_process": [],
"prioritized_callbacks_script_before_process_batch": [],
"prioritized_callbacks_script_postprocess": [],
"prioritized_callbacks_script_postprocess_batch": [],
"prioritized_callbacks_script_post_sample": [],
"prioritized_callbacks_script_on_mask_blend": [],
"prioritized_callbacks_script_postprocess_image": [],
"prioritized_callbacks_script_postprocess_maskoverlay": [],
"enable_upscale_progressbar": true,
"postprocessing_disable_in_extras": [],
"dat_enabled_models": [
"DAT x2",
"DAT x3",
"DAT x4"
],
"DAT_tile": 192,
"DAT_tile_overlap": 8,
"set_scale_by_when_changing_upscaler": false,
"canvas_hotkey_shrink_brush": "Q",
"canvas_hotkey_grow_brush": "W",
"controlnet_control_type_dropdown": false,
"disable_mean_in_calclate_cond": false,
"freeu_png_info_auto_enable": true,
"prioritized_callbacks_cfg_after_cfg": [],
"prioritized_callbacks_script_process_batch": [],
"ad_same_seed_for_each_tab": false,
"save_write_log_csv": true,
"lora_bundled_ti_to_infotext": true,
"s_min_uncond_all": false,
"skip_early_cond": 0,
"beta_dist_alpha": 0.6,
"beta_dist_beta": 0.6,
"sdxl_clip_l_skip": false,
"sd3_enable_t5": false,
"prevent_screen_sleep_during_generation": true,
"profiling_enable": false,
"profiling_activities": [
"CPU"
],
"profiling_record_shapes": true,
"profiling_profile_memory": true,
"profiling_with_stack": true,
"profiling_filename": "trace.json",
"ad_only_selected_scripts": true
},
"Startup": {
"total": 35.55865979194641,
"records": {
"initial startup": 0.01500391960144043,
"prepare environment/checks": 0.005001544952392578,
"prepare environment/git version info": 0.03400874137878418,
"prepare environment/torch GPU test": 1.746356725692749,
"prepare environment/clone repositores": 0.10702705383300781,
"prepare environment/install requirements": 9.278183221817017,
"prepare environment/run extensions installers/a1111-sd-webui-tagcomplete": 0.0,
"prepare environment/run extensions installers/adetailer": 0.11202836036682129,
"prepare environment/run extensions installers/diffusion-noise-alternatives-webui": 0.0,
"prepare environment/run extensions installers/sd-webui-controlnet": 0.11703181266784668,
"prepare environment/run extensions installers/sd-webui-freeu": 0.0,
"prepare environment/run extensions installers/sd-webui-vectorscope-cc": 0.0,
"prepare environment/run extensions installers/sdweb-merge-block-weighted-gui": 0.0,
"prepare environment/run extensions installers/stable-diffusion-webui-wd14-tagger": 2.7473819255828857,
"prepare environment/run extensions installers": 2.9764420986175537,
"prepare environment": 14.147019386291504,
"launcher": 0.0010001659393310547,
"import torch": 2.7877724170684814,
"import gradio": 0.669173002243042,
"setup paths": 2.5366580486297607,
"import ldm": 0.0040013790130615234,
"import sgm": 0.0,
"initialize shared": 0.2037980556488037,
"other imports": 0.41710829734802246,
"opts onchange": 0.0010001659393310547,
"setup SD model": 0.0,
"setup codeformer": 0.0009999275207519531,
"setup gfpgan": 0.007001638412475586,
"set samplers": 0.0,
"list extensions": 0.0020008087158203125,
"restore config state file": 0.0,
"list SD models": 0.18398213386535645,
"list localizations": 0.0,
"load scripts/custom_code.py": 0.005001544952392578,
"load scripts/img2imgalt.py": 0.0010004043579101562,
"load scripts/loopback.py": 0.0,
"load scripts/outpainting_mk_2.py": 0.0,
"load scripts/poor_mans_outpainting.py": 0.0,
"load scripts/postprocessing_codeformer.py": 0.0,
"load scripts/postprocessing_gfpgan.py": 0.0009999275207519531,
"load scripts/postprocessing_upscale.py": 0.0,
"load scripts/prompt_matrix.py": 0.0,
"load scripts/prompts_from_file.py": 0.0,
"load scripts/sd_upscale.py": 0.0,
"load scripts/xyz_grid.py": 0.002000570297241211,
"load scripts/ldsr_model.py": 0.9752531051635742,
"load scripts/lora_script.py": 4.5637688636779785,
"load scripts/scunet_model.py": 0.031008005142211914,
"load scripts/swinir_model.py": 0.028007030487060547,
"load scripts/hotkey_config.py": 0.0,
"load scripts/extra_options_section.py": 0.0010004043579101562,
"load scripts/hypertile_script.py": 0.05801534652709961,
"load scripts/postprocessing_autosized_crop.py": 0.0,
"load scripts/postprocessing_caption.py": 0.0,
"load scripts/postprocessing_create_flipped_copies.py": 0.0010004043579101562,
"load scripts/postprocessing_focal_crop.py": 0.0009996891021728516,
"load scripts/postprocessing_split_oversized.py": 0.0,
"load scripts/soft_inpainting.py": 0.0,
"load scripts/model_keyword_support.py": 0.0030012130737304688,
"load scripts/shared_paths.py": 0.0,
"load scripts/tag_autocomplete_helper.py": 0.785203218460083,
"load scripts/tag_frequency_db.py": 0.0010001659393310547,
"load scripts/!adetailer.py": 0.5280201435089111,
"load scripts/Alternate Noise.py": 0.0009992122650146484,
"load scripts/adapter.py": 0.0,
"load scripts/api.py": 0.20205354690551758,
"load scripts/batch_hijack.py": 0.0,
"load scripts/cldm.py": 0.002000570297241211,
"load scripts/controlnet.py": 0.5245223045349121,
"load scripts/controlnet_diffusers.py": 0.0,
"load scripts/controlnet_lllite.py": 0.0,
"load scripts/controlnet_lora.py": 0.0,
"load scripts/controlnet_model_guess.py": 0.0010001659393310547,
"load scripts/controlnet_sparsectrl.py": 0.0,
"load scripts/controlnet_version.py": 0.0,
"load scripts/enums.py": 0.0010001659393310547,
"load scripts/external_code.py": 0.0,
"load scripts/global_state.py": 0.0010001659393310547,
"load scripts/hook.py": 0.0,
"load scripts/infotext.py": 0.0,
"load scripts/logging.py": 0.0010004043579101562,
"load scripts/lvminthin.py": 0.0,
"load scripts/movie2movie.py": 0.0,
"load scripts/supported_preprocessor.py": 0.0010001659393310547,
"load scripts/utils.py": 0.0010004043579101562,
"load scripts/xyz_grid_support.py": 0.0,
"load scripts/freeu.py": 0.10002660751342773,
"load scripts/cc.py": 0.0029997825622558594,
"load scripts/cc_callback.py": 0.03901100158691406,
"load scripts/cc_colorpicker.py": 0.0,
"load scripts/cc_const.py": 0.0,
"load scripts/cc_hdr.py": 0.0,
"load scripts/cc_scaling.py": 0.0,
"load scripts/cc_settings.py": 0.03200817108154297,
"load scripts/cc_style.py": 0.0,
"load scripts/cc_xyz.py": 0.0,
"load scripts/merge_block_weighted_extension.py": 0.040009498596191406,
"load scripts/tagger.py": 0.10702824592590332,
"load scripts/comments.py": 0.03300881385803223,
"load scripts/refiner.py": 0.0010004043579101562,
"load scripts/sampler.py": 0.0,
"load scripts/seed.py": 0.0,
"load scripts": 8.075949668884277,
"load upscalers": 0.00600123405456543,
"refresh VAE": 0.3540916442871094,
"refresh textual inversion templates": 0.0,
"scripts list_optimizers": 0.0010001659393310547,
"scripts list_unets": 0.0,
"reload hypernetworks": 0.0010023117065429688,
"initialize extra networks": 0.010001420974731445,
"cleanup temp dir": 0.0010004043579101562,
"scripts before_ui_callback": 0.0010004043579101562,
"create ui": 5.570947885513306,
"gradio launch": 0.5461413860321045,
"add APIs": 0.006002187728881836,
"app_started_callback/lora_script.py": 0.0009999275207519531,
"app_started_callback/tag_autocomplete_helper.py": 0.0030007362365722656,
"app_started_callback/!adetailer.py": 0.0,
"app_started_callback/api.py": 0.003000497817993164,
"app_started_callback/tagger.py": 0.002000570297241211,
"app_started_callback": 0.009001731872558594
}
},
"Packages": [
"absl-py==2.1.0",
"accelerate==0.21.0",
"addict==2.4.0",
"aenum==3.1.15",
"aiofiles==23.2.1",
"aiohttp==3.9.5",
"aiosignal==1.3.1",
"albumentations==1.4.3",
"altair==5.3.0",
"antlr4-python3-runtime==4.9.3",
"anyio==3.7.1",
"astunparse==1.6.3",
"async-timeout==4.0.3",
"attrs==23.2.0",
"beautifulsoup4==4.12.3",
"blendmodes==2022",
"certifi==2024.7.4",
"cffi==1.16.0",
"chardet==5.2.0",
"charset-normalizer==3.3.2",
"clean-fid==0.1.35",
"click==8.1.7",
"clip @ https://github.com/openai/CLIP/archive/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1.zip#sha256=b5842c25da441d6c581b53a5c60e0c2127ebafe0f746f8e15561a006c6c3be6a",
"colorama==0.4.6",
"coloredlogs==15.0.1",
"colorlog==6.8.2",
"contourpy==1.2.1",
"controlnet-aux==0.0.9",
"cssselect2==0.7.0",
"cycler==0.12.1",
"Cython==3.0.10",
"deepdanbooru==1.0.3",
"deprecation==2.1.0",
"depth_anything @ https://github.com/huchenlei/Depth-Anything/releases/download/v1.0.0/depth_anything-2024.1.22.0-py2.py3-none-any.whl#sha256=26c1d38b8c3c306b4a2197d725a4b989ff65f7ebcf4fb5a96a1b6db7fbd56780",
"depth_anything_v2 @ https://github.com/MackinationsAi/UDAV2-ControlNet/releases/download/v1.0.0/depth_anything_v2-2024.7.1.0-py2.py3-none-any.whl#sha256=6848128867d1f7c7519d88df0f88bfab89100dc5225259c4d7cb90325c308c9f",
"diskcache==5.6.3",
"dsine @ https://github.com/sdbds/DSINE/releases/download/1.0.2/dsine-2024.3.23-py3-none-any.whl#sha256=b9ea3bacce09f9b3f7fb4fa12471da7e465b2f9a60412711105a9238db280442",
"easydict==1.13",
"einops==0.4.1",
"exceptiongroup==1.2.2",
"facexlib==0.3.0",
"fastapi==0.94.0",
"ffmpy==0.3.3",
"filelock==3.15.4",
"filterpy==1.4.5",
"flatbuffers==24.3.25",
"fonttools==4.53.1",
"frozenlist==1.4.1",
"fsspec==2024.6.1",
"ftfy==6.2.0",
"fvcore==0.1.5.post20221221",
"gast==0.6.0",
"gdown==5.2.0",
"geffnet==1.0.2",
"gitdb==4.0.11",
"GitPython==3.1.32",
"glob2==0.5",
"google-pasta==0.2.0",
"gradio==3.41.2",
"gradio_client==0.5.0",
"grpcio==1.65.1",
"h11==0.12.0",
"h5py==3.11.0",
"handrefinerportable @ https://github.com/huchenlei/HandRefinerPortable/releases/download/v1.0.1/handrefinerportable-2024.2.12.0-py2.py3-none-any.whl#sha256=1e6c702905919f4c49bcb2db7b20d334e8458a7555cd57630600584ec38ca6a9",
"httpcore==0.15.0",
"httpx==0.24.1",
"huggingface-hub==0.24.3",
"humanfriendly==10.0",
"idna==3.7",
"imageio==2.34.2",
"importlib_metadata==8.2.0",
"importlib_resources==6.4.0",
"inflection==0.5.1",
"insightface @ https://github.com/Gourieff/Assets/raw/main/Insightface/insightface-0.7.3-cp310-cp310-win_amd64.whl#sha256=47aa0571b2aadd8545d4bc7615dfbc374c10180c283b7ac65058fcb41ed4df86",
"iopath==0.1.9",
"jax==0.4.31",
"jaxlib==0.4.31",
"Jinja2==3.1.4",
"joblib==1.4.2",
"jsonmerge==1.8.0",
"jsonschema==4.23.0",
"jsonschema-specifications==2023.12.1",
"keras==3.4.1",
"kiwisolver==1.4.5",
"kornia==0.6.7",
"lark==1.1.2",
"lazy_loader==0.4",
"libclang==18.1.1",
"lightning-utilities==0.11.6",
"llvmlite==0.43.0",
"lxml==5.2.2",
"manifold3d==2.5.1",
"Markdown==3.6",
"markdown-it-py==3.0.0",
"MarkupSafe==2.1.5",
"matplotlib==3.9.1",
"mdurl==0.1.2",
"mediapipe==0.10.14",
"ml-dtypes==0.4.0",
"mpmath==1.3.0",
"multidict==6.0.5",
"namex==0.0.8",
"networkx==3.3",
"numba==0.60.0",
"numpy==1.26.2",
"omegaconf==2.2.3",
"onnx==1.16.2",
"onnxruntime==1.18.1",
"open-clip-torch==2.20.0",
"opencv-contrib-python==4.10.0.84",
"opencv-python==4.10.0.84",
"opencv-python-headless==4.10.0.84",
"opt-einsum==3.3.0",
"optree==0.12.1",
"orjson==3.10.6",
"packaging==24.1",
"pandas==2.2.2",
"piexif==1.1.3",
"Pillow==9.5.0",
"pillow-avif-plugin==1.4.3",
"pip==24.2",
"platformdirs==4.2.2",
"portalocker==2.10.1",
"prettytable==3.10.2",
"protobuf==4.25.4",
"psutil==5.9.5",
"py-cpuinfo==9.0.0",
"pycollada==0.8",
"pycparser==2.22",
"pydantic==1.10.17",
"pydub==0.25.1",
"Pygments==2.18.0",
"pyparsing==3.1.2",
"pyreadline3==3.4.1",
"PySocks==1.7.1",
"python-dateutil==2.9.0.post0",
"python-multipart==0.0.9",
"pytorch-lightning==1.9.4",
"pytz==2024.1",
"PyWavelets==1.6.0",
"pywin32==306",
"PyYAML==6.0.1",
"referencing==0.35.1",
"regex==2024.7.24",
"reportlab==4.2.2",
"requests==2.32.3",
"resize-right==0.0.2",
"rich==13.7.1",
"rpds-py==0.19.1",
"Rtree==1.3.0",
"safetensors==0.4.2",
"scikit-image==0.21.0",
"scikit-learn==1.5.1",
"scipy==1.14.0",
"seaborn==0.13.2",
"semantic-version==2.10.0",
"sentencepiece==0.2.0",
"setuptools==69.5.1",
"shapely==2.0.5",
"six==1.16.0",
"smmap==5.0.1",
"sniffio==1.3.1",
"sounddevice==0.4.7",
"soupsieve==2.5",
"spandrel==0.3.4",
"spandrel_extra_arches==0.1.1",
"starlette==0.26.1",
"svg.path==6.3",
"svglib==1.5.1",
"sympy==1.13.1",
"tabulate==0.9.0",
"tensorboard==2.17.0",
"tensorboard-data-server==0.7.2",
"tensorflow==2.17.0",
"tensorflow-intel==2.17.0",
"tensorflow-io-gcs-filesystem==0.31.0",
"termcolor==2.4.0",
"threadpoolctl==3.5.0",
"tifffile==2024.7.24",
"timm==0.6.7",
"tinycss2==1.3.0",
"tokenizers==0.13.3",
"tomesd==0.1.3",
"tomli==2.0.1",
"toolz==0.12.1",
"torch==2.1.2+cu121",
"torchdiffeq==0.2.3",
"torchmetrics==1.4.0.post0",
"torchsde==0.2.6",
"torchvision==0.16.2+cu121",
"tqdm==4.66.4",
"trampoline==0.1.2",
"transformers==4.30.2",
"trimesh==4.4.3",
"typing_extensions==4.12.2",
"tzdata==2024.1",
"ultralytics==8.2.70",
"ultralytics-thop==2.0.0",
"urllib3==2.2.2",
"uvicorn==0.30.3",
"vhacdx==0.0.8.post1",
"wcwidth==0.2.13",
"webencodings==0.5.1",
"websockets==11.0.3",
"Werkzeug==3.0.3",
"wheel==0.43.0",
"wrapt==1.16.0",
"xatlas==0.0.9",
"xxhash==3.4.1",
"yacs==0.1.8",
"yapf==0.40.2",
"yarl==1.9.4",
"zipp==3.19.2"
]
}
```
### Console logs
```Shell
see above
```
### Additional information
_No response_
|
open
|
2024-08-11T09:24:02Z
|
2024-11-26T21:20:27Z
|
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16365
|
[
"bug-report"
] |
neojam
| 9
|
gradio-app/gradio
|
data-visualization
| 9,923
|
ChatInterface Appears Squished in Gradio Blocks
|
### Describe the bug
I am experiencing an issue with the ChatInterface component within a Blocks layout in Gradio. The problem manifests as the chat interface appearing squished, not filling the height of the window as expected. Only a minimal portion of the chat content is visible, impacting usability.
Here's my repo:
https://github.com/dino1729/LLM_QA_Bot.git
azure_gradioui.py
I have the app organized as follows:
Overall Structure :
Framework : The application uses Gradio's Blocks component to manage the overall user interface design.
Tabs : The application is divided into multiple tabs, each hosting distinct functionalities. Several of these tabs contain ChatInterface components.
ChatInterface Components :
Features :
All chat interfaces feature custom titles, descriptions, and functions to process user input.
Each has a submit button for sending input and predefined examples for ease of testing.
The attribute fill_height=True is applied to ensure they dynamically use the full height of their parent containers.
Problem Description :
Universal Issue : All ChatInterface instances across different tabs exhibit the issue of appearing squished.
Symptom : Regardless of the specific function or tab they are in, the chat interfaces do not expand to fill the available vertical space, resulting in content visibility issues.
Styling and Layout :
Consistency : There are no extensive custom CSS rules applied. The app is designed with standard Gradio styling, ensuring that layout handling should remain consistent across components.
Visual Design : The layout is intended to be user-friendly, with flexible and adaptive display suited to various screen sizes.
Development Environment :
Updates : All components are updated to use the latest stable version of Gradio, ensuring compatibility and bug fixes are considered.
Testing : Cross-browser testing performed to rule out issues specific to one browser or platform setup.
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
Steps to reproduce:
1. Create a Gradio app using the Blocks layout.
2. Add a ChatInterface component inside a tab with the attribute fill_height=True.
3. Launch the app.
4. Observe that the ChatInterface does not expand to fill the available vertical space.
```
import gradio as gr

def query_memorypalace_stream(message, history):
    # function logic (ChatInterface passes the message and the history)
    return message

with gr.Blocks() as demo:
    with gr.Tab(label="Memory Palace"):
        memory_palace_chat = gr.ChatInterface(
            title="Memory Palace Chat",
            description="Ask a question to generate a summary or lesson learned based on the search results from the memory palace.",
            fn=query_memorypalace_stream,
            submit_btn="Ask",
            examples=["Example question 1", "Example question 2"],
            fill_height=True,
        )
        # note: .style() was removed in Gradio 4, so the original
        # memory_palace_chat.style(height="100%") call would fail here

demo.launch()
```
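In case it helps others hitting this: a minimal sketch of one possible workaround, under the assumption that the squishing comes from the outer `Blocks` not propagating height (setting `fill_height` on `Blocks` is my guess, not a confirmed fix):

```python
import gradio as gr

def query_memorypalace_stream(message, history):
    # placeholder echo logic, standing in for the real function
    return message

# fill_height on the outer Blocks lets the tab pass its full height
# down to the ChatInterface, which also requests fill_height
with gr.Blocks(fill_height=True) as demo:
    with gr.Tab(label="Memory Palace"):
        gr.ChatInterface(
            fn=query_memorypalace_stream,
            title="Memory Palace Chat",
            fill_height=True,
        )

demo.launch()
```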
### Screenshot
<img width="745" alt="image" src="https://github.com/user-attachments/assets/6d190275-43cd-45e0-b481-bb915d4acee6">
### Logs
_No response_
### System Info
```shell
Gradio 5.5.0
Python 3.11.4
```
### Severity
Blocking usage of gradio
|
closed
|
2024-11-09T22:45:03Z
|
2025-01-17T22:54:25Z
|
https://github.com/gradio-app/gradio/issues/9923
|
[
"bug"
] |
dino1729
| 4
|
timkpaine/lantern
|
plotly
| 176
|
double check matplotlib bar widths
|
open
|
2018-10-10T15:49:55Z
|
2019-10-03T02:09:32Z
|
https://github.com/timkpaine/lantern/issues/176
|
[
"bug",
"matplotlib/seaborn",
"backlog"
] |
timkpaine
| 0
|
|
ultralytics/ultralytics
|
computer-vision
| 19,642
|
xywhr in OBB result have changed
|
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
Predict
### Bug
Recently I updated my environment to the latest version of ultralytics and found that the xywhr result for the OBB task is somehow broken. I was using it to get the centre position of each detection, the width and height and the rotation. Before the update I always had the width as the largest dimension and the height as the smaller, and the rotation relative to the width.
After the update, I found that sometimes the width is the smaller dimension and the height is the larger one, so the rotation is also skewed by 90 degrees.
Going back to an older version fixes the problem.
### Environment
Ultralytics 8.3.87 🚀 Python-3.12.7 torch-2.6.0+cpu CPU (Intel Core(TM) i7-10850H 2.70GHz)
Setup complete ✅ (12 CPUs, 31.7 GB RAM, 609.9/952.5 GB disk)
OS Windows-11-10.0.22631-SP0
Environment Windows
Python 3.12.7
Install git
Path C:\Users\my_user\Desktop\yolo_test\.venv\Lib\site-packages\ultralytics
RAM 31.73 GB
Disk 609.9/952.5 GB
CPU Intel Core(TM) i7-10850H 2.70GHz
CPU count 12
GPU None
GPU count None
CUDA None
numpy ✅ 2.1.1<=2.1.1,>=1.23.0
matplotlib ✅ 3.10.1>=3.3.0
opencv-python ✅ 4.11.0.86>=4.6.0
pillow ✅ 11.1.0>=7.1.2
pyyaml ✅ 6.0.2>=5.3.1
requests ✅ 2.32.3>=2.23.0
scipy ✅ 1.15.2>=1.4.1
torch ✅ 2.6.0>=1.8.0
torch ✅ 2.6.0!=2.4.0,>=1.8.0; sys_platform == "win32"
torchvision ✅ 0.21.0>=0.9.0
tqdm ✅ 4.67.1>=4.64.0
psutil ✅ 7.0.0
py-cpuinfo ✅ 9.0.0
pandas ✅ 2.2.3>=1.1.4
seaborn ✅ 0.13.2>=0.11.0
ultralytics-thop ✅ 2.0.14>=2.0.0
### Minimal Reproducible Example
Input image

# Ultralytics 8.3.87
With ultralytics package version 8.3.87 run this code:
```
from pathlib import Path
import torch
from ultralytics import YOLO

model_file = Path("yolo11x-obb.pt")
model = YOLO(model_file, task="OBB")

img = Path("./images/P0006.png")
results = model.predict(
    source=img,
    conf=0.8,
    imgsz=640,
    device=torch.cuda.current_device() if torch.cuda.device_count() > 0 else "CPU",
)

for res in results:
    for i, pts in enumerate(res.obb):
        if int(pts.xywhr[0][2]) < int(pts.xywhr[0][3]):
            print(
                f"{i})\tcenter x {int(pts.xywhr[0][0])}\tcenter y {int(pts.xywhr[0][1])}\twidth {int(pts.xywhr[0][2])}\theight {int(pts.xywhr[0][3])}\trotation {pts.xywhr[0][4]}"
            )
```
## Output:
```
# 3) center x 322 center y 796 width 24 height 104 rotation 0.7707133889198303
# 9) center x 329 center y 428 width 25 height 96 rotation 0.8094407916069031
# 11) center x 325 center y 470 width 24 height 95 rotation 0.8357727527618408
# 12) center x 324 center y 835 width 25 height 96 rotation 0.7769593000411987
# 16) center x 329 center y 387 width 26 height 96 rotation 0.7910935282707214
# 17) center x 586 center y 361 width 24 height 102 rotation 0.8455972075462341
# 19) center x 582 center y 1246 width 24 height 93 rotation 0.8130463361740112
# 20) center x 332 center y 345 width 25 height 95 rotation 0.8040628433227539
# 21) center x 593 center y 394 width 25 height 104 rotation 0.8151286244392395
# 22) center x 575 center y 1212 width 24 height 101 rotation 0.808992326259613
# 23) center x 325 center y 674 width 24 height 96 rotation 0.8091808557510376
# 24) center x 592 center y 673 width 25 height 102 rotation 0.8232788443565369
# 25) center x 591 center y 798 width 25 height 104 rotation 0.8026906847953796
# 35) center x 321 center y 878 width 25 height 93 rotation 0.787294864654541
# 40) center x 585 center y 763 width 24 height 103 rotation 0.7789315581321716
# 41) center x 590 center y 839 width 25 height 102 rotation 0.8089277148246765
# 44) center x 318 center y 920 width 26 height 103 rotation 0.8125594854354858
# 47) center x 592 center y 435 width 24 height 101 rotation 0.8140760660171509
# 50) center x 325 center y 512 width 25 height 100 rotation 0.8281809091567993
# 51) center x 312 center y 964 width 25 height 97 rotation 0.7953415513038635
# 54) center x 333 center y 1306 width 24 height 99 rotation 0.854654848575592
# 55) center x 317 center y 758 width 25 height 101 rotation 0.7970958948135376
# 60) center x 318 center y 1119 width 25 height 104 rotation 0.8262746930122375
# 66) center x 329 center y 708 width 24 height 94 rotation 0.8186021447181702
# 68) center x 316 center y 1160 width 24 height 102 rotation 0.8195444345474243
# 69) center x 583 center y 1286 width 24 height 100 rotation 0.8252146244049072
# 77) center x 585 center y 722 width 24 height 103 rotation 0.7846349477767944
# 79) center x 315 center y 1200 width 24 height 101 rotation 0.8189680576324463
# 81) center x 147 center y 1238 width 21 height 58 rotation 1.5592427253723145
# 82) center x 857 center y 1059 width 22 height 58 rotation 1.5689480304718018
# 84) center x 312 center y 1282 width 24 height 98 rotation 0.8073341846466064
# 88) center x 316 center y 1240 width 24 height 99 rotation 0.802503764629364
# 89) center x 313 center y 1004 width 26 height 105 rotation 0.8169794082641602
# 90) center x 582 center y 1163 width 23 height 98 rotation 0.8214647173881531
# 91) center x 148 center y 1295 width 22 height 60 rotation 1.564757227897644
# 92) center x 850 center y 1001 width 21 height 52 rotation 1.564372181892395
# 95) center x 588 center y 881 width 26 height 103 rotation 0.7998103499412537
# 96) center x 586 center y 1041 width 24 height 103 rotation 0.8159875273704529
# 98) center x 853 center y 1029 width 23 height 60 rotation 1.5595061779022217
# 100) center x 587 center y 959 width 25 height 103 rotation 0.8381613492965698
# 103) center x 583 center y 1084 width 23 height 101 rotation 0.8027064204216003
# 107) center x 603 center y 1310 width 24 height 101 rotation 0.8306283354759216
# 108) center x 589 center y 920 width 26 height 104 rotation 0.8245478272438049
# 109) center x 586 center y 1001 width 24 height 101 rotation 0.8359718322753906
# 110) center x 319 center y 1038 width 24 height 102 rotation 0.8157163858413696
# 113) center x 161 center y 364 width 22 height 66 rotation 1.5607739686965942
# 114) center x 851 center y 1088 width 22 height 61 rotation 1.5662956237792969
# 115) center x 581 center y 1124 width 23 height 101 rotation 0.8175578713417053
# 116) center x 316 center y 1080 width 25 height 103 rotation 0.8084856271743774
```
# Ultralytics 8.3.32
## yolo checks
Ultralytics 8.3.32 🚀 Python-3.12.7 torch-2.6.0+cpu CPU (Intel Core(TM) i7-10850H 2.70GHz)
Setup complete ✅ (12 CPUs, 31.7 GB RAM, 613.0/952.5 GB disk)
OS Windows-11-10.0.22631-SP0
Environment Windows
Python 3.12.7
Install git
RAM 31.73 GB
Disk 613.0/952.5 GB
CPU Intel Core(TM) i7-10850H 2.70GHz
CPU count 12
GPU None
GPU count None
CUDA None
numpy ✅ 2.1.1>=1.23.0
matplotlib ✅ 3.10.1>=3.3.0
opencv-python ✅ 4.11.0.86>=4.6.0
pillow ✅ 11.1.0>=7.1.2
pyyaml ✅ 6.0.2>=5.3.1
requests ✅ 2.32.3>=2.23.0
scipy ✅ 1.15.2>=1.4.1
torch ✅ 2.6.0>=1.8.0
torchvision ✅ 0.21.0>=0.9.0
tqdm ✅ 4.67.1>=4.64.0
psutil ✅ 7.0.0
py-cpuinfo ✅ 9.0.0
pandas ✅ 2.2.3>=1.1.4
seaborn ✅ 0.13.2>=0.11.0
ultralytics-thop ✅ 2.0.14>=2.0.0
numpy ✅ 2.1.1<2.0.0; sys_platform == "darwin"
torch ✅ 2.6.0!=2.4.0,>=1.8.0; sys_platform == "win32"
Running the same code as above produces no output, so there are no results with width < height, as expected.
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR!
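Until the convention is settled upstream, one client-side option is to normalize the boxes yourself. A minimal sketch, under the assumption that swapping the sides and rotating the angle by a quarter turn restores the old width >= height convention (this is not an official Ultralytics helper):

```python
import numpy as np

def normalize_xywhr(x, y, w, h, r):
    # force width to be the long side; compensate the rotation by
    # a quarter turn and wrap it back into [0, pi)
    if w < h:
        w, h = h, w
        r = (r + np.pi / 2) % np.pi
    return x, y, w, h, r
```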
|
closed
|
2025-03-11T14:21:07Z
|
2025-03-15T13:03:36Z
|
https://github.com/ultralytics/ultralytics/issues/19642
|
[
"bug",
"OBB"
] |
MarcelloCuoghi
| 7
|
TencentARC/GFPGAN
|
deep-learning
| 511
|
Kushwaha
|
open
|
2024-02-10T14:18:14Z
|
2024-02-10T14:18:14Z
|
https://github.com/TencentARC/GFPGAN/issues/511
|
[] |
Vishvajeet9170
| 0
|
|
roboflow/supervision
|
pytorch
| 1,450
|
Custom Symbol Annotators
|
### Search before asking
- [x] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Description
A way to add custom symbols of annotators onto the image. Or possibly even other images.
### Use case
I want to be able to annotate multiple classes of objects with different symbols, like Xs and Os.
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR!
|
closed
|
2024-08-15T02:16:08Z
|
2024-08-27T10:54:43Z
|
https://github.com/roboflow/supervision/issues/1450
|
[
"enhancement"
] |
mhsmathew
| 2
|
pyppeteer/pyppeteer
|
automation
| 445
|
I am not able to connect to Bright Data's Scraping Browser (their new proxy zone)
|
(CJ_test) kirancj@Home AcceptTransfer % python3 original_at.py
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/encodings/idna.py", line 165, in encode
raise UnicodeError("label empty or too long")
UnicodeError: label empty or too long
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "original_at.py", line 27, in <module>
asyncio.get_event_loop().run_until_complete(connect_to_scraping_browser())
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
return future.result()
File "original_at.py", line 21, in connect_to_scraping_browser
browser = await pyppeteer.connect({
File "/Users/kirancj/.local/share/virtualenvs/CJ_test-RO4vW2LI/lib/python3.8/site-packages/pyppeteer/launcher.py", line 350, in connect
browserWSEndpoint = get_ws_endpoint(browserURL)
File "/Users/kirancj/.local/share/virtualenvs/CJ_test-RO4vW2LI/lib/python3.8/site-packages/pyppeteer/launcher.py", line 229, in get_ws_endpoint
with urlopen(url) as f:
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/urllib/request.py", line 222, in urlopen
return opener.open(url, data, timeout)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/urllib/request.py", line 525, in open
response = self._open(req, data)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/urllib/request.py", line 542, in _open
result = self._call_chain(self.handle_open, protocol, protocol +
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/urllib/request.py", line 1383, in http_open
return self.do_open(http.client.HTTPConnection, req)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/urllib/request.py", line 1354, in do_open
h.request(req.get_method(), req.selector, req.data, headers,
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/http/client.py", line 1252, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/http/client.py", line 1298, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/http/client.py", line 1247, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/http/client.py", line 1007, in _send_output
self.send(msg)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/http/client.py", line 947, in send
self.connect()
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/http/client.py", line 918, in connect
self.sock = self._create_connection(
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/socket.py", line 787, in create_connection
for res in getaddrinfo(host, port, 0, SOCK_STREAM):
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/socket.py", line 918, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
UnicodeError: encoding with 'idna' codec failed (UnicodeError: label empty or too long)
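A minimal sketch of a possible workaround: `pyppeteer.connect` also accepts `browserWSEndpoint`, which skips the HTTP `/json/version` lookup that triggers the idna error when credentials sit in the host part of the URL. The endpoint string below is a hypothetical placeholder; Bright Data supplies the real `wss://` URL:

```python
import asyncio
import pyppeteer

# hypothetical placeholder; use the wss:// endpoint from your Bright Data zone
SBR_WS_ENDPOINT = "wss://USER:PASS@brd.superproxy.io:9222"

async def main():
    # connecting by WebSocket endpoint avoids urlopen() on a host
    # containing credentials, which is what raises the idna error
    browser = await pyppeteer.connect(browserWSEndpoint=SBR_WS_ENDPOINT)
    page = await browser.newPage()
    await page.goto("https://example.com")
    print(await page.title())
    await browser.close()

asyncio.get_event_loop().run_until_complete(main())
```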
|
closed
|
2023-07-12T10:30:26Z
|
2024-02-09T06:00:27Z
|
https://github.com/pyppeteer/pyppeteer/issues/445
|
[] |
kiran-cj
| 0
|
OWASP/Nettacker
|
automation
| 119
|
Security issue (try, except, pass)
|
Hello,
We've used many `try/except/pass` blocks in our code, and after reading the project review from Codacy, I noticed that this is a low-risk [security issue](https://docs.openstack.org/bandit/latest/plugins/b110_try_except_pass.html).
Regards.
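A minimal sketch of the usual remedy for bandit B110: log the exception instead of silently swallowing it (`risky_operation` is a hypothetical stand-in for the guarded call):

```python
import logging

logger = logging.getLogger(__name__)

try:
    risky_operation()  # hypothetical stand-in for the guarded call
except Exception:
    # record the failure instead of a bare `pass`, then continue
    logger.exception("risky_operation failed; continuing")
```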
|
closed
|
2018-04-23T19:45:59Z
|
2021-02-02T20:23:34Z
|
https://github.com/OWASP/Nettacker/issues/119
|
[
"enhancement",
"possible bug"
] |
Ali-Razmjoo
| 3
|
microsoft/qlib
|
machine-learning
| 1,789
|
Can we directly use the backtest function of qlib with our predictions?
|
I already have my prediction results and would like to use Qlib's backtest function. I wonder if this is easy to implement, since Qlib apparently has its own data format and file format for saving results.
|
closed
|
2024-05-21T03:33:24Z
|
2024-05-24T05:43:44Z
|
https://github.com/microsoft/qlib/issues/1789
|
[
"question"
] |
TompaBay
| 0
|
plotly/dash
|
data-science
| 2,667
|
Dropdown reordering options by value on search
|
dash 2.14.0, 2.9.2
Windows 10
Chrome 118.0.5993.71
**Description**
When entering a `search_value` in `dcc.Dropdown`, the matching options are ordered by _value_, ignoring the original option order. This behavior only happens when the option values are integers or integer-like strings (i.e. 3 or "3" or 3.0 but not "03" or 3.1).
**Expected behavior**
The original order of the dropdown options should be preserved when searching.
**Example**
Searching "*" in the dropdown below returns all three results, but the options are reordered by ascending value.
```python
from dash import Dash, html, dcc

app = Dash(
    name=__name__,
)

my_options = [
    {"label": "three *", "value": 3},
    {"label": "two *", "value": 2},
    {"label": "one *", "value": 1},
]

app.layout = html.Div(
    [
        dcc.Dropdown(
            id="my-dropdown",
            options=my_options,
            searchable=True,
        ),
    ]
)
```

**Attempted fixes**
I expected Dynamic Dropdowns to give control over custom search behavior. However, even when a particular order of options is specified, the matching options are re-ordered by ascending value.
```python
# This callback should overwrite the built-in search behavior.
# Instead, the filtered options are sorted by ascending value.
@app.callback(
    Output("my-dropdown", "options"),
    Input("my-dropdown", "search_value"),
)
def custom_search_sort(search_value):
    if not search_value:
        raise PreventUpdate
    return [o for o in my_options if search_value in str(o)]
```
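A sketch of one workaround, assuming the re-sorting only kicks in for integer-like values: prefix the values so they are plain strings, and strip the prefix wherever the value is consumed:

```python
from dash import Dash, html, dcc

app = Dash(__name__)

# non-integer-like values appear to keep the original option order
my_options = [
    {"label": "three *", "value": "id-3"},
    {"label": "two *", "value": "id-2"},
    {"label": "one *", "value": "id-1"},
]

app.layout = html.Div(
    [dcc.Dropdown(id="my-dropdown", options=my_options, searchable=True)]
)

# wherever the selection is used, recover the integer:
# selected_id = int(value.removeprefix("id-"))
```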
|
open
|
2023-10-18T20:02:10Z
|
2024-08-13T19:41:22Z
|
https://github.com/plotly/dash/issues/2667
|
[
"bug",
"P3"
] |
TGeary
| 0
|
keras-team/keras
|
machine-learning
| 20,058
|
"No gradients provided for any variable." when variable uses an integer data type
|
When using an integer data type for a trainable variable, training will always throw a "No gradients provided for any variable." `ValueError`. Here is a very simple example to reproduce the issue:
```python
import keras
import tensorflow as tf
import numpy as np

variable_dtype = tf.int32
# variable_dtype = tf.float32  # Uncommenting this fixes the issue

class BugTestLayer(keras.layers.Layer):
    # Layer is just y = self.var * x
    def build(self, input_shape):
        self.var = self.add_variable(
            (1,), initializer="zeros", dtype=variable_dtype)

    def call(self, input):
        return input * self.var

input_layer = keras.Input((1,), dtype=tf.int32)
test_layer = BugTestLayer()
output = test_layer(input_layer)
model = keras.Model(inputs=[input_layer], outputs=[output])

values = np.array([[i] for i in range(1000)])

model.compile(
    loss=[keras.losses.MeanSquaredError()],
    optimizer=keras.optimizers.RMSprop(),
    metrics=[keras.metrics.MeanSquaredError()],
)

# This will always raise a `ValueError: No gradients provided for any variable.`
# when using an integer type
history = model.fit(values, values, batch_size=1, epochs=2)
```
Unfortunately this error message is vague enough that the root of the issue is unclear. The full error message is:
```console
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
File /workspaces/a8d3c9dff26b642ae4afaf1584a676512f9b8e8ce73bdaa449ed5ed373627eb7/test_bug.py:14
3 values = np.array([
4 [i]
5 for i in range(1000)
6 ])
8 model.compile(
9 loss=[keras.losses.MeanSquaredError()],
10 optimizer=keras.optimizers.RMSprop(),
11 metrics=[keras.metrics.MeanSquaredError()],
12 )
---> 14 history = model.fit(values, values, batch_size=1, epochs=2)
File ~/.local/lib/python3.12/site-packages/keras/src/utils/traceback_utils.py:122, in filter_traceback.<locals>.error_handler(*args, **kwargs)
119 filtered_tb = _process_traceback_frames(e.__traceback__)
120 # To get the full stack trace, call:
121 # `keras.config.disable_traceback_filtering()`
--> 122 raise e.with_traceback(filtered_tb) from None
123 finally:
124 del filtered_tb
File ~/.local/lib/python3.12/site-packages/keras/src/optimizers/base_optimizer.py:662, in BaseOptimizer._filter_empty_gradients(self, grads, vars)
659 missing_grad_vars.append(v.name)
661 if not filtered_grads:
--> 662 raise ValueError("No gradients provided for any variable.")
663 if missing_grad_vars:
664 warnings.warn(
665 "Gradients do not exist for variables "
666 f"{list(reversed(missing_grad_vars))} when minimizing the loss."
667 " If using `model.compile()`, did you forget to provide a "
668 "`loss` argument?"
669 )
ValueError: No gradients provided for any variable.
```
If it helps, here is what the model looks like:

Full disclosure: I'm not certain if this is a Keras bug or a Tensorflow bug. If this is suspected to be a Tensorflow bug, let me know and I'll file an issue there instead.
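For what it's worth, gradients are only defined for floating-point parameters, so this may be expected behavior hidden behind a poor error message. A minimal sketch of the usual fix, keeping the variable in float32 and casting the integer input up (using `add_weight`, the non-deprecated spelling of `add_variable`):

```python
import keras
import tensorflow as tf

class FixedBugTestLayer(keras.layers.Layer):
    def build(self, input_shape):
        # gradients require a continuous dtype, so the trainable
        # variable is float32 even if the data is integer
        self.var = self.add_weight(
            shape=(1,), initializer="zeros", dtype=tf.float32
        )

    def call(self, inputs):
        # cast the integer input so the multiply stays differentiable
        return tf.cast(inputs, tf.float32) * self.var
```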
|
closed
|
2024-07-29T05:39:48Z
|
2024-08-28T04:48:49Z
|
https://github.com/keras-team/keras/issues/20058
|
[
"type:support"
] |
solidDoWant
| 9
|
akurgat/automating-technical-analysis
|
plotly
| 7
|
I am getting the following error in the system
|
File "/home/appuser/venv/lib/python3.9/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 564, in _run_script
exec(code, module.__dict__)
File "/app/automating-technical-analysis/Trade.py", line 9, in <module>
data_update()
File "/app/automating-technical-analysis/app/data_sourcing.py", line 26, in data_update
update_market_data()
File "/app/automating-technical-analysis/app/update_market_data.py", line 9, in update_market_data
df_binance = pd.DataFrame(data['symbols'])[pd.DataFrame(data['symbols'])['status'] == 'TRADING'][['symbol', 'baseAsset', 'quoteAsset']]
|
closed
|
2023-01-22T11:36:32Z
|
2023-01-22T12:12:35Z
|
https://github.com/akurgat/automating-technical-analysis/issues/7
|
[] |
tempest298
| 1
|
lepture/authlib
|
django
| 609
|
JWTBearerTokenValidator does not pass the now and leeway parameters to claims.validate
|
```python
# authlib\oauth2\rfc7523\validator.py
class JWTBearerTokenValidator:
    def authenticate_token(self, token_string):
        try:
            claims = jwt.decode( ... )
            claims.validate()
            return claims
        except JoseError as error:
            ...
```
But:
```python
# authlib\jose\rfc7519\claims.py
class JWTClaims(BaseClaims):
    ...
    def validate(self, now=None, leeway=0):
        ...
```
I see the solution as:
```python
def authenticate_token(self, token_string, now=None, leeway=0):
    ...
    claims.validate(now, leeway)
    ...
```
The bug appears in the testing phase.
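Until this is fixed upstream, a sketch of a possible workaround is to subclass the validator and re-implement `authenticate_token` so `validate()` receives a leeway; the `public_key`/`claims_options` attribute names below are assumptions based on the excerpt above and may need adjusting to the installed version:

```python
from authlib.jose import jwt
from authlib.jose.errors import JoseError
from authlib.oauth2.rfc7523 import JWTBearerTokenValidator

class LeewayJWTBearerTokenValidator(JWTBearerTokenValidator):
    leeway = 60  # seconds of allowed clock skew

    def authenticate_token(self, token_string):
        try:
            # attribute names assumed from the validator's source
            claims = jwt.decode(
                token_string,
                self.public_key,
                claims_options=self.claims_options,
            )
            claims.validate(leeway=self.leeway)
            return claims
        except JoseError:
            return None
```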
|
open
|
2023-12-21T07:58:03Z
|
2025-02-20T20:15:21Z
|
https://github.com/lepture/authlib/issues/609
|
[
"bug",
"good first issue",
"jose"
] |
danilovmy
| 0
|
qubvel-org/segmentation_models.pytorch
|
computer-vision
| 210
|
How to continue training an existing model, and how to interpret model accuracy on the test dataset?
|
Hi,
I've trained a model with a custom dataset following the example tutorial. The results are very good.
I have two questions at this stage. I get the following result when evaluating on the test set. Should I interpret the model's accuracy as 98%? If not, is there a way to calculate its accuracy?

And if I want to continue training from the last saved best model, what should I do?
|
closed
|
2020-05-18T19:11:01Z
|
2020-05-20T02:40:56Z
|
https://github.com/qubvel-org/segmentation_models.pytorch/issues/210
|
[] |
talatccan
| 4
|
skypilot-org/skypilot
|
data-science
| 4,350
|
[AWS] Not robust identity checking
|
<!-- Describe the bug report / feature request here -->
I created a `~/.aws/credentials` file with a wrong key, and I logged in via `aws sso` with the `AWS_PROFILE` env var set. The AWS CLI picks up the SSO credentials correctly with `aws configure list` and `aws sts get-caller-identity`, but `sky check aws` shows
```
Details: `aws sts get-caller-identity` failed with error: [botocore.exceptions.ClientError] An error occurred (SignatureDoesNotMatch) when calling the GetCallerIdentity operation: The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details..
```
|
closed
|
2024-11-13T21:34:06Z
|
2024-12-19T09:31:33Z
|
https://github.com/skypilot-org/skypilot/issues/4350
|
[] |
Michaelvll
| 1
|
lorien/grab
|
web-scraping
| 72
|
Store cookies separated by domain
|
In the current version of grab, cookies are stored as a dict of name => value.
Please implement cookie storage closer to the browser model, preserving information about the path, domain, etc.
``` javascript
{
"domain": ".github.com",
"expirationDate": 1476761637,
"hostOnly": false,
"httpOnly": false,
"name": "_ga",
"path": "/",
"secure": false,
"session": false,
"storeId": "0",
"value": "GA1.2.123456.78910",
"id": 1
}
```
P.S. I've run into a tricky Bitrix-based site that wouldn't serve content when cookie handling differs from the browser's cookie-storage model (this is the most likely cause; if anyone knows the details of this protection mechanism, please share :) ).
Thanks a lot
|
closed
|
2014-10-19T03:49:53Z
|
2014-10-20T08:51:03Z
|
https://github.com/lorien/grab/issues/72
|
[] |
userlond
| 2
|
sanic-org/sanic
|
asyncio
| 2,348
|
RFC: Multi-application serve
|
## Background
Running multiple applications has been possible in Sanic for a long time. It does, however, take a pretty decent understanding of the operation of the Sanic server to do it properly. Things to consider include:
- loop management
- listener execution
- startup time optimizations
- multi-worker and auto-reload multiprocessing
- cross-platform support issues (looking at you Windows)
Over the course of the last six months in particular, I have seen a noticeable rise in the number of support queries where people are trying to do this. Whether it is to have a HTTP >> HTTPS proxy, web server plus chatbot, or serve multiple routes on different host/port configurations, the point is that there is a definite rise in the number of people looking to run multiple Sanic applications side-by-side.
With the efforts we have taken to make sure that multiple applications can co-exist, it seems only natural that we provide a first-class API for running them.
## Proposed solution
We create a _new_ high-level API that can easily handle this.
First, we create a method with an **identical** signature to `app.run`. Its job is to do everything that `app.run` would do, cache the `server_settings` for later usage, and stop at the point where it starts the loop.
```python
app1 = Sanic("ApplicationOne")
app2 = Sanic("ApplicationTwo")
app1.prepare(port=7777)
app2.prepare(unix="/path/to/sock")
```
Second, there is a single class method that runs the loop and executes the server(s).
```python
Sanic.serve()
```
The first application defined in the application registry becomes the "primary." That application is run normally, as would otherwise happen with `app.run`. Any other applications in the registry that have been "prepared" will be set up using `create_server` such that all of their lifecycle events are properly run.
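Putting the two snippets together, a complete run would look roughly like this (a sketch assuming the API lands exactly as proposed):

```python
from sanic import Sanic
from sanic.response import text

app1 = Sanic("ApplicationOne")
app2 = Sanic("ApplicationTwo")

@app1.get("/")
async def one(request):
    return text("one")

@app2.get("/")
async def two(request):
    return text("two")

if __name__ == "__main__":
    # prepare() caches server settings without starting the loop
    app1.prepare(port=7777)
    app2.prepare(port=7778)
    # a single loop then serves both applications
    Sanic.serve()
```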
|
closed
|
2022-01-02T20:40:50Z
|
2022-01-16T10:03:31Z
|
https://github.com/sanic-org/sanic/issues/2348
|
[
"RFC"
] |
ahopkins
| 6
|
iperov/DeepFaceLab
|
machine-learning
| 5,496
|
AMP model on CPU
|
Models with the AMP architecture do not run on CPU (on a station without a GPU, or with the `--cpu-only` param).
Tested on Windows 10 / Ubuntu 20.04LTS
Tensorflow 2.4, 2.7.1
Python 3.7.11, 3.9.7
Latest DFL version.
|
open
|
2022-03-19T10:32:18Z
|
2023-06-08T23:18:47Z
|
https://github.com/iperov/DeepFaceLab/issues/5496
|
[] |
andyst75
| 1
|
TracecatHQ/tracecat
|
pydantic
| 108
|
[v0] Case management
|
closed
|
2024-04-29T18:09:07Z
|
2024-04-29T18:10:02Z
|
https://github.com/TracecatHQ/tracecat/issues/108
|
[
"enhancement",
"frontend"
] |
daryllimyt
| 1
|
|
chatanywhere/GPT_API_free
|
api
| 60
|
Could you add other payment methods? Overseas users can't use Alipay
|
As the title says:
could you open a Taobao shop,
or integrate Stripe as a payment channel?
|
closed
|
2023-07-17T03:17:00Z
|
2023-08-03T02:06:22Z
|
https://github.com/chatanywhere/GPT_API_free/issues/60
|
[] |
ooiplh20
| 1
|
open-mmlab/mmdetection
|
pytorch
| 11,972
|
'DetDataSample' object has no attribute 'text' when finetuning on a grounding task
|
I tried to train on the flickr30 dataset, but I always get `'DetDataSample' object has no attribute 'text'`. My training script is
`./tools/dist_train.sh configs/mm_grounding_dino/grounding_dino_swin-t_pretrain_obj365_goldg.py 1`
and in grounding_dino_swin-t_pretrain_obj365_goldg.py I just use flickr30 as the train dataset.
|
closed
|
2024-09-25T06:32:49Z
|
2024-09-25T06:40:11Z
|
https://github.com/open-mmlab/mmdetection/issues/11972
|
[] |
xingmimfl
| 0
|
strawberry-graphql/strawberry
|
graphql
| 3,129
|
relay.node on subtype needs initialization
|
<!-- Provide a general summary of the bug in the title above. -->
Consider the code
```python
import strawberry
from strawberry import Schema, relay

@strawberry.type
class Sub:
    node: relay.Node = relay.node()
    nodes: list[relay.Node] = relay.node()

@strawberry.type
class Query:
    @strawberry.field
    @staticmethod
    def sub():
        return Sub

schema = Schema(query=Query)
```
This code should work but doesn't; it requires an initialization for `node`.
Note there is a workaround (initialization with None or an empty list, or, I think, whatever):
``` python
    ...
    @strawberry.field
    @staticmethod
    def sub():
        return Sub(node=None, nodes=None)
```
## Describe the Bug
<!-- A clear and concise description of what the bug is. -->
## System Information
- Operating system:
- Strawberry version (if applicable): since strawberry-django-plus times after a rewrite, there is a bugreport in the old repo
## Additional Context
<!-- Add any other relevant information about the problem here. -->
|
open
|
2023-10-02T13:04:02Z
|
2025-03-20T15:56:24Z
|
https://github.com/strawberry-graphql/strawberry/issues/3129
|
[
"bug"
] |
devkral
| 1
|
NVlabs/neuralangelo
|
computer-vision
| 125
|
switch to analytic gradients after delta is small enough?
|
Would this bring any performance degradation, or might it speed up training a bit?
|
closed
|
2023-09-25T08:25:38Z
|
2023-09-26T04:50:17Z
|
https://github.com/NVlabs/neuralangelo/issues/125
|
[] |
blacksino
| 1
|
recommenders-team/recommenders
|
deep-learning
| 1,381
|
[FEATURE] New versioning
|
### Description
<!--- Describe your expected feature in detail -->
We are going to change the versioning name -> https://github.com/microsoft/recommenders/tags
| Original | Proposal |
|----------|-----------|
| 2021.2 | 0.5.0 |
| 2020.8 | 0.4.0 |
| 2019.09 | 0.3.1 |
| 2019.06 | 0.3.0 |
| 2019.02 | 0.2.0 |
| 0.1.1 | 0.1.1 |
| 0.1.0 | 0.1.0 |
The proposal is based on the delta I see between versions; for example, the delta between 0.2.0 and 0.3.0 is higher than between 0.3.0 and 0.3.1.
Feel free to add other options
### Expected behavior with the suggested feature
<!--- For example: -->
<!--- *Adding algorithm xxx will help people understand more about xxx use case scenarios. -->
### Other Comments
|
closed
|
2021-04-22T13:49:28Z
|
2021-05-04T12:12:31Z
|
https://github.com/recommenders-team/recommenders/issues/1381
|
[
"enhancement"
] |
miguelgfierro
| 0
|
lyhue1991/eat_tensorflow2_in_30_days
|
tensorflow
| 33
|
3-2 raises an error when run on GPU; I looked into it and found that some people can run it on CPU
|
The error is as follows:
```
(0) Internal: No unary variant device copy function found for direction: 1 and Variant type_index: class tensorflow::data::`anonymous namespace'::DatasetVariantWrapper
[[{{node while_input_4/_12}}]]
(1) Internal: No unary variant device copy function found for direction: 1 and Variant type_index: class tensorflow::data::`anonymous namespace'::DatasetVariantWrapper
[[{{node while_input_4/_12}}]]
[[Func/while/body/_1/input/_60/_20]]
```
|
closed
|
2020-04-23T06:27:33Z
|
2020-04-24T03:00:25Z
|
https://github.com/lyhue1991/eat_tensorflow2_in_30_days/issues/33
|
[] |
since2016
| 2
|
charlesq34/pointnet
|
tensorflow
| 165
|
about the dataset
|
Hi, I'm very glad to see this excellent paper. But I have a question about it: does it use supervised or unsupervised classification?
|
open
|
2019-03-10T02:46:06Z
|
2019-05-21T19:58:53Z
|
https://github.com/charlesq34/pointnet/issues/165
|
[] |
zhonghuajiuzhou12138
| 1
|
satwikkansal/wtfpython
|
python
| 370
|
Add translation for Farsi
|
Expected time to finish: 3 weeks. I'll start working on it ASAP after confirmation.
|
open
|
2025-02-22T20:00:21Z
|
2025-02-28T10:18:50Z
|
https://github.com/satwikkansal/wtfpython/issues/370
|
[] |
Alavi1412
| 10
|
allenai/allennlp
|
data-science
| 5,495
|
Updating model for Coreference Resolution
|
I noticed a new SoTA on Ontonotes 5.0 Coreference task on [paperswithcode](https://paperswithcode.com/paper/word-level-coreference-resolution#code)
The author provides the model (.pt) file in [their git repo](https://github.com/vdobrovolskii/wl-coref#preparation) and claims it is faster (since it uses RoBERTa) while improving on SpanBERT's avg F1 score.
What would be the steps to use this checkpoint in the AllenNLP Predictor?
|
closed
|
2021-12-06T05:26:48Z
|
2022-01-06T15:57:46Z
|
https://github.com/allenai/allennlp/issues/5495
|
[
"question"
] |
aakashb95
| 2
|
approximatelabs/sketch
|
pandas
| 10
|
Wrong result for query "Get the top 5 grossing states" in sample colab
|
According to the data, the total value for every row should be calculated as Price Each * Quantity Ordered. In the sample colab, the library sums the prices but doesn't take quantity into account.
|
closed
|
2023-01-27T08:34:38Z
|
2023-01-27T22:08:53Z
|
https://github.com/approximatelabs/sketch/issues/10
|
[] |
lukyanenkomax
| 1
|
streamlit/streamlit
|
streamlit
| 10,022
|
Setting max duration configuration parameter for st.audio_input()
|
### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [X] I added a descriptive title and summary to this issue.
### Summary
st.audio_input() is a great new addition to the Streamlit library; however, it is missing a critical element: one cannot set a maximum duration for the recording. In its present form, it is very easy for a user to start recording but forget to stop, which would result in the buffer (memory or disk) filling up unbounded. Can you please add a configuration parameter for maximum recording duration to st.audio_input()?
### Why?
The audio buffer filling up when the user forgets to stop recording is problematic; it can crash the computer if left unattended.
### How?
Add a new configuration parameter, max_duration, to the st.audio_input() API. Once the recording exceeds this duration, it will automatically stop regardless of whether the user has pressed the stop button.
### Additional Context
The present API call is st.audio_input(label, *, key=None, help=None, on_change=None, args=None, kwargs=None, disabled=False, label_visibility="visible"), which lacks a parameter for the max recording duration.
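Until such a parameter exists, a sketch of an after-the-fact guard (this does not stop the recording, it only rejects overly long ones; it assumes the widget returns WAV data):

```python
import io
import wave

import streamlit as st

MAX_SECONDS = 30  # chosen limit, an application-level assumption

audio = st.audio_input("Record a message")
if audio is not None:
    # measure the recording's duration from the WAV header
    with wave.open(io.BytesIO(audio.getvalue())) as wav:
        duration = wav.getnframes() / wav.getframerate()
    if duration > MAX_SECONDS:
        st.error(f"Recording is {duration:.0f}s; the limit is {MAX_SECONDS}s.")
    else:
        st.audio(audio)
```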
|
open
|
2024-12-13T23:34:50Z
|
2024-12-16T21:03:21Z
|
https://github.com/streamlit/streamlit/issues/10022
|
[
"type:enhancement",
"feature:st.audio_input"
] |
iscoyd
| 3
|
chezou/tabula-py
|
pandas
| 42
|
wrapper.py and tabula jar are missing
|
# Summary of your issue
I can import the tabula library, but its functions are inaccessible. I checked the directory \site-packages\tabula: wrapper.py and the tabula jar file are missing.
# Environment
Write and check your environment.
- [x] `python --version`: Python 3.6.1 :: Anaconda 4.4.0 (64-bit)
- [x] `java -version`: java version "1.8.0_131"
- [x] OS and it's version: Windows 10 64 bit
- [ ] Your PDF URL:
# What did you do when you faced the problem?
I tried to manually place them in the directory and run again. But it still doesn't work.
## Example code:
```
df = tb.read_pdf("D:\\pdf table extract\\clarkfilterdcccrossref.pdf")
```
## Output:
```
AttributeError Traceback (most recent call last)
<ipython-input-2-df8599025f3b> in <module>()
1 #df = pd.DataFrame()
----> 2 df = tb.read_pdf("D:\\pdf table extract\\clarkfilterdcccrossref.pdf")
3 tb.convert_into("D:\\pdf table extract\\clarkfilterdcccrossref.pdf","output.csv",output_format="csv")
AttributeError: module 'tabula' has no attribute 'read_pdf'
```
## What did you intend to be?
Read the pdf file.
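This symptom usually means a different package literally named `tabula` shadows `tabula-py`. A sketch of the usual fix, assuming that is the cause here:

```python
# In a shell first, remove the shadowing package and reinstall tabula-py:
#   pip uninstall -y tabula
#   pip install tabula-py
import tabula

df = tabula.read_pdf(
    "D:\\pdf table extract\\clarkfilterdcccrossref.pdf", pages="all"
)
```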
|
closed
|
2017-07-11T11:44:44Z
|
2017-07-27T02:37:02Z
|
https://github.com/chezou/tabula-py/issues/42
|
[] |
Wind1002
| 2
|
iperov/DeepFaceLab
|
machine-learning
| 931
|
PC Stability
|
Hello, it's me again. I am a little bit concerned about my PC.
My PC configuration is:
MODEL:- HP Pavailion 15
PROCESSOR:- Intel(R) Core i7-9750H @ 2.60 GHz
RAM:- 8 GB DDR4
GRAPHICS CARD:- 4 GB NVIDIA GEFORCE GTX 1650
OPERATING SYSTEM:- Windows 10 x64 bit.
I am using SAEHD for training, i.e. training on the GPU.
The problem is that whenever I am running the trainer module [(5.XSeg) train.bat]:
1. My laptop's temperature rises a bit after some time, say after 1 hour or so. Is this fine?
2. The trainer module is taking about 17 hours to mask 266 segmented images. Is this normal? My fan speed is rising exponentially.
Please help.
|
open
|
2020-10-29T08:09:45Z
|
2023-06-08T21:22:01Z
|
https://github.com/iperov/DeepFaceLab/issues/931
|
[] |
Aeranstorm
| 2
|
tensorflow/tensor2tensor
|
machine-learning
| 1,639
|
Decoder outputs blank lines for translation
|
### Description
Using the transformer model with transformer_base_single_gpu for En-De machine translation. For certain inputs, the decoder output is <EOS> <PAD> <PAD> ... (essentially a blank line). I tried to check the softmax output at the decoder end for the probabilities, but I am unable to find the relevant code. Any help is appreciated.
### Environment information
```
OS: Linux
$ pip freeze | grep tensor
mesh-tensorflow==0.0.5
tensor2tensor==1.13.4
tensorboard==1.14.0
tensorflow==1.13.1
tensorflow-datasets==1.0.2
tensorflow-estimator==1.14.0
tensorflow-gpu==1.14.0
tensorflow-metadata==0.14.0
tensorflow-probability==0.7.0
$ python -V
Python 3.7.3
```
### For bugs: reproduction and error logs
Blank line as inference output.
in log_decode_results
inputs= [[ 862]
[ 34]
[ 9]
[ 540]
[ 4802]
[ 36]
[ 939]
[ 570]
[13646]
[ 3808]
[ 15]
[14945]
[ 2773]
[ 10]
[ 1583]
[ 180]
[13646]
[21722]
[ 8219]
[13332]
[ 41]
[11789]
[ 101]
[ 1]]
outputs= [1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
```
# Steps to reproduce:
t2t-trainer --data_dir=$DATA_DIR --output_dir=$OUTPUT_DIR --problem=translate_ende_wmt32k --model=transformer --hparams_set=transformer_big_single_gpu --hparams="batch_size=4096,max_length=150"
t2t-decoder --data_dir=$DATA_DIR --output_dir=$OUTPUT_DIR --problem=translate_ende_wmt32k --model=transformer --hparams_set=transformer_big_single_gpu --hparams="batch_size=4096, max_length=150" --decode_hparams="beam_size=4,alpha=0.6" --decode_from_file=$FILE_TO_DECODE --decode_to_file=$DECODED_FILE
```
```
# Error logs:
...
```
|
open
|
2019-07-23T18:04:02Z
|
2019-07-23T18:04:02Z
|
https://github.com/tensorflow/tensor2tensor/issues/1639
|
[] |
minump
| 0
|
jmcarpenter2/swifter
|
pandas
| 155
|
No time gain
|
I have a data frame with 20k rows and I am applying a non-vectorizable function to each row. However, there is no benefit gained, since it falls back to a normal pandas apply with a single process.
```python
ret = df.swifter.set_npartitions(npartitions=2).set_dask_threshold(dask_threshold=0.01).allow_dask_on_strings(enable=True).apply(lambda Y: interpol_row(Y.to_frame().T, meas_columns), axis=1)
```
What should I do, and how can I use this library to get a benefit? The calculation took 10 mins, which is costly.
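If swifter keeps falling back, one alternative is plain multiprocessing; a sketch, assuming the row function is picklable (`interpol_row` is stubbed out here), and noting that pickling 20k rows may eat much of the gain for cheap functions:

```python
from multiprocessing import Pool

import pandas as pd

def process_row(row):
    # stand-in for interpol_row(row.to_frame().T, meas_columns);
    # must be a top-level function so it can be pickled
    return row.sum()

def parallel_apply(df, func, workers=4):
    rows = [row for _, row in df.iterrows()]
    with Pool(workers) as pool:
        results = pool.map(func, rows)
    return pd.Series(results, index=df.index)
```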
|
closed
|
2021-06-24T09:42:04Z
|
2022-07-07T17:14:16Z
|
https://github.com/jmcarpenter2/swifter/issues/155
|
[] |
quancore
| 3
|
QingdaoU/OnlineJudge
|
django
| 101
|
Static resources return 404 errors when using an nginx reverse proxy
|
Before submitting an issue, please:
- carefully read the documentation: https://github.com/QingdaoU/OnlineJudge/wiki
- search and review past issues
Then, when submitting the issue, please clearly describe:
- what operation you were performing when you ran into the problem
- what the error message is; if you cannot see one, check the log folder. Please wrap long error messages in code-block markers.
- what you tried in order to fix the problem
- for page issues, include the browser version and, ideally, screenshots
|
closed
|
2017-12-07T08:15:17Z
|
2017-12-07T11:22:33Z
|
https://github.com/QingdaoU/OnlineJudge/issues/101
|
[] |
starfire-lzd
| 0
|
chezou/tabula-py
|
pandas
| 242
|
How to read a large table spread over multiple pages
|
I am using tabula_py to read tables from a PDF.
Some of them are big, and I have a lot of cases where a table spans more than one page. The issue is that tabula_py treats each page as a new table, instead of reading it as one large table.
|
closed
|
2020-06-12T18:15:12Z
|
2020-06-12T18:15:27Z
|
https://github.com/chezou/tabula-py/issues/242
|
[] |
idea1002
| 1
|
deezer/spleeter
|
tensorflow
| 767
|
[Discussion] Help with Docker on Synology
|
Hello,
Could someone please explain me how to set up and use Spleeter under Synology Docker ?
Add image => ok
Add volume => ok
Add env MODEL_DIRECTORY, AUDIO_OUT, AUDIO_IN => ok
On start => error :
```
spleeter: error: the following arguments are required: command
```
Thanks
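The error means the container started without a subcommand: the image's entrypoint is the spleeter CLI itself, so Synology's "command" field (or the tail of a docker run) must carry the arguments. A sketch, with illustrative paths (the exact `separate` flags vary by spleeter version):

```shell
docker run \
    -v /volume1/music:/input \
    -v /volume1/output:/output \
    deezer/spleeter \
    separate -o /output -i /input/song.mp3
```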
|
open
|
2022-05-23T20:36:49Z
|
2022-05-31T07:58:10Z
|
https://github.com/deezer/spleeter/issues/767
|
[
"question"
] |
Silbad
| 1
|
BeanieODM/beanie
|
asyncio
| 644
|
[BUG] get_settings() does not return the inherited settings
|
**Describe the bug**
When inheriting settings from a common class, get_settings() does not return the inherited settings.
**To Reproduce**
```
from beanie import Document, init_beanie
from motor.motor_asyncio import AsyncIOMotorClient
import asyncio
import datetime

class GlobalSettings(Document):
    class Settings:
        bson_encoders = {
            datetime.datetime: lambda x: x,
            datetime.date: lambda x: datetime.datetime.combine(x, datetime.time.min),
        }

class MyDates(GlobalSettings):
    d: datetime.date
    dt: datetime.datetime

    class Settings(GlobalSettings.Settings):
        name = 'mydates'

async def example():
    client = AsyncIOMotorClient("mongodb://localhost:27019/my_database")
    await init_beanie(database=client.get_database(), document_models=[MyDates])
    doc = MyDates(d=datetime.date.today(), dt=datetime.datetime.now())
    print(doc.Settings.bson_encoders)
    print(doc.get_settings().bson_encoders)

if __name__ == "__main__":
    asyncio.run(example())
```
**Expected behavior**
doc.get_settings().bson_encoders (in this case) should return the same as doc.Settings.bson_encoders, i.e. the values from the inherited class.
**Additional context**
If
```
class Settings(GlobalSettings.Settings):
    name = 'mydates'
```
is removed, get_settings() works as expected.
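Until inherited inner Settings are resolved correctly, a sketch of a workaround is to declare the shared encoders in a module-level dict and assign them directly in each Document's own Settings:

```python
import datetime

from beanie import Document

COMMON_ENCODERS = {
    datetime.datetime: lambda x: x,
    datetime.date: lambda x: datetime.datetime.combine(x, datetime.time.min),
}

class MyDates(Document):
    d: datetime.date
    dt: datetime.datetime

    class Settings:
        name = "mydates"
        # assigned directly rather than inherited, so get_settings()
        # picks the encoders up
        bson_encoders = COMMON_ENCODERS
```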
|
closed
|
2023-08-07T13:06:04Z
|
2023-08-22T10:09:04Z
|
https://github.com/BeanieODM/beanie/issues/644
|
[] |
piercsi
| 4
|
sherlock-project/sherlock
|
python
| 2,149
|
Watson
|
### Description
I created a GUI Assistant For Sherlock: [Watson](https://github.com/tf7software/Watson)
|
open
|
2024-06-02T03:51:13Z
|
2024-06-02T03:53:49Z
|
https://github.com/sherlock-project/sherlock/issues/2149
|
[
"enhancement"
] |
tf7software
| 1
|
scikit-optimize/scikit-optimize
|
scikit-learn
| 750
|
Callback with Optimizer()
|
Hi, could one add a callback to the Optimizer() class so that the optimizer can be saved (and loaded afterwards)?
Use case:
We have a set of 3 weights for a model which need to be optimized, and all we do is measure the click-through rate (CTR), which has to be maximized. Since we don't know the function behind the CTR, we use Optimizer instead of gp_minimize.
We have to wait one day for the CTR to be measured, and therefore would like to save and load our optimizer.
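In the meantime, the ask/tell interface plus `skopt.dump`/`skopt.load` already covers this; a sketch (remember to persist the asked point alongside the pickle, since it is only recorded in the optimizer once `tell` is called):

```python
from skopt import Optimizer, dump, load

opt = Optimizer(dimensions=[(0.0, 1.0)] * 3)

weights = opt.ask()          # the 3 weights to deploy for the day
dump(opt, "optimizer.pkl")   # persist the optimizer state
print("deploy:", weights)    # store these values alongside the pickle

# ...one day later, once the CTR has been measured...
opt = load("optimizer.pkl")
measured_ctr = 0.042
opt.tell(weights, -measured_ctr)  # skopt minimizes, so negate the CTR
dump(opt, "optimizer.pkl")
```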
|
closed
|
2019-03-06T13:37:13Z
|
2019-04-21T18:33:28Z
|
https://github.com/scikit-optimize/scikit-optimize/issues/750
|
[] |
RakiP
| 3
|
roboflow/supervision
|
machine-learning
| 1,440
|
Invalid validations on KeyPoints class?
|
### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar bug report.
### Bug
I'm trying to use the KeyPoints class, with my xy data as [[x1, y1], [x2, y2], ...].
Based on the information in the docstring this seems to be the input needed. However, I run into an error.
```
ValueError: xy must be a 2D np.ndarray with shape ((68, 2),), but got shape (68, 2)
```

The shape-check validation does not seem to be consistent. Either that, or I may be doing something wrong.

### Environment
Supervision: 0.22.0
Python: 3.10
### Minimal Reproducible Example
```python
sv.KeyPoints(xy=np.array([[1,3], [1,3], [1,3]]))
```
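For what it's worth, the garbled error message aside, the class appears to expect a 3D array of shape `(num_objects, num_keypoints, 2)`; a sketch of a workaround under that assumption:

```python
import numpy as np
import supervision as sv

# one object with 68 keypoints: shape (1, 68, 2), not (68, 2)
xy = np.random.rand(68, 2).astype(np.float32)
key_points = sv.KeyPoints(xy=xy[np.newaxis, ...])
```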
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR!
|
closed
|
2024-08-10T23:46:39Z
|
2024-08-27T11:07:05Z
|
https://github.com/roboflow/supervision/issues/1440
|
[
"bug"
] |
Chappie74
| 7
|
yinkaisheng/Python-UIAutomation-for-Windows
|
automation
| 237
|
Setting IsReadOnly to False on an EditControl
|
How should I write code to set an EditControl's IsReadOnly to False, and then call setValue?
|
open
|
2023-02-15T10:12:19Z
|
2023-03-18T13:31:24Z
|
https://github.com/yinkaisheng/Python-UIAutomation-for-Windows/issues/237
|
[] |
liyu133
| 1
|
3b1b/manim
|
python
| 1,121
|
Cannot set different colors for specific parts of a Tex expression.
|
Let's say I have:
```
p_calc = TexMobject("p", "=", r"\frac{n}{q}")
p_calc.set_color_by_tex_to_color_map({
    "n": RED,
    "p": BLUE,
    "q": GREEN,
})
```
where `p_calc` is `p = n/q`.
I want p to be colored blue, n red, and q green.
I have tried different methods, even `\frac{{\color{Red} a}}{b}`, but it still colors the whole `n/q` green, or the TeX is not accepted.
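A sketch of the usual trick in 3b1b-style manim: split the fraction with `\over` so n and q land in separate submobjects that the color map can target (the import line assumes the manimlib of that era):

```python
from manimlib.imports import *  # assumes 3b1b manim, circa 2020

class ColoredFraction(Scene):
    def construct(self):
        # "{n", "\\over", "q}" compile to one fraction but split into
        # separate submobjects, so each piece can take its own color
        p_calc = TexMobject("p", "=", "{n", "\\over", "q}")
        p_calc.set_color_by_tex_to_color_map(
            {"n": RED, "p": BLUE, "q": GREEN}
        )
        self.add(p_calc)
```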
|
closed
|
2020-06-02T05:03:36Z
|
2020-06-03T05:15:22Z
|
https://github.com/3b1b/manim/issues/1121
|
[] |
JimChr-R4GN4R
| 3
|
stanfordnlp/stanza
|
nlp
| 1,223
|
[QUESTION] How do I declare an empty stanza.models.common.doc.Document
|
In my code I use a Stanza pipeline twice, on two sections of an input document; imagine TITLE and BODY sections.
I normally insert the resulting Stanza Document in a MongoDB document.
It can happen that either the TITLE or the BODY section is missing, and therefore the corresponding Document is too.
To avoid complicating the MongoDB insert, I'd like to simply insert an empty Stanza Document.
So imagine I use:
```
if titletext != "":
    titlestanzadoc = do_stanza_nlp(
        input_text=titletext, nlpipe=nlpipe
    )
else:
    titlestanzadoc = ?????????????
```
How would I create the empty Document object? Thanks a lot.
P.S. This seems like overkill, right? stanzadoc = nlp("")
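Constructing the Document directly avoids running the pipeline at all; a sketch, assuming the constructor's first argument is the (empty) sentence list:

```python
from stanza.models.common.doc import Document

# an empty Document is simply one with no sentences
titlestanzadoc = Document([], text="")
```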
|
closed
|
2023-03-22T16:38:00Z
|
2023-03-22T16:58:33Z
|
https://github.com/stanfordnlp/stanza/issues/1223
|
[
"question"
] |
rjalexa
| 2
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.