| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
LibreTranslate/LibreTranslate
|
api
| 62
|
Control panel to manage API keys in a simpler way
|
It would be good to add a control panel for managing access to the API keys, since doing it from the command line is not very practical. In my opinion it would be a good idea.
What could that control panel include?
1. See the API keys that are currently enabled
2. Enable or disable an API key
3. Enable or disable the API key system
4. Other options for operating the website or API
That is my suggestion; it sounds like a good idea to me.
|
closed
|
2021-03-12T08:43:49Z
|
2021-11-19T00:57:03Z
|
https://github.com/LibreTranslate/LibreTranslate/issues/62
|
[
"enhancement"
] |
sawa-ko
| 2
|
pallets/flask
|
python
| 5,543
|
Pyright type errors: `src/flask/app.py`
|
Pyright reports type errors for `src/flask/app.py`:
```
flask/src/flask/app.py
flask/src/flask/app.py:352:20 - error: Expression of type "BufferedReader" is incompatible with return type "IO[AnyStr@open_resource]"
"BufferedReader" is incompatible with "IO[AnyStr@open_resource]"
Type parameter "AnyStr@IO" is invariant, but "bytes" is not the same as "AnyStr@open_resource" (reportReturnType)
flask/src/flask/app.py:1178:21 - error: Expression with type "Tuple[ResponseValue, HeadersValue] | Tuple[ResponseValue, int] | Tuple[ResponseValue, int, HeadersValue]" cannot be assigned to target tuple
Type "Tuple[ResponseValue, int, HeadersValue]" is incompatible with target tuple
Tuple size mismatch; expected 2 but received 3 (reportAssignmentType)
flask/src/flask/app.py:1205:28 - error: Argument of type "HeadersValue | int | None" cannot be assigned to parameter "status" of type "int | str | HTTPStatus | None" in function "__init__" (reportArgumentType)
flask/src/flask/app.py:1240:29 - error: Cannot assign to attribute "status" for class "Response" (reportAttributeAccessIssue)
flask/src/flask/app.py:1242:34 - error: Cannot assign to attribute "status_code" for class "Response"
Type "HeadersValue | int" is incompatible with type "int"
"Mapping[str, HeaderValue]" is incompatible with "int" (reportAttributeAccessIssue)
```
Command which was run:
```shell
source .venv/bin/activate
pyright
```
Environment:
- Python version: `3.12`
- Flask version: `3.1.0`
|
closed
|
2024-08-06T22:52:48Z
|
2024-08-23T00:06:51Z
|
https://github.com/pallets/flask/issues/5543
|
[] |
brendon-codes
| 3
|
HumanSignal/labelImg
|
deep-learning
| 153
|
Labeling round objects
|
Hi,
Marking round objects is quite hard. After creating a label I always have to correct it.
The idea for the enhancement is simple. Before the bounding box is created, a vertical and a horizontal line would be displayed across the image. After the starting point is placed, the lines would disappear. It would speed up labeling! See example below:

|
closed
|
2017-08-29T04:00:28Z
|
2017-08-30T03:43:02Z
|
https://github.com/HumanSignal/labelImg/issues/153
|
[
"enhancement"
] |
szymonk92
| 2
|
Esri/arcgis-python-api
|
jupyter
| 1,603
|
Create a web map with my own custom layer set as Basemap
|
Currently code to create a web map will only allow setting the Basemap with a designated ESRI Basemap. What I would like to be able to do is create a web map and set a custom created layer as the Basemap instead. At the moment the only way to do it is manually, which is tedious when you need to create several maps with a custom Basemap. Even if a custom Basemap has been created in the Organization, it cannot be used or called. The code only allows and sees designated Esri created and owned Basemaps.
This is a problem if your organization requires its own custom Basemap to be used, or if you are in a disconnected environment and can't access the Esri basemaps at all.
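For illustration, a hedged sketch of the current behaviour versus what is being requested (the URL, credentials, item id, and the ability to assign a custom item as the basemap are placeholders/assumptions, not existing API behaviour):
```python
from arcgis.gis import GIS
from arcgis.mapping import WebMap

gis = GIS("https://myorg.maps.arcgis.com", "user", "password")  # placeholder credentials

wm = WebMap()
wm.basemap = "topo-vector"  # works today: one of the designated Esri basemap names

custom_basemap_item = gis.content.get("<custom-basemap-item-id>")  # hypothetical item id
# wm.basemap = custom_basemap_item  # requested: accept an organization's custom basemap here

wm.save({"title": "Report map", "snippet": "Web map with a custom basemap", "tags": "report"})
```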
——————————————-
Use case examples:
#1
User is setting up disconnected/highly isolated environments and would like to automate creating several custom Basemap templates that would be shared to a custom group and then set as the Basemap gallery in the organization.
#2
User needs to create web map reports often, but is required to use a custom Basemap per the customer's or their organization's policy/requirements.
——————————————-
Here is the documentation for the part of the code that I am recommending be updated:
https://developers.arcgis.com/python/api-reference/arcgis.mapping.toc.html
Scroll down to following properties:
~ basemap
~ basemap_title
~ basemap_switcher
~ basemaps
|
closed
|
2023-07-14T16:42:50Z
|
2023-07-20T11:52:39Z
|
https://github.com/Esri/arcgis-python-api/issues/1603
|
[
"enhancement"
] |
previewqueen
| 1
|
CorentinJ/Real-Time-Voice-Cloning
|
tensorflow
| 719
|
Replacing the Tacotron synthesizer with Non-Attentive Tacotron
|
Working on it!
|
closed
|
2021-04-02T07:59:16Z
|
2021-04-20T03:01:11Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/719
|
[] |
Garvit-32
| 2
|
pydantic/logfire
|
pydantic
| 336
|
pip install logfire (without a version) installs a very old version of logfire
|
### Description
Hello,
with `pip install logfire` in a brand new venv, the version installed is:
```console
$ pip show logfire
Name: logfire
Version: 0.0.1
Summary: Better Logging
Home-page: https://github.com/logfire/logfire-python
Author: Samuel Colvin
Author-email: s@muelcolvin.com
License: MIT
Location: /home/xiang/git/fastapi-demo/.venv/lib/python3.11/site-packages
Requires: httpx
Required-by:
```
### Python, Logfire & OS Versions, related packages (not required)
```TOML
ubuntu 24.04 with python3.11
```
|
closed
|
2024-07-25T13:13:17Z
|
2024-07-25T14:28:41Z
|
https://github.com/pydantic/logfire/issues/336
|
[
"bug"
] |
copdips
| 2
|
pykaldi/pykaldi
|
numpy
| 262
|
Kaldi version used in PyKaldi is getting old
|
It seems that the Kaldi version is getting old, and models trained with newer Kaldi versions might not run in PyKaldi anymore (I have a TED-LIUM v3 model that uses ivector options that PyKaldi doesn't know). What would it take to update the Kaldi version in PyKaldi? The main page mentions:
"you might want or need to update Kaldi installation used for building PyKaldi. Rerunning the relevant install script in tools directory should update the existing installation. If this does not work, please open an issue."
"PyKaldi compatible fork of Kaldi. To comply with CLIF requirements we had to make some changes to Kaldi codebase."
What are these changes and are they documented anywhere? I might give it a try and port these changes to a newer Kaldi version.
|
closed
|
2021-03-11T10:45:44Z
|
2021-05-30T11:36:52Z
|
https://github.com/pykaldi/pykaldi/issues/262
|
[] |
bmilde
| 2
|
onnx/onnx
|
pytorch
| 6,253
|
Shape inference for Reshape does not infer rank when it could
|
# Bug Report
### Is the issue related to model conversion?
no
### Describe the bug
Shape inference for Reshape does not infer rank when it could. I have a model containing an operator Reshape(X, shp), where shape inference knows that the 'shp' input has shape [3]. Therefore, it could infer that the output shape of the operator is [a, b, c], that is, rank 3 with all unknown sizes. But shape inference does not do that. Instead it infers no shape.
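For reference, a minimal sketch of the situation (my construction, not taken from the report): a Reshape whose `shape` input is known to have shape [3] but unknown values.
```python
from onnx import TensorProto, helper, shape_inference

# 'X' has a completely unknown shape; 'shp' is the Reshape target, known to
# have shape [3] but with unknown values, matching the case described above.
X = helper.make_tensor_value_info("X", TensorProto.FLOAT, None)
shp = helper.make_tensor_value_info("shp", TensorProto.INT64, [3])
Y = helper.make_tensor_value_info("Y", TensorProto.FLOAT, None)

node = helper.make_node("Reshape", ["X", "shp"], ["Y"])
graph = helper.make_graph([node], "reshape_rank_example", [X, shp], [Y])
model = helper.make_model(graph)

inferred = shape_inference.infer_shapes(model)
# One might expect Y to come back as rank 3 with three unknown dims;
# the report says no shape is inferred at all.
print(inferred.graph.output)
```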
### System information
Windows, Python 11, latest onnx version.
### Reproduction instructions

See the attached netron screenshot. It knows that the 'shape' input is size [3].
### Expected behavior
It should set the output shape of the Reshape node to [a, b, c].
### Notes
It's important for me to know the rank of all the tensors. I don't need all the shapes, just the ranks.
|
open
|
2024-07-24T20:20:56Z
|
2024-09-20T15:03:09Z
|
https://github.com/onnx/onnx/issues/6253
|
[
"bug",
"module: shape inference",
"contributions welcome"
] |
mathisdon
| 8
|
huggingface/peft
|
pytorch
| 2,270
|
Different Results When Predicting with Multiple LoRA Adapters in a Loop VS. Using only One LoRA
|
### System Info
Linux, Python 3.8
A two-H100 node.
Name: transformers
Version: 4.34.1
Name: peft
Version: 0.11.1
### Who can help?
@BenjaminBossan
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
### Description
I encountered a strange issue while using PEFT LoRA adapters with the Hugging Face Trainer. When predicting using different LoRA adapters in a loop, the predictions are different compared to when using the same LoRA adapter (e.g., `m4`) individually. The issue arises when I predict using multiple LoRA adapters sequentially, and then compare the results of the `m4` adapter between the two scenarios.
### Steps to Reproduce
1. I have a dictionary `lora_map` that maps LoRA adapter names to their respective paths.
2. The code below iterates over `lora_map` and predicts using each LoRA adapter:
```python
import pandas as pd
import torch
from peft import PeftModel
from transformers import Trainer

dfs = []
for lora_name in lora_map:
    pred_df = test_df[useful_columns].copy()
    # model.set_adapter(lora_name)
    model = PeftModel.from_pretrained(base_model, lora_map[lora_name], adapter_name=lora_name)
    print("predicting with lora", lora_name)
    trainer = Trainer(model=model, args=args, data_collator=data_collator)
    preds = trainer.predict(token_test_dataset).predictions  # logits
    pred_df[['neu', 'pos', 'neg']] = torch.softmax(torch.tensor(preds), dim=-1).numpy()
    pred_df['lora'] = lora_name
    dfs.append(pred_df)
final_pred_df = pd.concat(dfs)
```
The `lora_map` looks like `lora_map={'m1':xxx,'m2':xxx,...}`.
I found that the results in `final_pred_df[final_pred_df.lora == 'm4']` differ from the results of predicting with only `m4` loaded. But the results for `m1` are the same, probably because it is the first entry in the `lora_map`.
What could be the problem? What happens when I load the second adapter using `PeftModel.from_pretrained`?
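For reference, a hedged sketch (my guess, not a confirmed fix) that wraps the base model once, registers every adapter, and switches between them with `set_adapter`, reusing `base_model`, `args`, `data_collator`, and `token_test_dataset` from the snippet above:
```python
from peft import PeftModel
from transformers import Trainer

# Hedged sketch: wrap the base model once, register each adapter with
# load_adapter, then activate one adapter at a time before predicting.
first_name = next(iter(lora_map))
model = PeftModel.from_pretrained(base_model, lora_map[first_name], adapter_name=first_name)
for lora_name, lora_path in lora_map.items():
    if lora_name != first_name:
        model.load_adapter(lora_path, adapter_name=lora_name)

for lora_name in lora_map:
    model.set_adapter(lora_name)  # only this adapter is active for the next predict
    trainer = Trainer(model=model, args=args, data_collator=data_collator)
    preds = trainer.predict(token_test_dataset).predictions
```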
---
I'm sorry I can't share my LoRA weights (they were trained with **PiSSA**) since it's a private model.
### Expected behavior
Same results.
|
closed
|
2024-12-10T06:53:14Z
|
2025-01-18T15:03:27Z
|
https://github.com/huggingface/peft/issues/2270
|
[] |
beyondguo
| 12
|
dmlc/gluon-nlp
|
numpy
| 1,508
|
Problem in BERT pre-training: how to train on multiple GPUs
|
## Description
- I want to train a BERT model on GPU, but have some problems. My configuration:
  * Software environment: Python 3.7.7, CUDA 10.2
  * Install MXNet: `pip install mxnet-cu102`, version 1.7.0
  * Model script downloaded from `https://github.com/dmlc/gluon-nlp`, branch 2.0.
- Run the script `gluon-nlp/scripts/bert/run_pretraining.py`:
  * Following the instructions at `https://nlp.gluon.ai/model_zoo/bert/index.html#bert-model-zoo`
  * The dataset was also downloaded from the page above.
```
$ mpirun -np 8 -H localhost:8 -mca pml ob1 -mca btl ^openib \
-mca btl_tcp_if_exclude docker0,lo --map-by ppr:4:socket \
--mca plm_rsh_agent 'ssh -q -o StrictHostKeyChecking=no' \
-x NCCL_MIN_NRINGS=8 -x NCCL_DEBUG=INFO -x HOROVOD_HIERARCHICAL_ALLREDUCE=1 \
-x MXNET_SAFE_ACCUMULATION=1 --tag-output \
python run_pretraining.py --verbose --model="bert_12_768_12" --warmup_ratio=1 --comm_backend="horovod" \
--accumulate=1 --max_seq_length=128 --raw --max_predictions_per_seq=20 --log_interval=1 --ckpt_interval=1000 \
--no_compute_acc --data=/home/yangshuo/mxnet/Dataset/pre-train-datasets/enwiki-feb-doc-split/*.train \
--num_steps=1000 --total_batch_size=128 --dtype="float16"
```
- Result error:

## Seeking help:
I have read the guidance, but I still don't know how to run it.
Please help me, or point me to the correct instructions or a suggestion. Thanks.
|
open
|
2021-01-28T08:48:34Z
|
2021-02-01T08:11:17Z
|
https://github.com/dmlc/gluon-nlp/issues/1508
|
[
"enhancement"
] |
yangshuo0323
| 13
|
miLibris/flask-rest-jsonapi
|
sqlalchemy
| 39
|
Successful DELETE not returning proper response
|
Hi, I think that according to the jsonapi specification (http://jsonapi.org/format/#document-meta), DELETE should return a meta object, but this library is returning a string.
Incorrect:
```
{
  "meta": {
    "Object successful deleted"
  },
  "jsonapi": {
    "version": "1.0"
  }
}
```
Correct:
```
{
  "meta": {
    "result": "Object successful deleted"
  },
  "jsonapi": {
    "version": "1.0"
  }
}
```
I was having problems using this lib until I figured this out.
Cheers.
|
closed
|
2017-05-09T18:31:53Z
|
2017-05-19T15:27:00Z
|
https://github.com/miLibris/flask-rest-jsonapi/issues/39
|
[] |
renefs
| 1
|
iperov/DeepFaceLab
|
deep-learning
| 609
|
No "true face power" in training settings
|
So I can't set or change the true face power in LIAEHD. I'm using the newest build. I don't even see the option when I want to change the settings of my model during training.
|
open
|
2020-02-05T21:02:29Z
|
2023-06-08T21:24:23Z
|
https://github.com/iperov/DeepFaceLab/issues/609
|
[] |
mpmo10
| 2
|
AutoGPTQ/AutoGPTQ
|
nlp
| 580
|
[BUG] marlin not support MixtralForCausalLM
|
**Describe the bug**
A clear and concise description of what the bug is.
**Hardware details**
Information about CPU and GPU, such as RAM, number, etc.
**Software version**
Version of relevant software such as operation system, cuda toolkit, python, auto-gptq, pytorch, transformers, accelerate, etc.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Additional context**
Add any other context about the problem here.
```
Traceback (most recent call last):
  File "/data/models/marlin.py", line 27, in <module>
    model = AutoGPTQForCausalLM.from_pretrained(pretrained_model_dir, quantize_config, use_marlin=True, device="auto",
  File "/home/jeeves/.local/lib/python3.10/site-packages/auto_gptq/modeling/auto.py", line 76, in from_pretrained
    return GPTQ_CAUSAL_LM_MODEL_MAP[model_type].from_pretrained(
  File "/home/jeeves/.local/lib/python3.10/site-packages/auto_gptq/modeling/_base.py", line 787, in from_pretrained
    model = AutoModelForCausalLM.from_pretrained(pretrained_model_name_or_path, **merged_kwargs)
  File "/opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 566, in from_pretrained
    return model_class.from_pretrained(
  File "/opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3596, in from_pretrained
    model = cls(config, *model_args, **model_kwargs)
TypeError: MixtralForCausalLM.__init__() got an unexpected keyword argument 'use_marlin'
```
|
open
|
2024-03-06T03:00:30Z
|
2024-03-28T04:21:26Z
|
https://github.com/AutoGPTQ/AutoGPTQ/issues/580
|
[
"bug"
] |
Xu-Chen
| 2
|
donnemartin/data-science-ipython-notebooks
|
tensorflow
| 17
|
Add deep learning notebooks
|
closed
|
2015-12-27T10:44:44Z
|
2016-05-18T02:07:08Z
|
https://github.com/donnemartin/data-science-ipython-notebooks/issues/17
|
[
"feature-request"
] |
donnemartin
| 2
|
|
modelscope/data-juicer
|
streamlit
| 417
|
[Feat] Data-Juicer as a Service
|
### Search before continuing 先搜索,再继续
- [X] I have searched the Data-Juicer issues and found no similar feature requests. 我已经搜索了 Data-Juicer 的 issue 列表但是没有发现类似的功能需求。
### Description 描述
Invoking the functionality of Data-Juicer via API.
### Use case 使用场景
```bash
curl -X POST 'http://localhost:8000/data_juicer/ops/filter/TextLengthFilter/run?dataset=demos/data/demo-dataset.jsonl'
```
### Additional 额外信息
_No response_
### Are you willing to submit a PR for this feature? 您是否乐意为此功能提交一个 PR?
- [X] Yes I'd like to help by submitting a PR! 是的!我愿意提供帮助并提交一个PR!
|
closed
|
2024-09-05T12:04:05Z
|
2024-09-30T09:34:16Z
|
https://github.com/modelscope/data-juicer/issues/417
|
[
"enhancement",
"stale-issue"
] |
drcege
| 3
|
Lightning-AI/LitServe
|
fastapi
| 423
|
'timeout' not working
|
Here is the main.py
```python
import litserve as ls
from pydantic import BaseModel, Field
from typing import Optional, Type, List, Dict, Any
from enum import Enum
from openai import AzureOpenAI
import pandas as pd
import time


class TextClassificationAPI(ls.LitAPI):
    def setup(self, device):
        self.version = 'test'

    def decode_request(self, request):
        return request

    def predict(self, input):
        t = input.get("time", 2)
        time.sleep(t)
        return "hello!"

    def encode_response(self, output):
        return output


if __name__ == "__main__":
    api = TextClassificationAPI()
    server = ls.LitServer(api, api_path='/predict', workers_per_device=5, timeout=1)
    server.run(port=12345)
```
here is the test.py
```python
import aiohttp
import asyncio
import time

url = 'http://0.0.0.0:12345/predict'


async def fetch(session, data):
    start_time = time.monotonic()
    async with session.post(url, json=data) as response:
        response_json = await response.json()
    end_time = time.monotonic()
    latency = end_time - start_time
    return response_json, latency


async def main(num_requests, requests_per_second):
    async with aiohttp.ClientSession() as session:
        tasks = []
        interval = 1.0 / requests_per_second
        for i in range(num_requests):
            data = {"time": 2}
            tasks.append(fetch(session, data))
            # await asyncio.sleep(interval)
        responses = await asyncio.gather(*tasks)
    print(responses)
    total_latency = sum(latency for response, latency in responses)
    average_latency = total_latency / num_requests
    throughput = num_requests / total_latency
    print(f'Average Latency: {average_latency:.4f} seconds')
    print(f'Throughput: {throughput:.4f} requests per second')


num_requests = 1
requests_per_second = 1

# Run the async main function
if __name__ == '__main__':
    asyncio.run(main(num_requests, requests_per_second))
```
Here is the output


I have set a timeout of 1 second, but my predict function contains a sleep statement that lasts 2 seconds. Despite this, I still receive a response. Why is this happening?
|
closed
|
2025-02-07T08:27:05Z
|
2025-02-10T02:41:42Z
|
https://github.com/Lightning-AI/LitServe/issues/423
|
[
"help wanted",
"question"
] |
lullabies777
| 1
|
paperless-ngx/paperless-ngx
|
django
| 9,053
|
[BUG] Document Importer - CommandError: Failed to read from original file
|
### Description
Currently paperless does not work when I try to import files into a new instance (via the Document Importer function) after exporting them from another instance.
For further clarity: I carried out a successful Document Export, then recreated/redeployed my docker instance, and have just gone to the command line to import the documents; however, I got the following error.
````
# document_importer /usr/src/paperless/export/2025_02/
Found existing user(s), this might indicate a non-empty installation
Checking the manifest
CommandError: Failed to read from original file /usr/src/paperless/export/2025_02/2022-10-29 JD Sports BRW008092DC5E53_233674.pdf
````
I checked permissions
````
# whoami
root
````
And had a look at the files in that folder. (Just an extract, there are almost 5000, just wanted to show the permissions)
````
rw-r--r-- 1 root root 16986 Feb 8 18:26 '2025-02-08 Homeprotect • Your Payment Confirmation.pdf-thumbnail.webp'
-rw-r--r-- 1 root root 76951973 Feb 8 22:25 manifest.json
-rw-r--r-- 1 root root 25 Feb 8 22:25 metadata.json
````
### Steps to reproduce
Command Line run the following..
````
document_importer /usr/src/paperless/export/2025_02/
````
### Webserver logs
```bash
can’t see anything ?
```
### Browser logs
```bash
N/A
```
### Paperless-ngx version
2.14.7
### Host OS
Linux (QNAP)
### Installation method
Docker - official image
### System status
```json
```
### Browser
Safari
### Configuration changes
```yaml
version: "3.6"
services:
  redis:
    image: redis:7
    container_name: paperless-redis
    restart: unless-stopped
    volumes:
      - /share/Container/paperlessredis:/data
  db:
    image: postgres:16
    container_name: paperless-db
    restart: unless-stopped
    volumes:
      - /share/Container/paperlessdb:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: paperless
      POSTGRES_USER: paperless
      POSTGRES_PASSWORD: paperless
  webserver:
    image: ghcr.io/paperless-ngx/paperless-ngx:latest
    container_name: paperlessngx
    restart: unless-stopped
    privileged: true
    depends_on:
      - db
      - redis
      - gotenberg
      - tika
    ports:
      - 8777:8000
    healthcheck:
      test: ["CMD", "curl", "-fs", "-S", "--max-time", "2", "http://localhost:8000"]
      interval: 30s
      timeout: 10s
      retries: 5
    volumes:
      - /share/Container/paperlessngx/data:/usr/src/paperless/data
      - /share/Container/paperlessngx/media:/usr/src/paperless/media
      - /share/Container/paperlessngx/export:/usr/src/paperless/export
      - /share/Container/paperlessngx/consume:/usr/src/paperless/consume
      - /share/Container/paperlessngx/scripts:/usr/src/paperless/scripts
      - /share/Container/paperlessngx/trash:/usr/src/paperless/trash
    environment:
      PAPERLESS_REDIS: redis://redis:6379
      PAPERLESS_DBHOST: db
      USERMAP_UID: 1005
      USERMAP_GID: 1000
      PAPERLESS_TIME_ZONE: Europe/London
      PAPERLESS_ADMIN_USER: ******
      PAPERLESS_ADMIN_PASSWORD: *******
      PAPERLESS_CONSUMER_RECURSIVE: true
      PAPERLESS_CONSUMER_SUBDIRS_AS_TAGS: true
      # PAPERLESS_CONSUMER_POLLING: 5
      PAPERLESS_OCR_LANGUAGE: eng
      PAPERLESS_TIKA_ENABLED: 1
      PAPERLESS_TIKA_GOTENBERG_ENDPOINT: http://gotenberg:3000
      PAPERLESS_TIKA_ENDPOINT: http://tika:9998
      PAPERLESS_TRASH_DIR: /usr/src/paperless/trash/
      PAPERLESS_CONSUMER_DELETE_DUPLICATES: true
      PAPERLESS_CONSUMER_ENABLE_BARCODES: true
      PAPERLESS_CONSUMER_IGNORE_PATTERNS: '[".DS_STORE/*", "._*", ".stfolder/*", ".stversions/*", ".localized/*", ".@__thumb/*", "desktop.ini"]'
      #PAPERLESS_FILENAME_FORMAT: {created_year}/{correspondent}/{title}
  gotenberg:
    image: docker.io/gotenberg/gotenberg:8.7
    restart: unless-stopped
    container_name: gotenberg
    ports:
      #- 3044:3000
      - 3000:3000
    command:
      - "gotenberg"
      - "--chromium-disable-routes=true"
  tika:
    image: ghcr.io/paperless-ngx/tika
    container_name: tika
    ports:
      - 9998:9998
    restart: always
```
### Please confirm the following
- [x] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [x] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools.
- [x] I have already searched for relevant existing issues and discussions before opening this report.
- [x] I have updated the title field above with a concise description.
|
closed
|
2025-02-09T12:00:33Z
|
2025-03-13T03:12:54Z
|
https://github.com/paperless-ngx/paperless-ngx/issues/9053
|
[
"not a bug"
] |
nodecentral
| 18
|
mwaskom/seaborn
|
data-science
| 3,075
|
ax.set_aspect('equal')
|
Can a seaborn objects plot make the scale of x and y equal, like `ax.set_aspect('equal')`?
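One hedged sketch (my suggestion, not an official answer): the objects interface can be routed onto a regular matplotlib `Axes` with `.on()`, after which `set_aspect` applies as usual. Column names assume the bundled penguins example dataset.
```python
import matplotlib.pyplot as plt
import seaborn as sns
import seaborn.objects as so

penguins = sns.load_dataset("penguins")
fig, ax = plt.subplots()
(
    so.Plot(penguins, x="bill_length_mm", y="bill_depth_mm")
    .add(so.Dot())
    .on(ax)        # draw onto an existing matplotlib Axes
    .plot()
)
ax.set_aspect("equal")  # equal data scale on x and y
plt.show()
```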
|
closed
|
2022-10-11T07:40:17Z
|
2022-10-11T10:48:55Z
|
https://github.com/mwaskom/seaborn/issues/3075
|
[] |
jiajinbu
| 2
|
dpgaspar/Flask-AppBuilder
|
rest-api
| 1,800
|
Pinning to click 7.1.2 is causing conflicts
|
If you'd like to report a bug in Flask-Appbuilder, fill out the template below. Provide
any extra information that may be useful
Responsible disclosure:
We want to keep Flask-AppBuilder safe for everyone. If you've discovered a security vulnerability
please report to danielvazgaspar@gmail.com.
### Environment
Flask-Appbuilder version:
all versions
### Steps to reproduce
Combine with a program needing black >= 22.x.
The issue is that the latest released, [stable](https://github.com/psf/black) black depends on [click](https://github.com/pallets/click) >= 8, while flask-appbuilder requires 7.1, an unresolvable conflict.
Click 8 has been released for 9 months (May 2021) so it's reasonable that flask-appbuilder should use/support it.
|
open
|
2022-02-08T22:25:08Z
|
2022-02-15T13:39:07Z
|
https://github.com/dpgaspar/Flask-AppBuilder/issues/1800
|
[
"dependency-bump"
] |
waTeim
| 1
|
Asabeneh/30-Days-Of-Python
|
numpy
| 297
|
Error in Examples of Checking Data types and Casting (Day 2)
|
```python
# str to int or float
num_str = '10.6'
print('num_int', int(num_str))  # 10
```
```
ValueError: invalid literal for int() with base 10: '10.6'
```
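The likely intended fix (a hedged suggestion on my part, not taken from the book): go through `float` before `int`.
```python
num_str = '10.6'
num_float = float(num_str)  # 10.6
num_int = int(num_float)    # 10 (truncates toward zero)
print('num_int', num_int)
```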
|
closed
|
2022-08-31T06:29:15Z
|
2022-09-05T05:14:29Z
|
https://github.com/Asabeneh/30-Days-Of-Python/issues/297
|
[] |
JedaiRussia
| 1
|
pywinauto/pywinauto
|
automation
| 530
|
Disable UserWarning: 32-bit
|
I use pywinauto to test 32-bit and 64-bit Windows applications, and sometimes I need to interact with both types at the same time. Since I need to test both, I use 64-bit Python. Can I disable the warning messages?
```
C:\Test\projects\marvlistest\venv64\lib\site-packages\pywinauto\application.py:1032: UserWarning: 32-bit application should be automated using 32-bit Python (you use 64-bit Python)
UserWarning)
```
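A hedged sketch using only the standard library (not a pywinauto-specific switch): filter that particular `UserWarning` before driving the 32-bit application.
```python
import warnings

# Suppress only the 32-bit/64-bit mismatch warning emitted by pywinauto.
warnings.filterwarnings(
    "ignore",
    message=".*32-bit application should be automated using 32-bit Python.*",
    category=UserWarning,
)
```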
|
closed
|
2018-08-06T16:31:51Z
|
2018-08-07T18:05:01Z
|
https://github.com/pywinauto/pywinauto/issues/530
|
[
"question"
] |
JohnS-BCS
| 2
|
allenai/allennlp
|
pytorch
| 4,875
|
ViLBERT demo not working: vqav2 not in acceptable choices for dataset_reader.type when loading model
|
Hey
I want to try the ViLBERT demo:
https://demo.allennlp.org/visual-question-answering
I cannot install the repo from the link you provided:
`pip install git+git://github.com/allenai/allennlp.git@0b20f80c1ea700766fe53d2eaf1c28de764f9710`,
because I receive:
```python
(CondaEnv) a483e75eac4c:ChallangerCocoAlign yonatab$ pip install git+git://github.com/allenai/allennlp.git@0b20f80c1ea700766fe53d2eaf1c28de764f9710
Collecting git+git://github.com/allenai/allennlp.git@0b20f80c1ea700766fe53d2eaf1c28de764f9710
Cloning git://github.com/allenai/allennlp.git (to revision 0b20f80c1ea700766fe53d2eaf1c28de764f9710) to /private/var/folders/sp/0n98h0kn4w7dq4xhl02ljk4n6r2n52/T/pip-req-build-sc0vamlj
Running command git clone -q git://github.com/allenai/allennlp.git /private/var/folders/sp/0n98h0kn4w7dq4xhl02ljk4n6r2n52/T/pip-req-build-sc0vamlj
Running command git checkout -q 0b20f80c1ea700766fe53d2eaf1c28de764f9710
fatal: reference is not a tree: 0b20f80c1ea700766fe53d2eaf1c28de764f9710
ERROR: Command errored out with exit status 128: git checkout -q 0b20f80c1ea700766fe53d2eaf1c28de764f9710 Check the logs for full command output.
```
So I installed allennlp with pip.
When this command runs:
`predictor = Predictor.from_path("https://storage.googleapis.com/allennlp-public-models/vilbert-vqa-2020.10.01.tar.gz")
`
I receive the following stack trace:
Traceback (most recent call last):
```python
File "/Users/yonatab/PycharmProjects/ChallangerCocoAlign/vilbert_allen.py", line 4, in <module>
predictor = Predictor.from_path(pred_p)
File "/Users/yonatab/opt/anaconda3/envs/CondaEnv/lib/python3.7/site-packages/allennlp/predictors/predictor.py", line 354, in from_path
load_archive(archive_path, cuda_device=cuda_device, overrides=overrides),
File "/Users/yonatab/opt/anaconda3/envs/CondaEnv/lib/python3.7/site-packages/allennlp/models/archival.py", line 206, in load_archive
config.duplicate(), serialization_dir
File "/Users/yonatab/opt/anaconda3/envs/CondaEnv/lib/python3.7/site-packages/allennlp/models/archival.py", line 232, in _load_dataset_readers
dataset_reader_params, serialization_dir=serialization_dir
File "/Users/yonatab/opt/anaconda3/envs/CondaEnv/lib/python3.7/site-packages/allennlp/common/from_params.py", line 581, in from_params
default_to_first_choice=default_to_first_choice,
File "/Users/yonatab/opt/anaconda3/envs/CondaEnv/lib/python3.7/site-packages/allennlp/common/params.py", line 352, in pop_choice
raise ConfigurationError(message)
allennlp.common.checks.ConfigurationError: vqav2 not in acceptable choices for dataset_reader.type: ['conll2003', 'interleaving', 'sequence_tagging', 'sharded', 'babi', 'text_classification_json']. You should either use the --include-package flag to make sure the correct module is loaded, or use a fully qualified class name in your config file like {"model": "my_module.models.MyModel"} to have it imported automatically.
```
I've tried installing allennlp from source, using the **vision** branch:
https://github.com/allenai/allennlp/tree/vision
I've also tried downloading the model and loading it from a local path, and I get the same problem.
How can I solve it?
|
closed
|
2020-12-21T15:08:08Z
|
2021-01-04T16:57:47Z
|
https://github.com/allenai/allennlp/issues/4875
|
[
"bug"
] |
yonatanbitton
| 2
|
Evil0ctal/Douyin_TikTok_Download_API
|
web-scraping
| 519
|
[Feature request] Could an API for searching videos by keyword be added?
|
Could you add an API for searching videos by keyword?
|
open
|
2024-12-12T02:26:57Z
|
2024-12-12T02:27:27Z
|
https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/519
|
[
"enhancement"
] |
lzl19990729
| 0
|
tiangolo/uwsgi-nginx-flask-docker
|
flask
| 24
|
Add additional Alpine base image for Python 3.5
|
Add Alpine base image for Python 3.5
Some users are fans of Alpine Linux, so it would be nice to have an additional base image based on Alpine.
This would depend on: https://github.com/tiangolo/uwsgi-nginx-docker/issues/12 being solved first.
|
closed
|
2017-09-30T15:40:27Z
|
2018-01-15T10:20:55Z
|
https://github.com/tiangolo/uwsgi-nginx-flask-docker/issues/24
|
[
"Hacktoberfest"
] |
tiangolo
| 1
|
gradio-app/gradio
|
python
| 10,023
|
gr.Plot does not work with matplotlib plots properly anymore
|
### Describe the bug
Hey Gradio Team,
I just wanted to let you know that the latest gradio version does not seem to be working properly with Matplotlib plots/figures.
Previous gradio versions (e.g. 5.5) seem to work fine.
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr

with gr.Blocks() as demo:
    test_plot = gr.Plot()
    test_btn = gr.Button("Test", variant="primary")
    # test_fn is defined below; the output component is test_plot
    test_btn.click(test_fn, inputs=[], outputs=[test_plot])
demo.launch()
```
I use the following code to create the Matplotlib figure:
```python
import matplotlib.pyplot as plt

def test_fn():
    fig, ax = plt.subplots()
    plt.close(fig)
    return fig
```
==================================================================
I managed to make it work like so:
I used gr.Image instead of gr.Plot
and for matplotlib part:
```python
import io

import matplotlib.pyplot as plt
from PIL import Image

def test_fn():
    fig, ax = plt.subplots()
    out_fig = io.BytesIO()
    fig.savefig(out_fig, format='png')
    out_fig.seek(0)
    img = Image.open(out_fig)
    return img
```
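Another possible workaround (my guess from the traceback below, not verified against the gradio docs): the failure is matplotlib rejecting format 'webp', and the component's `self.format` drives that call, so requesting PNG from the Plot component might avoid the error.
```python
# Hedged sketch: if gr.Plot exposes the encoding format (as the traceback's
# `self.format` suggests), asking for PNG may sidestep the webp error.
test_plot = gr.Plot(format="png")
```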
So there is some kind of issue between the latest gradio and matplotlib when it comes to gr.Plot.
Please fix it. I would really appreciate it.
Thank you.
Sincerely,
Alex
### Screenshot
_No response_
### Logs
```shell
======================================================================
Traceback (most recent call last):
File "/home/ubuntu/.local/lib/python3.10/site-packages/gradio/queueing.py", line 624, in process_events
response = await route_utils.call_process_api(
File "/home/ubuntu/.local/lib/python3.10/site-packages/gradio/route_utils.py", line 323, in call_process_api
output = await app.get_blocks().process_api(
File "/home/ubuntu/.local/lib/python3.10/site-packages/gradio/blocks.py", line 2025, in process_api
data = await self.postprocess_data(block_fn, result["prediction"], state)
File "/home/ubuntu/.local/lib/python3.10/site-packages/gradio/blocks.py", line 1831, in postprocess_data
prediction_value = block.postprocess(prediction_value)
File "/home/ubuntu/.local/lib/python3.10/site-packages/gradio/components/plot.py", line 138, in postprocess
out_y = processing_utils.encode_plot_to_base64(value, self.format)
File "/home/ubuntu/.local/lib/python3.10/site-packages/gradio/processing_utils.py", line 158, in encode_plot_to_base64
plt.savefig(output_bytes, format=fmt)
File "/usr/lib/python3/dist-packages/matplotlib/figure.py", line 3019, in savefig
self.canvas.print_figure(fname, **kwargs)
File "/usr/lib/python3/dist-packages/matplotlib/backend_bases.py", line 2259, in print_figure
canvas = self._get_output_canvas(backend, format)
File "/usr/lib/python3/dist-packages/matplotlib/backend_bases.py", line 2188, in _get_output_canvas
raise ValueError(
ValueError: Format 'webp' is not supported (supported formats: eps, jpeg, jpg, pdf, pgf, png, ps, raw, rgba, svg, svgz, tif, tiff)
```
### System Info
```shell
Matplotlib version: 3.5.1
========================================================
Gradio Environment Information:
------------------------------
Operating System: Linux
gradio version: 5.6.0
gradio_client version: 1.4.3
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.6.2.post1
audioop-lts is not installed.
fastapi: 0.115.5
ffmpy: 0.4.0
gradio-client==1.4.3 is not installed.
httpx: 0.27.2
huggingface-hub: 0.26.2
jinja2: 3.0.3
markupsafe: 2.0.1
numpy: 1.21.5
orjson: 3.10.11
packaging: 21.3
pandas: 1.3.5
pillow: 9.0.1
pydantic: 2.10.1
pydub: 0.25.1
python-multipart==0.0.12 is not installed.
pyyaml: 5.4.1
ruff: 0.8.0
safehttpx: 0.1.1
semantic-version: 2.10.0
starlette: 0.41.3
tomlkit==0.12.0 is not installed.
typer: 0.13.1
typing-extensions: 4.12.2
urllib3: 1.26.5
uvicorn: 0.32.1
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.3.1
httpx: 0.27.2
huggingface-hub: 0.26.2
packaging: 21.3
typing-extensions: 4.12.2
websockets: 12.0
```
### Severity
I can work around it
|
closed
|
2024-11-23T00:41:31Z
|
2024-11-29T08:48:30Z
|
https://github.com/gradio-app/gradio/issues/10023
|
[
"bug",
"Regression"
] |
asigalov61
| 1
|
polakowo/vectorbt
|
data-visualization
| 33
|
Plotly Name Arg Not Cast to String
|
I noticed that the name is not being cast to a string for create_scatter(). It's possible that it's also not being cast to string on the other generic plotting functions in vectorbt, but I haven't taken a look.
Steps to reproduce:
```
import numpy as np
import pandas as pd
import vectorbt as vbt

# Get prices
prices = pd.DataFrame(np.array([[1, 1.1, 1.05, 1.11, 1.12], [1.03, 1.13, 1.07, 1.12, 1.13]]).T, columns=['BTC', 'ETH'])
prices.columns.name = 'asset'
prices.index.name = 'Date'
print(prices)
# Get order target percentages
cols = pd.MultiIndex.from_product([[1, 2], [2, 3], ['BTC', 'ETH']])
tgt_pct = pd.DataFrame(np.array([[0, 0.02, 0.03, 0, 0.05], [0.01, 0.03, 0.01, 0.02, 0.04],
[0, 0.04, 0.01, 0.02, 0.04], [0.03, 0.05, 0, 0.02, 0.03],
[0.01, 0.03, 0.01, 0.02, 0.04], [0, 0.04, 0.01, 0.02, 0.04],
[0.03, 0.05, 0, 0.02, 0.03], [0.01, 0.03, 0.01, 0.02, 0.04],
]).T, columns=cols)
tgt_pct.columns.names = ['custom_param1', 'custom_param2', 'asset']
tgt_pct.index.name = 'Date'
print(tgt_pct)
# Align prices
prices = prices.vbt.align_to(tgt_pct)
# Run the portfolio
size_type = getattr(vbt.portfolio.enums.SizeType, 'TargetPercent')
portfolio = vbt.Portfolio.from_orders(prices, tgt_pct, size_type=size_type, freq='1D')
# Plot a subset of the trades
res = portfolio.xs((2, 3, 'BTC'), axis=1).trades.pnl.to_matrix()
# Errors
# res.vbt.scatter()
# Works
res.name = str(res.name) # because of how we sliced the portfolio, "name" is a tuple that Plotly can't handle; casting is necessary
res.vbt.scatter()
```
Solution:
Change "vectorbt\generic\plotting.py" line 226 from "name=trace_name," to "name=str(trace_name),"
Would you prefer I do a PR with this (and check the other plotting functions), or would you rather merge this into some development branch yourself?
|
closed
|
2020-08-06T01:05:52Z
|
2020-08-06T16:14:51Z
|
https://github.com/polakowo/vectorbt/issues/33
|
[] |
kmcentush
| 1
|
K3D-tools/K3D-jupyter
|
jupyter
| 1
|
install
|
make install: it cannot assume sudo; it should be expected to be run as root and/or in a user directory.
|
closed
|
2015-10-28T18:57:05Z
|
2015-11-14T07:29:56Z
|
https://github.com/K3D-tools/K3D-jupyter/issues/1
|
[] |
marcinofulus
| 1
|
ultralytics/yolov5
|
pytorch
| 13,145
|
Mosaic
|
### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
If I set Mosaic=1, does it mean that Mosaic will be used throughout the entire training process? For example, if there are 300 training rounds, will Mosaic be used in all 300 rounds?
### Additional
_No response_
|
closed
|
2024-06-29T04:44:41Z
|
2024-10-20T19:49:04Z
|
https://github.com/ultralytics/yolov5/issues/13145
|
[
"question",
"Stale"
] |
wyyt1202
| 5
|
marshmallow-code/marshmallow-sqlalchemy
|
sqlalchemy
| 130
|
Initial Update
|
The bot created this issue to inform you that pyup.io has been set up on this repo.
Once you have closed it, the bot will open pull requests for updates as soon as they are available.
|
closed
|
2018-05-29T02:26:22Z
|
2018-05-29T12:45:08Z
|
https://github.com/marshmallow-code/marshmallow-sqlalchemy/issues/130
|
[] |
pyup-bot
| 0
|
sqlalchemy/alembic
|
sqlalchemy
| 406
|
alembic_version has no primary key
|
**Migrated issue, originally created by Michael Bayer ([@zzzeek](https://github.com/zzzeek))**
It's odd that we don't put a simple PK on this table. Databases like Galera recommend PKs on all tables, and in the case of some Percona variants the database won't even work with a table that has no primary key.
The value itself is unique, so we can just make that the PK.
|
closed
|
2017-01-11T14:19:29Z
|
2017-01-13T22:58:28Z
|
https://github.com/sqlalchemy/alembic/issues/406
|
[
"bug",
"versioning model"
] |
sqlalchemy-bot
| 3
|
3b1b/manim
|
python
| 1,854
|
[BUG] Error when using NumberPlane
|
Hi everyone. I'm new to manim and I have a problem. I tried to add a NumberPlane in my code, but it caused an error.
(My python file name: a.py)
Code here:
```
class NumberPlaneExample(Scene):
    CONFIG = {
        "axis_config": {
            "stroke_color": WHITE,
            "stroke_width": 2,
            "include_ticks": False,
            "include_tip": False,
            "line_to_number_buff": SMALL_BUFF,
            "label_direction": DR,
            "number_scale_val": 0.5,
        },
        "y_axis_config": {
            "label_direction": DR,
        },
        "background_line_style": {
            "stroke_color": BLUE_D,
            "stroke_width": 2,
            "stroke_opacity": 1,
        },
        # Defaults to a faded version of line_config
        "faded_line_style": None,
        "x_line_frequency": 1,
        "y_line_frequency": 1,
        "faded_line_ratio": 1,
        "make_smooth_after_applying_functions": True,
    }

    def construct(self):
        number_plane = NumberPlane()
        self.add(number_plane)
```
It caused an error like this:
```
Traceback (most recent call last):
File "c:\beibi\a\manimlib\extract_scene.py", line 155, in main
scene = SceneClass(**scene_kwargs)
File "c:\beibi\a\manimlib\scene\scene.py", line 75, in __init__
self.construct()
File "c:\beibi\a\a.py", line 1180, in construct
ax = Axes().add_coordinates()
File "c:\beibi\a\manimlib\mobject\coordinate_systems.py", line 201, in add_coordinates
self.add(self.get_coordinate_labels(x_vals, y_vals, **kwargs))
File "c:\beibi\a\manimlib\mobject\coordinate_systems.py", line 194, in get_coordinate_labels
x_mobs = self.get_x_axis().get_number_mobjects(*x_vals, **kwargs)
File "c:\beibi\a\manimlib\mobject\coordinate_systems.py", line 52, in get_x_axis
return self.get_axis(0)
TypeError: **Axes.get_axis() missing 2 required positional arguments: 'max_val' and 'axis_config'**
```
Manim Community 0.16.0 post 0
python 3.10
I tried many solutions, but nothing worked. No one on the internet seems to have hit this error.
If you can help me, thanks a lot!
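For reference, a hedged sketch of how this is usually written for Manim Community (keyword arguments instead of the ManimGL-style CONFIG dict); whether it resolves the exact traceback above is an assumption on my part.
```python
from manim import *

class NumberPlaneExample(Scene):
    def construct(self):
        # Pass the styling directly to NumberPlane instead of a CONFIG dict.
        number_plane = NumberPlane(
            background_line_style={
                "stroke_color": BLUE_D,
                "stroke_width": 2,
                "stroke_opacity": 1,
            },
            faded_line_ratio=1,
        )
        self.add(number_plane)
```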
|
closed
|
2022-08-18T02:32:59Z
|
2022-08-20T11:26:27Z
|
https://github.com/3b1b/manim/issues/1854
|
[] |
CaftBotti
| 1
|
microsoft/JARVIS
|
pytorch
| 164
|
Bigger bug: cannot import load_image from diffusers.utils
|
What happened!
```
Traceback (most recent call last):
  File "run_gradio_demo.py", line 4, in <module>
    from diffusers.utils import load_image
ImportError: cannot import name 'load_image' from 'diffusers.utils' (unknown location)
```
I looked in the diffusers source and could not find a load_image function, so that's the problem!
|
closed
|
2023-04-18T02:36:07Z
|
2023-04-18T10:45:27Z
|
https://github.com/microsoft/JARVIS/issues/164
|
[] |
g-wellsa
| 2
|
coqui-ai/TTS
|
deep-learning
| 3,303
|
Unable to start xtts v2 training process.
|
### Describe the bug
I have prepared my own dataset in LJSpeech format. I tried starting the training process based on the recipe, but was unable to do so. I think it's acting like this because the dataset's language is not in the supported list provided by XTTS v2. I get the following error:
`AssertionError: ❗ len(DataLoader) returns 0. Make sure your dataset is not empty or len(dataset) > 0.
`
The same dataset can be used in other training scripts/approaches, such as VITS or YourTTS.
### To Reproduce
Run training script with another language dataset!
### Expected behavior
Training should be started.
### Logs
```shell
> EPOCH: 0/1000
--> /TTS/run/training/GPT_XTTS_v2.0_LJSpeech_FT-November-24-2023_05+18PM-990b209
> Filtering invalid eval samples!!
[!] Warning: The text length exceeds the character limit of 250 for language 'sq', this might cause truncated audio.
[!] Warning: The text length exceeds the character limit of 250 for language 'sq', this might cause truncated audio.
> Total eval samples after filtering: 0
! Run is removed from /TTS/run/training/GPT_XTTS_v2.0_LJSpeech_FT-November-24-2023_05+18PM-990b209
Traceback (most recent call last):
File "tts/lib/python3.10/site-packages/trainer/trainer.py", line 1826, in fit
self._fit()
File "tts/lib/python3.10/site-packages/trainer/trainer.py", line 1780, in _fit
self.eval_epoch()
File "tts/lib/python3.10/site-packages/trainer/trainer.py", line 1628, in eval_epoch
self.get_eval_dataloader(
File "tts/lib/python3.10/site-packages/trainer/trainer.py", line 990, in get_eval_dataloader
return self._get_loader(
File "tts/lib/python3.10/site-packages/trainer/trainer.py", line 914, in _get_loader
len(loader) > 0
AssertionError: ❗ len(DataLoader) returns 0. Make sure your dataset is not empty or len(dataset) > 0.
```
### Environment
```shell
{
"CUDA": {
"GPU": [
"NVIDIA A100-PCIE-40GB",
"NVIDIA A100-PCIE-40GB",
"NVIDIA A100-PCIE-40GB",
"NVIDIA A100-PCIE-40GB",
"NVIDIA A100-PCIE-40GB",
"NVIDIA A100-PCIE-40GB",
"NVIDIA A100-PCIE-40GB",
"NVIDIA A100-PCIE-40GB"
],
"available": true,
"version": "12.1"
},
"Packages": {
"PyTorch_debug": false,
"PyTorch_version": "2.1.1+cu121",
"TTS": "0.21.1",
"numpy": "1.22.0"
},
"System": {
"OS": "Linux",
"architecture": [
"64bit",
"ELF"
],
"processor": "x86_64",
"python": "3.10.8",
"version": "#202212290932~1674066459~20.04~3cd2bf3-Ubuntu SMP PREEMPT_DYNAMI"
}
}
```
### Additional context
_No response_
|
closed
|
2023-11-24T17:20:37Z
|
2024-04-18T08:30:10Z
|
https://github.com/coqui-ai/TTS/issues/3303
|
[
"bug"
] |
arbianqx
| 11
|
dynaconf/dynaconf
|
flask
| 1,047
|
Centralized config package | External hooks for `platformdirs`, __package__, package_dir, etc.
|
## **Problem**
I am trying to set up a single configuration package that all of my other packages will "inherit" from (let's call it `confpkg`).
The idea is to be able to change the code in `confpkg` to add "standard" configuration values, which can then be overridden in whatever package is being developed (call it `yourpkg`), or some dependency that also uses `confpkg` (call it `mid_pkg`).
## **Desired Solution**
It is desired to have this work with absolute minimal user interaction, and to have a very nice api exposed in the code of `yourpkg`.
For example, in `yourpkg`:
```python
from confpkg import pkg_setup
CONF, CACHE, LOG = pkg_setup()
```
Those two lines are all that need to be included.
Now this package ('yourpkg') has these available:
- `CONF['data_dir'] == "/home/user/.local/share/yourpkg/data"` (Or w/e `platformdirs` outputs)
- `CONF['cache_dir'] == "/home/user/.cache/yourpkg"`
- etc - many different directories from package dirs
- `CONF['pkg_name'] == __package__` (The package name will be 'yourpkg' here, but will change for different packages)
- `CONF['pkg_dir'] ==` {some logic typically like: `os.path.dirname(os.path.realpath(__file__))`} (This should work to give the 'yourpkg' top-level package directory, automatically)
These values should of course be *overridden* by the 'yourpkg' directory's config.toml file.
This can allow easily setting things like a 'default' database filepath, etc.
(The "Another Example / Perspective" section covers how `confpkg` should handle other logic around this.)
---
Another example (from `yourpkg` still) using `mid_pkg`, to demonstrate this further:
```python
from mid_pkg import BigDatabase # NOTE: Also uses this same `confpkg` line within the `mid_pkg` code
from confpkg import pkg_setup
CONF, LOG = pkg_setup()
db = BigDatabase()
# DB is created using it's packages code (where CONF from `confpkg` is called),
# which can contain a default `data_dir` computed by `platformdirs` - specific to the `mid_pkg`
db.query("select * from Docs")
```
However, specifically, it is desired to *not* have to have the same `config_maker.py` script in package A, B, C, etc. - but rather just to have *one* `config_maker.py` or `config.py` in the `confpkg` package, which will load the directories and add the package name and dir - **for the package that is calling `CONF, LOG = pkg_setup()`**.
What is a good way to make this happen?
## Solution Attempts
I have solved some of this with the `inspect` package, or having to load the *string* of the file from a package, and then execute it from within the calling package.
The solution used functions similar to this: https://gitlab.com/chase_brown/myappname/-/blob/12113f788b84a4d642e4f7f275fe4200b15f0685/myappname/util.py#L15-41
(\*Yes, the package is basically useless and you shouldn't use it, but it illustrates the point)
However, I just noticed `Dynaconf` exists, and it seems to be *very* close to what I need (if not a perfect fit - I am still learning this package).
So I figured I should not re-invent the wheel, and rather use the standard tools that are extremely popular in the language.
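For concreteness, a hedged sketch of the `pkg_setup()` idea described above (simplified to return only `CONF`; the use of `inspect` plus `platformdirs`, and the exact Dynaconf calls, are my assumptions rather than an endorsed Dynaconf pattern):
```python
import inspect
import os

from dynaconf import Dynaconf
from platformdirs import user_cache_dir, user_data_dir


def pkg_setup():
    # Work out which package called us, without the caller passing anything in.
    caller_module = inspect.getmodule(inspect.stack()[1][0])
    pkg_name = caller_module.__package__ or caller_module.__name__
    pkg_dir = os.path.dirname(os.path.realpath(caller_module.__file__))

    conf = Dynaconf(
        settings_files=[os.path.join(pkg_dir, "config.toml")],
        envvar_prefix=pkg_name.upper(),
    )
    conf.set("PKG_NAME", pkg_name)
    conf.set("PKG_DIR", pkg_dir)
    # Defaults from platformdirs, overridable by the caller's config.toml.
    conf.set("DATA_DIR", conf.get("DATA_DIR", user_data_dir(pkg_name)))
    conf.set("CACHE_DIR", conf.get("CACHE_DIR", user_cache_dir(pkg_name)))
    return conf
```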
## Another Example / Perspective
A good example to illustrate the problem is the following:
- Pkg_A --(depends on)--> Pkg_B --(depends on)--> Pkg_C
- Pkg_C needs a `data_dir` which will hold, say **3TB** of space (to construct a database for **Pkg_A**).
This can be achieved with the system desired in this post - specifically a separate `confpkg` that can construct or append to a `config.toml`, (or just create entries in CONF without touching `config.toml`) for Pkg_A, Pkg_B, and Pkg_C, that has defaults from `platform_dirs`.
However, importantly here - the `confpkg` should check if that default can allow for this space, and ask the user for input to alter the `config.toml` **if and only if needed**.
It's clear that copy/pasting this code (probably a few hundred lines) into *every single package created by the user* is not a viable solution - especially if an institution wants to make an addition for all of the packages that use `confpkg` (e.g. a standard 'welcome message' for a LOG or something).
---
So what is a good way to deal with this (extremely common) type of scenario?
|
open
|
2024-02-01T20:36:52Z
|
2024-07-08T18:37:53Z
|
https://github.com/dynaconf/dynaconf/issues/1047
|
[
"Not a Bug",
"RFC"
] |
chasealanbrown
| 1
|
pydantic/logfire
|
pydantic
| 828
|
Remove `logfire._internal` from `logfire-api`
|
Having the `_internal` sub-package is great, as it provides a boundary marking what not to include in the `logfire-api` package. We should remove it from `logfire-api`; that would also make the `logfire-api` package lighter.
|
open
|
2025-01-29T15:37:36Z
|
2025-02-14T09:44:03Z
|
https://github.com/pydantic/logfire/issues/828
|
[] |
samuelcolvin
| 1
|
FlareSolverr/FlareSolverr
|
api
| 537
|
[btschool] (updating) FlareSolverr was unable to process the request, please check FlareSolverr logs. Message: Error: Unable to process browser request. ProtocolError: Protocol error (Page.navigate): frameId not supported RemoteAgentError@chrome://remote/content/cdp/Error.jsm:29:5UnsupportedError@chrome://remote/content/cdp/Error.jsm:106:1navigate@chrome://remote/content/cdp/domains/parent/Page.jsm:103:13execute@chrome://remote/content/cdp/domains/DomainCache.jsm:101:25execute@chrome://remote/content/cdp/sessions/Session.jsm:64:25execute@chrome://remote/content/cdp/sessions/TabSession.jsm:67:20onPacket@chrome://remote/content/cdp/CDPConnection.jsm:248:36onMessage@chrome://remote/content/server/WebSocketTransport.jsm:89:18handleEvent@chrome://remote/content/server/WebSocketTransport.jsm:71:14
|
**Please use the search bar** at the top of the page and make sure you are not creating an already submitted issue.
Check closed issues as well, because your issue may have already been fixed.
### How to enable debug and html traces
[Follow the instructions from this wiki page](https://github.com/FlareSolverr/FlareSolverr/wiki/How-to-enable-debug-and-html-trace)
### Environment
* **FlareSolverr version**:
* **Last working FlareSolverr version**:
* **Operating system**:
* **Are you using Docker**: [yes/no]
* **FlareSolverr User-Agent (see log traces or / endpoint)**:
* **Are you using a proxy or VPN?** [yes/no]
* **Are you using Captcha Solver:** [yes/no]
* **If using captcha solver, which one:**
* **URL to test this issue:**
### Description
[List steps to reproduce the error and details on what happens and what you expected to happen]
### Logged Error Messages
[Place any relevant error messages you noticed from the logs here.]
[Make sure you attach the full logs with your personal information removed in case we need more information]
### Screenshots
[Place any screenshots of the issue here if needed]
|
closed
|
2022-10-04T01:28:02Z
|
2022-10-07T05:45:29Z
|
https://github.com/FlareSolverr/FlareSolverr/issues/537
|
[
"duplicate",
"invalid"
] |
yjlmiss
| 6
|
iperov/DeepFaceLab
|
deep-learning
| 5,518
|
GPU List
|
Could we make a list of GPUs recommended for use with DFL?
I would like to buy the optimal one. :)
What GPU should I go for? (for live deepfakes)
|
open
|
2022-05-16T14:42:03Z
|
2023-06-08T22:58:57Z
|
https://github.com/iperov/DeepFaceLab/issues/5518
|
[] |
H4xl0r
| 1
|
chaos-genius/chaos_genius
|
data-visualization
| 391
|
[BUG] Adding KPI with incorrect columns does not produce the correct errors
|
## Describe the bug
When selecting a non-datetime column as the date column, we get an empty dataframe error instead of an error explaining that the datetime column is not parsable.
|
closed
|
2021-11-11T18:11:59Z
|
2021-11-15T08:03:07Z
|
https://github.com/chaos-genius/chaos_genius/issues/391
|
[
"🐛 bug",
"🛠️ backend"
] |
kartikay-bagla
| 1
|
allenai/allennlp
|
nlp
| 5,718
|
Can't load models with .zip extension
|
<!--
Please fill this template entirely and do not erase any of it.
We reserve the right to close without a response bug reports which are incomplete.
If you have a question rather than a bug, please ask on [Stack Overflow](https://stackoverflow.com/questions/tagged/allennlp) rather than posting an issue here.
-->
## Checklist
<!-- To check an item on the list replace [ ] with [x]. -->
- [X] I have verified that the issue exists against the `main` branch of AllenNLP.
- [X] I have read the relevant section in the [contribution guide](https://github.com/allenai/allennlp/blob/main/CONTRIBUTING.md#bug-fixes-and-new-features) on reporting bugs.
- [X] I have checked the [issues list](https://github.com/allenai/allennlp/issues) for similar or identical bug reports.
- [X] I have checked the [pull requests list](https://github.com/allenai/allennlp/pulls) for existing proposed fixes.
- [X] I have checked the [CHANGELOG](https://github.com/allenai/allennlp/blob/main/CHANGELOG.md) and the [commit log](https://github.com/allenai/allennlp/commits/main) to find out if the bug was already fixed in the main branch.
- [X] I have included in the "Description" section below a traceback from any exceptions related to this bug.
- [X] I have included in the "Related issues or possible duplicates" section below all related issues and possible duplicate issues (If there are none, check this box anyway).
- [X] I have included in the "Environment" section below the name of the operating system and Python version that I was using when I discovered this bug.
- [X] I have included in the "Environment" section below the output of `pip freeze`.
- [X] I have included in the "Steps to reproduce" section below a minimally reproducible example.
## Description
<!-- Please provide a clear and concise description of what the bug is here. -->
When using the Python API to import a pretrained model that has the extension `.zip`, it fails due to the "r:gz" mode specified [here](https://github.com/allenai/allennlp/blob/9f879b0964e035db711e018e8099863128b4a46f/allennlp/models/archival.py#L301). The models in question are from the [PURE repository](https://github.com/princeton-nlp/PURE#Pre-trained-Models).
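As a possible interim workaround (my sketch, not an AllenNLP-endorsed fix, and it assumes the PURE zips contain the usual `config.json`/`weights.th` layout): unpack the `.zip` manually and point `from_archive` at the extracted directory, which the directory branch of `load_archive` shown in the traceback appears to accept.
```python
# Hedged workaround sketch: extract the .zip ourselves, then load from the directory.
import zipfile

from allennlp.models import Model

with zipfile.ZipFile("ent-scib-ctx300.zip") as zf:
    zf.extractall("ent-scib-ctx300")

model = Model.from_archive("ent-scib-ctx300")
```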
<details>
<summary><b>Python traceback:</b></summary>
<p>
<!-- Paste the traceback from any exception (if there was one) in between the next two lines below -->
```
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
~/anaconda3/envs/dygiepp/lib/python3.7/tarfile.py in gzopen(cls, name, mode, fileobj, compresslevel, **kwargs)
1645 try:
-> 1646 t = cls.taropen(name, mode, fileobj, **kwargs)
1647 except OSError:
~/anaconda3/envs/dygiepp/lib/python3.7/tarfile.py in taropen(cls, name, mode, fileobj, **kwargs)
1622 raise ValueError("mode must be 'r', 'a', 'w' or 'x'")
-> 1623 return cls(name, mode, fileobj, **kwargs)
1624
~/anaconda3/envs/dygiepp/lib/python3.7/tarfile.py in __init__(self, name, mode, fileobj, format, tarinfo, dereference, ignore_zeros, encoding, errors, pax_headers, debug, errorlevel, copybufsize)
1485 self.firstmember = None
-> 1486 self.firstmember = self.next()
1487
~/anaconda3/envs/dygiepp/lib/python3.7/tarfile.py in next(self)
2288 try:
-> 2289 tarinfo = self.tarinfo.fromtarfile(self)
2290 except EOFHeaderError as e:
~/anaconda3/envs/dygiepp/lib/python3.7/tarfile.py in fromtarfile(cls, tarfile)
1093 """
-> 1094 buf = tarfile.fileobj.read(BLOCKSIZE)
1095 obj = cls.frombuf(buf, tarfile.encoding, tarfile.errors)
~/anaconda3/envs/dygiepp/lib/python3.7/gzip.py in read(self, size)
286 raise OSError(errno.EBADF, "read() on write-only GzipFile object")
--> 287 return self._buffer.read(size)
288
~/anaconda3/envs/dygiepp/lib/python3.7/_compression.py in readinto(self, b)
67 with memoryview(b) as view, view.cast("B") as byte_view:
---> 68 data = self.read(len(byte_view))
69 byte_view[:len(data)] = data
~/anaconda3/envs/dygiepp/lib/python3.7/gzip.py in read(self, size)
473 self._init_read()
--> 474 if not self._read_gzip_header():
475 self._size = self._pos
~/anaconda3/envs/dygiepp/lib/python3.7/gzip.py in _read_gzip_header(self)
421 if magic != b'\037\213':
--> 422 raise OSError('Not a gzipped file (%r)' % magic)
423
OSError: Not a gzipped file (b'PK')
During handling of the above exception, another exception occurred:
ReadError Traceback (most recent call last)
/tmp/local/63761026/ipykernel_27406/2934808558.py in <module>
----> 1 scierc_pure = Model.from_archive('https://nlp.cs.princeton.edu/projects/pure/scierc_models/ent-scib-ctx300.zip')
~/anaconda3/envs/dygiepp/lib/python3.7/site-packages/allennlp/models/model.py in from_archive(cls, archive_file, vocab)
480 from allennlp.models.archival import load_archive # here to avoid circular imports
481
--> 482 model = load_archive(archive_file).model
483 if vocab:
484 model.vocab.extend_from_vocab(vocab)
~/anaconda3/envs/dygiepp/lib/python3.7/site-packages/allennlp/models/archival.py in load_archive(archive_file, cuda_device, overrides, weights_file)
218 serialization_dir = resolved_archive_file
219 else:
--> 220 with extracted_archive(resolved_archive_file, cleanup=False) as tempdir:
221 serialization_dir = tempdir
222
~/anaconda3/envs/dygiepp/lib/python3.7/contextlib.py in __enter__(self)
110 del self.args, self.kwds, self.func
111 try:
--> 112 return next(self.gen)
113 except StopIteration:
114 raise RuntimeError("generator didn't yield") from None
~/anaconda3/envs/dygiepp/lib/python3.7/site-packages/allennlp/models/archival.py in extracted_archive(resolved_archive_file, cleanup)
299 tempdir = tempfile.mkdtemp()
300 logger.info(f"extracting archive file {resolved_archive_file} to temp dir {tempdir}")
--> 301 with tarfile.open(resolved_archive_file, "r:gz") as archive:
302 archive.extractall(tempdir)
303 yield tempdir
~/anaconda3/envs/dygiepp/lib/python3.7/tarfile.py in open(cls, name, mode, fileobj, bufsize, **kwargs)
1591 else:
1592 raise CompressionError("unknown compression type %r" % comptype)
-> 1593 return func(name, filemode, fileobj, **kwargs)
1594
1595 elif "|" in mode:
~/anaconda3/envs/dygiepp/lib/python3.7/tarfile.py in gzopen(cls, name, mode, fileobj, compresslevel, **kwargs)
1648 fileobj.close()
1649 if mode == 'r':
-> 1650 raise ReadError("not a gzip file")
1651 raise
1652 except:
ReadError: not a gzip file
```
</p>
</details>
## Related issues or possible duplicates
- None
## Environment
<!-- Provide the name of operating system below (e.g. OS X, Linux) -->
OS:
CentOS Linux release 7.9.2009
Linux Kernel 3.10.0-1160.36.2.el7.x86_64
<!-- Provide the Python version you were using (e.g. 3.7.1) -->
Python version:
Python 3.6.4
<details>
<summary><b>Output of <code>pip freeze</code>:</b></summary>
<p>
<!-- Paste the output of `pip freeze` in between the next two lines below -->
```
absl-py==0.5.0
alabaster==0.7.12
alembic==1.7.5
allennlp==1.1.0
allennlp-models==1.1.0
anyio==3.6.1
appdirs==1.4.3
argon2-cffi==21.3.0
argon2-cffi-bindings==21.2.0
artemis==0.1.4
ase==3.17.0
asn1crypto==0.24.0
astor==0.7.1
atomicwrites==1.3.0
attrs==19.1.0
autopage==0.4.0
awscli==1.18.75
Babel==2.9.1
backcall==0.1.0
backports.csv==1.0.7
bcrypt==3.1.4
beartype==0.3.2
beautifulsoup4==4.8.1
biom-format==2.1.7
biopython==1.72
bitstring==3.1.5
bleach==3.1.4
blessings==1.7
blis==0.4.1
blist==1.3.6
bokeh==2.3.2
BoltzTraP2==20.7.1
boto3==1.20.20
botocore==1.23.20
brewer2mpl==1.4.1
BUSCO==3.1.0
bz2file==0.98
cachetools==4.2.2
catalogue==1.0.0
certifi==2020.6.20
cffi==1.14.3
cftime==1.3.1
chardet==3.0.4
charset-normalizer==2.0.12
checksumdir==1.2.0
cli-helpers==0.2.3
click==7.1.2
cliff==3.10.0
cmaes==0.8.2
cmd2==2.3.3
colorama==0.4.3
colored==1.4.2
colorlog==6.6.0
commonmark==0.9.1
configobj==5.0.6
conllu==4.1
contextvars==2.4
cryptography==2.1.4
css-html-js-minify==2.5.5
cupy-cuda100==6.0.0
cutadapt==3.7
cycler==0.10.0
cymem==2.0.3
Cython==0.27.3
dataclasses==0.8
deap==1.2.2
decorator==4.4.2
deepTools==3.1.3
defusedxml==0.6.0
deprecation==2.1.0
dnaio==0.7.1
docopt==0.6.2
docutils==0.14
doqu==0.28.2
ecdsa==0.13
en-core-web-sm @ https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.2.5/en_core_web_sm-2.2.5.tar.gz
entrypoints==0.3
enum34==1.1.10
fastrlock==0.4
filelock==3.0.12
Flask==1.0.3
Forthon==0.8.49
frozenlist==1.2.0
ftfy==6.0.3
future==0.16.0
gast==0.2.0
ghp-import==2.0.2
git-filter-repo==2.34.0
gitdb==4.0.9
GitPython==3.1.18
globus-cli==2.1.0
globus-sdk==2.0.1
google-auth==1.32.1
gpustat==0.5.0
greenlet==1.1.2
GridDataFormats==0.5.0
grpcio==1.15.0
gudhi==3.4.1.post1
gurobipy==8.0.1
h5py==2.8.0
HTSeq==0.12.4
huggingface-hub==0.4.0
humanize==3.4.1
idna==2.10
idr==2.0.2
imagesize==1.1.0
iminuit==2.4.0
immutables==0.19
importlib-metadata==4.8.2
importlib-resources==5.4.0
intervaltree==3.0.2
ipykernel==5.2.1
ipython==7.2.0
ipython-genutils==0.2.0
ipywidgets==7.5.1
isal==0.11.1
itsdangerous==1.1.0
jedi==0.13.1
Jinja2==3.0.3
jmespath==0.10.0
joblib==0.17.0
json5==0.9.10
jsonnet==0.17.0
jsonpickle==2.0.0
jsonschema==3.2.0
jupyter==1.0.0
jupyter-client==7.0.6
jupyter-console==6.1.0
jupyter-core==4.8.1
jupyter-server==1.13.1
jupyterlab==3.2.9
jupyterlab-server==2.10.3
kaleido==0.0.3.post1
kiwisolver==1.0.1
lapels==1.1.1
liac-arff==2.1.1
llvmlite==0.36.0
lxml==4.6.4
Mako==1.1.6
mappy==2.20
Markdown==3.3.6
MarkupSafe==2.0.1
matlabengineforpython===R2019b
matplotlib==3.3.2
matplotlib-venn==0.11.5
mergedeep==1.3.4
MicrobeCensus==1.1.0
misopy==0.5.4
mistune==0.8.4
mkdocs==1.2.3
mkdocs-git-revision-date-localized-plugin==0.11.1
mkdocs-material==8.1.7
mkdocs-material-extensions==1.0.3
mkdocs-redirects==1.0.4
mmtf-python==1.1.2
mock==2.0.0
modtools==1.0.2
more-itertools==6.0.0
mpi4py==3.0.0
msgpack==1.0.0
multidict==5.2.0
murmurhash==1.0.2
NanoComp==1.15.0
NanoFilt==2.7.1
nanoget==1.15.0
NanoLyse==1.2.0
nanomath==1.0.1
nanopack==1.1.0
NanoPlot==1.38.0
nanoQC==0.9.4
NanoStat==1.5.0
natsort==7.1.1
nbclassic==0.3.5
nbconvert==5.6.1
nbformat==4.4.0
ncbi-genome-download==0.2.9
nest-asyncio==1.5.1
netaddr==0.7.19
netCDF4==1.5.5.1
netifaces==0.10.6
networkx==2.5
nltk==3.6.5
nmslib==2.0.6
nose==1.3.7
notebook==6.0.3
numba==0.53.1
numpy==1.19.2
numpydoc==0.8.0
nvidia-ml-py3==7.352.0
oauthlib==3.1.1
opencv-python==4.4.0.44
optuna==2.10.0
overrides==3.1.0
packaging==21.3
pandas==0.25.2
pandocfilters==1.4.2
paramiko==2.4.0
parso==0.3.1
pauvre==0.2
paycheck==1.0.2
pbr==3.1.1
pexpect==4.6.0
pickleshare==0.7.5
Pillow==7.2.0
plac==1.1.3
plotly==4.10.0
pluggy==0.9.0
preshed==3.0.2
prettytable==2.4.0
prometheus-client==0.7.1
prompt-toolkit==2.0.7
protobuf==3.17.3
psutil==5.6.1
ptyprocess==0.6.0
py==1.8.0
py-enigma==0.1
py-rouge==1.1
py2bit==0.3.0
pyarrow==1.0.1
pyasn1==0.4.8
pyasn1-modules==0.2.8
pybedtools==0.8.0
pyBigWig==0.3.12
pybind11==2.5.0
pycairo==1.19.1
pycparser==2.20
pycrypto==2.6.1
pygist==2.2.0
Pygments==2.11.2
PyJWT==1.7.1
pymdown-extensions==9.1
PyNaCl==1.2.1
pyparsing==2.2.0
pyperclip==1.8.2
pyrsistent==0.18.0
pysam==0.15.1
pysbd==0.2.3
pytest==4.3.1
python-dateutil==2.8.1
Python-Deprecated==1.1.0
python-igraph==0.8.0
python-Levenshtein==0.12.2
python-magic==0.4.18
pytz==2017.3
PyYAML==5.3.1
pyyaml_env_tag==0.1
pyzmq==19.0.0
qtconsole==4.7.3
QtPy==1.9.0
regex==2021.11.10
requests==2.25.1
requests-oauthlib==1.3.0
retrying==1.3.3
rich==9.13.0
rsa==3.4.2
s3cmd==2.1.0
s3transfer==0.5.0
sacremoses==0.0.46
schematics==2.1.0
scikit-learn==0.23.1
scipy==1.5.4
scispacy==0.2.4
screed==1.0
seaborn==0.10.1
Send2Trash==1.5.0
sentencepiece==0.1.96
seqmagick==0.7.0
six==1.16.0
slurm-gpustat==0.0.7
smmap==5.0.0
sniffio==1.2.0
snowballstemmer==1.2.1
sortedcontainers==2.1.0
soupsieve==2.3.1
sourmash==3.5.0
spacy==2.3.7
spglib==1.16.0
Sphinx==1.8.3
sphinxcontrib-websupport==1.1.0
SQLAlchemy==1.4.27
sqlparse==0.2.4
srsly==1.0.2
statistics==1.0.3.5
stevedore==3.5.0
suspenders==0.2.6
tensorboard==1.10.0
tensorboard-plugin-wit==1.8.0
tensorboardX==2.4.1
tensorflow==1.10.1
termcolor==1.1.0
terminado==0.8.3
terminaltables==3.1.0
testpath==0.4.4
texttable==1.6.2
Theano==1.0.3
thinc==7.4.5
threadpoolctl==2.1.0
tigmint==1.1.2
tinydb==3.13.0
tokenizers==0.8.1rc1
torch==1.6.0
torchvision==0.8.2
tornado==6.1
tqdm==4.46.1
traitlets==4.3.2
transformers==3.0.2
trash-cli==0.17.1.14.post0
typing-extensions==3.7.4.3
umi-tools==1.0.0
urllib3==1.26.7
virtualenv==15.1.0
wasabi==0.6.0
watchdog==2.1.6
wcwidth==0.1.7
webencodings==0.5.1
websocket-client==1.3.1
Werkzeug==0.14.1
widgetsnbextension==3.5.1
word2number==1.1
xopen==1.4.0
yapf==0.31.0
yarl==1.7.2
zipp==3.1.0
```
</p>
</details>
## Steps to reproduce
<details>
<summary><b>Example source:</b></summary>
<p>
<!-- Add a fully runnable example in between the next two lines below that will reproduce the bug -->
```
from allennlp.models.model import Model
scierc_pure = Model.from_archive('https://nlp.cs.princeton.edu/projects/pure/scierc_models/ent-scib-ctx300.zip')
```
</p>
</details>
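As a side note on the traceback above: the `b'PK'` magic bytes mean the downloaded file is a plain zip archive rather than the tar.gz that `load_archive` expects. A heavily hedged workaround sketch (paths are placeholders, and whether the extracted contents actually form a valid AllenNLP archive is a separate question) would be to unpack the zip first and point `load_archive` at the resulting directory:

```python
import zipfile

from allennlp.models.archival import load_archive

# Hedged sketch: unpack the zip manually, then load from the directory instead of
# letting load_archive try (and fail) to treat the file as a tar.gz.
with zipfile.ZipFile("ent-scib-ctx300.zip") as zf:
    zf.extractall("ent-scib-ctx300")

archive = load_archive("ent-scib-ctx300")
model = archive.model
```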
|
closed
|
2022-10-10T22:27:29Z
|
2022-10-17T21:39:10Z
|
https://github.com/allenai/allennlp/issues/5718
|
[
"bug"
] |
serenalotreck
| 2
|
ShishirPatil/gorilla
|
api
| 910
|
Test friendly examples?
|
Hey, I want to test my own model with a specific chat template.
I will need to write a lot in the Model Handler (600+ lines). I did not find anything in the repo that would make it convenient to debug my code.
The question is, is there anything like that?
|
closed
|
2025-02-17T08:45:16Z
|
2025-02-18T07:27:02Z
|
https://github.com/ShishirPatil/gorilla/issues/910
|
[
"BFCL-New Model"
] |
DmitryDiTy
| 4
|
ionelmc/pytest-benchmark
|
pytest
| 62
|
benchmark-compare no longer defaulting to comparing against the previous run
|
My test suite passes `--benchmark-compare` with no value specified, so that it defaults to the latest saved run. Upgrading from 3.0.0 to 3.1.0a1 broke this such that pytest-benchmark now always compares the current run against itself, rather than against the previous run.
|
open
|
2016-11-21T05:34:19Z
|
2017-01-03T23:38:33Z
|
https://github.com/ionelmc/pytest-benchmark/issues/62
|
[] |
jab
| 3
|
streamlit/streamlit
|
machine-learning
| 10,754
|
make does not install yarn, yet calls yarn, thus failing with "yarn: command not found"
|
### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [x] I added a very descriptive title to this issue.
- [x] I have provided sufficient information below to help reproduce this issue.
### Summary
Currently, when I run `make all`, `make all-devel`, `make mini-devel`, or (the unadvertised) `make build-deps`, it installs some python packages, then some apt packages (it doesn't mention yarn in the output about these), then it prints:
```
cd frontend/ ; yarn install --immutable
/bin/bash: line 1: yarn: command not found
make: *** [Makefile:296: react-init] Error 127
```
This is especially bad because it halts the script before the protobuf step runs, so even though I'm only working on the Python side, I don't have that Python part.
Due to some other confusing problems in the build process, I also tried running `uv run make all-devel`, and eventually got the same `yarn: command not found` error.
### Debug info
- Streamlit version: develop branch (1.43.2)
- Python version: 3.10.12
- Operating System: Windows 10 (latest)
- Browser: n/a
|
open
|
2025-03-12T18:59:04Z
|
2025-03-18T10:50:30Z
|
https://github.com/streamlit/streamlit/issues/10754
|
[
"type:enhancement",
"area:contribution"
] |
wyattscarpenter
| 5
|
pyqtgraph/pyqtgraph
|
numpy
| 2,610
|
Support "QFileDialog.getOpenFileName" for exporting images via context menu
|
Hello,
I found out that the export function opens its file dialog in the following way:
`self.fileDialog.setFileMode(QtGui.QFileDialog.AnyFile)`
in the source code.
This file dialog style is different from the one you get with
`QFileDialog.getOpenFileName`
I use Win10 and the built-in file dialog is not so convenient for me.
(Matplotlib's toolbar "Save the figure" function use the latter native way.)
I wonder if this could be added to the export function of context menu?
I also found one example in
https://github.com/pyqtgraph/pyqtgraph/blob/f24335dc71b66d3b2075c197367b3290e2ea4d11/pyqtgraph/examples/relativity/relativity.py
which has an explicit export button that also uses the latter file dialog style.
When pyqtgraph is integrated into my own GUI (which opens file dialogs with `QFileDialog.getOpenFileName`), this leads to two file dialogs with different styles, which may cause confusion.
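For reference, a minimal sketch of the native-style dialog I mean, assuming PyQt5 (for exporting, the save-side counterpart `getSaveFileName` is probably the closer match):

```python
from PyQt5 import QtWidgets

# Minimal sketch of the native dialog style; in a real app the QApplication
# already exists and the parent widget would be passed instead of None.
app = QtWidgets.QApplication([])
path, _ = QtWidgets.QFileDialog.getSaveFileName(
    None, "Export image", "", "PNG Image (*.png);;All Files (*)"
)
print(path)
```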
|
closed
|
2023-02-11T04:38:05Z
|
2023-02-11T05:23:46Z
|
https://github.com/pyqtgraph/pyqtgraph/issues/2610
|
[] |
Lexachoc
| 1
|
KaiyangZhou/deep-person-reid
|
computer-vision
| 96
|
Where can I find the trained model
|
Where can I find the trained models behind the results in the MODEL_ZOO? Thank you very much! @KaiyangZhou
|
closed
|
2018-12-28T09:18:28Z
|
2019-02-04T23:18:39Z
|
https://github.com/KaiyangZhou/deep-person-reid/issues/96
|
[] |
ciwei123
| 5
|
sktime/sktime
|
data-science
| 7,200
|
[DOC] Feature importances of direct-strategy reduced forecasters with exogenous variables are not documented
|
**Problem Statement**
It is not clear on the documentation of [sktime.forecasting.compose.make_reduction](https://www.sktime.net/en/latest/api_reference/auto_generated/sktime.forecasting.compose.make_reduction.html) how one can interpret the `feature_importances_` of the `estimator`, in the case of direct-strategy forecasting through reduction.
**Minimum Reproducible Example**
```python
import numpy as np
import pandas as pd
from sklearn.ensemble import ExtraTreesRegressor
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.compose import make_reduction
from sktime.forecasting.model_selection import temporal_train_test_split
if __name__ == "__main__":
y = load_airline()
X = pd.DataFrame(
{
"x1": np.random.rand(y.shape[0]),
"x2": np.random.rand(y.shape[0]),
}
).set_index(y.index)
y_train, y_test, X_train, X_test = temporal_train_test_split(y, X, test_size=5)
regressor = ExtraTreesRegressor()
forecaster = make_reduction(regressor, window_length=5, strategy="direct")
fh = ForecastingHorizon(y_test.index, is_relative=False)
forecaster.fit(y_train, X=X_train, fh=fh)
feature_importances = forecaster.get_fitted_params()["estimators"][0].feature_importances_
print(feature_importances)
```
This yields the following array:
```
[0.08666385 0.0838308 0.15618081 0.14396575 0.45666079 0.00584428
0.00518489 0.00855417 0.00749582 0.00788163 0.0085631 0.00395616
0.00599802 0.00733155 0.01188838]
```
While all `15` values in this array sum to 1.0 (typical for feature importances), one cannot point out which value corresponds to which original feature.
**Solution**
First of all, given that we have selected:
- `window_length = 5`
- `2` exogenous variables (`x1`, `x2`)
it makes sense that the array has length `15` since we have essentially `3` features in total (1 target variable + 2 exogenous), each with 5 lags (due to window length). In short: `15 = 3 * 5`.
While this by itself is not a trivial result (_I think sktime users will find it very useful to have something like this documented_), even less trivial is figuring out how these `15` values (feature importances) are associated with the original 3 features (target + exogenous).
By introducing the random-valued `x1` and `x2`, as well as adding another "pseudo-feature" such as `"x3": 2 * y` to the design matrix `X` of the example above, one can make the following observations:
- the first 5 values in the feature importances array correspond to the 5 lags of the target variable `y` (first value is`lag_5` and 5th value is `lag_1`, based on feature importance)
- each subsequent group of values of size 5 corresponds to each exogenous variable in the original design matrix `X` (left to right).
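A minimal sketch of how the importances could be labelled under this interpretation (the helper below is hypothetical, not sktime API):

```python
import numpy as np
import pandas as pd

def label_importances(importances, window_length, exog_columns, target_name="y"):
    # target lags come first, then each exogenous column; within each group the
    # lags run from the oldest (lag_window_length) down to the most recent (lag_1)
    names = [
        f"{series}_lag_{lag}"
        for series in [target_name, *exog_columns]
        for lag in range(window_length, 0, -1)
    ]
    return pd.Series(np.asarray(importances), index=names)

# e.g. label_importances(feature_importances, window_length=5, exog_columns=["x1", "x2"])
```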
My suggestion is that this particular behaviour should be:
1. validated by at least one `sktime` contributor (*I may have missed something myself*)
2. documented somewhere in the `sktime` documentation (*I think it's something valuable and I am willing to help contribute to it - if you find it valuable as well*)
|
open
|
2024-09-30T10:37:18Z
|
2024-09-30T11:57:10Z
|
https://github.com/sktime/sktime/issues/7200
|
[
"implementing algorithms",
"documentation",
"module:forecasting"
] |
ilias-ant
| 4
|
anselal/antminer-monitor
|
dash
| 140
|
DECRED ANTMINER SUPORT
|
Well, like you said ... I will open this issue so we can see what you can do about it. But to help, I need to know what info you need to make this happen.
|
open
|
2018-11-01T14:22:59Z
|
2018-11-28T13:17:41Z
|
https://github.com/anselal/antminer-monitor/issues/140
|
[] |
red0bear
| 5
|
django-import-export/django-import-export
|
django
| 1,494
|
"TypeError: __init__() missing 1 required positional argument: 'search_help_text'"
|
Go to jet/utils.py and add `model_admin.search_help_text` as the last item in `change_list_args`.
Lines 240-244:
change_list_args = [
request, model, list_display, list_display_links, list_filter,
model_admin.date_hierarchy, search_fields, list_select_related,
model_admin.list_per_page, model_admin.list_max_show_all,
model_admin.list_editable, model_admin, model_admin.search_help_text]
|
closed
|
2022-10-03T18:08:27Z
|
2023-02-20T13:51:32Z
|
https://github.com/django-import-export/django-import-export/issues/1494
|
[
"bug"
] |
shashi2098
| 3
|
tox-dev/tox
|
automation
| 3,320
|
Way to choose python architecture?
|
## What's the problem this feature will solve?
<!-- What are you trying to do, that you are unable to achieve with tox as it currently stands? -->
I have an arm64 Python and an x86-64 Python, and I want to run tests on the arm64 one. I see no way to explicitly select either one.
## Describe the solution you'd like
<!-- Clear and concise description of what you want to happen. -->
<!-- Provide examples of real world use cases that this would enable and how it solves the problem described above. -->
I'm fixing https://github.com/pypa/setuptools/issues/4553 but tox can't select arm64 python. Something like `-e py312-arm64` would help.
## Alternative Solutions
<!-- Have you tried to workaround the problem using tox or other tools? Or a different approach to solving this issue?
Please elaborate here. -->
## Additional context
<!-- Add any other context, links, etc. about the feature here. -->
|
open
|
2024-08-10T20:51:50Z
|
2024-08-11T03:31:51Z
|
https://github.com/tox-dev/tox/issues/3320
|
[
"enhancement"
] |
saschanaz
| 1
|
keras-team/keras
|
data-science
| 20,835
|
ValueErrors when calling Model.export() for TF SavedModel format on Keras Models with dict inputs
|
**Problem**
The [`Model.export()` API](https://keras.io/api/models/model_saving_apis/export/) in Keras 3 supports exporting to a TensorFlow SavedModel artifact for inference. When trying to export [Gemma 2](https://www.kaggle.com/models/google/gemma-2) and [ShieldGemma](https://www.kaggle.com/models/google/shieldgemma) to [TF SavedModel](https://www.tensorflow.org/guide/saved_model), I ran into two different `ValueError`s:
* If no `input_signature` is provided, a `ValueError` will be thrown related to a structural mismatch between the expected and actual inputs passed to the `GemmaCausalLM` class; and
* If an `input_signature` is provided as a `list[keras.InputSpec]`, a `ValueError` will be thrown related to the wrong number of values being passed to a TF function.
However, if you wrap the `dict` from `model.input` in a `list`, as `input_signature=[model.input]`, the export runs to completion.
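For reference, a minimal sketch of the working call (the output path is a placeholder and `model` is the already-loaded Keras model):

```python
# Hedged sketch of the workaround: wrap the model's dict input in a list.
model.export("saved_model_dir", input_signature=[model.input])
```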
This is not restricted to Gemma models, as shown in this [minimal reproducible example](https://gist.github.com/RyanMullins/42e123d94e70d62f7413545e780e6064).
Thanks to @mattdangerw for helping to isolate this minimal example.
|
open
|
2025-01-31T19:53:26Z
|
2025-02-27T17:17:08Z
|
https://github.com/keras-team/keras/issues/20835
|
[
"stat:awaiting keras-eng",
"Gemma"
] |
RyanMullins
| 1
|
amdegroot/ssd.pytorch
|
computer-vision
| 540
|
After training the model, the ONNX export fails
|
jetson_nano@Jetson:~/jetson-inference/python/training/detection/ssd$ python3 onnx_export.py --model-dir=models/markusmodel
Namespace(batch_size=1, height=300, input='', labels='labels.txt', model_dir='models/markusmodel', net='ssd-mobilenet', output='', width=300)
running on device cuda:0
found best checkpoint with loss 0.760933 (models/markusmodel/mb1-ssd-Epoch-29-Loss-0.7609327760609713.pth)
creating network: ssd-mobilenet
num classes: 4
loading checkpoint: models/markusmodel/mb1-ssd-Epoch-29-Loss-0.7609327760609713.pth
Traceback (most recent call last):
File "onnx_export.py", line 86, in <module>
net.load(args.input)
File "/home/jetson_nano/jetson-inference/python/training/detection/ssd/vision/ssd/ssd.py", line 135, in load
self.load_state_dict(torch.load(model, map_location=lambda storage, loc: storage))
File "/home/jetson_nano/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1045, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for SSD:
size mismatch for classification_headers.0.weight: copying a param with shape torch.Size([30, 512, 3, 3]) from checkpoint, the shape in current model is torch.Size([24, 512, 3, 3]).
size mismatch for classification_headers.0.bias: copying a param with shape torch.Size([30]) from checkpoint, the shape in current model is torch.Size([24]).
size mismatch for classification_headers.1.weight: copying a param with shape torch.Size([30, 1024, 3, 3]) from checkpoint, the shape in current model is torch.Size([24, 1024, 3, 3]).
size mismatch for classification_headers.1.bias: copying a param with shape torch.Size([30]) from checkpoint, the shape in current model is torch.Size([24]).
size mismatch for classification_headers.2.weight: copying a param with shape torch.Size([30, 512, 3, 3]) from checkpoint, the shape in current model is torch.Size([24, 512, 3, 3]).
size mismatch for classification_headers.2.bias: copying a param with shape torch.Size([30]) from checkpoint, the shape in current model is torch.Size([24]).
size mismatch for classification_headers.3.weight: copying a param with shape torch.Size([30, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([24, 256, 3, 3]).
size mismatch for classification_headers.3.bias: copying a param with shape torch.Size([30]) from checkpoint, the shape in current model is torch.Size([24]).
size mismatch for classification_headers.4.weight: copying a param with shape torch.Size([30, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([24, 256, 3, 3]).
size mismatch for classification_headers.4.bias: copying a param with shape torch.Size([30]) from checkpoint, the shape in current model is torch.Size([24]).
size mismatch for classification_headers.5.weight: copying a param with shape torch.Size([30, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([24, 256, 3, 3]).
size mismatch for classification_headers.5.bias: copying a param with shape torch.Size([30]) from checkpoint, the shape in current model is torch.Size([24]).
|
open
|
2021-03-18T14:07:02Z
|
2021-03-18T14:07:15Z
|
https://github.com/amdegroot/ssd.pytorch/issues/540
|
[] |
MST2010Markus
| 0
|
keras-team/keras
|
python
| 20,931
|
TF+XLA+Mixed Precision: Keras fails to compute gradients
|
Keras fails to compute gradients for an autoencoder-esque model using the TensorFlow backend with mixed precision and JIT compilation enabled.
See code here: [colab](https://colab.research.google.com/gist/itmo153277/6f38f43d724e173a30115d12f2a40e07/upsamplerelugrad.ipynb).
This is caused by the `UpSampling2D` layer. When gradients are computed, the type is resolved as `float32` instead of `float16`, and this causes the ReLU that comes next to throw a dtype mismatch exception.
The only working workaround I found is explicitly setting the dtype to `float32` for the `UpSampling2D` layer. This inserts a `cast` node in between `relu` and `upsample`, which takes care of the dtype conversion.
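A minimal sketch of that workaround (layer sizes are placeholders; `jit_compile` is omitted for brevity):

```python
import keras

keras.mixed_precision.set_global_policy("mixed_float16")

inputs = keras.Input((8, 8, 4))
# pinning the layer to float32 forces a cast before the ReLU that follows
x = keras.layers.UpSampling2D(size=2, dtype="float32")(inputs)
outputs = keras.layers.ReLU()(x)
model = keras.Model(inputs, outputs)
```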
Not sure which project this issue should be submitted to: Keras, TF or XLA
|
open
|
2025-02-20T12:11:14Z
|
2025-02-21T04:52:28Z
|
https://github.com/keras-team/keras/issues/20931
|
[
"type:Bug"
] |
itmo153277
| 0
|
ray-project/ray
|
pytorch
| 50,976
|
[Ray Core] Ray task error stack trace is incomplete
|
### What happened + What you expected to happen
If the error message is too long, it seems that it cannot be fully displayed. Is there a length limit somewhere? Is there an environment variable that can be configured?

### Versions / Dependencies
Ray v2.38.0
### Reproduction script
Calling a C++ binary program with many parameters and then reporting an error internally
### Issue Severity
Medium: It is a significant difficulty but I can work around it.
|
closed
|
2025-02-28T03:56:51Z
|
2025-03-03T03:58:10Z
|
https://github.com/ray-project/ray/issues/50976
|
[
"bug",
"dashboard",
"observability"
] |
Moonquakes
| 1
|
Baiyuetribe/kamiFaka
|
flask
| 133
|
Could an "Open Alipay" button be added for mobile payments?
|
After enabling face-to-face payment (当面付), I found that on mobile you only need to tap the payment QR code to wake up the app. Could an obvious button be added, such as "Open Alipay" or "Pay in app", to prompt waking up the mobile app? Or could the mobile side automatically invoke Alipay payment? Also, when there is only one payment method, could it be selected by default, so the user doesn't have to click the only available option?
|
open
|
2021-12-18T08:21:44Z
|
2021-12-18T08:21:44Z
|
https://github.com/Baiyuetribe/kamiFaka/issues/133
|
[
"bug",
"good first issue",
"question"
] |
meetcode
| 0
|
roboflow/supervision
|
computer-vision
| 1,715
|
`move_masks` only supports movement in positive direction
|
If you compare the code of `move_masks`, `move_detections` and `move_oriented_boxes`, you'll find that only the mask code restricts the offset direction:
```python
if offset[0] < 0 or offset[1] < 0:
raise ValueError(f"Offset values must be non-negative integers. Got: {offset}")
```
It should be possible to move masks in either direction, even if it results in cropping.
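A minimal sketch of what negative-offset handling could look like (plain NumPy, assuming `(N, H, W)` boolean masks and an `(x, y)` offset; this is not the current supervision implementation):

```python
import numpy as np

def move_masks_with_crop(masks, offset, resolution_wh):
    """Shift (N, H, W) boolean masks by (dx, dy); pixels moved outside are cropped."""
    dx, dy = offset
    w, h = resolution_wh
    n, src_h, src_w = masks.shape
    out = np.zeros((n, h, w), dtype=bool)
    # region of the source masks that remains visible after the shift
    sy0, sy1 = max(0, -dy), min(src_h, h - dy)
    sx0, sx1 = max(0, -dx), min(src_w, w - dx)
    if sy1 > sy0 and sx1 > sx0:
        dy0, dx0 = max(0, dy), max(0, dx)
        out[:, dy0:dy0 + (sy1 - sy0), dx0:dx0 + (sx1 - sx0)] = masks[:, sy0:sy1, sx0:sx1]
    return out
```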
To complete this:
- [ ] Change the code so masks can be moved with negative offset
- [ ] Create a unit test suite for `move_masks`
---
It would help us immensely and speed up the review process if you could create a [Colab](https://colab.research.google.com/) showcasing the changes, but for this task it is optional. You may use the [Starter Template](https://colab.research.google.com/drive/1rin7WrS-UvVIe-_Gfxmu-yVslGphOq89?usp=sharing).
|
closed
|
2024-12-06T09:17:48Z
|
2024-12-16T14:08:27Z
|
https://github.com/roboflow/supervision/issues/1715
|
[
"bug",
"help wanted"
] |
LinasKo
| 3
|
onnx/onnx
|
scikit-learn
| 6,385
|
Add coverage badge to readme (https://app.codecov.io/gh/onnx/onnx/tree/main)
|
Add coverage badge of project https://app.codecov.io/gh/onnx/onnx/tree/main to readme:
<img width="636" alt="image" src="https://github.com/user-attachments/assets/d2593d04-13f9-4410-8d35-e65087ee9d89">
|
open
|
2024-09-22T07:29:05Z
|
2024-09-22T07:36:45Z
|
https://github.com/onnx/onnx/issues/6385
|
[
"topic: documentation"
] |
andife
| 1
|
huggingface/transformers
|
python
| 36,025
|
HIGGS Quantization not working properly
|
### System Info
**Environment**
```
- `transformers` version: 4.48.2
- Platform: Linux-5.4.210-39.1.pagevecsize-x86_64-with-glibc2.27
- Python version: 3.11.10
- Huggingface_hub version: 0.26.2
- Safetensors version: 0.4.5
- Accelerate version: 1.1.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA A100-SXM4-80GB
- fast_hadamard_transform 1.0.4.post1
```
### Who can help?
@BlackSamorez
@SunMarc
@ArthurZucker
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Recently, in the [PR](https://github.com/huggingface/transformers/pull/34997) HIGGS quantization from the paper [Pushing the Limits of Large Language Model Quantization via the Linearity Theorem](https://arxiv.org/abs/2411.17525) was introduced.
But when attempting to load the quantized `Llama-3.1-8B-Instruct` model in this format as follows:
```python
model_name = "meta-llama/Llama-3.1-8B-Instruct"
quantization_config = HiggsConfig(bits=4, p=2)
model = AutoModelForCausalLM.from_pretrained(
model_name,
device_map="auto",
torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True,
quantization_config=quantization_config
)
model.config.use_cache = False
```
And doing a forward pass with dummy inputs:
```python
inputs = torch.randint(0, model.config.vocab_size, device="cuda", size=(8,))
with torch.no_grad():
outputs = model(inputs)
```
I get the following error in the RoPE:
```bash
File ~/miniconda3/envs/llm/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py:271, in LlamaAttention.forward(self, hidden_states, position_embeddings, attention_mask, past_key_value, cache_position, **kwargs)
[268](https://vscode-remote+ssh-002dremote-002bultramar.vscode-resource.vscode-cdn.net/home/dkuznedelev/FLUTE_Playground/~/miniconda3/envs/llm/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py:268) value_states = self.v_proj(hidden_states).view(hidden_shape).transpose(1, 2)
[270](https://vscode-remote+ssh-002dremote-002bultramar.vscode-resource.vscode-cdn.net/home/dkuznedelev/FLUTE_Playground/~/miniconda3/envs/llm/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py:270) cos, sin = position_embeddings
--> [271](https://vscode-remote+ssh-002dremote-002bultramar.vscode-resource.vscode-cdn.net/home/dkuznedelev/FLUTE_Playground/~/miniconda3/envs/llm/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py:271) query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin)
[273](https://vscode-remote+ssh-002dremote-002bultramar.vscode-resource.vscode-cdn.net/home/dkuznedelev/FLUTE_Playground/~/miniconda3/envs/llm/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py:273) if past_key_value is not None:
[274](https://vscode-remote+ssh-002dremote-002bultramar.vscode-resource.vscode-cdn.net/home/dkuznedelev/FLUTE_Playground/~/miniconda3/envs/llm/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py:274) # sin and cos are specific to RoPE models; cache_position needed for the static cache
[275](https://vscode-remote+ssh-002dremote-002bultramar.vscode-resource.vscode-cdn.net/home/dkuznedelev/FLUTE_Playground/~/miniconda3/envs/llm/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py:275) cache_kwargs = {"sin": sin, "cos": cos, "cache_position": cache_position}
File ~/miniconda3/envs/llm/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py:169, in apply_rotary_pos_emb(q, k, cos, sin, position_ids, unsqueeze_dim)
[167](https://vscode-remote+ssh-002dremote-002bultramar.vscode-resource.vscode-cdn.net/home/dkuznedelev/FLUTE_Playground/~/miniconda3/envs/llm/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py:167) cos = cos.unsqueeze(unsqueeze_dim)
[168](https://vscode-remote+ssh-002dremote-002bultramar.vscode-resource.vscode-cdn.net/home/dkuznedelev/FLUTE_Playground/~/miniconda3/envs/llm/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py:168) sin = sin.unsqueeze(unsqueeze_dim)
--> [169](https://vscode-remote+ssh-002dremote-002bultramar.vscode-resource.vscode-cdn.net/home/dkuznedelev/FLUTE_Playground/~/miniconda3/envs/llm/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py:169) q_embed = (q * cos) + (rotate_half(q) * sin)
[170](https://vscode-remote+ssh-002dremote-002bultramar.vscode-resource.vscode-cdn.net/home/dkuznedelev/FLUTE_Playground/~/miniconda3/envs/llm/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py:170) k_embed = (k * cos) + (rotate_half(k) * sin)
[171](https://vscode-remote+ssh-002dremote-002bultramar.vscode-resource.vscode-cdn.net/home/dkuznedelev/FLUTE_Playground/~/miniconda3/envs/llm/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py:171) return q_embed, k_embed
RuntimeError: The size of tensor a (32) must match the size of tensor b (128) at non-singleton dimension 3
```
### Expected behavior
I would expect a successful forward pass through the quantized model.
|
closed
|
2025-02-04T08:55:00Z
|
2025-02-19T05:35:52Z
|
https://github.com/huggingface/transformers/issues/36025
|
[
"bug"
] |
Godofnothing
| 3
|
plotly/dash-cytoscape
|
plotly
| 128
|
Edge Styling makes edges Unselectable
|
#### Description
When any styling is applied to an edge element, it is no longer selectable by the user. I want to design an app with directed edges that can be selected by the user to trigger a callback that displays info about that relationship. Currently, I have to choose between edge styling (e.g. showing the edge direction with an arrow) and interactivity.
#### Steps/Code to Reproduce
Apply styling to any element in the sample code. Here the two-node example is slightly edited, and the only edge that's unselectable is the one assigned a styling in the style dictionary. Neither `events: yes` nor `selectable: yes` seems to work, which is what most closely aligns with what I can find in the cytoscape.js documentation.
```python
import dash
import dash_cytoscape as cyto
import dash_html_components as html
app = dash.Dash(__name__)
app.layout = html.Div([
cyto.Cytoscape(
id='cytoscape-two-nodes',
layout={'name': 'preset'},
style={'width': '100%', 'height': '400px'},
elements=[
{'data': {'id': 'one', 'label': 'Node 1'}, 'position': {'x': 75, 'y': 75}},
{'data': {'id': 'two', 'label': 'Node 2'}, 'position': {'x': 200, 'y': 200}},
{'data': {'id': 'three', 'label': 'Node 3'}, 'position': {'x':200, 'y': 75}},
{'data': {'source': 'one', 'target': 'two'}, 'classes': 'test'},
{'data': {'source': 'two', 'target': 'three'}},
{'data': {'source': 'three', 'target': 'one'}}
],
stylesheet=[
{
'selector': '.test',
'style': {
'curve-style': 'taxi',
'label': 'bezier',
'line-color': 'red',
'target-arrow-color': 'blue',
'target-arrow-shape': 'triangle',
'selectable': 'yes',
'events': 'yes'
}
},
]
)
])
if __name__ == '__main__':
app.run_server(debug=True)
```
#### Expected Results
I would expect that simply changing the color or style of an edge would not remove selectability, or, if this were the default, that some keyword in the stylesheet could override the default behaviour.
#### Actual Results
Applying any styling to an edge element makes it impossible to select via user mouse events.
#### Versions
Dash 1.18.1
Dash Core Components 1.1.1
Dash HTML Components 1.14.1
Dash CYTO Components 0.2.0
|
closed
|
2021-03-11T21:24:07Z
|
2021-08-17T15:10:32Z
|
https://github.com/plotly/dash-cytoscape/issues/128
|
[] |
bfraile5
| 1
|
marcomusy/vedo
|
numpy
| 58
|
Options not working with vtkplotter.dolfin.plot
|
Hi, I am using vtkplotter with FEniCS and none of the options I pass to vtkplotter.dolfin.plot() work. In my code I try to use the alpha=0.3 and lw=0 options, but the mesh always shows opaque and with lines.
I have installed vtkplotter with `pip3 install vtkplotter` in Ubuntu 18.04
My code is the following:
```
from fenics import *
from mshr import *
import vtkplotter as vtkp
import dolfin as df
import numpy as np
from math import sin, cos, pi, sqrt
# My domain is a cylinder
height_cylinder = 150.00
r = 25.0
my_domain = Cylinder(Point(0, 0, 0), Point(0, 0, height_cylinder), r, r)
# I generate the mesh
mesh = generate_mesh(my_domain, 100)
# I have one subdomains which is a cylindrical shell inside the domain
ri_shell = 13.00
ro_shell = 17.00
tol = 5e-1
class Shell(SubDomain):
def __init__(self):
super().__init__()
def inside(self, x, on_boundary):
if ((x[0]**2 + x[1]**2 <= (ro_shell + tol)**2) and (x[0]**2 + x[1]**2 >= (ri_shell - tol)**2 ) and between(x[2], (0, height_cylinder)) ):
return True
shell = Shell()
# I have a second subdomain which is a small cylinder inside the shell in coordinates x = 3 y y = 2.1
r_small = 3
class SmallCylinder(SubDomain):
def __init__(self):
super().__init__()
def inside(self, x, on_boundary):
if ( ( (x[0] - 3.0)**2 + (x[1] - 2.1)**2 <= (r_small + tol)**2 ) and between(x[2], (0, height_cylinder)) ):
return True
smallCylinder = SmallCylinder()
# After the subdomains have been created I make a MeshFunction to mark the subdomains
materials = MeshFunction("size_t", mesh, 0, mesh.domains())
shell.mark(materials, 1)
smallCylinder.mark(materials, 2)
# I generate the function spaces
V = FunctionSpace(mesh, 'P', 2)
dx = Measure('dx', domain=mesh, subdomain_data=materials)
V0 = FunctionSpace(mesh, 'DG', 0)
vtkp.dolfin.plot(materials, lw=0, alpha=0.35)
```
|
closed
|
2019-10-03T22:17:39Z
|
2019-10-03T22:44:22Z
|
https://github.com/marcomusy/vedo/issues/58
|
[] |
SiriusFuenmayor
| 0
|
axnsan12/drf-yasg
|
django
| 775
|
Add a way to specify path component on a needed basis
|
# Feature Request
## Problem Description
Currently it's not possible to specify path components for individual endpoints without using FORCE_SCRIPT_NAME, but there are cases where each endpoint in an app (or across various apps) needs its own path component.
## Potential Solution
It would be great if there were a way of supplying the path component to be attached to a URL within `get_schema_view`, as done below using the `pathcomponent` parameter:
```
schema_view = get_schema_view(
openapi.Info(
title="Celecast Internal API for User Related Operations",
default_version='v1',
description='This is the official documentation for all internal APIs that\
would be used by the Celecast development team, this section is for the APIs that power all\
user related operations',
terms_of_service="https://api.example.co/policies/terms/",
contact=openapi.Contact(email="support@example.co"),
license=openapi.License(name="BSD License"),
),
url='https://api.example.co',
pathcomponent='/users',
urlconf='users.api.urls',
public=False,
permission_classes=[AllowAny]
)
```
## Alternatives considered
I have tried the FORCE_SCRIPT_NAME but this has a project-wide scope and affects all the URLs on the project
## Sample Usecase
Assuming I have a URL `api.example.com/users/login`, and the prefix `/login` is just one of many in the _users_ app
In the root url.py file we have something like:
```
urlpatterns = [
path('admin/', admin.site.urls),
# urls for the user on-boarding and user related functions
path('users/', include('users.api.urls')),
path('podcasts/', include('podcast.api.urls')),
path('audiostream/', include('audiostreaming.api.urls')),]
```
in the users app we have something like:
```
from users.api.views import (signup_user_api, login_api,
email_verification_request, login_api,
logout_api, activate_api, subscribe, unsubscribe
)
app_name = 'users'
urlpatterns = [
path('signup', signup_user_api, name='signup_user_api'),
path('login', login_api, name='login_api'),]
```
So each app URL is routed to its corresponding app in the fashion above. When using the URL parameter in _get_schema_view_, it doesn't display the full path with the prefix I supply; it trims off the path component. The FORCE_SCRIPT_NAME solution won't work here as it would affect the whole project, and changing the URL structure would cause a lot of breaking changes, so I was wondering if it would be possible to add a means of stating what URL the documentation should use on an individual app basis, just as it's supplied.
Currently the URL produced is `api.example.co/login`, omitting the `/users` path component.
If there's a way you would recommend that won't cause breaking changes, I would be very much open to it.
|
open
|
2022-02-16T15:50:30Z
|
2025-03-07T12:11:20Z
|
https://github.com/axnsan12/drf-yasg/issues/775
|
[
"triage"
] |
nseetim
| 0
|
adbar/trafilatura
|
web-scraping
| 55
|
Getting "ERROR: file too small" when trying to use trafilatura
|
Hello everybody,
I'm fairly new to using trafilatura. When I run " trafilatura -u 'insert random news article' " in the shell, I always get "ERROR: file too small". I currently use zsh as my shell and also tried it in bash with no success. I checked that the latest version is installed. I'm currently working on macOS Catalina (10.15.7). I've never worked with the command line/shell before, so I would really appreciate it if somebody could help me with this issue.
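For reference, a minimal sketch of the equivalent call through the Python API (the URL is a placeholder), which at least separates shell-quoting problems from download problems:

```python
import trafilatura

downloaded = trafilatura.fetch_url("https://www.example.com/some-article")
if downloaded is None:
    print("download failed")  # network / blocking issue rather than extraction
else:
    print(trafilatura.extract(downloaded))
```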
|
closed
|
2021-01-15T15:25:50Z
|
2021-01-19T18:00:15Z
|
https://github.com/adbar/trafilatura/issues/55
|
[
"bug"
] |
melonlucie
| 8
|
keras-team/keras
|
deep-learning
| 20,724
|
UnexpectedTracerError: JAX had a side effect - DynamicJaxprTracer - set JAX_CHECK_TRACER_LEAKS
|
The following code breaks.
```python
import os
os.environ["KERAS_BACKEND"] = "jax"
import keras
from keras import layers
class RNL(layers.Layer):
def __init__(self, noise_rate, **kwargs):
super().__init__(**kwargs)
self.noise_rate = noise_rate
self.seed_generator = keras.random.SeedGenerator(seed=1337)
def call(self, inputs):
apply_noise = keras.random.uniform([], seed=self.seed_generator) < self.noise_rate
outputs = keras.ops.cond(
pred=apply_noise,
true_fn=lambda: inputs + keras.random.uniform(
shape=keras.ops.shape(inputs),
minval=0,
maxval=self.noise_rate,
seed=self.seed_generator
),
false_fn=lambda: inputs,
)
return inputs
def compute_output_shape(self, input_shape):
return input_shape
```
```python
import numpy as np
from keras import layers, models
def create_dummy_model(noise_rate=0.1):
model = models.Sequential([
layers.Input(shape=(10,)),
RNL(noise_rate=noise_rate),
layers.Dense(32, activation="relu"),
layers.Dense(1, activation="sigmoid")
])
return model
model = create_dummy_model(noise_rate=0.2)
model.compile(
optimizer="adam",
loss="binary_crossentropy",
metrics=["accuracy"]
)
x_dummy = np.random.rand(100, 10)
y_dummy = np.random.randint(0, 2, size=(100,))
model.fit(x_dummy, y_dummy, epochs=5, batch_size=10)
```
```bash
---------------------------------------------------------------------------
UnexpectedTracerError Traceback (most recent call last)
Cell In[64], line 4
2 x_dummy = np.random.rand(100, 10)
3 y_dummy = np.random.randint(0, 2, size=(100,))
----> 4 model.fit(x_dummy, y_dummy, epochs=5, batch_size=10)
File /opt/conda/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py:122, in filter_traceback.<locals>.error_handler(*args, **kwargs)
119 filtered_tb = _process_traceback_frames(e.__traceback__)
120 # To get the full stack trace, call:
121 # `keras.config.disable_traceback_filtering()`
--> 122 raise e.with_traceback(filtered_tb) from None
123 finally:
124 del filtered_tb
[... skipping hidden 15 frame]
File /opt/conda/lib/python3.10/site-packages/jax/_src/interpreters/partial_eval.py:1720, in DynamicJaxprTracer._assert_live(self)
1718 def _assert_live(self) -> None:
1719 if not self._trace.main.jaxpr_stack: # type: ignore
-> 1720 raise core.escaped_tracer_error(self, None)
UnexpectedTracerError: Encountered an unexpected tracer. A function transformed by JAX had a side effect, allowing for a reference to an intermediate value with type uint32[2] wrapped in a DynamicJaxprTracer to escape the scope of the transformation.
JAX transformations require that functions explicitly return their outputs, and disallow saving intermediate values to global state.
The function being traced when the value leaked was <lambda> at /tmp/ipykernel_34/1644797871.py:17 traced for cond.
------------------------------
The leaked intermediate value was created on line /tmp/ipykernel_34/1644797871.py:17 (<lambda>).
------------------------------
When the value was created, the final 5 stack frames (most recent last) excluding JAX-internal frames were:
------------------------------
/tmp/ipykernel_34/1281975580.py:4 (<module>)
/tmp/ipykernel_34/1644797871.py:15 (call)
/tmp/ipykernel_34/1644797871.py:17 (<lambda>)
------------------------------
To catch the leak earlier, try setting the environment variable JAX_CHECK_TRACER_LEAKS or using the `jax.checking_leaks` context manager.
See https://jax.readthedocs.io/en/latest/errors.html#jax.errors.UnexpectedTracerError
```
|
closed
|
2025-01-04T19:51:32Z
|
2025-01-08T07:50:31Z
|
https://github.com/keras-team/keras/issues/20724
|
[
"stat:awaiting response from contributor",
"type:Bug",
"backend:jax"
] |
innat
| 3
|
httpie/cli
|
api
| 846
|
What tool was used to create the animated gif?
|
I realize this is off topic / not a real issue, but what tool was used to create the animated gif on the home page? cc: @loranallensmith
|
closed
|
2020-02-01T05:46:27Z
|
2020-04-13T16:08:28Z
|
https://github.com/httpie/cli/issues/846
|
[] |
martindsouza
| 1
|
deezer/spleeter
|
deep-learning
| 632
|
UnsatisfiableError: The following specifications were found to be incompatible with the existing python installation in your environment:[Discussion] your question
|
<!-- Please respect the title [Discussion] tag. -->
Hi, I am upgrading the version of tensorflow from 1.14.0 to a value greater than or equal to 2.2 in Anaconda navigator's environment "tsa_course", but I am getting this error:
UnsatisfiableError: The following specifications were found to be incompatible with the existing python installation in your environment
argon2-cffi -> python[version='>=2.7,<2.8.0a0|>=3.5,<3.6.0a0']
bleach -> python[version='>=3.8,<3.9.0a0|>=3.9,<3.10.0a0']
grpcio -> python[version='>=2.7,<2.8.0a0']
h5py -> python[version='<3']
html5lib=1.0.1 -> python[version='>=3.8,<3.9.0a0|>=3.9,<3.10.0a0']
And the list goes on....
Current system: Python 3.7.2, Windows 10, Cython 0.29.23
Can anyone help me update the version of tensorflow in my tsa_course environment?
|
closed
|
2021-06-21T12:53:24Z
|
2021-07-16T09:26:19Z
|
https://github.com/deezer/spleeter/issues/632
|
[
"question"
] |
surajk150741
| 1
|
christabor/flask_jsondash
|
plotly
| 115
|
Image type should not resize width in grid mode
|
If the layout is grid, it should not set the width to 100% as this guarantees the aspect ratio will almost never be exactly right.
If in grid mode, width should just be auto, and will automatically be filled in based on the given height.
|
closed
|
2017-05-17T21:59:58Z
|
2018-03-22T09:20:51Z
|
https://github.com/christabor/flask_jsondash/issues/115
|
[
"bug"
] |
christabor
| 2
|
statsmodels/statsmodels
|
data-science
| 8,921
|
TST/Maint: test failures in MICE, pandas compat
|
most likely problem with pandas compatibility
```
FAILED statsmodels/imputation/tests/test_mice.py::TestMICEData::test_default
FAILED statsmodels/imputation/tests/test_mice.py::TestMICEData::test_pertmeth
FAILED statsmodels/imputation/tests/test_mice.py::TestMICEData::test_set_imputer
```
test failure are in cycle_order
```
> assert_equal(imp_data._cycle_order, ['x5', 'x3', 'x4', 'y', 'x2', 'x1'])
E AssertionError:
E Items are not equal:
E item=1
E
E ACTUAL: 'x4'
E DESIRED: 'x3'
```
pandas version: pandas-2.0.2
|
open
|
2023-06-20T21:00:32Z
|
2023-06-20T21:43:20Z
|
https://github.com/statsmodels/statsmodels/issues/8921
|
[
"type-bug",
"comp-imputation",
"maintenance",
"backport"
] |
josef-pkt
| 2
|
autogluon/autogluon
|
computer-vision
| 4,841
|
[BUG] References to Python 3.8 in workflow files may break builds
|
**Describe the bug**
Some GitHub workflow configuration files still reference Python 3.8, which is no longer supported by AutoGluon. These outdated references may lead to errors during the build process or unintended issues in the CI/CD pipeline.
The following files and lines contain references to Python 3.8:
https://github.com/autogluon/autogluon/blob/f1bd5f42b2da0099c8d7319f38f811127446d9af/.github/workflows/pythonpublish.yml#L23
https://github.com/autogluon/autogluon/blob/f1bd5f42b2da0099c8d7319f38f811127446d9af/.github/workflows/pythonpublish_testpypi.yml#L19
https://github.com/autogluon/autogluon/blob/f1bd5f42b2da0099c8d7319f38f811127446d9af/.github/workflows/pypi_release.yml#L22
**Proposed solution**
Update the workflow files to reference supported Python versions only (e.g., 3.9+). Additionally, review the affected files to ensure all configurations are up-to-date with AutoGluon's current requirements.
|
closed
|
2025-01-26T10:06:07Z
|
2025-01-29T20:13:24Z
|
https://github.com/autogluon/autogluon/issues/4841
|
[
"code cleanup",
"Needs Triage"
] |
celestinoxp
| 0
|
ijl/orjson
|
numpy
| 203
|
Adding OPT_TIMESTAMP
|
Is the option to serialize/deserialize datetimes as POSIX timestamps somewhere in the roadmap? I'm considering implementing it, although pyo3's PyDateTime_CAPI does not implement it.
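For context, a hedged sketch of one possible workaround today, assuming `OPT_PASSTHROUGH_DATETIME` fits the use case (datetimes are handed to `default`, which can emit POSIX timestamps):

```python
from datetime import datetime, timezone

import orjson

def default(obj):
    if isinstance(obj, datetime):
        return obj.timestamp()
    raise TypeError

data = {"ts": datetime(2021, 9, 4, tzinfo=timezone.utc)}
print(orjson.dumps(data, option=orjson.OPT_PASSTHROUGH_DATETIME, default=default))
```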
|
closed
|
2021-09-04T21:46:30Z
|
2022-01-21T16:30:41Z
|
https://github.com/ijl/orjson/issues/203
|
[] |
Mifour
| 1
|
wkentaro/labelme
|
computer-vision
| 902
|
[Feature] save the json quietly
|
**Is your feature request related to a problem? Please describe.**
I'm always frustrated when labeling a lot of images: the rhythm is constantly interrupted by the `save` operations.
**Describe the solution you'd like**
As far as I know, the explicit `save` is not necessary, especially in the `open-dir` scenario. It would be better if the saving operation were done quietly in the background.
|
closed
|
2021-08-03T03:34:21Z
|
2021-11-10T13:42:28Z
|
https://github.com/wkentaro/labelme/issues/902
|
[] |
sycophant-stone
| 1
|
FujiwaraChoki/MoneyPrinter
|
automation
| 63
|
GPT returned an unformatted response
|
This error happens randomly and stops the progress:
```bash
...
[*] GPT returned an unformatted response. Attempting to clean...
[-] Could not parse response.
[-] Error: the JSON object must be str, bytes or bytearray, not list
127.0.0.1 - - [08/Feb/2024 12:29:48] "POST /api/generate HTTP/1.1" 200 -
127.0.0.1 - - [08/Feb/2024 12:30:50] "OPTIONS /api/generate HTTP/1.1" 200 -
...
```
|
closed
|
2024-02-08T05:42:38Z
|
2024-02-08T05:45:51Z
|
https://github.com/FujiwaraChoki/MoneyPrinter/issues/63
|
[] |
raniaamina
| 0
|
matterport/Mask_RCNN
|
tensorflow
| 2,187
|
The "compute_ap" function uses which pattern? COCO or Pascal 2012?
|
The "**compute_ap**" function uses which standard to calculate Average Precision, Pascal VOC 2012 or COCO? I am confused by this because the way COCO calculates tests several thresholds, and in the function it uses only one, and the way Pascal calculates makes the 11 point interpolation, I cannot see any of these patterns applied to this "**compute_ap**" function , does anyone know which mathematical formula this function uses?
|
closed
|
2020-05-14T22:12:33Z
|
2021-03-22T03:49:30Z
|
https://github.com/matterport/Mask_RCNN/issues/2187
|
[] |
WillianaLeite
| 2
|
hbldh/bleak
|
asyncio
| 1,280
|
Got exception: OSError: [WinError -2147483629] The object has been closed
|
* bleak version: 0.20.1
* Python version: 3.10.8 and 3.11.2
* Operating System: Windows 10 Enterprise 22H2
* BlueZ version (`bluetoothctl -v`) in case of Linux:
### Description
I'm trying to figure out why I have a higher connect-success on one board and not another.
Same software on board, same software on PC.
I have two Silabs EFR32MG24 boards (efr32xg24 Explorer Kit, https://www.silabs.com/development-tools/wireless/efr32xg24-explorer-kit?tab=overview) for which I have difficulties (on both) in obtaining a connection.
For both, I get an OSError-exception when calling BleakClient.connect():
OSError: [WinError -2147483629] The object has been closed
However, for one board, I get much fewer of these exceptions, and for those I get, the verbose output from Bleak is somewhat different.
### What I Did
This is my simple application - it tries to connect and disconnect again and again:
```python
import asyncio
import logging
import sys
from bleak import BleakClient
# address = "34:25:B4:A0:B8:0F" # The good board
address = '34:25:B4:A0:B1:CB' # The bad board
logger = logging.getLogger(__name__)
async def main(address):
connect_success_count = 0
connect_fail_count = 0
while True:
logger.info('Stats: %u/%u success', connect_success_count, (connect_success_count+connect_fail_count) )
logger.info("Connecting...")
client = BleakClient(address)
try:
await client.connect()
logger.info(f"Connected: {client.is_connected}")
connect_success_count += 1
except Exception as e:
logger.warning('Got exception: %s: %s', type(e).__name__, str(e))
connect_fail_count += 1
finally:
try:
await client.disconnect()
except Exception as e:
logger.warning('Got exception during disconnect: %s: %s', type(e).__name__, str(e))
logger.info('Disconnected...')
if __name__ == "__main__":
LOGFORMAT=' %(asctime)s.%(msecs)03d [%(thread)05d] %(name).8s:\t[%(levelname).1s] %(message)s'
logging.basicConfig(level=logging.DEBUG, format=LOGFORMAT, datefmt='%H:%M:%S')
logger.info('Main running - Python %s', sys.version)
asyncio.run(main(address))
```
Running this against the "good board" (34:25:B4:A0:B8:0F) I get a connect success of ~95%:
`15:31:53.950 [22312] __main__: [I] Stats: 119/125 success`
while the same test on the "bad board" (34:25:B4:A0:B1:CB) I get a connect success of ~73%:
`15:43:15.176 [12684] __main__: [I] Stats: 33/45 success`
### Logs
On the good board, there is one noticeable difference in the logs - there is always a series of ACTIVE->CLOSED->ACTIVE:
```
<_bleak_winrt_Windows_Devices_Bluetooth.BluetoothDeviceId object at 0x0000018DC2D62C90>, error: BluetoothError.SUCCESS, status: GattSessionStatus.ACTIVE
<_bleak_winrt_Windows_Devices_Bluetooth.BluetoothDeviceId object at 0x0000018DC2D62990>, error: BluetoothError.SUCCESS, status: GattSessionStatus.CLOSED
<_bleak_winrt_Windows_Devices_Bluetooth.BluetoothDeviceId object at 0x0000018DC2D62770>, error: BluetoothError.SUCCESS, status: GattSessionStatus.ACTIVE
```
E.g. as here:
```
15:27:15.659 [22312] __main__: [I] Connecting...
15:27:15.685 [22312] bleak.ba: [D] Received EC:C5:7F:35:6F:03: BLE-Device-356F03.
15:27:15.685 [22312] bleak.ba: [D] Received EC:C5:7F:35:6F:03: .
15:27:16.265 [22312] bleak.ba: [D] Received 6F:00:76:6E:E2:47: .
15:27:16.266 [22312] bleak.ba: [D] Received 6F:00:76:6E:E2:47: .
15:27:16.395 [22312] bleak.ba: [D] Received 34:25:B4:A0:B8:0F: Empty Example.
15:27:16.396 [22312] bleak.ba: [D] Received 34:25:B4:A0:B8:0F: .
15:27:16.397 [22312] bleak.ba: [D] 3 devices found. Watcher status: 3.
15:27:16.398 [22312] bleak.ba: [D] Connecting to BLE device @ 34:25:B4:A0:B8:0F
15:27:16.412 [22312] bleak.ba: [D] getting services (service_cache_mode=None, cache_mode=None)...
15:27:16.412 [22312] bleak.ba: [D] calling get_gatt_services_async
15:27:16.713 [04996] bleak.ba: [D] session_status_changed_event_handler: id: <_bleak_winrt_Windows_Devices_Bluetooth.BluetoothDeviceId object at 0x0000018DC2D62C90>, error: BluetoothError.SUCCESS, status: GattSessionStatus.ACTIVE
15:27:16.751 [33232] bleak.ba: [D] session_status_changed_event_handler: id: <_bleak_winrt_Windows_Devices_Bluetooth.BluetoothDeviceId object at 0x0000018DC2D62990>, error: BluetoothError.SUCCESS, status: GattSessionStatus.CLOSED
15:27:16.751 [22312] bleak.ba: [D] closing requester
15:27:17.029 [33232] bleak.ba: [D] session_status_changed_event_handler: id: <_bleak_winrt_Windows_Devices_Bluetooth.BluetoothDeviceId object at 0x0000018DC2D62770>, error: BluetoothError.SUCCESS, status: GattSessionStatus.ACTIVE
15:27:17.029 [04980] bleak.ba: [D] max_pdu_size_changed_handler: 247
15:27:17.029 [22312] bleak.ba: [D] closing session
15:27:17.031 [22312] __main__: [W] Got exception: OSError: [WinError -2147483629] The object has been closed
15:27:17.031 [22312] bleak.ba: [D] Disconnecting from BLE device...
15:27:17.031 [22312] __main__: [I] Disconnected...
```
Whereas the bad board never exhibits this (the last ACTIVE is omitted):
```
15:42:23.298 [12684] __main__: [I] Connecting...
15:42:23.549 [12684] bleak.ba: [D] Received 6D:37:84:47:AF:1F: .
15:42:23.550 [12684] bleak.ba: [D] Received 6D:37:84:47:AF:1F: .
15:42:24.036 [12684] bleak.ba: [D] Received 46:73:7C:16:F7:55: .
15:42:24.036 [12684] bleak.ba: [D] Received 46:73:7C:16:F7:55: .
15:42:24.038 [12684] bleak.ba: [D] Received 34:25:B4:A0:B1:CB: Empty Example.
15:42:24.039 [12684] bleak.ba: [D] Received 34:25:B4:A0:B1:CB: .
15:42:24.040 [12684] bleak.ba: [D] 3 devices found. Watcher status: 3.
15:42:24.040 [12684] bleak.ba: [D] Connecting to BLE device @ 34:25:B4:A0:B1:CB
15:42:24.054 [12684] bleak.ba: [D] getting services (service_cache_mode=None, cache_mode=None)...
15:42:24.055 [12684] bleak.ba: [D] calling get_gatt_services_async
15:42:24.354 [29836] bleak.ba: [D] max_pdu_size_changed_handler: 247
15:42:24.354 [22268] bleak.ba: [D] session_status_changed_event_handler: id: <_bleak_winrt_Windows_Devices_Bluetooth.BluetoothDeviceId object at 0x0000017515004FF0>, error: BluetoothError.SUCCESS, status: GattSessionStatus.ACTIVE
15:42:24.653 [12684] bleak.ba: [D] returned from get_gatt_services_async
15:42:24.654 [12684] bleak.ba: [D] calling get_characteristics_async
15:42:24.699 [22268] bleak.ba: [D] 34:25:B4:A0:B1:CB: services changed
15:42:24.751 [22268] bleak.ba: [D] 34:25:B4:A0:B1:CB: services changed
15:42:24.760 [12684] bleak.ba: [D] returned from get_characteristics_async
15:42:24.761 [12684] bleak.ba: [D] calling get_descriptors_async
15:42:24.764 [12684] bleak.ba: [D] returned from get_descriptors_async
15:42:24.765 [12684] bleak.ba: [D] calling get_descriptors_async
15:42:24.767 [12684] bleak.ba: [D] returned from get_descriptors_async
15:42:24.768 [12684] bleak.ba: [D] calling get_descriptors_async
15:42:24.769 [12684] bleak.ba: [D] returned from get_descriptors_async
15:42:24.770 [12684] bleak.ba: [D] calling get_characteristics_async
15:42:24.800 [22268] bleak.ba: [D] 34:25:B4:A0:B1:CB: services changed
15:42:24.848 [22268] bleak.ba: [D] 34:25:B4:A0:B1:CB: services changed
15:42:24.855 [12684] bleak.ba: [D] returned from get_characteristics_async
15:42:24.856 [12684] bleak.ba: [D] calling get_descriptors_async
15:42:24.858 [12684] bleak.ba: [D] returned from get_descriptors_async
15:42:24.858 [12684] bleak.ba: [D] calling get_descriptors_async
15:42:24.860 [12684] bleak.ba: [D] returned from get_descriptors_async
15:42:24.860 [12684] bleak.ba: [D] calling get_characteristics_async
15:42:24.900 [22268] bleak.ba: [D] 34:25:B4:A0:B1:CB: services changed
15:42:24.942 [22268] bleak.ba: [D] 34:25:B4:A0:B1:CB: services changed
15:42:24.977 [12684] bleak.ba: [D] returned from get_characteristics_async
15:42:24.977 [12684] bleak.ba: [D] calling get_descriptors_async
15:42:24.979 [12684] bleak.ba: [D] returned from get_descriptors_async
15:42:24.980 [12684] bleak.ba: [D] calling get_descriptors_async
15:42:24.981 [12684] bleak.ba: [D] returned from get_descriptors_async
15:42:24.982 [12684] bleak.ba: [D] calling get_characteristics_async
15:42:25.021 [12684] bleak.ba: [D] returned from get_characteristics_async
15:42:25.021 [12684] bleak.ba: [D] calling get_descriptors_async
15:42:25.049 [12684] bleak.ba: [D] returned from get_descriptors_async
15:42:25.050 [12684] bleak.ba: [D] calling get_characteristics_async
15:42:25.098 [12684] bleak.ba: [D] returned from get_characteristics_async
15:42:25.099 [12684] bleak.ba: [D] calling get_descriptors_async
15:42:25.124 [12684] bleak.ba: [D] returned from get_descriptors_async
15:42:25.125 [12684] bleak.ba: [D] calling get_descriptors_async
15:42:25.155 [12684] bleak.ba: [D] returned from get_descriptors_async
15:42:25.156 [12684] bleak.ba: [D] calling get_descriptors_async
15:42:25.185 [12684] bleak.ba: [D] returned from get_descriptors_async
15:42:25.185 [12684] bleak.ba: [D] calling get_descriptors_async
15:42:25.230 [12684] bleak.ba: [D] returned from get_descriptors_async
15:42:25.231 [12684] bleak.ba: [D] calling get_descriptors_async
15:42:25.260 [12684] bleak.ba: [D] returned from get_descriptors_async
15:42:25.261 [12684] bleak.ba: [D] calling get_descriptors_async
15:42:25.289 [12684] bleak.ba: [D] returned from get_descriptors_async
15:42:25.290 [12684] bleak.ba: [D] calling get_descriptors_async
15:42:25.320 [12684] bleak.ba: [D] returned from get_descriptors_async
15:42:25.321 [12684] bleak.ba: [D] calling get_descriptors_async
15:42:25.350 [12684] bleak.ba: [D] returned from get_descriptors_async
15:42:25.351 [12684] bleak.ba: [D] calling get_descriptors_async
15:42:25.387 [12684] bleak.ba: [D] returned from get_descriptors_async
15:42:25.388 [12684] bleak.ba: [D] calling get_descriptors_async
15:42:25.417 [12684] bleak.ba: [D] returned from get_descriptors_async
15:42:25.417 [12684] bleak.ba: [D] calling get_characteristics_async
15:42:25.456 [12684] bleak.ba: [D] returned from get_characteristics_async
15:42:25.456 [12684] bleak.ba: [D] calling get_descriptors_async
15:42:25.469 [12684] bleak.ba: [D] returned from get_descriptors_async
15:42:25.471 [12684] __main__: [I] Connected: True
15:42:25.472 [12684] bleak.ba: [D] Disconnecting from BLE device...
15:42:28.545 [09720] bleak.ba: [D] max_pdu_size_changed_handler: 23
15:42:28.546 [31160] bleak.ba: [D] session_status_changed_event_handler: id: <_bleak_winrt_Windows_Devices_Bluetooth.BluetoothDeviceId object at 0x00000175150053F0>, error: BluetoothError.SUCCESS, status: GattSessionStatus.CLOSED
15:42:28.546 [12684] bleak.ba: [D] closing requester
15:42:28.546 [12684] bleak.ba: [D] closing session
15:42:28.548 [12684] __main__: [I] Disconnected...
15:42:28.548 [12684] __main__: [I] Stats: 32/42 success
```
Any idea on how to deal/proceed from here?
|
closed
|
2023-04-14T14:15:23Z
|
2023-04-19T21:58:03Z
|
https://github.com/hbldh/bleak/issues/1280
|
[
"bug",
"Backend: WinRT"
] |
megholm
| 7
|
allenai/allennlp
|
pytorch
| 4,868
|
Make sure tensors have dimensions multiple of 8
|
`torch.amp` automatically enables mixed-precision training, and on the latest GPUs this is faster (according to the documentation) because of Tensor Cores. But the latter are (almost always) only engaged for fp16 calculations if the tensor dimensions are multiples of 8.
The embedding size and the batch size are easy to keep as multiples of 8 manually.
However, we may need to modify `TokenIndexer` to pad it automatically, similar to: https://github.com/YerevaNN/allennlp/commit/374acec5e62d6d74586b18a3d5bb2f9b5a169da4
We may also need to modify `max_tokens`-based samplers so that batch sizes are multiples of 8.
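A minimal sketch of the kind of padding helper this would need (the function below is hypothetical, not an existing AllenNLP API; a real change would live inside the `TokenIndexer` padding logic):
```python
import torch
import torch.nn.functional as F

def pad_last_dim_to_multiple(tensor: torch.Tensor, multiple: int = 8, value: int = 0) -> torch.Tensor:
    """Pad the last dimension so its length becomes a multiple of `multiple`."""
    remainder = tensor.size(-1) % multiple
    if remainder == 0:
        return tensor
    return F.pad(tensor, (0, multiple - remainder), value=value)

token_ids = torch.randint(0, 100, (4, 13))                # batch of 4, sequence length 13
padded = pad_last_dim_to_multiple(token_ids, multiple=8)  # sequence length becomes 16
print(padded.shape)                                       # torch.Size([4, 16])
```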
|
closed
|
2020-12-16T00:31:26Z
|
2021-02-22T19:11:02Z
|
https://github.com/allenai/allennlp/issues/4868
|
[
"Contributions welcome",
"Feature request"
] |
mahnerak
| 2
|
jina-ai/serve
|
machine-learning
| 5,539
|
chore: draft release note v3.13.1
|
# Release Note (3.13.1)
This release contains 3 bug fixes and 1 documentation improvement.
## 🐞 Bug Fixes
### Support Gateway with multiple protocols for Kubernetes export ([#5532](https://github.com/jina-ai/jina/pull/5532))
You can now export Flows with multiple protocols to Kubernetes. Previously this would cause an error.
```python
flow = Flow().config_gateway(protocol=['http', 'grpc'])
flow.to_kubernetes_yaml('k8s_flow_folder')
```
### Fix Python 3.11 support ([#5529](https://github.com/jina-ai/jina/pull/5529))
It was previously impossible to install Jina with Python 3.11 due to a `grpcio` dependency problem. `grpcio` added support for Python 3.11 only with version 1.49.0, [causing potential problems when used by Jina and other projects](https://github.com/grpc/grpc/issues/30303).
In this release `grpcio>=1.49.0` is installed alongside Jina when using Python 3.11. However, be aware of potential problems related to [grpc hanging](https://github.com/grpc/grpc/issues/30843).
### Unary RPC from Client respects `results_in_order` ([#5513](https://github.com/jina-ai/jina/pull/5513))
In prior releases, calling the `post` method of a client with `grpc` and using `stream=False` did not respect the `results_in_order` parameter and results were always returned in order:
```python
# this wrongly returns results in order
c = Client(protocol='grpc')
c.post(on='/', inputs=DocumentArray.empty(10000), stream=False, results_in_order=False)
```
Also this implied that using the Client with `asyncio=True` and `stream=False` in the post call would return results in the order that they were returned by the Flow, rather than respecting the input order:
```python
# this wrongly returns results in order
c = Client(protocol='grpc', asyncio=True)
async for resp in c.post(on='/', inputs=DocumentArray.empty(10000), stream=False, results_in_order=False):
    print(resp)
```
This release fixes the ordering bug.
## 📗 Documentation Improvements
- Document inheritance of arguments from Flow API to Executors and Gateway ([#5535](https://github.com/jina-ai/jina/pull/5535))
## 🤘 Contributors
We would like to thank all contributors to this release:
- AlaeddineAbdessalem ([@alaeddine-13](https://github.com/alaeddine-13))
- Joan Fontanals ([@JoanFM](https://github.com/JoanFM))
- Jackmin801 ([@Jackmin801](https://github.com/Jackmin801))
- Anne Yang ([@AnneYang720](https://github.com/AnneYang720))
|
closed
|
2022-12-20T10:47:35Z
|
2022-12-22T20:07:31Z
|
https://github.com/jina-ai/serve/issues/5539
|
[] |
alexcg1
| 0
|
JaidedAI/EasyOCR
|
machine-learning
| 1,148
|
Requesting Training Parameters and Data Generation Methods For Available Models
|
Hi, I'm humbly requesting the training parameters of the available language models, to improve my knowledge of EasyOCR.
I have a few questions for the developers.
1. How many images are used for one language? (I used up to 1M images with the English dict, but the predictions are bad.)
2. What is the learning rate for the existing models? (The trainer suggested 1, but I used 0.01.)
3. What methods are you using for text generation? I used the given generator, but on real-world data the predictions are worse.
|
open
|
2023-10-03T00:15:26Z
|
2023-10-03T00:15:26Z
|
https://github.com/JaidedAI/EasyOCR/issues/1148
|
[] |
yasaslive
| 0
|
recommenders-team/recommenders
|
deep-learning
| 2,073
|
[ASK] Stopped the support of new Tensorflow versions >= 2.16
|
### Description
<!--- Describe your general ask in detail -->
We had so many problems with TensorFlow, in terms of security, breaking changes, etc., that the team decided to stop supporting TensorFlow versions >= 2.16.
Here are some of the issues we had:
- https://github.com/recommenders-team/recommenders/issues/2018
- https://github.com/recommenders-team/recommenders/issues/2072
- https://github.com/recommenders-team/recommenders/issues/2064
- https://github.com/recommenders-team/recommenders/pull/1565
- https://github.com/recommenders-team/recommenders/pull/2017
- https://github.com/recommenders-team/recommenders/issues/1883
- https://github.com/recommenders-team/recommenders/issues/1969
- https://github.com/recommenders-team/recommenders/issues/1915
- https://github.com/recommenders-team/recommenders/issues/1825
- https://github.com/recommenders-team/recommenders/issues/1560
- https://github.com/recommenders-team/recommenders/issues/1513
- https://github.com/recommenders-team/recommenders/issues/953
- https://github.com/recommenders-team/recommenders/issues/683
### Other Comments
If a developer wants to add new code with TF dependencies, please consider this situation.
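For illustration, a dependency pin along these lines (the package name and exact lower bound below are only an example) is what keeping new contributions below 2.16 would look like:
```python
# setup.py sketch: cap the TensorFlow requirement below 2.16
from setuptools import setup

setup(
    name="my-recommenders-extension",   # hypothetical package
    install_requires=[
        "tensorflow>=2.8.4,<2.16",      # stay below 2.16 per the decision above
    ],
)
```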
|
open
|
2024-03-19T06:13:12Z
|
2025-01-27T09:57:56Z
|
https://github.com/recommenders-team/recommenders/issues/2073
|
[] |
miguelgfierro
| 2
|
open-mmlab/mmdetection
|
pytorch
| 12,010
|
Grounding-Dino: BERT model frozen or not?
|
Hello!
Thanks for the great re-implementation of Grounding DINO. I am trying to understand your code.
In the [usage.md](https://github.com/open-mmlab/mmdetection/blob/main/configs/mm_grounding_dino/usage.md) file, you walk us through the cat dataset example. For this example, [this config](https://github.com/open-mmlab/mmdetection/blob/main/configs/mm_grounding_dino/grounding_dino_swin-t_finetune_8xb4_20e_cat.py) is used. This cat config file inherits from this [config file](https://github.com/open-mmlab/mmdetection/blob/main/configs/mm_grounding_dino/grounding_dino_swin-t_pretrain_obj365.py).
In the usage.md readme, you say "We did not train the language model, only the visual model."
For the visual backbone, this makes sense to me, since it is initialized with frozen_stages=-1, meaning no stage is frozen.
For the BERT model, though, I cannot follow, because it is initialized like this:

In the config file, we did not define anything about frozen stages. Also, when looking at the details of the BERT implementation [here](https://github.com/open-mmlab/mmdetection/blob/main/mmdet/models/language_models/bert.py), I cannot see any option to freeze/unfreeze the encoder. At no point in the code is `requires_grad` being set.
My question is: where did you implement the freezing of the BERT model? How can I freeze/unfreeze the language model in Grounding DINO?
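For reference, this is roughly what I would expect a manual freeze to look like; the `language_model` attribute name is only my assumption about how the detector exposes its BERT module, not a confirmed MMDetection API:
```python
import torch.nn as nn

def set_language_model_trainable(detector: nn.Module, trainable: bool) -> None:
    """Toggle gradients for the text encoder (assumed to be exposed as `detector.language_model`)."""
    for param in detector.language_model.parameters():
        param.requires_grad_(trainable)

# e.g. freeze the BERT branch so only the visual parts are fine-tuned:
# set_language_model_trainable(model, trainable=False)
```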
Thank you in advance!
|
open
|
2024-10-22T14:49:51Z
|
2024-11-27T10:04:32Z
|
https://github.com/open-mmlab/mmdetection/issues/12010
|
[] |
laurenzheidrich
| 2
|
junyanz/pytorch-CycleGAN-and-pix2pix
|
pytorch
| 726
|
How To upload input image at webpage demo?
|
The output image can be saved; however, an input image can't be uploaded.
My model requires complex input, and using the line tool is really difficult.
So is it possible to upload an image file as input?
I understand a little of Python, but nothing of JavaScript.
Maybe I shouldn't put this issue here, but I can't find a better place.
Thanks!

|
closed
|
2019-08-07T10:13:09Z
|
2022-02-10T10:04:58Z
|
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/726
|
[] |
RuikangSun
| 2
|
nolar/kopf
|
asyncio
| 474
|
[archival placeholder]
|
This is a placeholder for later issues/prs archival.
It is needed now to reserve the initial issue numbers before going with actual development (PRs), so that later these placeholders could be populated with actual archived issues & prs with proper intra-repo cross-linking preserved.
|
closed
|
2020-08-18T20:07:33Z
|
2020-08-18T20:07:34Z
|
https://github.com/nolar/kopf/issues/474
|
[
"archive"
] |
kopf-archiver[bot]
| 0
|
okken/pytest-check
|
pytest
| 84
|
Raises as a context manager: log failure when exception is not raised
|
Reference:
* pytest 7.1.1
* pytest_check 1.0.5
In following example:
```
from pytest_check import raises

class CustomException(Exception):
    pass

def test_context_manager_not_raised_1():
    with raises(CustomException):
        print("Custom Exception is not raised")

def test_context_manager_not_raised_2():
    with raises(AssertionError):
        print("Assertion Exception is not raised")
```
I expected both of these tests to pass, but both fail. E.g. the first one:

Please let me know if my understanding is right.
|
closed
|
2022-06-20T16:36:41Z
|
2022-06-21T05:20:31Z
|
https://github.com/okken/pytest-check/issues/84
|
[] |
alblasco
| 1
|
Tanuki/tanuki.py
|
pydantic
| 114
|
Optionally delegate classifiers to XGBoost for finetuning and inference
|
**Is your feature request related to a problem? Please describe.**
LLMs are extremely inefficient at classification. XGBoost is better if the data is available. We could use the aligned data from the LLM to train an XGBoost model, which would be much faster to run.
**Describe the solution you'd like**
When the output types denote a classification task (i.e. where the goal is to sample one type from a union of literal types, or an enum), we optionally distil the teacher model into a decision forest using the XGBoost library.
**Additional context**
We could represent student models as optional packages, sort of like drivers, that the user could install through PIP.
E.g `pip3 install tanuki.py[xgboost]`
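As a rough illustration of the distillation idea (the feature matrix and labels below are random stand-ins; a real integration would reuse Tanuki's aligned data and input embeddings):
```python
import numpy as np
from xgboost import XGBClassifier

# Stand-ins for aligned examples: embedded inputs plus the class chosen by the teacher LLM.
X = np.random.rand(200, 64)
y = np.random.randint(0, 3, size=200)

student = XGBClassifier(n_estimators=100, max_depth=4)
student.fit(X, y)
print(student.predict(X[:5]))  # fast local inference instead of an LLM call
```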
|
open
|
2023-12-04T01:48:04Z
|
2023-12-04T01:48:04Z
|
https://github.com/Tanuki/tanuki.py/issues/114
|
[
"enhancement"
] |
JackHopkins
| 0
|
bauerji/flask-pydantic
|
pydantic
| 58
|
Why doesn't the `make_json_response` function have a default value for the `by_alias` argument?
|
```python
def make_json_response(
    content: Union[BaseModel, Iterable[BaseModel]],
    status_code: int,
    by_alias: bool,
    exclude_none: bool = False,
    many: bool = False,
) -> Response:
    ...
    js = content.json(exclude_none=exclude_none, by_alias=by_alias)
    ...
```
The `content.json` method has a default value for the `by_alias` argument.
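For comparison, a small sketch of what the signature could look like with a default that mirrors `BaseModel.json` (this is only an illustration of the request, not the library's current code):
```python
from typing import Iterable, Union

from flask import Response
from pydantic import BaseModel

def make_json_response(
    content: Union[BaseModel, Iterable[BaseModel]],
    status_code: int,
    by_alias: bool = False,  # same default as pydantic's BaseModel.json
    exclude_none: bool = False,
    many: bool = False,
) -> Response:
    if many:
        js = "[" + ", ".join(m.json(exclude_none=exclude_none, by_alias=by_alias) for m in content) + "]"
    else:
        js = content.json(exclude_none=exclude_none, by_alias=by_alias)
    return Response(js, status=status_code, mimetype="application/json")
```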
|
open
|
2022-09-20T06:44:17Z
|
2022-09-20T06:44:17Z
|
https://github.com/bauerji/flask-pydantic/issues/58
|
[] |
NiKuma0
| 0
|
miguelgrinberg/Flask-SocketIO
|
flask
| 1,347
|
400 bad request when gunicorn worker class type is set to "gthread"
|
I am creating an ML app. I'm using gunicorn with no NGINX.
The worker class is set to gthread, since there are a lot of CPU bound tasks.
When I use Flask-SocketIO with the worker type "gevent", it works fine irrespective of the number of workers (1, 2, 4).
But when I change it to "gthread", it gives a 400 bad request error.
**Gunicorn configuration:**
```python
# import multiprocessing
bind = "0.0.0.0:8081"
workers = 1
#threads = 10
timeout = 10000000
keepalive = 24 * 60 * 60  # 1 day
#keyfile = "staPrivateKey.key"
#certfile = "staCertificate.crt"
worker_class = "gthread"
```
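For reference, the configuration that does work for me with the gevent worker looks like this (gunicorn config files are plain Python):
```python
# gunicorn config that works with the gevent worker
bind = "0.0.0.0:8081"
workers = 1
worker_class = "gevent"
timeout = 10000000
keepalive = 24 * 60 * 60  # 1 day
```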
Any help on this, please.
|
closed
|
2020-08-04T11:59:33Z
|
2020-08-05T07:55:32Z
|
https://github.com/miguelgrinberg/Flask-SocketIO/issues/1347
|
[
"question"
] |
shadylpstan
| 4
|
reiinakano/xcessiv
|
scikit-learn
| 39
|
Valid values for metric to optimise in bayesian optimisation?
|
Is there a list of valid metric_to_optimise for Bayesian Optimisation?
I am using sklearn mean_squared_regression for my base learner, but when I enter that into the Bayesian Optimisation menu under metric_to_optimise I get:
assert module.metric_to_optimize in automated_run.base_learner_origin.metric_generators
AssertionError
|
closed
|
2017-06-17T16:49:07Z
|
2017-06-17T16:54:10Z
|
https://github.com/reiinakano/xcessiv/issues/39
|
[
"question"
] |
Data-drone
| 1
|
biolab/orange3
|
pandas
| 6,203
|
Large integers should be read as strings, not as numbers
|
**What's wrong?**
I have a list of user_IDs (often 17-digit numbers, but also shorter ones). After data import through the Import Document widget, some of these "longer" user IDs get changed relative to the original IDs listed in the imported file.
Rendition in Orange :

| Orange | TSV data imported | Timestamp | Note |
| --- | --- | --- | --- |
| 10213407285517228 | 10213407285517229 | 2017-06-25T17:37:17+0000 | in Orange: DIFFERS from the imported data |
| 10155206846088648 | 10155206846088649 | 2017-06-25T17:45:31+0000 | in Orange: DIFFERS from the imported data |
| 2049903568368818 | 2049903568368818 | 2014-01-08T09:03:21+0000 | in Orange: IDENTICAL to the imported data |
| 10211717072377002 | 10211717072377002 | 2014-01-07T16:56:31+0000 | in Orange: IDENTICAL to the imported data |
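For context, this looks like ordinary double-precision rounding: integers above 2^53 cannot all be represented exactly as float64, and the IDs from the file round to exactly the values shown in Orange:
```python
# IDs above 2**53 lose precision when stored as float64
print(int(float(10213407285517229)))  # -> 10213407285517228 (what Orange displays)
print(int(float(10155206846088649)))  # -> 10155206846088648 (what Orange displays)
print(int(float(2049903568368818)))   # -> 2049903568368818  (below 2**53, unchanged)
```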
**What's your environment?**
- Operating system: Windows
- Orange version: 3.33.0
|
open
|
2022-11-16T09:57:24Z
|
2023-01-10T10:52:40Z
|
https://github.com/biolab/orange3/issues/6203
|
[
"bug",
"meal"
] |
kristinapdm
| 5
|
keras-team/keras
|
machine-learning
| 21,009
|
Keras FeatureSpace error in environment without TensorFlow
|
I am trying to use a Keras feature space during inference to create a data window.
```python
input_window = input_featurespace({'temp': [0, 0, 0, 0, 0, 0, 0, 0], 'proc': [0, 0, 0, 0, 0, 0, 0, 0], 'dsp_temp': [0, 0, 0, 0, 0, 0, 0, 0]})
```
However, I am getting the following error:
```
File "/usr/local/lib/python3.12/site-packages/keras/src/layers/preprocessing/feature_space.py", line 709, in __call__
data = {key: self._convert_input(value) for key, value in data.items()}
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/keras/src/layers/preprocessing/feature_space.py", line 693, in _convert_input
if not isinstance(x, (tf.Tensor, tf.SparseTensor, tf.RaggedTensor)):
^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/keras/src/utils/module_utils.py", line 35, in __getattr__
self.initialize()
File "/usr/local/lib/python3.12/site-packages/keras/src/utils/module_utils.py", line 29, in initialize
raise ImportError(self.import_error_msg)
ImportError: This requires the tensorflow module. You can install it via `pip install tensorflow`
```
I understand that this is because TensorFlow has not been installed. However, since the inference device has storage constraints, I don't want to use TensorFlow in my inference environment. Is there any way to get FeatureSpace to work without TensorFlow?
|
open
|
2025-03-10T16:56:38Z
|
2025-03-20T06:59:28Z
|
https://github.com/keras-team/keras/issues/21009
|
[
"type:Bug"
] |
sibyjackgrove
| 10
|
joouha/euporie
|
jupyter
| 86
|
Wrap text around
|
How can I wrap text around?

|
closed
|
2023-07-19T13:27:41Z
|
2023-10-11T16:55:56Z
|
https://github.com/joouha/euporie/issues/86
|
[] |
ghost
| 2
|
numba/numba
|
numpy
| 9,129
|
Symbol not found error when calling a recursive function + cache from a jitclass
|
Hello.
Here is a weird issue I have seen; I checked whether anything similar had been reported, but didn't find it.
The code below calls a recursive function from a jitclass. That function is decorated with `@njit(cache=True)`. It runs fine the first time, but fails the second time:
```
(venv) best@ottokar:~/dev/splendor$ python3 numba_debug.py
All good !
(venv) best@ottokar:~/dev/splendor$ python3 numba_debug.py
**LLVM ERROR: Symbol not found: .numba.unresolved$_ZN8__main__10_recursiveB3v12B62c8tJTIcFKzyF2ILShI4CrgQElYZ5yRbdT9XqICn1Wk1gsBbBVCOlJ7CIJgA_3dEx**
Abandon
```
If I change to `cache=False` or if I call the recursive function outside of the class, that's fine. I am using numba 0.57.1 and python 3.11.2 on Linux.
Here is the code:
```
#!/usr/bin/env python3
import numpy as np
from numba import njit
import numba

spec = [
    ('misc', numba.int8[:, :]),
]

@numba.experimental.jitclass(spec)
class MyClass():
    def __init__(self):
        self.misc = np.zeros((2, 2), dtype=np.int8)

    def run(self):
        _recursive(0)
        print('All good !')

@njit(cache=True)
def _recursive(length):
    if length < 3:
        _recursive(length + 1)

myobject = MyClass()
myobject.run()
```
|
open
|
2023-08-11T15:41:36Z
|
2023-08-15T18:01:28Z
|
https://github.com/numba/numba/issues/9129
|
[
"jitclass",
"caching",
"bug - incorrect behavior"
] |
cestpasphoto
| 3
|
deezer/spleeter
|
tensorflow
| 390
|
[Bug] Can't use custom trained model?
|
<!-- PLEASE READ THIS CAREFULLY :
- Any issue which does not respect following template or lack of information will be considered as invalid and automatically closed
- First check FAQ from wiki to see if your problem is not already known
-->
## Description
How do I use a model I have just trained? I have tried passing the directory it's in via `-p ~/modelDirectory`.
Do I need to use spleeter validate first? Or have I missed a step? There isn't any documentation on how to actually use the model once it's built.
<!-- Give us a clear and concise description of the bug you are reporting. -->
## Step to reproduce
<!-- Indicates clearly steps to reproduce the behavior: -->
1. Open PowerShell in my directory with my config, model and dataset and type `spleeter separate -i 'F:\SpleetTest\Validate\Mixture\S05E02 - A War on Two Fronts\fullmix.wav' -o F:\SpleetTest\Configs\output -p F:\SpleetTest\Configs\pretrained_models\filmModel'
2. Run.
3. Got `PermissionError: [Errno 13] Permission denied: 'F:\\SpleetTest\\Configs\\pretrained_models\\filmModel'` error
## Output
```
Traceback (most recent call last):
File "c:\users\joe93\appdata\local\programs\python\python37\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "c:\users\joe93\appdata\local\programs\python\python37\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\joe93\AppData\Local\Programs\Python\Python37\Scripts\spleeter.exe\__main__.py", line 9, in <module>
File "c:\users\joe93\appdata\local\programs\python\python37\lib\site-packages\spleeter\__main__.py", line 54, in entrypoint
main(sys.argv)
File "c:\users\joe93\appdata\local\programs\python\python37\lib\site-packages\spleeter\__main__.py", line 45, in main
params = load_configuration(arguments.configuration)
File "c:\users\joe93\appdata\local\programs\python\python37\lib\site-packages\spleeter\utils\configuration.py", line 46, in load_configuration
with open(descriptor, 'r') as stream:
PermissionError: [Errno 13] Permission denied: 'F:\\SpleetTest\\Configs\\pretrained_models\\filmModel'
```
## Environment
<!-- Fill the following table -->
| | |
| ----------------- | ------------------------------- |
| OS | Windows 10 (fully updated) |
| Installation type | pip |
| RAM available | 16GB |
| Hardware spec | RTX2080 / Ryzen R2600 |
## Additional context
I have reinstalled Python and spleeter and retrained the model. Every time I get the same error [errno 13].
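For what it's worth, this is the Python-API equivalent of what I am trying to do; my guess is that `-p` (and `Separator`) expect the training JSON config rather than the checkpoint folder itself (the config filename below is invented):
```python
from spleeter.separator import Separator

# Guess: point the params descriptor at the training config JSON, whose "model_dir"
# entry references the trained checkpoint, instead of the checkpoint folder itself.
separator = Separator('F:/SpleetTest/Configs/filmModel_config.json')
separator.separate_to_file(
    'F:/SpleetTest/Validate/Mixture/S05E02 - A War on Two Fronts/fullmix.wav',
    'F:/SpleetTest/Configs/output',
)
```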
<!-- Add any other context about the problem here, references, cites, etc.. -->
|
closed
|
2020-05-22T14:23:55Z
|
2020-05-22T15:24:18Z
|
https://github.com/deezer/spleeter/issues/390
|
[
"bug",
"invalid"
] |
JavaShipped
| 2
|
ageitgey/face_recognition
|
python
| 761
|
ectangle documentation
|
* face_recognition version:
* Python version:
* Operating System:
### Description
Describe what you were trying to get done.
Tell us what happened, what went wrong, and what you expected to happen.
IMPORTANT: If your issue is related to a specific picture, include it so others can reproduce the issue.
### What I Did
```
Paste the command(s) you ran and the output.
If there was a crash, please include the traceback here.
```
|
open
|
2019-02-28T22:52:37Z
|
2019-02-28T22:52:37Z
|
https://github.com/ageitgey/face_recognition/issues/761
|
[] |
viniciusdoss
| 0
|
Neoteroi/BlackSheep
|
asyncio
| 352
|
Constructor of JWTBearerAuthentication doesn't have algorithms parameter
|
**Describe the bug**
The constructor of _JWTBearerAuthentication_ doesn't accept an _algorithms_ parameter and doesn't pass it to _JWTValidator_ [1]. At the same time, the docstring of the _JWTBearerAuthentication_ constructor contains a description of the _algorithms_ parameter.
1 - https://github.com/Neoteroi/BlackSheep/blob/c27b0f18337450ef69b04d474c091d324b61c184/blacksheep/server/authentication/jwt.py#L25
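To make the request concrete, this is the call the docstring implies should be possible (today the extra keyword is rejected by the constructor):
```python
from blacksheep.server.authentication.jwt import JWTBearerAuthentication

# Expected usage per the docstring; currently fails because `algorithms`
# is not accepted by __init__ and is never forwarded to JWTValidator.
auth = JWTBearerAuthentication(
    valid_audiences=["my-api"],
    valid_issuers=["https://example-issuer/"],
    algorithms=["RS256"],
)
```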
|
closed
|
2023-05-05T20:51:55Z
|
2023-05-11T21:18:10Z
|
https://github.com/Neoteroi/BlackSheep/issues/352
|
[] |
tyzhnenko
| 1
|
nteract/papermill
|
jupyter
| 211
|
HDFS File System support?
|
Hi @MSeal, I was wondering if there is any plan or way that we can support HDFS as well, along with storage systems like S3 and ADL? Please let me know your thoughts :)
|
closed
|
2018-09-19T16:20:26Z
|
2020-02-08T22:55:07Z
|
https://github.com/nteract/papermill/issues/211
|
[
"enhancement"
] |
harsham4026
| 6
|
huggingface/peft
|
pytorch
| 1,913
|
continue training model in transformers (after LoRA training)
|
### System Info
huggingface. transformers, peft, accelerate
### Who can help?
@BenjaminBossan
@younesbelkada
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [X] My own task or dataset (give details below)
### Reproduction
### Expected behavior
Hi, could someone please help me?
I have trained an LLM model of Llama3 using the AutoModelForSequenceClassification class. Before training I used LoraConfig and get_peft_model. Now I have saved the checkpoint and want to continue my training. I loaded the model from the checkpoint as follows:
```python
model = PeftModel.from_pretrained(base_model, path_to_checkpoint, is_trainable=True)
```
I then set the same Training arguments and Trainer and did
`trainer.train(resume_from_checkpoint="path_to_checkpoint")`
After this code I get the error:
```
object of type method has no len()
```
**My packages:**
- transformers==4.41.2
- Peft==0.4.0
Do you think it could be because I'm using an older version of the Peft package? I can't install a newer version at the moment due to a system error in the company.
Thank you.
|
closed
|
2024-07-08T10:16:15Z
|
2024-07-20T08:58:14Z
|
https://github.com/huggingface/peft/issues/1913
|
[] |
BigDataMLexplorer
| 2
|
d2l-ai/d2l-en
|
machine-learning
| 2,639
|
Please do not talk too much about the broadcast mechanism.
|
Regarding Chapters 2.1.4 and 2.3.7, there is too much talk about the broadcasting mechanism. It is like an implicit type-conversion rule, which is not a good programming style.
For 2.3.7, a better programming style is to call the functions supplied by `torch.nn.functional`:
```python
import torch
import torch.nn.functional as F
# 创建示例张量
data = torch.arange(20, dtype=torch.float32).reshape(5, 4)
# 使用 torch.nn.functional.normalize 进行 L2 归一化
normalized_data_l2 = F.normalize(data, p=2, dim=1)
# 使用 torch.nn.functional.normalize 进行 L1 归一化
normalized_data_l1 = F.normalize(data, p=1, dim=1)
print("Original data:")
print(data)
print("Normalized data (L2 norm):")
print(normalized_data_l2)
print("Normalized data (L1 norm):")
print(normalized_data_l1)
```
If there is really a need to use the broadcasting mechanism, I recommend first checking the broadcast output:
```
np.broadcast_arrays(x, y)
```
```
torch.broadcast_tensors(x, y)
```
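For instance, the explicit check makes the expansion visible before any arithmetic happens:
```python
import torch

x = torch.arange(3).reshape(3, 1)
y = torch.arange(2).reshape(1, 2)
bx, by = torch.broadcast_tensors(x, y)
print(bx.shape, by.shape)  # torch.Size([3, 2]) torch.Size([3, 2])
```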
This helps to check what the internal broadcast mechanism will output. The broadcast mechanism is an internal mechanism in NumPy designed for speeding up computations. However, encouraging the use of this style of code is not a good idea.
|
open
|
2025-03-21T08:10:28Z
|
2025-03-21T08:37:41Z
|
https://github.com/d2l-ai/d2l-en/issues/2639
|
[] |
a358003542
| 0
|
huggingface/transformers
|
tensorflow
| 36,576
|
Some methods in TrainerControl seem not to be utilized.
|
Looking at the callback code has caused me a great deal of confusion. It seems that this function has never been used. I'm not sure if I've missed something.
https://github.com/huggingface/transformers/blob/6966fa190172b48b2fb46fe4552a13b943e692cf/src/transformers/trainer_callback.py#L275
|
closed
|
2025-03-06T07:19:53Z
|
2025-03-13T16:21:17Z
|
https://github.com/huggingface/transformers/issues/36576
|
[] |
mst272
| 2
|
python-arq/arq
|
asyncio
| 192
|
Allow job to finish during SIGINT for graceful shutdown
|
I've deployed multiple workers in docker containers via swarm in our VM.
During a release, swarm will send a `SIGINT` (followed eventually by a `SIGKILL`) to bring down old images and bring up new ones.
During a `SIGINT`, instead of doing a `task.cancel()` and retrying, would it be possible to:
- Stop it from taking new jobs from the queue if it gets a `SIGINT`
- Try to finish what it has now & mark job as done if success
- Then shutdown on the `SIGKILL`
Our jobs are retryable, so the existing behavior of cancelling and retrying isn't a huge problem, but it can occasionally lead to scenarios that require us to clean up afterwards (the workers do a bit of fetching files from sources, manipulating them, moving them to other buckets, updating DBs, etc., so some artifacts can remain if a job is cancelled in the middle).
Looking at the worker source code, it looks like on both `SIGINT` and `SIGKILL` it does `handle_sig` (which just gets the tasks and cancels if not done).
Would it be possible to expose `handle_sig` so a user can specify custom handling on `SIGINT` (e.g. what I described above)?
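Roughly what I have in mind, as a sketch only (names such as `allow_pick_jobs` are assumptions about the worker internals, not a documented API):
```python
import signal
from arq.worker import Worker

class GracefulWorker(Worker):
    def handle_sig(self, signum: int) -> None:
        if signum == signal.SIGINT and not getattr(self, "_draining", False):
            # First SIGINT: stop picking up new jobs and let in-flight jobs finish.
            self._draining = True
            self.allow_pick_jobs = False  # assumed attribute gating new job pickup
            return
        # Second signal: fall back to the default cancel-and-retry behaviour.
        super().handle_sig(signum)
```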
Thanks for reading (and for the awesome lib!!)
|
closed
|
2020-05-26T16:25:01Z
|
2020-05-26T17:00:20Z
|
https://github.com/python-arq/arq/issues/192
|
[] |
u-ashish
| 2
|
junyanz/pytorch-CycleGAN-and-pix2pix
|
deep-learning
| 1,465
|
Help, infrared video colorization
|
At present, I want to colorize infrared video. Is there any good way to do this? (See also #1436.)
|
open
|
2022-07-22T08:04:11Z
|
2022-11-27T09:14:03Z
|
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1465
|
[] |
songyn95
| 1
|
JaidedAI/EasyOCR
|
pytorch
| 392
|
deep-text-recognition-benchmark usage
|
This is not an issue but rather a guide to test or train EasyOCR.
---
I have seen many issues about testing the model with [deep-text-recognition-benchmark](https://github.com/clovaai/deep-text-recognition-benchmark). So, here is the command for a quick demo. Any suggestion is welcome.
```
python3 demo.py --image_folder /path/to/images/folder/ \
--saved_model ~/.EasyOCR/model/latin.pth \
--Transformation None \
--FeatureExtraction ResNet \
--SequenceModeling BiLSTM \
--Prediction CTC \
--hidden_size 512 \
--character "$(cat characters.txt)"
```
where `characters.txt` contains
```
0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyzÀÁÂÃÄÅÆÇÈÉÊËÍÎÑÒÓÔÕÖØÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõöøùúûüýþÿąęĮįıŁłŒœŠšųŽž
```
See [model language](https://github.com/JaidedAI/EasyOCR/blob/cdef0b4ae3109b72e4610706089f17a457e576dc/easyocr/easyocr.py#L145) and `easyocr/config.py` for characters of other models.
|
closed
|
2021-03-11T13:21:22Z
|
2021-03-24T18:27:58Z
|
https://github.com/JaidedAI/EasyOCR/issues/392
|
[] |
burak-yildizoz
| 3
|
yeongpin/cursor-free-vip
|
automation
| 329
|
[Bug]: Registration failed
|
### Pre-submission checks
- [x] I understand that Issues are for feedback and problem solving, not a place for rants, and I will provide as much information as possible to help resolve the problem.
- [x] I have checked the pinned Issues and searched the existing [open Issues](https://github.com/yeongpin/cursor-free-vip/issues) and [closed Issues](https://github.com/yeongpin/cursor-free-vip/issues?q=is%3Aissue%20state%3Aclosed%20), and found no similar problem.
- [x] I have written a short and clear title so that developers can quickly identify the general problem when browsing the Issue list, rather than something like "a suggestion" or "it's stuck".
### Platform
Windows x32
### Version
1.7.3
### Error description
Password: BQMt2OpneR72
📝 First name: Tmelia
📝 Last name: Martin
🚀 Starting browser...
ℹ️ Launching browser
ℹ️ Visiting the email service website
⚠️ email.blocked_domains_error
ℹ️ email.local_blocked_domains_loaded
❌ Email creation error: {error}:
The element has no position or size.
Version: 4.1.0.17
❌ register.email_create_failed
### Relevant log output
```shell
```
### Additional information
This error has been occurring since version 1.7.2.
|
open
|
2025-03-20T11:11:47Z
|
2025-03-20T14:10:37Z
|
https://github.com/yeongpin/cursor-free-vip/issues/329
|
[
"bug"
] |
1812095643
| 5
|
numpy/numpy
|
numpy
| 27,764
|
ENH: Streamline and improve the origin and license documentation of third-party code bundled in wheels
|
### Proposed new feature or change:
The current wheel builds (as of 2.1.3) may contain not entirely correct license or origin information for bundled third-party components. As a result, it may be difficult to collect the missing information for the wheels, and one needs to go back to the sdist or a checkout to get a proper picture of the bundled third-party code, with correct, compliant license notices and actionable origin details.
- For instance, pocketfft is neither attributed nor referenced in the wheel, but is part of numpy/fft/_pocketfft_umath.cpython-310-x86_64-linux-gnu.so
- Or lapack-lite is missing its license, though we have a license for the full lapack used with openblas
These are just two examples; there are likely several other small pieces of incorrect, missing or inaccurate data, because numpy is big and it is hard to keep track of all of this.
The reason why this matters is that:
1. It is important to provide proper license notice and credits for all the code bundled
2. It is even more important to provide accurate origin information to support vulnerability management and reporting for vulnerabilities that may exist in the bundled code.
The proposed enhancement would consist of:
1. Running a detailed baseline scan to ensure that there is a clear record of every bit of third-party code bundled in the wheel
2. Updating the package(s) metadata to ensure that it is comprehensive
3. Automating step 2 in the CI to avoid any regression
PS: I maintain popular open source Python tools to do just that https://github.com/aboutcode-org/ and https://aboutcode.org/ and I can help with this enhancement!
|
open
|
2024-11-14T10:44:01Z
|
2024-11-16T08:29:49Z
|
https://github.com/numpy/numpy/issues/27764
|
[
"01 - Enhancement",
"component: distribution"
] |
pombredanne
| 5
|
Miserlou/Zappa
|
flask
| 1,238
|
Basic Flask application won't run once deployed with Zappa
|
<!--- Provide a general summary of the issue in the Title above -->
My basic flask application (whose sole job is to echo back any json it's sent) won't run once it is deployed. Every time I try to update my deployment, it updates/deploys no problem but when I visit the link I get either a 'forbidden' or 'internal server issue' message. The logs show the same repeating error: `Unable to import module 'handler': No module named 'werkzeug'`
Here is a link to my stack overflow question regarding this [issue](https://stackoverflow.com/questions/47146586/aws-lambda-with-zappa-fails-on-import-module-handler-no-module-named-werkze).
Here is the App.py file
```
from flask import Flask
from flask import request, jsonify, abort
import sys

app = Flask(__name__)

@app.route('/')
def home():
    return "hello world!"

@app.route('/task', methods=['POST'])
def echoBackJSON():
    # 1.) grab json from request
    json = request.get_json()
    # echo back the json
    return jsonify(json)

if __name__ == '__main__':
    app.run(debug=True)
```
## Context
<!--- Provide a more detailed introduction to the issue itself, and why you consider it to be a bug -->
I'm using Python 3.6
I'm using a Python virtual environment called `env`
The following error message dominates my entire zappa log:
`Unable to import module 'handler': No module named 'werkzeug'`
Sometimes a second error is raised on toml, a package which is automatically brought in as a dependency.
```
Could not find a version that satisfies the requirement toml==0.9.3 (from -r requirements.txt (line 23)) (from versions: 0.6.0, 0.6.5, 0.7.0, 0.7.1, 0.8.0, 0.8.1, 0.8.2, 0.9.0, 0.9.1, 0.9.2, 0.9.3.1)
No matching distribution found for toml==0.9.3 (from -r requirements.txt (line 23))
```
## Steps to Reproduce
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug include code to reproduce, if relevant -->
Here is a zip of the simple project. You can follow the README.txt or the instructions below to install, deploy and see the bug.
[FlaskZappa.zip](https://github.com/Miserlou/Zappa/files/1472630/FlaskZappa.zip)
Setup Steps:
```
1.) Create a local virtual environment
$ virtualenv env
2.) Activate your virtual environment with the .env file
$ source .env
3.) Install dependencies
$ pip install -r requirements.txt
4.) Run the application
$ flask run
5.) Update/deploy to dev
$ zappa update dev
6.) Check the logs
$ zappa tail dev
```
## Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Zappa version used: `zappa==0.45.1`
* Operating System and Python version: MacOS 10.12.6 and Python 3.6.2
## requirements.txt
```
argcomplete==1.9.2
base58==0.2.4
boto3==1.4.7
botocore==1.7.43
certifi==2017.11.5
chardet==3.0.4
click==6.7
docutils==0.14
durationpy==0.5
future==0.16.0
hjson==3.0.1
idna==2.6
jmespath==0.9.3
kappa==0.6.0
lambda-packages==0.19.0
placebo==0.8.1
python-dateutil==2.6.1
python-slugify==1.2.4
PyYAML==3.12
requests==2.18.4
s3transfer==0.1.11
six==1.11.0
toml==0.9.3
tqdm==4.19.1
troposphere==2.0.2
Unidecode==0.4.21
urllib3==1.22
Werkzeug==0.12
wsgi-request-logger==0.4.6
zappa==0.45.1
```
## zappa_settings.json
```
{
    "dev": {
        "app_function": "src.App.app",
        "aws_region": "us-east-1",
        "profile_name": "default",
        "project_name": "flaskserver",
        "runtime": "python3.6",
        "s3_bucket": "zappa-python",
        "slim_handler": true
    }
}
```
|
closed
|
2017-11-14T21:55:32Z
|
2018-02-24T03:31:19Z
|
https://github.com/Miserlou/Zappa/issues/1238
|
[
"non-bug"
] |
ndortega
| 4
|