| repo_name (string, 9–75 chars) | topic (string, 30 classes) | issue_number (int64, 1–203k) | title (string, 1–976 chars) | body (string, 0–254k chars) | state (string, 2 classes) | created_at (string, 20 chars) | updated_at (string, 20 chars) | url (string, 38–105 chars) | labels (list, 0–9 items) | user_login (string, 1–39 chars) | comments_count (int64, 0–452) |
|---|---|---|---|---|---|---|---|---|---|---|---|
koxudaxi/datamodel-code-generator
|
fastapi
| 1,496
|
Optional not generated for nullable array items defined according to OpenAPI 3.0 spec, despite using --strict-nullable
|
Items in an array are not generated as `Optional`, despite adding the `nullable: true` property.
**To Reproduce**
Example schema:
```yaml
# test.yml
koxudaxi/datamodel-code-generator
|
fastapi
| 1,496
|
Optional not generated for nullable array items defined according to OpenAPI 3.0 spec, despite using --strict-nullable
|
Items in an array are not generated as `Optional`, despite adding the `nullable: true` property.
**To Reproduce**
Example schema:
```yaml
# test.yml
properties:
  list1:
    type: array
    items:
      type: string
      nullable: true
required:
  - list1
```
Command line used:
```
datamodel-codegen --input test.yml --output model.py --strict-nullable
```
This generates a datamodel in which the list items are not nullable:
```py
class Model(BaseModel):
    list1: List[str]
```
**Expected behavior**
The definition for a nullable string is correct according to the OpenAPI 3.0 spec.
```yaml
type: string
nullable: true
```
So I would expect Optional array items to be generated:
```py
class Model(BaseModel):
    list1: List[Optional[str]]
```
**Version:**
- OS: macOS 13.4.1
- Python version: 3.11
- datamodel-code-generator version: 0.21.4
|
closed
|
2023-08-18T05:50:09Z
|
2023-12-04T15:11:57Z
|
https://github.com/koxudaxi/datamodel-code-generator/issues/1496
|
[
"bug"
] |
tfausten
| 1
|
huggingface/transformers
|
pytorch
| 36,109
|
Wrong corners format of bboxes in function center_to_corners_format (image_transforms.py)
|
In [image_transforms.py](https://github.com/huggingface/transformers/blob/main/src/transformers/image_transforms.py), function `center_to_corners_format` describes the corners format as "_corners format: contains the coordinates for the top-left and bottom-right corners of the box (top_left_x, top_left_y, bottom_right_x, bottom_right_y)_". The related functions such as `_corners_to_center_format_torch` compute the center format in the following way:
```
def _corners_to_center_format_torch(bboxes_corners: "torch.Tensor") -> "torch.Tensor":
    top_left_x, top_left_y, bottom_right_x, bottom_right_y = bboxes_corners.unbind(-1)
    b = [
        (top_left_x + bottom_right_x) / 2,  # center x
        (top_left_y + bottom_right_y) / 2,  # center y
        (bottom_right_x - top_left_x),  # width
        (bottom_right_y - top_left_y),  # height
    ]
    return torch.stack(b, dim=-1)
```
The problem is that the height of the bounding box would then have a negative value: this code effectively computes the center format from the corners format (bottom_left_x, bottom_left_y, top_right_x, top_right_y). Either the documented format or the code should be changed.
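For concreteness, a small numeric check of the quoted conversion (the box values are hypothetical; only torch is assumed). Whether the computed height comes out negative depends on whether y grows upward or downward in the coordinate convention:
```python
import torch

# Hypothetical box: top-left (10, 20), bottom-right (30, 60) in pixel coordinates.
corners = torch.tensor([[10.0, 20.0, 30.0, 60.0]])
top_left_x, top_left_y, bottom_right_x, bottom_right_y = corners.unbind(-1)
center = torch.stack([
    (top_left_x + bottom_right_x) / 2,  # center x -> 20.0
    (top_left_y + bottom_right_y) / 2,  # center y -> 40.0
    bottom_right_x - top_left_x,        # width    -> 20.0
    bottom_right_y - top_left_y,        # height   -> 40.0 (positive when y grows downward)
], dim=-1)
print(center)  # tensor([[20., 40., 20., 40.]])
```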
|
closed
|
2025-02-10T09:30:54Z
|
2025-02-12T14:31:14Z
|
https://github.com/huggingface/transformers/issues/36109
|
[] |
majakolar
| 3
|
davidteather/TikTok-Api
|
api
| 479
|
How to run the API in FastAPI / in a testing environment
|
Hi there.
We are trying to add some testing around the API interactions. Namely `getUser`.
We are doing something like this within a FastAPI endpoint.
```python
tik_tok_api = TikTokApi.get_instance()
return tik_tok_api.getUser(handle)
```
This works fine when making the call via the `requests` library. However, when running from the FastAPI docs page or the FastAPI test client, the response is `Can only run one Playwright at a time.`
Any ideas?
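One direction we are considering (a sketch only: the route is made up, and whether a single shared instance avoids the Playwright error is an assumption): create the instance once at app startup and reuse it across requests, instead of calling `get_instance()` inside the endpoint.
```python
from fastapi import FastAPI
from TikTokApi import TikTokApi

app = FastAPI()
tik_tok_api = None

@app.on_event("startup")
def create_api_instance():
    # Create one shared instance so Playwright is only launched once.
    global tik_tok_api
    tik_tok_api = TikTokApi.get_instance()

@app.get("/users/{handle}")
def get_user(handle: str):
    return tik_tok_api.getUser(handle)
```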
|
closed
|
2021-01-26T13:28:29Z
|
2022-02-14T03:08:27Z
|
https://github.com/davidteather/TikTok-Api/issues/479
|
[
"bug"
] |
jaoxford
| 4
|
FactoryBoy/factory_boy
|
sqlalchemy
| 635
|
Mutually dependent django model fields, with defaults
|
#### The problem
Suppose I have a model with two fields which need to be set according to some relationship between them. E.g. (a contrived example) I have a product_type field and a size field. I'd like to be able to provide either, neither, or both fields. If I provide, say, product_type, I'd like the size field to be set accordingly (e.g. if product_type is "book", I'd like something like "299 pages"). And if the size is provided as "299 pages", I'd like the product type to be set to something for which that is a valid size (i.e. "book").
#### Proposed solution
What I've tried so far is having both fields as lazy_attributes, but I get a "Cyclic lazy attribute definition" error if I don't supply either of the dependent fields. I don't really have a proposed solution in terms of coding it, but I'd like my factory to provide a default pair of field values if neither is provided by the factory caller (e.g. product_type='shoes', size='42'). Or maybe there's already a way of doing this that I haven't found?
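For the one-directional half of this, a minimal sketch of what does work (model and field values are made up for illustration): making only `size` lazy avoids the cyclic error and gives a sensible default pair, though it cannot derive `product_type` from a caller-supplied `size` — that reverse direction is exactly where the cyclic error appears.
```python
import factory

# Hypothetical default size for each product type.
DEFAULT_SIZE_FOR_TYPE = {"book": "299 pages", "shoes": "42"}

class ProductFactory(factory.Factory):
    class Meta:
        model = dict  # stand-in model, just for the sketch

    product_type = "shoes"  # plain default, so no cycle is created
    # Derived from product_type; passing size=... explicitly still overrides it.
    size = factory.LazyAttribute(lambda obj: DEFAULT_SIZE_FOR_TYPE[obj.product_type])
```
`ProductFactory()` then yields `{'product_type': 'shoes', 'size': '42'}`, and `ProductFactory(product_type='book')` yields the matching `'299 pages'`.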
#### Extra notes
Just for context, I'm currently trying to replace a lot of django_any factories in a large django codebase, and there are a fair number of places where something like this would be very useful. Otherwise we have to write a bunch of things like "ProductGivenSizeFactory", which is much less convenient than having a single factory with the proper behaviour.
|
closed
|
2019-08-15T13:09:19Z
|
2019-08-22T16:28:43Z
|
https://github.com/FactoryBoy/factory_boy/issues/635
|
[] |
Joeboy
| 2
|
laughingman7743/PyAthena
|
sqlalchemy
| 13
|
jpype._jexception.OutOfMemoryErrorPyRaisable: java.lang.OutOfMemoryError: GC overhead limit exceeded
|
Hi,
I am running into this error after running several big queries and reusing the connection:
```
jpype._jexception.OutOfMemoryErrorPyRaisable: java.lang.OutOfMemoryError: GC overhead limit exceeded
```
Any ideas? Is there a way to pass JVM args to JPype to increase the memory given to the JVM at startup?
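For reference, a minimal sketch of how JVM flags are passed with JPype (that this helps here is an assumption: the flags only apply if the JVM is started with them before PyAthena's JDBC layer starts it):
```python
import jpype

# JVM options are plain string arguments to startJVM; they only take effect
# if the JVM is not already running, so do this before the first connection.
if not jpype.isJVMStarted():
    jpype.startJVM(jpype.getDefaultJVMPath(), "-Xmx4g")
```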
Thanks!
|
closed
|
2017-06-19T19:12:23Z
|
2017-06-22T12:51:19Z
|
https://github.com/laughingman7743/PyAthena/issues/13
|
[] |
arnaudsj
| 2
|
psf/requests
|
python
| 6,223
|
requests.packages.chardet* references not assigned correctly?
|
I found that packages.py assigns sys.modules['requests.packages.chardet'] over and over with different modules; is this intentional or a bug?
I looked at the code and this `target` variable confuses me: it is reassigned inside the loop and replaced with itself (so, completely?), which looks like a name conflict to me. The code is quoted from installed version 2.28.1.
```
target = chardet.__name__
for mod in list(sys.modules):
    if mod == target or mod.startswith(f"{target}."):
        target = target.replace(target, "chardet")
        sys.modules[f"requests.packages.{target}"] = sys.modules[mod]
```
## Expected Result
Every chardet.* module maps to requests.packages.chardet.* respectively.
## Actual Result
Only requests.packages.chardet is assigned in the end.
## Reproduction Steps
```python
import requests
import sys

print([m for m in sys.modules if m.startswith('requests.packages.chardet')])
```
|
closed
|
2022-08-31T10:56:22Z
|
2024-05-14T22:53:44Z
|
https://github.com/psf/requests/issues/6223
|
[] |
babyhoo976
| 2
|
HIT-SCIR/ltp
|
nlp
| 506
|
ltp.seg: illegal instruction (core dumped) on a machine running Ubuntu 14
|
Machine: Ubuntu 14
Initialization: `ltp = LTP(path=model_path)`, where model_path is the local path of the small model
Call: `seg, hidden = ltp.seg([text])` => fails with illegal instruction (core dumped).
Deployment: via Docker
This did not happen before on Ubuntu 16.04. Since I deploy with Docker, in theory there should be no coupling with the host system. Does LTP segmentation depend on the host OS version or on low-level libraries?
|
closed
|
2021-04-22T06:13:20Z
|
2021-04-22T08:04:31Z
|
https://github.com/HIT-SCIR/ltp/issues/506
|
[] |
cl011
| 1
|
prisma-labs/python-graphql-client
|
graphql
| 3
|
Is there any way to specify an authorization path, a token for example?
|
closed
|
2017-10-09T21:33:01Z
|
2017-12-08T21:32:58Z
|
https://github.com/prisma-labs/python-graphql-client/issues/3
|
[] |
eamigo86
| 3
|
|
521xueweihan/HelloGitHub
|
python
| 2,514
|
[Open-source self-recommendation] An AI IDE built for data scientists and algorithm engineers
|
## Recommended project
- Project URL: https://github.com/BaihaiAI/IDP
- Category: TypeScript, JavaScript, Python, Rust
- Project title: An easy-to-use AI IDE built for data scientists and algorithm engineers
- Project description: IDP is a self-developed AI IDE (integrated development environment for AI) that natively supports Python and SQL, the two most widely used languages in data science and AI. IDP is built for AI and data-science developers such as data scientists and algorithm engineers; tailored to their habits, it has productivity tools such as version management and environment management built in, helping them maximize AI development efficiency.
- Highlights:
  - Easy to learn and use: provides version management, environment management and cloning, variable management, preset code snippets, intelligent code assistance, and more, helping data scientists and algorithm engineers work more efficiently
  - Backend written in Rust, giving better runtime performance
  - Independently developed and fully under our own control
- Screenshot:

- Roadmap:
  IDP uses a plugin architecture, so plugins covering the whole AI development workflow (such as data annotation or hyperparameter optimization) can be integrated easily. Going forward we will:
  - Build a plugin library; developers interested in building plugins are welcome to join us
  - Improve existing features and make them easier to use
|
open
|
2023-03-04T02:53:54Z
|
2023-03-04T02:58:35Z
|
https://github.com/521xueweihan/HelloGitHub/issues/2514
|
[] |
liminniu
| 0
|
Evil0ctal/Douyin_TikTok_Download_API
|
web-scraping
| 540
|
[BUG] Douyin: fetching comment replies for a specified video returns 400
|
Hi, after pulling the project I tested "Douyin - fetch comment replies for a specified video" and it returns 400.
I then read the docs carefully, and your online API test page returns 400 as well:
https://douyin.wtf/docs#/Douyin-Web-API/fetch_video_comments_reply_api_douyin_web_fetch_video_comment_replies_get

|
closed
|
2025-01-17T09:47:12Z
|
2025-02-14T09:04:35Z
|
https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/540
|
[
"BUG"
] |
yumingzhu
| 4
|
ray-project/ray
|
pytorch
| 50,917
|
[Data] Timestamptz type loses its time zone after map transforming.
|
### What happened + What you expected to happen
Timestamptz type loses its time zone after map transforming.
### Versions / Dependencies
```
In [45]: ray.__version__
Out[45]: '2.42.1'
```
### Reproduction script
```
import ray
import pyarrow as pa
ray.init()
table = pa.table({
    "ts": pa.array([1735689600, 1735689600, 1735689600], type=pa.timestamp("s", tz='UTC')),
    "id": [1, 2, 3]
})
ds = ray.data.from_arrow(table)
print(ds.schema())
#Column Type
#------ ----
#ts timestamp[s, tz=UTC]
#id int64
ds2 = ds.map_batches(lambda batch: batch)
print(ds2.schema())
#Column Type
#------ ----
#ts timestamp[s]
#id int64
```
### Issue Severity
None
|
closed
|
2025-02-26T11:37:27Z
|
2025-03-03T11:28:21Z
|
https://github.com/ray-project/ray/issues/50917
|
[
"bug",
"triage",
"data"
] |
sharkdtu
| 1
|
plotly/dash-table
|
plotly
| 591
|
row/column selectable single applies to all tables on the page
|
Make 2 tables with row_selectable or column_selectable set to 'single': you can only select one row and one column on the whole page, but it should be one per table.
reported in https://community.plot.ly/t/multiple-dash-tables-selection-error/28802
```
import dash
import dash_html_components as html
import dash_table
keys = ['a', 'b', 'c']
data = [{'a': 'a', 'b': 'b', 'c': 'c'}, {'a': 'aa', 'b': 'bb', 'c': 'cc'}]
app = dash.Dash(__name__)
app.layout = html.Div([
    dash_table.DataTable(
        id='a',
        columns=[{"name": i, "id": i, "selectable": True} for i in keys],
        data=data,
        column_selectable='single',
        row_selectable='single'
    ),
    dash_table.DataTable(
        id='b',
        columns=[{"name": i, "id": i, "selectable": True} for i in keys],
        data=data,
        column_selectable='single',
        row_selectable='single'
    )
])

if __name__ == '__main__':
    app.run_server(debug=True)
```
|
closed
|
2019-09-19T05:03:09Z
|
2019-09-19T20:41:36Z
|
https://github.com/plotly/dash-table/issues/591
|
[
"dash-type-bug",
"size: 0.5"
] |
alexcjohnson
| 3
|
gradio-app/gradio
|
python
| 10,864
|
Cannot Upgrade Gradio - rvc_pipe Is Missing, Cannot Be Installed
|
### Describe the bug
When attempting to upgrade Gradio, it requires music_tag; after that is installed, rvc_pipe is required.
```
C:\Users\Mehdi\Downloads\ai-voice-cloning-3.0\src>pip install rvc_pipe
Defaulting to user installation because normal site-packages is not writeable
ERROR: Could not find a version that satisfies the requirement rvc_pipe (from versions: none)
ERROR: No matching distribution found for rvc_pipe
```
I cannot find it online.
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
```
### Screenshot
_No response_
### Logs
```shell
```
### System Info
```shell
It cannot be updated!
```
### Severity
I can work around it
|
closed
|
2025-03-22T17:06:19Z
|
2025-03-23T21:31:56Z
|
https://github.com/gradio-app/gradio/issues/10864
|
[
"bug"
] |
gyprosetti
| 1
|
wkentaro/labelme
|
computer-vision
| 976
|
[Feature] Create specific-sized box
|
I want to create a box with a fixed size (256x256).
How can I do this with labelme?
|
closed
|
2022-01-14T03:54:12Z
|
2022-06-25T04:30:44Z
|
https://github.com/wkentaro/labelme/issues/976
|
[] |
alicera
| 6
|
tableau/server-client-python
|
rest-api
| 973
|
Create/Update site missing support for newer settings
|
The server-side API does not appear to support interacting with the following settings:
- Schedule linked tasks
- Mobile offline favorites
- Flow web authoring enable/disable
- Mobile app lock
- Tag limit
- Cross database join toggle
- Toggle personal space
- Personal space storage limits
|
open
|
2022-01-13T14:51:43Z
|
2022-01-25T22:08:21Z
|
https://github.com/tableau/server-client-python/issues/973
|
[
"gap"
] |
jorwoods
| 0
|
ydataai/ydata-profiling
|
jupyter
| 1,129
|
Feature Request: support for Polars
|
### Missing functionality
Polars integration? https://www.pola.rs/
### Proposed feature
Use a Polars dataframe as a compute backend,
or let the user pass a Polars dataframe to ProfileReport.
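In the meantime, a hedged workaround sketch (assumes `pyarrow` is available for the conversion and the current `ydata_profiling` package name): convert the Polars frame to pandas at the boundary.
```python
import polars as pl
from ydata_profiling import ProfileReport

df = pl.DataFrame({"a": [1, 2, 3], "b": ["x", "y", "z"]})
# Convert to pandas only for profiling; heavy queries can stay in Polars.
report = ProfileReport(df.to_pandas(), title="Profiling Report")
report.to_file("report.html")
```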
### Alternatives considered
Spark integration.
### Additional context
Polars helps optimize queries and reduce memory footprint.
It could be used to run analysis on big dataframes and to speed up computation.
|
open
|
2022-10-25T20:11:49Z
|
2025-03-24T01:52:27Z
|
https://github.com/ydataai/ydata-profiling/issues/1129
|
[
"needs-triage"
] |
PierreSnell
| 11
|
TheKevJames/coveralls-python
|
pytest
| 1
|
Remove third-party modules from coverage report
|
I added HTTPretty to my module, and coveralls-python [added HTTPretty to the coverage report](https://coveralls.io/builds/5097)
It would be good if python-coverage didn't include third-party modules in the coverage report.
|
closed
|
2013-02-26T17:16:10Z
|
2013-02-26T18:43:34Z
|
https://github.com/TheKevJames/coveralls-python/issues/1
|
[] |
tyrannosaurus
| 2
|
matterport/Mask_RCNN
|
tensorflow
| 2,339
|
Training doesn't go beyond 1 epoch and no masks, rois produced
|
Hello guys,
I have been training on a custom dataset.
I have changed a few parameters such as:
```
STEPS_PER_EPOCH = 10
epochs = 5
```
The training starts well, but it gets stuck after almost completing the first epoch. You can see this in the image below.

I would also like to add that I waited around 2 hrs after the first epoch before posting this issue.
I am using CPU.
The time taken for the training is not an issue. It's just that it is stuck.
Also, when I train for only 1 epoch, I do not see the rois or the masks in the test image on which I test the model. Any idea why this is happening?
Can someone please help me out?
Regards,
Yash.
|
closed
|
2020-08-25T09:10:04Z
|
2020-08-27T07:33:41Z
|
https://github.com/matterport/Mask_RCNN/issues/2339
|
[] |
YashRunwal
| 1
|
littlecodersh/ItChat
|
api
| 727
|
Error when creating a group chat
|
Creating a group chat fails. The information returned by the group-chat creation call prints as follows:
```
{'BaseResponse': {'ErrMsg': '返回值不带BaseResponse', 'Ret': -1000, 'RawMsg': 'no BaseResponse in raw response'}}
Start auto replying.
```
itchat version: `1.2.32`; Python version: 3.7.
The code is as follows:
```
itchat.auto_login(hotReload=True,enableCmdQR=2)
memberList = itchat.get_friends()[1:5]
# create the group chat; the second argument is the chatroom topic (name)
chatroomUserName = itchat.create_chatroom(memberList, 'test chatroom')
print(chatroomUserName)
itchat.run(debug=True)
```
|
closed
|
2018-09-10T12:34:59Z
|
2018-10-15T02:00:01Z
|
https://github.com/littlecodersh/ItChat/issues/727
|
[
"duplicate"
] |
nenuwangd
| 1
|
OpenInterpreter/open-interpreter
|
python
| 912
|
OpenAI API key required?
|
### Describe the bug
The project Readme states that you are not affiliated with OpenAI, nor serve their product, but once launched it requires you to insert an OpenAI API key...
`OpenAI API key not found
To use GPT-4 (highly recommended) please provide an OpenAI API key. `
I thought you had your own model, created and publicly shared with the world, so why does it now want an OpenAI subscription?
### Reproduce
Try running `interpreter` after installation.
### Expected behavior
Expected it to work without an OpenAI API key.
### Screenshots

### Open Interpreter version
latest
### Python version
3.11.7
### Operating System name and version
linux mint
### Additional context
_No response_
|
closed
|
2024-01-13T09:10:55Z
|
2024-03-19T21:03:52Z
|
https://github.com/OpenInterpreter/open-interpreter/issues/912
|
[
"Documentation"
] |
andreylisovskiy
| 9
|
keras-team/keras
|
deep-learning
| 20,627
|
GlobalAveragePooling1D data_format Question
|
My rig
- Ubuntu 24.04 VM , RTX3060Ti with driver nvidia 535
- tensorflow-2.14-gpu/tensorflow-2.18 , both pull from docker
- Nvidia Container Toolkit if running in gpu version
About [this example](https://keras.io/examples/timeseries/timeseries_classification_transformer/):
The transformer blocks of this example contain 2 Conv1D layers, and therefore we have to reshape the input matrix to add the channel dimension at the end.
There is a GlobalAveragePooling1D layer after the transformer blocks:
x = layers.GlobalAveragePooling1D(data_format="channels_last")(x)
which should be correct since our channel is added last.
However, when running this example, the third-to-last line of the summary does not show 64,128 params:
dense (Dense) │ (None, 128) │ 64,128 │ global_average_pool…
Instead it has just 256 parameters, making the total params far lower, and the model only reaches an accuracy of ~50%.

This happens whether I run tensorflow-2.14-gpu or the CPU-only tensorflow-2.18.
However, after changing to data_format="channels_first" everything works: the Dense layer after the GlobalAveragePooling1D layer shows 64,128 params, the total params match, and the training accuracy exceeds 90%.
I discovered this by finding a very similar model [here](https://github.com/mxochicale/intro-to-transformers/blob/main/tutorials/time-series-classification/timeseries_transformer_classification.ipynb).
The only difference is the data_format.
But isn't data_format="channels_last" the right choice? So what's wrong?
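A small shape check that reproduces both parameter counts (the tensor is hypothetical; the shapes follow the tutorial's (batch, 500, 1) input):
```python
import numpy as np
from keras import layers

x = np.zeros((4, 500, 1))  # (batch, timesteps, channels)
print(layers.GlobalAveragePooling1D(data_format="channels_last")(x).shape)
# (4, 1)   -> Dense(128) gets 1 input:    1 * 128 + 128 = 256 params
print(layers.GlobalAveragePooling1D(data_format="channels_first")(x).shape)
# (4, 500) -> Dense(128) gets 500 inputs: 500 * 128 + 128 = 64,128 params
```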
|
open
|
2024-12-11T05:51:31Z
|
2024-12-13T06:28:57Z
|
https://github.com/keras-team/keras/issues/20627
|
[
"type:Bug"
] |
cptang2007
| 0
|
huggingface/transformers
|
tensorflow
| 36,848
|
GPT2 repetition of words in output
|
### System Info
- `transformers` version: 4.45.2
- Platform: Linux-5.15.0-122-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.28.1
- Safetensors version: 0.5.2
- Accelerate version: 1.2.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.1+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: NO
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import pytest
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

@pytest.mark.parametrize("dtype", [torch.float16, torch.float32])
def test_gpt2_cpu_inductor(dtype):
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2").to(dtype)
    prompt1 = "GPT2 is model developed by OpenAI"
    # run on CPU
    input = tokenizer(prompt1, return_tensors="pt")
    input_ids1 = input.input_ids
    attention_mask = input.attention_mask
    gen_tokens1 = model.generate(
        input_ids1,
        attention_mask=attention_mask,
        max_new_tokens=30,
        do_sample=False,
        pad_token_id=tokenizer.eos_token_id,
    )
    gen_text1 = tokenizer.batch_decode(gen_tokens1)[0]
    print(gen_text1)

    import torch._inductor.config as inductor_config
    inductor_config.inplace_buffers = False
    model.transformer.wte.forward = torch.compile(
        model.transformer.wte.forward, backend="inductor", fullgraph=False
    )
    gen_tokens_cpu1 = model.generate(
        input_ids1,
        attention_mask=attention_mask,
        max_new_tokens=30,
        do_sample=False,
        pad_token_id=tokenizer.eos_token_id,
    )
    gen_text_cpu1 = tokenizer.batch_decode(gen_tokens_cpu1)[0]
    assert gen_text1 == gen_text_cpu1
```
### Expected behavior
For the above test I see the output as
`GPT2 is model developed by OpenAI and is based on the OpenAI-based OpenAI-based OpenAI-based OpenAI-based OpenAI-based OpenAI-based Open`
Is this expected behavior?
@ArthurZucker could you please explain why the output is like this?
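For context, greedy decoding (`do_sample=False`) on base GPT-2 is well known to fall into repetition loops like this. A hedged sketch of standard `generate()` options that usually reduce it, continuing the reproduction snippet above (the parameter values are arbitrary):
```python
gen_tokens = model.generate(
    input_ids1,
    attention_mask=attention_mask,
    max_new_tokens=30,
    do_sample=True,           # sample instead of greedy decoding
    top_p=0.9,
    no_repeat_ngram_size=3,   # block repeated trigrams
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.batch_decode(gen_tokens)[0])
```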
|
closed
|
2025-03-20T10:55:28Z
|
2025-03-20T13:05:33Z
|
https://github.com/huggingface/transformers/issues/36848
|
[
"bug"
] |
vpandya-quic
| 1
|
keras-rl/keras-rl
|
tensorflow
| 192
|
Continue training using saved weights
|
My question basically is:
Is there a way to save the weights/memory and use them afterwards for more training? In my environment I want to train a model and afterwards continue this training with some features changed, in order to adapt it to each situation. I think it could be accomplished by saving the memory (e.g. SequentialMemory) to disk, then loading it again, creating the agent with this loaded memory, and training it again.
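A minimal sketch of the usual weight-persistence pattern, assuming the standard keras-rl `DQNAgent` API with `dqn` and `env` built as in the library's examples (persisting the replay memory itself is not covered by this API; pickling it would be a separate, untested step):
```python
# First session: train and save the agent's network weights.
dqn.fit(env, nb_steps=50000)
dqn.save_weights('dqn_weights.h5f', overwrite=True)

# Later session: rebuild the same model/agent, reload, continue training.
dqn.load_weights('dqn_weights.h5f')
dqn.fit(env, nb_steps=50000)  # resumes from the saved weights
```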
|
closed
|
2018-04-05T11:15:58Z
|
2019-01-12T16:20:27Z
|
https://github.com/keras-rl/keras-rl/issues/192
|
[
"wontfix"
] |
ghub-c
| 4
|
aleju/imgaug
|
machine-learning
| 706
|
how to know which augmenters are used?
|
hi~,
I want to know, when I use SomeOf, how can I find out which augmenters were actually applied?
Thank you very much!
Also, I ran into a problem that looks like this:
error: OpenCV(4.1.1) C:\projects\opencv-python\opencv\modules\core\src\alloc.cpp:72: error: (-4: Insufficient memory) Failed to allocate ***** bytes in function 'cv::OutOfMemoryError'
|
closed
|
2020-07-30T08:33:55Z
|
2020-08-03T04:16:53Z
|
https://github.com/aleju/imgaug/issues/706
|
[] |
TyrionChou
| 2
|
aiogram/aiogram
|
asyncio
| 1,457
|
Invalid webhook response given with handle_in_background=False
|
### Checklist
- [X] I am sure the error is coming from aiogram code
- [X] I have searched in the issue tracker for similar bug reports, including closed ones
### Operating system
mac os x
### Python version
3.12.2
### aiogram version
3.4.1
### Expected behavior
Bot handles incoming requests and replies with a proper status (200 on success)
### Current behavior
Bot handles requests successfully but does not reply properly
### Steps to reproduce
1. create SimpleRequestHandler
> SimpleRequestHandler(
> dispatcher,
> bot,
> handle_in_background=False
> ).register(app, path=WEBHOOK_PATH)
2. Create an empty command handler
> @router.message(CommandStart())
> async def message_handler(message: types.Message, state: FSMContext):
> pass
3. Run
### Code example
```python3
import logging
import os

import aiohttp.web_app
from aiogram import Dispatcher, Bot, types
from aiogram.client.default import DefaultBotProperties
from aiogram.client.session.aiohttp import AiohttpSession
from aiogram.client.telegram import TelegramAPIServer
from aiogram.enums import ParseMode
from aiogram.filters import CommandStart
from aiogram.fsm.context import FSMContext
from aiogram.webhook.aiohttp_server import setup_application, SimpleRequestHandler
from aiohttp import web

logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger()

BOT_TOKEN = os.environ["BOT_TOKEN"]
BOT_API_BASE_URL = os.getenv('BOT_API_BASE_URL')
WEBAPP_HOST = os.getenv('WEBAPP_HOST', 'localhost')
WEBAPP_PORT = int(os.getenv('WEBAPP_PORT', 8080))
BASE_WEBHOOK_URL = os.getenv('BASE_WEBHOOK_URL', f"http://{WEBAPP_HOST}:{WEBAPP_PORT}")
WEBHOOK_PATH = os.getenv('WEBHOOK_PATH', '/webhook')

async def on_startup(bot: Bot) -> None:
    logger.info("Setting main bot webhook")
    await bot.set_webhook(f"{BASE_WEBHOOK_URL}{WEBHOOK_PATH}")

async def on_shutdown(bot: Bot) -> None:
    logger.info("Deleting main bot webhook")
    await bot.delete_webhook()

def main():
    logger.info('BOT_API_BASE_URL=%s', BOT_API_BASE_URL)
    logger.info('WEBAPP_HOST=%s', WEBAPP_HOST)
    logger.info('WEBAPP_PORT=%s', WEBAPP_PORT)
    logger.info('BASE_WEBHOOK_URL=%s', BASE_WEBHOOK_URL)
    logger.info('WEBHOOK_PATH=%s', WEBHOOK_PATH)

    if BOT_API_BASE_URL is not None:
        session = AiohttpSession(
            api=TelegramAPIServer.from_base(BOT_API_BASE_URL)
        )
    else:
        session = None

    bot = Bot(
        token=BOT_TOKEN, session=session,
        default=DefaultBotProperties(
            parse_mode=ParseMode.MARKDOWN_V2,
            link_preview_is_disabled=True
        )
    )

    app = aiohttp.web_app.Application()
    dispatcher = Dispatcher()
    dispatcher['bot'] = bot
    dispatcher['bot_api_session'] = session
    dispatcher.startup.register(on_startup)
    dispatcher.shutdown.register(on_shutdown)

    @dispatcher.message(CommandStart())
    async def message_handler(message: types.Message, state: FSMContext):
        pass

    SimpleRequestHandler(
        dispatcher,
        bot,
        handle_in_background=False
    ).register(app, path=WEBHOOK_PATH)
    setup_application(app, dispatcher)

    # Start web-application.
    web.run_app(app, host=WEBAPP_HOST, port=WEBAPP_PORT)

if __name__ == '__main__':
    main()
```
### Logs
```sh
2024-04-11 09:28:12,870 - aiogram.event - INFO - Update id=209665694 is handled. Duration 0 ms by bot id=7052315372
2024-04-11 09:28:12,871 - aiohttp.access - INFO - 127.0.0.1 [11/Apr/2024:09:28:08 +0300] "POST /webhook HTTP/1.1" 200 237 "-" "-"
2024-04-11 09:28:12,872 - aiohttp.server - ERROR - Error handling request
Traceback (most recent call last):
File "/Users/ignz/Library/Caches/pypoetry/virtualenvs/tg-feedback-bot-dfXuSdBg-py3.12/lib/python3.12/site-packages/aiohttp/web_protocol.py", line 350, in data_received
messages, upgraded, tail = self._request_parser.feed_data(data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "aiohttp/_http_parser.pyx", line 557, in aiohttp._http_parser.HttpParser.feed_data
aiohttp.http_exceptions.BadStatusLine: 400, message:
Invalid method encountered:
b'HTTP/1.1 400 Bad Request'
^
2024-04-11 09:28:12,873 - aiohttp.access - INFO - 127.0.0.1 [11/Apr/2024:09:28:12 +0300] "UNKNOWN / HTTP/1.0" 400 0 "-" "-"
```
### Additional information
_No response_
|
open
|
2024-04-11T06:28:58Z
|
2024-04-11T06:28:58Z
|
https://github.com/aiogram/aiogram/issues/1457
|
[
"bug"
] |
unintended
| 0
|
hankcs/HanLP
|
nlp
| 1,356
|
How can CoreStopWordDictionary.add() be persisted?
|
## Checklist
Please confirm the following:
* I have carefully read the documents below and found no answer in any of them:
  - [Homepage docs](https://github.com/hankcs/HanLP)
  - [wiki](https://github.com/hankcs/HanLP/wiki)
  - [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have searched my question via [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues) and found no answer there either.
* I understand that the open-source community is a volunteer community built on shared interest and assumes no responsibilities or obligations. I will be polite and thank everyone who helps me.
* [x] I mark this box with an x to confirm all of the above.
## Version
The current latest version is: 1.7.5
The version I am using is: 1.7.4
## My question
How can CoreStopWordDictionary.add() write the stop word into stopwords.txt?
Or is the only way to modify stopwords.txt manually or through a file stream?
## Reproducing the problem
### Trigger code
```
public class DemoStopWord
{
    public static void main(String[] args) throws Exception {
        System.out.println(CoreStopWordDictionary.contains("一起"));
        stopwords();
        add();
        stopwords();
    }

    public static void add() {
        boolean add = CoreStopWordDictionary.add("一起");
        CoreStopWordDictionary.reload();
        System.out.println("add = " + add);
    }

    public static void stopwords() throws Exception {
        String con = "我们一起去逛超市,我们先去城西银泰,我们再去城南银泰。然后我们再一起回家";
        List<String> strings = HanLP.extractKeyword(con, 5);
        System.out.println("strings = " + strings);
    }
}
```
### Expected output
I expect that after calling CoreStopWordDictionary.add("一起"), the stop word "一起" is added to the stop-word dictionary, and that calling CoreStopWordDictionary.reload() then reloads the dictionary and cache, achieving dynamic modification of the stop-word dictionary.
```
false
strings = [一起, 银泰, 城, 再, 西]
add = true
strings = [城, 西, 银泰, 先, 再]
```
### Actual output
CoreStopWordDictionary.add("一起") does not write "一起" into the stop-word dictionary; it only lives in memory. After calling CoreStopWordDictionary.reload(), the stop-word dictionary no longer contains "一起".
```
false
strings = [一起, 银泰, 城, 再, 西]
add = true
strings = [一起, 银泰, 城, 再, 西]
```
## Other information
So for now I still have to modify stopwords.txt manually or via a file stream, right? Then how are CoreStopWordDictionary.add() and CoreStopWordDictionary.remove() supposed to be used?
|
closed
|
2019-12-17T06:54:56Z
|
2019-12-20T02:06:11Z
|
https://github.com/hankcs/HanLP/issues/1356
|
[
"question"
] |
xiuqiang1995
| 2
|
OFA-Sys/Chinese-CLIP
|
computer-vision
| 358
|
The saved fine-tuned model performs worse than it did during fine-tuning
|
During fine-tuning, both text-to-image and image-to-text retrieval reached over 90%. I saved a .pt checkpoint from the run and, based on the extract features code, built a demo that matches a description to an image, but the computed probability for the matching answer is only about 40-50%.


|
open
|
2024-09-12T07:14:45Z
|
2024-09-12T07:14:45Z
|
https://github.com/OFA-Sys/Chinese-CLIP/issues/358
|
[] |
wangly1998
| 0
|
vaexio/vaex
|
data-science
| 1,836
|
[FEATURE-REQUEST]: Remove the df.*func* when using a function after "register_function" + "auto add".
|
**Description**
When registering a new function `foo()` with `@vaex.register_function()`, you also need to apply it to the dataframe with `df.add_function("foo", foo)` or the state won't save it.
Later you can use it with `df.func.foo()`.
1. It would be very cool to have it without the `df.func` accessor: `df.func.foo()` → `df.foo()`
2. It would be very cool if this happened automatically behind the scenes, without the `add_function` call.
|
open
|
2022-01-17T15:11:41Z
|
2022-01-17T15:12:40Z
|
https://github.com/vaexio/vaex/issues/1836
|
[] |
xdssio
| 0
|
ets-labs/python-dependency-injector
|
asyncio
| 393
|
A service becomes a _asyncio.Future if it has an asynchronous dependency
|
di version: 4.20.2
python: 3.9.1
os: linux
```python
from dependency_injector import containers, providers

async def init_resource():
    yield ...

class Service:
    def __init__(self, res):
        ...

class AppContainer(containers.DeclarativeContainer):
    res = providers.Resource(init_resource)
    foo = providers.Singleton(Service, res)
    bar = providers.Singleton(Service, None)

container = AppContainer()
foo = container.foo()
print(type(foo))  # <class '_asyncio.Future'> <-- why not a <class '__main__.Service'> instance?
bar = container.bar()
print(type(bar))  # <class '__main__.Service'>
```
Is this a bug or a feature? Or maybe I missed something in the documentation?
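For what it's worth, a sketch of how the async case is apparently meant to be consumed, continuing the snippet above (that awaiting inside an event loop resolves the Future is the assumption here): when a provider has an async Resource dependency, calling it returns an awaitable.
```python
import asyncio

async def main():
    container = AppContainer()
    foo = await container.foo()  # awaiting the Future yields the Service instance
    print(type(foo))             # <class '__main__.Service'>
    await container.shutdown_resources()

asyncio.run(main())
```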
|
closed
|
2021-02-09T18:23:13Z
|
2021-02-09T19:27:17Z
|
https://github.com/ets-labs/python-dependency-injector/issues/393
|
[
"question"
] |
gtors
| 3
|
fastapi-admin/fastapi-admin
|
fastapi
| 61
|
NoSQL db support
|
Hello, I am using both PostgreSQL and MongoDB in my FastAPI project.
Is there any possibility to add MongoDB collections to the admin dashboard and enable CRUD operations for them as well?
|
closed
|
2021-07-29T21:31:27Z
|
2021-07-30T01:44:18Z
|
https://github.com/fastapi-admin/fastapi-admin/issues/61
|
[] |
royalwood
| 1
|
hindupuravinash/the-gan-zoo
|
machine-learning
| 54
|
AttGAN code
|
AttGAN code has been released recently. https://github.com/LynnHo/AttGAN-Tensorflow
|
closed
|
2018-05-08T02:00:22Z
|
2018-05-10T16:31:33Z
|
https://github.com/hindupuravinash/the-gan-zoo/issues/54
|
[] |
LynnHo
| 1
|
pydata/xarray
|
pandas
| 9,129
|
scatter plot is slow
|
### What happened?
scatter plot is slow when the dataset has large (in length) coordinates, even though those coordinates are not involved in the scatter plot.
### What did you expect to happen?
scatter plot speed does not depend on coordinates that are not involved in the scatter plot, which was the case at some point in the past
### Minimal Complete Verifiable Example
```Python
import numpy as np
import xarray as xr
from matplotlib import pyplot as plt
%config InlineBackend.figure_format = 'retina'
%matplotlib inline
# Define coordinates
month = np.arange(1, 13, dtype=np.int64)
L = np.arange(1, 13, dtype=np.int64)
# Create random values for the variables SP and SE
np.random.seed(0) # For reproducibility
SP_values = np.random.rand(len(L), len(month))
SE_values = SP_values + np.random.rand(len(L), len(month))
# Create the dataset
ds = xr.Dataset(
    {
        "SP": (["L", "month"], SP_values),
        "SE": (["L", "month"], SE_values)
    },
    coords={
        "L": L,
        "month": month,
        "S": np.arange(250),
        "model": np.arange(7),
        "M": np.arange(30)
    }
)
# slow
ds.plot.scatter(x='SP', y='SE')
ds = xr.Dataset(
    {
        "SP": (["L", "month"], SP_values),
        "SE": (["L", "month"], SE_values)
    },
    coords={
        "L": L,
        "month": month
    }
)
# fast
ds.plot.scatter(x='SP', y='SE')
```
### MVCE confirmation
- [X] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
- [X] Complete example — the example is self-contained, including all data and the text of any traceback.
- [X] Verifiable example — the example copy & pastes into an IPython prompt or [Binder notebook](https://mybinder.org/v2/gh/pydata/xarray/main?urlpath=lab/tree/doc/examples/blank_template.ipynb), returning the result.
- [X] New issue — a search of GitHub Issues suggests this is not a duplicate.
- [X] Recent environment — the issue occurs with the latest version of xarray and its dependencies.
### Relevant log output
_No response_
### Anything else we need to know?
For me, slow = 25 seconds and fast = instantaneous
### Environment
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.11.9 | packaged by conda-forge | (main, Apr 19 2024, 18:45:13) [Clang 16.0.6 ]
python-bits: 64
OS: Darwin
OS-release: 23.5.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.14.3
libnetcdf: 4.9.2
xarray: 2024.6.0
pandas: 2.2.2
numpy: 1.26.4
scipy: 1.13.1
netCDF4: 1.6.5
pydap: installed
h5netcdf: 1.3.0
h5py: 3.11.0
zarr: 2.18.2
cftime: 1.6.4
nc_time_axis: 1.4.1
iris: None
bottleneck: 1.3.8
dask: 2024.6.0
distributed: 2024.6.0
matplotlib: 3.8.4
cartopy: 0.23.0
seaborn: 0.13.2
numbagg: 0.8.1
fsspec: 2024.6.0
cupy: None
pint: 0.24
sparse: 0.15.4
flox: 0.9.8
numpy_groupies: 0.11.1
setuptools: 70.0.0
pip: 24.0
conda: None
pytest: 8.2.2
mypy: None
IPython: 8.17.2
sphinx: None</details>
|
closed
|
2024-06-16T21:11:31Z
|
2024-07-09T07:09:19Z
|
https://github.com/pydata/xarray/issues/9129
|
[
"bug",
"topic-plotting",
"topic-performance"
] |
mktippett
| 1
|
flasgger/flasgger
|
flask
| 611
|
flask-marshmallow - generating Marshmallow schemas from SQLAlchemy models seemingly breaks flasgger
|
I have a SQLAlchemy model from which I derive a flask-marshmallow schema:
```python
class Customer(db.Model):
    __tablename__ = 'customer'

    customer_id: Mapped[int] = mapped_column('pk_customer_id', primary_key=True)
    first_name: Mapped[str] = mapped_column(String(100))
    last_name: Mapped[str] = mapped_column(String(100))
    customer_number: Mapped[int]

class CustomerSchema(ma.SQLAlchemyAutoSchema):
    class Meta:
        model = Customer
```
I then add the Schema to my definitions with `@swag_from`:
```python
@customers_bp.route('/', methods=['GET'])
@swag_from({
    'definitions': {
        'Customer': CustomerSchema
    }
})
def get_customers():
    """
    Return a list of customers.
    ---
    responses:
      200:
        description: A list of customers
        schema:
          type: object
          properties:
            customers:
              type: array
              items:
                $ref: '#/definitions/Customer'
    """
    ...
```
When I head to `/apidocs`, I am greeted with the following error:
```
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/flasgger/base.py", line 164, in get
return jsonify(self.loader())
File "/usr/local/lib/python3.9/site-packages/flask/json/__init__.py", line 170, in jsonify
return current_app.json.response(*args, **kwargs) # type: ignore[return-value]
File "/usr/local/lib/python3.9/site-packages/flask/json/provider.py", line 214, in response
f"{self.dumps(obj, **dump_args)}\n", mimetype=self.mimetype
File "/usr/local/lib/python3.9/site-packages/flask/json/provider.py", line 179, in dumps
return json.dumps(obj, **kwargs)
File "/usr/local/lib/python3.9/json/__init__.py", line 234, in dumps
return cls(
File "/usr/local/lib/python3.9/json/encoder.py", line 201, in encode
chunks = list(chunks)
File "/usr/local/lib/python3.9/json/encoder.py", line 431, in _iterencode
yield from _iterencode_dict(o, _current_indent_level)
File "/usr/local/lib/python3.9/json/encoder.py", line 405, in _iterencode_dict
yield from chunks
File "/usr/local/lib/python3.9/json/encoder.py", line 405, in _iterencode_dict
yield from chunks
File "/usr/local/lib/python3.9/json/encoder.py", line 438, in _iterencode
o = _default(o)
File "/usr/local/lib/python3.9/site-packages/flask/json/provider.py", line 121, in _default
raise TypeError(f"Object of type {type(o).__name__} is not JSON serializable")
TypeError: Object of type SQLAlchemyAutoSchemaMeta is not JSON serializable
```
Seemingly, the `Meta` object containing options for schema generation causes problems when attempting to include it in the flasgger config.
|
open
|
2024-02-29T11:37:39Z
|
2024-03-07T20:49:01Z
|
https://github.com/flasgger/flasgger/issues/611
|
[] |
snctfd
| 1
|
jmcnamara/XlsxWriter
|
pandas
| 254
|
Unable to set x axis intervals to floating point number
|
Hi,
My requirement is to set the x axis between 1.045 and 1.21 with intervals of 0.055; thus, the four points on the x-axis are 1.045, 1.1, 1.155, 1.21. I am unable to achieve this in xlsxwriter. I tried the code below, which failed:
``` python
chart.set_x_axis({'min': 1.045, 'max': 1.21})
chart.set_x_axis({'interval_unit': 0.055})
```
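From the docs it looks like two things may matter here: each `set_x_axis()` call replaces the options of the previous one, and on a value axis (e.g. a scatter chart) the tick spacing option is `major_unit` rather than `interval_unit`. A minimal sketch of a single merged call:
```python
# Merged into one call; 'major_unit' sets tick spacing on a value x axis.
chart.set_x_axis({'min': 1.045, 'max': 1.21, 'major_unit': 0.055})
```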
Kindly suggest a solution.
Thanks and regards,
Amitra
|
closed
|
2015-05-14T07:01:58Z
|
2015-05-14T07:45:01Z
|
https://github.com/jmcnamara/XlsxWriter/issues/254
|
[
"question"
] |
amitrasudan
| 1
|
seleniumbase/SeleniumBase
|
web-scraping
| 2,879
|
Add support for `uc_gui_handle_cf()` with `Driver()` and `DriverContext()` formats
|
### Add support for `uc_gui_handle_cf()` with `Driver()` and `DriverContext()` formats
Currently, if running this code:
```python
from seleniumbase import DriverContext

with DriverContext(uc=True) as driver:
    url = "https://www.virtualmanager.com/en/login"
    driver.uc_open_with_reconnect(url, 4)
    driver.uc_gui_handle_cf()  # Ready if needed!
    driver.assert_element('input[name*="email"]')
    driver.assert_element('input[name*="login"]')
```
That leads to this stack trace:
```python
File "/Users/michael/github/SeleniumBase/seleniumbase/core/browser_launcher.py", line 4025, in <lambda>
lambda *args, **kwargs: uc_gui_handle_cf(
^^^^^^^^^^^^^^^^^
File "/Users/michael/github/SeleniumBase/seleniumbase/core/browser_launcher.py", line 651, in uc_gui_handle_cf
install_pyautogui_if_missing()
File "/Users/michael/github/SeleniumBase/seleniumbase/core/browser_launcher.py", line 559, in install_pyautogui_if_missing
verify_pyautogui_has_a_headed_browser()
File "/Users/michael/github/SeleniumBase/seleniumbase/core/browser_launcher.py", line 552, in verify_pyautogui_has_a_headed_browser
if sb_config.headless or sb_config.headless2:
^^^^^^^^^^^^^^^^^^
AttributeError: module 'seleniumbase.config' has no attribute 'headless'
```
Here's the workaround for now using `SB()`: (Which includes the virtual display needed on Linux)
```python
from seleniumbase import SB

with SB(uc=True) as sb:
    url = "https://www.virtualmanager.com/en/login"
    sb.uc_open_with_reconnect(url, 4)
    sb.uc_gui_handle_cf()  # Ready if needed!
    sb.assert_element('input[name*="email"]')
    sb.assert_element('input[name*="login"]')
```
Once this ticket is resolved, Linux users who use `Driver()` or `DriverContext` formats in UC Mode will still need to set `pyautogui._pyautogui_x11._display` to `Xlib.display.Display(os.environ['DISPLAY'])` on Linux in order to sync up `pyautogui` with the `X11` virtual display after calling `sbvirtualdisplay.Display(visible=True, size=(1366, 768), backend="xvfb", use_xauth=True).start()`. (For `Xlib`, use `import Xlib.display` after `pip install python-xlib`.)
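A sketch of that Linux sync-up step, assembled from the description above (assumes `pip install python-xlib` and a headed virtual display):
```python
import os

import Xlib.display
import pyautogui
from sbvirtualdisplay import Display

# Start the X11 virtual display first.
display = Display(visible=True, size=(1366, 768), backend="xvfb", use_xauth=True)
display.start()

# Then point pyautogui at that display so it and UC Mode agree.
pyautogui._pyautogui_x11._display = Xlib.display.Display(os.environ["DISPLAY"])
```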
|
closed
|
2024-06-27T15:20:13Z
|
2024-07-02T14:29:16Z
|
https://github.com/seleniumbase/SeleniumBase/issues/2879
|
[
"enhancement",
"UC Mode / CDP Mode"
] |
mdmintz
| 7
|
Sanster/IOPaint
|
pytorch
| 50
|
How can we use our own trained model checkpoints and model config for testing?
|
We have trained the LaMa inpainting model on our own dataset, and now I want to run inference using our trained model config and checkpoints.
Alternatively, simply provide the code to produce a mask image by selecting an area with the cursor in a given image.
Thank you in advance.
|
closed
|
2022-05-24T07:50:25Z
|
2022-05-24T08:38:20Z
|
https://github.com/Sanster/IOPaint/issues/50
|
[] |
ram-parvesh
| 0
|
graphql-python/graphene
|
graphql
| 1,452
|
Confusing/incorrect DataLoader example in docs
|
Hi all,
I'm very much new to Graphene, so please excuse me if this is incorrect, but it seems to me the code in the docs (screenshot below) will in fact make 4 round trips to the backend?
<img width="901" alt="image" src="https://user-images.githubusercontent.com/1187758/186154207-041da5ed-9341-4c75-bc10-73d3e950ef9a.png">
I don't see how it would be possible otherwise. Looks like these docs were inspired by the aiodataloader docs (screenshot below). It's reasonable to see how these would be able to bring it down to 2 requests.
<img width="770" alt="image" src="https://user-images.githubusercontent.com/1187758/186154951-5f5fdcb1-be64-4d68-8994-82b02a5fd4a3.png">
|
closed
|
2022-08-23T12:13:54Z
|
2022-08-25T03:22:23Z
|
https://github.com/graphql-python/graphene/issues/1452
|
[
"🐛 bug"
] |
lopatin
| 0
|
keras-team/keras
|
deep-learning
| 20,278
|
Incompatibility of compute_dtype with complex-valued inputs
|
Hi,
In #19872, you introduced the possibility for layers with complex-valued inputs.
It then seems that this statement of the API Documentation is now wrong:

When I feed a complex-valued input tensor into a layer (as in this [unit test](https://github.com/keras-team/keras/commit/076ab315a7d1939d2ec965dc097946c53ef1d539#diff-94db6e94fea3334a876a0c3c02a897c1a99e91398dff51987a786b58d52cc0d1)), it is not cast to the `compute_dtype`, but rather kept as it is. I would somehow expect that the `compute_dtype` becomes complex in this case as well.
|
open
|
2024-09-23T11:48:24Z
|
2024-09-25T19:31:05Z
|
https://github.com/keras-team/keras/issues/20278
|
[
"type:feature"
] |
jhoydis
| 1
|
deepspeedai/DeepSpeed
|
deep-learning
| 6,906
|
[BUG] RuntimeError: Unable to JIT load the fp_quantizer op due to it not being compatible due to hardware/software issue. FP Quantizer is using an untested triton version (3.1.0), only 2.3.(0, 1) and 3.0.0 are known to be compatible with these kernels
|
**Describe the bug**
I am out of my depth here, but I'll try.
I installed deepspeed on the vllm/vllm-openai docker image via pip install deepspeed. The install went fine, but when I tried to do an FP6 quant in flight on a model, I got the error in the subject line. Noodling around, I see op_builder/fp_quantizer.py is checking the Triton version and presumably blocking it. I tried downgrading triton from 3.1.0 to 3.0.0, which caused a cascading array of interdependency issues. I would like to lift the version check and see if it works, but I am not a coder and wouldn't know what to do.
**To Reproduce**
Steps to reproduce the behavior:
1. load vllm/vllm-openai:latest docker
2. install latest deepspeed
3. attempt to load model vllm serve (model_id) with parameter --quantization deepspeedfp (need configuration.json file)
4. See error
**Expected behavior**
**ds_report output**
[2024-12-23 14:25:46,009] [INFO] [real_accelerator.py:222:get_accelerator] Setting ds_accelerator to cuda (auto detect)
--------------------------------------------------
DeepSpeed C++/CUDA extension op report
--------------------------------------------------
NOTE: Ops not installed will be just-in-time (JIT) compiled at
runtime if needed. Op compatibility means that your system
meet the required dependencies to JIT install the op.
--------------------------------------------------
JIT compiled ops requires ninja
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
[WARNING] async_io requires the dev libaio .so object and headers but these were not found.
[WARNING] async_io: please install the libaio-dev package with apt
[WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
async_io ............... [NO] ....... [NO]
fused_adam ............. [NO] ....... [OKAY]
cpu_adam ............... [NO] ....... [OKAY]
cpu_adagrad ............ [NO] ....... [OKAY]
cpu_lion ............... [NO] ....... [OKAY]
[WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
evoformer_attn ......... [NO] ....... [NO]
[WARNING] FP Quantizer is using an untested triton version (3.1.0), only 2.3.(0, 1) and 3.0.0 are known to be compatible with these kernels
fp_quantizer ........... [NO] ....... [NO]
fused_lamb ............. [NO] ....... [OKAY]
fused_lion ............. [NO] ....... [OKAY]
/usr/bin/ld: cannot find -lcufile: No such file or directory
collect2: error: ld returned 1 exit status
gds .................... [NO] ....... [NO]
transformer_inference .. [NO] ....... [OKAY]
inference_core_ops ..... [NO] ....... [OKAY]
cutlass_ops ............ [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
ragged_device_ops ...... [NO] ....... [OKAY]
ragged_ops ............. [NO] ....... [OKAY]
random_ltd ............. [NO] ....... [OKAY]
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.5
[WARNING] using untested triton version (3.1.0), only 1.0.0 is known to be compatible
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
--------------------------------------------------
DeepSpeed general environment info:
torch install path ............... ['/usr/local/lib/python3.12/dist-packages/torch']
torch version .................... 2.5.1+cu124
deepspeed install path ........... ['/usr/local/lib/python3.12/dist-packages/deepspeed']
deepspeed info ................... 0.16.2, unknown, unknown
torch cuda version ............... 12.4
torch hip version ................ None
nvcc version ..................... 12.1
deepspeed wheel compiled w. ...... torch 2.5, cuda 12.4
shared memory (/dev/shm) size .... 46.57 GB
**Screenshots**
**System info (please complete the following information):**
- OS: Ubuntu 22.04
- GPU=2 A40
**Launcher context**
**Docker context**
vllm/vllm-openai:latest (0.65)
**Additional context**
|
closed
|
2024-12-23T22:27:19Z
|
2025-01-13T17:26:35Z
|
https://github.com/deepspeedai/DeepSpeed/issues/6906
|
[
"bug",
"compression"
] |
GHBigD
| 3
|
babysor/MockingBird
|
pytorch
| 511
|
Asking the author and community: what can I do about the model's robotic-sounding voice?
|
Complete beginner here, asking for advice:
1. I continued training the author's 75k checkpoint up to 200k steps; the loss is around 0.2. The voice sounds a bit strange, and above all the output is mechanical. What is going on, and what should I do?
2. Is the training simply not finished? Should I keep training, or do something else?
3. How do I load the area at the top of the application (the gray area shown below)?

|
open
|
2022-04-20T14:24:24Z
|
2022-04-22T14:04:29Z
|
https://github.com/babysor/MockingBird/issues/511
|
[] |
1239hy
| 2
|
Aeternalis-Ingenium/FastAPI-Backend-Template
|
sqlalchemy
| 33
|
Running pytests
|
Hi.
The README in the root dir says:
Make sure that you are in the backend/ directory
But backend/README.md says:
INFO: For running the test, make sure you are in the root directory and NOT in the backend/ directory!
I tried both ways and am not able to run the tests:
```
from src.main import backend_app
E ModuleNotFoundError: No module named 'src'
```
|
closed
|
2023-12-19T07:41:34Z
|
2023-12-19T08:10:03Z
|
https://github.com/Aeternalis-Ingenium/FastAPI-Backend-Template/issues/33
|
[] |
djo10
| 0
|
tensorflow/datasets
|
numpy
| 5,263
|
[data request] <dataset name>
|
* Name of dataset: NYU Depth Dataset V2
* URL of dataset: http://horatio.cs.nyu.edu/mit/silberman/nyu_depth_v2/bathrooms_part1.zip
I am not able to download this dataset now (which I could years ago); it says the site can't be reached. Any help would be appreciated. Thanks.
|
closed
|
2024-01-29T15:37:50Z
|
2024-02-14T11:47:34Z
|
https://github.com/tensorflow/datasets/issues/5263
|
[
"dataset request"
] |
Moushumi9medhi
| 2
|
strawberry-graphql/strawberry
|
graphql
| 3,631
|
strawberry.ext.mypy_plugin Pydantic 2.9.0 PydanticModelField.to_argument error missing 'model_strict' and 'is_root_model_root'
|
Hello!
It seems Pydantic 2.9.0 introduced a breaking change in PydanticModelField.to_argument, adding two new arguments:
https://github.com/pydantic/pydantic/commit/d6df62aaa34c21272cb5fcbcbe3a8b88474732f8
and
https://github.com/pydantic/pydantic/commit/93ced97b00491da4778e0608f2a3be62e64437a8
## Describe the Bug
This is the mypy trace
```
./my-file.py:132: error: INTERNAL ERROR -- Please try using mypy master on GitHub:
https://mypy.readthedocs.io/en/stable/common_issues.html#using-a-development-mypy-build
Please report a bug at https://github.com/python/mypy/issues
version: 1.11.2
Traceback (most recent call last):
File "mypy/semanal.py", line 7087, in accept
File "mypy/nodes.py", line 1183, in accept
File "mypy/semanal.py", line 1700, in visit_class_def
File "mypy/semanal.py", line 1891, in analyze_class
File "mypy/semanal.py", line 1925, in analyze_class_body_common
File "mypy/semanal.py", line 1996, in apply_class_plugin_hooks
File "/Users/victorbarroncas/code/boostsec-asset-management/.venv/lib/python3.12/site-packages/strawberry/ext/mypy_plugin.py", line 489, in strawberry_pydantic_class_callback
f.to_argument(
TypeError: PydanticModelField.to_argument() missing 2 required positional arguments: 'model_strict' and 'is_root_model_root'
./my-file.py:132: : note: use --pdb to drop into pdb
```
## System Information
- Operating system: OSX
- strawberry-graphql 0.240.3
- pydantic 2.9.1
- pydantic-core 2.23.3
- mypy 1.11.2
- mypy-extensions 1.0.0
## Additional Context
Similar issue:
https://github.com/strawberry-graphql/strawberry/issues/3560
|
open
|
2024-09-13T12:02:43Z
|
2025-03-20T15:56:52Z
|
https://github.com/strawberry-graphql/strawberry/issues/3631
|
[
"bug"
] |
victor-nb
| 0
|
Lightning-AI/pytorch-lightning
|
data-science
| 19,768
|
Script freezes when Trainer is instantiated
|
### Bug description
I can run a training script with pytorch-lightning once. However, after the training finishes, if I try to run it again, the code freezes when the `L.Trainer` is instantiated. There are no error messages.
Only after shutting down and restarting can I run it once more, but then the problem persists the next time.
This happens to me with different codes, even with the "lightning in 15 minutes" example.
### What version are you seeing the problem on?
v2.2
### How to reproduce the bug
```python
# Based on https://lightning.ai/docs/pytorch/stable/starter/introduction.html
import os

import torch
from torch import optim, nn, utils
from torchvision.datasets import MNIST
from torchvision.transforms import ToTensor
import pytorch_lightning as L

# define any number of nn.Modules (or use your current ones)
encoder = nn.Sequential(nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 3))
decoder = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 28 * 28))

# define the LightningModule
class LitAutoEncoder(L.LightningModule):
    def __init__(self, encoder, decoder):
        super().__init__()
        self.encoder = encoder
        self.decoder = decoder

    def training_step(self, batch, batch_idx):
        # training_step defines the train loop.
        # it is independent of forward
        x, y = batch
        x = x.view(x.size(0), -1)
        x_hat = self.model_forward(x)
        loss = nn.functional.mse_loss(x_hat, x)
        # Logging to TensorBoard (if installed) by default
        self.log("train_loss", loss)
        return batch

    def configure_optimizers(self):
        optimizer = optim.Adam(self.parameters(), lr=1e-3)
        return optimizer

# init the autoencoder
autoencoder = LitAutoEncoder(encoder, decoder)

# setup data
dataset = MNIST(os.getcwd(), download=True, train=True, transform=ToTensor())

# use 20% of training data for validation
train_set_size = int(len(dataset) * 0.8)
valid_set_size = len(dataset) - train_set_size

seed = torch.Generator().manual_seed(42)
train_set, val_set = utils.data.random_split(dataset, [train_set_size, valid_set_size], generator=seed)

train_loader = utils.data.DataLoader(train_set, num_workers=15)
valid_loader = utils.data.DataLoader(val_set, num_workers=15)

print("Before instantiate Trainer")
# train the model (hint: here are some helpful Trainer arguments for rapid idea iteration)
trainer = L.Trainer(limit_train_batches=100, max_epochs=10, check_val_every_n_epoch=10, accelerator="gpu")
print("After instantiate Trainer")
```
### Error messages and logs
There are no error messages
### Environment
<details>
<summary>Current environment</summary>
* CUDA:
- GPU:
- NVIDIA GeForce RTX 3080 Laptop GPU
- available: True
- version: 12.1
* Lightning:
- denoising-diffusion-pytorch: 1.5.4
- ema-pytorch: 0.2.1
- lightning-utilities: 0.11.2
- pytorch-fid: 0.3.0
- pytorch-lightning: 2.2.2
- torch: 2.2.2
- torchaudio: 2.2.2
- torchmetrics: 1.0.0
- torchvision: 0.17.2
* Packages:
- absl-py: 1.4.0
- accelerate: 0.17.1
- addict: 2.4.0
- aiohttp: 3.8.3
- aiosignal: 1.2.0
- antlr4-python3-runtime: 4.9.3
- anyio: 3.6.1
- appdirs: 1.4.4
- argon2-cffi: 21.3.0
- argon2-cffi-bindings: 21.2.0
- array-record: 0.4.0
- arrow: 1.2.3
- astropy: 5.2.1
- asttokens: 2.0.8
- astunparse: 1.6.3
- async-timeout: 4.0.2
- attrs: 23.1.0
- auditwheel: 5.4.0
- babel: 2.10.3
- backcall: 0.2.0
- beautifulsoup4: 4.11.1
- bleach: 5.0.1
- blinker: 1.6.2
- bqplot: 0.12.40
- branca: 0.6.0
- build: 1.2.1
- cachetools: 5.2.0
- carla: 0.9.14
- certifi: 2024.2.2
- cffi: 1.15.1
- chardet: 5.1.0
- charset-normalizer: 2.1.1
- click: 8.1.3
- click-plugins: 1.1.1
- cligj: 0.7.2
- cloudpickle: 3.0.0
- cmake: 3.26.1
- colossus: 1.3.1
- colour: 0.1.5
- contourpy: 1.0.7
- cycler: 0.11.0
- cython: 0.29.32
- dacite: 1.8.1
- dask: 2023.3.1
- dataclass-array: 1.4.1
- debugpy: 1.6.3
- decorator: 4.4.2
- deepspeed: 0.7.2
- defusedxml: 0.7.1
- denoising-diffusion-pytorch: 1.5.4
- deprecation: 2.1.0
- dill: 0.3.6
- distlib: 0.3.6
- dm-tree: 0.1.8
- docker-pycreds: 0.4.0
- docstring-parser: 0.15
- einops: 0.6.0
- einsum: 0.3.0
- ema-pytorch: 0.2.1
- etils: 1.3.0
- exceptiongroup: 1.2.0
- executing: 1.0.0
- farama-notifications: 0.0.4
- fastjsonschema: 2.16.1
- filelock: 3.8.0
- fiona: 1.9.3
- flask: 2.3.3
- flatbuffers: 24.3.25
- folium: 0.14.0
- fonttools: 4.37.1
- frozenlist: 1.3.1
- fsspec: 2022.8.2
- future: 1.0.0
- fvcore: 0.1.5.post20221221
- gast: 0.4.0
- gdown: 4.7.1
- geojson: 3.0.1
- geopandas: 0.12.2
- gitdb: 4.0.11
- gitpython: 3.1.43
- google-auth: 2.16.2
- google-auth-oauthlib: 0.4.6
- google-pasta: 0.2.0
- googleapis-common-protos: 1.63.0
- googledrivedownloader: 0.4
- gputil: 1.4.0
- gpxpy: 1.5.0
- grpcio: 1.62.1
- gunicorn: 20.0.4
- gym: 0.26.2
- gym-notices: 0.0.8
- gymnasium: 0.28.1
- h5py: 3.7.0
- haversine: 2.8.0
- hdf5plugin: 4.1.1
- hjson: 3.1.0
- humanfriendly: 10.0
- idna: 3.6
- imageio: 2.31.3
- imageio-ffmpeg: 0.4.7
- immutabledict: 2.2.0
- importlib-metadata: 4.12.0
- importlib-resources: 6.1.0
- imutils: 0.5.4
- invertedai: 0.0.8.post1
- iopath: 0.1.10
- ipyevents: 2.0.2
- ipyfilechooser: 0.6.0
- ipykernel: 6.15.3
- ipyleaflet: 0.17.4
- ipython: 8.5.0
- ipython-genutils: 0.2.0
- ipytree: 0.2.2
- ipywidgets: 8.0.2
- itsdangerous: 2.1.2
- jax-jumpy: 1.0.0
- jedi: 0.18.1
- jinja2: 3.1.2
- joblib: 1.4.0
- jplephem: 2.19
- json5: 0.9.10
- jsonargparse: 4.15.0
- jsonschema: 4.19.1
- jsonschema-specifications: 2023.7.1
- jstyleson: 0.0.2
- julia: 0.6.1
- jupyter: 1.0.0
- jupyter-client: 7.3.5
- jupyter-console: 6.4.4
- jupyter-core: 4.11.1
- jupyter-packaging: 0.12.3
- jupyter-server: 1.18.1
- jupyterlab: 3.4.7
- jupyterlab-pygments: 0.2.2
- jupyterlab-server: 2.15.1
- jupyterlab-widgets: 3.0.3
- keras: 2.11.0
- kiwisolver: 1.4.4
- lanelet2: 1.2.1
- lark: 1.1.9
- lazy-loader: 0.2
- leafmap: 0.27.0
- libclang: 14.0.6
- lightning-utilities: 0.11.2
- lit: 16.0.0
- llvmlite: 0.39.1
- locket: 1.0.0
- lunarsky: 0.2.1
- lxml: 4.9.1
- lz4: 4.3.3
- markdown: 3.4.1
- markdown-it-py: 2.2.0
- markupsafe: 2.1.1
- matplotlib: 3.6.1
- matplotlib-inline: 0.1.6
- mdurl: 0.1.2
- mistune: 2.0.4
- moviepy: 1.0.3
- mpi4py: 3.1.3
- mpmath: 1.3.0
- msgpack: 1.0.8
- multidict: 6.0.2
- munch: 2.5.0
- natsort: 8.2.0
- nbclassic: 0.4.3
- nbclient: 0.6.8
- nbconvert: 7.0.0
- nbformat: 5.5.0
- nest-asyncio: 1.5.5
- networkx: 2.8.6
- ninja: 1.10.2.3
- notebook: 6.4.12
- notebook-shim: 0.1.0
- numba: 0.56.4
- numpy: 1.24.4
- nvidia-cublas-cu11: 11.10.3.66
- nvidia-cublas-cu12: 12.1.3.1
- nvidia-cuda-cupti-cu11: 11.7.101
- nvidia-cuda-cupti-cu12: 12.1.105
- nvidia-cuda-nvrtc-cu11: 11.7.99
- nvidia-cuda-nvrtc-cu12: 12.1.105
- nvidia-cuda-runtime-cu11: 11.7.99
- nvidia-cuda-runtime-cu12: 12.1.105
- nvidia-cudnn-cu11: 8.5.0.96
- nvidia-cudnn-cu12: 8.9.2.26
- nvidia-cufft-cu11: 10.9.0.58
- nvidia-cufft-cu12: 11.0.2.54
- nvidia-curand-cu11: 10.2.10.91
- nvidia-curand-cu12: 10.3.2.106
- nvidia-cusolver-cu11: 11.4.0.1
- nvidia-cusolver-cu12: 11.4.5.107
- nvidia-cusparse-cu11: 11.7.4.91
- nvidia-cusparse-cu12: 12.1.0.106
- nvidia-nccl-cu11: 2.14.3
- nvidia-nccl-cu12: 2.19.3
- nvidia-nvjitlink-cu12: 12.4.127
- nvidia-nvtx-cu11: 11.7.91
- nvidia-nvtx-cu12: 12.1.105
- oauthlib: 3.2.2
- omegaconf: 2.3.0
- open-humans-api: 0.2.9
- opencv-python: 4.6.0.66
- openexr: 1.3.9
- opt-einsum: 3.3.0
- osmnx: 1.2.2
- p5py: 1.0.0
- packaging: 21.3
- pandas: 1.5.3
- pandocfilters: 1.5.0
- parso: 0.8.3
- partd: 1.4.1
- pep517: 0.13.0
- pickleshare: 0.7.5
- pillow: 9.2.0
- pint: 0.21.1
- pip: 24.0
- pkgconfig: 1.5.5
- pkgutil-resolve-name: 1.3.10
- platformdirs: 2.5.2
- plotly: 5.13.1
- plyfile: 0.8.1
- portalocker: 2.8.2
- powerbox: 0.7.1
- prettymapp: 0.1.0
- proglog: 0.1.10
- prometheus-client: 0.14.1
- promise: 2.3
- prompt-toolkit: 3.0.31
- protobuf: 3.19.6
- psutil: 5.9.2
- ptyprocess: 0.7.0
- pure-eval: 0.2.2
- py-cpuinfo: 8.0.0
- pyarrow: 10.0.0
- pyasn1: 0.4.8
- pyasn1-modules: 0.2.8
- pycocotools: 2.0
- pycosat: 0.6.3
- pycparser: 2.21
- pydantic: 1.10.9
- pydeprecate: 0.3.1
- pydub: 0.25.1
- pyelftools: 0.30
- pyerfa: 2.0.0.1
- pyfftw: 0.13.1
- pygame: 2.1.2
- pygments: 2.13.0
- pylians: 0.7
- pyparsing: 3.0.9
- pyproj: 3.5.0
- pyproject-hooks: 1.0.0
- pyquaternion: 0.9.9
- pyrsistent: 0.18.1
- pyshp: 2.3.1
- pysocks: 1.7.1
- pysr: 0.16.3
- pystac: 1.8.4
- pystac-client: 0.7.5
- python-box: 7.1.1
- python-dateutil: 2.8.2
- pytorch-fid: 0.3.0
- pytorch-lightning: 2.2.2
- pytz: 2022.2.1
- pywavelets: 1.4.1
- pyyaml: 6.0
- pyzmq: 23.2.1
- qtconsole: 5.3.2
- qtpy: 2.2.0
- ray: 2.10.0
- referencing: 0.30.2
- requests: 2.31.0
- requests-oauthlib: 1.3.1
- rich: 13.3.4
- rpds-py: 0.10.3
- rsa: 4.9
- rtree: 1.0.1
- ruamel.yaml: 0.17.21
- ruamel.yaml.clib: 0.2.7
- scikit-build-core: 0.8.2
- scikit-image: 0.20.0
- scikit-learn: 1.2.2
- scipy: 1.8.1
- scooby: 0.7.4
- seaborn: 0.12.2
- send2trash: 1.8.0
- sentry-sdk: 1.44.1
- setproctitle: 1.3.3
- setuptools: 67.6.0
- shapely: 1.8.0
- shellingham: 1.5.4
- six: 1.16.0
- sklearn: 0.0.post1
- smmap: 5.0.1
- sniffio: 1.3.0
- soupsieve: 2.3.2.post1
- spiceypy: 6.0.0
- stack-data: 0.5.0
- stravalib: 1.4
- swagger-client: 1.0.0
- sympy: 1.11.1
- tabulate: 0.9.0
- taichi: 1.5.0
- tenacity: 8.2.3
- tensorboard: 2.11.2
- tensorboard-data-server: 0.6.1
- tensorboard-plugin-wit: 1.8.1
- tensorboardx: 2.6.2.2
- tensorflow: 2.11.0
- tensorflow-addons: 0.21.0
- tensorflow-datasets: 4.9.0
- tensorflow-estimator: 2.11.0
- tensorflow-graphics: 2021.12.3
- tensorflow-io-gcs-filesystem: 0.29.0
- tensorflow-metadata: 1.13.0
- tensorflow-probability: 0.19.0
- termcolor: 2.1.1
- terminado: 0.15.0
- threadpoolctl: 3.1.0
- tifffile: 2023.3.21
- timm: 0.4.12
- tinycss2: 1.1.1
- toml: 0.10.2
- tomli: 2.0.1
- tomlkit: 0.11.4
- toolz: 0.12.1
- torch: 2.2.2
- torchaudio: 2.2.2
- torchmetrics: 1.0.0
- torchvision: 0.17.2
- tornado: 6.2
- tqdm: 4.66.2
- tr: 1.0.0.2
- trafficgen: 0.0.0
- traitlets: 5.4.0
- traittypes: 0.2.1
- trimesh: 4.3.0
- triton: 2.2.0
- typeguard: 2.13.3
- typer: 0.12.2
- typing-extensions: 4.11.0
- urllib3: 1.26.15
- virtualenv: 20.16.5
- visu3d: 1.5.1
- wandb: 0.16.5
- waymo-open-dataset-tf-2-11-0: 1.6.1
- wcwidth: 0.2.5
- webencodings: 0.5.1
- websocket-client: 1.4.1
- werkzeug: 2.3.7
- wheel: 0.37.1
- whitebox: 2.3.1
- whiteboxgui: 2.3.0
- widgetsnbextension: 4.0.3
- wrapt: 1.14.1
- xyzservices: 2023.7.0
- yacs: 0.1.8
- yapf: 0.30.0
- yarl: 1.8.1
- zipp: 3.8.1
* System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.8.19
- release: 5.15.0-102-generic
- version: #112~20.04.1-Ubuntu SMP Thu Mar 14 14:28:24 UTC 2024
</details>
### More info
_No response_
|
closed
|
2024-04-12T14:11:34Z
|
2024-06-22T22:46:07Z
|
https://github.com/Lightning-AI/pytorch-lightning/issues/19768
|
[
"question"
] |
PabloVD
| 5
|
cvat-ai/cvat
|
tensorflow
| 8,764
|
Can access CVAT over LAN but not Internet
|
Hi, I did everything in https://github.com/cvat-ai/cvat/issues/1095, but http://localhost:7070/auth/login doesn't log in and shows the message "Could not check authentication on the server. Open the Browser Console to get details", while http://localhost:8080/auth/login works with no problem. Over the WAN (http://myip:7060/) I get the same behavior. Can you help me?
|
closed
|
2024-12-02T12:04:18Z
|
2024-12-10T17:54:18Z
|
https://github.com/cvat-ai/cvat/issues/8764
|
[
"need info"
] |
alirezajafarishahedan
| 1
|
microsoft/MMdnn
|
tensorflow
| 378
|
load pytorch vgg16 converted from tensorflow : ImportError: No module named 'MainModel'
|
Platform (like ubuntu 16.04/win10): ubuntu16
Python version: 3.6
Source framework with version (like Tensorflow 1.4.1 with GPU): 1.3
Destination framework with version (like CNTK 2.3 with GPU):
Pre-trained model path (webpath or webdisk path):
Running scripts:
When calling torch.load on the converted pretrained model, I get:
ImportError: No module named 'MainModel'
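For what it's worth, MMdnn's PyTorch emitter writes the converted network into a generated Python file, and `torch.load` needs that class importable as `MainModel`. A minimal sketch of the commonly suggested workaround (the file names here are placeholders for your emitted files):
```python
import imp
import torch

# Make the generated class available under the name torch.load expects.
# 'kit_imagenet.py' is a placeholder for the .py file MMdnn emitted.
MainModel = imp.load_source('MainModel', 'kit_imagenet.py')
model = torch.load('kit_pytorch.npy')  # placeholder for the converted weights file
```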
|
closed
|
2018-08-27T04:56:18Z
|
2019-03-21T01:26:58Z
|
https://github.com/microsoft/MMdnn/issues/378
|
[] |
Proxiaowen
| 8
|
healthchecks/healthchecks
|
django
| 549
|
log entries: keep last n failure entries
|
Hello,
when the log entries hit the maximum, old messages are removed.
Especially with high-frequency check intervals, keeping a few of the "failure" events (which may contain important debug information in the body) would be useful, as opposed to removing log entries solely based on timestamp. Positive log entries are often only useful for their timestamp.
It can happen that I have 100 positive log entries but am missing the last 2-3 negative log entries with debug information in the body, and it's really the failures I'm interested in.
I'm not sure how this could be structured clearly without over-complicating the UI; maybe always keep the last 3 negative entries in the log?
|
closed
|
2021-08-06T12:07:32Z
|
2022-06-19T13:57:26Z
|
https://github.com/healthchecks/healthchecks/issues/549
|
[
"feature"
] |
lukastribus
| 6
|
dpgaspar/Flask-AppBuilder
|
flask
| 1,516
|
How to use OR operation in Rison
|
In the document, I found we can use `AND` operations like this: `(filters:!((col:name,opr:sw,value:a),(col:name,opr:ew,value:z)))`, but how do we use an `OR` operation?
|
closed
|
2020-11-11T00:18:15Z
|
2021-06-29T00:56:11Z
|
https://github.com/dpgaspar/Flask-AppBuilder/issues/1516
|
[
"question",
"stale"
] |
HugoPu
| 2
|
biolab/orange3
|
data-visualization
| 6,892
|
Problem with SVM widget (degree must be int but only allows float)
|
<!--
Thanks for taking the time to report a bug!
If you're raising an issue about an add-on (i.e., installed via Options > Add-ons), raise an issue in the relevant add-on's issue tracker instead. See: https://github.com/biolab?q=orange3
To fix the bug, we need to be able to reproduce it. Please answer the following questions to the best of your ability.
-->
**What's wrong?**
<!-- Be specific, clear, and concise. Include screenshots if relevant. -->
<!-- If you're getting an error message, copy it, and enclose it with three backticks (```). -->
I got this error when trying to use the SVM widget:
Fitting failed.
The 'degree' parameter of SVC must be an int in the range [0, inf). Got 3.0 instead.

The problem is that the degree (d) hyperparameter of this widget only accepts float values, yet the error message clearly says it ought to be an integer.
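For reference, a minimal sketch of the constraint outside Orange: recent scikit-learn versions validate `degree` as an integer, so a float coming from a spin box has to be cast before fitting.
```python
from sklearn.svm import SVC

degree = 3.0                                   # value as a float spin box delivers it
clf = SVC(kernel="poly", degree=int(degree))   # int() avoids the parameter validation error
```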
**How can we reproduce the problem?**
<!-- Upload a zip with the .ows file and data. -->
<!-- Describe the steps (open this widget, click there, then add this...) -->
**What's your environment?**
<!-- To find your Orange version, see "Help → About → Version" or `Orange.version.full_version` in code -->
- Operating system:
- Orange version:
- How you installed Orange:
|
closed
|
2024-09-17T11:34:04Z
|
2024-11-21T10:19:00Z
|
https://github.com/biolab/orange3/issues/6892
|
[
"bug report"
] |
TonciG
| 5
|
RobertCraigie/prisma-client-py
|
pydantic
| 28
|
Add group_by action method
|
[https://www.prisma.io/docs/reference/api-reference/prisma-client-reference#groupby](https://www.prisma.io/docs/reference/api-reference/prisma-client-reference#groupby)
The design decision we need to make here is how to represent the data that prisma returns.
|
closed
|
2021-06-21T17:34:52Z
|
2022-01-23T01:32:48Z
|
https://github.com/RobertCraigie/prisma-client-py/issues/28
|
[
"kind/feature"
] |
RobertCraigie
| 6
|
tflearn/tflearn
|
tensorflow
| 983
|
What's the reason for removing the second parameter for to_categorical
|
The old version of to_categorical took two parameters, the second being nb_classes. Why was it removed in the new version? What are the benefits? In my opinion, sometimes we must list all labels in a list file, but that does not mean every split contains every label; e.g. we may not see any sample of a small category in the test set. I really don't understand.
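A minimal stand-in that preserves the old two-argument behaviour (plain NumPy, no tflearn assumptions):
```python
import numpy as np

def to_categorical(y, nb_classes):
    # One-hot encode with an explicit class count, so classes absent
    # from this particular split still get a column.
    return np.eye(nb_classes, dtype=np.float32)[np.asarray(y, dtype=np.int64)]

print(to_categorical([0, 2], nb_classes=4))  # shape (2, 4)
```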
|
open
|
2017-12-22T11:58:26Z
|
2017-12-22T11:58:26Z
|
https://github.com/tflearn/tflearn/issues/983
|
[] |
wronsky317
| 0
|
KevinMusgrave/pytorch-metric-learning
|
computer-vision
| 87
|
Add trainer for MoCo
|
It would be nice to have a trainer for [MoCo](https://arxiv.org/abs/1911.05722). It would be similar to [UnsupervisedEmbeddingsUsingAugmentations](https://github.com/KevinMusgrave/pytorch-metric-learning/blob/master/src/pytorch_metric_learning/trainers/unsupervised_embeddings_using_augmentations.py) but would need to use [CrossBatchMemory](https://github.com/KevinMusgrave/pytorch-metric-learning/blob/master/src/pytorch_metric_learning/losses/cross_batch_memory.py) for the queue. Also, since the queue has to contain only negatives, there needs to be some care taken when creating labels inside each batch.
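For reference, a minimal sketch of the two MoCo-specific pieces such a trainer would need, the momentum encoder update and the negatives queue (names and sizes are illustrative, not this library's API):
```python
import torch
import torch.nn.functional as F

def momentum_update(q_net, k_net, m=0.999):
    # Key encoder trails the query encoder as an exponential moving average.
    for pq, pk in zip(q_net.parameters(), k_net.parameters()):
        pk.data.mul_(m).add_(pq.data, alpha=1.0 - m)

dim, K = 128, 65536
queue = F.normalize(torch.randn(dim, K), dim=0)  # queue of negative keys

def dequeue_and_enqueue(queue, keys, ptr):
    bs = keys.shape[0]                 # assumes K % bs == 0
    queue[:, ptr:ptr + bs] = keys.T    # replace the oldest keys with the new batch
    return (ptr + bs) % K
```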
|
closed
|
2020-05-03T18:51:45Z
|
2023-01-21T05:14:46Z
|
https://github.com/KevinMusgrave/pytorch-metric-learning/issues/87
|
[
"help wanted",
"new algorithm request"
] |
KevinMusgrave
| 0
|
Anjok07/ultimatevocalremovergui
|
pytorch
| 1,019
|
Unable to open on Mac, OS Big Sur
|
Program cannot open, running into this issue as seen on the command line:
\# /Applications/Ultimate\ Vocal\ Remover.app/Contents/MacOS/UVR
Traceback (most recent call last):
File "UVR.py", line 8, in <module>
File "PyInstaller/loader/pyimod02_importers.py", line 391, in exec_module
File "librosa/__init__.py", line 209, in <module>
File "PyInstaller/loader/pyimod02_importers.py", line 391, in exec_module
File "librosa/core/__init__.py", line 5, in <module>
File "PyInstaller/loader/pyimod02_importers.py", line 391, in exec_module
File "librosa/core/convert.py", line 7, in <module>
File "PyInstaller/loader/pyimod02_importers.py", line 391, in exec_module
File "librosa/core/notation.py", line 8, in <module>
File "PyInstaller/loader/pyimod02_importers.py", line 391, in exec_module
File "librosa/util/__init__.py", line 78, in <module>
File "PyInstaller/loader/pyimod02_importers.py", line 391, in exec_module
File "librosa/util/files.py", line 11, in <module>
File "PyInstaller/loader/pyimod02_importers.py", line 391, in exec_module
File "pooch/__init__.py", line 19, in <module>
File "PyInstaller/loader/pyimod02_importers.py", line 391, in exec_module
File "pooch/processors.py", line 14, in <module>
File "PyInstaller/loader/pyimod02_importers.py", line 391, in exec_module
File "lzma.py", line 27, in <module>
ImportError: dlopen(/Applications/Ultimate Vocal Remover.app/Contents/Frameworks/lib-dynload/_lzma.cpython-311-darwin.so, 2): Library not loaded: @rpath/liblzma.5.dylib
Referenced from: /Applications/Ultimate Vocal Remover.app/Contents/Frameworks/lib-dynload/_lzma.cpython-311-darwin.so
Reason: Incompatible library version: _lzma.cpython-311-darwin.so requires version 10.0.0 or later, but liblzma.5.dylib provides version 8.0.0
[1635] Failed to execute script 'UVR' due to unhandled exception: dlopen(/Applications/Ultimate Vocal Remover.app/Contents/Frameworks/lib-dynload/_lzma.cpython-311-darwin.so, 2): Library not loaded: @rpath/liblzma.5.dylib
Referenced from: /Applications/Ultimate Vocal Remover.app/Contents/Frameworks/lib-dynload/_lzma.cpython-311-darwin.so
Reason: Incompatible library version: _lzma.cpython-311-darwin.so requires version 10.0.0 or later, but liblzma.5.dylib provides version 8.0.0
[1635] Traceback:
Traceback (most recent call last):
File "UVR.py", line 8, in <module>
File "PyInstaller/loader/pyimod02_importers.py", line 391, in exec_module
File "librosa/__init__.py", line 209, in <module>
File "PyInstaller/loader/pyimod02_importers.py", line 391, in exec_module
File "librosa/core/__init__.py", line 5, in <module>
File "PyInstaller/loader/pyimod02_importers.py", line 391, in exec_module
File "librosa/core/convert.py", line 7, in <module>
File "PyInstaller/loader/pyimod02_importers.py", line 391, in exec_module
File "librosa/core/notation.py", line 8, in <module>
File "PyInstaller/loader/pyimod02_importers.py", line 391, in exec_module
File "librosa/util/__init__.py", line 78, in <module>
File "PyInstaller/loader/pyimod02_importers.py", line 391, in exec_module
File "librosa/util/files.py", line 11, in <module>
File "PyInstaller/loader/pyimod02_importers.py", line 391, in exec_module
File "pooch/__init__.py", line 19, in <module>
File "PyInstaller/loader/pyimod02_importers.py", line 391, in exec_module
File "pooch/processors.py", line 14, in <module>
File "PyInstaller/loader/pyimod02_importers.py", line 391, in exec_module
File "lzma.py", line 27, in <module>
ImportError: dlopen(/Applications/Ultimate Vocal Remover.app/Contents/Frameworks/lib-dynload/_lzma.cpython-311-darwin.so, 2): Library not loaded: @rpath/liblzma.5.dylib
Referenced from: /Applications/Ultimate Vocal Remover.app/Contents/Frameworks/lib-dynload/_lzma.cpython-311-darwin.so
Reason: Incompatible library version: _lzma.cpython-311-darwin.so requires version 10.0.0 or later, but liblzma.5.dylib provides version 8.0.0
Would this be an issue with my version of Python, or with my computer for whatever reason?
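In case it helps triage: the message says the bundled `_lzma` extension was linked against a newer liblzma than Big Sur ships, so it is neither your Python nor your machine as such, but the bundle. A way to confirm which library it resolves to (standard macOS/Homebrew tools; rebuilding the bundle against the older liblzma would still be needed):
```shell
# Show which liblzma the bundled extension links against
otool -L "/Applications/Ultimate Vocal Remover.app/Contents/Frameworks/lib-dynload/_lzma.cpython-311-darwin.so"

# A newer liblzma (xz) can be installed via Homebrew, if available
brew install xz
```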
|
open
|
2023-12-10T08:11:00Z
|
2023-12-13T14:10:05Z
|
https://github.com/Anjok07/ultimatevocalremovergui/issues/1019
|
[] |
Syntoca
| 1
|
DistrictDataLabs/yellowbrick
|
matplotlib
| 1,248
|
Create test for utilizing Sklearn pipelines with Visualizers
|
**Describe the issue**
ModelVisualizers should be tested to see if they would work within pipelines. This addresses issue #498 and PR #1245
This is being addressed in PR #1249
The test should cover these visualizers
- [x] - ClassPredictionError
- [x] - ClassificationReport
- [x] - ConfusionMatrix
- [ ] - DecisionBoundariesVisualizer
- [x] - PrecisionRecallCurve
- [x] - ROCAUC
- [ ] - CVScores
- [ ] - RFECV
- [ ] - ValidationCurve
- [x] - DroppingCurve
- [ ] - DiscriminationThreshold
- [ ] - LearningCurve
- [ ] - FeatureImportances
- [ ] - ResidualsPlot
- [ ] - AlphaSelection
- [ ] - PredictionError
- [ ] - SilhouetteVisualizer
- [ ] - KElbowVisualizer
- [ ] - InterclusterDistance
- [ ] - GridSearchColorPlot
The sample test could look like this
```
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split as tts
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from yellowbrick.classifier import ConfusionMatrix
iris = load_iris()
X = iris.data
y = iris.target
classes = iris.target_names
X_train, X_test, y_train, y_test = tts(X, y, test_size=0.2, random_state=42)
iris_cm = Pipeline([
('minmax', MinMaxScaler()),
('confusion', ConfusionMatrix(LogisticRegression(multi_class="auto", solver="liblinear"), classes=classes,
label_encoder={0: 'setosa', 1: 'versicolor', 2: 'virginica'}))
])
iris_cm.fit(X_train, y_train)
iris_cm.score(X_test, y_test)
self.assert_images_similar(iris_cm, tol=??? should be set to similar test if needed)
```
```
from sklearn.neural_network import MLPClassifier
from yellowbrick.classifier import ConfusionMatrix
from sklearn.model_selection import train_test_split as tts
from sklearn.datasets import make_classification
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
X, y = make_classification(
n_samples=400,
n_features=20,
n_informative=8,
n_redundant=8,
n_classes=2,
n_clusters_per_class=4,
random_state=27,
)
X_train, X_test, y_train, y_test = tts(X, y, test_size=0.2, random_state=42)
model = Pipeline([
('minmax', MinMaxScaler()),
('mlp', MLPClassifier()),
])
viz = ConfusionMatrix(model)
viz.fit(X_train, y_train, )
viz.score(X_test, y_test)
self.assert_images_similar(viz, tol=??? should be set to similar test if needed))
```
<!-- If you have a question, note that you can email us via our listserve:
https://groups.google.com/forum/#!forum/yellowbrick -->
<!-- This line alerts the Yellowbrick maintainers, feel free to use this
@ address to alert us directly in follow up comments -->
@DistrictDataLabs/team-oz-maintainers
|
closed
|
2022-05-15T00:00:27Z
|
2022-05-28T19:17:41Z
|
https://github.com/DistrictDataLabs/yellowbrick/issues/1248
|
[] |
lwgray
| 0
|
CorentinJ/Real-Time-Voice-Cloning
|
python
| 861
|
There is no speaker id in my dataset, what should I do?
|
closed
|
2021-10-05T17:03:40Z
|
2021-10-06T19:35:21Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/861
|
[] |
fancat-programer
| 6
|
|
huggingface/diffusers
|
deep-learning
| 10,223
|
Where should I obtain the lora-sdxl-dreambooth-id in Inference
|
### Describe the bug
I tried to upload the download link from the README file generated during training, but an error indicated it was incorrect. Where should I obtain the lora-id for Inference?
### Reproduction
README.md:
---
base_model: /data/ziqiang/czc/diffusers/examples/dreambooth/model
library_name: diffusers
license: openrail++
instance_prompt: a photo of sks dog
widget: []
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - daniu111/output
<Gallery />
## Model description
These are daniu111/output LoRA adaption weights for /data/ziqiang/czc/diffusers/examples/dreambooth/model.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: /data/ziqiang/czc/diffusers/examples/dreambooth/model/vae.
## Trigger words
You should use a photo of sks dog to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](daniu111/output/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
Inference:
```python
from huggingface_hub.repocard import RepoCard
from diffusers import DiffusionPipeline
import torch

lora_model_id = <"lora-sdxl-dreambooth-id">
card = RepoCard.load(lora_model_id)
base_model_id = card.data.to_dict()["base_model"]
pipe = DiffusionPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
pipe.load_lora_weights(lora_model_id)
image = pipe("A picture of a sks dog in a bucket", num_inference_steps=25).images[0]
image.save("sks_dog.png")
```
"The lora-dreambooth-sdxl-id seems to need to be uploaded, but I don't know where to obtain this ID."
### Logs
_No response_
### System Info
- 🤗 Diffusers version: 0.32.0.dev0
- Platform: Linux-5.4.0-198-generic-x86_64-with-glibc2.31
- Running on Google Colab?: No
- Python version: 3.12.4
- PyTorch version (GPU?): 2.4.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.26.2
- Transformers version: 4.46.3
- Accelerate version: 1.1.1
- PEFT version: 0.7.0
- Bitsandbytes version: not installed
- Safetensors version: 0.4.5
- xFormers version: 0.0.27.post2
- Accelerator: NVIDIA RTX A6000, 49140 MiB
NVIDIA RTX A6000, 49140 MiB
NVIDIA RTX A6000, 49140 MiB
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@hlky
|
open
|
2024-12-14T06:34:56Z
|
2025-02-07T15:03:24Z
|
https://github.com/huggingface/diffusers/issues/10223
|
[
"bug",
"stale"
] |
Zarato2122
| 5
|
ultralytics/ultralytics
|
python
| 19,277
|
About pose model labels
|
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
What does the model's labeling do?
### Additional
What is the relationship between boxes and keypoints in the model's results? I want to use the model to detect postures such as running, jumping, and falling. Can the classification label of the box be changed directly to run/jump? (See the sketch below.)
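For reference, a minimal sketch of how boxes and keypoints line up in the results (the file name is a placeholder): each detected person gets one box and one index-aligned set of 17 COCO keypoints, so recognizing run/jump/fall means classifying the keypoint sequences, not renaming the box label.
```python
from ultralytics import YOLO

model = YOLO("yolov8n-pose.pt")   # pretrained pose model (single "person" class)
results = model("runner.jpg")     # placeholder input image
for r in results:
    print(r.boxes.xyxy)           # one row per detected person
    print(r.keypoints.xy)         # 17 (x, y) keypoints per person, same order as boxes
```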
|
open
|
2025-02-17T08:56:32Z
|
2025-02-17T09:27:51Z
|
https://github.com/ultralytics/ultralytics/issues/19277
|
[
"question",
"pose"
] |
liwenewil
| 5
|
dnouri/nolearn
|
scikit-learn
| 8
|
ValueError: 'total size of new array must be unchanged'
|
Am I doing something wrong here:
``` python
net1 = NeuralNet(
layers=[ # three layers: one hidden layer
('input', layers.InputLayer),
('conv1', layers.Conv2DLayer),
('pool1', layers.MaxPool2DLayer),
('dropout1', layers.DropoutLayer),
('hidden', layers.DenseLayer),
('output', layers.DenseLayer),
],
# layer parameters:
input_shape=(32, 1, 300, 400), # 32 images per batch times
hidden_num_units=100, # number of units in hidden layer
output_nonlinearity=None, # output layer uses identity function
output_num_units=len(classes),
# optimization method:
update=nesterov_momentum,
update_learning_rate=0.01,
update_momentum=0.9,
regression=False, # flag to indicate we're not dealing with regression problem
use_label_encoder=True,
max_epochs=400, # we want to train this many epochs
verbose=1,
batch_iterator=LoadBatchIterator(batch_size=32),
conv1_num_filters=4, conv1_filter_size=(3, 3), pool1_ds=(2, 2),
dropout1_p=0.1,
)
```
leads to:
```
/home/ubuntu/git/nolearn/nolearn/lasagne.pyc in fit(self, X, y)
155
156 try:
--> 157 self.train_loop(X, y)
158 except KeyboardInterrupt:
159 pdb.set_trace()
/home/ubuntu/git/nolearn/nolearn/lasagne.pyc in train_loop(self, X, y)
193
194 for Xb, yb in self.batch_iterator(X_train, y_train):
--> 195 batch_train_loss = self.train_iter_(Xb, yb)
196 train_losses.append(batch_train_loss)
197
/home/ubuntu/git/Theano/theano/compile/function_module.pyc in __call__(self, *args, **kwargs)
603 gof.link.raise_with_op(
604 self.fn.nodes[self.fn.position_of_error],
--> 605 self.fn.thunks[self.fn.position_of_error])
606 else:
607 # For the c linker We don't have access from
/home/ubuntu/git/Theano/theano/compile/function_module.pyc in __call__(self, *args, **kwargs)
593 t0_fn = time.time()
594 try:
--> 595 outputs = self.fn()
596 except Exception:
597 if hasattr(self.fn, 'position_of_error'):
/home/ubuntu/git/Theano/theano/gof/op.pyc in rval(p, i, o, n)
751
752 def rval(p=p, i=node_input_storage, o=node_output_storage, n=node):
--> 753 r = p(n, [x[0] for x in i], o)
754 for o in node.outputs:
755 compute_map[o][0] = True
/home/ubuntu/git/Theano/theano/sandbox/cuda/basic_ops.pyc in perform(self, node, inp, out_)
2349 else:
2350 raise ValueError("total size of new array must be unchanged",
-> 2351 x.shape, shp)
2352
2353 out[0] = x.reshape(tuple(shp))
ValueError: ('total size of new array must be unchanged', (31, 4, 298, 398), array([128, 1, 298, 398]))
Apply node that caused the error: GpuReshape{4}(GpuElemwise{Composite{[mul(i0, add(i1, Abs(i1)))]},no_inplace}.0, TensorConstant{[128 1 298 398]})
Inputs types: [CudaNdarrayType(float32, 4D), TensorType(int64, vector)]
Inputs shapes: [(31, 4, 298, 398), (4,)]
Inputs strides: [(474416, 118604, 398, 1), (8,)]
Inputs values: ['not shown', array([128, 1, 298, 398])]
HINT: Re-running with most Theano optimization disabled could give you a back-trace of when this node was created. This can be done with by setting the Theano flag 'optimizer=fast_compile'. If that does not work, Theano optimizations can be disabled with 'optimizer=None'.
HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint of this apply node.
```
|
closed
|
2014-12-27T17:39:49Z
|
2015-01-02T01:42:50Z
|
https://github.com/dnouri/nolearn/issues/8
|
[] |
cancan101
| 7
|
cvat-ai/cvat
|
tensorflow
| 8,353
|
Exporting review comments with annotations
|
Hello,
I want to save not only annotations but also their issues (comments made during review by right-clicking and choosing "Open Issue"); however, I cannot find any export format that does that.
Am I missing something, or is it not implemented? If not, is there any chance of getting such a feature in the near future?
It would also be nice to be able to upload such comments during task creation.
|
closed
|
2024-08-27T10:29:16Z
|
2024-08-27T11:14:45Z
|
https://github.com/cvat-ai/cvat/issues/8353
|
[] |
DainiusGaidamaviciuss
| 5
|
sqlalchemy/alembic
|
sqlalchemy
| 335
|
autogen primary key renderer uses `column.key` instead of `column.name` when assembling list of columns
|
**Migrated issue, originally created by Jesse Dhillon ([@jessedhillon](https://github.com/jessedhillon))**
Possibly related to #259
When rendering `PrimaryKeyConstraint`, I've noticed that I have to manually fix the output because the list of columns references each column's `key` property instead of `name`. Reading the SQLAlchemy docs, particularly:
> The name of this column as represented in the database.
~ [sqlalchemy.schema.Column.params.name](http://docs.sqlalchemy.org/en/rel_1_0/core/metadata.html#sqlalchemy.schema.Column.params.name)
and
> the `name` field is used only when rendering SQL.
~ [sqlalchemy.schema.Column.params.key](http://docs.sqlalchemy.org/en/rel_1_0/core/metadata.html#sqlalchemy.schema.Column.params.key)
it seems that `name` is the correct choice when building a `PrimaryKeyConstraint`.
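A minimal illustration of the distinction (names are arbitrary):
```python
from sqlalchemy import Column, Integer, MetaData, Table

t = Table(
    "user", MetaData(),
    # `name` is the DB column name; `key` is only the Python-side attribute.
    Column("user_name", Integer, key="username", primary_key=True),
)
print(t.primary_key.columns.keys())             # ['username']  <- key
print([c.name for c in t.primary_key.columns])  # ['user_name'] <- what DDL must render
```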
|
closed
|
2015-10-25T21:47:29Z
|
2016-01-29T15:53:24Z
|
https://github.com/sqlalchemy/alembic/issues/335
|
[
"bug",
"autogenerate - rendering"
] |
sqlalchemy-bot
| 6
|
miguelgrinberg/Flask-SocketIO
|
flask
| 1,260
|
Access Azure API in Python
|
My boss sent me a URL with the following format:
`https://{appname}.azurewebsites.net/api/Authentication/Token?username=XXXX&password=YYYY
`
I would like to access the API and fetch the data from a Python script.
I tried the following in script :
```
import requests
response= requests.get("https://{appname}.azurewebsites.net/",
auth=('XXXX', 'YYYY'))
print(response.status_code)
```
I received HTTP status 200, but I don't know how to retrieve data from the possible GETs (examples listed below).
He sent me a list of possible GETs.
For example:
GET /api/Country
GET /api/bike/{id}
...
(the list of possible GETs and POSTs are located in SWAGGER).
I am new to APIs, so any tips would help :)
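A sketch of the usual flow, assuming the token endpoint returns the token in the response body (the exact field name has to be checked against the Swagger docs):
```python
import requests

BASE = "https://{appname}.azurewebsites.net"  # placeholder from the question

# 1) Get a token from the authentication endpoint.
resp = requests.get(f"{BASE}/api/Authentication/Token",
                    params={"username": "XXXX", "password": "YYYY"})
token = resp.text  # or resp.json()["token"]; inspect the response to be sure

# 2) Use it on one of the documented GETs.
countries = requests.get(f"{BASE}/api/Country",
                         headers={"Authorization": f"Bearer {token}"})
print(countries.status_code, countries.json())
```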
|
closed
|
2020-04-20T17:20:27Z
|
2020-04-20T17:45:06Z
|
https://github.com/miguelgrinberg/Flask-SocketIO/issues/1260
|
[] |
chloe-v
| 1
|
wger-project/wger
|
django
| 1,679
|
Trailing zero in weight chart
|

|
open
|
2024-05-25T13:22:31Z
|
2024-06-23T16:52:46Z
|
https://github.com/wger-project/wger/issues/1679
|
[] |
tomyan112
| 2
|
QingdaoU/OnlineJudge
|
django
| 219
|
Concurrency may create multiple ACMContestRank rows for the same user in the same contest
|
https://github.com/QingdaoU/OnlineJudge/blob/f7bd9f16b44f944e0bb370dcc7e0e82a1f2b5eb4/judge/dispatcher.py#L327
ACMContestRank has no unique index, so the same user can end up with multiple rows; after that, get() keeps raising an exception.
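A minimal sketch of the usual fix, a DB-level uniqueness constraint plus an atomic get-or-create (field names assumed from context):
```python
from django.db import models

class ACMContestRank(models.Model):
    user = models.ForeignKey("User", on_delete=models.CASCADE)
    contest = models.ForeignKey("Contest", on_delete=models.CASCADE)

    class Meta:
        unique_together = ("user", "contest")  # the DB rejects the duplicate insert

# Creation then becomes race-safe:
rank, created = ACMContestRank.objects.get_or_create(user=user, contest=contest)
```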
|
closed
|
2019-01-22T11:37:10Z
|
2019-03-28T03:15:47Z
|
https://github.com/QingdaoU/OnlineJudge/issues/219
|
[] |
michaelzf
| 1
|
yzhao062/pyod
|
data-science
| 545
|
TypeError: SUOD.__init__() got an unexpected keyword argument 'cost_forecast_loc_fit'
|
I was trying to use the SUOD model from the pyod model list. Initially, importing SUOD raised a module-not-found error, so I installed SUOD with the command `!pip install --pre suod`. After that I got the error `SUOD.__init__() got an unexpected keyword argument 'cost_forecast_loc_fit'`.

|
open
|
2024-02-09T06:23:30Z
|
2024-02-09T11:59:55Z
|
https://github.com/yzhao062/pyod/issues/545
|
[] |
jajinkya
| 2
|
zihangdai/xlnet
|
nlp
| 49
|
Could we add __init__.py in the root path?
|
Could we add __init__.py in the root path?
|
closed
|
2019-06-25T01:59:45Z
|
2019-06-28T01:35:44Z
|
https://github.com/zihangdai/xlnet/issues/49
|
[] |
stevezheng23
| 1
|
deepset-ai/haystack
|
machine-learning
| 8,824
|
Reliably check whether a component has been warmed up or not
|
In the current Pipeline, whenever `Pipeline.run()` is called, `warm_up()` runs for every component. We want to avoid an expensive operation being executed multiple times, and we cannot do this from the pipeline side. We should verify that every component with a `warm_up()` performs this check.
For instance, `SentenceTransformersTextEmbedder` is [doing it properly](https://github.com/deepset-ai/haystack/blob/main/haystack/components/embedders/sentence_transformers_text_embedder.py#L178) by checking if the sentence transformers model was already initialized.
The `NamedEntityExtractor` [uses a boolean](https://github.com/deepset-ai/haystack/blob/main/haystack/components/extractors/named_entity_extractor.py#L142) to keep track of this state.
We should review all the `warm_up()` methods and make sure this is the current behaviour.
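A minimal sketch of the guard pattern described above (the component and loader are hypothetical):
```python
class MyComponent:
    def __init__(self, model_name: str):
        self.model_name = model_name
        self._model = None  # sentinel: not warmed up yet

    def warm_up(self):
        if self._model is not None:
            return  # already initialized; repeated calls stay cheap
        self._model = load_model(self.model_name)  # placeholder for the expensive load
```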
|
closed
|
2025-02-06T11:37:13Z
|
2025-02-06T11:41:31Z
|
https://github.com/deepset-ai/haystack/issues/8824
|
[] |
davidsbatista
| 1
|
encode/httpx
|
asyncio
| 3,223
|
HTTP/2 requests are sent as two frames, a DATA frame first and then an empty frame carrying END_STREAM; how can the END_STREAM flag be set on the first frame?
|
The starting point for issues should usually be a discussion...
https://github.com/encode/httpx/discussions
Possible bugs may be raised as a "Potential Issue" discussion, feature requests may be raised as an "Ideas" discussion. We can then determine if the discussion needs to be escalated into an "Issue" or not.
This will help us ensure that the "Issues" list properly reflects ongoing or needed work on the project.
---
- [ ] Initially raised as discussion #...
|
closed
|
2024-06-14T10:14:33Z
|
2024-06-14T10:15:28Z
|
https://github.com/encode/httpx/issues/3223
|
[] |
mayanan-py-go
| 0
|
yezz123/authx
|
pydantic
| 611
|
📝 Fix All typing Problems in codebase
|
Fix All this related issues for typing:
- [x] #610
- [x] #609
- [x] #608
- [x] #607
- [x] #606
- [x] #605
- [x] #604
|
closed
|
2024-06-13T15:29:18Z
|
2024-06-17T16:48:06Z
|
https://github.com/yezz123/authx/issues/611
|
[
"enhancement",
"Extra Large",
"python",
"investigate",
"Typing"
] |
yezz123
| 0
|
aiortc/aiortc
|
asyncio
| 249
|
server example not working in new Chrome / Chromium
|
I just tried the server example on Ubuntu 18.04. It works well in Firefox but doesn't establish a data channel in Chromium 79.
Since the offer SDPs differ, I suppose the error lies there, especially since Chromium only offers mDNS candidates.
Chromium:
```
v=0
o=- 3306756189431829391 2 IN IP4 127.0.0.1
s=-
t=0 0
a=group:BUNDLE 0
a=msid-semantic: WMS
m=application 9 UDP/DTLS/SCTP webrtc-datachannel
c=IN IP4 0.0.0.0
a=candidate:2117698018 1 udp 2113937151 ce171e07-5ea1-4dbd-94d2-45fa06790e34.local 59143 typ host generation 0 network-cost 999
a=candidate:2165939456 1 udp 2113939711 460dae65-26dc-42a2-ad05-df585f445dc1.local 60050 typ host generation 0 network-cost 999
a=ice-ufrag:IXtM
a=ice-pwd:wnpPO6nRnAPjhvQgcz26/xSg
a=ice-options:trickle
a=fingerprint:sha-256 06:BE:13:23:FF:E2:67:D6:B1:54:FF:ED:57:1F:07:00:6A:C1:F4:19:B5:33:16:64:7E:37:CE:31:D1:24:13:C3
a=setup:actpass
a=mid:0
a=sctp-port:5000
a=max-message-size:262144
```
Firefox:
```
v=0
o=mozilla...THIS_IS_SDPARTA-71.0 8217249654257905244 0 IN IP4 0.0.0.0
s=-
t=0 0
a=sendrecv
a=fingerprint:sha-256 66:FC:51:28:81:13:DC:57:02:AD:E5:16:23:EF:23:17:65:06:50:E9:00:1C:B1:20:96:88:36:0F:25:40:8A:11
a=group:BUNDLE 0
a=ice-options:trickle
a=msid-semantic:WMS *
m=application 41982 UDP/DTLS/SCTP webrtc-datachannel
c=IN IP4 192.168.178.157
a=candidate:0 1 UDP 2122252543 192.168.178.157 41982 typ host
a=candidate:1 1 TCP 2105524479 192.168.178.157 9 typ host tcptype active
a=sendrecv
a=end-of-candidates
a=ice-pwd:a3cf72e6b2ce5eaeb6633abb46fc1535
a=ice-ufrag:b8d5d07c
a=mid:0
a=setup:actpass
a=sctp-port:5000
a=max-message-size:1073741823
```
|
closed
|
2020-01-07T18:20:11Z
|
2021-01-29T08:50:00Z
|
https://github.com/aiortc/aiortc/issues/249
|
[
"bug"
] |
hobbeshunter
| 4
|
feder-cr/Jobs_Applier_AI_Agent_AIHawk
|
automation
| 123
|
Getting selector errors
|
Traceback (most recent call last):
File "d:\LinkedIn_AIHawk_automatic_job_application\src\linkedIn_easy_applier.py", line 63, in job_apply
self._fill_application_form(job)
File "d:\LinkedIn_AIHawk_automatic_job_application\src\linkedIn_easy_applier.py", line 129, in _fill_application_form
self.fill_up(job)
File "d:\LinkedIn_AIHawk_automatic_job_application\src\linkedIn_easy_applier.py", line 170, in fill_up
easy_apply_content = self.driver.find_element(By.CLASS_NAME, 'jobs-easy-apply-content')
File "C:\Users\cleopatra\anaconda3\envs\hawk\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 831, in find_element
return self.execute(Command.FIND_ELEMENT, {"using": by, "value": value})["value"]
File "C:\Users\cleopatra\anaconda3\envs\hawk\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 440, in execute
self.error_handler.check_response(response)
File "C:\Users\cleopatra\anaconda3\envs\hawk\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 245, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"css selector","selector":".jobs-easy-apply-content"}
(Session info: chrome=128.0.6613.85)
Stacktrace:
GetHandleVerifier [0x00EF8283+26163]
(No symbol) [0x00E89D34]
(No symbol) [0x00D824C3]
(No symbol) [0x00DC7453]
(No symbol) [0x00DC762B]
(No symbol) [0x00E06B62]
(No symbol) [0x00DEAD04]
(No symbol) [0x00E04661]
(No symbol) [0x00DEAA56]
(No symbol) [0x00DBBE89]
(No symbol) [0x00DBC8CD]
GetHandleVerifier [0x011CCF73+2994979]
GetHandleVerifier [0x012217E9+3341209]
GetHandleVerifier [0x00F87B5F+614159]
GetHandleVerifier [0x00F8F1EC+644508]
(No symbol) [0x00E9286D]
(No symbol) [0x00E8F768]
(No symbol) [0x00E8F905]
(No symbol) [0x00E81C86]
BaseThreadInitThunk [0x76737BA9+25]
RtlInitializeExceptionChain [0x77A2C11B+107]
RtlClearBits [0x77A2C09F+191]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "d:\LinkedIn_AIHawk_automatic_job_application\src\linkedIn_job_manager.py", line 138, in apply_jobs
self.easy_applier_component.job_apply(job)
File "d:\LinkedIn_AIHawk_automatic_job_application\src\linkedIn_easy_applier.py", line 67, in job_apply
raise Exception(f"Failed to apply to job! Original exception: \nTraceback:\n{tb_str}")
Exception: Failed to apply to job! Original exception:
Traceback:
Traceback (most recent call last):
File "d:\LinkedIn_AIHawk_automatic_job_application\src\linkedIn_easy_applier.py", line 63, in job_apply
self._fill_application_form(job)
File "d:\LinkedIn_AIHawk_automatic_job_application\src\linkedIn_easy_applier.py", line 129, in _fill_application_form
self.fill_up(job)
File "d:\LinkedIn_AIHawk_automatic_job_application\src\linkedIn_easy_applier.py", line 170, in fill_up
easy_apply_content = self.driver.find_element(By.CLASS_NAME, 'jobs-easy-apply-content')
File "C:\Users\cleopatra\anaconda3\envs\hawk\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 831, in find_element
|
closed
|
2024-08-29T01:36:01Z
|
2024-08-29T17:23:31Z
|
https://github.com/feder-cr/Jobs_Applier_AI_Agent_AIHawk/issues/123
|
[] |
safakhan413
| 1
|
deezer/spleeter
|
tensorflow
| 488
|
[Discussion] Standalone Mac app
|
Hi folks,
I've succeeded in building a standalone Mac application for separation (5 stems).
There's no need to interact with the Terminal (command line). All you have to do is download my "MySpleeter" app from the link below,
because the app bundles Python, spleeter, ffmpeg, TensorFlow and their dependencies.
(changed at 2020/09/04)
https://github.com/kyab/MySpleeter/releases/download/20200904_2/MySpleeter20200904.dmg
Could someone try this and let me know about any problems?
<img width="580" alt="スクリーンショット 2020-09-02 21 57 23" src="https://user-images.githubusercontent.com/197653/91985028-4e579b00-ed67-11ea-8aed-aeb0fc397159.png">
note:
- It's quite a big app because it includes Python itself
- The first separation may take a few minutes to finish
|
open
|
2020-09-02T12:58:32Z
|
2022-03-13T08:40:18Z
|
https://github.com/deezer/spleeter/issues/488
|
[
"question"
] |
kyab
| 22
|
pytorch/pytorch
|
deep-learning
| 149,119
|
The device_id parameter of distributed.init_process_group will cause each process to occupy video memory on the first accessible GPU
|
### 🐛 Describe the bug
The device_id parameter of distributed.init_process_group will cause each process to occupy video memory on the first accessible GPU.
For example, I set the environment variable "CUDA_VISIBLE_DEVICES": "0,1". After init_process_group executes, rank 1 also occupies some video memory on GPU 0, which is obviously not what I expected.
Previously, I used torch.cuda.set_device(local_rank) to set the GPU and never used the device_id parameter. But after I updated PyTorch, it gave me this warning: "[rank0]:[W313 18:22:43.453826616 ProcessGroupNCCL.cpp:4561] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device, or call init_process_group() with a device_id."
The code is as follows:
```python
import os
import torch
import torch.multiprocessing as mp
from torch import distributed
def proc_main(local_rank):
torch.cuda.set_device(local_rank)
backend = 'nccl' if distributed.is_nccl_available() else 'gloo'
print(f'backend is {backend}')
dev = torch.device('cuda', local_rank)
distributed.init_process_group(
backend=backend,
init_method='env://',
world_size=torch.cuda.device_count(),
rank=local_rank,
device_id=dev
)
distributed.barrier()
distributed.destroy_process_group()
def main():
if distributed.is_available():
os.environ['MASTER_ADDR'] = '127.0.0.1'
os.environ['MASTER_PORT'] = '9987'
mp.spawn(proc_main, nprocs=torch.cuda.device_count())
if __name__ == '__main__':
main()
```
### Versions
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.12.9 | packaged by Anaconda, Inc. | (main, Feb 6 2025, 18:56:27) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-134-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090 Ti
GPU 1: NVIDIA GeForce RTX 3090 Ti
GPU 2: NVIDIA GeForce RTX 3090 Ti
GPU 3: NVIDIA GeForce RTX 3090 Ti
Nvidia driver version: 550.120
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6244 CPU @ 3.60GHz
CPU family: 6
Model: 85
Thread(s) per core: 1
Core(s) per socket: 8
Socket(s): 2
Stepping: 7
CPU max MHz: 4400.0000
CPU min MHz: 1200.0000
BogoMIPS: 7200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 16 MiB (16 instances)
L3 cache: 49.5 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT disabled
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] torchaudio==2.6.0
[pip3] torchvision==0.21.0
[pip3] triton==3.2.0
[conda] numpy 2.2.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] torchaudio 2.6.0 pypi_0 pypi
[conda] torchvision 0.21.0 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
|
open
|
2025-03-13T11:29:00Z
|
2025-03-13T18:51:35Z
|
https://github.com/pytorch/pytorch/issues/149119
|
[
"oncall: distributed",
"triaged",
"bug"
] |
Staten-Wang
| 1
|
yezyilomo/django-restql
|
graphql
| 254
|
Problem when creating/updating ManyToOne nested field if a serializer is using fields = '__all__'
|
When trying to create a nested record, this error is shown: `Field name `_` is not valid for model `ModelName`.`
|
closed
|
2021-08-03T19:18:53Z
|
2021-08-05T20:42:48Z
|
https://github.com/yezyilomo/django-restql/issues/254
|
[
"bug"
] |
farasu
| 12
|
sqlalchemy/sqlalchemy
|
sqlalchemy
| 10,355
|
Insert performance regression of ORM models with multiple Enums and async engine
|
### Describe the bug
Adding a second Enum column to an ORM model leads to an unpredictable performance regression during the insert commit when running with an async engine and NullPool. This issue is not reproduced with a synchronous engine.
### SQLAlchemy Version in Use
2.0.20
### DBAPI (i.e. the database driver)
asyncpg 0.28.0
greenlet 2.0.2
### Database Vendor and Major Version
PostgreSQL 15
### Python Version
3.11.4
### Operating system
OSX
### To Reproduce
```python
import asyncio
import enum
import time
from sqlalchemy import JSON, NullPool
from sqlalchemy.ext.asyncio import AsyncSession, create_async_engine
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column
class EnumOne(enum.Enum):
VALUE = "value"
class EnumTwo(enum.Enum):
VALUE = "value"
class EnumThree(enum.Enum):
VALUE = "value"
class Base(DeclarativeBase):
...
class ModelWithOneEnum(Base):
__tablename__ = "model_with_one_enum"
id: Mapped[int] = mapped_column(primary_key=True)
enum_one: Mapped[EnumOne]
# To demonstrate that it's not about additional column, can be removed
third_column: Mapped[dict] = mapped_column(JSON)
class ModelWithTwoEnums(Base):
__tablename__ = "model_with_two_enums"
id: Mapped[int] = mapped_column(primary_key=True)
enum_one: Mapped[EnumOne]
enum_two: Mapped[EnumTwo]
class ModelWithThreeEnums(Base):
__tablename__ = "model_with_three_enums"
id: Mapped[int] = mapped_column(primary_key=True)
enum_one: Mapped[EnumOne]
enum_two: Mapped[EnumTwo]
enum_three: Mapped[EnumThree]
async def main() -> None:
connection_string = "postgresql+asyncpg://postgres:postgres@localhost:5432/postgres"
setup_engine = create_async_engine(connection_string)
null_pool_engine = create_async_engine(connection_string, poolclass=NullPool)
with_pool_size_engine = create_async_engine(connection_string, pool_size=10)
async with setup_engine.begin() as conn:
await conn.run_sync(Base.metadata.drop_all)
await conn.run_sync(Base.metadata.create_all)
async with AsyncSession(null_pool_engine) as session:
session.add(ModelWithOneEnum(enum_one=EnumOne.VALUE, third_column=[1] * 1000))
start_time = time.time()
await session.commit()
print(f"NullPool + ModelWithOneEnum: {time.time() - start_time}")
async with AsyncSession(null_pool_engine) as session:
session.add(ModelWithTwoEnums(enum_one=EnumOne.VALUE, enum_two=EnumTwo.VALUE))
start_time = time.time()
await session.commit()
print(f"NullPool + ModelWithTwoEnums: {time.time() - start_time}")
async with AsyncSession(null_pool_engine) as session:
session.add(
ModelWithThreeEnums(
enum_one=EnumOne.VALUE,
enum_two=EnumTwo.VALUE,
enum_three=EnumThree.VALUE,
)
)
start_time = time.time()
await session.commit()
print(f"NullPool + ModelWithThreeEnums: {time.time() - start_time}")
async with AsyncSession(with_pool_size_engine) as session:
session.add(ModelWithOneEnum(enum_one=EnumOne.VALUE, third_column=[1] * 1000))
start_time = time.time()
await session.commit()
print(f"WithPool + ModelWithOneEnum: {time.time() - start_time}")
async with AsyncSession(with_pool_size_engine) as session:
session.add(ModelWithTwoEnums(enum_one=EnumOne.VALUE, enum_two=EnumTwo.VALUE))
start_time = time.time()
await session.commit()
print(f"WithPool + ModelWithTwoEnums: {time.time() - start_time}")
async with AsyncSession(with_pool_size_engine) as session:
session.add(
ModelWithThreeEnums(
enum_one=EnumOne.VALUE,
enum_two=EnumTwo.VALUE,
enum_three=EnumThree.VALUE,
)
)
start_time = time.time()
await session.commit()
print(f"WithPool + ModelWithThreeEnums: {time.time() - start_time}")
if __name__ == "__main__":
asyncio.run(main())
```
### Error
Timings on my machine:
```
NullPool + ModelWithOneEnum: 0.1395738124847412
NullPool + ModelWithTwoEnums: 0.6456940174102783
NullPool + ModelWithThreeEnums: 0.6018879413604736
WithPool + ModelWithOneEnum: 0.1197047233581543
WithPool + ModelWithTwoEnums: 0.04678606986999512
WithPool + ModelWithThreeEnums: 0.04847002029418945
```
### Additional context
1. When using a synchronous engine, all inserts take roughly the same amount of time (under 0.02s for me).
2. Changing the order of operations does not affect the result
3. The third Enum in the model does not add any overhead vs the two-enum model
Off-topic: the reason we have to use NullPool is that we couldn't find another way to make `sqlalchemy[asyncio]` work with `pytest-asyncio` in auto mode (it raises "attached to a different loop" otherwise).
|
closed
|
2023-09-17T01:29:30Z
|
2023-09-17T01:58:50Z
|
https://github.com/sqlalchemy/sqlalchemy/issues/10355
|
[
"performance",
"external driver issues"
] |
cepbuch
| 1
|
graphql-python/graphene-sqlalchemy
|
graphql
| 242
|
Many to Many relationships! How to Mutation?
|
i have models
```python
user_role = db.Table(
"user_role",
db.metadata,
db.Column("user_id", db.Integer, db.ForeignKey("user.id")),
db.Column("role_id", db.Integer, db.ForeignKey("role.id")),
db.UniqueConstraint("user_id", "role_id"),
)
class User(db.Model):
__tablename__ = "user"
id = db.Column(db.Integer, primary_key=True)
name = db.Column(db.String(20))
password = db.Column(db.String(100), nullable=False)
roles = db.relationship("Role", secondary=user_role, back_populates="users")
class Role(db.Model):
__tablename__ = "role"
id = db.Column(db.Integer, primary_key=True)
name = db.Column(db.String(255))
users = db.relationship("User", secondary=user_role, back_populates="roles")
```
How should user.roles be passed as input in GraphQL, and how should it be handled in mutate?
Can you give an example? (One sketch follows below.)
Thanks very much!
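A sketch of one common approach: accept the role ids as a list argument and resolve them inside `mutate` (assumes the models above and Flask-SQLAlchemy's `db.session`):
```python
import graphene

class CreateUser(graphene.Mutation):
    class Arguments:
        name = graphene.String(required=True)
        password = graphene.String(required=True)
        role_ids = graphene.List(graphene.Int)  # pass role ids as a list

    ok = graphene.Boolean()

    def mutate(self, info, name, password, role_ids=None):
        user = User(name=name, password=password)
        if role_ids:
            user.roles = Role.query.filter(Role.id.in_(role_ids)).all()
        db.session.add(user)
        db.session.commit()
        return CreateUser(ok=True)
```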
|
closed
|
2019-08-06T07:13:37Z
|
2023-02-25T00:48:27Z
|
https://github.com/graphql-python/graphene-sqlalchemy/issues/242
|
[] |
goodking-bq
| 4
|
aeon-toolkit/aeon
|
scikit-learn
| 2,650
|
[ENH] Improve `create_multi_comparison_matrix` parameters and saving
|
### Describe the feature or idea you want to propose
The MCM figure creator has a lot of parameters, some of which could be a lot tidier IMO; e.g. we probably don't need 5 (or more) parameters for output.
```
output_dir="./",
pdf_savename=None,
png_savename=None,
csv_savename=None,
tex_savename=None,
```
Same with ones like `order_win_tie_loss` and `order_better` which feel like they could be the same.
Saving the returned figure as a PDF also just returns a blank file. Not sure if plotting the returned figure is any better.
```
mcm = create_multi_comparison_matrix(df)
mcm.savefig(
f"{save_path}/{statistic_name}/figures/"
f"{eval_name}_{statistic_name.lower()}_mcm.pdf",
bbox_inches="tight",
)
```
### Describe your proposed solution
Improve saving for the MCM figure and condense parameters, or at least discuss doing so.
### Describe alternatives you've considered, if relevant
_No response_
### Additional context
I was trying to use it in my evaluation package and it was a little unintuitive to get working IMO 🙂. https://github.com/time-series-machine-learning/tsml-eval/blob/main/tsml_eval/evaluation/multiple_estimator_evaluation.py#L1330
|
open
|
2025-03-18T19:33:37Z
|
2025-03-20T10:32:55Z
|
https://github.com/aeon-toolkit/aeon/issues/2650
|
[
"enhancement",
"visualisation"
] |
MatthewMiddlehurst
| 2
|
huggingface/datasets
|
numpy
| 7,018
|
`load_dataset` fails to load dataset saved by `save_to_disk`
|
### Describe the bug
This code fails to load the dataset it just saved:
```python
from datasets import load_dataset
from transformers import AutoTokenizer
MODEL = "google-bert/bert-base-cased"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
dataset = load_dataset("yelp_review_full")
def tokenize_function(examples):
return tokenizer(examples["text"], padding="max_length", truncation=True)
tokenized_datasets = dataset.map(tokenize_function, batched=True)
tokenized_datasets.save_to_disk("dataset")
tokenized_datasets = load_dataset("dataset/") # raises
```
It raises `ValueError: Couldn't infer the same data file format for all splits. Got {NamedSplit('train'): ('arrow', {}), NamedSplit('test'): ('json', {})}`.
I believe this bug is caused by the [logic that tries to infer dataset format](https://github.com/huggingface/datasets/blob/9af8dd3de7626183a9a9ec8973cebc672d690400/src/datasets/load.py#L556). It counts the most common file extension. However, a small dataset can fit in a single `.arrow` file and have two JSON metadata files, causing the format to be inferred as JSON:
```shell
$ ls -l dataset/test
-rw-r--r-- 1 sliedes sliedes 191498784 Jul 1 13:55 data-00000-of-00001.arrow
-rw-r--r-- 1 sliedes sliedes 1730 Jul 1 13:55 dataset_info.json
-rw-r--r-- 1 sliedes sliedes 249 Jul 1 13:55 state.json
```
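For reference, directories written by `save_to_disk` are meant to be read back with `load_from_disk`, which sidesteps the format inference entirely:
```python
from datasets import load_from_disk

tokenized_datasets = load_from_disk("dataset")  # reads the arrow files + state.json
```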
### Steps to reproduce the bug
Execute the code above.
### Expected behavior
The dataset is loaded successfully.
### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-6.9.3-arch1-1-x86_64-with-glibc2.39
- Python version: 3.12.4
- `huggingface_hub` version: 0.23.4
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0
|
open
|
2024-07-01T12:19:19Z
|
2024-12-03T11:26:17Z
|
https://github.com/huggingface/datasets/issues/7018
|
[] |
sliedes
| 4
|
cvat-ai/cvat
|
computer-vision
| 9,125
|
Images are not uploaded when trying to create a new task
|
I am using CVAT version 2.5.0 locally installed on an ubuntu 22.04 operating system. Everything was working fine until we needed to install the server operating system again. I backed up CVAT data by following the [backup guide](https://docs.cvat.ai/docs/administration/advanced/backup_guide/).
After installing CVAT again and restoring the data using the same backup guide above, my data is back and everything works, except that I can no longer successfully create a new task.
Here is what I am doing and the result that I get:
I create the task by using the "create a new task" as shown in the following image:

Then I enter the name of the task and select the image to upload.

And then click 'submit & open'.
The image appears to upload, but the job is not created and the images are not visible within the task:


Does anyone have an idea what the problem might be?
|
open
|
2025-02-20T00:21:38Z
|
2025-02-21T13:10:01Z
|
https://github.com/cvat-ai/cvat/issues/9125
|
[
"need info"
] |
fshaker
| 1
|
blacklanternsecurity/bbot
|
automation
| 1,594
|
Bug with RAW_DNS_RECORD discovery_path
|
```
{"type": "RAW_DNS_RECORD", "id": "RAW_DNS_RECORD:78f971682ee9ef651f9740456466e53ed8da3ba1", "data": {"host": "ind.dell.com", "type": "MX", "answer": "10 dmx1.bfi0.com."}, "host": "ind.dell.com", "resolved_hosts": ["208.70.143.18"], "dns_children": {"A": ["208.70.143.18"], "MX": ["dmx1.bfi0.com"], "TXT": ["dmx1.bfi0.com", "spf2.0", "bfi0.com"]}, "scan": "SCAN:5ba8f0947209a7f8e2362774209cefc59b650ea9", "timestamp": 1721859008.039669, "parent": "RAW_DNS_RECORD:b177e9c1d7c9f90025a25ab10aa9dc8581c7ddbb", "tags": ["in-scope", "mx-record"], "discovery_context": "MX lookup on ind.dell.com produced RAW_DNS_RECORD", "discovery_path": ["Scan 2024-07-24_06-52-07 seeded with DNS_NAME: dell.com", "rapiddns searched rapiddns API for \"dell.com\" and found DNS_NAME: ind.dell.com", "TXT lookup on ind.dell.com produced RAW_DNS_RECORD", "TXT lookup on ind.dell.com produced RAW_DNS_RECORD", "TXT lookup on ind.dell.com produced RAW_DNS_RECORD", "TXT lookup on ind.dell.com produced RAW_DNS_RECORD", "TXT lookup on ind.dell.com produced RAW_DNS_RECORD", "TXT lookup on ind.dell.com produced RAW_DNS_RECORD", "MX lookup on ind.dell.com produced RAW_DNS_RECORD"]}
```
Discovered by @amiremami
|
closed
|
2024-07-28T16:37:52Z
|
2024-08-02T14:47:26Z
|
https://github.com/blacklanternsecurity/bbot/issues/1594
|
[
"bug"
] |
TheTechromancer
| 3
|
ray-project/ray
|
pytorch
| 51,018
|
[core] Performance evaluation on GCS key-value storage
|
### Description
In the current GCS key-value storage implementation, we're storing the value (a protobuf message) as a serialized string.
Eg:
protobuf message serialization
https://github.com/ray-project/ray/blob/a04cb06bb1a2c09e93b882b611492d62b8d1837a/src/ray/gcs/gcs_server/gcs_table_storage.cc#L40
flat hash map
https://github.com/ray-project/ray/blob/a04cb06bb1a2c09e93b882b611492d62b8d1837a/src/ray/gcs/store_client/in_memory_store_client.h#L81
One improvement we could make is to refactor the hash map to:
```
struct ArenaAllocatedProto {
  google::protobuf::Arena arena;       // owns the message's backing memory
  google::protobuf::Message* message;  // allocated on `arena`, freed with it
};

absl::flat_hash_map<std::string, std::shared_ptr<ArenaAllocatedProto>> table;
```
The benefits are:
- When passing the value to the callback, we don't need to copy the whole string (which could be arbitrarily long); by comparison, an atomic reference-count increment has a small, bounded overhead
- No (de)serialization is needed for every Put and Get operation
### Use case
Improve GCS performance
|
open
|
2025-03-02T02:12:32Z
|
2025-03-03T19:23:36Z
|
https://github.com/ray-project/ray/issues/51018
|
[
"enhancement",
"P2",
"core"
] |
dentiny
| 1
|
pydantic/pydantic-ai
|
pydantic
| 1,070
|
Documentation on Instrumentation?
|
### Description
I see InstrumentedModel in the codebase, but no documentation of it.
Is the feature not yet prepared, and/or can we expect documentation?
### References
_No response_
|
closed
|
2025-03-06T11:01:31Z
|
2025-03-20T11:40:14Z
|
https://github.com/pydantic/pydantic-ai/issues/1070
|
[
"documentation"
] |
pedroallenrevez
| 6
|
oegedijk/explainerdashboard
|
plotly
| 152
|
Encountering an AttributeError: 'NoneType' object has no attribute 'dim'
|
Hi all,
I'm trying to run the dashboard on a PyTorch NN classification model, using a function similar to one I found on GitHub for this case:
```python
import numpy as np
import pandas as pd
import torch.nn as nn
import torch.optim as optim
from skorch import NeuralNetBinaryClassifier

# X_train, y_train (ndarrays) and X (a DataFrame) are defined elsewhere
def get_skorch_classifier():
    X_train_m = X_train.astype(np.float32)
    y_train_m = y_train.astype(np.int64)
    X_train_df = pd.DataFrame(X_train_m, columns=X.columns)

    class MyModule(nn.Module):
        def __init__(self, input=298, l1=60, l2=60, l3=60, l4=60, dropout=0.1):
            super(MyModule, self).__init__()
            self.layer_1 = nn.Linear(input, l1)
            self.layer_2 = nn.Linear(l1, l2)
            self.layer_3 = nn.Linear(l2, l3)
            self.layer_4 = nn.Linear(l3, l4)
            self.layer_out = nn.Linear(l4, 1)
            self.relu = nn.ReLU()
            self.dropout = nn.Dropout(dropout)
            self.batchnorm1 = nn.BatchNorm1d(l1, momentum=0.2)
            self.batchnorm2 = nn.BatchNorm1d(l2, momentum=0.2)
            self.batchnorm3 = nn.BatchNorm1d(l3, momentum=0.2)
            self.batchnorm4 = nn.BatchNorm1d(l4, momentum=0.2)
            self.sigmoid = nn.Sigmoid()

        def forward(self, inputs):
            x = self.relu(self.layer_1(inputs))
            x = self.batchnorm1(x)
            x = self.dropout(x)
            x = self.relu(self.layer_2(x))
            x = self.batchnorm2(x)
            x = self.dropout(x)
            x = self.relu(self.layer_3(x))
            x = self.batchnorm3(x)
            x = self.dropout(x)
            x = self.relu(self.layer_4(x))
            x = self.batchnorm4(x)
            #x = self.dropout(x)
            x = self.layer_out(x)
            #x = self.sigmoid(x)
            # NOTE: forward() ends here without `return x`, so it returns None,
            # which is what skorch's infer() later fails on

    model = NeuralNetBinaryClassifier(MyModule, max_epochs=20, lr=0.01, optimizer=optim.Adam)
    model.fit(X_train_df.values, y_train_m)
    return model, X_train_m, X_train_df, y_train_m

model, Xm, Xm_df, ym = get_skorch_classifier()
```
Here `X_train` and `y_train` are ndarrays, and `X` is a DataFrame.
Running this code gives the following error:
> AttributeError Traceback (most recent call last)
> <ipython-input-48-73280dfe11ae> in <module>()
> 52 return model,X_train_m, X_train_df, y_train_m
> 53
> ---> 54 model,Xm, Xm_df, ym = get_skorch_classifier()
>
> 12 frames
> /usr/local/lib/python3.7/dist-packages/skorch/classifier.py in infer(self, x, **fit_params)
> 300 y_infer, *rest = y_infer
> 301
> --> 302 if (y_infer.dim() > 2) or ((y_infer.dim() == 2) and (y_infer.shape[1] != 1)):
> 303 raise ValueError(
> 304 "Expected module output to have shape (n,) or "
>
> AttributeError: 'NoneType' object has no attribute 'dim'
Any ideas why this is happening and what to do?
|
closed
|
2021-10-28T07:54:16Z
|
2021-10-30T06:42:52Z
|
https://github.com/oegedijk/explainerdashboard/issues/152
|
[] |
Waeara
| 0
|
gradio-app/gradio
|
machine-learning
| 10,223
|
Request for Adding error handling for gradio_client's TypeError: can only concatenate str (not "bool") to str
|
gradio_client/client.py has an issue at line 1181. It appears to be the same issue described in [#4884](https://github.com/gradio-app/gradio/issues/4884), which was fixed in [PR #4886](https://github.com/gradio-app/gradio/pull/4886).
https://github.com/gradio-app/gradio/blob/501adefd0c3d5769055ef2156c85e586eb60bf84/client/python/gradio_client/client.py#L1179-L1183
Please add error handling for the case where `self.client.config["dependencies"][i].get("api_name")` returns `False`.
The error looks like this:

|
closed
|
2024-12-18T02:28:42Z
|
2024-12-18T17:20:33Z
|
https://github.com/gradio-app/gradio/issues/10223
|
[
"needs repro"
] |
yokomizo-tech
| 3
|
rougier/numpy-100
|
numpy
| 197
|
Alternative solution to q4
|
Use `Z.nbytes` to find the memory size directly.
```python
import numpy as np

Z = np.zeros((10, 10))  # the array from the reference solution to exercise 4
print("%d bytes" % Z.nbytes)  # same value as Z.size * Z.itemsize
```
|
closed
|
2023-03-02T06:45:11Z
|
2023-10-11T04:12:40Z
|
https://github.com/rougier/numpy-100/issues/197
|
[] |
jeremy-feng
| 2
|
dpgaspar/Flask-AppBuilder
|
rest-api
| 1,717
|
Can I modify add form field to have select list from data got by external API?
|
I want to extend one field of the add form so its value can be chosen from a select list populated by an external API. Let me explain the question.
I've got model like:
```
class ContactGroup(Model):
id = Column(Integer, primary_key=True)
name = Column(String(50), unique=True, nullable=False)
my_field_id = Column(Integer, nullable=False)
def __repr__(self):
return self.name
```
When I add a new entity through the default form created by AppBuilder, I have to specify `my_field_id` by hand. So my question is: can I replace the plain integer input for `my_field_id` with a select list filled with data returned by the external API?
The API provides data as JSON, e.g.:
```
{
    "data": [
        {
            "id": 1,
            "name": "My field"
        },
        ...
    ]
}
```
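One possible approach (a sketch, not tested against a specific Flask-AppBuilder version; the API URL and the helper name are made up for illustration) is to override the generated field with a WTForms `SelectField` via `add_form_extra_fields`:
```python
import requests
from flask_appbuilder import ModelView
from flask_appbuilder.models.sqla.interface import SQLAInterface
from wtforms import SelectField

def external_field_choices():
    """Fetch (value, label) pairs for the select list from the external API."""
    payload = requests.get("https://example.com/api/my-fields").json()  # hypothetical URL
    return [(item["id"], item["name"]) for item in payload["data"]]

class ContactGroupView(ModelView):
    datamodel = SQLAInterface(ContactGroup)
    add_form_extra_fields = {
        # coerce=int keeps the submitted value an integer, matching the column
        "my_field_id": SelectField("My field", coerce=int, choices=external_field_choices),
    }
```
Note that passing a callable as `choices` requires WTForms 3.0+; on older versions you would have to set the choices on the form at request time instead.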
Thanks!
|
closed
|
2021-10-15T10:00:57Z
|
2022-02-07T11:56:58Z
|
https://github.com/dpgaspar/Flask-AppBuilder/issues/1717
|
[] |
Shhad
| 4
|
JaidedAI/EasyOCR
|
pytorch
| 776
|
Is there any example of deploying the model with ONNX?
|
closed
|
2022-07-05T16:07:03Z
|
2022-07-11T02:22:24Z
|
https://github.com/JaidedAI/EasyOCR/issues/776
|
[] |
yuanyan3060
| 1
|
ray-project/ray
|
python
| 51,518
|
[Autoscaler] Issue when using ray start --head with --autoscaling-config
|
### What happened + What you expected to happen
## Describe
When running `ray start --head` with `--autoscaling-config`, the head node is deployed, but a head node creation request for the `head_node_type` defined in the autoscaling config stays pending in the autoscaler, even though the head node is already running.
In this case the `head_node_type` entry should not need to launch anything, yet it is a required configuration key. Is this behavior intentional, or is there something I might be overlooking?
### autoscaling-config
```yaml
...
# Specify the node type of the head node (as configured above).
head_node_type: head.default
```
## Expected Behavior
- Run the head node using the following command:
  - `ray start --head --dashboard-agent-listen-port=52365 --memory=0 --num-gpus=1 --num-cpus=0 --autoscaling-config=/tmp/dev/test.yaml`
- The head node should be deployed, and since worker.default is set with min_workers: 10, 10 worker.default nodes should be pending in the autoscaler.
- `ray list nodes`
  - Total: 1 (head node)
  - Pending:
    - worker.default, 10
## Actual Behavior

- `ray list nodes`
  - Total: 1 (head node)
  - Pending:
    - head.default, 1
    - worker.default, 10
### Versions / Dependencies
Ray-2.43.0
### Reproduction script
## /tmp/dev/test.yaml
```yaml
cluster_name: test

provider:
    type: local
    head_ip: localhost
    coordinator_address: "127.0.0.1:8000"

# How Ray will authenticate with newly launched nodes.
auth:
    ssh_user: root
    ssh_private_key: /root/.ssh/id_rsa

worker_liveness_check: False
worker_rpc_drain: True
disable_node_updaters: True
disable_launch_config_check: True
use_internal_ips: True

max_workers: 100
# The default behavior for manually managed clusters is
# min_workers == max_workers == len(worker_ips),
# meaning that Ray is started on all available nodes of the cluster.
# For automatically managed clusters, max_workers is required and min_workers defaults to 0.

# The autoscaler will scale up the cluster faster with higher upscaling speed.
# E.g., if the task requires adding more nodes then autoscaler will gradually
# scale up the cluster in chunks of upscaling_speed*currently_running_nodes.
# This number should be > 0.
upscaling_speed: 1.0

idle_timeout_minutes: 5

# Files or directories to copy to the head and worker nodes. The format is a
# dictionary from REMOTE_PATH: LOCAL_PATH, e.g.
file_mounts: {
    # "/path1/on/remote/machine": "/path1/on/local/machine",
    # "/path2/on/remote/machine": "/path2/on/local/machine",
}

# Files or directories to copy from the head node to the worker nodes. The format is a
# list of paths. The same path on the head node will be copied to the worker node.
# This behavior is a subset of the file_mounts behavior. In the vast majority of cases
# you should just use file_mounts. Only use this if you know what you're doing!
cluster_synced_files: []

# Whether changes to directories in file_mounts or cluster_synced_files in the head node
# should sync to the worker node continuously
file_mounts_sync_continuously: False

# Patterns for files to exclude when running rsync up or rsync down
rsync_exclude:
    - "**/.git"
    - "**/.git/**"

# Pattern files to use for filtering out files when running rsync up or rsync down. The file is searched for
# in the source directory and recursively through all subdirectories. For example, if .gitignore is provided
# as a value, the behavior will match git's behavior for finding and using .gitignore files.
rsync_filter:
    - ".gitignore"

# List of commands that will be run before `setup_commands`. If docker is
# enabled, these commands will run outside the container and before docker
# is setup.
initialization_commands: []

# List of shell commands to run to set up each node.
setup_commands: []
# Note: if you're developing Ray, you probably want to create a Docker image that
# has your Ray repo pre-cloned. Then, you can replace the pip installs
# below with a git checkout <your_sha> (and possibly a recompile).
# To run the nightly version of ray (as opposed to the latest), either use a rayproject docker image
# that has the "nightly" (e.g. "rayproject/ray-ml:nightly-gpu") or uncomment the following line:
# - pip install -U "ray[default] @ https://s3-us-west-2.amazonaws.com/ray-wheels/latest/ray-2.0.0.dev0-cp37-cp37m-manylinux2014_x86_64.whl"

# Custom commands that will be run on the head node after common setup.
head_setup_commands: []

# Custom commands that will be run on worker nodes after common setup.
worker_setup_commands: []

# Command to start ray on the head node. You don't need to change this.
head_start_ray_commands: []

# Command to start ray on worker nodes. You don't need to change this.
worker_start_ray_commands: []

# Tell the autoscaler the allowed node types and the resources they provide.
# The key is the name of the node type, which is just for debugging purposes.
# The node config specifies the launch config and physical instance type.
available_node_types:
    worker:
        min_workers: 10
        max_workers: 10
        resources: {"memory": 16384, "num_cpus": 1}
    worker2:
        min_workers: 0
        max_workers: 10
        node_config:
            InstanceType: t1.large
        resources: {"num_cpus": 100}
    head.default:
        min_workers: 0
        max_workers: 0
        # You can override the resources here. For GPU, currently only Nvidia GPU is supported. If no ESXi host can
        # fulfill the requirement, the Ray node creation will fail. The number of created nodes may not meet the desired
        # minimum number. The vSphere node provider will not distinguish the GPU type. It will just count the quantity:
        resources: {"memory": 0, "num_cpus": 0, "num_gpus": 1}

# Specify the node type of the head node (as configured above).
head_node_type: head.default
```
## Run
```sh
ray start --head --dashboard-agent-listen-port=52365 --memory=0 --num-gpus=1 --num-cpus=0 --autoscaling-config=/tmp/dev/test.yaml
```
### Issue Severity
High: It blocks me from completing my task.
|
open
|
2025-03-19T09:29:43Z
|
2025-03-20T00:33:48Z
|
https://github.com/ray-project/ray/issues/51518
|
[
"bug",
"P2",
"core"
] |
nadongjun
| 2
|
sqlalchemy/sqlalchemy
|
sqlalchemy
| 9,828
|
add reflect kw for DeferredReflection.prepare
|
The API here seems dated, as it accepts only an Engine, but we can at least add a reflection kw; see https://github.com/sqlalchemy/sqlalchemy/issues/5499#issuecomment-1560262233
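A sketch of the requested usage (the kwarg pass-through shown here is exactly what this issue proposes, so treat the signature as illustrative):
```python
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import DeferredReflection
from sqlalchemy.orm import DeclarativeBase

class Base(DeclarativeBase):
    pass

class Reflected(DeferredReflection, Base):
    __abstract__ = True

engine = create_engine("postgresql://scott:tiger@localhost/test")

# proposed: forward MetaData.reflect() kwargs such as `views` or `only`
Reflected.prepare(engine, views=True, only=["user", "address"])
```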
|
closed
|
2023-05-24T02:01:03Z
|
2023-05-24T13:17:43Z
|
https://github.com/sqlalchemy/sqlalchemy/issues/9828
|
[
"orm",
"sqlalchemy.ext",
"declarative",
"reflection",
"use case"
] |
zzzeek
| 1
|
PokeAPI/pokeapi
|
api
| 492
|
Missing moveset information
|
The database doesn't provide information about Pokémon egg moves and event moves, such as those from Island Scan or global distributions.
|
closed
|
2020-05-12T17:30:37Z
|
2020-08-19T10:53:35Z
|
https://github.com/PokeAPI/pokeapi/issues/492
|
[] |
ghost
| 3
|
seleniumbase/SeleniumBase
|
web-scraping
| 3,343
|
Why does pressing Enter create a new line?
|
```python
for i in "seleniumbase2025\n":
    sb.sleep(0.5)
    sb.cdp.press_keys('[name="q"]', i)
```
I use this loop to press Enter after typing the keyword, but it creates a new line instead. How can I press Enter?
|
closed
|
2024-12-15T23:00:52Z
|
2024-12-16T02:19:44Z
|
https://github.com/seleniumbase/SeleniumBase/issues/3343
|
[
"can't reproduce",
"UC Mode / CDP Mode"
] |
pythondeveloperz
| 1
|
sanic-org/sanic
|
asyncio
| 2,649
|
There is a problem with the Extend.register method
|
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Describe the bug
I have asked for help on forums before, and I feel that there is a bug in the `Extend.register` method https://community.sanicframework.org/t/use-dependency-method-in-custom-extensions-not-work/1107/7
### Code snippet
_No response_
### Expected Behavior
The `Extend.register` method can achieve the same effect as `app.extend(extensions=[Plugin])`
### How do you run Sanic?
As a script (`app.run` or `Sanic.serve`)
### Operating System
linux
### Sanic Version
v22.9.1
### Additional context
_No response_
|
open
|
2023-01-05T09:26:20Z
|
2023-01-05T09:26:20Z
|
https://github.com/sanic-org/sanic/issues/2649
|
[
"bug"
] |
tqq1994516
| 0
|
keras-team/autokeras
|
tensorflow
| 1,856
|
Bug:
|
### On binary classifiers, AutoKeras returns only one class's probability from the `predict` call.
```
# model: is trained autokeras model
prob = model.predict(vector)
print(prob)
```
Data used by the code: any binary classification dataset
### Expected Behavior
It should return probabilities for all `n` classes.
For binary classification it should print `[0.18029907, 0.41025335]`, but it prints `[0.18029907]`.
For 3 classes it should print `[0.18029907, 0.41025335, 0.40944752]`, and it does print `[0.18029907, 0.41025335, 0.40944752]` as expected.
```
prob = model.predict(vector)
prob = pd.DataFrame(prob)
print(prob.shape)
```
When I convert `prob` into a pandas DataFrame, for the binary case it has only one column, while for multi-class datasets it has `n` columns, where `n` is the number of classes.
The issue occurs with binary classifiers only.
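A possible workaround in the meantime (this assumes the single returned column is the positive-class probability, which is typical for a sigmoid output head; verify on your model):
```python
import numpy as np

# model.predict returns shape (n_samples, 1) in the binary case
prob_pos = model.predict(vector).reshape(-1)
prob = np.column_stack([1.0 - prob_pos, prob_pos])  # shape (n_samples, 2)
```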
### Setup Details
- OS type and version:
- Python: >= 3.8
- autokeras==1.0.19
- keras-tuner:1.1.3
- scikit-learn:1.2.0
- numpy:1.22.4
- pandas:1.3.0
- tensorflow:2.11.0
### Additional context
When I run it on multi-class datasets, the `predict` call returns all class probabilities as expected.
|
open
|
2023-02-24T09:15:06Z
|
2023-02-24T09:29:46Z
|
https://github.com/keras-team/autokeras/issues/1856
|
[
"bug report"
] |
shabir1
| 0
|
tensorflow/tensor2tensor
|
machine-learning
| 1,605
|
[bug] img2img generation: likelihood error for hparams_set advised in ReadMe
|
### Description
I am not able to run a training job for the img2img transformer example.
I have tested it with hparams_set =
* img2img_transformer2d_base
* img2img_transformer2d_tiny
* img2img_transformer_b1
* img2img_transformer_base
* img2img_transformer_tiny
### Environment information
OS: Ubuntu through Docker with image FROM tensorflow/tensorflow:1.13.1-gpu
```
$ pip freeze | grep tensor
mesh-tensorflow==0.0.5
tensor2tensor==1.13.4
tensorboard==1.13.1
tensorflow-datasets==1.0.2
tensorflow-estimator==1.13.0
tensorflow-gpu==1.13.1
tensorflow-metadata==0.13.0
tensorflow-probability==0.6.0
```
```
$ python -V
Python 2.7.12
```
### For bugs: reproduction and error logs
# Steps to reproduce:
```
t2t-trainer --generate_data --data_dir=~/t2t_data --output_dir=~/transfo_celeba/first_try --problem=img2img_celeba --model=imagetransformer --hparams_set=img2img_transformer2d_tiny --train_steps=100000 --eval_steps=100
```
# Error logs:
```
WARNING: The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
* https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
* https://github.com/tensorflow/addons
If you depend on functionality not listed there, please file an issue.
INFO:tensorflow:Generating data for img2img_celeba
INFO:tensorflow:Skipping generator because outputs files exists at ['/root/t2t_data/image_celeba-unshuffled-train-00000-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00001-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00002-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00003-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00004-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00005-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00006-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00007-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00008-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00009-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00010-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00011-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00012-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00013-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00014-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00015-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00016-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00017-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00018-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00019-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00020-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00021-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00022-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00023-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00024-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00025-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00026-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00027-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00028-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00029-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00030-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00031-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00032-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00033-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00034-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00035-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00036-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00037-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00038-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00039-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00040-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00041-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00042-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00043-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00044-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00045-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00046-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00047-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00048-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00049-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00050-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00051-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00052-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00053-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00054-of-00100', 
'/root/t2t_data/image_celeba-unshuffled-train-00055-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00056-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00057-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00058-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00059-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00060-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00061-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00062-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00063-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00064-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00065-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00066-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00067-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00068-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00069-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00070-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00071-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00072-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00073-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00074-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00075-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00076-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00077-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00078-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00079-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00080-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00081-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00082-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00083-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00084-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00085-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00086-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00087-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00088-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00089-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00090-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00091-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00092-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00093-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00094-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00095-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00096-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00097-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00098-of-00100', '/root/t2t_data/image_celeba-unshuffled-train-00099-of-00100']
INFO:tensorflow:Skipping generator because outputs files exists at ['/root/t2t_data/image_celeba-unshuffled-dev-00000-of-00010', '/root/t2t_data/image_celeba-unshuffled-dev-00001-of-00010', '/root/t2t_data/image_celeba-unshuffled-dev-00002-of-00010', '/root/t2t_data/image_celeba-unshuffled-dev-00003-of-00010', '/root/t2t_data/image_celeba-unshuffled-dev-00004-of-00010', '/root/t2t_data/image_celeba-unshuffled-dev-00005-of-00010', '/root/t2t_data/image_celeba-unshuffled-dev-00006-of-00010', '/root/t2t_data/image_celeba-unshuffled-dev-00007-of-00010', '/root/t2t_data/image_celeba-unshuffled-dev-00008-of-00010', '/root/t2t_data/image_celeba-unshuffled-dev-00009-of-00010']
INFO:tensorflow:Skipping generator because outputs files exists at ['/root/t2t_data/image_celeba-unshuffled-test-00000-of-00010', '/root/t2t_data/image_celeba-unshuffled-test-00001-of-00010', '/root/t2t_data/image_celeba-unshuffled-test-00002-of-00010', '/root/t2t_data/image_celeba-unshuffled-test-00003-of-00010', '/root/t2t_data/image_celeba-unshuffled-test-00004-of-00010', '/root/t2t_data/image_celeba-unshuffled-test-00005-of-00010', '/root/t2t_data/image_celeba-unshuffled-test-00006-of-00010', '/root/t2t_data/image_celeba-unshuffled-test-00007-of-00010', '/root/t2t_data/image_celeba-unshuffled-test-00008-of-00010', '/root/t2t_data/image_celeba-unshuffled-test-00009-of-00010']
INFO:tensorflow:Skipping shuffle because output files exist
WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensor2tensor/utils/trainer_lib.py:240: __init__ (from tensorflow.contrib.learn.python.learn.estimators.run_config) is deprecated and will be removed in a future version.
Instructions for updating:
When switching to tf.estimator.Estimator, use tf.estimator.RunConfig instead.
INFO:tensorflow:Configuring DataParallelism to replicate the model.
INFO:tensorflow:schedule=continuous_train_and_eval
INFO:tensorflow:worker_gpu=1
INFO:tensorflow:sync=False
WARNING:tensorflow:Schedule=continuous_train_and_eval. Assuming that training is running on a single machine.
INFO:tensorflow:datashard_devices: ['gpu:0']
INFO:tensorflow:caching_devices: None
INFO:tensorflow:ps_devices: ['gpu:0']
INFO:tensorflow:Using config: {'_save_checkpoints_secs': None, '_num_ps_replicas': 0, '_keep_checkpoint_max': 20, '_task_type': None, '_train_distribute': None, '_is_chief': True, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f4dd88b9ad0>, '_tf_config': gpu_options {
per_process_gpu_memory_fraction: 1.0
}
, '_protocol': None, '_save_checkpoints_steps': 1000, '_keep_checkpoint_every_n_hours': 10000, '_session_config': gpu_options {
per_process_gpu_memory_fraction: 0.95
}
allow_soft_placement: true
graph_options {
optimizer_options {
global_jit_level: OFF
}
}
isolate_session_state: true
, '_model_dir': '/root/transfo_celeba/first_try', 'use_tpu': False, '_tf_random_seed': None, '_master': '', '_device_fn': None, '_num_worker_replicas': 0, '_task_id': 0, '_log_step_count_steps': 100, '_evaluation_master': '', '_eval_distribute': None, 'data_parallelism': <tensor2tensor.utils.expert_utils.Parallelism object at 0x7f4dd88b9b10>, '_environment': 'local', '_save_summary_steps': 100, 't2t_device_info': {'num_async_replicas': 1}}
WARNING:tensorflow:Estimator's model_fn (<function wrapping_model_fn at 0x7f4de2c32230>) includes params argument, but params are not passed to Estimator.
WARNING:tensorflow:ValidationMonitor only works with --schedule=train_and_evaluate
INFO:tensorflow:Not using Distribute Coordinator.
INFO:tensorflow:Running training and evaluation locally (non-distributed).
INFO:tensorflow:Start train and evaluate loop. The evaluate will happen after every checkpoint. Checkpoint frequency is determined based on RunConfig arguments: save_checkpoints_steps 1000 or save_checkpoints_secs None.
WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
INFO:tensorflow:Reading data files from /root/t2t_data/image_celeba-train*
INFO:tensorflow:partition: 0 num_data_files: 100
WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensor2tensor/data_generators/image_utils.py:94: to_int64 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensor2tensor/utils/data_reader.py:275: tf_record_iterator (from tensorflow.python.lib.io.tf_record) is deprecated and will be removed in a future version.
Instructions for updating:
Use eager execution and:
`tf.data.TFRecordDataset(path)`
WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensor2tensor/utils/data_reader.py:37: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensor2tensor/utils/data_reader.py:233: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Setting T2TModel mode to 'train'
INFO:tensorflow:Using variable initializer: uniform_unit_scaling
INFO:tensorflow:Transforming feature 'inputs' with identity_modality.bottom
INFO:tensorflow:Transforming feature 'targets' with identity_modality.targets_bottom
INFO:tensorflow:Building model body
Traceback (most recent call last):
File "/usr/local/bin/t2t-trainer", line 33, in <module>
tf.app.run()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 125, in run
_sys.exit(main(argv))
File "/usr/local/bin/t2t-trainer", line 28, in main
t2t_trainer.main(argv)
File "/usr/local/lib/python2.7/dist-packages/tensor2tensor/bin/t2t_trainer.py", line 401, in main
execute_schedule(exp)
File "/usr/local/lib/python2.7/dist-packages/tensor2tensor/bin/t2t_trainer.py", line 356, in execute_schedule
getattr(exp, FLAGS.schedule)()
File "/usr/local/lib/python2.7/dist-packages/tensor2tensor/utils/trainer_lib.py", line 401, in continuous_train_and_eval
self._eval_spec)
File "/usr/local/lib/python2.7/dist-packages/tensorflow_estimator/python/estimator/training.py", line 471, in train_and_evaluate
return executor.run()
File "/usr/local/lib/python2.7/dist-packages/tensorflow_estimator/python/estimator/training.py", line 611, in run
return self.run_local()
File "/usr/local/lib/python2.7/dist-packages/tensorflow_estimator/python/estimator/training.py", line 712, in run_local
saving_listeners=saving_listeners)
File "/usr/local/lib/python2.7/dist-packages/tensorflow_estimator/python/estimator/estimator.py", line 358, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "/usr/local/lib/python2.7/dist-packages/tensorflow_estimator/python/estimator/estimator.py", line 1124, in _train_model
return self._train_model_default(input_fn, hooks, saving_listeners)
File "/usr/local/lib/python2.7/dist-packages/tensorflow_estimator/python/estimator/estimator.py", line 1154, in _train_model_default
features, labels, model_fn_lib.ModeKeys.TRAIN, self.config)
File "/usr/local/lib/python2.7/dist-packages/tensorflow_estimator/python/estimator/estimator.py", line 1112, in _call_model_fn
model_fn_results = self._model_fn(features=features, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/tensor2tensor/utils/t2t_model.py", line 1414, in wrapping_model_fn
use_tpu=use_tpu)
File "/usr/local/lib/python2.7/dist-packages/tensor2tensor/utils/t2t_model.py", line 1477, in estimator_model_fn
logits, losses_dict = model(features) # pylint: disable=not-callable
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/layers/base.py", line 530, in __call__
outputs = super(Layer, self).__call__(inputs, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/keras/engine/base_layer.py", line 554, in __call__
outputs = self.call(inputs, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/tensor2tensor/utils/t2t_model.py", line 323, in call
sharded_logits, losses = self.model_fn_sharded(sharded_features)
File "/usr/local/lib/python2.7/dist-packages/tensor2tensor/utils/t2t_model.py", line 400, in model_fn_sharded
sharded_logits, sharded_losses = dp(self.model_fn, datashard_to_features)
File "/usr/local/lib/python2.7/dist-packages/tensor2tensor/utils/expert_utils.py", line 231, in __call__
outputs.append(fns[i](*my_args[i], **my_kwargs[i]))
File "/usr/local/lib/python2.7/dist-packages/tensor2tensor/utils/t2t_model.py", line 427, in model_fn
body_out = self.body(transformed_features)
File "/usr/local/lib/python2.7/dist-packages/tensor2tensor/models/image_transformer.py", line 50, in body
if (hparams.likelihood == cia.DistributionType.DMOL and
AttributeError: 'HParams' object has no attribute 'likelihood'
```
|
closed
|
2019-06-18T13:54:38Z
|
2019-06-20T16:48:02Z
|
https://github.com/tensorflow/tensor2tensor/issues/1605
|
[] |
Vargeel
| 4
|
huggingface/datasets
|
deep-learning
| 7,059
|
None values are skipped when reading jsonl in subobjects
|
### Describe the bug
I have been fighting against my machine since this morning only to find out this is some kind of a bug.
When loading a dataset composed of `metadata.jsonl`, if you have nullable values (Optional[str]), they can be ignored by the parser, shifting things around.
For example, here are two versions of the same dataset:
[not-buggy.tar.gz](https://github.com/user-attachments/files/16333532/not-buggy.tar.gz)
[buggy.tar.gz](https://github.com/user-attachments/files/16333553/buggy.tar.gz)
### Steps to reproduce the bug
1. Load the `buggy.tar.gz` dataset
2. Print the baselines: `dts = load_dataset("./data")["train"][0]["baselines"]`
3. Load the `not-buggy.tar.gz` dataset
4. Print the baselines: `dts = load_dataset("./data")["train"][0]["baselines"]`
### Expected behavior
Both should have 4 baseline entries:
1. Buggy should have None followed by three lists
2. Non-Buggy should have four lists, and the first one should be an empty list.
Case 1 does not work; case 2 works, even though `None` is accepted in positions other than the first.
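A consolidated repro of the steps above (the path follows the snippets in the steps; extract each archive to `./data` before the corresponding call):
```python
from datasets import load_dataset

# with buggy.tar.gz extracted to ./data
dts = load_dataset("./data")["train"]
print(dts[0]["baselines"])  # should be None followed by three lists

# with not-buggy.tar.gz extracted to ./data instead
dts = load_dataset("./data")["train"]
print(dts[0]["baselines"])  # should be an empty list followed by three lists
```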
### Environment info
- `datasets` version: 2.19.1
- Platform: Linux-6.5.0-44-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.23.0
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.3.1
|
open
|
2024-07-22T13:02:42Z
|
2024-07-22T13:02:53Z
|
https://github.com/huggingface/datasets/issues/7059
|
[] |
PonteIneptique
| 0
|
MaartenGr/BERTopic
|
nlp
| 1,722
|
Output the distance/correlation matrix of topics
|
In the heatmap visualisation, the computed correlation matrix of topics is actually very useful, e.g. for debugging purposes and as a guide for topic reduction. Any chance it could become a class attribute of BERTopic or an output of calling `visualize_heatmap`?
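In the meantime, a sketch of computing it from a fitted model (this assumes `topic_embeddings_` is populated on your version, and it approximates what the heatmap shows rather than reusing its exact internals):
```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# topic_model is a fitted BERTopic instance
embeddings = np.asarray(topic_model.topic_embeddings_)
sim_matrix = cosine_similarity(embeddings)  # topics x topics
```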
|
open
|
2024-01-04T16:47:45Z
|
2024-01-09T08:13:29Z
|
https://github.com/MaartenGr/BERTopic/issues/1722
|
[] |
swl-dm
| 3
|
encode/apistar
|
api
| 76
|
Websockets support
|
Requires either of #29, #9 first.
Client side support for live endpoints with content-diffs.
Support for notifications of updating live endpoints.
|
closed
|
2017-04-20T14:23:00Z
|
2018-09-25T14:46:05Z
|
https://github.com/encode/apistar/issues/76
|
[
"Headline feature"
] |
tomchristie
| 6
|
onnx/onnx
|
scikit-learn
| 6,006
|
how onnx models stores weights and layer parameters
|
I am working on generating an ONNX translator for a machine learning framework, and for that I need a clear understanding of how weights and layer parameters are stored in an ONNX model. After exploring several ONNX models, I noticed that some store their weights and layer parameters as separate initializers, while others store them as node attributes. Is there a way to figure out how these parameters are stored in a given model, so that I can apply the appropriate algorithm to extract them?
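A sketch of inspecting both storage styles with the `onnx` Python API (the model path is a placeholder):
```python
import onnx
from onnx import numpy_helper

model = onnx.load("model.onnx")  # placeholder path

# Style 1: weights stored as graph initializers
for init in model.graph.initializer:
    arr = numpy_helper.to_array(init)
    print("initializer:", init.name, arr.shape, arr.dtype)

# Style 2: weights embedded as tensor-valued node attributes
# (e.g. the `value` attribute of a Constant node)
for node in model.graph.node:
    for attr in node.attribute:
        if attr.type == onnx.AttributeProto.TENSOR:
            arr = numpy_helper.to_array(attr.t)
            print("attribute tensor:", node.op_type, attr.name, arr.shape)
```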
|
open
|
2024-03-09T08:00:08Z
|
2024-03-09T08:00:08Z
|
https://github.com/onnx/onnx/issues/6006
|
[
"question"
] |
kumar-utkarsh0317
| 0
|
BeastByteAI/scikit-llm
|
scikit-learn
| 76
|
Azure OpenAI Embeddings
|
You have added support for Azure OpenAI GPT models. Please add support for Azure OpenAI embedding models too. Because of this gap, I can't use GPTVectorizer or Dynamic Few-Shot Classification.
|
closed
|
2023-10-31T15:44:46Z
|
2024-05-24T14:39:43Z
|
https://github.com/BeastByteAI/scikit-llm/issues/76
|
[] |
Ioannis-Pikoulis
| 2
|
sanic-org/sanic
|
asyncio
| 3,004
|
Named middleware doesn't work
|
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Describe the bug
Middleware registered with `app.register_named_middleware` doesn't seem to run.
### Code snippet
```python
# playground.py
from sanic import Sanic, text
app = Sanic("my-app")
@app.get("/test", name="test")
def test(request):
return text("This is response")
async def middleware(request):
print("This is middleware")
app.register_named_middleware(middleware, ["test"])
```
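One thing worth checking (an assumption on my part, not confirmed against the Sanic source): named middleware may be keyed by the fully qualified route name, which includes the app name:
```python
# same registration, but with the app-name-qualified route name
app.register_named_middleware(middleware, ["my-app.test"])
```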
### Expected Behavior
Running `sanic playground:app --host=0.0.0.0 --port=8000 --workers=1` and sending a GET request to 0.0.0.0:8000/test is expected to print "This is middleware" in the console.
### How do you run Sanic?
Sanic CLI
### Operating System
MacOS
### Sanic Version
24.6.0
### Additional context
_No response_
|
closed
|
2024-10-23T08:56:30Z
|
2024-12-12T16:31:08Z
|
https://github.com/sanic-org/sanic/issues/3004
|
[
"bug"
] |
atticusfyj
| 2
|
grillazz/fastapi-sqlalchemy-asyncpg
|
sqlalchemy
| 150
|
postgres vector poc
|
open
|
2024-05-08T10:46:53Z
|
2024-05-08T10:46:53Z
|
https://github.com/grillazz/fastapi-sqlalchemy-asyncpg/issues/150
|
[] |
grillazz
| 0
|