Dataset columns (one row per GitHub issue):

- repo_name: string (9–75 chars)
- topic: string (30 classes)
- issue_number: int64 (1–203k)
- title: string (1–976 chars)
- body: string (0–254k chars)
- state: string (2 classes)
- created_at: string (20 chars)
- updated_at: string (20 chars)
- url: string (38–105 chars)
- labels: list (0–9 items)
- user_login: string (1–39 chars)
- comments_count: int64 (0–452)
repo_name: sinaptik-ai/pandas-ai
topic: pandas
issue_number: 768
title: Huggingface Interface Endpoints
body: Hi, I wanted to ask whether it is possible to use Huggingface Interface Endpoints and, if so, where to set the token. Can you give me details? Thanks.
state: closed
created_at: 2023-11-21T15:26:18Z
updated_at: 2024-06-01T00:20:54Z
url: https://github.com/sinaptik-ai/pandas-ai/issues/768
labels: []
user_login: emanueleparini
comments_count: 11
repo_name: AirtestProject/Airtest
topic: automation
issue_number: 825
title: Starting pocoserver always raises a multithreading error (original: 启动pocoserver 的时候总是报多线程错误)
body:
mkdir: cannot create directory ‘upload.dir’: File exists
[06:09:41][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb devices
[06:09:41][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 get-state
[06:09:41][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 wait-for-device
[06:09:41][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 shell getprop ro.build.version.sdk
[06:09:41][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 shell dumpsys activity top
[06:09:41][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 shell dumpsys package com.netease.nie.yosemite
[06:09:41][INFO]<airtest.core.android.yosemite> local version code is 300, installed version code is 300
[06:09:41][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 shell settings get secure default_input_method
[06:09:42][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 shell ime list -a
[06:09:42][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 forward --no-rebind tcp:12898 tcp:10080
[06:09:42][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 forward --no-rebind tcp:13828 tcp:10081
[06:09:42][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 shell ps
[06:09:52][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 shell am force-stop com.netease.open.pocoservice
[06:09:53][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 shell am start -n com.netease.open.pocoservice/.TestActivity
[06:09:53][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 shell am instrument -w -e debug false -e class com.netease.open.pocoservice.InstrumentedTestAsLauncher com.netease.open.pocoservice.test/android.support.test.runner.AndroidJUnitRunner
[06:09:58][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb devices
[06:09:58][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 get-state
[06:09:58][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 wait-for-device
[06:09:58][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 shell getprop ro.build.version.sdk
[06:09:58][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 shell ls /data/local/tmp/minicap
[06:09:58][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 shell ls /data/local/tmp/minicap.so
[06:09:58][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 shell LD_LIBRARY_PATH=/data/local/tmp /data/local/tmp/minicap -v 2>&1
[06:09:58][DEBUG]<airtest.core.android.minicap> version:6
[06:09:58][DEBUG]<airtest.core.android.minicap> skip install minicap
[06:09:58][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 shell LD_LIBRARY_PATH=/data/local/tmp /data/local/tmp/minicap -i
[06:09:58][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 shell dumpsys window displays
[06:09:58][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 shell dumpsys activity top
[06:09:58][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 shell dumpsys package com.netease.nie.yosemite
[06:09:58][INFO]<airtest.core.android.yosemite> local version code is 300, installed version code is 300
[06:09:58][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 shell settings get secure default_input_method
[06:09:59][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 shell ime list -a
[06:09:59][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 forward --no-rebind tcp:18175 tcp:10080
[06:09:59][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 forward --no-rebind tcp:16158 tcp:10081
[06:09:59][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 shell ps
[06:10:09][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 shell netcfg
[06:10:09][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 shell ifconfig
[06:10:10][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 shell ps
[06:10:21][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 shell am force-stop com.netease.open.pocoservice
[06:10:21][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 shell am start -n com.netease.open.pocoservice/.TestActivity
[06:10:21][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 shell am force-stop com.netease.open.pocoservice
[06:10:21][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 shell am force-stop com.netease.open.pocoservice
[06:10:22][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 shell am instrument -w -e debug false -e class com.netease.open.pocoservice.InstrumentedTestAsLauncher com.netease.open.pocoservice.test/android.support.test.runner.AndroidJUnitRunner
[06:10:22][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 shell am start -n com.netease.open.pocoservice/.TestActivity
[06:10:22][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 shell am start -n com.netease.open.pocoservice/.TestActivity
[06:10:22][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 shell am instrument -w -e debug false -e class com.netease.open.pocoservice.InstrumentedTestAsLauncher com.netease.open.pocoservice.test/android.support.test.runner.AndroidJUnitRunner
[06:10:22][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 shell am instrument -w -e debug false -e class com.netease.open.pocoservice.InstrumentedTestAsLauncher com.netease.open.pocoservice.test/android.support.test.runner.AndroidJUnitRunner
Exception in thread Thread-1:
Traceback (most recent call last):
  File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.6/threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.6/dist-packages/poco/drivers/android/uiautomation.py", line 208, in loop
    stdout, stderr = self._instrument_proc.communicate()
  File "/usr/lib/python3.6/subprocess.py", line 863, in communicate
    stdout, stderr = self._communicate(input, endtime, timeout)
  File "/usr/lib/python3.6/subprocess.py", line 1525, in _communicate
    selector.register(self.stdout, selectors.EVENT_READ)
  File "/usr/lib/python3.6/selectors.py", line 351, in register
    key = super().register(fileobj, events, data)
  File "/usr/lib/python3.6/selectors.py", line 237, in register
    key = SelectorKey(fileobj, self._fileobj_lookup(fileobj), events, data)
  File "/usr/lib/python3.6/selectors.py", line 224, in _fileobj_lookup
    return _fileobj_to_fd(fileobj)
  File "/usr/lib/python3.6/selectors.py", line 39, in _fileobj_to_fd
    "{!r}".format(fileobj)) from None
ValueError: Invalid file object: <_io.BufferedReader name=21>
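The traceback bottoms out in `selectors._fileobj_to_fd`: the selector is handed a pipe object whose file descriptor is no longer valid. Given the duplicated `am instrument` launches above, a plausible cause (an assumption, not confirmed by the log) is that several poco watcher threads call `communicate()` on the same subprocess, and the losers register a stdout pipe that has already been closed. The underlying `ValueError` can be reproduced in isolation, independent of Airtest, by registering a closed file object with a selector — a minimal sketch:

```python
import os
import selectors

sel = selectors.DefaultSelector()

# Create a pipe and wrap its read end in a BufferedReader, the same
# kind of object subprocess.Popen uses for a child's stdout.
read_fd, write_fd = os.pipe()
reader = os.fdopen(read_fd, "rb")

# Close the reader first, simulating another thread having already
# finished communicate() and cleaned up the pipe.
reader.close()
os.close(write_fd)

try:
    # The selector calls reader.fileno(), which raises on a closed
    # file; selectors converts that into "Invalid file object: ...".
    sel.register(reader, selectors.EVENT_READ)
except ValueError as exc:
    print(exc)  # Invalid file object: <_io.BufferedReader name=...>
finally:
    sel.close()
```

The fix on the caller's side is to make sure only one thread ever calls `communicate()` on a given `Popen`, or to guard the call with a lock.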
[06:10:31][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 shell am force-stop com.netease.open.pocoservice
[06:10:32][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 shell am start -n com.netease.open.pocoservice/.TestActivity
[06:10:32][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 shell am force-stop com.netease.open.pocoservice
[06:10:32][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 shell am start -n com.netease.open.pocoservice/.TestActivity
Exception in thread Thread-6:
Traceback (most recent call last):
File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/usr/lib/python3.6/threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/lib/python3.6/dist-packages/poco/drivers/android/uiautomation.py", line 208, in loop
stdout, stderr = self._instrument_proc.communicate()
File "/usr/lib/python3.6/subprocess.py", line 863, in communicate
stdout, stderr = self._communicate(input, endtime, timeout)
File "/usr/lib/python3.6/subprocess.py", line 1525, in _communicate
selector.register(self.stdout, selectors.EVENT_READ)
File "/usr/lib/python3.6/selectors.py", line 351, in register
key = super().register(fileobj, events, data)
File "/usr/lib/python3.6/selectors.py", line 237, in register
key = SelectorKey(fileobj, self._fileobj_lookup(fileobj), events, data)
File "/usr/lib/python3.6/selectors.py", line 224, in _fileobj_lookup
return _fileobj_to_fd(fileobj)
File "/usr/lib/python3.6/selectors.py", line 39, in _fileobj_to_fd
"{!r}".format(fileobj)) from None
ValueError: Invalid file object: <_io.BufferedReader name=27>
(the same ValueError traceback repeats in Thread-2 with <_io.BufferedReader name=29> and in Thread-3 with <_io.BufferedReader name=33>)
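The ValueError above appears to come from several of poco's loop threads calling `communicate()` against the same `Popen`-style instrumentation process: the first call drains and closes the pipes, and Python 3.6's selector then rejects the already-closed BufferedReader in every later call (the `.closed` guard was only added to `subprocess._communicate` in later Python versions). A minimal sketch of the serialization that avoids this; `SafeProc` is a hypothetical wrapper for illustration, not part of poco:

```python
import subprocess
import sys
import threading


class SafeProc:
    """Wrap a Popen so that concurrent communicate() calls are safe.

    Only the first caller actually drains and closes the pipes; later
    callers receive the cached (stdout, stderr) instead of registering
    already-closed streams with the selector.
    """

    def __init__(self, args):
        self._proc = subprocess.Popen(
            args, stdout=subprocess.PIPE, stderr=subprocess.PIPE
        )
        self._lock = threading.Lock()
        self._result = None

    def communicate(self):
        with self._lock:
            if self._result is None:
                self._result = self._proc.communicate()
            return self._result


# Four threads racing on one process: no ValueError, one shared result.
proc = SafeProc([sys.executable, "-c", "print('ok')"])
workers = [threading.Thread(target=proc.communicate) for _ in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()
```

With a wrapper like this, every loop thread can call `communicate()` freely; only the first one touches the pipes.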
[06:10:42][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 shell am force-stop com.netease.open.pocoservice
[06:10:42][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 shell am start -n com.netease.open.pocoservice/.TestActivity
[06:10:42][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 shell am force-stop com.netease.open.pocoservice
[06:10:43][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 shell am instrument -w -e debug false -e class com.netease.open.pocoservice.InstrumentedTestAsLauncher com.netease.open.pocoservice.test/android.support.test.runner.AndroidJUnitRunner
[06:10:43][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 shell am start -n com.netease.open.pocoservice/.TestActivity
[06:10:44][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 shell am force-stop com.netease.open.pocoservice
[06:10:44][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 shell am instrument -w -e debug false -e class com.netease.open.pocoservice.InstrumentedTestAsLauncher com.netease.open.pocoservice.test/android.support.test.runner.AndroidJUnitRunner
[06:10:44][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 shell am start -n com.netease.open.pocoservice/.TestActivity
[06:10:45][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 shell am instrument -w -e debug false -e class com.netease.open.pocoservice.InstrumentedTestAsLauncher com.netease.open.pocoservice.test/android.support.test.runner.AndroidJUnitRunner
(the same ValueError traceback repeats in Thread-9, Thread-10, Thread-8, Thread-4, Thread-5, Thread-7, Thread-11 and Thread-12, with BufferedReader names 11, 13, 21, 18, 25, 23, 26 and 28 respectively)
[06:10:57][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 shell dumpsys activity top
/home/test_user
/home/test_user/android/api/report
/
/home/test_user/android
/home
/home/test_user/android/common
python log /home/test_user/upload.dir/zxg.log
/home/test_user/upload.dir/zxg.log
[pocoservice.apk] background daemon started.
[pocoservice.apk] background daemon started.
/home
/home/test_user/android/api
/home/test_user
/home/test_user/android/api/report
/home
/home/test_user/android/conf
/home/test_user
/home/test_user/android/api/login
/home
/home/test_user/android/portfolio_page
[pocoservice.apk] background daemon started. (message repeated 3 times)
/home/test_user
/home/test_user/android/api/base
[pocoservice.apk] background daemon started. (message repeated 7 times)
[pocoservice.apk] stdout: b'\ncom.netease.open.pocoservice.InstrumentedTestAsLauncher:INSTRUMENTATION_RESULT: shortMsg=Process crashed.\nINSTRUMENTATION_CODE: 0\n'
[pocoservice.apk] stderr: b''
[pocoservice.apk] retrying instrumentation PocoService
(the same stdout/stderr/retry trio repeats 14 more times, alternating between the two INSTRUMENTATION_RESULT stdout variants shown above)
Traceback (most recent call last):
File "android/api/report/test_report_runner_bvt_portfolio.py", line 10, in <module>
from android.api.base.test_portfolio_bvt import TestPortfolioBvt
File "/home/test_user/android/api/base/test_portfolio_bvt.py", line 17, in <module>
poco = AndroidUiautomationPoco(use_airtest_input=False, screenshot_each_action=False, using_proxy=False)
File "/usr/local/lib/python3.6/dist-packages/poco/drivers/android/uiautomation.py", line 179, in __init__
raise RuntimeError("unable to launch AndroidUiautomationPoco")
RuntimeError: unable to launch AndroidUiautomationPoco
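When the pocoservice crash is transient, the final RuntimeError can sometimes be worked around by retrying the driver construction from the test script. This is a generic sketch under that assumption (`retry` is a hypothetical helper, and it does not fix the crashing service itself):

```python
import time


def retry(factory, attempts=3, delay=2.0, exc=RuntimeError):
    """Call factory() up to `attempts` times, sleeping `delay` seconds
    after each failure; re-raise the last exception if all attempts fail."""
    last = None
    for _ in range(attempts):
        try:
            return factory()
        except exc as err:
            last = err
            time.sleep(delay)
    raise last
```

In a test script this would look like `poco = retry(lambda: AndroidUiautomationPoco(use_airtest_input=False, screenshot_each_action=False), attempts=3)`, with the same kwargs test_portfolio_bvt.py already passes.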
[06:10:57][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 forward --remove tcp:12898
[06:10:57][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 forward --remove tcp:13828
[06:10:57][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 forward --remove tcp:13265
[06:10:57][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 forward --remove tcp:17922
[06:10:57][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 forward --remove tcp:12742
[06:10:57][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 forward --remove tcp:17504
[06:10:57][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 forward --remove tcp:18175
[06:10:57][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 forward --remove tcp:16158
[06:10:57][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 forward --remove tcp:11733
[06:10:57][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 forward --remove tcp:15851
[06:10:57][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 forward --remove tcp:16582
[06:10:57][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 forward --remove tcp:17340
[06:10:57][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 forward --remove tcp:17407
[06:10:57][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 forward --remove tcp:12454
[06:10:57][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 forward --remove tcp:19738
[06:10:57][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 forward --remove tcp:15541
[06:10:57][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 forward --remove tcp:18388
[06:10:57][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 forward --remove tcp:16008
[06:10:57][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 forward --remove tcp:13382
[06:10:57][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 forward --remove tcp:11402
[06:10:57][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 forward --remove tcp:12188
[06:10:57][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 forward --remove tcp:11535
[06:10:57][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 forward --remove tcp:11247
[06:10:57][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.6/dist-packages/airtest/core/android/static/adb/linux/adb -s localhost:6969 forward --remove tcp:17129
state: open | created_at: 2020-11-05T03:02:31Z | updated_at: 2020-11-05T03:02:31Z | url: https://github.com/AirtestProject/Airtest/issues/825 | labels: [] | user_login: xiaojunliu19 | comments_count: 0

repo_name: nolar/kopf | topic: asyncio | issue_number: 682 | title: Kopf requires aiohttp>=3
## Long story short
When I install kopf, I get aiohttp 2.x.x, which does not support the `ssl` kwarg in the `aiohttp.TCPConnector` constructor.
According to the docs this kwarg was introduced in aiohttp version 3.0, so kopf should probably require that as a minimal version.
My workaround was to pin aiohttp 3.7.3 manually in my pyproject.toml.
Source:
search for 'TCPConnector' or 'param ssl' on [this page](https://docs.aiohttp.org/en/stable/client_reference.html), then see the `ssl` parameter; it states "New in version 3.0".
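The minimum-version floor is simple to express. This is a sketch of a generic dotted-version comparison (`meets_minimum` is a hypothetical helper, not kopf code), enough to distinguish aiohttp 2.x from the 3.0+ that `TCPConnector`'s `ssl` kwarg requires:

```python
def meets_minimum(installed: str, minimum: str) -> bool:
    """Numerically compare dotted version strings, e.g. '3.7.3' >= '3.0'.

    Non-numeric segments (rc tags etc.) are ignored, which is good
    enough for a major/minor floor check like aiohttp >= 3.0.
    """
    def parts(version: str):
        return [int(p) for p in version.split(".") if p.isdigit()]
    return parts(installed) >= parts(minimum)
```

In practice the real fix is the dependency pin the report asks for (`aiohttp>=3.0` in kopf's requirements); pip and setuptools already implement this comparison properly via `packaging.version`.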
my traceback:
<details><summary>kopf run operator.py
</summary>
```
/Users/dino/Library/Caches/pypoetry/virtualenvs/deployment-operator-46HapXxc-py3.9/lib/python3.9/site-packages/kopf/reactor/running.py:157: FutureWarning: Absence of either namespaces or cluster-wide flag will become an error soon. For now, switching to the cluster-wide mode for backward compatibility.
warnings.warn("Absence of either namespaces or cluster-wide flag will become an error soon."
[2021-02-13 23:13:21,776] kopf.reactor.activit [INFO ] Initial authentication has been initiated.
[2021-02-13 23:13:21,786] kopf.activities.auth [INFO ] Activity 'login_via_client' succeeded.
[2021-02-13 23:13:21,786] kopf.reactor.activit [INFO ] Initial authentication has finished.
[2021-02-13 23:13:21,789] kopf.reactor.running [ERROR ] Resource observer has failed: __init__() got an unexpected keyword argument 'ssl'
Traceback (most recent call last):
File "/Users/dino/Library/Caches/pypoetry/virtualenvs/deployment-operator-46HapXxc-py3.9/lib/python3.9/site-packages/kopf/utilities/aiotasks.py", line 69, in guard
await coro
File "/Users/dino/Library/Caches/pypoetry/virtualenvs/deployment-operator-46HapXxc-py3.9/lib/python3.9/site-packages/kopf/reactor/observation.py", line 104, in resource_observer
resources = await scanning.scan_resources(groups=group_filter)
File "/Users/dino/Library/Caches/pypoetry/virtualenvs/deployment-operator-46HapXxc-py3.9/lib/python3.9/site-packages/kopf/clients/auth.py", line 42, in wrapper
async for key, info, context in vault.extended(APIContext, 'contexts'):
File "/Users/dino/Library/Caches/pypoetry/virtualenvs/deployment-operator-46HapXxc-py3.9/lib/python3.9/site-packages/kopf/structs/credentials.py", line 164, in extended
item.caches[purpose] = factory(item.info)
File "/Users/dino/Library/Caches/pypoetry/virtualenvs/deployment-operator-46HapXxc-py3.9/lib/python3.9/site-packages/kopf/clients/auth.py", line 183, in __init__
connector=aiohttp.TCPConnector(
TypeError: __init__() got an unexpected keyword argument 'ssl'
Traceback (most recent call last):
File "/Users/dino/Library/Caches/pypoetry/virtualenvs/deployment-operator-46HapXxc-py3.9/bin/kopf", line 8, in <module>
sys.exit(main())
File "/Users/dino/Library/Caches/pypoetry/virtualenvs/deployment-operator-46HapXxc-py3.9/lib/python3.9/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/Users/dino/Library/Caches/pypoetry/virtualenvs/deployment-operator-46HapXxc-py3.9/lib/python3.9/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/Users/dino/Library/Caches/pypoetry/virtualenvs/deployment-operator-46HapXxc-py3.9/lib/python3.9/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/Users/dino/Library/Caches/pypoetry/virtualenvs/deployment-operator-46HapXxc-py3.9/lib/python3.9/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/Users/dino/Library/Caches/pypoetry/virtualenvs/deployment-operator-46HapXxc-py3.9/lib/python3.9/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/Users/dino/Library/Caches/pypoetry/virtualenvs/deployment-operator-46HapXxc-py3.9/lib/python3.9/site-packages/kopf/cli.py", line 50, in wrapper
return fn(*args, **kwargs)
File "/Users/dino/Library/Caches/pypoetry/virtualenvs/deployment-operator-46HapXxc-py3.9/lib/python3.9/site-packages/click/decorators.py", line 73, in new_func
return ctx.invoke(f, obj, *args, **kwargs)
File "/Users/dino/Library/Caches/pypoetry/virtualenvs/deployment-operator-46HapXxc-py3.9/lib/python3.9/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/Users/dino/Library/Caches/pypoetry/virtualenvs/deployment-operator-46HapXxc-py3.9/lib/python3.9/site-packages/kopf/cli.py", line 97, in run
return running.run(
File "/Users/dino/Library/Caches/pypoetry/virtualenvs/deployment-operator-46HapXxc-py3.9/lib/python3.9/site-packages/kopf/reactor/running.py", line 47, in run
```
</details>
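Until the pin lands, a caller could detect the missing keyword up front instead of failing deep inside the reactor. A sketch of such constructor introspection (`supports_kwarg` is a hypothetical helper):

```python
import inspect


def supports_kwarg(cls, name: str) -> bool:
    """True if cls.__init__ accepts `name` as a keyword argument,
    either explicitly or via a **kwargs catch-all."""
    params = inspect.signature(cls.__init__).parameters.values()
    return any(
        p.name == name or p.kind is inspect.Parameter.VAR_KEYWORD
        for p in params
    )
```

On aiohttp 3.x, `supports_kwarg(aiohttp.TCPConnector, "ssl")` reports True because `ssl` is an explicit keyword there; the TypeError in the traceback above shows the 2.x constructor rejecting it.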
## Environment
<!-- The following commands can help:
`kopf --version` or `pip show kopf`
`kubectl version`
`python --version`
-->
* Kopf version: kopf, version 1.29.2
* Kubernetes version: doesn't matter
* Python version: 3.9.1
* OS/platform: macOS Big Sur 11.1
<details><summary>Python packages installed</summary>
```
aiohttp @ file:///Users/dino/Library/Caches/pypoetry/artifacts/41/0f/23/7a69d935a278f7e9ae829d0ff2c9b669b96713f4eed57f6b64466634c5/aiohttp-3.7.3.tar.gz
aiojobs @ file:///Users/dino/Library/Caches/pypoetry/artifacts/98/2c/2d/fbec70db545b5331949364846b7ac86cca4db40b91730da294b2f9ec50/aiojobs-0.3.0-py3-none-any.whl
astroid @ file:///Users/dino/Library/Caches/pypoetry/artifacts/34/e0/99/2edbd67ade70c8c00b3e1ab5559e9ed5efc5470f97256888886f6d312a/astroid-2.4.2-py3-none-any.whl
async-timeout @ file:///Users/dino/Library/Caches/pypoetry/artifacts/e1/e7/dc/347dacf16e20e4b15a992103281583f3a03db422172dec1c5f08d68c07/async_timeout-3.0.1-py3-none-any.whl
attrs @ file:///Users/dino/Library/Caches/pypoetry/artifacts/f9/48/82/553e4bef24d3b294c0c18f27a7853f3ed151508efd144cb7ea37db1c48/attrs-20.3.0-py2.py3-none-any.whl
cachetools @ file:///Users/dino/Library/Caches/pypoetry/artifacts/6f/e8/07/77a8a35bf67f89350dc9cf391674b48c6f880833c4c46a9309228d0546/cachetools-4.2.1-py3-none-any.whl
certifi @ file:///Users/dino/Library/Caches/pypoetry/artifacts/d8/df/24/ed696681f34f8916b0aef99138db9a94e37d54684b9829af34a7fd4e39/certifi-2020.12.5-py2.py3-none-any.whl
chardet @ file:///Users/dino/Library/Caches/pypoetry/artifacts/8f/6f/1c/8085d730ad63c462222af30d0d01c4bd0caca5287e40b63c1fe8f529b7/chardet-3.0.4-py2.py3-none-any.whl
click @ file:///Users/dino/Library/Caches/pypoetry/artifacts/e2/79/34/a23e9d2f683ed66be11ec3bd760dec3a2fe228cfdedf2071bcf0531b06/click-7.1.2-py2.py3-none-any.whl
google-auth @ file:///Users/dino/Library/Caches/pypoetry/artifacts/83/9f/60/33b898f8f35d337ff96f05454a752e14a79afeaf22d58da8687103dcff/google_auth-1.26.1-py2.py3-none-any.whl
idna @ file:///Users/dino/Library/Caches/pypoetry/artifacts/ef/7f/a9/19cc0b8760bdf6f696290c06532496f8bb29fbdaad044f852fed00ec82/idna-2.10-py2.py3-none-any.whl
iso8601 @ file:///Users/dino/Library/Caches/pypoetry/artifacts/d4/71/89/7658ef5c51dadec6c208c5b1ebdeeae0ef36ac56402685529704588aed/iso8601-0.1.14-py2.py3-none-any.whl
isort @ file:///Users/dino/Library/Caches/pypoetry/artifacts/13/0e/9d/0ac87b4f86576f57416f5d21432dec16c02955743e7afe51afe253a24b/isort-5.7.0-py3-none-any.whl
kopf @ file:///Users/dino/Library/Caches/pypoetry/artifacts/ec/55/f6/35d79e4b88276a813dd5dec4d45604a2cf668bee0b4c3779fcd9b0eba5/kopf-1.29.2-py3-none-any.whl
kubernetes @ file:///Users/dino/Library/Caches/pypoetry/artifacts/c9/8b/5a/78ad793efb1c9385d86052d66cc892d11f33d38cff30bbfd2435ddc868/kubernetes-12.0.1-py2.py3-none-any.whl
lazy-object-proxy @ file:///Users/dino/Library/Caches/pypoetry/artifacts/18/04/70/fa7b9e82b3409e05c268ba4038442836717cc0255979a9e8cfff1a415f/lazy-object-proxy-1.4.3.tar.gz
mccabe @ file:///Users/dino/Library/Caches/pypoetry/artifacts/96/5e/5f/21ae5296697ca7f94de4da6e21d4936d74029c352a35202e4c339a4253/mccabe-0.6.1-py2.py3-none-any.whl
multidict @ file:///Users/dino/Library/Caches/pypoetry/artifacts/67/72/75/4f22882a49c8f1595c644f316e1bbebbb8f4bbc8bf2de538f928cea588/multidict-5.1.0.tar.gz
oauthlib @ file:///Users/dino/Library/Caches/pypoetry/artifacts/cd/73/ce/de02d263260699199b7d71249cfb85d546e2d53bbf3a508267e87b233e/oauthlib-3.1.0-py2.py3-none-any.whl
pip==20.2.2
pyasn1 @ file:///Users/dino/Library/Caches/pypoetry/artifacts/7b/3a/54/42ce43b579bda01b9d79022fb733811594441e7a32e9f9a5a98f672bdc/pyasn1-0.4.8-py2.py3-none-any.whl
pyasn1-modules @ file:///Users/dino/Library/Caches/pypoetry/artifacts/dd/b8/4f/b56433e0354274a31074995e01b8671751e9f0ed0001f5254e5b03a54f/pyasn1_modules-0.2.8-py2.py3-none-any.whl
pylint @ file:///Users/dino/Library/Caches/pypoetry/artifacts/f5/28/9c/9c127841963caba0fa5310c92162143da8ad0b19de264fb03c7b25d79d/pylint-2.6.0-py3-none-any.whl
python-dateutil @ file:///Users/dino/Library/Caches/pypoetry/artifacts/93/67/cf/49f56d9e954addcfc50e5ffc9faee013c2eb00c6d77d56c6a22cb33b54/python_dateutil-2.8.1-py2.py3-none-any.whl
python-json-logger @ file:///Users/dino/Library/Caches/pypoetry/artifacts/6c/b2/9e/87d24622c6d60716f59f27298a0458888858784bc2ea70a34400f70ce6/python-json-logger-2.0.1.tar.gz
PyYAML @ file:///Users/dino/Library/Caches/pypoetry/artifacts/4a/f4/03/07b8639f883fbaa6f6c0c9af133435a163e11ddcb00ebab6ec3daa09df/PyYAML-5.4.1.tar.gz
requests @ file:///Users/dino/Library/Caches/pypoetry/artifacts/22/0a/9d/0df883fbffbb406d0cddbb35e881e4ac6bfb8f0dee8733056b6a054bf7/requests-2.25.1-py2.py3-none-any.whl
requests-oauthlib @ file:///Users/dino/Library/Caches/pypoetry/artifacts/11/f5/eb/81a5da1da15ae0d7c5c1cc43f729856e59f8a0f09c77051ed1841bd01d/requests_oauthlib-1.3.0-py2.py3-none-any.whl
rope @ file:///Users/dino/Library/Caches/pypoetry/artifacts/e9/fd/6c/b743a9ad0e91e4ddcfdef121030235cd39345a1bd7761143da136fabd6/rope-0.18.0.tar.gz
rsa @ file:///Users/dino/Library/Caches/pypoetry/artifacts/ec/f9/78/8f0a5b86843da4022adb0c5a82223fd59c82e0e973b9150b847207c8a5/rsa-4.7-py3-none-any.whl
setuptools==51.2.0
six @ file:///Users/dino/Library/Caches/pypoetry/artifacts/dd/1c/65/ad0dea11136f5a869f072890a0eea955aa8fc35b90c85c55249fd3abfe/six-1.15.0-py2.py3-none-any.whl
toml @ file:///Users/dino/Library/Caches/pypoetry/artifacts/6b/6a/c9/53b19f7870a77d855e8b05ecdc98193944e5d246dafe11bbcad850ecba/toml-0.10.2-py2.py3-none-any.whl
typing-extensions @ file:///Users/dino/Library/Caches/pypoetry/artifacts/ab/c3/72/446cb2c521f10fc837619e8a7c68ed3c3bd74859bd625b7d74f38a159b/typing_extensions-3.7.4.3-py3-none-any.whl
urllib3 @ file:///Users/dino/Library/Caches/pypoetry/artifacts/3d/49/75/4245c9a53c80e9d437e00720b38959ccd850e173b62242bcea85c1b100/urllib3-1.26.3-py2.py3-none-any.whl
websocket-client @ file:///Users/dino/Library/Caches/pypoetry/artifacts/83/d9/33/524e1bb12489c6d0573175851ad9e59b936282ac76e7a2c4f0308e1406/websocket_client-0.57.0-py2.py3-none-any.whl
wheel==0.36.2
wrapt @ file:///Users/dino/Library/Caches/pypoetry/artifacts/6c/e2/d9/2c022794d212a87320efa16fd1b05654bf6656b6cf0510c072845ecc95/wrapt-1.12.1.tar.gz
yapf @ file:///Users/dino/Library/Caches/pypoetry/artifacts/d4/79/c0/d3be7c7004716c6ab7c5177f8d59ec4d28e9152f045368536bcdd1b8a9/yapf-0.30.0-py2.py3-none-any.whl
yarl @ file:///Users/dino/Library/Caches/pypoetry/artifacts/4a/2e/96/4e7dccdaca47b59e170425a689c820c9b76a11f8ac97501563cc294741/yarl-1.6.3.tar.gz
```
</details>
|
open
|
2021-02-13T22:36:46Z
|
2021-02-13T22:36:46Z
|
https://github.com/nolar/kopf/issues/682
|
[
"bug"
] |
dhensen
| 0
|
thtrieu/darkflow
|
tensorflow
| 1,058
|
Installation problem: env: python\r No such file or directory
|
I am using Mac.
I git cloned the repository and installed using 'pip3 install .'
However, whenever I run 'flow' or './flow', there is an error message:
"env: python\r: No such file or directory"
Any clues on how to solve this?
|
open
|
2019-07-09T07:08:18Z
|
2019-07-09T07:08:18Z
|
https://github.com/thtrieu/darkflow/issues/1058
|
[] |
sleung852
| 0
|
chiphuyen/stanford-tensorflow-tutorials
|
nlp
| 148
|
huber loss equation in eager execution
|
Hi, I was going through the assignments and realised that the huber loss equation was multiplied by two here:
https://github.com/chiphuyen/stanford-tensorflow-tutorials/blob/51e53daaa2a32cfe7a1966f060b28dbbd081791c/examples/04_linreg_eager.py#L43
Any reason for that?
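For reference, the textbook Huber loss (without the extra factor of two) can be sketched in plain Python like this:

```python
def huber(residual, delta=1.0):
    """Standard Huber loss: quadratic within |r| <= delta, linear outside."""
    r = abs(residual)
    if r <= delta:
        return 0.5 * r ** 2
    return delta * (r - 0.5 * delta)
```

Multiplying the loss by two only rescales it (and its gradients), which acts like a change of learning rate and does not move the minimum.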
|
open
|
2019-09-29T15:14:54Z
|
2019-09-29T15:14:54Z
|
https://github.com/chiphuyen/stanford-tensorflow-tutorials/issues/148
|
[] |
fsilavong
| 0
|
ExpDev07/coronavirus-tracker-api
|
fastapi
| 307
|
Question
|
Maybe this is just me being stupid, but when using the API for total recovered, why does it come back as 0, almost as though it isn't being totalled over time?
|
closed
|
2020-04-29T10:55:06Z
|
2020-04-29T11:30:10Z
|
https://github.com/ExpDev07/coronavirus-tracker-api/issues/307
|
[
"bug",
"duplicate"
] |
IfYouWouldStop
| 1
|
recommenders-team/recommenders
|
machine-learning
| 1,670
|
[ASK] The docstring explanation for `LibffmConverter`
|
### Description
Hello Dev Team,
In the docstring of `LibffmConverter`:
https://github.com/microsoft/recommenders/blob/fe215e9babf8f7caba025a83059445362cff0006/recommenders/datasets/pandas_df_utils.py#L112
I'm wondering if this line should instead be the following:
```
i.e. `<field_index>:<field_feature_index>:1` or `<field_index>:<field_feature_index>:<field_feature_value>`
```
Since according to:
https://github.com/microsoft/recommenders/blob/fe215e9babf8f7caba025a83059445362cff0006/recommenders/datasets/pandas_df_utils.py#L225
The returned value is in form of `<field_index>:<field_feature_index>:<field_feature_value>` instead of `<field_index>:<field_index>:<field_feature_value>`.
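For illustration only, the line layout described above can be sketched with a tiny (hypothetical) formatter:

```python
def to_libffm_row(label, triples):
    """Build one libffm line from (field_index, field_feature_index, field_feature_value) triples."""
    parts = [str(label)]
    for field_idx, feat_idx, value in triples:
        parts.append(f"{field_idx}:{feat_idx}:{value}")
    return " ".join(parts)

print(to_libffm_row(1, [(1, 2, 1), (2, 3, 0.5)]))
# 1 1:2:1 2:3:0.5
```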
A PR https://github.com/microsoft/recommenders/pull/1669 has been made to fix this typo if applicable.
Thank you very much.
|
closed
|
2022-03-11T00:09:05Z
|
2022-05-06T08:05:29Z
|
https://github.com/recommenders-team/recommenders/issues/1670
|
[
"help wanted"
] |
Tony-Feng
| 2
|
davidsandberg/facenet
|
computer-vision
| 747
|
No such file or directory: data/pairs.txt
|
I am a beginner with facenet.
When I ran the 6th step (Run the test of Validate on LFW),
an error came out:
No such file or directory: data/pairs.txt
How can I solve it?
|
open
|
2018-05-15T13:05:00Z
|
2018-05-29T13:02:37Z
|
https://github.com/davidsandberg/facenet/issues/747
|
[] |
tangdouzi
| 1
|
vi3k6i5/flashtext
|
nlp
| 120
|
Chinese and Arabic words
|
I tried to train the flashtext model with words from Chinese and Arabic.
After training, the keyword_processor converts the Chinese word '早上好' to '早业好'.
I want to know what you are using to convert '早上好' into '早业好'.
I want to train flashtext to extract keywords for multiple languages.
Please let me know the solution.
Thank you.
|
closed
|
2020-12-19T14:38:25Z
|
2020-12-21T05:24:46Z
|
https://github.com/vi3k6i5/flashtext/issues/120
|
[] |
RohanNayaks
| 0
|
oegedijk/explainerdashboard
|
plotly
| 73
|
Issue with shap='deep' using tensorflow
|
Hello,
I tried the following scenario using an ANN with the latest TensorFlow and shap='deep' (https://github.com/aprimera/explainerdashboard).
However I get the following error:
Traceback (most recent call last):
File "explainer.py", line 92, in <module>
shap='deep',
File "C:\ProgramData\Anaconda3\lib\site-packages\explainerdashboard\explainers.py", line 1727, in __init__
_ = self.shap_explainer
File "C:\ProgramData\Anaconda3\lib\site-packages\explainerdashboard\explainers.py", line 1790, in shap_explainer
self._shap_explainer = shap.DeepExplainer(self.model, self.X_background)
File "C:\ProgramData\Anaconda3\lib\site-packages\shap\explainers\_deep\__init__.py", line 84, in __init__
self.explainer = TFDeep(model, data, session, learning_phase_flags)
File "C:\ProgramData\Anaconda3\lib\site-packages\shap\explainers\_deep\deep_tf.py", line 131, in __init__
self.graph = _get_graph(self)
File "C:\ProgramData\Anaconda3\lib\site-packages\shap\explainers\tf_utils.py", line 46, in _get_graph
return explainer.model_output.graph
AttributeError: 'KerasTensor' object has no attribute 'graph'
After downgrading tensorflow I am getting more errors:
Traceback (most recent call last):
File "explainer.py", line 92, in <module>
shap='deep',
File "C:\ProgramData\Anaconda3\lib\site-packages\explainerdashboard\explainers.py", line 1727, in __init__
_ = self.shap_explainer
File "C:\ProgramData\Anaconda3\lib\site-packages\explainerdashboard\explainers.py", line 1790, in shap_explainer
self._shap_explainer = shap.DeepExplainer(self.model, self.X_background)
File "C:\ProgramData\Anaconda3\lib\site-packages\shap\explainers\_deep\__init__.py", line 84, in __init__
self.explainer = TFDeep(model, data, session, learning_phase_flags)
File "C:\ProgramData\Anaconda3\lib\site-packages\shap\explainers\_deep\deep_tf.py", line 158, in __init__
self.expected_value = tf.reduce_mean(self.model(self.data), 0)
File "C:\Users\prime\AppData\Roaming\Python\Python36\site-packages\tensorflow\python\keras\engine\base_layer.py", line 985, in __call__
outputs = call_fn(inputs, *args, **kwargs)
File "C:\Users\prime\AppData\Roaming\Python\Python36\site-packages\tensorflow\python\keras\engine\sequential.py", line 372, in call
return super(Sequential, self).call(inputs, training=training, mask=mask)
File "C:\Users\prime\AppData\Roaming\Python\Python36\site-packages\tensorflow\python\keras\engine\functional.py", line 386, in call
inputs, training=training, mask=mask)
File "C:\Users\prime\AppData\Roaming\Python\Python36\site-packages\tensorflow\python\keras\engine\functional.py", line 508, in _run_internal_graph
outputs = node.layer(*args, **kwargs)
File "C:\Users\prime\AppData\Roaming\Python\Python36\site-packages\tensorflow\python\keras\engine\base_layer.py", line 985, in __call__
outputs = call_fn(inputs, *args, **kwargs)
File "C:\Users\prime\AppData\Roaming\Python\Python36\site-packages\tensorflow\python\keras\layers\core.py", line 1198, in call
dtype=self._compute_dtype_object)
File "C:\Users\prime\AppData\Roaming\Python\Python36\site-packages\tensorflow\python\keras\layers\ops\core.py", line 45, in dense
if inputs.dtype.base_dtype != dtype.base_dtype:
File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\generic.py", line 4372, in __getattr__
return object.__getattribute__(self, name)
AttributeError: 'DataFrame' object has no attribute 'dtype'
Please do you have any idea what could be causing the issue or potential workarounds?
Thanks!
|
closed
|
2021-01-27T09:50:08Z
|
2021-06-27T20:04:08Z
|
https://github.com/oegedijk/explainerdashboard/issues/73
|
[] |
aprimera
| 2
|
widgetti/solara
|
flask
| 966
|
Error with pytest-playwright 0.6
|
Minimal reproducer:
```
python -m venv clean
source clean/bin/activate
pip install solara[pytest]
```
then create ``test.py`` with:
```python
def test_basic(page_session):
pass
```
And run the tests with:
```
pytest test.py
```
this fails with:
```
@pytest.fixture(scope="session")
def context_session(
browser: "playwright.sync_api.Browser",
browser_context_args: Dict,
pytestconfig: Any,
request: pytest.FixtureRequest,
) -> Generator["playwright.sync_api.BrowserContext", None, None]:
from playwright.sync_api import Error, Page
> from pytest_playwright.pytest_playwright import _build_artifact_test_folder
E ImportError: cannot import name '_build_artifact_test_folder' from 'pytest_playwright.pytest_playwright' (/home/tom/tmp/debug/clean/lib/python3.11/site-packages/pytest_playwright/pytest_playwright.py)
clean/lib/python3.11/site-packages/solara/test/pytest_plugin.py:66: ImportError
```
Downgrading to pytest-playwright 0.5 fixes this. It looks like some private API was removed?
|
closed
|
2025-01-10T12:59:56Z
|
2025-01-16T10:08:00Z
|
https://github.com/widgetti/solara/issues/966
|
[] |
astrofrog
| 1
|
modelscope/data-juicer
|
data-visualization
| 487
|
Checkpointer support for Ray-Mode
|
### Search before continuing 先搜索,再继续
- [X] I have searched the Data-Juicer issues and found no similar feature requests. 我已经搜索了 Data-Juicer 的 issue 列表但是没有发现类似的功能需求。
### Description 描述
Currently, the [dj_ckpt_manager](https://github.com/modelscope/data-juicer/blob/main/data_juicer/utils/ckpt_utils.py#L7) and [executor](https://github.com/modelscope/data-juicer/blob/main/data_juicer/core/executor.py) only support the HF dataset. They essentially perform three actions:
1. Tracks and saves the executed operation list from OP_1 to OP_i.
2. Saves the processed dataset \( D_{op_i} \).
3. Checks and loads \( D_{op_i} \) when the feature is enabled during re-processing.
It would be straightforward to extend this feature into [ray_executor](https://github.com/modelscope/data-juicer/blob/main/data_juicer/core/ray_executor.py). For step 2 and 3, we can implement a few new interfaces for snapshotting Ray Data [states](https://docs.ray.io/en/latest/data/saving-data.html) and using [persistent storage](https://docs.ray.io/en/latest/data/api/dataset.html#i-o-and-conversion).
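As a minimal sketch of steps 1-3 (all names here are hypothetical, and Ray Data I/O is stubbed out with JSON lines purely for illustration; a real version would snapshot via `ds.write_parquet(...)` to persistent storage):

```python
import json
import os


class RayCkptManager:
    """Hypothetical checkpoint manager sketch for Ray mode."""

    def __init__(self, ckpt_dir):
        self.ckpt_dir = ckpt_dir
        self.op_record = os.path.join(ckpt_dir, "ckpt_op.json")
        self.data_path = os.path.join(ckpt_dir, "latest.jsonl")
        os.makedirs(ckpt_dir, exist_ok=True)

    def save(self, executed_ops, rows):
        # 1. Track and save the executed op list OP_1 .. OP_i.
        with open(self.op_record, "w") as f:
            json.dump(executed_ops, f)
        # 2. Snapshot the processed dataset D_{op_i} (stubbed as JSON lines).
        with open(self.data_path, "w") as f:
            for row in rows:
                f.write(json.dumps(row) + "\n")

    def check_and_load(self, planned_ops):
        # 3. On re-processing, resume only if the saved op list is a
        #    prefix of the planned pipeline; otherwise start from scratch.
        if not os.path.exists(self.op_record):
            return None
        with open(self.op_record) as f:
            done = json.load(f)
        if planned_ops[: len(done)] != done:
            return None
        with open(self.data_path) as f:
            return [json.loads(line) for line in f]
```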
### Use case 使用场景
_No response_
### Additional 额外信息
_No response_
### Are you willing to submit a PR for this feature? 您是否乐意为此功能提交一个 PR?
- [X] Yes I'd like to help by submitting a PR! 是的!我愿意提供帮助并提交一个PR!
|
open
|
2024-11-12T11:59:27Z
|
2024-11-14T06:52:27Z
|
https://github.com/modelscope/data-juicer/issues/487
|
[
"enhancement"
] |
yxdyc
| 1
|
amisadmin/fastapi-amis-admin
|
sqlalchemy
| 149
|
Setup id at runtime?
|
I am learning the event system in amis, but this requires an id, so I do:
```python
class TriggerAdminPage(admin.ModelAdmin):
.
.
.
async def get_form_item(
self, request: Request, modelfield: ModelField, action: CrudEnum
) -> Union[FormItem, SchemaNode, None]:
item = await super().get_form_item(request, modelfield, action)
if item.name == Trigger.event.key: # noqa
item.id = item.name  # just the field name
return item
```
but why not just assign the id at runtime? Is there a reason not to?
|
closed
|
2023-12-12T22:06:35Z
|
2023-12-20T12:59:34Z
|
https://github.com/amisadmin/fastapi-amis-admin/issues/149
|
[] |
MatsiukMykola
| 6
|
falconry/falcon
|
api
| 2,310
|
Finalize `cibuildwheel` tooling
|
The cibuildwheel gate is almost ready to rock 🤘.
However, minor cleanup is still needed before we can get 4.0.0a1 out the door in one click:
- [x] ~~Disable or remove the old wheels workflow~~ :arrow_right: deferred to #2311.
- [x] Publish at least `sdist` to the release page (do we really need to also upload all binaries like now?).
- [x] Make sure `sdist` is uploaded first in a separate step, otherwise `PyPI` and `pip` get confused by a release without `sdist` in the interim period.
- [x] Clean up and adapt the script(s) to check the built wheels.
|
closed
|
2024-08-28T21:33:28Z
|
2024-08-30T10:37:08Z
|
https://github.com/falconry/falcon/issues/2310
|
[
"maintenance"
] |
vytas7
| 0
|
scikit-learn/scikit-learn
|
machine-learning
| 30,935
|
The default token pattern in CountVectorizer breaks Indic sentences into non-sensical tokens
|
### Describe the bug
The default `token_pattern` in `CountVectorizer` is `r"(?u)\b\w\w+\b"`, which tokenizes Indic text incorrectly: it breaks whitespace-tokenized words into multiple chunks and even omits several valid characters. The resulting vocabulary doesn't make any sense!
Is this the expected behaviour?
Sample code is pasted in the sections below
### Steps/Code to Reproduce
```
import sklearn
from sklearn.feature_extraction.text import CountVectorizer
tel = ["ప్రధానమంత్రిని కలుసుకున్నారు"]
hin = ["आधुनिक मानक हिन्दी"]
eng = ["They met the Prime Minister"]
cvect = CountVectorizer(
ngram_range=(1, 1),
max_features=None,
min_df=1,
strip_accents=None,
)
cvect.fit(tel + hin + eng)
print(cvect.vocabulary_)
```
### Expected Results
```
{'ప్రధానమంత్రిని': 9, 'కలుసుకున్నారు': 8, 'आधुनिक': 5, 'मानक': 6, 'हिन्दी': 7, 'they': 4, 'met': 0, 'the': 3, 'prime': 2, 'minister': 1}
```
### Actual Results
```
{'రధ': 9, 'నమ': 8, 'కల': 7, 'आध': 5, 'नक': 6, 'they': 4, 'met': 0, 'the': 3, 'prime': 2, 'minister': 1}
```
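A possible workaround (an assumption on my part, not a confirmed fix) is to bypass the regex entirely with a whitespace tokenizer, e.g. `CountVectorizer(tokenizer=str.split, token_pattern=None)`. The stdlib `re` module shows why the default pattern breaks the words:

```python
import re

text = "ప్రధానమంత్రిని కలుసుకున్నారు"

# The default pattern treats Telugu combining marks (e.g. the virama)
# as non-word characters, so each word is split at every mark.
default_tokens = re.findall(r"(?u)\b\w\w+\b", text)

# Whitespace splitting keeps the words whole.
whitespace_tokens = text.split()
# ['ప్రధానమంత్రిని', 'కలుసుకున్నారు']
```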
### Versions
```
System:
python: 3.11.10 (main, Oct 3 2024, 07:29:13) [GCC 11.2.0]
executable: miniconda3/envs/lolm/bin/python
machine: Linux-6.1.0-25-amd64-x86_64-with-glibc2.36
Python dependencies:
sklearn: 1.5.2
pip: 24.2
setuptools: 75.1.0
numpy: 1.26.0
scipy: 1.14.1
Cython: None
pandas: 2.2.3
matplotlib: 3.9.2
joblib: 1.4.2
threadpoolctl: 3.5.0
Built with OpenMP: True
threadpoolctl info:
user_api: blas
internal_api: mkl
num_threads: 1
prefix: libmkl_rt
filepath: miniconda3/envs/lolm/lib/libmkl_rt.so.2
version: 2023.1-Product
threading_layer: gnu
user_api: openmp
internal_api: openmp
num_threads: 1
prefix: libgomp
filepath: miniconda3/envs/lolm/lib/libgomp.so.1.0.0
version: None
```
|
open
|
2025-03-03T13:55:23Z
|
2025-03-06T11:49:56Z
|
https://github.com/scikit-learn/scikit-learn/issues/30935
|
[
"Bug"
] |
skesiraju
| 3
|
deepfakes/faceswap
|
machine-learning
| 1,227
|
Training mode error with Dlight faceswap GUI on Ubuntu 22.04
|
**Describe the bug**
Error when starting training with Dlight
Setting Faceswap backend to NVIDIA
05/29/2022 21:10:09 INFO Log level set to: INFO
05/29/2022 21:10:10 INFO Model A Directory: '/home/cedric/LAB/faceswap/workspace/faceAdst' (1305 images)
05/29/2022 21:10:10 INFO Model B Directory: '/home/cedric/LAB/faceswap/workspace/faceBsrc' (596 images)
05/29/2022 21:10:10 INFO Training data directory: /home/cedric/LAB/faceswap/workspace/modelAB
05/29/2022 21:10:10 INFO ===================================================
05/29/2022 21:10:10 INFO Starting
05/29/2022 21:10:10 INFO ===================================================
05/29/2022 21:10:11 INFO Loading data, this may take a while...
05/29/2022 21:10:11 INFO Loading Model from Dlight plugin...
05/29/2022 21:10:11 INFO No existing state file found. Generating.
05/29/2022 21:10:11 INFO Enabling Mixed Precision Training.
05/29/2022 21:10:11 INFO Mixed precision compatibility check (mixed_float16): OK\nYour GPU will likely run quickly with dtype policy mixed_float16 as it has compute capability of at least 7.0. Your GPU: NVIDIA GeForce RTX 3060 Laptop GPU, compute capability 8.6
05/29/2022 21:10:12 INFO Loading Trainer from Original plugin...
05/29/2022 21:10:36 INFO [Saved models] - Average loss since last save: face_a: 0.16586, face_b: 0.20392
Exception in Tkinter callback
Traceback (most recent call last):
File "/home/cedric/anaconda3/envs/faceswap/lib/python3.9/tkinter/__init__.py", line 1892, in __call__
return self.func(*args)
File "/home/cedric/anaconda3/envs/faceswap/lib/python3.9/tkinter/__init__.py", line 814, in callit
func(*args)
File "/home/cedric/LAB/faceswap/lib/gui/display_graph.py", line 364, in refresh
self._calcs = self._thread.get_result() # Terminate the LongRunningTask object
File "/home/cedric/LAB/faceswap/lib/gui/utils.py", line 1263, in get_result
raise self.err[1].with_traceback(self.err[2])
File "/home/cedric/LAB/faceswap/lib/gui/utils.py", line 1234, in run
retval = self._target(*self._args, **self._kwargs)
File "/home/cedric/LAB/faceswap/lib/gui/analysis/stats.py", line 565, in refresh
self._get_raw()
File "/home/cedric/LAB/faceswap/lib/gui/analysis/stats.py", line 628, in _get_raw
loss_dict = _SESSION.get_loss(self._session_id)
File "/home/cedric/LAB/faceswap/lib/gui/analysis/stats.py", line 174, in get_loss
loss_dict = self._tb_logs.get_loss(session_id=session_id)
File "/home/cedric/LAB/faceswap/lib/gui/analysis/event_reader.py", line 489, in get_loss
self._check_cache(idx)
File "/home/cedric/LAB/faceswap/lib/gui/analysis/event_reader.py", line 463, in _check_cache
self._cache_data(session_id)
File "/home/cedric/LAB/faceswap/lib/gui/analysis/event_reader.py", line 448, in _cache_data
iterator = self._training_iterator if live_data else tf.compat.v1.io.tf_record_iterator(
File "/home/cedric/anaconda3/envs/faceswap/lib/python3.9/site-packages/tensorflow/python/util/deprecation.py", line 344, in new_func
return func(*args, **kwargs)
File "/home/cedric/anaconda3/envs/faceswap/lib/python3.9/site-packages/tensorflow/python/lib/io/tf_record.py", line 167, in tf_record_iterator
return _pywrap_record_io.RecordIterator(path, compression_type)
TypeError: __init__(): incompatible constructor arguments. The following argument types are supported:
1. tensorflow.python.lib.io._pywrap_record_io.RecordIterator(arg0: str, arg1: str)
Invoked with: None, ''
05/29/2022 21:10:57 CRITICAL Error caught! Exiting...
05/29/2022 21:10:57 ERROR Caught exception in thread: '_training_0'
05/29/2022 21:10:57 ERROR You do not have enough GPU memory available to train the selected model at the selected settings. You can try a number of things:
05/29/2022 21:10:57 ERROR 1) Close any other application that is using your GPU (web browsers are particularly bad for this).
05/29/2022 21:10:57 ERROR 2) Lower the batchsize (the amount of images fed into the model each iteration).
05/29/2022 21:10:57 ERROR 3) Try enabling 'Mixed Precision' training.
05/29/2022 21:10:57 ERROR 4) Use a more lightweight model, or select the model's 'LowMem' option (in config) if it has one.
Process exited.
Exception in Tkinter callback
Traceback (most recent call last):
File "/home/cedric/anaconda3/envs/faceswap/lib/python3.9/tkinter/__init__.py", line 1892, in __call__
return self.func(*args)
File "/home/cedric/LAB/faceswap/lib/gui/display_graph.py", line 364, in refresh
self._calcs = self._thread.get_result() # Terminate the LongRunningTask object
File "/home/cedric/LAB/faceswap/lib/gui/utils.py", line 1263, in get_result
raise self.err[1].with_traceback(self.err[2])
File "/home/cedric/LAB/faceswap/lib/gui/utils.py", line 1234, in run
retval = self._target(*self._args, **self._kwargs)
File "/home/cedric/LAB/faceswap/lib/gui/analysis/stats.py", line 565, in refresh
self._get_raw()
File "/home/cedric/LAB/faceswap/lib/gui/analysis/stats.py", line 628, in _get_raw
loss_dict = _SESSION.get_loss(self._session_id)
File "/home/cedric/LAB/faceswap/lib/gui/analysis/stats.py", line 174, in get_loss
loss_dict = self._tb_logs.get_loss(session_id=session_id)
File "/home/cedric/LAB/faceswap/lib/gui/analysis/event_reader.py", line 489, in get_loss
self._check_cache(idx)
File "/home/cedric/LAB/faceswap/lib/gui/analysis/event_reader.py", line 463, in _check_cache
self._cache_data(session_id)
File "/home/cedric/LAB/faceswap/lib/gui/analysis/event_reader.py", line 448, in _cache_data
iterator = self._training_iterator if live_data else tf.compat.v1.io.tf_record_iterator(
File "/home/cedric/anaconda3/envs/faceswap/lib/python3.9/site-packages/tensorflow/python/util/deprecation.py", line 344, in new_func
return func(*args, **kwargs)
File "/home/cedric/anaconda3/envs/faceswap/lib/python3.9/site-packages/tensorflow/python/lib/io/tf_record.py", line 167, in tf_record_iterator
return _pywrap_record_io.RecordIterator(path, compression_type)
TypeError: __init__(): incompatible constructor arguments. The following argument types are supported:
1. tensorflow.python.lib.io._pywrap_record_io.RecordIterator(arg0: str, arg1: str)
Invoked with: None, ''
Exception in Tkinter callback
Traceback (most recent call last):
File "/home/cedric/anaconda3/envs/faceswap/lib/python3.9/tkinter/__init__.py", line 1892, in __call__
return self.func(*args)
File "/home/cedric/LAB/faceswap/lib/gui/display_graph.py", line 364, in refresh
self._calcs = self._thread.get_result() # Terminate the LongRunningTask object
File "/home/cedric/LAB/faceswap/lib/gui/utils.py", line 1263, in get_result
raise self.err[1].with_traceback(self.err[2])
File "/home/cedric/LAB/faceswap/lib/gui/utils.py", line 1234, in run
retval = self._target(*self._args, **self._kwargs)
File "/home/cedric/LAB/faceswap/lib/gui/analysis/stats.py", line 565, in refresh
self._get_raw()
File "/home/cedric/LAB/faceswap/lib/gui/analysis/stats.py", line 628, in _get_raw
loss_dict = _SESSION.get_loss(self._session_id)
File "/home/cedric/LAB/faceswap/lib/gui/analysis/stats.py", line 174, in get_loss
loss_dict = self._tb_logs.get_loss(session_id=session_id)
File "/home/cedric/LAB/faceswap/lib/gui/analysis/event_reader.py", line 489, in get_loss
self._check_cache(idx)
File "/home/cedric/LAB/faceswap/lib/gui/analysis/event_reader.py", line 463, in _check_cache
self._cache_data(session_id)
File "/home/cedric/LAB/faceswap/lib/gui/analysis/event_reader.py", line 448, in _cache_data
iterator = self._training_iterator if live_data else tf.compat.v1.io.tf_record_iterator(
File "/home/cedric/anaconda3/envs/faceswap/lib/python3.9/site-packages/tensorflow/python/util/deprecation.py", line 344, in new_func
return func(*args, **kwargs)
File "/home/cedric/anaconda3/envs/faceswap/lib/python3.9/site-packages/tensorflow/python/lib/io/tf_record.py", line 167, in tf_record_iterator
return _pywrap_record_io.RecordIterator(path, compression_type)
TypeError: __init__(): incompatible constructor arguments. The following argument types are supported:
1. tensorflow.python.lib.io._pywrap_record_io.RecordIterator(arg0: str, arg1: str)
Invoked with: None, ''
Exception in Tkinter callback
Traceback (most recent call last):
File "/home/cedric/anaconda3/envs/faceswap/lib/python3.9/tkinter/__init__.py", line 1892, in __call__
return self.func(*args)
File "/home/cedric/anaconda3/envs/faceswap/lib/python3.9/tkinter/__init__.py", line 814, in callit
func(*args)
File "/home/cedric/LAB/faceswap/lib/gui/display_graph.py", line 364, in refresh
self._calcs = self._thread.get_result() # Terminate the LongRunningTask object
File "/home/cedric/LAB/faceswap/lib/gui/utils.py", line 1263, in get_result
raise self.err[1].with_traceback(self.err[2])
File "/home/cedric/LAB/faceswap/lib/gui/utils.py", line 1234, in run
retval = self._target(*self._args, **self._kwargs)
File "/home/cedric/LAB/faceswap/lib/gui/analysis/stats.py", line 565, in refresh
self._get_raw()
File "/home/cedric/LAB/faceswap/lib/gui/analysis/stats.py", line 628, in _get_raw
loss_dict = _SESSION.get_loss(self._session_id)
File "/home/cedric/LAB/faceswap/lib/gui/analysis/stats.py", line 174, in get_loss
loss_dict = self._tb_logs.get_loss(session_id=session_id)
File "/home/cedric/LAB/faceswap/lib/gui/analysis/event_reader.py", line 489, in get_loss
self._check_cache(idx)
File "/home/cedric/LAB/faceswap/lib/gui/analysis/event_reader.py", line 463, in _check_cache
self._cache_data(session_id)
File "/home/cedric/LAB/faceswap/lib/gui/analysis/event_reader.py", line 448, in _cache_data
iterator = self._training_iterator if live_data else tf.compat.v1.io.tf_record_iterator(
File "/home/cedric/anaconda3/envs/faceswap/lib/python3.9/site-packages/tensorflow/python/util/deprecation.py", line 344, in new_func
return func(*args, **kwargs)
File "/home/cedric/anaconda3/envs/faceswap/lib/python3.9/site-packages/tensorflow/python/lib/io/tf_record.py", line 167, in tf_record_iterator
return _pywrap_record_io.RecordIterator(path, compression_type)
TypeError: __init__(): incompatible constructor arguments. The following argument types are supported:
1. tensorflow.python.lib.io._pywrap_record_io.RecordIterator(arg0: str, arg1: str)
Invoked with: None, ''
**Desktop (please complete the following information):**
- OS: ubuntu 22.04
- Python Version 3.9
- Conda Version Latest
**Crash Report**
05/29/2022 21:10:09 MainProcess MainThread logger log_setup INFO Log level set to: INFO
05/29/2022 21:10:10 MainProcess MainThread train _get_images INFO Model A Directory: '/home/cedric/LAB/faceswap/workspace/faceAdst' (1305 images)
05/29/2022 21:10:10 MainProcess MainThread train _get_images INFO Model B Directory: '/home/cedric/LAB/faceswap/workspace/faceBsrc' (596 images)
05/29/2022 21:10:10 MainProcess MainThread train process INFO Training data directory: /home/cedric/LAB/faceswap/workspace/modelAB
05/29/2022 21:10:10 MainProcess MainThread train _monitor INFO ===================================================
05/29/2022 21:10:10 MainProcess MainThread train _monitor INFO Starting
05/29/2022 21:10:10 MainProcess MainThread train _monitor INFO ===================================================
05/29/2022 21:10:11 MainProcess _training_0 train _training INFO Loading data, this may take a while...
05/29/2022 21:10:11 MainProcess _training_0 plugin_loader _import INFO Loading Model from Dlight plugin...
05/29/2022 21:10:11 MainProcess _training_0 _base _load INFO No existing state file found. Generating.
05/29/2022 21:10:11 MainProcess _training_0 _base _set_keras_mixed_precision INFO Enabling Mixed Precision Training.
05/29/2022 21:10:11 MainProcess _training_0 device_compatibility_check _log_device_compatibility_check INFO Mixed precision compatibility check (mixed_float16): OK\nYour GPU will likely run quickly with dtype policy mixed_float16 as it has compute capability of at least 7.0. Your GPU: NVIDIA GeForce RTX 3060 Laptop GPU, compute capability 8.6
05/29/2022 21:10:12 MainProcess _training_0 plugin_loader _import INFO Loading Trainer from Original plugin...
05/29/2022 21:10:36 MainProcess _training_0 _base _save INFO [Saved models] - Average loss since last save: face_a: 0.16586, face_b: 0.20392
05/29/2022 21:10:57 MainProcess MainThread train _end_thread CRITICAL Error caught! Exiting...
05/29/2022 21:10:57 MainProcess MainThread multithreading join ERROR Caught exception in thread: '_training_0'
05/29/2022 21:10:57 MainProcess MainThread launcher execute_script ERROR You do not have enough GPU memory available to train the selected model at the selected settings. You can try a number of things:
05/29/2022 21:10:57 MainProcess MainThread launcher execute_script ERROR 1) Close any other application that is using your GPU (web browsers are particularly bad for this).
05/29/2022 21:10:57 MainProcess MainThread launcher execute_script ERROR 2) Lower the batchsize (the amount of images fed into the model each iteration).
05/29/2022 21:10:57 MainProcess MainThread launcher execute_script ERROR 3) Try enabling 'Mixed Precision' training.
05/29/2022 21:10:57 MainProcess MainThread launcher execute_script ERROR 4) Use a more lightweight model, or select the model's 'LowMem' option (in config) if it has one.
|
closed
|
2022-05-29T19:18:06Z
|
2022-05-29T21:32:29Z
|
https://github.com/deepfakes/faceswap/issues/1227
|
[] |
gravitydeep
| 1
|
ets-labs/python-dependency-injector
|
asyncio
| 805
|
can't install on Macbook M2 apple silicon
|
**python version** : 3.10.14
**OS: macOS** : 14.4.1
**dependency-injector**: 4.41.0
I want to install this library on my machine, but I am getting this error:
```bash
(venv) ➜ microservices git:(main) ✗ pip install dependency-injector
Collecting dependency-injector
Downloading dependency-injector-4.41.0.tar.gz (913 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 913.2/913.2 kB 392.8 kB/s eta 0:00:00
Preparing metadata (setup.py) ... done
Requirement already satisfied: six<=1.16.0,>=1.7.0 in ./venv/lib/python3.10/site-packages (from dependency-injector) (1.16.0)
Installing collected packages: dependency-injector
DEPRECATION: dependency-injector is being installed using the legacy 'setup.py install' method, because it does not have a 'pyproject.toml' and the 'wheel' package is not installed. pip 23.1 will enforce this behaviour change. A possible replacement is to enable the '--use-pep517' option. Discussion can be found at https://github.com/pypa/pip/issues/8559
Running setup.py install for dependency-injector ... error
error: subprocess-exited-with-error
× Running setup.py install for dependency-injector did not run successfully.
│ exit code: 1
╰─> [34 lines of output]
running install
/Users/ali/aban-tether/microservices/venv/lib/python3.10/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
running build
running build_py
creating build
creating build/lib.macosx-11.0-arm64-cpython-310
creating build/lib.macosx-11.0-arm64-cpython-310/dependency_injector
copying src/dependency_injector/__init__.py -> build/lib.macosx-11.0-arm64-cpython-310/dependency_injector
copying src/dependency_injector/resources.py -> build/lib.macosx-11.0-arm64-cpython-310/dependency_injector
copying src/dependency_injector/errors.py -> build/lib.macosx-11.0-arm64-cpython-310/dependency_injector
copying src/dependency_injector/schema.py -> build/lib.macosx-11.0-arm64-cpython-310/dependency_injector
copying src/dependency_injector/wiring.py -> build/lib.macosx-11.0-arm64-cpython-310/dependency_injector
creating build/lib.macosx-11.0-arm64-cpython-310/dependency_injector/ext
copying src/dependency_injector/ext/aiohttp.py -> build/lib.macosx-11.0-arm64-cpython-310/dependency_injector/ext
copying src/dependency_injector/ext/flask.py -> build/lib.macosx-11.0-arm64-cpython-310/dependency_injector/ext
copying src/dependency_injector/ext/__init__.py -> build/lib.macosx-11.0-arm64-cpython-310/dependency_injector/ext
copying src/dependency_injector/providers.pxd -> build/lib.macosx-11.0-arm64-cpython-310/dependency_injector
copying src/dependency_injector/containers.pxd -> build/lib.macosx-11.0-arm64-cpython-310/dependency_injector
copying src/dependency_injector/containers.pyi -> build/lib.macosx-11.0-arm64-cpython-310/dependency_injector
copying src/dependency_injector/__init__.pyi -> build/lib.macosx-11.0-arm64-cpython-310/dependency_injector
copying src/dependency_injector/providers.pyi -> build/lib.macosx-11.0-arm64-cpython-310/dependency_injector
copying src/dependency_injector/py.typed -> build/lib.macosx-11.0-arm64-cpython-310/dependency_injector
running build_ext
building 'dependency_injector.containers' extension
creating build/temp.macosx-11.0-arm64-cpython-310
creating build/temp.macosx-11.0-arm64-cpython-310/src
creating build/temp.macosx-11.0-arm64-cpython-310/src/dependency_injector
clang -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -arch arm64 -mmacosx-version-min=11.0 -Wno-nullability-completeness -Wno-expansion-to-defined -Wno-undef-prefix -fPIC -Werror=unguarded-availability-new -DCYTHON_CLINE_IN_TRACEBACK=0 -I/Users/ali/aban-tether/microservices/venv/include -I/install/include/python3.10 -c src/dependency_injector/containers.c -o build/temp.macosx-11.0-arm64-cpython-310/src/dependency_injector/containers.o -O2
src/dependency_injector/containers.c:6:10: fatal error: 'Python.h' file not found
#include "Python.h"
^~~~~~~~~~
1 error generated.
error: command '/usr/bin/clang' failed with exit code 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure
× Encountered error while trying to install package.
╰─> dependency-injector
note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.
```
|
closed
|
2024-07-26T08:49:30Z
|
2024-12-10T14:03:31Z
|
https://github.com/ets-labs/python-dependency-injector/issues/805
|
[
"bug"
] |
alm0ra
| 3
|
labmlai/annotated_deep_learning_paper_implementations
|
pytorch
| 263
|
LORA
|
An implementation of LORA and other tuning techniques would be nice.
|
open
|
2024-07-13T17:29:05Z
|
2024-07-31T13:42:12Z
|
https://github.com/labmlai/annotated_deep_learning_paper_implementations/issues/263
|
[] |
erlebach
| 2
|
modoboa/modoboa
|
django
| 2,474
|
Installation on Rocky Linux not possible
|
The installer says:
```
Traceback (most recent call last):
  File "./run.py", line 13, in <module>
    from modoboa_installer import package
  File "/root/modoboa-installer/modoboa_installer/package.py", line 121, in <module>
    backend = get_backend()
  File "/root/modoboa-installer/modoboa_installer/package.py", line 117, in get_backend
    "Sorry, this distribution is not supported yet.")
NotImplementedError: Sorry, this distribution is not supported yet.
```
Why?
Rocky Linux is 100% compatible with CentOS/RHEL.
|
closed
|
2022-03-11T07:53:13Z
|
2022-03-25T08:15:01Z
|
https://github.com/modoboa/modoboa/issues/2474
|
[] |
42deluxe
| 1
|
autogluon/autogluon
|
data-science
| 4,186
|
NeuralNetTorch Hyperparameter Tuning Fails with URI Scheme Error in PyArrow
|
Bug Report Checklist
<!-- Please ensure at least one of the following to help the developers troubleshoot the problem: -->
[V] I provided code that demonstrates a minimal reproducible example. <!-- Ideal, especially via source install -->
[X] I confirmed bug exists on the latest mainline of AutoGluon via source install. <!-- Preferred -->
[V] I confirmed bug exists on the latest stable version of AutoGluon. <!-- Unnecessary if prior items are checked -->
Describe the bug
<!-- A clear and concise description of what the bug is. -->
When attempting to run hyperparameter tuning with NN_TORCH as the model on a TabularDataset, an exception is thrown related to URI handling in the pyarrow library. The error message indicates an "ArrowInvalid: URI has empty scheme".
Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
The training should proceed without errors, and the model should handle the URI scheme appropriately or provide more specific guidance on expected URI formats.
To Reproduce
<!-- A minimal script to reproduce the issue. Links to Colab notebooks or similar tools are encouraged.
If the code is too long, feel free to put it in a public gist and link it in the issue: https://gist.github.com.
In short, we are going to copy-paste your code to run it and we expect to get the same result as you. -->
Install a fresh environment with Python 3.10 and AutoGluon 1.1.0
Run the following script:
```
from autogluon.tabular import TabularDataset, TabularPredictor
data_url = 'https://raw.githubusercontent.com/mli/ag-docs/main/knot_theory/'
train_data = TabularDataset(f'{data_url}train.csv')
label = 'signature'
hp_args = {"num_trials": 3, "scheduler": "local", "searcher": "random"}
fit_args = {"hyperparameter_tune_kwargs": hp_args, "included_model_types": ["NN_TORCH"]}
predictor = TabularPredictor(label=label).fit(train_data, **fit_args)
```
Screenshots / Logs
<!-- If applicable, add screenshots or logs to help explain your problem. -->
Logs from the error:
`pyarrow.lib.ArrowInvalid: URI has empty scheme: 'AutogluonModels/ag-20240509_084509/models`
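Not part of the original report, but for illustration: pyarrow rejects bare relative paths when it expects a URI, and converting the model path to an absolute `file://` URI sidesteps the "empty scheme" error. The helper name below is hypothetical, not AutoGluon API:

```python
from pathlib import Path

def to_file_uri(path: str) -> str:
    # pyarrow raises "URI has empty scheme" for bare relative paths;
    # resolving to an absolute path and formatting it as a file:// URI
    # gives it an explicit scheme to parse.
    return Path(path).resolve().as_uri()
```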
Installed Versions
<!-- Please run the following code snippet: -->
<details>
INSTALLED VERSIONS
------------------
date : 2024-05-09
time : 08:47:49.707205
python : 3.10.14.final.0
OS : Linux
OS-release : 5.15.0-1040-azure
Version : #47~20.04.1-Ubuntu SMP Fri Jun 2 21:38:08 UTC 2023
machine : x86_64
processor : x86_64
num_cores : 16
cpu_ram_mb : 128812.6796875
cuda version : None
num_gpus : 0
gpu_ram_mb : []
avail_disk_size_mb : 4284286
accelerate : 0.21.0
autogluon : 1.1.0
autogluon.common : 1.1.0
autogluon.core : 1.1.0
autogluon.features : 1.1.0
autogluon.multimodal : 1.1.0
autogluon.tabular : 1.1.0
autogluon.timeseries : 1.1.0
boto3 : 1.34.101
catboost : 1.2.5
defusedxml : 0.7.1
evaluate : 0.4.2
fastai : 2.7.15
gluonts : 0.14.3
hyperopt : 0.2.7
imodels : None
jinja2 : 3.1.4
joblib : 1.4.2
jsonschema : 4.21.1
lightgbm : 4.3.0
lightning : 2.1.4
matplotlib : 3.8.4
mlforecast : 0.10.0
networkx : 3.3
nlpaug : 1.1.11
nltk : 3.8.1
nptyping : 2.4.1
numpy : 1.26.4
nvidia-ml-py3 : 7.352.0
omegaconf : 2.2.3
onnxruntime-gpu : None
openmim : 0.3.9
optimum : 1.18.1
optimum-intel : None
orjson : 3.10.3
pandas : 2.2.2
pdf2image : 1.17.0
Pillow : 10.3.0
psutil : 5.9.8
pytesseract : 0.3.10
pytorch-lightning : 2.1.4
pytorch-metric-learning: 2.3.0
ray : 2.10.0
requests : 2.28.2
scikit-image : 0.20.0
scikit-learn : 1.4.0
scikit-learn-intelex : None
scipy : 1.12.0
seqeval : 1.2.2
setuptools : 60.2.0
skl2onnx : None
statsforecast : 1.4.0
tabpfn : None
tensorboard : 2.16.2
text-unidecode : 1.3
timm : 0.9.16
torch : 2.1.2
torchmetrics : 1.2.1
torchvision : 0.16.2
tqdm : 4.65.2
transformers : 4.38.2
utilsforecast : 0.0.10
vowpalwabbit : None
xgboost : 2.0.3
</details>
|
closed
|
2024-05-09T08:54:45Z
|
2024-05-28T19:57:54Z
|
https://github.com/autogluon/autogluon/issues/4186
|
[
"bug",
"module: tabular"
] |
giladrubin1
| 1
|
AUTOMATIC1111/stable-diffusion-webui
|
deep-learning
| 15,523
|
[Bug]: Webui Infotext settings not working correctly
|
### What happened?
I added options in Settings to remove all adetailer infotext so as to have simple and clean infotext, but adetailer info continues to be saved in the infotext. I don't know if this bug is limited to adetailer or exists for all extensions.
### Steps to reproduce the problem
Add what you don't want in infotext like image below

### What should have happened?
The adetailer (and other) info must not be inserted into the infotext if it has been added to the exclusion fields.
Webui v1.8
|
open
|
2024-04-15T08:12:25Z
|
2024-04-15T08:24:36Z
|
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15523
|
[
"bug-report"
] |
ema7569
| 0
|
developmentseed/lonboard
|
data-visualization
| 282
|
Warn on more than 255 (256?) layers
|
deck.gl picking will stop after 255 layers. Note that each parquet chunk is rendered as one layer. I hit this when testing rendering all of the California MSFT buildings (11M buildings)!
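A minimal sketch of the requested warning, assuming the chunk count is known up front (the constant and function names are illustrative, not lonboard API):

```python
import warnings

# deck.gl's picking limit mentioned above; one layer per parquet chunk.
MAX_PICKABLE_LAYERS = 255

def warn_if_too_many_layers(num_chunks: int) -> None:
    # Emit a warning before rendering when picking would silently stop.
    if num_chunks > MAX_PICKABLE_LAYERS:
        warnings.warn(
            f"{num_chunks} layers exceeds deck.gl's picking limit of "
            f"{MAX_PICKABLE_LAYERS}; picking will stop working."
        )
```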
|
open
|
2023-12-01T22:12:09Z
|
2024-03-14T01:12:51Z
|
https://github.com/developmentseed/lonboard/issues/282
|
[] |
kylebarron
| 2
|
huggingface/transformers
|
machine-learning
| 36,295
|
[Bugs] RuntimeError: No CUDA GPUs are available in transformers v4.48.0 or above when running Ray RLHF example
|
### System Info
- `transformers` version: 4.48.0
- Platform: Linux-3.10.0-1127.el7.x86_64-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.27.1
- Safetensors version: 0.5.2
- Accelerate version: 1.0.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: Yes
- Using GPU in script?: Yes
- GPU type: NVIDIA A800-SXM4-80GB
### Who can help?
@ArthurZucker
### Information
- [x] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Hi for all!
I failed to run the vLLM project RLHF example script. The code is exactly same as the vLLM docs page: https://docs.vllm.ai/en/latest/getting_started/examples/rlhf.html
The error messages are:
```
(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574] Error executing method 'init_device'. This might cause deadlock in distributed execution.
(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574] Traceback (most recent call last):
(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574] File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/worker/worker_base.py", line 566, in execute_method
(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574] return run_method(target, method, args, kwargs)
(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574] File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/utils.py", line 2220, in run_method
(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574] return func(*args, **kwargs)
(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574] File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/worker/worker.py", line 155, in init_device
(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574] torch.cuda.set_device(self.device)
(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574] File "/usr/local/miniconda3/lib/python3.10/site-packages/torch/cuda/__init__.py", line 478, in set_device
(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574] torch._C._cuda_setDevice(device)
(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574] File "/usr/local/miniconda3/lib/python3.10/site-packages/torch/cuda/__init__.py", line 319, in _lazy_init
(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574] torch._C._cuda_init()
(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574] RuntimeError: No CUDA GPUs are available
(MyLLM pid=70946) Exception raised in creation task: The actor died because of an error raised in its creation task, ray::MyLLM.__init__() (pid=70946, ip=11.163.37.230, actor_id=202b48118215566c51057a0101000000, repr=<test_ray_vllm_rlhf.MyLLM object at 0x7fb7453669b0>)
(MyLLM pid=70946) File "/data/cfs/workspace/test_ray_vllm_rlhf.py", line 96, in __init__
(MyLLM pid=70946) super().__init__(*args, **kwargs)
(MyLLM pid=70946) File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/utils.py", line 1051, in inner
(MyLLM pid=70946) return fn(*args, **kwargs)
(MyLLM pid=70946) File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/entrypoints/llm.py", line 242, in __init__
(MyLLM pid=70946) self.llm_engine = self.engine_class.from_engine_args(
(MyLLM pid=70946) File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 484, in from_engine_args
(MyLLM pid=70946) engine = cls(
(MyLLM pid=70946) File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 273, in __init__
(MyLLM pid=70946) self.model_executor = executor_class(vllm_config=vllm_config, )
(MyLLM pid=70946) File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/executor/executor_base.py", line 262, in __init__
(MyLLM pid=70946) super().__init__(*args, **kwargs)
(MyLLM pid=70946) File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/executor/executor_base.py", line 51, in __init__
(MyLLM pid=70946) self._init_executor()
(MyLLM pid=70946) File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/executor/ray_distributed_executor.py", line 90, in _init_executor
(MyLLM pid=70946) self._init_workers_ray(placement_group)
(MyLLM pid=70946) File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/executor/ray_distributed_executor.py", line 355, in _init_workers_ray
(MyLLM pid=70946) self._run_workers("init_device")
(MyLLM pid=70946) File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/executor/ray_distributed_executor.py", line 476, in _run_workers
(MyLLM pid=70946) self.driver_worker.execute_method(sent_method, *args, **kwargs)
(MyLLM pid=70946) File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/worker/worker_base.py", line 575, in execute_method
(MyLLM pid=70946) raise e
(MyLLM pid=70946) File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/worker/worker_base.py", line 566, in execute_method
(MyLLM pid=70946) return run_method(target, method, args, kwargs)
(MyLLM pid=70946) File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/utils.py", line 2220, in run_method
(MyLLM pid=70946) return func(*args, **kwargs)
(MyLLM pid=70946) File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/worker/worker.py", line 155, in init_device
(MyLLM pid=70946) torch.cuda.set_device(self.device)
(MyLLM pid=70946) File "/usr/local/miniconda3/lib/python3.10/site-packages/torch/cuda/__init__.py", line 478, in set_device
(MyLLM pid=70946) torch._C._cuda_setDevice(device)
(MyLLM pid=70946) File "/usr/local/miniconda3/lib/python3.10/site-packages/torch/cuda/__init__.py", line 319, in _lazy_init
(MyLLM pid=70946) torch._C._cuda_init()
(MyLLM pid=70946) RuntimeError: No CUDA GPUs are available
```
I found that with transformers==4.47.1 the script runs normally. However, with transformers==4.48.0, 4.48.1, and 4.49.0 I get the error messages above. I then compared environments with `pip list` and found that only the transformers versions differ.
I've tried changing the vllm version between 0.7.0 and 0.7.2; the behavior is the same.
Related Ray issues:
* https://github.com/vllm-project/vllm/issues/13597
* https://github.com/vllm-project/vllm/issues/13230
### Expected behavior
The script runs normally.
|
open
|
2025-02-20T07:58:49Z
|
2025-03-22T08:03:03Z
|
https://github.com/huggingface/transformers/issues/36295
|
[
"bug"
] |
ArthurinRUC
| 3
|
littlecodersh/ItChat
|
api
| 786
|
Possibly not an issue
|
Hello, I am learning Python using your itchat. The program is great, and I call it a lot, trying things one by one. Following the docs I registered a real Tuling bot reply for isAt messages, but I often get an UnboundLocalError and I don't quite understand where the problem is. I hope you can advise. Thanks.
Here is demo.py:
```
@itchat.msg_register
def tuling_reply(msg):
    defaultReply = 'I received: ' + msg['Text']
    reply = get_response(msg['Text'])
    return reply or defaultReply

@itchat.msg_register(TEXT, isGroupChat=True)
def groupchat_reply(msg):
    if msg['isAt']:
        defaultReply = 'I received: ' + msg['Text']
        reply = get_response(msg['Text'])
    return reply or defaultReply
```
Here is the standard output of the error:
```
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/itchat/components/register.py", line 66, in configured_reply
r = replyFn(msg)
File "demo.py", line 36, in groupchat_reply
return reply or defaultReply
UnboundLocalError: local variable 'reply' referenced before assignment
```
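For illustration, a guarded version of the group handler avoids the error by only referencing `reply` in the branch where it is assigned. The `itchat` decorator is omitted and `get_response` is stubbed here so the logic stands alone:

```python
def groupchat_reply(msg, get_response=lambda text: None):
    # Only build the reply when the bot is actually @-mentioned; return
    # None (no reply) otherwise, so 'reply' is never read unassigned.
    if msg['isAt']:
        defaultReply = 'I received: ' + msg['Text']
        reply = get_response(msg['Text'])
        return reply or defaultReply
    return None
```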
|
open
|
2019-01-24T10:21:11Z
|
2019-06-20T19:35:14Z
|
https://github.com/littlecodersh/ItChat/issues/786
|
[] |
rilyuuj
| 2
|
collerek/ormar
|
pydantic
| 295
|
ManyToMany Relations generate a warning about a table having no fields
|
**Describe the bug**
When using an M2M field between two tables, Ormar produces a warning that the generated join table has no fields. The exact warning shown below was produced using the example from the [Many To Many docs](https://collerek.github.io/ormar/relations/many-to-many/).
```
WARNING:root:Table posts_categorys had no fields so auto Integer primary key named `id` created.
```
**To Reproduce**
Steps to reproduce the behavior:
1. Go to https://github.com/etimberg/ormar-test-cases/tree/m2m-warning
2. Clone the repo, check out the `m2m-warning` branch, and follow the readme to start postgres
3. Run `python reproduce.py` in your terminal
4. See a warning
**Expected behavior**
M2M fields should not generate a warning about a table missing columns
**Versions (please complete the following information):**
- Database backend used: postgresql
- Python version: 3.9.6
- `ormar` version: 0.10.15
- `pydantic` version: 1.8.2
**Additional context**
This problem can be mitigated by explicitly modelling the join table and using `through` on the `ManyToMany` field.
|
closed
|
2021-08-02T16:06:31Z
|
2021-08-06T14:12:05Z
|
https://github.com/collerek/ormar/issues/295
|
[
"bug"
] |
etimberg
| 2
|
jupyter/nbviewer
|
jupyter
| 506
|
Restore Scrolling Slides. Again.
|
It's #433, #439 all over again. My selective memory prevents me from noticing non-scrolling slides as a bug.
This somehow got broken again likely when we moved over to jupyter with #493.
See [this comment](https://github.com/jupyter/nbviewer/issues/466#issuecomment-142063405)
|
closed
|
2015-09-21T19:33:15Z
|
2015-10-18T05:29:29Z
|
https://github.com/jupyter/nbviewer/issues/506
|
[] |
bollwyvl
| 3
|
google-research/bert
|
tensorflow
| 439
|
Clarification of document for BookCorpus
|
"We treat every Wikipedia article as a single document." was confirmed by @jacobdevlin-google at https://github.com/google-research/bert/issues/39
However, it is still unclear for BookCorpus. I found a similar question (https://github.com/google-research/bert/issues/155#issuecomment-448119175), but there appears to be no answer yet.
So, please confirm which part of a book was treated as a document in the original paper.
Is a whole book, or every chapter, or every paragraph treated as a document?
|
open
|
2019-02-15T09:15:29Z
|
2019-02-15T09:15:29Z
|
https://github.com/google-research/bert/issues/439
|
[] |
yoquankara
| 0
|
Miserlou/Zappa
|
flask
| 1,348
|
Automatically create Lambda and ApiGateway Cloudwatch alarms (Using a config?)
|
Lambda and ApiGateway have a lot of alarms that can be configured. For production Lambdas it is required (at least in our org) to have some basic alarms, such as on Lambda timeouts, ApiGateway 4xx and 5xx, and Lambda errors. Automating this using Zappa makes a lot of sense to me, although there are a lot of ways to configure CloudWatch alarms, so this might add a lot of configuration to Zappa.
This should be controlled with a config that is off by default (because not every Lambda is prod and not every org cares), and specific configuration values can be set in the config with sane values that can be overridden. There are a few things that can be customized:
1. Alarm type -
Api Gateway: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/api-gateway-metrics-dimensions.html
Lambda: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/lam-metricscollected.html
I'd choose a subset that makes sense to be default (for example Lambda errors, timeouts, AG 4xx 5xx).
2. Alarms threshold - per alarm (for example, our 5xx threshold is 1, we want to have an alarm for each Lambda server failure).
3. Alarm metrics - per alarm (for example, 5xx metric is maximum).
4. Alarm time period - per alarm (we usually set it to having the alarm over two 5 minutes periods).
Does this make sense for zappa?
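Purely as a sketch of what such a config might look like, with the defaults-plus-overrides shape described above (every key below is hypothetical, not an existing Zappa setting):

```json
{
    "production": {
        "cloudwatch_alarms": {
            "enabled": true,
            "lambda_errors": {"threshold": 1, "statistic": "Sum", "period_minutes": 5, "evaluation_periods": 2},
            "lambda_timeouts": {"threshold": 1, "statistic": "Sum", "period_minutes": 5, "evaluation_periods": 2},
            "apigateway_5xx": {"threshold": 1, "statistic": "Maximum", "period_minutes": 5, "evaluation_periods": 2}
        }
    }
}
```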
|
open
|
2018-01-11T13:10:49Z
|
2018-05-23T09:53:25Z
|
https://github.com/Miserlou/Zappa/issues/1348
|
[
"feature-request",
"low-priority"
] |
Sveder
| 4
|
pydantic/pydantic-settings
|
pydantic
| 191
|
Add support for reading configuration from .json files
|
I was wondering if it would be possible to read the data from a `.json` file, similarly to how Dotenv (`.env`) files are read.
It would be beneficial, as you wouldn't need to write your own logic to read the file and pass it to the **Settings** object.
I would expect it to work the way you define your Settings class today (inheriting from the pydantic BaseSettings class), but instead of providing a `.env` file you provide a `some_config.json` file.
with Dotenv
```
from pydantic_settings import BaseSettings, SettingsConfigDict

class Settings(BaseSettings):
    model_config = SettingsConfigDict(
        env_file=('.env', '.env.prod')
    )
```
new feature - with json
```
from pydantic_settings import BaseSettings, SettingsConfigDict

class Settings(BaseSettings):
    model_config = SettingsConfigDict(
        json_file='configuration.json'
    )
```
And in the background, Pydantic would go through the JSON file and validate it against the attributes and types defined in the Settings class.
Is it doable and does it make sense?
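For contrast, the manual workaround available today looks roughly like this (stdlib only; the field names are just examples). It is exactly the boilerplate the proposed `json_file` option would remove:

```python
import json

def load_settings(path):
    # Read the JSON config and validate fields by hand -- the "own
    # logic" the feature request wants pydantic-settings to absorb.
    with open(path) as f:
        data = json.load(f)
    if not isinstance(data.get("host"), str):
        raise TypeError("host must be a string")
    if not isinstance(data.get("port"), int):
        raise TypeError("port must be an integer")
    return data
```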
|
closed
|
2023-11-21T12:01:23Z
|
2023-11-23T19:12:32Z
|
https://github.com/pydantic/pydantic-settings/issues/191
|
[
"unconfirmed"
] |
JakubPluta
| 2
|
MagicStack/asyncpg
|
asyncio
| 352
|
Is it possible to specify multi hosts dsn for connection pool?
|
Is it possible to connect to several hosts (one master and several replicas) from a ConnectionPool, with an interface like the one libpq provides?
I mean the following: https://www.postgresql.org/docs/current/static/libpq-connect.html
```
# Read sessions
postgresql://host1:123,host2:456/somedb?target_session_attrs=any
# Read-write sessions
postgresql://host1:123,host2:456/somedb?target_session_attrs=read-write
```
I enumerate all PostgreSQL hosts and specify `read-write` in the target_session_attrs parameter when I need the master.
Or should I create separate connection pools for the master and replica servers?
How can that functionality (switching between hosts depending on target_session_attrs, re-connecting on master switchover) be implemented in asyncpg?
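One stopgap, assuming separate pools per role, is to expand the multi-host DSN into single-host DSNs yourself and back each with its own pool. This sketch only does the string splitting; it does not probe `target_session_attrs` or detect failover:

```python
from urllib.parse import urlsplit

def split_multihost_dsn(dsn):
    # Expand a libpq-style "host1:123,host2:456" netloc into one
    # single-host DSN per host, preserving the path and query string.
    parts = urlsplit(dsn)
    tail = parts.path + ("?" + parts.query if parts.query else "")
    return ["%s://%s%s" % (parts.scheme, host, tail)
            for host in parts.netloc.split(",")]
```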
|
open
|
2018-08-29T10:22:13Z
|
2025-03-06T10:44:36Z
|
https://github.com/MagicStack/asyncpg/issues/352
|
[] |
alvassin
| 19
|
tensorly/tensorly
|
numpy
| 124
|
Non_Negative_Parafac non-start error
|
I'm using the non_negative_parafac function to decompose 3-way data tensors of mass spectrometry data with approximate dimensions of (15x40x7000).
The function usually works just fine, but for certain numbers of factors (~13) on certain data tensors, the iterations never begin, so I see nothing but the "thinking" asterisk in a Jupyter notebook. I'm running multiple decompositions of each data tensor to see which number of factors gives the best results, so this kind of silent failure is a killer for what I'm doing.
This is my first issue post and I can't guess what you'll need, so please let me know what other information you would like to see and I'll provide it asap.
Thanks for any help!
www.rocklinlab.org
|
closed
|
2019-08-02T21:07:05Z
|
2019-08-06T16:26:18Z
|
https://github.com/tensorly/tensorly/issues/124
|
[] |
WesLudwig
| 2
|
pydata/pandas-datareader
|
pandas
| 817
|
Simple question about documentation
|
Why isn't the yahoo finance API listed in the Data Readers [list](https://pandas-datareader.readthedocs.io/en/latest/readers/index.html), but it is available as get_data_yahoo()? Will the get_data_yahoo() function be deprecated in future versions?
|
closed
|
2020-08-26T18:10:43Z
|
2021-07-13T10:24:48Z
|
https://github.com/pydata/pandas-datareader/issues/817
|
[
"Good First Issue"
] |
Psychotechnopath
| 3
|
BayesWitnesses/m2cgen
|
scikit-learn
| 187
|
Code generated for XGBoost models returns incorrect scores when feature inputs include zero, which XGBoost treats as "missing"
|
I'm trying to use m2cgen to generate JS code for an XGBoost model, but I find that if the feature input includes zero, the result calculated by the generated JS differs greatly from the result predicted by the model. For example, if the feature input is [0.4444, 0.55555, 0.3545, 0.22333], the generated JS matches the model's prediction, but if the feature input is [0.4444, 0, 0, 0.22333], the two results are very different, e.g. one result is 0.22 and the other is 0.04. After validating with a demo, we found that m2cgen does not handle the "missing" condition: when XGBoost routes a value through the "missing" branch, m2cgen always treats it as "yes".
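To illustrate the reported behavior, here is a toy split node with an explicit default ("missing") branch; correct code generation has to route missing values through that branch instead of always taking "yes". The node layout below is hypothetical, not m2cgen's internal format:

```python
# Hypothetical single split from an exported XGBoost tree; 'missing'
# records which child missing values are routed to by default.
NODE = {"feature": 1, "threshold": 0.5,
        "yes": "left", "no": "right", "missing": "right"}

def next_branch(node, x, missing_value=None):
    # Follow the recorded default branch on missing input instead of
    # unconditionally taking the "yes" branch (the reported bug).
    v = x[node["feature"]]
    if v == missing_value:
        return node["missing"]
    return node["yes"] if v < node["threshold"] else node["no"]
```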
|
closed
|
2020-03-27T12:08:07Z
|
2020-04-07T16:49:44Z
|
https://github.com/BayesWitnesses/m2cgen/issues/187
|
[] |
crystaldan
| 2
|
davidteather/TikTok-Api
|
api
| 212
|
[Errno 12] Cannot Allocate Memory
|
Hi.
I wanted to get trending TikTok videos on a shared Python host, but it failed because of a memory allocation problem. I get the error as soon as the program starts. I'd like to know how to handle it.
```
from TikTokApi import TikTokApi
api = TikTokApi()
.
.
.
while trendings > 0:
    tr = api.trending(count=trendings, proxy=None)
    .
    .
    .
```
Some parts of the code are omitted as unimportant; I am just running the above code in a loop.
```
Traceback (most recent call last):
File "tik.py", line 87, in <module>
update_trends(trendings=trendings, con=con, cur=cur)
File "tik.py", line 25, in update_trends
tr = api.trending(count=trendings, proxy=None)
File "~/venv/3.7/lib/python3.7/site-packages/TikTokApi/tiktok.py", line 89, in trending
b = browser(api_url, language=language, proxy=proxy)
File "~/venv/3.7/lib/python3.7/site-packages/TikTokApi/browser.py", line 57, in __init__
loop.run_until_complete(self.start())
File "/opt/alt/python37/lib64/python3.7/asyncio/base_events.py", line 584, in run_until_complete
return future.result()
File "~/venv/3.7/lib/python3.7/site-packages/TikTokApi/browser.py", line 60, in start
self.browser = await pyppeteer.launch(self.options)
File "~/venv/3.7/lib/python3.7/site-packages/pyppeteer/launcher.py", line 305, in launch
return await Launcher(options, **kwargs).launch()
File "~/venv/3.7/lib/python3.7/site-packages/pyppeteer/launcher.py", line 147, in launch
self.cmd, **options, )
File "/opt/alt/python37/lib64/python3.7/subprocess.py", line 775, in __init__
restore_signals, start_new_session)
File "/opt/alt/python37/lib64/python3.7/subprocess.py", line 1453, in _execute_child
restore_signals, start_new_session, preexec_fn)
OSError: [Errno 12] Cannot allocate memory
```
**Desktop (please complete the following information):**
- OS: Linux [kernel verison: 3.10.0-962.3.2.lve1.5.38.el7.x86_64]
- TikTokApi Version 3.3.7
- Pyppeteer Version 0.2.2
- Python Version 3.7
I would be thankful if this issue is solved ASAP.
|
closed
|
2020-08-11T12:45:10Z
|
2020-08-24T19:19:48Z
|
https://github.com/davidteather/TikTok-Api/issues/212
|
[
"bug"
] |
hamedwaezi01-zz
| 3
|
Kanaries/pygwalker
|
plotly
| 577
|
Is it possible to run pygwalker from PyCharm?
|
Hi, can I use pygwalker from PyCharm, or is it mandatory to be in a notebook?
Thanks!
|
closed
|
2024-06-12T10:09:37Z
|
2025-03-04T09:25:42Z
|
https://github.com/Kanaries/pygwalker/issues/577
|
[
"enhancement",
"P2"
] |
ihouses
| 3
|
jmcarpenter2/swifter
|
pandas
| 2
|
Error when import library
|
When I try to import swifter as: `import swifter`
I got this kind of error:
```
File "/home/lap00379/venv/local/lib/python2.7/site-packages/swifter/swifter.py", line 27
    apply(myfunc, *args, **kwargs, axis=1, meta=meta).compute(get=get)
                                 ^
SyntaxError: invalid syntax
```
I'm using Ubuntu 16.04 and Python 2.7
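For context, the failing line uses call syntax that is only valid in Python 3 (keyword arguments after `**kwargs`); under Python 2 the extra keywords have to be folded into the kwargs dict first, roughly like this (the `apply` stub below just echoes its kwargs and stands in for the real dask call):

```python
def apply(func, *args, **kwargs):
    # Stand-in for the call in swifter.py; returns kwargs so the
    # Python-2-compatible ordering can be checked.
    return kwargs

# Python-3-only:  apply(myfunc, *args, **kwargs, axis=1, meta=meta)
# Python-2-safe:  merge the extra keywords into kwargs first.
kwargs = {"meta": "some_meta"}
kwargs.update(axis=1)
result = apply(None, **kwargs)
```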
|
closed
|
2018-05-09T06:26:43Z
|
2018-11-14T14:08:36Z
|
https://github.com/jmcarpenter2/swifter/issues/2
|
[] |
Tranquangdai
| 3
|
Yorko/mlcourse.ai
|
matplotlib
| 22
|
The solution to question 5.11 is not stable
|
Even with random_state parameters set, the best_score of the best model differs from the options in the answers.
Confirmed by several participants who ran it.
Possibly the specific package versions affect the calculations.
I can attach an ipynb notebook that reproduces this.
|
closed
|
2017-04-03T08:43:37Z
|
2017-04-03T08:52:22Z
|
https://github.com/Yorko/mlcourse.ai/issues/22
|
[] |
coodix
| 2
|
LAION-AI/Open-Assistant
|
machine-learning
| 3,009
|
Backend support for moderator message search functionality
|
For data collection
|
closed
|
2023-05-02T08:40:52Z
|
2023-05-26T21:28:47Z
|
https://github.com/LAION-AI/Open-Assistant/issues/3009
|
[
"backend"
] |
olliestanley
| 0
|
plotly/dash-component-boilerplate
|
dash
| 2
|
Warning, 'label' is marked as required but its value is undefined.
|
I got this warning, and to fix it I added 'label' and 'id' keys to the App's initial `state`.
|
open
|
2018-08-12T19:26:38Z
|
2018-08-12T19:26:38Z
|
https://github.com/plotly/dash-component-boilerplate/issues/2
|
[] |
zackstout
| 0
|
nolar/kopf
|
asyncio
| 403
|
[archival placeholder]
|
This is a placeholder for later issues/prs archival.
It is needed now to reserve the initial issue numbers before going with actual development (PRs), so that later these placeholders could be populated with actual archived issues & prs with proper intra-repo cross-linking preserved.
|
closed
|
2020-08-18T20:05:43Z
|
2020-08-18T20:05:44Z
|
https://github.com/nolar/kopf/issues/403
|
[
"archive"
] |
kopf-archiver[bot]
| 0
|
xlwings/xlwings
|
automation
| 2,111
|
Trying to mass assign using multiple ranges
|
#### OS (e.g. Windows 10 or macOS Sierra)
Windows 11
#### Versions of xlwings, Excel and Python (e.g. 0.11.8, Office 365, Python 3.7)
xlwings 0.28.5
python 3.11
Excel 2021 pro
#### Describe your issue (incl. Traceback!)
I'm trying to mass-assign multiple ranges, but when I read `.value` back I only get the first range.
I'm trying to mass-assign instead of doing it one by one, but it looks like I need to go range by range, which takes me 4 seconds to assign 32 values.
Is it possible to mass-assign multiple ranges?
I've seen the https://stackoverflow.com/questions/46735285/xlwings-selecting-non-adjacent-columns example for selecting multiple ranges, which works, but `.value` doesn't return all the values.
#### Include a minimal code sample to reproduce the issue (and attach a sample workbook if required!)
```python
rng = "A10:A11,B14:B15"  # renamed from `range` to avoid shadowing the builtin
sheet = wb.sheets['Sheet']
sheet.range(rng).value = [1, 2, 3, 4]
# Sets A10, A11, A12, A13 instead of the two areas
```
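A possible workaround sketch: split the flat value list into per-area chunks and assign each area separately. This helper only does the splitting; the per-area cell counts would come from something like `sheet.range(part).count`, which is an assumption about the needed glue code:

```python
def split_values(area_cell_counts, values):
    # Split a flat list of values into per-area chunks whose sizes
    # match the cell counts of each area in a multi-area range such as
    # "A10:A11,B14:B15"; each chunk can then be assigned on its own.
    chunks, i = [], 0
    for n in area_cell_counts:
        chunks.append(values[i:i + n])
        i += n
    return chunks
```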
|
closed
|
2022-12-03T10:28:11Z
|
2023-01-18T19:23:04Z
|
https://github.com/xlwings/xlwings/issues/2111
|
[] |
farinidan
| 1
|
fastapi/sqlmodel
|
fastapi
| 313
|
How to Initialise & Populate a Postgres Database with Circular ForeignKeys?
|
### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
# Imports
from typing import Optional, List
from sqlmodel import Session, Field, SQLModel, Relationship, create_engine
import uuid as uuid_pkg

# Defining schemas
class Person(SQLModel, table=True):
    person_id: uuid_pkg.UUID = Field(default_factory=uuid_pkg.uuid4, primary_key=True, index=True, nullable=True)
    first_names: str
    last_name: str
    mailing_property_id: uuid_pkg.UUID = Field(foreign_key='property.property_id')
    customer: Optional['Customer'] = Relationship(back_populates='lead_person')
    mailing_property: Optional['Property'] = Relationship(back_populates='person')

class Customer(SQLModel, table=True):
    customer_id: uuid_pkg.UUID = Field(default_factory=uuid_pkg.uuid4, primary_key=True, index=True, nullable=True)
    lead_person_id: uuid_pkg.UUID = Field(foreign_key='person.person_id')
    contract_type: str
    lead_person: Optional['Person'] = Relationship(back_populates='customer')
    contracted_properties: Optional[List['Property']] = Relationship(back_populates='occupant_customer')

class Property(SQLModel, table=True):
    property_id: uuid_pkg.UUID = Field(default_factory=uuid_pkg.uuid4, primary_key=True, index=True, nullable=True)
    occupant_customer_id: uuid_pkg.UUID = Field(foreign_key='customer.customer_id')
    address: str
    person: Optional['Person'] = Relationship(back_populates='mailing_property')
    occupant_customer: Optional['Customer'] = Relationship(back_populates='contracted_properties')

# Initialising the database
engine = create_engine(f'postgresql://{DB_USERNAME}:{DB_PASSWORD}@{DB_URL}:{DB_PORT}/{DB_NAME}')
SQLModel.metadata.create_all(engine)

# Defining the database entries
john = Person(
    person_id = 'eb7a0f5d-e09b-4b36-8e15-e9541ea7bd6e',
    first_names = 'John',
    last_name = 'Smith',
    mailing_property_id = '4d6aed8d-d1a2-4152-ae4b-662baddcbef4'
)
johns_lettings = Customer(
    customer_id = 'cb58199b-d7cf-4d94-a4ba-e7bb32f1cda4',
    lead_person_id = 'eb7a0f5d-e09b-4b36-8e15-e9541ea7bd6e',
    contract_type = 'Landlord Premium'
)
johns_property_1 = Property(
    property_id = '4d6aed8d-d1a2-4152-ae4b-662baddcbef4',
    occupant_customer_id = 'cb58199b-d7cf-4d94-a4ba-e7bb32f1cda4',
address = '123 High Street'
)
johns_property_2 = Property(
property_id = '2ac15ac9-9ab3-4a7c-80ad-961dd565ab0a',
occupant_customer_id = 'cb58199b-d7cf-4d94-a4ba-e7bb32f1cda4',
address = '456 High Street'
)
# Committing the database entries
with Session(engine) as session:
session.add(john)
session.add(johns_lettings)
session.add(johns_property_1)
session.add(johns_property_2)
session.commit()
```
### Description
Goal:
To model the back-end database for a cleaning company. Specifically, trying to model a system where customers can have multiple properties that need to be cleaned and each customer has a single lead person who has a single mailing property (to contact them at). Ideally, I want to be able to use a single table for the mailing properties and cleaning properties (as in most instances they will be the same).
Constraints:
* Customers can be either individual people or organisations
* A lead person must be identifiable for each customer
* Each person must be matched to a property (so that their mailing address can be identified)
* A single customer can have multiple properties attached to them (e.g. for a landlord that includes cleaning as part of the rent)
The issue is that the foreign keys have a circular dependency.
* Customer -> Person based on the `lead_person_id`
* Person -> Property based on the `mailing_property_id`
* Property -> Customer based on the `occupant_customer_id`

Running the code written above results in:
```
ForeignKeyViolation: insert or update on table "customer" violates foreign key constraint "customer_lead_person_id_fkey"
DETAIL: Key (lead_person_id)=(eb7a0f5d-e09b-4b36-8e15-e9541ea7bd6e) is not present in table "person".
```
This issue is specific to Postgres, which unlike SQLite (used in the docs) imposes constraints on foreign keys when data is being added. I.e. replacing `engine = create_engine(f'postgresql://{DB_USERNAME}:{DB_PASSWORD}@{DB_URL}:{DB_PORT}/{DB_NAME}')` with `engine = create_engine('sqlite:///test.db')` will let the database be initialised without causing an error - however my use-case is with a Postgres DB.
<br>
Attempted Solutions:
* Used link tables between customers/people and properties/customers - no luck
* Used `Session.exec` with [this code from SO](https://stackoverflow.com/a/48204024/8035710) to temporarily remove foreign key constraints then add them back on - no luck
* Used primary joins instead of foreign keys as described in [this SQLModel Issue](https://github.com/tiangolo/sqlmodel/issues/10#issuecomment-1002835506) - no luck
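One more avenue worth trying (hedged: I have not verified this against SQLModel specifically): Postgres supports `DEFERRABLE INITIALLY DEFERRED` foreign keys, which are checked at COMMIT rather than per statement, so rows in a circular dependency can be inserted in any order inside one transaction. In SQLModel this would mean passing an explicit SQLAlchemy `Column` via `sa_column`; the plain-SQLAlchemy sketch below just shows the DDL that gets emitted:

```python
from sqlalchemy import Column, ForeignKey, Integer, MetaData, Table
from sqlalchemy.dialects import postgresql
from sqlalchemy.schema import CreateTable

metadata = MetaData()

# Deferred FK: Postgres checks it at COMMIT instead of per INSERT, so the
# person -> property -> customer -> person cycle can be inserted in one
# transaction. (Integer PKs here only to keep the sketch short.)
person = Table(
    "person", metadata,
    Column("person_id", Integer, primary_key=True),
    Column(
        "mailing_property_id",
        ForeignKey("property.property_id",
                   deferrable=True, initially="DEFERRED"),
    ),
)
property_ = Table(
    "property", metadata,
    Column("property_id", Integer, primary_key=True),
)

ddl = str(CreateTable(person).compile(dialect=postgresql.dialect()))
print(ddl)
```

In SQLModel the equivalent would be something like `Field(sa_column=Column(ForeignKey("property.property_id", deferrable=True, initially="DEFERRED")))`, though that exact combination is an assumption to verify.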
### Operating System
macOS
### Operating System Details
Using an M1 Mac but have replicated the issue on ubuntu as well
### SQLModel Version
0.0.6
### Python Version
3.10.4
### Additional Context
_No response_
|
open
|
2022-04-25T10:16:57Z
|
2022-08-17T03:55:54Z
|
https://github.com/fastapi/sqlmodel/issues/313
|
[
"question"
] |
AyrtonB
| 2
|
davidsandberg/facenet
|
tensorflow
| 843
|
AttributeError: module 'facenet' has no attribute 'store_revision_info'
|
Hi, I run:
```
C:\Users\shend\Anaconda3>python c:\facenet\src\align\align_dataset_mtcnn.py c:\lfw c:\imagenes
```
but I get this error:
```
Traceback (most recent call last):
  File "c:\facenet\src\align\align_dataset_mtcnn.py", line 160, in <module>
    main(parse_arguments(sys.argv[1:]))
  File "c:\facenet\src\align\align_dataset_mtcnn.py", line 47, in main
    facenet.store_revision_info(src_path, output_dir, ' '.join(sys.argv))
AttributeError: module 'facenet' has no attribute 'store_revision_info'
```
What's wrong with the facenet code?
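A common cause of this error (an assumption, not confirmed for this setup) is that a different `facenet` module, e.g. one installed with pip, shadows the repo's `src/facenet.py` on `sys.path`. A small diagnostic sketch, demonstrated on the stdlib `json` module as a stand-in:

```python
import importlib

def diagnose(module_name, expected_attr):
    """Show which file a module resolved to and whether it has an attribute."""
    mod = importlib.import_module(module_name)
    return getattr(mod, "__file__", "<builtin>"), hasattr(mod, expected_attr)

# For this issue you would run diagnose("facenet", "store_revision_info")
# after inserting c:\facenet\src at the front of sys.path; stand-in here:
path, ok = diagnose("json", "dumps")
print(path, ok)
```

If the reported path is not the repo's `src/facenet.py`, putting `c:\facenet\src` first on `PYTHONPATH` (or `sys.path.insert(0, ...)`) should fix the lookup.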
|
open
|
2018-08-08T23:18:42Z
|
2019-02-13T14:37:13Z
|
https://github.com/davidsandberg/facenet/issues/843
|
[] |
shendrysu
| 2
|
kaliiiiiiiiii/Selenium-Driverless
|
web-scraping
| 89
|
how to get driver pid
|
`driver.service.process.pid` is not working
Can you tell me how to get the PID of the driver?
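I don't know which attribute, if any, this wrapper exposes for the PID, but as a generic fallback sketch: if you control the process launch yourself, `subprocess.Popen` hands you the PID directly (stand-in command below; for a browser you would launch the browser binary):

```python
import subprocess
import sys

# Stand-in child process; a real use would launch the browser binary.
proc = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(0.1)"])
print(proc.pid)  # OS process id of the child, usable with os.kill etc.
proc.wait()
```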
|
closed
|
2023-10-19T08:59:33Z
|
2023-10-19T09:59:28Z
|
https://github.com/kaliiiiiiiiii/Selenium-Driverless/issues/89
|
[] |
gouravkumar99
| 0
|
ymcui/Chinese-LLaMA-Alpaca
|
nlp
| 499
|
Gibberish and lots of exclamation marks after instruction fine-tuning
|
### Detailed problem description

Training script:
```bash
lr=1e-4
lora_rank=8
lora_alpha=32
lora_trainable="q_proj,v_proj,k_proj,o_proj,gate_proj,down_proj,up_proj"
modules_to_save="embed_tokens,lm_head"
lora_dropout=0.05
pretrained_model="/alidata1/admin/LLaMA/modules/llama-7b-hf"
chinese_tokenizer_path="/alidata1/admin/LLaMA/modules/chinese-llama-lora-7b"
dataset_dir="/alidata1/admin/LLaMA/datasets"
per_device_train_batch_size=1
per_device_eval_batch_size=1
training_steps=100
gradient_accumulation_steps=1
output_dir="/alidata1/admin/LLaMA/Chinese-LLaMA-Alpaca/scripts/insure/sft_lora_model"
validation_file=./insure_validation/insure.json
deepspeed_config_file=ds_zero2_no_offload.json

torchrun --nnodes 1 --nproc_per_node 1 run_clm_sft_with_peft.py \
--deepspeed ${deepspeed_config_file} \
--model_name_or_path ${pretrained_model} \
--tokenizer_name_or_path ${chinese_tokenizer_path} \
--dataset_dir ${dataset_dir} \
--validation_split_percentage 0.001 \
--per_device_train_batch_size ${per_device_train_batch_size} \
--per_device_eval_batch_size ${per_device_eval_batch_size} \
--do_train \
--do_eval \
--seed $RANDOM \
--fp16 \
--max_steps ${training_steps} \
--lr_scheduler_type cosine \
--learning_rate ${lr} \
--warmup_ratio 0.03 \
--weight_decay 0 \
--logging_strategy steps \
--logging_steps 10 \
--save_strategy steps \
--save_total_limit 3 \
--evaluation_strategy steps \
--eval_steps 250 \
--save_steps 500 \
--gradient_accumulation_steps ${gradient_accumulation_steps} \
--preprocessing_num_workers 8 \
--max_seq_length 500 \
--output_dir ${output_dir} \
--overwrite_output_dir \
--ddp_timeout 30000 \
--logging_first_step True \
--lora_rank ${lora_rank} \
--lora_alpha ${lora_alpha} \
--trainable ${lora_trainable} \
--lora_dropout ${lora_dropout} \
--torch_dtype float16 \
--validation_file ${validation_file} \
--ddp_find_unused_parameters False
```
Dataset (around 500 samples):
```json
[
....
{
"instruction": "有安装起搏器能投保众安尊享e生2023版中端医疗险吗",
"input": "",
"output": "心脏起搏器的安装是在局部麻醉下进行的,手术全程中患者意识都是清楚的。一般选择锁骨下静脉为穿刺点,建立静脉通路,将起搏器的导线置入心腔内。 如果是双腔起搏器,则一根电极置入右心耳,另一根电极置入右心室,从X线下判断电极的植入位置。然后将起搏器的电极与起搏器相连接,最后在胸部切开皮肤,制作囊袋,将起搏器埋入其中后将皮肤缝合。\n\n被保险人既往症如有安装起搏器,无法投保众安尊享e生2023版中端医疗险。"
}
.....
]
```
After training:
```bash
mv /alidata1/admin/LLaMA/Chinese-LLaMA-Alpaca/scripts/insure/sft_lora_model/pytorch_model.bin /alidata1/admin/LLaMA/Chinese-LLaMA-Alpaca/scripts/lora_model/adapter_model.bin
cp /alidata1/admin/LLaMA/modules/chinese-llama-plus-lora-7b/*token* lora_model/
cp /alidata1/admin/LLaMA/modules/chinese-llama-plus-lora-7b/adapter_config.json lora_model/
```
Merging the weights:
```bash
python /alidata1/admin/LLaMA/Chinese-LLaMA-Alpaca/scripts/merge_llama_with_chinese_lora.py \
--base_model /alidata1/admin/LLaMA/modules/llama-7b-hf \
--lora_model /alidata1/admin/LLaMA/modules/chinese-llama-plus-lora-7b,/alidata1/admin/LLaMA/modules/chinese-alpaca-plus-lora-7b,/alidata1/admin/LLaMA/Chinese-LLaMA-Alpaca/scripts/lora_model \
--output_type huggingface \
--output_dir /alidata1/admin/LLaMA/lora/
```
Launching the merged model:
```bash
python3 server.py --model lora --listen --api --chat --auto-devices
```
Please help me analyze the cause: the model cannot answer properly after this fine-tuning.
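One quantitative observation, using the numbers reported above (a hedged sanity check, not a diagnosis): with roughly 500 samples, per-device batch size 1 and gradient accumulation 1, `max_steps=100` covers only a fraction of a single epoch, which by itself could leave the adapter badly under-trained:

```python
# Numbers taken from the script and dataset description above.
samples, batch_size, grad_accum, max_steps = 500, 1, 1, 100

steps_per_epoch = samples / (batch_size * grad_accum)
epochs_trained = max_steps / steps_per_epoch
print(epochs_trained)  # fraction of one epoch actually trained
```

If this matches your run, raising `training_steps` (or switching to `--num_train_epochs`) would be the first thing to try before suspecting the merge order.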
|
closed
|
2023-06-02T11:45:08Z
|
2023-06-19T22:02:34Z
|
https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/499
|
[
"stale"
] |
alanbeen
| 9
|
ultralytics/yolov5
|
pytorch
| 13,144
|
How to increase FPS camera capture inside the Raspberry Pi 4B 8GB with best.onnx model
|
### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar bug report.
### YOLOv5 Component
Detection
### Bug
Hi, I am currently trying to do traffic sign detection and recognition using YOLOv5 (PyTorch) with the YOLOv5s model. I am using the detect.py file to run the model and only get about 1 FPS. The dataset contains around 2K images, trained for 200 epochs. I run the code with:
```
python detect.py --weights best.onnx --img 640 --conf 0.7 --source 0
```
Is there any way to modify the code so that I can get more than 4 FPS?
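One generic speed-up that helps on a Pi regardless of the model (a sketch under assumptions, not YOLOv5's own code): decouple camera capture from inference with a background thread that keeps only the newest frame, so the slow inference loop never blocks on or processes a stale backlog; a smaller `--img` (e.g. 320) usually helps as well. `source` below only needs a `cv2.VideoCapture`-style `read()`:

```python
import threading

class LatestFrameReader:
    """Keep only the newest frame from a capture source in a background
    thread, so slow per-frame inference never works through a stale backlog.
    `source` must provide read() -> (ok, frame), like cv2.VideoCapture."""

    def __init__(self, source):
        self.source = source
        self.frame = None
        self._lock = threading.Lock()
        self._running = True
        self._thread = threading.Thread(target=self._loop, daemon=True)
        self._thread.start()

    def _loop(self):
        while self._running:
            ok, frame = self.source.read()
            if ok:
                with self._lock:
                    self.frame = frame  # overwrite: only the latest survives

    def read(self):
        with self._lock:
            return self.frame

    def stop(self):
        self._running = False
        self._thread.join(timeout=1)
```

In the inference loop you would then call `reader.read()` instead of `cap.read()`. This removes capture latency from the loop; it cannot make the model itself faster, so combining it with a smaller input size or an int8 export is still worth trying.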
### Environment
-Raspberry Pi 4B with 8GB Ram
-Webcam
-Model best.onnx
-Train using Yolov5 Pytorch
### Minimal Reproducible Example
_No response_
### Additional
_No response_
### Are you willing to submit a PR?
- [X] Yes I'd like to help by submitting a PR!
|
open
|
2024-06-27T21:16:08Z
|
2024-10-20T19:49:02Z
|
https://github.com/ultralytics/yolov5/issues/13144
|
[
"bug",
"Stale"
] |
Killuagg
| 13
|
tensorpack/tensorpack
|
tensorflow
| 1,086
|
The actual usage of the quantized DoReFa weights
|
Previously I responded in a closed topic, so I am posting a new issue.
I am wondering whether there are any float tensors left in an actual DoReFa-quantized network. Once the quantized weights are trained, we could either fix-point them together with the activation values and run fully fixed-point calculations, or do only the fixed-point convolution in the convolution layers and perform float-to-fixed-point conversions around them.
The second method seems unreasonable because it converts the format so frequently, but the first method might introduce extra error from the fixed-point representation. I have tried w4a4, and the fixed-point error degraded the result a lot more than fixed-point w8a8 did. I am not sure what the correct way to actually use the quantized weights is.
The ulp seemed to apply the second method. Does that mean the time cost of the format conversion is negligible?
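For concreteness, here is a sketch of the DoReFa-style k-bit uniform quantizer (written from memory, so treat the exact form as an assumption); it makes the precision gap between 4-bit and 8-bit fixed point easy to see:

```python
def quantize_k(x, k):
    """Map x in [0, 1] onto the 2**k evenly spaced levels of a k-bit code."""
    n = float(2 ** k - 1)
    return round(x * n) / n

# 4 levels (k=2) vs 256 levels (k=8): the same input snaps to very
# different grid points, which is the extra error you saw at w4a4.
print(quantize_k(0.3, 2))
print(quantize_k(0.3, 8))
```

The quantization step is 1/(2^k - 1), so going from k=8 to k=4 coarsens the grid by a factor of 255/15 = 17, which is consistent with w4a4 degrading much more than w8a8 once everything is held in fixed point.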
|
closed
|
2019-02-19T12:22:53Z
|
2019-03-01T21:51:35Z
|
https://github.com/tensorpack/tensorpack/issues/1086
|
[
"examples"
] |
asjmasjm
| 14
|
davidsandberg/facenet
|
computer-vision
| 347
|
How to debug the two models,20170511-185253 and 20170512-110547??
|
As the title says: how do I debug these two models? It is difficult for me to understand the meaning of the downloaded models. What is the process? Thx
|
closed
|
2017-06-22T08:22:02Z
|
2017-07-15T15:25:30Z
|
https://github.com/davidsandberg/facenet/issues/347
|
[] |
YjDai
| 2
|
keras-team/autokeras
|
tensorflow
| 1,215
|
"checkpoint not found" error in the "structured_data_classification" example
|
### Bug Description
I was trying to run the "structured_data_classification" example with the newly released version 1.0.3, but encountered this "checkpoint not found" error
### Bug Reproduction
Code for reproducing the bug:
```python
import pandas as pd
import autokeras as ak

train_file_path = "data/train.csv"
test_file_path = "data/eval.csv"

x_train_df = pd.read_csv(train_file_path)
print(type(x_train_df))  # pandas.DataFrame
y_train_df = x_train_df.pop('survived')
print(type(y_train_df))  # pandas.Series

# Preparing testing data.
x_test_df = pd.read_csv(test_file_path)
y_test_df = x_test_df.pop('survived')

# It tries 3 different models.
clf = ak.StructuredDataClassifier(
    overwrite=True,
    max_trials=3,
    seed=66,
    project_name='song4',
    directory='akeras_models/'
)

# Feed the structured data classifier with training data.
clf.fit(x_train_df, y_train_df)  # , epochs=10)

# Predict with the best model.
predicted_y = clf.predict(x_test_df)

# Evaluate the best model with testing data.
print(clf.evaluate(x_test_df, y_test_df))
```
Data used by the code:
### Expected Behavior
2020-06-26 11:26:29.720935: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
2020-06-26 11:26:29.720955: E tensorflow/stream_executor/cuda/cuda_driver.cc:313] failed call to cuInit: UNKNOWN ERROR (303)
2020-06-26 11:26:29.720970: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (Super-AI): /proc/driver/nvidia/version does not exist
2020-06-26 11:26:29.721125: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-06-26 11:26:29.752555: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency: 2900230000 Hz
2020-06-26 11:26:29.758760: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7f82d4000b20 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-06-26 11:26:29.758785: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
[Starting new trial]
Epoch 1/1000
1/16 [>.............................] - ETA: 0s - loss: 0.7875 - accuracy: 0.5016/16 [==============================] - 0s 11ms/step - loss: 0.7533 - accuracy: 0.5977 - val_loss: 0.6331 - val_accuracy: 0.7043
Epoch 2/1000
1/16 [>.............................] - ETA: 0s - loss: 0.7675 - accuracy: 0.6516/16 [==============================] - 0s 3ms/step - loss: 0.6475 - accuracy: 0.6855 - val_loss: 0.7478 - val_accuracy: 0.4261
Epoch 3/1000
1/16 [>.............................] - ETA: 0s - loss: 0.8791 - accuracy: 0.5016/16 [==============================] - 0s 3ms/step - loss: 0.6380 - accuracy: 0.6914 - val_loss: 0.8299 - val_accuracy: 0.3304
Epoch 4/1000
1/16 [>.............................] - ETA: 0s - loss: 0.7965 - accuracy: 0.6816/16 [==============================] - 0s 3ms/step - loss: 0.6334 - accuracy: 0.7285 - val_loss: 0.8162 - val_accuracy: 0.3565
Epoch 5/1000
1/16 [>.............................] - ETA: 0s - loss: 0.6657 - accuracy: 0.6516/16 [==============================] - 0s 3ms/step - loss: 0.5991 - accuracy: 0.7344 - val_loss: 0.7410 - val_accuracy: 0.3913
Epoch 6/1000
1/16 [>.............................] - ETA: 0s - loss: 0.6547 - accuracy: 0.6516/16 [==============================] - 0s 3ms/step - loss: 0.5796 - accuracy: 0.7285 - val_loss: 0.7873 - val_accuracy: 0.3826
Epoch 7/1000
1/16 [>.............................] - ETA: 0s - loss: 0.5260 - accuracy: 0.7516/16 [==============================] - 0s 3ms/step - loss: 0.5761 - accuracy: 0.7266 - val_loss: 0.8332 - val_accuracy: 0.3652
Epoch 8/1000
1/16 [>.............................] - ETA: 0s - loss: 0.7437 - accuracy: 0.6816/16 [==============================] - 0s 3ms/step - loss: 0.5543 - accuracy: 0.7500 - val_loss: 0.7986 - val_accuracy: 0.4000
Epoch 9/1000
1/16 [>.............................] - ETA: 0s - loss: 0.4914 - accuracy: 0.7516/16 [==============================] - 0s 3ms/step - loss: 0.5277 - accuracy: 0.7734 - val_loss: 0.7369 - val_accuracy: 0.4087
Epoch 10/1000
1/16 [>.............................] - ETA: 0s - loss: 0.6530 - accuracy: 0.7116/16 [==============================] - 0s 3ms/step - loss: 0.5629 - accuracy: 0.7520 - val_loss: 0.6682 - val_accuracy: 0.5391
Epoch 11/1000
1/16 [>.............................] - ETA: 0s - loss: 0.5999 - accuracy: 0.7816/16 [==============================] - 0s 3ms/step - loss: 0.5369 - accuracy: 0.7676 - val_loss: 0.7082 - val_accuracy: 0.5478
[Trial complete]
[Trial summary]
|-Trial ID: f3e9fec44a0956276a01421772bcb618
|-Score: 0.7043478488922119
|-Best step: 0
> Hyperparameters:
|-classification_head_1/dropout_rate: 0
|-optimizer: adam
|-structured_data_block_1/dense_block_1/dropout_rate: 0.5
|-structured_data_block_1/dense_block_1/num_layers: 1
|-structured_data_block_1/dense_block_1/units_0: 512
|-structured_data_block_1/dense_block_1/units_1: 16
|-structured_data_block_1/dense_block_1/use_batchnorm: True
[Starting new trial]
Epoch 1/1000
1/16 [>.............................] - ETA: 0s - loss: 0.9774 - accuracy: 0.5016/16 [==============================] - 0s 10ms/step - loss: 0.8073 - accuracy: 0.5391 - val_loss: 1.0624 - val_accuracy: 0.4870
Epoch 2/1000
1/16 [>.............................] - ETA: 0s - loss: 0.9896 - accuracy: 0.4316/16 [==============================] - 0s 5ms/step - loss: 0.6816 - accuracy: 0.6426 - val_loss: 0.6217 - val_accuracy: 0.7130
Epoch 3/1000
1/16 [>.............................] - ETA: 0s - loss: 0.6186 - accuracy: 0.6816/16 [==============================] - 0s 5ms/step - loss: 0.5884 - accuracy: 0.7207 - val_loss: 0.6258 - val_accuracy: 0.7043
Epoch 4/1000
1/16 [>.............................] - ETA: 0s - loss: 0.6960 - accuracy: 0.5616/16 [==============================] - 0s 4ms/step - loss: 0.5688 - accuracy: 0.7266 - val_loss: 0.6487 - val_accuracy: 0.6522
Epoch 5/1000
1/16 [>.............................] - ETA: 0s - loss: 0.4751 - accuracy: 0.7516/16 [==============================] - 0s 4ms/step - loss: 0.5353 - accuracy: 0.7422 - val_loss: 0.7280 - val_accuracy: 0.4087
Epoch 6/1000
1/16 [>.............................] - ETA: 0s - loss: 0.6062 - accuracy: 0.5616/16 [==============================] - 0s 5ms/step - loss: 0.5268 - accuracy: 0.7461 - val_loss: 0.7232 - val_accuracy: 0.6348
Epoch 7/1000
1/16 [>.............................] - ETA: 0s - loss: 0.4782 - accuracy: 0.6516/16 [==============================] - 0s 5ms/step - loss: 0.5273 - accuracy: 0.7559 - val_loss: 0.6015 - val_accuracy: 0.7130
Epoch 8/1000
1/16 [>.............................] - ETA: 0s - loss: 0.4447 - accuracy: 0.8116/16 [==============================] - 0s 4ms/step - loss: 0.5104 - accuracy: 0.7676 - val_loss: 0.6554 - val_accuracy: 0.6435
Epoch 9/1000
1/16 [>.............................] - ETA: 0s - loss: 0.4685 - accuracy: 0.8716/16 [==============================] - 0s 4ms/step - loss: 0.5135 - accuracy: 0.7812 - val_loss: 0.5953 - val_accuracy: 0.7391
Epoch 10/1000
1/16 [>.............................] - ETA: 0s - loss: 0.4235 - accuracy: 0.8716/16 [==============================] - 0s 4ms/step - loss: 0.4754 - accuracy: 0.7871 - val_loss: 0.6090 - val_accuracy: 0.7130
Epoch 11/1000
1/16 [>.............................] - ETA: 0s - loss: 0.7253 - accuracy: 0.7816/16 [==============================] - 0s 4ms/step - loss: 0.4961 - accuracy: 0.7734 - val_loss: 0.5601 - val_accuracy: 0.7391
Epoch 12/1000
1/16 [>.............................] - ETA: 0s - loss: 0.4596 - accuracy: 0.8116/16 [==============================] - 0s 5ms/step - loss: 0.4459 - accuracy: 0.8008 - val_loss: 0.5370 - val_accuracy: 0.7826
Epoch 13/1000
1/16 [>.............................] - ETA: 0s - loss: 0.5129 - accuracy: 0.7816/16 [==============================] - 0s 5ms/step - loss: 0.4602 - accuracy: 0.8066 - val_loss: 0.4664 - val_accuracy: 0.8087
Epoch 14/1000
1/16 [>.............................] - ETA: 0s - loss: 0.4598 - accuracy: 0.8116/16 [==============================] - 0s 4ms/step - loss: 0.4366 - accuracy: 0.8281 - val_loss: 0.4810 - val_accuracy: 0.7913
Epoch 15/1000
1/16 [>.............................] - ETA: 0s - loss: 0.6860 - accuracy: 0.7816/16 [==============================] - 0s 4ms/step - loss: 0.4841 - accuracy: 0.8027 - val_loss: 0.4353 - val_accuracy: 0.8174
Epoch 16/1000
1/16 [>.............................] - ETA: 0s - loss: 0.4164 - accuracy: 0.7816/16 [==============================] - 0s 5ms/step - loss: 0.4539 - accuracy: 0.8086 - val_loss: 0.4668 - val_accuracy: 0.7913
Epoch 17/1000
1/16 [>.............................] - ETA: 0s - loss: 0.5218 - accuracy: 0.6816/16 [==============================] - 0s 5ms/step - loss: 0.4484 - accuracy: 0.7910 - val_loss: 0.4026 - val_accuracy: 0.8174
Epoch 18/1000
1/16 [>.............................] - ETA: 0s - loss: 0.4353 - accuracy: 0.8716/16 [==============================] - 0s 4ms/step - loss: 0.4409 - accuracy: 0.7969 - val_loss: 0.4444 - val_accuracy: 0.8087
Epoch 19/1000
1/16 [>.............................] - ETA: 0s - loss: 0.5144 - accuracy: 0.6516/16 [==============================] - 0s 5ms/step - loss: 0.4730 - accuracy: 0.7754 - val_loss: 0.4585 - val_accuracy: 0.8087
Epoch 20/1000
1/16 [>.............................] - ETA: 0s - loss: 0.4510 - accuracy: 0.8116/16 [==============================] - 0s 5ms/step - loss: 0.4625 - accuracy: 0.7969 - val_loss: 0.4528 - val_accuracy: 0.8435
Epoch 21/1000
1/16 [>.............................] - ETA: 0s - loss: 0.5078 - accuracy: 0.8116/16 [==============================] - 0s 5ms/step - loss: 0.4495 - accuracy: 0.7910 - val_loss: 0.4037 - val_accuracy: 0.8348
Epoch 22/1000
1/16 [>.............................] - ETA: 0s - loss: 0.4735 - accuracy: 0.8116/16 [==============================] - 0s 5ms/step - loss: 0.4015 - accuracy: 0.8242 - val_loss: 0.4102 - val_accuracy: 0.8087
Epoch 23/1000
1/16 [>.............................] - ETA: 0s - loss: 0.4540 - accuracy: 0.7816/16 [==============================] - 0s 5ms/step - loss: 0.4340 - accuracy: 0.8047 - val_loss: 0.3913 - val_accuracy: 0.8348
Epoch 24/1000
1/16 [>.............................] - ETA: 0s - loss: 0.5882 - accuracy: 0.7116/16 [==============================] - 0s 4ms/step - loss: 0.4257 - accuracy: 0.8125 - val_loss: 0.3854 - val_accuracy: 0.8435
Epoch 25/1000
1/16 [>.............................] - ETA: 0s - loss: 0.4903 - accuracy: 0.7816/16 [==============================] - 0s 5ms/step - loss: 0.4263 - accuracy: 0.8242 - val_loss: 0.4097 - val_accuracy: 0.8261
Epoch 26/1000
1/16 [>.............................] - ETA: 0s - loss: 0.5059 - accuracy: 0.8416/16 [==============================] - 0s 4ms/step - loss: 0.4068 - accuracy: 0.8203 - val_loss: 0.4205 - val_accuracy: 0.8174
Epoch 27/1000
1/16 [>.............................] - ETA: 0s - loss: 0.6119 - accuracy: 0.7516/16 [==============================] - 0s 4ms/step - loss: 0.4279 - accuracy: 0.8086 - val_loss: 0.3904 - val_accuracy: 0.8348
Epoch 28/1000
1/16 [>.............................] - ETA: 0s - loss: 0.4844 - accuracy: 0.7816/16 [==============================] - 0s 4ms/step - loss: 0.4052 - accuracy: 0.8398 - val_loss: 0.4189 - val_accuracy: 0.8174
Epoch 29/1000
1/16 [>.............................] - ETA: 0s - loss: 0.5712 - accuracy: 0.6516/16 [==============================] - 0s 4ms/step - loss: 0.4195 - accuracy: 0.8203 - val_loss: 0.4423 - val_accuracy: 0.8348
Epoch 30/1000
1/16 [>.............................] - ETA: 0s - loss: 0.3971 - accuracy: 0.8716/16 [==============================] - 0s 4ms/step - loss: 0.4081 - accuracy: 0.8340 - val_loss: 0.3681 - val_accuracy: 0.8522
Epoch 31/1000
1/16 [>.............................] - ETA: 0s - loss: 0.4551 - accuracy: 0.8116/16 [==============================] - 0s 5ms/step - loss: 0.4357 - accuracy: 0.8145 - val_loss: 0.4294 - val_accuracy: 0.8696
Epoch 32/1000
1/16 [>.............................] - ETA: 0s - loss: 0.4730 - accuracy: 0.7816/16 [==============================] - 0s 4ms/step - loss: 0.4254 - accuracy: 0.8184 - val_loss: 0.4023 - val_accuracy: 0.8174
Epoch 33/1000
1/16 [>.............................] - ETA: 0s - loss: 0.4941 - accuracy: 0.7816/16 [==============================] - 0s 4ms/step - loss: 0.4452 - accuracy: 0.8145 - val_loss: 0.3756 - val_accuracy: 0.8261
Epoch 34/1000
1/16 [>.............................] - ETA: 0s - loss: 0.5311 - accuracy: 0.8116/16 [==============================] - 0s 5ms/step - loss: 0.4210 - accuracy: 0.8164 - val_loss: 0.4062 - val_accuracy: 0.8087
Epoch 35/1000
1/16 [>.............................] - ETA: 0s - loss: 0.4841 - accuracy: 0.7116/16 [==============================] - 0s 5ms/step - loss: 0.4034 - accuracy: 0.8105 - val_loss: 0.3638 - val_accuracy: 0.8348
Epoch 36/1000
1/16 [>.............................] - ETA: 0s - loss: 0.4511 - accuracy: 0.8416/16 [==============================] - 0s 4ms/step - loss: 0.4177 - accuracy: 0.8242 - val_loss: 0.4077 - val_accuracy: 0.8522
Epoch 37/1000
1/16 [>.............................] - ETA: 0s - loss: 0.5379 - accuracy: 0.8116/16 [==============================] - 0s 4ms/step - loss: 0.4063 - accuracy: 0.8262 - val_loss: 0.3705 - val_accuracy: 0.8348
Epoch 38/1000
1/16 [>.............................] - ETA: 0s - loss: 0.4644 - accuracy: 0.8416/16 [==============================] - 0s 4ms/step - loss: 0.4079 - accuracy: 0.8359 - val_loss: 0.3600 - val_accuracy: 0.8696
Epoch 39/1000
1/16 [>.............................] - ETA: 0s - loss: 0.4944 - accuracy: 0.7516/16 [==============================] - 0s 4ms/step - loss: 0.4129 - accuracy: 0.8457 - val_loss: 0.3446 - val_accuracy: 0.8609
Epoch 40/1000
1/16 [>.............................] - ETA: 0s - loss: 0.4056 - accuracy: 0.8116/16 [==============================] - 0s 4ms/step - loss: 0.3861 - accuracy: 0.8262 - val_loss: 0.3640 - val_accuracy: 0.8696
Epoch 41/1000
1/16 [>.............................] - ETA: 0s - loss: 0.4578 - accuracy: 0.8716/16 [==============================] - 0s 5ms/step - loss: 0.4050 - accuracy: 0.8262 - val_loss: 0.3439 - val_accuracy: 0.8696
Epoch 42/1000
1/16 [>.............................] - ETA: 0s - loss: 0.5131 - accuracy: 0.8116/16 [==============================] - 0s 4ms/step - loss: 0.3956 - accuracy: 0.8281 - val_loss: 0.3349 - val_accuracy: 0.8609
Epoch 43/1000
1/16 [>.............................] - ETA: 0s - loss: 0.4367 - accuracy: 0.8116/16 [==============================] - 0s 5ms/step - loss: 0.3686 - accuracy: 0.8340 - val_loss: 0.3352 - val_accuracy: 0.8435
Epoch 44/1000
1/16 [>.............................] - ETA: 0s - loss: 0.4487 - accuracy: 0.8716/16 [==============================] - 0s 4ms/step - loss: 0.3879 - accuracy: 0.8281 - val_loss: 0.3221 - val_accuracy: 0.8783
Epoch 45/1000
1/16 [>.............................] - ETA: 0s - loss: 0.4602 - accuracy: 0.8416/16 [==============================] - 0s 5ms/step - loss: 0.3942 - accuracy: 0.8418 - val_loss: 0.3461 - val_accuracy: 0.8609
Epoch 46/1000
1/16 [>.............................] - ETA: 0s - loss: 0.4294 - accuracy: 0.8116/16 [==============================] - 0s 5ms/step - loss: 0.3849 - accuracy: 0.8320 - val_loss: 0.3527 - val_accuracy: 0.8609
Epoch 47/1000
1/16 [>.............................] - ETA: 0s - loss: 0.4323 - accuracy: 0.8416/16 [==============================] - 0s 5ms/step - loss: 0.3939 - accuracy: 0.8203 - val_loss: 0.3732 - val_accuracy: 0.8261
Epoch 48/1000
1/16 [>.............................] - ETA: 0s - loss: 0.4246 - accuracy: 0.8416/16 [==============================] - 0s 4ms/step - loss: 0.3832 - accuracy: 0.8496 - val_loss: 0.4042 - val_accuracy: 0.8696
Epoch 49/1000
1/16 [>.............................] - ETA: 0s - loss: 0.3794 - accuracy: 0.8116/16 [==============================] - 0s 5ms/step - loss: 0.3627 - accuracy: 0.8398 - val_loss: 0.3698 - val_accuracy: 0.8609
Epoch 50/1000
1/16 [>.............................] - ETA: 0s - loss: 0.4711 - accuracy: 0.7816/16 [==============================] - 0s 4ms/step - loss: 0.3887 - accuracy: 0.8242 - val_loss: 0.3443 - val_accuracy: 0.8609
Epoch 51/1000
1/16 [>.............................] - ETA: 0s - loss: 0.3824 - accuracy: 0.8116/16 [==============================] - 0s 4ms/step - loss: 0.3925 - accuracy: 0.8340 - val_loss: 0.4368 - val_accuracy: 0.8261
Epoch 52/1000
1/16 [>.............................] - ETA: 0s - loss: 0.4646 - accuracy: 0.8116/16 [==============================] - 0s 5ms/step - loss: 0.3852 - accuracy: 0.8438 - val_loss: 0.4202 - val_accuracy: 0.8087
Epoch 53/1000
1/16 [>.............................] - ETA: 0s - loss: 0.4349 - accuracy: 0.8716/16 [==============================] - 0s 5ms/step - loss: 0.3697 - accuracy: 0.8398 - val_loss: 0.4056 - val_accuracy: 0.8261
Epoch 54/1000
1/16 [>.............................] - ETA: 0s - loss: 0.4676 - accuracy: 0.8416/16 [==============================] - 0s 5ms/step - loss: 0.3724 - accuracy: 0.8379 - val_loss: 0.3753 - val_accuracy: 0.8261
[Trial complete]
[Trial summary]
|-Trial ID: 05b23cac10d5f471c6d77d73f3c7000e
|-Score: 0.8782608509063721
|-Best step: 43
> Hyperparameters:
|-classification_head_1/dropout_rate: 0
|-optimizer: adam
|-structured_data_block_1/dense_block_1/dropout_rate: 0.25
|-structured_data_block_1/dense_block_1/num_layers: 2
|-structured_data_block_1/dense_block_1/units_0: 1024
|-structured_data_block_1/dense_block_1/units_1: 128
|-structured_data_block_1/dense_block_1/use_batchnorm: True
[Starting new trial]
Epoch 1/1000
1/16 [>.............................] - ETA: 0s - loss: 2.1611 - accuracy: 0.5316/16 [==============================] - 0s 6ms/step - loss: 0.9218 - accuracy: 0.6270 - val_loss: 0.5265 - val_accuracy: 0.7043
Epoch 2/1000
1/16 [>.............................] - ETA: 0s - loss: 0.8769 - accuracy: 0.4616/16 [==============================] - 0s 2ms/step - loss: 0.7397 - accuracy: 0.6289 - val_loss: 0.6304 - val_accuracy: 0.7043
Epoch 3/1000
1/16 [>.............................] - ETA: 0s - loss: 1.0053 - accuracy: 0.4616/16 [==============================] - 0s 2ms/step - loss: 0.7013 - accuracy: 0.6152 - val_loss: 0.5581 - val_accuracy: 0.7304
Epoch 4/1000
1/16 [>.............................] - ETA: 0s - loss: 0.7856 - accuracy: 0.5916/16 [==============================] - 0s 2ms/step - loss: 0.6441 - accuracy: 0.6621 - val_loss: 0.5394 - val_accuracy: 0.7478
Epoch 5/1000
1/16 [>.............................] - ETA: 0s - loss: 0.7105 - accuracy: 0.5316/16 [==============================] - 0s 2ms/step - loss: 0.6350 - accuracy: 0.6758 - val_loss: 0.5434 - val_accuracy: 0.7304
Epoch 6/1000
1/16 [>.............................] - ETA: 0s - loss: 0.7370 - accuracy: 0.5616/16 [==============================] - 0s 2ms/step - loss: 0.6494 - accuracy: 0.6699 - val_loss: 0.5633 - val_accuracy: 0.7130
Epoch 7/1000
1/16 [>.............................] - ETA: 0s - loss: 0.7826 - accuracy: 0.5916/16 [==============================] - 0s 2ms/step - loss: 0.6480 - accuracy: 0.6797 - val_loss: 0.5581 - val_accuracy: 0.7130
Epoch 8/1000
1/16 [>.............................] - ETA: 0s - loss: 0.7585 - accuracy: 0.5616/16 [==============================] - 0s 2ms/step - loss: 0.6350 - accuracy: 0.6797 - val_loss: 0.5479 - val_accuracy: 0.7217
Epoch 9/1000
1/16 [>.............................] - ETA: 0s - loss: 0.7414 - accuracy: 0.5616/16 [==============================] - 0s 2ms/step - loss: 0.6270 - accuracy: 0.6816 - val_loss: 0.5461 - val_accuracy: 0.7130
Epoch 10/1000
1/16 [>.............................] - ETA: 0s - loss: 0.7368 - accuracy: 0.5616/16 [==============================] - 0s 2ms/step - loss: 0.6234 - accuracy: 0.6895 - val_loss: 0.5405 - val_accuracy: 0.7652
Epoch 11/1000
1/16 [>.............................] - ETA: 0s - loss: 0.7234 - accuracy: 0.6216/16 [==============================] - 0s 2ms/step - loss: 0.6178 - accuracy: 0.6934 - val_loss: 0.5374 - val_accuracy: 0.7652
[Trial complete]
[Trial summary]
|-Trial ID: c0ef66df3ffa96c10e2e2bae85fc9e26
|-Score: 0.7652173638343811
|-Best step: 9
> Hyperparameters:
|-classification_head_1/dropout_rate: 0
|-optimizer: adam
|-structured_data_block_1/dense_block_1/dropout_rate: 0.0
|-structured_data_block_1/dense_block_1/num_layers: 2
|-structured_data_block_1/dense_block_1/units_0: 32
|-structured_data_block_1/dense_block_1/units_1: 64
|-structured_data_block_1/dense_block_1/use_batchnorm: False
Epoch 1/54
20/20 [==============================] - 0s 3ms/step - loss: 0.6477 - accuracy: 0.6539
Epoch 2/54
20/20 [==============================] - 0s 3ms/step - loss: 0.5937 - accuracy: 0.7033
Epoch 3/54
20/20 [==============================] - 0s 3ms/step - loss: 0.5717 - accuracy: 0.7352
[... progress bars for epochs 4-52 trimmed: loss fell from 0.5068 to 0.3848, accuracy rose from 0.7384 to 0.8437 ...]
Epoch 53/54
20/20 [==============================] - 0s 3ms/step - loss: 0.3588 - accuracy: 0.8437
Epoch 54/54
20/20 [==============================] - 0s 3ms/step - loss: 0.3714 - accuracy: 0.8517
Traceback (most recent call last):
File "structured_data_classification.py", line 37, in <module>
predicted_y = clf.predict(x_test_df)
File "/home/gsong/anaconda3/envs/tf2/lib/python3.6/site-packages/autokeras/tasks/structured_data.py", line 114, in predict
**kwargs)
File "/home/gsong/anaconda3/envs/tf2/lib/python3.6/site-packages/autokeras/auto_model.py", line 414, in predict
model = self.tuner.get_best_model()
File "/home/gsong/anaconda3/envs/tf2/lib/python3.6/site-packages/autokeras/engine/tuner.py", line 49, in get_best_model
model = super().get_best_models()[0]
File "/home/gsong/anaconda3/envs/tf2/lib/python3.6/site-packages/kerastuner/engine/tuner.py", line 258, in get_best_models
return super(Tuner, self).get_best_models(num_models)
File "/home/gsong/anaconda3/envs/tf2/lib/python3.6/site-packages/kerastuner/engine/base_tuner.py", line 241, in get_best_models
models = [self.load_model(trial) for trial in best_trials]
File "/home/gsong/anaconda3/envs/tf2/lib/python3.6/site-packages/kerastuner/engine/base_tuner.py", line 241, in <listcomp>
models = [self.load_model(trial) for trial in best_trials]
File "/home/gsong/anaconda3/envs/tf2/lib/python3.6/site-packages/kerastuner/engine/tuner.py", line 184, in load_model
trial.trial_id, best_epoch))
File "/home/gsong/anaconda3/envs/tf2/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 250, in load_weights
return super(Model, self).load_weights(filepath, by_name, skip_mismatch)
File "/home/gsong/anaconda3/envs/tf2/lib/python3.6/site-packages/tensorflow/python/keras/engine/network.py", line 1231, in load_weights
py_checkpoint_reader.NewCheckpointReader(filepath)
File "/home/gsong/anaconda3/envs/tf2/lib/python3.6/site-packages/tensorflow/python/training/py_checkpoint_reader.py", line 95, in NewCheckpointReader
return CheckpointReader(compat.as_bytes(filepattern))
ValueError: Unsuccessful TensorSliceReader constructor: Failed to get matching files on akeras_models/song4/trial_05b23cac10d5f471c6d77d73f3c7000e/checkpoints/epoch_43/checkpoint: Not found: akeras_models/song4/trial_05b23cac10d5f471c6d77d73f3c7000e/checkpoints/epoch_43; No such file or directory
### Setup Details
Include the details about the versions of:
- OS type and version: Unbuntu 18.04
- Python: 3.6
- autokeras: 1.0.3
- keras-tuner: 1.0.2rc0
- scikit-learn:
- numpy:
- pandas:
- tensorflow: 2.2.0 no GPU used
|
closed
|
2020-06-26T19:00:50Z
|
2020-07-29T23:31:20Z
|
https://github.com/keras-team/autokeras/issues/1215
|
[
"bug report",
"pinned"
] |
chenmin1968
| 14
|
charlesq34/pointnet
|
tensorflow
| 67
|
OSError: raw write() returned invalid length 42 (should have been between 0 and 21)
|
Hello.
Pointnet was executed using python2.7, cuDNN5.1, CUDA8.0 under Windows 10 64bit.
The following error occurred.
Let me know how to solve it.
(tensorflow) C:\WORK\pointnet-master>python train.py
'cp' is not recognized as an internal or external command, operable program or batch file.
'cp' is not recognized as an internal or external command, operable program or batch file.
Tensor("Placeholder_2:0", shape=(), dtype=bool, device=/device:GPU:0)
2017-12-22 23:33:14.875482: I C:\tf_jenkins\home\workspace\rel-win\M\windows\PY\35\tensorflow\core\platform\cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
**** EPOCH 000 ****
----0-----
mean loss: 3.888857
mean loss: 3.888857
Traceback (most recent call last):
File "train.py", line 260, in <module>
train()
File "train.py", line 161, in train
train_one_epoch(sess, ops, train_writer)
File "train.py", line 212, in train_one_epoch
log_string('mean loss: %f' % (loss_sum / float(num_batches)))
File "train.py", line 70, in log_string
print(out_str)
OSError: raw write() returned invalid length 42 (should have been between 0 and 21)
Yoshiyuki
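This `OSError: raw write()` is a known CPython quirk with legacy Windows console code pages rather than anything PointNet-specific. A commonly suggested workaround (a sketch, untested on this exact setup) is to rewrap stdout so unencodable characters degrade to replacement characters instead of raising:

```python
import io
import sys


def console_safe_stdout():
    """Rewrap stdout so encoding failures on legacy Windows
    consoles degrade to '?' instead of raising OSError."""
    return io.TextIOWrapper(sys.stdout.buffer, encoding="utf-8",
                            errors="replace", line_buffering=True)


# Demonstrated against an in-memory buffer standing in for the console:
buf = io.BytesIO()
out = io.TextIOWrapper(buf, encoding="utf-8",
                       errors="replace", line_buffering=True)
print("mean loss: 3.888857", file=out)  # the line that crashed train.py
```

In `train.py` one would assign `sys.stdout = console_safe_stdout()` once before training; running Python with `PYTHONIOENCODING=utf-8:replace` should have a similar effect.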
|
closed
|
2017-12-22T14:47:47Z
|
2017-12-27T07:57:30Z
|
https://github.com/charlesq34/pointnet/issues/67
|
[] |
yoshiyama
| 1
|
aidlearning/AidLearning-FrameWork
|
jupyter
| 93
|
After a while it is constantly stuck on the Loading screen
|
After using AID for a while (just running the examples, for instance), the desktop reboots to the animated Loading screen and gets stuck there, continuously playing the animation. The only way to fix this is closing the AID desktop and relaunching the app.
This is a very new device – Note 10+, 1TB of storage. Nothing else is running on the phone (fresh reboot)
|
closed
|
2020-03-12T18:57:14Z
|
2020-07-29T01:11:24Z
|
https://github.com/aidlearning/AidLearning-FrameWork/issues/93
|
[] |
adamhill
| 2
|
marcomusy/vedo
|
numpy
| 333
|
creating subplots
|
Hi @marcomusy ,
I would like to create a figure with subplots (please check below)
[image attachment: sketch of the desired layout, one large panel in the first column next to a grid of smaller panels]
generally, I would do
```python
plt = Plotter(shape=(3, 2))
```
However, I am not sure how to place a single figure spanning the first column. Could you please offer some suggestions?
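One way to get an uneven layout, if I read the vedo API right, is to pass `Plotter` a list of explicit renderer viewports, i.e. normalized (x0, y0)-(x1, y1) rectangles with `bottomleft`/`topright` keys, instead of a grid `shape`; treat those keyword names as an assumption to check against your vedo version. A sketch that builds such a layout, one tall panel on the left plus a 3x2 grid on the right:

```python
def grid_viewports(x0, y0, x1, y1, rows, cols):
    """Split the rectangle (x0, y0)-(x1, y1) into rows x cols viewports
    in normalized [0, 1] window coordinates (bottom-left origin)."""
    w = (x1 - x0) / cols
    h = (y1 - y0) / rows
    return [dict(bottomleft=(x0 + c * w, y0 + r * h),
                 topright=(x0 + (c + 1) * w, y0 + (r + 1) * h))
            for r in range(rows) for c in range(cols)]


# One large panel in the left half, a 3x2 grid in the right half:
layout = [dict(bottomleft=(0.0, 0.0), topright=(0.5, 1.0))]
layout += grid_viewports(0.5, 0.0, 1.0, 1.0, rows=3, cols=2)
# plt = Plotter(shape=layout)   # illustrative; then plt.show(obj, at=i)
```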
|
closed
|
2021-03-05T12:53:49Z
|
2021-03-05T16:31:44Z
|
https://github.com/marcomusy/vedo/issues/333
|
[] |
DeepaMahm
| 1
|
tqdm/tqdm
|
pandas
| 1,023
|
[Jupyter Lab] visual output bug in nested for loops
|
- [X] I have marked all applicable categories:
+ [x] exception-raising bug
+ [X] visual output bug
+ [x] documentation request (i.e. "X is missing from the documentation." If instead I want to ask "how to use X?" I understand [StackOverflow#tqdm] is more appropriate)
+ [x] new feature request
- [X] I have visited the [source website], and in particular
read the [known issues]
- [X] I have searched through the [issue tracker] for duplicates
- [X] I have mentioned version numbers, operating system and
environment, where applicable
### Environment
```python
import tqdm, sys
print(tqdm.__version__, sys.version, sys.platform)
# 4.48.2 3.7.4 (default, Aug 13 2019, 15:17:50)
# [Clang 4.0.1 (tags/RELEASE_401/final)] darwin
```
`conda list` output:
```
ipykernel 5.1.4 py37h39e3cac_0
ipython 7.12.0 py37h5ca1d4c_0
ipython_genutils 0.2.0 py37_0
ipywidgets 7.5.1 py_0
jupyter 1.0.0 py37_7
jupyter_client 5.3.4 py37_0
jupyter_console 6.1.0 py_0
jupyter_core 4.6.1 py37_0
jupyterlab 2.1.5 py_0 conda-forge
jupyterlab_server 1.2.0 py_0 conda-forge
```
### Visual output

[source website]: https://github.com/tqdm/tqdm/
[known issues]: https://github.com/tqdm/tqdm/#faq-and-known-issues
[issue tracker]: https://github.com/tqdm/tqdm/issues?q=
[StackOverflow#tqdm]: https://stackoverflow.com/questions/tagged/tqdm
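For context on the mechanism: console tqdm redraws a bar by emitting a carriage return (`\r`) and rewriting the line; a frontend that does not honor `\r` (as some Jupyter Lab output views do) stacks every frame as a new line instead of overwriting it, which is what the screenshot shows. A stdlib-only sketch of that redraw pattern:

```python
import io


def draw_bar(stream, frac, width=20):
    """Redraw a single-line progress bar in place, tqdm-style:
    '\r' returns the cursor to column 0, so a real terminal overwrites
    the previous frame, while a frontend ignoring '\r' stacks frames."""
    filled = int(frac * width)
    stream.write("\r[%s%s] %3d%%" % ("#" * filled,
                                     "." * (width - filled),
                                     int(frac * 100)))


stream = io.StringIO()
for step in range(5):
    draw_bar(stream, (step + 1) / 5)
raw = stream.getvalue()
frames = raw.split("\r")[1:]  # each '\r' starts a new frame
```

In notebooks the usual fix is `from tqdm.auto import tqdm`, which switches to an ipywidgets-based bar that avoids carriage returns entirely.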
|
open
|
2020-08-23T07:26:36Z
|
2023-05-11T07:10:53Z
|
https://github.com/tqdm/tqdm/issues/1023
|
[] |
hongshaoyang
| 7
|
plotly/dash
|
flask
| 3,211
|
[BUG] Version 3.0.0rc3 - Default Dropdown Value Bug
|
When selecting another option from the dropdown that is not the default option, it changes normally, but when returning to the default option, it does not switch back to it.
As the video shows, "Tratar + Ignorar" was the default option, and after I change it I can't switch back to it.
https://github.com/user-attachments/assets/3f377f08-61fe-474f-a86b-0230d0af542c
Code:
```python
dbc.Label("Escolha o Tratamento de inviabilidades", className="d-flex justify-content-center", style={"color": "#002f4a", "font-weight": "bold"}),
dcc.Dropdown(
id='inviabilidade-estudo',
options=[
{"label": "Tratar inviabilidades", "value": 1},
{"label": "Ignorar inviabilidades", "value": 2},
{"label": "Tratar + Ignorar", "value": 3},
{"label": "Tratamento True", "value": 4}
],
value=3
)
```
Versions:
dash==3.0.0rc3
dash-bootstrap-components==2.0.0b2
dash-breakpoints==0.1.0
dash-core-components==2.0.0
dash-html-components==2.0.0
dash-table==5.0.0
dash-ag-grid==31.3.1rc1
dash_auth==2.3.0
dash-mantine-components==0.12.1
|
closed
|
2025-03-11T21:35:25Z
|
2025-03-12T12:49:33Z
|
https://github.com/plotly/dash/issues/3211
|
[] |
xiforivia
| 0
|
custom-components/pyscript
|
jupyter
| 352
|
app : config type TypeError: string indices must be integers
|
Could you help please? I'm going crazy; this is my first app with pyscript.
The state_trigger works on its own, but when I want to use it as an app I get an error for config:
```
2022-05-29 03:36:56 INFO (MainThread) [custom_components.pyscript.global_ctx] Reloaded /config/pyscript/currentgen.py
2022-05-29 03:36:56 INFO (MainThread) [custom_components.pyscript.file.currentgen.currentoff_startup] {'allow_all_imports': True, 'hass_is_global': True, 'apps': {'currentoff': [{'dev': 'dev1', 'switch_id': 'switch.salon_socket_1', 'CTlim': 0.7, 'DTref': 0, 'DDref': 0}, {'dev': 'dev2', 'switch_id': 'switch.salon_socket_1', 'CTlim': 0.5, 'DTref': 30, 'DDref': 10}, {'dev': 'dev3', 'switch_id': 'switch.salon_socket_1', 'CTlim': 0.3, 'DTref': 60, 'DDref': 10}]}}
2022-05-29 03:36:56 INFO (MainThread) [custom_components.pyscript.file.currentgen.currentoff_startup] app allow_all_imports
2022-05-29 03:36:56 INFO (MainThread) [custom_components.pyscript.file.currentgen.currentoff_startup] app hass_is_global
2022-05-29 03:36:56 INFO (MainThread) [custom_components.pyscript.file.currentgen.currentoff_startup] app apps
2022-05-29 03:36:56 ERROR (MainThread) [custom_components.pyscript.file.currentgen.currentoff_startup] Exception in <file.currentgen.currentoff_startup> line 53:
    @state_trigger(f"float(sensor.current) < {config['CTlim']} and {config['switch_id']} == 'on' and vcur == False", state_hold_false=0)
    ^
TypeError: string indices must be integers
```
The config yaml file:

```yaml
allow_all_imports: true
hass_is_global: true
apps:
  currentoff:
    - dev: dev1
      switch_id: switch.salon_socket_1
      CTlim: 0.7
      DTref: 0
      DDref: 0
    - dev: dev2
      switch_id: switch.prise2 Socket 1
      CTlim: 0.5
      DTref: 30
      DDref: 10
    - dev: dev3
      switch_id: switch.hall_socket_1
      CTlim: 0.3
      DTref: 60
      DDref: 10
```

The script:

```python
registered_triggers = []
global_vars = {}


def make_currentoff(config):
    vcur = False  # global_vars[f"{config['cur']}"]

    @state_trigger(f"float(sensor.current) < {config['CTlim']} and {config['switch_id']} == 'on' and vcur == False", state_hold_false=0)
    def currentoff():
        global global_vars
        dev_id = config["dev"]
        DTref = config["DTref"]
        task.unique(f"currentoff_{dev_id}")
        switch.turn_off(entity_id=config["switch_id"])
        vcur = True  # global_vars[f"{config['cur']}"] = True
        task.sleep(float(1))
        vswitch_id = config["switch_id"]
        log.info(f"currentoff etat apres lancer trigger {vswitch_id}")
        if float(DTref) == 0:
            return
        task.sleep(float(DTref))
        switch.turn_on(entity_id=config["switch_id"])
        task.sleep(float(1))
        log.info(f"currentoff {vswitch_id}")
        task.sleep(float(config["DDref"]))
        switch.turn_off(entity_id=config["switch_id"])
        vcur = False  # global_vars[f"{config['cur']}"] = False
        task.sleep(float(1))
        log.info(f"currentoff {vswitch_id}")

    registered_triggers.append(currentoff)


@time_trigger('startup')
def currentoff_startup():
    log.info(pyscript.config)
    for app in pyscript.config:
        log.info("app " + app)
    make_currentoff(app)
```
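The traceback matches a plain Python cause: iterating over `pyscript.config` (a dict) yields its top-level *keys* as strings, so `make_currentoff` ends up receiving something like `'apps'`, and indexing a string with `'CTlim'` raises `TypeError: string indices must be integers`. A sketch with a stand-in dict (an illustration only, not pyscript itself) shows iterating the per-device list instead:

```python
# Stand-in for pyscript.config as parsed from the yaml above:
pyscript_config = {
    "allow_all_imports": True,
    "hass_is_global": True,
    "apps": {
        "currentoff": [
            {"dev": "dev1", "switch_id": "switch.salon_socket_1",
             "CTlim": 0.7, "DTref": 0, "DDref": 0},
            {"dev": "dev2", "switch_id": "switch.prise2 Socket 1",
             "CTlim": 0.5, "DTref": 30, "DDref": 10},
        ],
    },
}

# Iterating the dict itself yields key strings, which is what the
# TypeError is complaining about:
first = next(iter(pyscript_config))

# Iterate the list of per-device config dicts instead:
limits = [cfg["CTlim"] for cfg in pyscript_config["apps"]["currentoff"]]
```

In `currentoff_startup` that would likely mean looping over `pyscript.config['apps']['currentoff']` (or pyscript's per-app config helper, if this version provides one) rather than over `pyscript.config` itself.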
|
closed
|
2022-05-29T02:57:35Z
|
2022-05-30T01:43:02Z
|
https://github.com/custom-components/pyscript/issues/352
|
[] |
kabcasa
| 2
|
vllm-project/vllm
|
pytorch
| 15,105
|
[Bug]: Extremely slow inference + big waste of memory on 0.8.0
|
### Your current environment
2xRTX3090 32GB RAM
Driver Version: 570.124.04
nvcc --version:
```
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Thu_Mar_28_02:18:24_PDT_2024
Cuda compilation tools, release 12.4, V12.4.131
Build cuda_12.4.r12.4/compiler.34097967_0
```
### 🐛 Describe the bug
Hello! I've encountered an unpleasant bug: When I run Qwen QwQ in AWQ or GPTQ 4-bit quantization on version 0.8.0, the text generation speed is only 7 tokens per second, whereas on version 0.7.3 it was consistently 45 tokens per second. Additionally, memory consumption has sharply increased—while using the same context window that I used with 0.7.3, Qwen now throws a "CUDA out of memory" error. The issue was only resolved by adding the parameter `--disable-mm-preprocessor-cache`, which allowed Qwen AWQ to barely fit into two 3090 GPUs, consuming exactly 24068 MiB on each. On version 0.7.3, this number was only 19-20 GB. Please help me, I would be very grateful!
The command I use to run both versions 0.8.0 and 0.7.3 is the same (except for the --disable-mm-preprocessor-cache option):
```bash
CUDA_VISIBLE_DEVICES=0,1 CUDA_DEVICE_ORDER=PCI_BUS_ID CUDA_LAUNCH_BLOCKING=1 vllm serve OPEA_QwQ-32B-int4-AutoRound-gptq-sym --dtype auto --api-key token-abc123 --host 0.0.0.0 --port 8000 --tensor-parallel-size 2 --gpu-memory-utilization 0.95 --max-model-len 30000 --cpu-offload-gb 0 --device cuda --disable-custom-all-reduce --enable-reasoning --reasoning-parser deepseek_r1 --served-model-name AlexBefest/Qwen_QwQ-32B-AWQ --block-size 32 --max-seq-len-to-capture 30000 --disable-mm-preprocessor-cache
```
### Logs
Using --disable-mm-preprocessor-cache:
```bash
(base) root@287aa8679d31:/workdir# CUDA_VISIBLE_DEVICES=0,1 CUDA_DEVICE_ORDER=PCI_BUS_ID CUDA_LAUNCH_BLOCKING=1 vllm serve Qwen_QwQ-32B-AWQ --dtype auto --api-key token-abc123 --host 0.0.0.0 --port
8000 --tensor-parallel-size 2 --gpu-memory-utilization 0.9 --max-model-len 30000 --cpu-offload-gb 0 --device cuda --disable-custom-all-reduce --enable-reasoning --reasoning-parser deepseek_r
1 --served-model-name AlexBefest/Qwen_QwQ-32B-AWQ --max-seq-len-to-capture 30000 --disable-mm-preprocessor-cache
INFO 03-19 07:08:38 [__init__.py:256] Automatically detected platform cuda.
INFO 03-19 07:08:38 [api_server.py:977] vLLM API server version 0.8.0
INFO 03-19 07:08:38 [api_server.py:978] args: Namespace(subparser='serve', model_tag='Qwen_QwQ-32B-AWQ', config='', host='0.0.0.0', port=8000, uvicorn_log_level='info', allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key='token-abc123', lora_modules=None, prompt_adapters=None, chat_template=None, chat_template_content_format='auto', response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, enable_ssl_refresh=False, ssl_cert_reqs=0, root_path=None, middleware=[], return_tokens_as_token_ids=False, disable_frontend_multiprocessing=False, enable_request_id_headers=False, enable_auto_tool_choice=False, tool_call_parser=None, tool_parser_plugin='', model='Qwen_QwQ-32B-AWQ', task='auto', tokenizer=None, hf_config_path=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=False, allowed_local_media_path=None, download_dir=None, load_format='auto', config_format=<ConfigFormat.AUTO: 'auto'>, dtype='auto', kv_cache_dtype='auto', max_model_len=30000, guided_decoding_backend='xgrammar', logits_processor_pattern=None, model_impl='auto', distributed_executor_backend=None, pipeline_parallel_size=1, tensor_parallel_size=2, enable_expert_parallel=False, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=None, enable_prefix_caching=None, disable_sliding_window=False, use_v2_block_manager=True, num_lookahead_slots=0, seed=None, swap_space=4, cpu_offload_gb=0.0, gpu_memory_utilization=0.9, num_gpu_blocks_override=None, max_num_batched_tokens=None, max_num_partial_prefills=1, max_long_partial_prefills=1, long_prefill_token_threshold=0, max_num_seqs=None, max_logprobs=20, disable_log_stats=False, quantization=None, rope_scaling=None, rope_theta=None, hf_overrides=None, enforce_eager=False, max_seq_len_to_capture=30000, disable_custom_all_reduce=True, tokenizer_pool_size=0, tokenizer_pool_type='ray', 
tokenizer_pool_extra_config=None, limit_mm_per_prompt=None, mm_processor_kwargs=None, disable_mm_preprocessor_cache=True, enable_lora=False, enable_lora_bias=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, enable_prompt_adapter=False, max_prompt_adapters=1, max_prompt_adapter_token=0, device='cuda', num_scheduler_steps=1, use_tqdm_on_load=True, multi_step_stream_outputs=True, scheduler_delay_factor=0.0, enable_chunked_prefill=None, speculative_model=None, speculative_model_quantization=None, num_speculative_tokens=None, speculative_disable_mqa_scorer=False, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_disable_by_batch_size=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, spec_decoding_acceptance_method='rejection_sampler', typical_acceptance_sampler_posterior_threshold=None, typical_acceptance_sampler_posterior_alpha=None, disable_logprobs_during_spec_decoding=None, model_loader_extra_config=None, ignore_patterns=[], preemption_mode=None, served_model_name=['AlexBefest/Qwen_QwQ-32B-AWQ'], qlora_adapter_name_or_path=None, show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None, disable_async_output_proc=False, scheduling_policy='fcfs', scheduler_cls='vllm.core.scheduler.Scheduler', override_neuron_config=None, override_pooler_config=None, compilation_config=None, kv_transfer_config=None, worker_cls='auto', worker_extension_cls='', generation_config='auto', override_generation_config=None, enable_sleep_mode=False, calculate_kv_scales=False, additional_config=None, enable_reasoning=True, reasoning_parser='deepseek_r1', disable_log_requests=False, max_log_len=None, disable_fastapi_docs=False, enable_prompt_tokens_details=False, enable_server_load_tracking=False, dispatch_function=<function ServeSubcommand.cmd at 0x7bec2da64680>)
INFO 03-19 07:08:42 [config.py:583] This model supports multiple tasks: {'embed', 'classify', 'reward', 'score', 'generate'}. Defaulting to 'generate'.
INFO 03-19 07:08:42 [awq_marlin.py:114] The model is convertible to awq_marlin during runtime. Using awq_marlin kernel.
INFO 03-19 07:08:43 [config.py:1515] Defaulting to use mp for distributed inference
INFO 03-19 07:08:43 [config.py:1693] Chunked prefill is enabled with max_num_batched_tokens=2048.
INFO 03-19 07:08:45 [__init__.py:256] Automatically detected platform cuda.
INFO 03-19 07:08:46 [core.py:53] Initializing a V1 LLM engine (v0.8.0) with config: model='Qwen_QwQ-32B-AWQ', speculative_config=None, tokenizer='Qwen_QwQ-32B-AWQ', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.float16, max_seq_len=30000, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=2, pipeline_parallel_size=1, disable_custom_all_reduce=True, quantization=awq_marlin, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='xgrammar', reasoning_backend='deepseek_r1'), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=AlexBefest/Qwen_QwQ-32B-AWQ, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, disable_mm_preprocessor_cache=True, mm_processor_kwargs=None, pooler_config=None, compilation_config={"level":3,"custom_ops":["none"],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"use_inductor":true,"compile_sizes":[],"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":512}
WARNING 03-19 07:08:46 [multiproc_worker_utils.py:310] Reducing Torch parallelism from 10 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
INFO 03-19 07:08:46 [custom_cache_manager.py:19] Setting Triton cache manager to: vllm.triton_utils.custom_cache_manager:CustomCacheManager
INFO 03-19 07:08:46 [shm_broadcast.py:258] vLLM message queue communication handle: Handle(local_reader_ranks=[0, 1], buffer_handle=(2, 10485760, 10, 'psm_beb3b9f4'), local_subscribe_addr='ipc:///tmp/ce9eccf8-bbce-4307-8cf2-67e527f7bbd5', remote_subscribe_addr=None, remote_addr_ipv6=False)
INFO 03-19 07:08:48 [__init__.py:256] Automatically detected platform cuda.
WARNING 03-19 07:08:49 [utils.py:2282] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x7f61187e0740>
(VllmWorker rank=0 pid=2154) INFO 03-19 07:08:49 [shm_broadcast.py:258] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_8542c68d'), local_subscribe_addr='ipc:///tmp/e9ec4cbd-1549-49b7-a170-b2c893ca601f', remote_subscribe_addr=None, remote_addr_ipv6=False)
INFO 03-19 07:08:51 [__init__.py:256] Automatically detected platform cuda.
WARNING 03-19 07:08:53 [utils.py:2282] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x7eb68b240740>
(VllmWorker rank=1 pid=2171) INFO 03-19 07:08:53 [shm_broadcast.py:258] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_f9144cb5'), local_subscribe_addr='ipc:///tmp/9b5a184a-0e3d-4651-b866-a1040b73bc65', remote_subscribe_addr=None, remote_addr_ipv6=False)
(VllmWorker rank=0 pid=2154) INFO 03-19 07:08:53 [utils.py:925] Found nccl from library libnccl.so.2
(VllmWorker rank=1 pid=2171) INFO 03-19 07:08:53 [utils.py:925] Found nccl from library libnccl.so.2
(VllmWorker rank=0 pid=2154) INFO 03-19 07:08:53 [pynccl.py:69] vLLM is using nccl==2.21.5
(VllmWorker rank=1 pid=2171) INFO 03-19 07:08:53 [pynccl.py:69] vLLM is using nccl==2.21.5
(VllmWorker rank=0 pid=2154) INFO 03-19 07:08:53 [shm_broadcast.py:258] vLLM message queue communication handle: Handle(local_reader_ranks=[1], buffer_handle=(1, 4194304, 6, 'psm_feb1ae88'), local_subscribe_addr='ipc:///tmp/ee6a85d1-d404-463e-aa96-dddabc0e21df', remote_subscribe_addr=None, remote_addr_ipv6=False)
(VllmWorker rank=1 pid=2171) INFO 03-19 07:08:53 [parallel_state.py:967] rank 1 in world size 2 is assigned as DP rank 0, PP rank 0, TP rank 1
(VllmWorker rank=0 pid=2154) INFO 03-19 07:08:53 [parallel_state.py:967] rank 0 in world size 2 is assigned as DP rank 0, PP rank 0, TP rank 0
(VllmWorker rank=1 pid=2171) INFO 03-19 07:08:53 [cuda.py:215] Using Flash Attention backend on V1 engine.
(VllmWorker rank=0 pid=2154) INFO 03-19 07:08:53 [cuda.py:215] Using Flash Attention backend on V1 engine.
(VllmWorker rank=1 pid=2171) INFO 03-19 07:08:53 [gpu_model_runner.py:1128] Starting to load model Qwen_QwQ-32B-AWQ...
(VllmWorker rank=0 pid=2154) INFO 03-19 07:08:53 [gpu_model_runner.py:1128] Starting to load model Qwen_QwQ-32B-AWQ...
(VllmWorker rank=0 pid=2154) WARNING 03-19 07:08:53 [topk_topp_sampler.py:63] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
(VllmWorker rank=1 pid=2171) WARNING 03-19 07:08:53 [topk_topp_sampler.py:63] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
Loading safetensors checkpoint shards: 0% Completed | 0/5 [00:00<?, ?it/s]
Loading safetensors checkpoint shards: 20% Completed | 1/5 [00:10<00:42, 10.68s/it]
Loading safetensors checkpoint shards: 40% Completed | 2/5 [00:21<00:32, 10.85s/it]
Loading safetensors checkpoint shards: 60% Completed | 3/5 [00:33<00:22, 11.14s/it]
Loading safetensors checkpoint shards: 80% Completed | 4/5 [00:43<00:10, 10.97s/it]
Loading safetensors checkpoint shards: 100% Completed | 5/5 [00:53<00:00, 10.41s/it]
Loading safetensors checkpoint shards: 100% Completed | 5/5 [00:53<00:00, 10.65s/it]
(VllmWorker rank=0 pid=2154)
(VllmWorker rank=1 pid=2171) INFO 03-19 07:09:47 [loader.py:429] Loading weights took 53.30 seconds
(VllmWorker rank=0 pid=2154) INFO 03-19 07:09:47 [loader.py:429] Loading weights took 53.30 seconds
(VllmWorker rank=0 pid=2154) INFO 03-19 07:09:48 [gpu_model_runner.py:1140] Model loading took 9.0968 GB and 54.175259 seconds
(VllmWorker rank=1 pid=2171) INFO 03-19 07:09:48 [gpu_model_runner.py:1140] Model loading took 9.0968 GB and 54.463499 seconds
(VllmWorker rank=0 pid=2154) INFO 03-19 07:09:59 [backends.py:409] Using cache directory: /root/.cache/vllm/torch_compile_cache/68067b4c1d/rank_0_0 for vLLM's torch.compile
(VllmWorker rank=0 pid=2154) INFO 03-19 07:09:59 [backends.py:419] Dynamo bytecode transform time: 10.99 s
(VllmWorker rank=1 pid=2171) INFO 03-19 07:09:59 [backends.py:409] Using cache directory: /root/.cache/vllm/torch_compile_cache/68067b4c1d/rank_1_0 for vLLM's torch.compile
(VllmWorker rank=1 pid=2171) INFO 03-19 07:09:59 [backends.py:419] Dynamo bytecode transform time: 11.04 s
(VllmWorker rank=0 pid=2154) INFO 03-19 07:10:02 [backends.py:132] Cache the graph of shape None for later use
(VllmWorker rank=1 pid=2171) INFO 03-19 07:10:02 [backends.py:132] Cache the graph of shape None for later use
(VllmWorker rank=0 pid=2154) INFO 03-19 07:10:37 [backends.py:144] Compiling a graph for general shape takes 37.07 s
(VllmWorker rank=1 pid=2171) INFO 03-19 07:10:37 [backends.py:144] Compiling a graph for general shape takes 37.58 s
(VllmWorker rank=0 pid=2154) INFO 03-19 07:11:09 [monitor.py:33] torch.compile takes 48.05 s in total
(VllmWorker rank=1 pid=2171) INFO 03-19 07:11:09 [monitor.py:33] torch.compile takes 48.62 s in total
INFO 03-19 07:11:10 [kv_cache_utils.py:537] GPU KV cache size: 48,928 tokens
INFO 03-19 07:11:10 [kv_cache_utils.py:540] Maximum concurrency for 30,000 tokens per request: 1.63x
INFO 03-19 07:11:10 [kv_cache_utils.py:537] GPU KV cache size: 48,928 tokens
INFO 03-19 07:11:10 [kv_cache_utils.py:540] Maximum concurrency for 30,000 tokens per request: 1.63x
(VllmWorker rank=1 pid=2171) INFO 03-19 07:12:00 [gpu_model_runner.py:1436] Graph capturing finished in 50 secs, took 2.28 GiB
(VllmWorker rank=0 pid=2154) INFO 03-19 07:12:00 [gpu_model_runner.py:1436] Graph capturing finished in 50 secs, took 2.28 GiB
INFO 03-19 07:12:01 [core.py:138] init engine (profile, create kv cache, warmup model) took 133.53 seconds
INFO 03-19 07:12:01 [serving_chat.py:115] Using default chat sampling params from model: {'temperature': 0.6, 'top_k': 40, 'top_p': 0.95}
INFO 03-19 07:12:01 [serving_completion.py:61] Using default completion sampling params from model: {'temperature': 0.6, 'top_k': 40, 'top_p': 0.95}
INFO 03-19 07:12:01 [api_server.py:1024] Starting vLLM API server on http://0.0.0.0:8000
INFO 03-19 07:12:01 [launcher.py:26] Available routes are:
INFO 03-19 07:12:01 [launcher.py:34] Route: /openapi.json, Methods: GET, HEAD
INFO 03-19 07:12:01 [launcher.py:34] Route: /docs, Methods: GET, HEAD
INFO 03-19 07:12:01 [launcher.py:34] Route: /docs/oauth2-redirect, Methods: GET, HEAD
INFO 03-19 07:12:01 [launcher.py:34] Route: /redoc, Methods: GET, HEAD
INFO 03-19 07:12:01 [launcher.py:34] Route: /health, Methods: GET
INFO 03-19 07:12:01 [launcher.py:34] Route: /load, Methods: GET
INFO 03-19 07:12:01 [launcher.py:34] Route: /ping, Methods: GET, POST
INFO 03-19 07:12:01 [launcher.py:34] Route: /tokenize, Methods: POST
INFO 03-19 07:12:01 [launcher.py:34] Route: /detokenize, Methods: POST
INFO 03-19 07:12:01 [launcher.py:34] Route: /v1/models, Methods: GET
INFO 03-19 07:12:01 [launcher.py:34] Route: /version, Methods: GET
INFO 03-19 07:12:01 [launcher.py:34] Route: /v1/chat/completions, Methods: POST
INFO 03-19 07:12:01 [launcher.py:34] Route: /v1/completions, Methods: POST
INFO 03-19 07:12:01 [launcher.py:34] Route: /v1/embeddings, Methods: POST
INFO 03-19 07:12:01 [launcher.py:34] Route: /pooling, Methods: POST
INFO 03-19 07:12:01 [launcher.py:34] Route: /score, Methods: POST
INFO 03-19 07:12:01 [launcher.py:34] Route: /v1/score, Methods: POST
INFO 03-19 07:12:01 [launcher.py:34] Route: /v1/audio/transcriptions, Methods: POST
INFO 03-19 07:12:01 [launcher.py:34] Route: /rerank, Methods: POST
INFO 03-19 07:12:01 [launcher.py:34] Route: /v1/rerank, Methods: POST
INFO 03-19 07:12:01 [launcher.py:34] Route: /v2/rerank, Methods: POST
INFO 03-19 07:12:01 [launcher.py:34] Route: /invocations, Methods: POST
INFO: Started server process [2040]
INFO: Waiting for application startup.
INFO: Application startup complete.
```
Without `--disable-mm-preprocessor-cache`:
```bash
(base) root@287aa8679d31:/workdir# CUDA_VISIBLE_DEVICES=0,1 CUDA_DEVICE_ORDER=PCI_BUS_ID CUDA_LAUNCH_BLOCKING=1 vllm serve Qwen_QwQ-32B-AWQ --dtype auto --api-key token-abc123 --host 0.0.0.0 --port 8000 --tensor-parallel-size 2 --gpu-memory-utilization 0.95 --max-model-len 30000 --cpu-offload-gb 0 --device cuda --disable-custom-all-reduce --enable-reasoning --reasoning-parser deepseek_r1 --served-model-name AlexBefest/Qwen_QwQ-32B-AWQ --block-size 32 --max-seq-len-to-capture 30000
INFO 03-19 07:26:21 [__init__.py:256] Automatically detected platform cuda.
INFO 03-19 07:26:22 [api_server.py:977] vLLM API server version 0.8.0
INFO 03-19 07:26:22 [api_server.py:978] args: Namespace(subparser='serve', model_tag='Qwen_QwQ-32B-AWQ', config='', host='0.0.0.0', port=8000, uvicorn_log_level='info', allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key='token-abc123', lora_modules=None, prompt_adapters=None, chat_template=None, chat_template_content_format='auto', response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, enable_ssl_refresh=False, ssl_cert_reqs=0, root_path=None, middleware=[], return_tokens_as_token_ids=False, disable_frontend_multiprocessing=False, enable_request_id_headers=False, enable_auto_tool_choice=False, tool_call_parser=None, tool_parser_plugin='', model='Qwen_QwQ-32B-AWQ', task='auto', tokenizer=None, hf_config_path=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=False, allowed_local_media_path=None, download_dir=None, load_format='auto', config_format=<ConfigFormat.AUTO: 'auto'>, dtype='auto', kv_cache_dtype='auto', max_model_len=30000, guided_decoding_backend='xgrammar', logits_processor_pattern=None, model_impl='auto', distributed_executor_backend=None, pipeline_parallel_size=1, tensor_parallel_size=2, enable_expert_parallel=False, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=32, enable_prefix_caching=None, disable_sliding_window=False, use_v2_block_manager=True, num_lookahead_slots=0, seed=None, swap_space=4, cpu_offload_gb=0.0, gpu_memory_utilization=0.95, num_gpu_blocks_override=None, max_num_batched_tokens=None, max_num_partial_prefills=1, max_long_partial_prefills=1, long_prefill_token_threshold=0, max_num_seqs=None, max_logprobs=20, disable_log_stats=False, quantization=None, rope_scaling=None, rope_theta=None, hf_overrides=None, enforce_eager=False, max_seq_len_to_capture=30000, disable_custom_all_reduce=True, tokenizer_pool_size=0, tokenizer_pool_type='ray', 
tokenizer_pool_extra_config=None, limit_mm_per_prompt=None, mm_processor_kwargs=None, disable_mm_preprocessor_cache=False, enable_lora=False, enable_lora_bias=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, enable_prompt_adapter=False, max_prompt_adapters=1, max_prompt_adapter_token=0, device='cuda', num_scheduler_steps=1, use_tqdm_on_load=True, multi_step_stream_outputs=True, scheduler_delay_factor=0.0, enable_chunked_prefill=None, speculative_model=None, speculative_model_quantization=None, num_speculative_tokens=None, speculative_disable_mqa_scorer=False, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_disable_by_batch_size=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, spec_decoding_acceptance_method='rejection_sampler', typical_acceptance_sampler_posterior_threshold=None, typical_acceptance_sampler_posterior_alpha=None, disable_logprobs_during_spec_decoding=None, model_loader_extra_config=None, ignore_patterns=[], preemption_mode=None, served_model_name=['AlexBefest/Qwen_QwQ-32B-AWQ'], qlora_adapter_name_or_path=None, show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None, disable_async_output_proc=False, scheduling_policy='fcfs', scheduler_cls='vllm.core.scheduler.Scheduler', override_neuron_config=None, override_pooler_config=None, compilation_config=None, kv_transfer_config=None, worker_cls='auto', worker_extension_cls='', generation_config='auto', override_generation_config=None, enable_sleep_mode=False, calculate_kv_scales=False, additional_config=None, enable_reasoning=True, reasoning_parser='deepseek_r1', disable_log_requests=False, max_log_len=None, disable_fastapi_docs=False, enable_prompt_tokens_details=False, enable_server_load_tracking=False, dispatch_function=<function ServeSubcommand.cmd at 0x70a528e44680>)
INFO 03-19 07:26:25 [config.py:583] This model supports multiple tasks: {'embed', 'classify', 'reward', 'generate', 'score'}. Defaulting to 'generate'.
INFO 03-19 07:26:26 [awq_marlin.py:114] The model is convertible to awq_marlin during runtime. Using awq_marlin kernel.
INFO 03-19 07:26:26 [config.py:1515] Defaulting to use mp for distributed inference
INFO 03-19 07:26:26 [config.py:1693] Chunked prefill is enabled with max_num_batched_tokens=2048.
INFO 03-19 07:26:28 [__init__.py:256] Automatically detected platform cuda.
INFO 03-19 07:26:29 [core.py:53] Initializing a V1 LLM engine (v0.8.0) with config: model='Qwen_QwQ-32B-AWQ', speculative_config=None, tokenizer='Qwen_QwQ-32B-AWQ', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.float16, max_seq_len=30000, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=2, pipeline_parallel_size=1, disable_custom_all_reduce=True, quantization=awq_marlin, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='xgrammar', reasoning_backend='deepseek_r1'), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=AlexBefest/Qwen_QwQ-32B-AWQ, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"level":3,"custom_ops":["none"],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"use_inductor":true,"compile_sizes":[],"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":512}
WARNING 03-19 07:26:29 [multiproc_worker_utils.py:310] Reducing Torch parallelism from 10 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
INFO 03-19 07:26:29 [custom_cache_manager.py:19] Setting Triton cache manager to: vllm.triton_utils.custom_cache_manager:CustomCacheManager
INFO 03-19 07:26:29 [shm_broadcast.py:258] vLLM message queue communication handle: Handle(local_reader_ranks=[0, 1], buffer_handle=(2, 10485760, 10, 'psm_bb033d9c'), local_subscribe_addr='ipc:///tmp/a2fc9108-5a69-4e68-a24d-ee50a0d20326', remote_subscribe_addr=None, remote_addr_ipv6=False)
INFO 03-19 07:26:31 [__init__.py:256] Automatically detected platform cuda.
WARNING 03-19 07:26:33 [utils.py:2282] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x73e7feb3a990>
(VllmWorker rank=0 pid=2961) INFO 03-19 07:26:33 [shm_broadcast.py:258] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_e8fce57c'), local_subscribe_addr='ipc:///tmp/4ed9f7dc-1a74-4c8e-a8cf-1ad57aaa1d98', remote_subscribe_addr=None, remote_addr_ipv6=False)
INFO 03-19 07:26:35 [__init__.py:256] Automatically detected platform cuda.
WARNING 03-19 07:26:36 [utils.py:2282] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x7402d36d3080>
(VllmWorker rank=1 pid=2978) INFO 03-19 07:26:36 [shm_broadcast.py:258] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_7277707f'), local_subscribe_addr='ipc:///tmp/437feb63-2089-4cda-a88d-ffc5c5c0cd8e', remote_subscribe_addr=None, remote_addr_ipv6=False)
(VllmWorker rank=1 pid=2978) INFO 03-19 07:26:36 [utils.py:925] Found nccl from library libnccl.so.2
(VllmWorker rank=0 pid=2961) INFO 03-19 07:26:36 [utils.py:925] Found nccl from library libnccl.so.2
(VllmWorker rank=1 pid=2978) INFO 03-19 07:26:36 [pynccl.py:69] vLLM is using nccl==2.21.5
(VllmWorker rank=0 pid=2961) INFO 03-19 07:26:36 [pynccl.py:69] vLLM is using nccl==2.21.5
(VllmWorker rank=0 pid=2961) INFO 03-19 07:26:36 [shm_broadcast.py:258] vLLM message queue communication handle: Handle(local_reader_ranks=[1], buffer_handle=(1, 4194304, 6, 'psm_4229fceb'), local_subscribe_addr='ipc:///tmp/b60d89c4-f906-4be3-8728-d0344f13458c', remote_subscribe_addr=None, remote_addr_ipv6=False)
(VllmWorker rank=1 pid=2978) INFO 03-19 07:26:36 [parallel_state.py:967] rank 1 in world size 2 is assigned as DP rank 0, PP rank 0, TP rank 1
(VllmWorker rank=0 pid=2961) INFO 03-19 07:26:36 [parallel_state.py:967] rank 0 in world size 2 is assigned as DP rank 0, PP rank 0, TP rank 0
(VllmWorker rank=1 pid=2978) INFO 03-19 07:26:36 [cuda.py:215] Using Flash Attention backend on V1 engine.
(VllmWorker rank=0 pid=2961) INFO 03-19 07:26:36 [cuda.py:215] Using Flash Attention backend on V1 engine.
(VllmWorker rank=0 pid=2961) INFO 03-19 07:26:36 [gpu_model_runner.py:1128] Starting to load model Qwen_QwQ-32B-AWQ...
(VllmWorker rank=1 pid=2978) INFO 03-19 07:26:36 [gpu_model_runner.py:1128] Starting to load model Qwen_QwQ-32B-AWQ...
(VllmWorker rank=0 pid=2961) WARNING 03-19 07:26:37 [topk_topp_sampler.py:63] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
Loading safetensors checkpoint shards: 0% Completed | 0/5 [00:00<?, ?it/s]
(VllmWorker rank=1 pid=2978) WARNING 03-19 07:26:37 [topk_topp_sampler.py:63] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
Loading safetensors checkpoint shards: 20% Completed | 1/5 [00:00<00:02, 1.51it/s]
Loading safetensors checkpoint shards: 40% Completed | 2/5 [00:01<00:02, 1.44it/s]
Loading safetensors checkpoint shards: 60% Completed | 3/5 [00:02<00:01, 1.43it/s]
Loading safetensors checkpoint shards: 80% Completed | 4/5 [00:02<00:00, 1.56it/s]
Loading safetensors checkpoint shards: 100% Completed | 5/5 [00:03<00:00, 1.74it/s]
Loading safetensors checkpoint shards: 100% Completed | 5/5 [00:03<00:00, 1.62it/s]
(VllmWorker rank=0 pid=2961)
(VllmWorker rank=0 pid=2961) INFO 03-19 07:26:40 [loader.py:429] Loading weights took 3.13 seconds
(VllmWorker rank=0 pid=2961) INFO 03-19 07:26:41 [gpu_model_runner.py:1140] Model loading took 9.0970 GB and 3.917567 seconds
(VllmWorker rank=1 pid=2978) INFO 03-19 07:26:42 [loader.py:429] Loading weights took 5.21 seconds
(VllmWorker rank=1 pid=2978) INFO 03-19 07:26:43 [gpu_model_runner.py:1140] Model loading took 9.0970 GB and 6.210210 seconds
(VllmWorker rank=1 pid=2978) INFO 03-19 07:26:54 [backends.py:409] Using cache directory: /root/.cache/vllm/torch_compile_cache/68067b4c1d/rank_1_0 for vLLM's torch.compile
(VllmWorker rank=1 pid=2978) INFO 03-19 07:26:54 [backends.py:419] Dynamo bytecode transform time: 10.80 s
(VllmWorker rank=0 pid=2961) INFO 03-19 07:26:54 [backends.py:409] Using cache directory: /root/.cache/vllm/torch_compile_cache/68067b4c1d/rank_0_0 for vLLM's torch.compile
(VllmWorker rank=0 pid=2961) INFO 03-19 07:26:54 [backends.py:419] Dynamo bytecode transform time: 11.01 s
(VllmWorker rank=1 pid=2978) INFO 03-19 07:26:55 [backends.py:115] Directly load the compiled graph for shape None from the cache
(VllmWorker rank=0 pid=2961) INFO 03-19 07:26:56 [backends.py:115] Directly load the compiled graph for shape None from the cache
(VllmWorker rank=0 pid=2961) INFO 03-19 07:27:07 [monitor.py:33] torch.compile takes 11.01 s in total
(VllmWorker rank=1 pid=2978) INFO 03-19 07:27:07 [monitor.py:33] torch.compile takes 10.80 s in total
INFO 03-19 07:27:08 [kv_cache_utils.py:537] GPU KV cache size: 58,816 tokens
INFO 03-19 07:27:08 [kv_cache_utils.py:540] Maximum concurrency for 30,000 tokens per request: 1.96x
INFO 03-19 07:27:08 [kv_cache_utils.py:537] GPU KV cache size: 58,816 tokens
INFO 03-19 07:27:08 [kv_cache_utils.py:540] Maximum concurrency for 30,000 tokens per request: 1.96x
(VllmWorker rank=0 pid=2961) INFO 03-19 07:27:52 [gpu_model_runner.py:1436] Graph capturing finished in 44 secs, took 2.28 GiB
(VllmWorker rank=1 pid=2978) INFO 03-19 07:27:52 [gpu_model_runner.py:1436] Graph capturing finished in 44 secs, took 2.28 GiB
(VllmWorker rank=1 pid=2978) ERROR 03-19 07:27:53 [multiproc_executor.py:375] WorkerProc hit an exception: %s
(VllmWorker rank=0 pid=2961) ERROR 03-19 07:27:53 [multiproc_executor.py:375] WorkerProc hit an exception: %s
(VllmWorker rank=1 pid=2978) ERROR 03-19 07:27:53 [multiproc_executor.py:375] Traceback (most recent call last):
(VllmWorker rank=0 pid=2961) ERROR 03-19 07:27:53 [multiproc_executor.py:375] Traceback (most recent call last):
(VllmWorker rank=1 pid=2978) ERROR 03-19 07:27:53 [multiproc_executor.py:375] File "/opt/conda/lib/python3.12/site-packages/vllm/v1/worker/gpu_model_runner.py", line 1308, in _dummy_sampler_run
(VllmWorker rank=0 pid=2961) ERROR 03-19 07:27:53 [multiproc_executor.py:375] File "/opt/conda/lib/python3.12/site-packages/vllm/v1/worker/gpu_model_runner.py", line 1308, in _dummy_sampler_run
(VllmWorker rank=1 pid=2978) ERROR 03-19 07:27:53 [multiproc_executor.py:375] sampler_output = self.model.sample(
(VllmWorker rank=0 pid=2961) ERROR 03-19 07:27:53 [multiproc_executor.py:375] sampler_output = self.model.sample(
(VllmWorker rank=1 pid=2978) ERROR 03-19 07:27:53 [multiproc_executor.py:375] ^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=2961) ERROR 03-19 07:27:53 [multiproc_executor.py:375] ^^^^^^^^^^^^^^^^^^
(VllmWorker rank=1 pid=2978) ERROR 03-19 07:27:53 [multiproc_executor.py:375] File "/opt/conda/lib/python3.12/site-packages/vllm/model_executor/models/qwen2.py", line 480, in sample
(VllmWorker rank=0 pid=2961) ERROR 03-19 07:27:53 [multiproc_executor.py:375] File "/opt/conda/lib/python3.12/site-packages/vllm/model_executor/models/qwen2.py", line 480, in sample
(VllmWorker rank=1 pid=2978) ERROR 03-19 07:27:53 [multiproc_executor.py:375] next_tokens = self.sampler(logits, sampling_metadata)
(VllmWorker rank=0 pid=2961) ERROR 03-19 07:27:53 [multiproc_executor.py:375] next_tokens = self.sampler(logits, sampling_metadata)
(VllmWorker rank=1 pid=2978) ERROR 03-19 07:27:53 [multiproc_executor.py:375] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=2961) ERROR 03-19 07:27:53 [multiproc_executor.py:375] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=1 pid=2978) ERROR 03-19 07:27:53 [multiproc_executor.py:375] File "/opt/conda/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
(VllmWorker rank=0 pid=2961) ERROR 03-19 07:27:53 [multiproc_executor.py:375] File "/opt/conda/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
(VllmWorker rank=1 pid=2978) ERROR 03-19 07:27:53 [multiproc_executor.py:375] return self._call_impl(*args, **kwargs)
(VllmWorker rank=0 pid=2961) ERROR 03-19 07:27:53 [multiproc_executor.py:375] return self._call_impl(*args, **kwargs)
(VllmWorker rank=1 pid=2978) ERROR 03-19 07:27:53 [multiproc_executor.py:375] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=2961) ERROR 03-19 07:27:53 [multiproc_executor.py:375] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=1 pid=2978) ERROR 03-19 07:27:53 [multiproc_executor.py:375] File "/opt/conda/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
(VllmWorker rank=0 pid=2961) ERROR 03-19 07:27:53 [multiproc_executor.py:375] File "/opt/conda/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
(VllmWorker rank=1 pid=2978) ERROR 03-19 07:27:53 [multiproc_executor.py:375] return forward_call(*args, **kwargs)
(VllmWorker rank=0 pid=2961) ERROR 03-19 07:27:53 [multiproc_executor.py:375] return forward_call(*args, **kwargs)
(VllmWorker rank=1 pid=2978) ERROR 03-19 07:27:53 [multiproc_executor.py:375] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=2961) ERROR 03-19 07:27:53 [multiproc_executor.py:375] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=1 pid=2978) ERROR 03-19 07:27:53 [multiproc_executor.py:375] File "/opt/conda/lib/python3.12/site-packages/vllm/v1/sample/sampler.py", line 49, in forward
(VllmWorker rank=0 pid=2961) ERROR 03-19 07:27:53 [multiproc_executor.py:375] File "/opt/conda/lib/python3.12/site-packages/vllm/v1/sample/sampler.py", line 49, in forward
(VllmWorker rank=1 pid=2978) ERROR 03-19 07:27:53 [multiproc_executor.py:375] sampled = self.sample(logits, sampling_metadata)
(VllmWorker rank=0 pid=2961) ERROR 03-19 07:27:53 [multiproc_executor.py:375] sampled = self.sample(logits, sampling_metadata)
(VllmWorker rank=1 pid=2978) ERROR 03-19 07:27:53 [multiproc_executor.py:375] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=2961) ERROR 03-19 07:27:53 [multiproc_executor.py:375] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=1 pid=2978) ERROR 03-19 07:27:53 [multiproc_executor.py:375] File "/opt/conda/lib/python3.12/site-packages/vllm/v1/sample/sampler.py", line 104, in sample
(VllmWorker rank=0 pid=2961) ERROR 03-19 07:27:53 [multiproc_executor.py:375] File "/opt/conda/lib/python3.12/site-packages/vllm/v1/sample/sampler.py", line 104, in sample
(VllmWorker rank=1 pid=2978) ERROR 03-19 07:27:53 [multiproc_executor.py:375] random_sampled = self.topk_topp_sampler(
(VllmWorker rank=0 pid=2961) ERROR 03-19 07:27:53 [multiproc_executor.py:375] random_sampled = self.topk_topp_sampler(
(VllmWorker rank=1 pid=2978) ERROR 03-19 07:27:53 [multiproc_executor.py:375] ^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=2961) ERROR 03-19 07:27:53 [multiproc_executor.py:375] ^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=1 pid=2978) ERROR 03-19 07:27:53 [multiproc_executor.py:375] File "/opt/conda/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
(VllmWorker rank=0 pid=2961) ERROR 03-19 07:27:53 [multiproc_executor.py:375] File "/opt/conda/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
(VllmWorker rank=1 pid=2978) ERROR 03-19 07:27:53 [multiproc_executor.py:375] return self._call_impl(*args, **kwargs)
(VllmWorker rank=0 pid=2961) ERROR 03-19 07:27:53 [multiproc_executor.py:375] return self._call_impl(*args, **kwargs)
(VllmWorker rank=1 pid=2978) ERROR 03-19 07:27:53 [multiproc_executor.py:375] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=2961) ERROR 03-19 07:27:53 [multiproc_executor.py:375] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=1 pid=2978) ERROR 03-19 07:27:53 [multiproc_executor.py:375] File "/opt/conda/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
(VllmWorker rank=0 pid=2961) ERROR 03-19 07:27:53 [multiproc_executor.py:375] File "/opt/conda/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
(VllmWorker rank=1 pid=2978) ERROR 03-19 07:27:53 [multiproc_executor.py:375] return forward_call(*args, **kwargs)
(VllmWorker rank=0 pid=2961) ERROR 03-19 07:27:53 [multiproc_executor.py:375] return forward_call(*args, **kwargs)
(VllmWorker rank=1 pid=2978) ERROR 03-19 07:27:53 [multiproc_executor.py:375] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=2961) ERROR 03-19 07:27:53 [multiproc_executor.py:375] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=1 pid=2978) ERROR 03-19 07:27:53 [multiproc_executor.py:375] File "/opt/conda/lib/python3.12/site-packages/vllm/v1/sample/ops/topk_topp_sampler.py", line 79, in forward_native
(VllmWorker rank=0 pid=2961) ERROR 03-19 07:27:53 [multiproc_executor.py:375] File "/opt/conda/lib/python3.12/site-packages/vllm/v1/sample/ops/topk_topp_sampler.py", line 79, in forward_native
(VllmWorker rank=1 pid=2978) ERROR 03-19 07:27:53 [multiproc_executor.py:375] logits = apply_top_k_top_p(logits, k, p)
(VllmWorker rank=0 pid=2961) ERROR 03-19 07:27:53 [multiproc_executor.py:375] logits = apply_top_k_top_p(logits, k, p)
(VllmWorker rank=1 pid=2978) ERROR 03-19 07:27:53 [multiproc_executor.py:375] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=2961) ERROR 03-19 07:27:53 [multiproc_executor.py:375] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=1 pid=2978) ERROR 03-19 07:27:53 [multiproc_executor.py:375] File "/opt/conda/lib/python3.12/site-packages/vllm/v1/sample/ops/topk_topp_sampler.py", line 111, in apply_top_k_top_p
(VllmWorker rank=0 pid=2961) ERROR 03-19 07:27:53 [multiproc_executor.py:375] File "/opt/conda/lib/python3.12/site-packages/vllm/v1/sample/ops/topk_topp_sampler.py", line 111, in apply_top_k_top_p
(VllmWorker rank=1 pid=2978) ERROR 03-19 07:27:53 [multiproc_executor.py:375] logits_sort, logits_idx = logits.sort(dim=-1, descending=False)
(VllmWorker rank=0 pid=2961) ERROR 03-19 07:27:53 [multiproc_executor.py:375] logits_sort, logits_idx = logits.sort(dim=-1, descending=False)
(VllmWorker rank=1 pid=2978) ERROR 03-19 07:27:53 [multiproc_executor.py:375] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=2961) ERROR 03-19 07:27:53 [multiproc_executor.py:375] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=1 pid=2978) ERROR 03-19 07:27:53 [multiproc_executor.py:375] torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.74 GiB. GPU 1 has a total capacity of 23.57 GiB of which 605.88 MiB is free. Process 133288 has 22.97 GiB memory in use. Of the allocated memory 20.21 GiB is allocated by PyTorch, with 128.00 MiB allocated in private pools (e.g., CUDA Graphs), and 185.98 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
(VllmWorker rank=0 pid=2961) ERROR 03-19 07:27:53 [multiproc_executor.py:375] torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.74 GiB. GPU 0 has a total capacity of 23.57 GiB of which 594.88 MiB is free. Process 133267 has 22.97 GiB memory in use. Of the allocated memory 20.21 GiB is allocated by PyTorch, with 128.00 MiB allocated in private pools (e.g., CUDA Graphs), and 185.98 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
(VllmWorker rank=1 pid=2978) ERROR 03-19 07:27:53 [multiproc_executor.py:375]
(VllmWorker rank=0 pid=2961) ERROR 03-19 07:27:53 [multiproc_executor.py:375]
(VllmWorker rank=1 pid=2978) ERROR 03-19 07:27:53 [multiproc_executor.py:375] The above exception was the direct cause of the following exception:
(VllmWorker rank=0 pid=2961) ERROR 03-19 07:27:53 [multiproc_executor.py:375] The above exception was the direct cause of the following exception:
(VllmWorker rank=1 pid=2978) ERROR 03-19 07:27:53 [multiproc_executor.py:375]
(VllmWorker rank=0 pid=2961) ERROR 03-19 07:27:53 [multiproc_executor.py:375]
(VllmWorker rank=1 pid=2978) ERROR 03-19 07:27:53 [multiproc_executor.py:375] Traceback (most recent call last):
(VllmWorker rank=0 pid=2961) ERROR 03-19 07:27:53 [multiproc_executor.py:375] Traceback (most recent call last):
(VllmWorker rank=1 pid=2978) ERROR 03-19 07:27:53 [multiproc_executor.py:375] File "/opt/conda/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 371, in worker_busy_loop
(VllmWorker rank=0 pid=2961) ERROR 03-19 07:27:53 [multiproc_executor.py:375] File "/opt/conda/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 371, in worker_busy_loop
(VllmWorker rank=1 pid=2978) ERROR 03-19 07:27:53 [multiproc_executor.py:375] output = func(*args, **kwargs)
(VllmWorker rank=0 pid=2961) ERROR 03-19 07:27:53 [multiproc_executor.py:375] output = func(*args, **kwargs)
(VllmWorker rank=1 pid=2978) ERROR 03-19 07:27:53 [multiproc_executor.py:375] ^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=2961) ERROR 03-19 07:27:53 [multiproc_executor.py:375] ^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=1 pid=2978) ERROR 03-19 07:27:53 [multiproc_executor.py:375] File "/opt/conda/lib/python3.12/site-packages/vllm/v1/worker/gpu_worker.py", line 226, in compile_or_warm_up_model
(VllmWorker rank=0 pid=2961) ERROR 03-19 07:27:53 [multiproc_executor.py:375] File "/opt/conda/lib/python3.12/site-packages/vllm/v1/worker/gpu_worker.py", line 226, in compile_or_warm_up_model
(VllmWorker rank=1 pid=2978) ERROR 03-19 07:27:53 [multiproc_executor.py:375] self.model_runner._dummy_sampler_run(
(VllmWorker rank=0 pid=2961) ERROR 03-19 07:27:53 [multiproc_executor.py:375] self.model_runner._dummy_sampler_run(
(VllmWorker rank=1 pid=2978) ERROR 03-19 07:27:53 [multiproc_executor.py:375] File "/opt/conda/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
(VllmWorker rank=0 pid=2961) ERROR 03-19 07:27:53 [multiproc_executor.py:375] File "/opt/conda/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
(VllmWorker rank=1 pid=2978) ERROR 03-19 07:27:53 [multiproc_executor.py:375] return func(*args, **kwargs)
(VllmWorker rank=0 pid=2961) ERROR 03-19 07:27:53 [multiproc_executor.py:375] return func(*args, **kwargs)
(VllmWorker rank=1 pid=2978) ERROR 03-19 07:27:53 [multiproc_executor.py:375] ^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=0 pid=2961) ERROR 03-19 07:27:53 [multiproc_executor.py:375] ^^^^^^^^^^^^^^^^^^^^^
(VllmWorker rank=1 pid=2978) ERROR 03-19 07:27:53 [multiproc_executor.py:375] File "/opt/conda/lib/python3.12/site-packages/vllm/v1/worker/gpu_model_runner.py", line 1312, in _dummy_sampler_run
(VllmWorker rank=0 pid=2961) ERROR 03-19 07:27:53 [multiproc_executor.py:375] File "/opt/conda/lib/python3.12/site-packages/vllm/v1/worker/gpu_model_runner.py", line 1312, in _dummy_sampler_run
(VllmWorker rank=1 pid=2978) ERROR 03-19 07:27:53 [multiproc_executor.py:375] raise RuntimeError(
(VllmWorker rank=0 pid=2961) ERROR 03-19 07:27:53 [multiproc_executor.py:375] raise RuntimeError(
(VllmWorker rank=1 pid=2978) ERROR 03-19 07:27:53 [multiproc_executor.py:375] RuntimeError: CUDA out of memory occurred when warming up sampler with 1024 dummy requests. Please try lowering `max_num_seqs` or `gpu_memory_utilization` when initializing the engine.
(VllmWorker rank=0 pid=2961) ERROR 03-19 07:27:53 [multiproc_executor.py:375] RuntimeError: CUDA out of memory occurred when warming up sampler with 1024 dummy requests. Please try lowering `max_num_seqs` or `gpu_memory_utilization` when initializing the engine.
ERROR 03-19 07:27:53 [core.py:340] EngineCore hit an exception: Traceback (most recent call last):
ERROR 03-19 07:27:53 [core.py:340] File "/opt/conda/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 332, in run_engine_core
ERROR 03-19 07:27:53 [core.py:340] engine_core = EngineCoreProc(*args, **kwargs)
ERROR 03-19 07:27:53 [core.py:340] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-19 07:27:53 [core.py:340] File "/opt/conda/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 287, in __init__
ERROR 03-19 07:27:53 [core.py:340] super().__init__(vllm_config, executor_class, log_stats)
ERROR 03-19 07:27:53 [core.py:340] File "/opt/conda/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 62, in __init__
ERROR 03-19 07:27:53 [core.py:340] num_gpu_blocks, num_cpu_blocks = self._initialize_kv_caches(
ERROR 03-19 07:27:53 [core.py:340] ^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-19 07:27:53 [core.py:340] File "/opt/conda/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 135, in _initialize_kv_caches
ERROR 03-19 07:27:53 [core.py:340] self.model_executor.initialize_from_config(kv_cache_configs)
ERROR 03-19 07:27:53 [core.py:340] File "/opt/conda/lib/python3.12/site-packages/vllm/v1/executor/abstract.py", line 63, in initialize_from_config
ERROR 03-19 07:27:53 [core.py:340] self.collective_rpc("compile_or_warm_up_model")
ERROR 03-19 07:27:53 [core.py:340] File "/opt/conda/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 133, in collective_rpc
ERROR 03-19 07:27:53 [core.py:340] raise e
ERROR 03-19 07:27:53 [core.py:340] File "/opt/conda/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 122, in collective_rpc
ERROR 03-19 07:27:53 [core.py:340] raise result
ERROR 03-19 07:27:53 [core.py:340] RuntimeError: CUDA out of memory occurred when warming up sampler with 1024 dummy requests. Please try lowering `max_num_seqs` or `gpu_memory_utilization` when initializing the engine.
ERROR 03-19 07:27:53 [core.py:340]
CRITICAL 03-19 07:27:53 [core_client.py:269] Got fatal signal from worker processes, shutting down. See stack trace above for root cause issue.
Killed
```
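The final RuntimeError already names the knobs to turn. As a hedged sketch only (a CLI config fragment, not runnable here: the flag values are guesses to be tuned for 24 GiB cards, and `<model>` is a placeholder, not taken from this report), lowering them on the vLLM CLI looks like:

```shell
# Cap the sampler warm-up batch and leave headroom in GPU memory.
vllm serve <model> \
  --tensor-parallel-size 2 \
  --max-num-seqs 256 \
  --gpu-memory-utilization 0.85
```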
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
closed
|
2025-03-19T07:36:31Z
|
2025-03-20T03:56:36Z
|
https://github.com/vllm-project/vllm/issues/15105
|
[
"bug"
] |
AlexBefest
| 10
|
junyanz/pytorch-CycleGAN-and-pix2pix
|
computer-vision
| 1,541
|
Why is there no latest_net_G.pth in the checkpoints folder?
|
When I train my own custom dataset, I am getting this error: FileNotFoundError: [Errno 2] No such file or directory: './checkpoints/flir_v2ir/latest_net_G.pth'. Any ideas?
This is the full error message:
Traceback (most recent call last):
File "train.py", line 34, in <module>
model.setup(opt) # regular setup: load and print networks; create schedulers
File "/content/drive/MyDrive/pytorch-CycleGAN-and-pix2pix-master/models/base_model.py", line 88, in setup
self.load_networks(load_suffix)
File "/content/drive/MyDrive/pytorch-CycleGAN-and-pix2pix-master/models/base_model.py", line 192, in load_networks
state_dict = torch.load(load_path, map_location=str(self.device))
File "/usr/local/lib/python3.8/dist-packages/torch/serialization.py", line 771, in load
with _open_file_like(f, 'rb') as opened_file:
File "/usr/local/lib/python3.8/dist-packages/torch/serialization.py", line 270, in _open_file_like
return _open_file(name_or_buffer, mode)
File "/usr/local/lib/python3.8/dist-packages/torch/serialization.py", line 251, in __init__
super(_open_file, self).__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: './checkpoints/flir_v2ir/latest_net_G.pth'
|
open
|
2023-02-06T17:36:56Z
|
2023-02-10T14:16:44Z
|
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1541
|
[] |
jcgit786
| 1
|
ranaroussi/yfinance
|
pandas
| 1,557
|
code from quickstart is not working properly
|
I am referring to the code displayed under the Quick Start section of the module. It doesn't seem to work with version 0.2.20; I get the following error
yfinance failed to decrypt Yahoo data response
for the following attribute:
#msft.shares
#msft.income_stmt
#msft.quarterly_income_stmt
#msft.balance_sheet
#msft.quarterly_balance_sheet
#msft.cashflow
#msft.quarterly_cashflow
#msft.earnings
#msft.quarterly_earnings
#msft.sustainability
#msft.recommendations
#msft.recommendations_summary
#msft.analyst_price_target
#msft.revenue_forecasts
#msft.earnings_forecasts
#msft.earnings_trend
#msft.calendar
I get that there was a significant API change. I was wondering how to improve the quickstart guide. I was thinking about removing the code that is not working, for a start. But maybe there is a better way to proceed (is there an alternative way to get this information from the API, and could we document that?)
|
closed
|
2023-06-10T10:16:53Z
|
2023-06-24T18:10:13Z
|
https://github.com/ranaroussi/yfinance/issues/1557
|
[] |
lcrmorin
| 3
|
2noise/ChatTTS
|
python
| 630
|
S
|
closed
|
2024-07-25T08:47:12Z
|
2024-07-27T16:27:22Z
|
https://github.com/2noise/ChatTTS/issues/630
|
[
"invalid"
] |
MantleGuy12
| 0
|
|
hzwer/ECCV2022-RIFE
|
computer-vision
| 161
|
Performance of v3 model
|
I'm not sure if the v3 model can beat the v2.4 model, because the scale hyperparameter is very sensitive from the v3 model onward.
Anyone can help to test some hard cases?
|
closed
|
2021-05-15T09:28:57Z
|
2021-05-17T06:53:38Z
|
https://github.com/hzwer/ECCV2022-RIFE/issues/161
|
[] |
hzwer
| 4
|
microsoft/nni
|
tensorflow
| 5,009
|
Experiment cannot run (Waiting). And I cannot refresh the web UI after the first time I open it. Some experiments fail because of 'placementConstraint: { type: 'None', gpus: [] }'.
|
**Describe the issue**:
Experiment cannot run (Waiting). And I cannot refresh the web UI after the first time I open it. Some experiments fail because of 'placementConstraint: { type: 'None', gpus: [] }'.
**Environment**:
- NNI version: 2.6.1
- Training service (local|remote|pai|aml|etc): local
- Client OS: Linux
- Server OS (for remote mode only):
- Python version: 3.6.13
- PyTorch/TensorFlow version: Pytorch
- Is conda/virtualenv/venv used?: Conda
- Is running in Docker?: no
[dispatcher.log](https://github.com/microsoft/nni/files/9156977/dispatcher.log)
[nnictl_stderr.log](https://github.com/microsoft/nni/files/9156978/nnictl_stderr.log)
[nnictl_stdout.log](https://github.com/microsoft/nni/files/9156979/nnictl_stdout.log)
[nnimanager.log](https://github.com/microsoft/nni/files/9156980/nnimanager.log)
|
closed
|
2022-07-21T07:10:12Z
|
2022-07-23T04:41:02Z
|
https://github.com/microsoft/nni/issues/5009
|
[] |
JimmyMa99
| 10
|
Anjok07/ultimatevocalremovergui
|
pytorch
| 1,770
|
I am facing this issue, please read the body
|
Last Error Received:
Process: MDX-Net
If this error persists, please contact the developers with the error details.
Raw Error Details:
RuntimeException: "[ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running BatchNormalization node. Name:'BatchNormalization_28' Status Message: bad allocation"
Traceback Error: "
File "UVR.py", line 6638, in process_start
File "separate.py", line 499, in seperate
File "separate.py", line 594, in demix
File "separate.py", line 635, in run_model
File "separate.py", line 491, in <lambda>
File "onnxruntime\capi\onnxruntime_inference_collection.py", line 192, in run
"
Error Time Stamp [2025-03-10 12:19:28]
Full Application Settings:
vr_model: Choose Model
aggression_setting: 5
window_size: 512
mdx_segment_size: 256
batch_size: Default
crop_size: 256
is_tta: False
is_output_image: False
is_post_process: False
is_high_end_process: False
post_process_threshold: 0.2
vr_voc_inst_secondary_model: No Model Selected
vr_other_secondary_model: No Model Selected
vr_bass_secondary_model: No Model Selected
vr_drums_secondary_model: No Model Selected
vr_is_secondary_model_activate: False
vr_voc_inst_secondary_model_scale: 0.9
vr_other_secondary_model_scale: 0.7
vr_bass_secondary_model_scale: 0.5
vr_drums_secondary_model_scale: 0.5
demucs_model: Choose Model
segment: Default
overlap: 0.25
overlap_mdx: Default
overlap_mdx23: 8
shifts: 2
chunks_demucs: Auto
margin_demucs: 44100
is_chunk_demucs: False
is_chunk_mdxnet: False
is_primary_stem_only_Demucs: False
is_secondary_stem_only_Demucs: False
is_split_mode: True
is_demucs_combine_stems: True
is_mdx23_combine_stems: True
demucs_voc_inst_secondary_model: No Model Selected
demucs_other_secondary_model: No Model Selected
demucs_bass_secondary_model: No Model Selected
demucs_drums_secondary_model: No Model Selected
demucs_is_secondary_model_activate: False
demucs_voc_inst_secondary_model_scale: 0.9
demucs_other_secondary_model_scale: 0.7
demucs_bass_secondary_model_scale: 0.5
demucs_drums_secondary_model_scale: 0.5
demucs_pre_proc_model: No Model Selected
is_demucs_pre_proc_model_activate: False
is_demucs_pre_proc_model_inst_mix: False
mdx_net_model: UVR-MDX-NET Inst HQ 3
chunks: Auto
margin: 44100
compensate: Auto
denoise_option: None
is_match_frequency_pitch: True
phase_option: Automatic
phase_shifts: None
is_save_align: False
is_match_silence: True
is_spec_match: False
is_mdx_c_seg_def: False
is_invert_spec: False
is_deverb_vocals: False
deverb_vocal_opt: Main Vocals Only
voc_split_save_opt: Lead Only
is_mixer_mode: False
mdx_batch_size: Default
mdx_voc_inst_secondary_model: No Model Selected
mdx_other_secondary_model: No Model Selected
mdx_bass_secondary_model: No Model Selected
mdx_drums_secondary_model: No Model Selected
mdx_is_secondary_model_activate: False
mdx_voc_inst_secondary_model_scale: 0.9
mdx_other_secondary_model_scale: 0.7
mdx_bass_secondary_model_scale: 0.5
mdx_drums_secondary_model_scale: 0.5
is_save_all_outputs_ensemble: True
is_append_ensemble_name: False
chosen_audio_tool: Manual Ensemble
choose_algorithm: Min Spec
time_stretch_rate: 2.0
pitch_rate: 2.0
is_time_correction: True
is_gpu_conversion: True
is_primary_stem_only: False
is_secondary_stem_only: True
is_testing_audio: False
is_auto_update_model_params: True
is_add_model_name: False
is_accept_any_input: False
is_task_complete: False
is_normalization: False
is_use_opencl: False
is_wav_ensemble: False
is_create_model_folder: False
mp3_bit_set: 320k
semitone_shift: 0
save_format: WAV
wav_type_set: PCM_16
device_set: Default
help_hints_var: True
set_vocal_splitter: No Model Selected
is_set_vocal_splitter: False
is_save_inst_set_vocal_splitter: False
model_sample_mode: False
model_sample_mode_duration: 30
demucs_stems: All Stems
mdx_stems: All Stems
|
open
|
2025-03-10T06:50:31Z
|
2025-03-10T06:50:31Z
|
https://github.com/Anjok07/ultimatevocalremovergui/issues/1770
|
[] |
Draxie232
| 0
|
biolab/orange3
|
numpy
| 6,713
|
AttributeError: module 'Orange.widgets.gui' has no attribute 'WebviewWidget'
|
orange 3.3
orange-text 1.15.0
OS linux mint victoria
run it with the "python3 -m Orange.canvas" command
Expected behavior
It's supposed to create a wordcloud from a corpus file.
Actual behavior
When run, an error message appears: 'Orange.widgets.gui' has no attribute 'WebviewWidget'.
Steps to reproduce the behavior
Create corpus
link corpus to preprocessing text
link preprocessing to wordcloud
try to open wordcloud
|
closed
|
2024-01-24T02:00:14Z
|
2025-01-12T22:29:29Z
|
https://github.com/biolab/orange3/issues/6713
|
[
"snack"
] |
JoaoGabrielTN
| 19
|
modin-project/modin
|
data-science
| 7,298
|
BUG: conda install modin-all isn't installing modin-ray or ray
|
### Modin version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the latest released version of Modin.
- [ ] I have confirmed this bug exists on the main branch of Modin. (In order to do this you can follow [this guide](https://modin.readthedocs.io/en/stable/getting_started/installation.html#installing-from-the-github-main-branch).)
### Reproducible Example
```python
conda install modin-all
```
### Issue Description
In the list of packages that will be installed, neither modin-ray nor ray appears
### Expected Behavior
`conda install modin-all` should install modin-ray, or ray, or both
### Error Logs
_No response_
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : bdc79c146c2e32f2cab629be240f01658cfb6cc2
python : 3.12.3.final.0
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.19045
machine : AMD64
processor : AMD64 Family 25 Model 33 Stepping 2, AuthenticAMD
byteorder : little
LC_ALL : None
LANG : fr_FR.UTF-8
LOCALE : English_Europe.1252
pandas : 2.2.1
numpy : 1.26.4
pytz : 2024.1
dateutil : 2.9.0.post0
setuptools : 69.5.1
pip : 24.0
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 3.1.4
IPython : None
pandas_datareader : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2024.3.1
gcsfs : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 14.0.2
pyreadstat : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
zstandard : 0.22.0
tzdata : 2023.3
qtpy : None
pyqt5 : None
</details>
|
closed
|
2024-06-03T15:09:10Z
|
2024-06-05T10:02:48Z
|
https://github.com/modin-project/modin/issues/7298
|
[
"bug 🦗",
"Triage 🩹"
] |
RomainROCH
| 16
|
keras-team/keras
|
pytorch
| 20,108
|
Bug in `keras.src.saving.saving_lib._save_model_to_dir`
|
`tf.keras.__version__` -> "3.4.1"
If the model has already been saved once, `keras.src.models.model.Model.save` ends up calling `keras.src.saving.saving_lib._save_model_to_dir`, where `asset_store = DiskIOStore(assert_dirpath, mode="w")` ([Line 178](https://github.com/keras-team/keras/blob/master/keras/src/saving/saving_lib.py#L179)) raises `FileExistsError`. While handling that error, the `finally` clause line `asset_store.close()` ([Line 189](https://github.com/keras-team/keras/blob/master/keras/src/saving/saving_lib.py#L189)) then causes `UnboundLocalError: local variable 'asset_store' referenced before assignment`, because `asset_store` was never bound.
```shell
FileExistsError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/keras/src/saving/saving_lib.py](https://localhost:8080/#) in _save_model_to_dir(model, dirpath, weights_format)
139 )
--> 140 asset_store = DiskIOStore(assert_dirpath, mode="w")
141 _save_state(
FileExistsError: [Errno 17] File exists: '/content/.../model_weights/assets'
During handling of the above exception, another exception occurred:
UnboundLocalError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/keras/src/saving/saving_lib.py](https://localhost:8080/#) in _save_model_to_dir(model, dirpath, weights_format)
148 finally:
149 weights_store.close()
--> 150 asset_store.close()
151
152
UnboundLocalError: local variable 'asset_store' referenced before assignment
```
Solution: move `asset_store.close()` out of the `finally` clause into the `try` clause, or call `asset_store.close()` only if `asset_store` is defined (update lines 158–189, i.e., https://github.com/keras-team/keras/blob/master/keras/src/saving/saving_lib.py#L158-L189):
```python
def _save_model_to_dir(model, dirpath, weights_format):
if not file_utils.exists(dirpath):
file_utils.makedirs(dirpath)
config_json, metadata_json = _serialize_model_as_json(model)
with open(file_utils.join(dirpath, _METADATA_FILENAME), "w") as f:
f.write(metadata_json)
with open(file_utils.join(dirpath, _CONFIG_FILENAME), "w") as f:
f.write(config_json)
weights_filepath = file_utils.join(dirpath, _VARS_FNAME_H5)
assert_dirpath = file_utils.join(dirpath, _ASSETS_DIRNAME)
try:
if weights_format == "h5":
weights_store = H5IOStore(weights_filepath, mode="w")
elif weights_format == "npz":
weights_store = NpzIOStore(weights_filepath, mode="w")
else:
raise ValueError(
"Unknown `weights_format` argument. "
"Expected 'h5' or 'npz'. "
f"Received: weights_format={weights_format}"
)
asset_store = DiskIOStore(assert_dirpath, mode="w")
_save_state(
model,
weights_store=weights_store,
assets_store=asset_store,
inner_path="",
visited_saveables=set(),
)
finally:
weights_store.close()
if ('asset_store' in locals()): asset_store.close() # check if `asset_store` define then only close
```
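A hedged alternative to the `locals()` check: bind `asset_store` to `None` before the `try` so the `finally` clause can always reference it safely. A minimal self-contained sketch (the `Store` class is a dummy stand-in for `DiskIOStore`, not Keras code):

```python
class Store:
    """Dummy stand-in for an IO store whose constructor may fail."""

    def __init__(self, fail=False):
        if fail:
            raise FileExistsError("assets dir already exists")
        self.closed = False

    def close(self):
        self.closed = True


def save(fail_asset_store=False):
    weights_store = Store()
    asset_store = None  # bind before try so finally can always test it
    try:
        asset_store = Store(fail=fail_asset_store)
    finally:
        weights_store.close()
        if asset_store is not None:  # no UnboundLocalError on failure
            asset_store.close()


save()  # succeeds; on failure, FileExistsError propagates cleanly
```

With this pattern, a failure inside the `try` re-raises the original `FileExistsError` instead of masking it with an `UnboundLocalError` from the cleanup code.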
|
closed
|
2024-08-10T13:12:49Z
|
2024-08-15T05:33:26Z
|
https://github.com/keras-team/keras/issues/20108
|
[
"stat:awaiting response from contributor",
"type:Bug"
] |
MegaCreater
| 6
|
autokey/autokey
|
automation
| 157
|
Does not work with Rofi
|
## Classification:
Bug
## Reproducibility:
Always
## Summary
I use a launcher program called [rofi](https://github.com/DaveDavenport/rofi). I use autokey phrases to map hyper + hjkl to the arrow keys (I modified the us layout file to make capslock the hyper key). When I open the rofi window to run a program, autokey seems to send the keys to the window below it instead.
## Steps to Reproduce
For example:
1. Using firefox and then open rofi,
2. Try to use Hyper + j/k to move between the programs listed in rofi.
## Expected Results
It should act like the arrow keys and move the highlighted program up/down.
## Actual Results
The page in Firefox scrolls instead, and a bunch of j/k letters is typed into rofi.
## Version
Latest git e1b6ba4.
Distro:
KDE Neon, basically Ubuntu 16.04
|
closed
|
2018-06-18T11:07:54Z
|
2022-10-02T06:51:29Z
|
https://github.com/autokey/autokey/issues/157
|
[
"wontfix",
"upstream bug",
"autokey triggers"
] |
snippins
| 7
|
tqdm/tqdm
|
jupyter
| 779
|
fix broken url link to awesome-python
|
Hi, the link to the awesome-python repo in the index README is not working because of an unwanted trailing `)`.
https://github.com/tqdm/tqdm/pull/778
|
closed
|
2019-07-19T10:15:30Z
|
2019-08-08T16:33:32Z
|
https://github.com/tqdm/tqdm/issues/779
|
[
"question/docs ‽",
"to-merge ↰"
] |
cheng10
| 1
|
xuebinqin/U-2-Net
|
computer-vision
| 251
|
For smooth loss func advise
|
Since I have tried BCE loss for a while and the loss plateaus around 0.03, I suspect BCE is not that suitable for training on soft labels. Many datasets, including the DUTS-TR you used, have ground truths labeled in [0, 1]. So wouldn't it be better to use MSE loss, or even KL divergence, as the loss function instead of BCE? Or to convert it into a cross-entropy problem with a softmax activation. @xuebinqin
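To put a number on that plateau (a sketch of the math, not code from this repo): with a soft target t, per-value BCE is minimized at pred = t, but its minimum equals the entropy of t, which is strictly positive, whereas MSE does reach zero there:

```python
import math

def bce(pred, target, eps=1e-7):
    # Per-value binary cross-entropy; target may be a soft label in [0, 1].
    pred = min(max(pred, eps), 1 - eps)
    return -(target * math.log(pred) + (1 - target) * math.log(1 - pred))

def mse(pred, target):
    return (pred - target) ** 2

# Hard label: BCE can get arbitrarily close to 0.
print(bce(0.99, 1.0))   # ≈ 0.01
# Soft label t = 0.5: BCE bottoms out at -log(0.5) ≈ 0.693 even at the
# optimal prediction, while MSE is exactly 0 there.
print(bce(0.5, 0.5))    # ≈ 0.693
print(mse(0.5, 0.5))    # 0.0
```

So with soft-labeled ground truths, a BCE curve that flattens well above zero need not mean the model has stopped improving; MSE or KL divergence removes that floor.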
|
open
|
2021-08-27T13:08:51Z
|
2022-02-14T17:07:12Z
|
https://github.com/xuebinqin/U-2-Net/issues/251
|
[] |
Sparknzz
| 1
|
tensorpack/tensorpack
|
tensorflow
| 1,155
|
Stuck on Pre-filling StagingArea
|
Settings:
Tensorflow 1.9.0
Cuda 9.0
Tensorpack 0.8.9
I use the official training code (alexnet-dorefa.py), but it gets stuck on Pre-filling StagingArea. When I change `SyncMultiGPUTrainerReplicated` -> `SyncMultiGPUTrainerParameterServer`, training works. Can anyone provide some suggestions? I have tried TensorFlow versions 1.9.0, 1.10.0, and 1.11.0, and updating tensorpack to the latest 0.9.4, but it still fails when using the official SyncMultiGPUTrainerReplicated.
|
closed
|
2019-04-18T10:09:29Z
|
2019-04-18T13:26:05Z
|
https://github.com/tensorpack/tensorpack/issues/1155
|
[] |
snownus
| 2
|
deeppavlov/DeepPavlov
|
tensorflow
| 1,557
|
👩💻📞 DeepPavlov Community Call #15
|
Hi everyone,
We decided to keep up our tradition and, after a short break, meet with you again on a Community Call, because we have a lot to tell! This month the Community Call will be held in **Russian only.**
We will dedicate the upcoming call to the [Dialog Flow Framework](https://github.com/deepmipt/dialog_flow_framework#-dialog-flow-engine-stable), a framework for building dialogue systems. It lets developers quickly create small bots and AI assistants built around a single skill.
Dialog Flow Framework has already gained some features you can use right now. Denis Kuznetsov will explain how and where to apply them, how the product will evolve, and how it all began.
The modern world needs this solution; on the call, Denis will explain what advantages it gives developers and where it is applicable.
We look forward to your suggestions and hope to see you at our Community Call!
**The next call will take place on April 27, 2022 at 19:00 Moscow time (19 MSK / 16 or 17 UTC depending on the region).**
> Add a reminder to your calendar: https://bit.ly/DPMonthlyCallRU
**DeepPavlov Community Call #15 agenda:**
> 7:00pm–7:10pm | Welcome
7:10pm–8:00pm | Denis Kuznetsov: Dialog Flow Framework
8:00pm–8:30pm | Q&A and discussion with the DeepPavlov engineering team
If you missed the previous calls, you can always find them in the [playlist](https://www.youtube.com/playlist?list=PLt1IfGj6-_-ev9BsM38sXyQ-_ODqsNvIl).
We invite you to join us to tell us what you think about the latest changes, share your expectations for the upcoming library release, and tell us how DeepPavlov helps you in your projects!
**Leave feedback on the DeepPavlov library**
We want to hear from you. You can fill out the form below to let us know how you use the DeepPavlov Library and what you would like us to add or improve!
https://bit.ly/DPLibrarySurvey
**Interested?**
Don't miss your chance and join us! This call is open to all Conversational AI enthusiasts.
|
closed
|
2022-04-21T12:29:19Z
|
2022-05-26T12:44:49Z
|
https://github.com/deeppavlov/DeepPavlov/issues/1557
|
[
"discussion"
] |
PolinaMrdv
| 0
|
sinaptik-ai/pandas-ai
|
pandas
| 1,136
|
SSL in MySQLConnector
|
### 🚀 The feature
Hi @gventuri ,
We are using pandasai connectors to connect to MySQL in our production environment, and the MySQL connection we have is SSL-enabled.
However, we see that with the pandasai connectors we are not able to connect using SSL.
### Motivation, pitch
MySQL Connection - SSL
### Alternatives
_No response_
### Additional context
_No response_
|
closed
|
2024-04-25T07:52:50Z
|
2024-08-05T16:05:17Z
|
https://github.com/sinaptik-ai/pandas-ai/issues/1136
|
[] |
shwetabhattad-TU
| 1
|
OpenGeoscience/geonotebook
|
jupyter
| 107
|
RasterData name should defer to reader
|
https://github.com/OpenGeoscience/geonotebook/blob/master/geonotebook/wrappers/raster.py#L169-L170
|
open
|
2017-03-14T16:18:22Z
|
2017-03-14T16:18:22Z
|
https://github.com/OpenGeoscience/geonotebook/issues/107
|
[] |
kotfic
| 0
|
hzwer/ECCV2022-RIFE
|
computer-vision
| 49
|
About flow_gt and loss_dis
|
Hello author, I have a small question.
In the loss code, judging by the weights, loss_cons should be the loss_dis from the paper, right?
for i in range(3):
loss_cons += self.epe(flow_list[i], flow_gt[:, :2], 1)
loss_cons += self.epe(-flow_list[i], flow_gt[:, 2:4], 1)

Given this definition, do flow_list[i] and -flow_list[i] represent 0->1 and 1->0?
While in the paper they are 0->t and t->1?
|
closed
|
2020-12-04T08:57:03Z
|
2020-12-06T03:32:53Z
|
https://github.com/hzwer/ECCV2022-RIFE/issues/49
|
[] |
xxh96
| 2
|
yt-dlp/yt-dlp
|
python
| 11,718
|
Theater Complex TOWN is not working
|
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Japan
### Provide a description that is worded well enough to be understood
Theater Complex TOWN is not working
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', 'https://www.theater-complex.town/en/live/79akNM7bJeD5Fi9EP39aDp', '--username', 'PRIVATE', 'and', '--password', 'PRIVATE']
[debug] Encodings: locale cp936, fs utf-8, pref cp936, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.11.18 from yt-dlp/yt-dlp [7ea278792] (win_x86_exe)
[debug] Python 3.10.11 (CPython AMD64 32bit) - Windows-10-10.0.22631-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: ffmpeg 5.1.2-essentials_build-www.gyan.dev (setts)
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.11.18 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.11.18 from yt-dlp/yt-dlp)
[generic] Extracting URL: https://www.theater-complex.town/en/live/79akNM7bJeD5Fi9EP39aDp
[generic] 79akNM7bJeD5Fi9EP39aDp: Downloading webpage
WARNING: [generic] Falling back on generic information extractor
[generic] 79akNM7bJeD5Fi9EP39aDp: Extracting information
[debug] Looking for embeds
ERROR: Unsupported URL: https://www.theater-complex.town/en/live/79akNM7bJeD5Fi9EP39aDp
Traceback (most recent call last):
File "yt_dlp\YoutubeDL.py", line 1624, in wrapper
File "yt_dlp\YoutubeDL.py", line 1759, in __extract_info
File "yt_dlp\extractor\common.py", line 742, in extract
File "yt_dlp\extractor\generic.py", line 2553, in _real_extract
yt_dlp.utils.UnsupportedError: Unsupported URL: https://www.theater-complex.town/en/live/79akNM7bJeD5Fi9EP39aDp
[generic] Extracting URL: and
ERROR: [generic] 'and' is not a valid URL. Set --default-search "ytsearch" (or run yt-dlp "ytsearch:and" ) to search YouTube
File "yt_dlp\extractor\common.py", line 742, in extract
File "yt_dlp\extractor\generic.py", line 2361, in _real_extract
```
|
closed
|
2024-12-03T05:05:20Z
|
2025-01-26T03:13:05Z
|
https://github.com/yt-dlp/yt-dlp/issues/11718
|
[
"account-needed",
"site-bug",
"patch-available",
"can-share-account"
] |
shibage
| 2
|
modin-project/modin
|
pandas
| 6,870
|
BUG: Modin on dask is throwing errors on initialization,
|
### Modin version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the latest released version of Modin.
- [X] I have confirmed this bug exists on the main branch of Modin. (In order to do this you can follow [this guide](https://modin.readthedocs.io/en/stable/getting_started/installation.html#installing-from-the-github-master-branch).)
### Reproducible Example
```python
# modin version 0.26.0
# dask version 2024.1
from modin.config import Engine
Engine.put("dask")
from dask.distributed import Client
client = Client('localhost:8786') # this port forwards to dask cluster
import modin.pandas as mpd
df2 = mpd.DataFrame({'a': [1, 2], 'b': [3, 4]})
df2
```
### Issue Description
It seems like Modin is using an API from the distributed client that is no longer supported.
### Expected Behavior
It should create a simple test modin dataframe.
### Error Logs
<details>
```python-traceback
---------------------------------------------------------------------------
Exception Traceback (most recent call last)
Cell In[26], line 1
----> 1 df2 = mpd.DataFrame({'a': [1, 2], 'b': [3, 4]})
2 df2
File ~/.pyenv/versions/venv/lib/python3.11/site-packages/modin/logging/logger_decorator.py:129, in enable_logging.<locals>.decorator.<locals>.run_and_log(*args, **kwargs)
114 """
115 Compute function with logging if Modin logging is enabled.
116
(...)
126 Any
127 """
128 if LogMode.get() == "disable":
--> 129 return obj(*args, **kwargs)
131 logger = get_logger()
132 logger_level = getattr(logger, log_level)
File ~/.pyenv/versions/venv/lib/python3.11/site-packages/modin/pandas/dataframe.py:179, in DataFrame.__init__(self, data, index, columns, dtype, copy, query_compiler)
177 # Check type of data and use appropriate constructor
178 elif query_compiler is None:
--> 179 distributed_frame = from_non_pandas(data, index, columns, dtype)
180 if distributed_frame is not None:
181 self._query_compiler = distributed_frame._query_compiler
File ~/.pyenv/versions/venv/lib/python3.11/site-packages/modin/pandas/io.py:970, in from_non_pandas(df, index, columns, dtype)
949 """
950 Convert a non-pandas DataFrame into Modin DataFrame.
951
(...)
966 Converted DataFrame.
967 """
968 from modin.core.execution.dispatching.factories.dispatcher import FactoryDispatcher
--> 970 new_qc = FactoryDispatcher.from_non_pandas(df, index, columns, dtype)
971 if new_qc is not None:
972 return ModinObjects.DataFrame(query_compiler=new_qc)
File ~/.pyenv/versions/venv/lib/python3.11/site-packages/modin/core/execution/dispatching/factories/dispatcher.py:177, in FactoryDispatcher.from_non_pandas(cls, *args, **kwargs)
174 @classmethod
175 @_inherit_docstrings(factories.BaseFactory._from_non_pandas)
176 def from_non_pandas(cls, *args, **kwargs):
--> 177 return cls.get_factory()._from_non_pandas(*args, **kwargs)
File ~/.pyenv/versions/venv/lib/python3.11/site-packages/modin/core/execution/dispatching/factories/dispatcher.py:115, in FactoryDispatcher.get_factory(cls)
112 if cls.__factory is None:
113 from modin.pandas import _update_engine
--> 115 Engine.subscribe(_update_engine)
116 Engine.subscribe(cls._update_factory)
117 StorageFormat.subscribe(cls._update_factory)
File ~/.pyenv/versions/venv/lib/python3.11/site-packages/modin/config/pubsub.py:291, in Parameter.subscribe(cls, callback)
282 """
283 Add `callback` to the `_subs` list and then execute it.
284
(...)
288 Callable to execute.
289 """
290 cls._subs.append(callback)
--> 291 callback(cls)
File ~/.pyenv/versions/venv/lib/python3.11/site-packages/modin/pandas/__init__.py:154, in _update_engine(publisher)
151 if _is_first_update.get("Dask", True):
152 from modin.core.execution.dask.common import initialize_dask
--> 154 initialize_dask()
155 elif publisher.get() == "Unidist":
156 if _is_first_update.get("Unidist", True):
File ~/.pyenv/versions/venv/lib/python3.11/site-packages/modin/core/execution/dask/common/utils.py:42, in initialize_dask()
38 import warnings
40 warnings.simplefilter("ignore", category=FutureWarning)
---> 42 client.run(_disable_warnings)
44 except ValueError:
45 from distributed import Client
File ~/.pyenv/versions/venv/lib/python3.11/site-packages/distributed/client.py:2998, in Client.run(self, function, workers, wait, nanny, on_error, *args, **kwargs)
2915 def run(
2916 self,
2917 function,
(...)
2923 **kwargs,
2924 ):
2925 """
2926 Run a function on all workers outside of task scheduling system
2927
(...)
2996 >>> c.run(print_state, wait=False) # doctest: +SKIP
2997 """
-> 2998 return self.sync(
2999 self._run,
3000 function,
3001 *args,
3002 workers=workers,
3003 wait=wait,
3004 nanny=nanny,
3005 on_error=on_error,
3006 **kwargs,
3007 )
File ~/.pyenv/versions/venv/lib/python3.11/site-packages/distributed/utils.py:358, in SyncMethodMixin.sync(self, func, asynchronous, callback_timeout, *args, **kwargs)
356 return future
357 else:
--> 358 return sync(
359 self.loop, func, *args, callback_timeout=callback_timeout, **kwargs
360 )
File ~/.pyenv/versions/venv/lib/python3.11/site-packages/distributed/utils.py:434, in sync(loop, func, callback_timeout, *args, **kwargs)
431 wait(10)
433 if error is not None:
--> 434 raise error
435 else:
436 return result
File ~/.pyenv/versions/venv/lib/python3.11/site-packages/distributed/utils.py:408, in sync.<locals>.f()
406 awaitable = wait_for(awaitable, timeout)
407 future = asyncio.ensure_future(awaitable)
--> 408 result = yield future
409 except Exception as exception:
410 error = exception
File ~/.pyenv/versions/venv/lib/python3.11/site-packages/tornado/gen.py:767, in Runner.run(self)
765 try:
766 try:
--> 767 value = future.result()
768 except Exception as e:
769 # Save the exception for later. It's important that
770 # gen.throw() not be called inside this try/except block
771 # because that makes sys.exc_info behave unexpectedly.
772 exc: Optional[Exception] = e
File ~/.pyenv/versions/venv/lib/python3.11/site-packages/distributed/client.py:2903, in Client._run(self, function, nanny, workers, wait, on_error, *args, **kwargs)
2900 continue
2902 if on_error == "raise":
-> 2903 raise exc
2904 elif on_error == "return":
2905 results[key] = exc
File /opt/conda/lib/python3.10/site-packages/distributed/scheduler.py:6258, in send_message()
File /opt/conda/lib/python3.10/site-packages/distributed/core.py:1180, in send_recv()
Exception: TypeError('code expected at most 16 arguments, got 18')
```
</details>
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 47a9a4a294c75cd7b67f0fd7f95f846ed53fbafa
python : 3.11.1.final.0
python-bits : 64
OS : Darwin
OS-release : 23.2.0
Version : Darwin Kernel Version 23.2.0: Wed Nov 15 21:55:06 PST 2023; root:xnu-10002.61.3~2/RELEASE_ARM64_T6020
machine : arm64
processor : arm
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
Modin dependencies
------------------
modin : 0.26.0
ray : 2.9.0
dask : 2024.1.0
distributed : 2024.1.0
hdk : None
pandas dependencies
-------------------
pandas : 2.1.4
numpy : 1.26.1
pytz : 2023.3.post1
dateutil : 2.8.2
setuptools : 68.2.2
pip : 23.3.2
Cython : None
pytest : 7.1.2
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : 4.9.3
html5lib : None
pymysql : None
psycopg2 : 2.9.5
jinja2 : 3.1.2
IPython : 8.17.2
pandas_datareader : None
bs4 : 4.12.2
bottleneck : None
dataframe-api-compat: None
fastparquet : None
fsspec : 2023.10.0
gcsfs : 2023.10.0
matplotlib : 3.6.2
numba : 0.58.1
numexpr : None
odfpy : None
openpyxl : 3.1.2
pandas_gbq : None
pyarrow : 14.0.1
pyreadstat : None
pyxlsb : None
s3fs : 0.4.2
scipy : 1.11.4
sqlalchemy : 1.4.49
tables : None
tabulate : None
xarray : 2023.11.0
xlrd : 2.0.1
zstandard : None
tzdata : 2023.3
qtpy : None
pyqt5 : None
</details>
|
closed
|
2024-01-19T23:03:40Z
|
2024-01-30T12:51:08Z
|
https://github.com/modin-project/modin/issues/6870
|
[
"bug 🦗",
"Needs more information ❔",
"Dask ⚡",
"External"
] |
jan876
| 3
|
marshmallow-code/marshmallow-sqlalchemy
|
sqlalchemy
| 58
|
Using Python built-in Enum type in sqlalchemy.Enum column type produces wrong oneOf validation
|
``` python
class Choices(enum.Enum):
a = 'a'
b = 'b'
c = 'c'
class MyModel(db.Model):
# ...
selected_choice = db.Column(db.Enum(Choices), nullable=False)
class MyModelSchema(ModelSchema):
class Meta:
model = MyModel
fields = ['selected_choice']
```
The `MyModelSchema` end up having
```
selected_choice.validate[0].choices == (<enum 'ChoicesEnum'>,)
```
(notice a tuple), because `MyModel.selected_choice.type.enums` stores a tuple of an enum for some reason...
BTW, constructing a marshmallow field with `OneOf(ChoicesEnum)` (omitting the unnecessary tuple wrapping) doesn't help much:
``` python
>>> f = fields.Str(validate=[
validate.OneOf(ChoicesEnum),
])
>>> f.serialize('a', ChoicesEnum.a)
'ChoicesEnum.a'
```
And I couldn't make it deserialize.
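A workaround sketch (my assumption, not taken from the thread): pass the enum members' *values* to the validator rather than the enum class itself. With the standard library alone:

```python
import enum

class Choices(enum.Enum):
    a = 'a'
    b = 'b'
    c = 'c'

# Passing the class to OneOf iterates enum *members*, so serialization yields
# strings like 'Choices.a'. Extracting the values avoids that:
choices = [member.value for member in Choices]
```

so a field like `fields.Str(validate=[validate.OneOf([m.value for m in Choices])])` would validate and round-trip the plain strings (hypothetical usage).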
A related discussion that I have found: https://github.com/marshmallow-code/marshmallow-sqlalchemy/pull/2
|
closed
|
2016-02-22T15:54:13Z
|
2016-11-10T18:39:03Z
|
https://github.com/marshmallow-code/marshmallow-sqlalchemy/issues/58
|
[] |
frol
| 2
|
mirumee/ariadne
|
graphql
| 1,039
|
Incorrect typing for values on EnumType.__init__()
|
Ariadne 0.18.0 has updated the `EnumType` class in a way that causes the `values` argument of `__init__()` to become typed, but the typing is incorrect:
```python
def __init__(
self, name: str, values: Union[Dict[str, Any], enum.Enum, enum.IntEnum]
) -> None:
```
should be:
```python
def __init__(
self, name: str, values: Union[Dict[str, Any], Type[enum.Enum], Type[enum.IntEnum]]
) -> None:
```
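A minimal sketch of why `Type[...]` is the correct annotation (stdlib only; `Color` and `init_values` are illustrative names, not ariadne code): the argument passed in is the enum *class*, not an instance, so annotating it as `enum.Enum` rejects valid calls under a type checker.

```python
import enum
from typing import Any, Dict, List, Type, Union

def init_values(values: Union[Dict[str, Any], Type[enum.Enum]]) -> List[str]:
    # Mirrors what EnumType.__init__ receives: a dict, or an Enum subclass.
    if isinstance(values, dict):
        return list(values)
    return [member.name for member in values]

class Color(enum.Enum):
    RED = 1
    GREEN = 2
```

Calling `init_values(Color)` passes the class object itself, which only `Type[enum.Enum]` accepts.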
|
closed
|
2023-02-21T17:21:07Z
|
2023-02-22T10:29:22Z
|
https://github.com/mirumee/ariadne/issues/1039
|
[
"bug",
"help wanted"
] |
markedwards
| 4
|
supabase/supabase-py
|
fastapi
| 790
|
Cannot get past this empty error
|
# Bug report
## Describe the bug
Trying to execute a simple select query using Python 3.12 or 3.9. I cannot get past this error.
## To Reproduce
```python
Python 3.12.3 (main, Apr 9 2024, 08:09:14) [Clang 15.0.0 (clang-1500.3.9.4)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from supabase import create_client, Client
>>> from supabase.lib.client_options import ClientOptions
>>> url: str = "https://svyjpvnhftybdowglgmt.supabase.co/rest/v1/iot"
>>> key: str = "OMITTED"
>>> client_options = ClientOptions(postgrest_client_timeout=999999, schema="public")
>>> supabase: Client = create_client(url, key, client_options)
>>> print(supabase.table("iot").select("id").execute())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/markreeves/.local/share/virtualenvs/vercel-tidbyt-35n3k3fp/lib/python3.12/site-packages/postgrest/_sync/request_builder.py", line 78, in execute
raise APIError(r.json())
postgrest.exceptions.APIError: {}
>>> print(supabase)
<supabase._sync.client.SyncClient object at 0x1043c5a30>
>>> print (supabase.table("iot").select("id"))
<postgrest._sync.request_builder.SyncSelectRequestBuilder object at 0x1063f7fe0>
```
I've tried using postgrest directly too and received the same error. The same happens with `select("*")`.
## Expected behavior
It works in RapidAPI or using `requests` to simply fetch my project URL, so it's not a permissions issue. I expect to not get an error using the documented methods.
## System information
- OS: macOS
- Version of supabase-py: 2.4.5
|
closed
|
2024-05-05T22:36:32Z
|
2024-05-22T20:24:19Z
|
https://github.com/supabase/supabase-py/issues/790
|
[
"invalid"
] |
heymarkreeves
| 1
|
piskvorky/gensim
|
data-science
| 3,094
|
IndexError related to self.vectors_lockf in KeyedVectors.intersect_word2vec_format() in 4.0+
|
Both lines containing `vectors_lockf` variable should be:
```python
self.vectors_lockf = lockf
```
Current (4.0.0) version is:
```python
self.vectors_lockf[self.get_index(word)] = lockf
```
And this raises an `IndexError` when calling `intersect_word2vec_format()`.
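A stdlib-only illustration of the failure mode (variable names are mine, not gensim's): the 4.0 code indexes into a length-1 lockf sequence per word, while the proposed fix assigns the lockf value wholesale:

```python
lockf = 1.0
vectors_lockf = [1.0]  # gensim 4.0 starts with a length-1 lockf sequence

failed = False
try:
    vectors_lockf[5] = lockf  # buggy per-word indexing -> IndexError
except IndexError:
    failed = True

# proposed fix: replace the whole attribute instead of indexing into it
vectors_lockf = lockf
```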
|
open
|
2021-03-29T05:05:38Z
|
2022-06-08T11:33:09Z
|
https://github.com/piskvorky/gensim/issues/3094
|
[
"bug"
] |
notonlyvandalzzz
| 7
|
KaiyangZhou/deep-person-reid
|
computer-vision
| 254
|
Can you release model zoo configs?
|
closed
|
2019-11-07T07:10:54Z
|
2019-11-08T07:25:54Z
|
https://github.com/KaiyangZhou/deep-person-reid/issues/254
|
[] |
moyans
| 3
|
|
pandas-dev/pandas
|
pandas
| 60,861
|
BUG: Poor GroupBy Performance with ArrowDtype(...) wrapped types
|
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
df = pd.DataFrame({"key": range(100000), "val": "test"})
%timeit df.groupby(["key"]).first();
pa_df = df.convert_dtypes(dtype_backend="pyarrow")
%timeit pa_df.groupby(["key"]).first();
pa_df = pa_df.astype({"val": pd.StringDtype("pyarrow")})
%timeit pa_df.groupby(["key"]).first();
```
### Issue Description
Grouping by and then aggregating on a dataframe that contains `ArrowDtype(pyarrow.string())` columns is orders of magnitude slower than performing the same operations on an equivalent dataframe whose corresponding string column is of any other acceptable string type (e.g. `string`, `StringDtype("python"), StringDtype("pyarrow")`). This is surprising in particular because `StringDtype("pyarrow")` does not exhibit the same problem.
Note that in the bug reproduction example, `DataFrame.convert_dtypes` with `dtype_backend="pyarrow"` converts `string` columns to `ArrowDtype(pyarrow.string())` rather than `StringDtype("pyarrow")`.
Finally, here's a sample run, with dtypes printed out for clarity; I've reproduced this on both OS X and OpenSuse Tumbleweed for the listed pandas and pyarrow versions (as well as current `main`):
```python
In [7]: import pandas as pd
In [8]: df = pd.DataFrame({"key": range(100000), "val": "test"})
In [9]: df["val"].dtype
Out[9]: dtype('O')
In [10]: %timeit df.groupby(["key"]).first();
8.37 ms ± 599 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [11]: pa_df = df.convert_dtypes(dtype_backend="pyarrow")
In [13]: type(pa_df["val"].dtype)
Out[13]: pandas.core.dtypes.dtypes.ArrowDtype
In [14]: %timeit pa_df.groupby(["key"]).first();
2.39 s ± 142 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [15]: pa_df = pa_df.astype({"val": pd.StringDtype("pyarrow")})
...:
In [16]: type(pa_df["val"].dtype)
Out[16]: pandas.core.arrays.string_.StringDtype
In [17]: %timeit pa_df.groupby(["key"]).first();
12.9 ms ± 306 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
```
### Expected Behavior
Aggregation performance on `ArrowDtype(pyarrow.string())` columns should be comparable to aggregation performance on `StringDtype("pyarrow")`, `string` typed columns.
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.13.1
python-bits : 64
OS : Darwin
OS-release : 24.3.0
Version : Darwin Kernel Version 24.3.0: Thu Jan 2 20:24:16 PST 2025; root:xnu-11215.81.4~3/RELEASE_ARM64_T6000
machine : arm64
processor : arm
byteorder : little
LC_ALL : en_CA.UTF-8
LANG : None
LOCALE : en_CA.UTF-8
pandas : 2.2.3
numpy : 2.2.2
pytz : 2025.1
dateutil : 2.9.0.post0
pip : 24.3.1
Cython : None
sphinx : None
IPython : 8.32.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2025.2.0
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.5
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 19.0.0
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.1
qtpy : None
pyqt5 : None
</details>
|
open
|
2025-02-05T20:53:50Z
|
2025-02-10T14:19:38Z
|
https://github.com/pandas-dev/pandas/issues/60861
|
[
"Bug",
"Dtype Conversions",
"Needs Discussion",
"Arrow"
] |
kzvezdarov
| 13
|
davidsandberg/facenet
|
computer-vision
| 856
|
Can't compile faceneten with tfcompile
|
I'm using tfcompile with this description file:
```
feed {
id { node_name: "input" }
shape {
dim { size: 1 }
dim { size: 3 }
dim { size: 160 }
dim { size: 160 }
}
}
fetch {
id { node_name: "embeddings" }
}
```
`INVALID ARGUMENTS: Unable to functionalize control flow in graph: Switch ('InceptionResnetV1/Conv2d_1a_3x3/BatchNorm/cond/Switch_1') has operands ('InceptionResnetV1/Conv2d_1a_3x3/BatchNorm/cond/Switch_1/Switch' and 'InceptionResnetV1/Conv2d_1a_3x3/BatchNorm/cond/pred_id') that have different switch depths (1 != 0)`
I'm using frozen_20170512-110547.pb because newer versions don't compile due to unsupported ops.
Can anybody help me with this error? Maybe I can replace Switch with something equivalent?
|
closed
|
2018-08-24T18:13:46Z
|
2019-12-09T10:31:20Z
|
https://github.com/davidsandberg/facenet/issues/856
|
[] |
dbezhetskov
| 0
|
iperov/DeepFaceLab
|
deep-learning
| 5,628
|
Does an Intel arc a750 or 770 work on deepfacelab?
|
As the title states, does this program work with an Intel Arc A750 or A770? I was going to upgrade my computer so I can move over to SAEHD and get much faster iterations. While looking at graphics cards, I noticed the A770 was cheaper and performed better than an RTX 3060. But I read a couple of reviews saying that it doesn't work well with deep-learning programs. So I am asking whether it would work with DeepFaceLab, since it does work with some.
|
open
|
2023-02-16T04:12:19Z
|
2023-06-08T23:07:00Z
|
https://github.com/iperov/DeepFaceLab/issues/5628
|
[] |
bwppphillip
| 4
|
biolab/orange3
|
numpy
| 6,976
|
Group By: add straightforward possibility to do aggregations over all records
|
**What's your use case?**
**What's your proposed solution?**
Sometimes it is useful to have several aggregations of selected variables over all the data records. To that end, it would be nice to have "all rows" as an option in the column on the left of the Group-by interface.
**Are there any alternative solutions?**
The obvious trick to achieve this with the current functionality is to introduce a dummy variable that has the same value for all rows, and to group by the dummy variable. However, as a workaround it is not _that_ intuitive.
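The dummy-variable trick amounts to grouping on a constant key. In plain Python terms (illustrative only, not Orange code):

```python
from itertools import groupby

records = [3, 1, 4, 1, 5]
# A constant key puts every record into a single group, i.e. "all rows".
grouped = {key: list(group) for key, group in groupby(records, key=lambda _: 0)}
totals = {key: sum(group) for key, group in grouped.items()}
```

A built-in "all rows" option would hide this constant key from the user.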
|
open
|
2025-01-02T15:54:33Z
|
2025-01-13T09:47:36Z
|
https://github.com/biolab/orange3/issues/6976
|
[
"meal"
] |
wvdvegte
| 5
|
MagicStack/asyncpg
|
asyncio
| 826
|
Large Object support
|
Are there any plans to have direct support for large objects for efficient streaming of data?
My use case: my webapp supports uploads of binary data files. These files are stored with TOAST (bytea) which is fine: these files are not directly downloaded via the app, and even if they were, we're talking 10s of MB, so I'm not worried about memory footprint for an individual record. HOWEVER, part of the requirements for this app is that all these files can be downloaded in a single zip file. This can be 100s of MBs. My plan: kickoff a background task that builds the zip file, then stores the zip file in PG as a large object. The question then is: providing an efficient download via my webapp (aiohttp).
I could stream it with a loop around, e.g.:
```sql
SELECT lo_get(data_oid, :offset, :chunksize) from zipstorage where id = :id
```
where `chunksize` might be 1MB and `offset` increases by 1MB with each iteration, stopping the iteration when the returned data is < 1MB.
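That loop could be sketched like this (my sketch, assuming an asyncpg `Connection` and the `zipstorage(id, data_oid)` table described above; `stream_zip` is a hypothetical helper, not an asyncpg API):

```python
CHUNK = 1 << 20  # 1 MiB

async def stream_zip(conn, row_id):
    """Yield the large object behind zipstorage.data_oid in CHUNK-sized pieces."""
    offset = 0
    while True:
        piece = await conn.fetchval(
            "SELECT lo_get(data_oid, $1, $2) FROM zipstorage WHERE id = $3",
            offset, CHUNK, row_id,
        )
        if not piece:
            break
        yield piece
        offset += len(piece)
        if len(piece) < CHUNK:  # short read means we've hit the end
            break
```

In an aiohttp handler, each `piece` could be written to a `StreamResponse` as it arrives, keeping the memory footprint bounded at one chunk.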
Might there be a more direct, efficient way? E.g., as with psycopg2's [lobject](https://www.psycopg.org/docs/extensions.html#psycopg2.extensions.lobject)?
Other suggestions most welcome.
Thanks!
|
open
|
2021-09-10T17:35:13Z
|
2021-11-07T21:44:25Z
|
https://github.com/MagicStack/asyncpg/issues/826
|
[] |
al-dpopowich
| 1
|
jofpin/trape
|
flask
| 125
|
code 400, message Bad request syntax
|
I got this message. Please help with my problem.

```
 _
| |_ ____ ____ ____ ____
| _) / ___) _ | _ \ / _ )
| |__| | ( ( | | | | ( (/ /
\___)_| \_||_| ||_/ \____)
|_| 2018 by Jose Pino (@jofpin)
-----------------------------------------------
People tracker on internet for OSINT research |=-
-----------------------------------------------
| v2.0 |
--------
@-=[ UPDATES: RUNNING RECENT VERSION
LOCAL INFORMATION
-------------------
>-=[ Lure for the users: http://192.168.8.111:8080/github.com
>-=[ Your REST API path: http://192.168.8.111:8080/a46087be590c.js
>-=[ Control Panel Link: http://127.0.0.1:8080/bc8c3e6
>-=[ Your Access key: 1efea5af32f44ae446e24785
PUBLIC INFORMATION
-------------------
>-=[ Link shortened lure: https://goo.gl/FtRaRJ (share)
>-=[ Public lure: http://3c415abc.ngrok.io/github.com
>-=[ Control Panel link: http://3c415abc.ngrok.io/bc8c3e6
[>] Start time: 2018-12-16 - 12:42:40
[?] Do not forget to close Trape, after use. Press Control C
[¡] Waiting for the users to fall...
192.168.8.115 - - [16/Dec/2018 12:44:09] code 400, message Bad request syntax ("\x16\x03\x01\x00\xcf\x01\x00\x00\xcb\x03\x01\\\x15\xed0u\xf0-$\x9a\xc1I3\xaf\xd1A!X\x8a\xf5\xcc6e'm,\x83(A\xba_\x95\x99\x00\x00D\xc0\x14\xc0")
192.168.8.115 - - [16/Dec/2018 12:44:09] "��\�0u�-$��I3��A!X���6e'm,�(A�_��D��" 400 -
192.168.8.115 - - [16/Dec/2018 12:44:09] code 400, message Bad HTTP/0.9 request type ('\x16\x03\x01\x00\xcf\x01\x00\x00\xcb\x03\x01\\\x15\xed0$w\x9b\xf6guxQ\x99\xc7\xc7)`]\xed')
192.168.8.115 - - [16/Dec/2018 12:44:09] "��\�0$w��guxQ���)`]� h��$���" 400 -
192.168.8.115 - - [16/Dec/2018 12:44:09] code 400, message Bad request syntax ('\x16\x03\x00\x00o\x01\x00\x00k\x03\x00\\\x15\xed0X?fx\x1b\xb4\xd4\xfa\xe6\xe6BNY\x8dVP8\xd6\xa9\xc7\x11%\x9b\xcf\x05t;\xb0\x00\x00D\xc0\x14\xc0')
192.168.8.115 - - [16/Dec/2018 12:44:09] "ok\�0X?fx����BNY�VP8֩�%��t;�D��" 400 -
192.168.8.115 - - [16/Dec/2018 12:44:51] code 400, message Bad request syntax ('\x16\x03\x00\x00o\x01\x00\x00k\x03\x00\\\x15\xedZ\xde\xcb\xe7\xbc\x89\x07\xb0G\xab\xb5\xce^\xfc\xcd\x87\x1f9Gci\xef\x98\xd59\xe4\xa1\xf2l\x00\x00D\xc0\x14\xc0')
192.168.8.115 - - [16/Dec/2018 12:44:51] "ok\�Z��缉�G���^�͇9Gci��9��lD��" 400 -
192.168.8.115 - - [16/Dec/2018 12:44:51] code 400, message Bad request syntax ('\x16\x03\x00\x00o\x01\x00\x00k\x03\x00\\\x15\xedZF\x83\xd1t!\xe7=*\xf8\x08\xc9xU\xca.*\xbaw?\x1d\x12\xe0\xae\x92*%Y/\x00\x00D\xc0\x14\xc0')
192.168.8.115 - - [16/Dec/2018 12:44:51] "ok\�ZF��t!�=*�xU�.*�w?ஒ*%Y/D��" 400 -
```
[trapexx.txt](https://github.com/jofpin/trape/files/2683299/trapexx.txt)
|
open
|
2018-12-16T06:26:40Z
|
2018-12-16T06:27:30Z
|
https://github.com/jofpin/trape/issues/125
|
[] |
aungsoehein
| 0
|
aio-libs/aiomysql
|
sqlalchemy
| 94
|
Allow pulling down MetaData via reflection
|
Is there a way to pull down Table data with the aiomysql.sa API? When I try either of the following I get an error.
```python
meta = MetaData()
yield from meta.reflect(bind=engine)
```
or
```python
meta = MetaData()
Table('test_table', meta, autoload=True, autoload_with=engine)
```

```python-traceback
File "/anaconda/envs/asap/lib/python3.5/asyncio/base_events.py", line 387, in run_until_complete
return future.result()
File "/anaconda/envs/asap/lib/python3.5/asyncio/futures.py", line 274, in result
raise self._exception
File "/anaconda/envs/asap/lib/python3.5/asyncio/tasks.py", line 239, in _step
result = coro.send(None)
File "/Users/user/Documents/test/loader/loader_asyncio.py", line 39, in get_vrts
yield from get_data(engine, vrt)
File "/Users/user/Documents/test/loader/loader_asyncio.py", line 53, in get_data
yield from _get_compound_data(engine, message, vrt_number)
File "/Users/user/Documents/test/loader/loader_asyncio.py", line 90, in _get_compound_data
yield from meta.reflect(bind=engine)
File "/anaconda/envs/asap/lib/python3.5/site-packages/sqlalchemy/sql/schema.py", line 3652, in reflect
with bind.connect() as conn:
AttributeError: 'Engine' object has no attribute 'connect'
Exception ignored in: <bound method Connection.__del__ of <aiomysql.connection.Connection object at 0x1039a8940>>
Traceback (most recent call last):
File "/anaconda/envs/asap/lib/python3.5/site-packages/aiomysql/connection.py", line 694, in __del__
File "/anaconda/envs/asap/lib/python3.5/site-packages/aiomysql/connection.py", line 260, in close
File "/anaconda/envs/asap/lib/python3.5/asyncio/selector_events.py", line 573, in close
File "/anaconda/envs/asap/lib/python3.5/asyncio/base_events.py", line 497, in call_soon
File "/anaconda/envs/asap/lib/python3.5/asyncio/base_events.py", line 506, in _call_soon
File "/anaconda/envs/asap/lib/python3.5/asyncio/base_events.py", line 334, in _check_closed
RuntimeError: Event loop is closed
```
|
closed
|
2016-08-10T13:25:33Z
|
2016-08-10T14:11:41Z
|
https://github.com/aio-libs/aiomysql/issues/94
|
[] |
tkram01
| 2
|
littlecodersh/ItChat
|
api
| 52
|
On the CentOS 6.5 command line, the program exits immediately after printing the QR code
|
I tried both of the following:
1. itchat.auto_login( False , 'itchati.pkl', True )
2. itchat.auto_login( False , 'itchati.pkl', 2 )
Output:
Failed to get QR Code, please restart the program
|
closed
|
2016-07-29T07:15:48Z
|
2016-07-31T03:08:43Z
|
https://github.com/littlecodersh/ItChat/issues/52
|
[
"question"
] |
codebean
| 1
|
plotly/dash
|
data-science
| 2,505
|
[BUG] Error on hot-reload with client-side callbacks
|
**Describe your context**
An error is raised on hot-reload when there is a client-side callback. The browser then has to be reloaded.
```
dash 2.9.2
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
```
**Describe the bug**
```
Cannot read properties of undefined (reading 'apply')
(This error originated from the built-in JavaScript code that runs Dash apps. Click to see the full stack trace or open your browser's console.)
TypeError: Cannot read properties of undefined (reading 'apply')
at _callee3$ (http://localhost:8060/_dash-component-suites/dash/dash-renderer/build/dash_renderer.v2_9_2m1.dev.js:580:74)
at tryCatch (http://localhost:8060/_dash-component-suites/dash/dash-renderer/build/dash_renderer.v2_9_2m1.dev.js:411:2404)
at Generator._invoke (http://localhost:8060/_dash-component-suites/dash/dash-renderer/build/dash_renderer.v2_9_2m1.dev.js:411:1964)
at Generator.next (http://localhost:8060/_dash-component-suites/dash/dash-renderer/build/dash_renderer.v2_9_2m1.dev.js:411:3255)
at asyncGeneratorStep (http://localhost:8060/_dash-component-suites/dash/dash-renderer/build/dash_renderer.v2_9_2m1.dev.js:415:103)
at _next (http://localhost:8060/_dash-component-suites/dash/dash-renderer/build/dash_renderer.v2_9_2m1.dev.js:416:194)
at http://localhost:8060/_dash-component-suites/dash/dash-renderer/build/dash_renderer.v2_9_2m1.dev.js:416:364
at new Promise (<anonymous>)
at http://localhost:8060/_dash-component-suites/dash/dash-renderer/build/dash_renderer.v2_9_2m1.dev.js:416:97
at handleClientside (http://localhost:8060/_dash-component-suites/dash/dash-renderer/build/dash_renderer.v2_9_2m1.dev.js:532:28)
```
**Screenshots**

|
closed
|
2023-04-13T10:59:32Z
|
2023-04-13T13:26:14Z
|
https://github.com/plotly/dash/issues/2505
|
[] |
esalehim
| 1
|
aws/aws-sdk-pandas
|
pandas
| 2,542
|
empty parquets are not accessible while using `chunked=INTEGER` in `s3.read_parquet`
|
Hello! I am using `awswrangler` to read a dataset from a specific parquet folder on s3, so `s3.read_parquet` is very useful.
Currently, I need to read the dataset in chunks while having control of how many lines every chunk has, and for that reason I use the argument `chunked` with an integer instead of a bool. However, I've found a problem when the parquet file has no rows, only columns (note that it is not an empty file; it is a dataset with no records, but with columns). My current line to read the files is:
```
reader = s3.read_parquet('s3://folder/key/', chunked=500000, dtype_backend="pyarrow")
```
In the case where the parquet file has only columns, I cannot access any of its information. I need to be able to handle empty files, and ideally I would use the code below.
```
reader = s3.read_parquet('s3://folder/key/', chunked=500000, dtype_backend="pyarrow")
for df in reader:
if df.empty:
return empty result
else:
process rows
```
The `df.empty` part however returns nothing, and everything in the code block is ignored. I've tried some other commands and the results are always empty.
```
>>> for df in reader:
... print(df)
...
>>> for df in reader:
... print(dir(df))
...
>>> for df in reader:
... print(df.empty)
...
>>> for df in reader:
... print(df.shape)
...
>>>
```
I tried reading it with the dataset option and had the same result.
```
reader = s3.read_parquet('s3://folder/key/', chunked=500000, dataset=True, dtype_backend="pyarrow")
```
While reading with `chunked=True`, I can check that the object has attributes and methods, but trying to access them gives me nothing.
```
>>> reader = s3.read_parquet('s3://folder/key/', chunked=True, dtype_backend="pyarrow")
>>> for df in reader:
... print(dir(df))
...
[...columns..., ...attributes..., 'select_dtypes', 'sem', 'set_axis', 'set_flags', 'set_index', 'shape', 'shift', 'size', 'skew', 'sort_index', 'sort_values', 'squeeze', 'stack', 'std', 'style', 'sub', 'subtract', 'sum', 'swapaxes', 'swaplevel', 'tail', 'take', 'to_clipboard', 'to_csv', 'to_dict', 'to_excel', 'to_feather', 'to_gbq', 'to_hdf', 'to_html', 'to_json', 'to_latex', 'to_markdown', 'to_numpy', 'to_orc', 'to_parquet', 'to_period', 'to_pickle', 'to_records', 'to_sql', 'to_stata', 'to_string', 'to_timestamp', 'to_xarray', 'to_xml', 'transform', 'transpose', 'truediv', 'truncate', 'tz_convert', 'tz_localize', 'unstack', 'update', 'value_counts', 'values', 'var', 'where', 'xs']
>>> for df in reader:
... print(df.empty)
...
```
Above I left only a part of the output to show as an example, since I can't show the data.
Now, the only way I confirmed that `df` is actually an empty pandas dataframe is by using a combination of `chunked=True` and `dataset=True`:
```
>>> reader = s3.read_parquet('s3://folder/key/', chunked=True, dataset=True, dtype_backend='pyarrow')
>>> for df in reader:
... print(df.empty)
...
True
```
But again, with `chunked=True` I cannot control how many rows each iteration has.
My conditions are:
* Reading the entire input parquet folder directly from S3 is much more useful than iterating over files;
* I need to control the number of rows read per chunk when the dataframe is not empty;
* I need to check whether the dataframe is empty or not;
And my questions are:
* Is there a way to get everything I need with `awswrangler.s3`?
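One workaround sketch for the row-count requirement (hedged: `rebatch` is a helper name I made up, not part of `awswrangler`, and it assumes the `chunked=True` iterator yields pandas DataFrames): iterate with `chunked=True` and re-batch the rows yourself.

```python
import pandas as pd

def rebatch(frames, batch_size):
    """Re-chunk an iterator of DataFrames into batches of `batch_size` rows."""
    buffer, buffered = [], 0
    for df in frames:
        if df.empty:
            continue  # empty chunks are simply skipped
        buffer.append(df)
        buffered += len(df)
        while buffered >= batch_size:
            merged = pd.concat(buffer, ignore_index=True)
            yield merged.iloc[:batch_size]
            rest = merged.iloc[batch_size:]
            buffer, buffered = ([rest] if len(rest) else []), len(rest)
    if buffered:
        yield pd.concat(buffer, ignore_index=True)
```

This keeps chunk sizes under the caller's control even if the underlying iterator's own chunk sizes vary.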
|
closed
|
2023-12-04T21:21:12Z
|
2024-01-31T14:35:42Z
|
https://github.com/aws/aws-sdk-pandas/issues/2542
|
[
"question"
] |
milena-andreuzo
| 4
|
Kanaries/pygwalker
|
plotly
| 371
|
can i customize the dashboard?
|
Hi,
I want to display a pygwalker dashboard, but:
- with fewer buttons in the control panel (at the top)
- with default values for plot type, x-axis, and y-axis
- with the control panel (at the top) placed on the right
Can I do any of this?
Thanks
|
open
|
2023-12-23T07:29:35Z
|
2024-06-06T12:10:38Z
|
https://github.com/Kanaries/pygwalker/issues/371
|
[] |
iuiu34
| 2
|
wkentaro/labelme
|
deep-learning
| 1,221
|
The character 赣 causes an error when read with the tool.
|
### Provide environment information
The character 赣 causes an error when read with the tool.
### What OS are you using?
The character 赣 causes an error when read with the tool.
### Describe the Bug
The character 赣 causes an error when read with the tool.
### Expected Behavior
_No response_
### To Reproduce
_No response_
|
closed
|
2022-11-29T10:14:36Z
|
2022-12-12T01:40:23Z
|
https://github.com/wkentaro/labelme/issues/1221
|
[
"issue::bug"
] |
raymondwm
| 0
|
sqlalchemy/sqlalchemy
|
sqlalchemy
| 11,053
|
Using hybrid_property in select statements raises typing errors
|
### Ensure stubs packages are not installed
- [X] No sqlalchemy stub packages is installed (both `sqlalchemy-stubs` and `sqlalchemy2-stubs` are not compatible with v2)
### Verify if the api is typed
- [X] The api is not in a module listed in [#6810](https://github.com/sqlalchemy/sqlalchemy/issues/6810) so it should pass type checking
### Describe the typing issue
Using `hybrid_property` in `select` and `where` clauses raises typing errors. For example, a `str`-valued `hybrid_property` is typed as `hybrid_property[str]`, which matches none of the `select` overloads.
### To Reproduce
```python
from sqlalchemy import String, select
from sqlalchemy import func as sa_func
from sqlalchemy.ext.hybrid import hybrid_property
from sqlalchemy.orm import Mapped, mapped_column
from src.db.base_class import Base
class User(Base):
first_name: Mapped[str] = mapped_column(String(64), nullable=False)
last_name: Mapped[str] = mapped_column(String(64), nullable=False)
@hybrid_property
def full_name(self) -> str:
return f'{self.first_name} {self.last_name}'
@full_name.inplace.expression
@classmethod
def _full_name_expression(cls):
return sa_func.concat(cls.first_name, ' ', cls.last_name)
select_stmt = select(User.full_name)
# Pylance reports:
# No overloads for "select" match the provided argumentsPylancereportCallIssue
# _selectable_constructors.py(448, 5): Overload 11 is the closest match
# Argument of type "hybrid_property[str]" cannot be assigned to parameter "entities" of type "_ColumnsClauseArgument[Any]" in function "select"
# Type "hybrid_property[str]" cannot be assigned to type "_ColumnsClauseArgument[Any]"
# "hybrid_property[str]" is incompatible with "TypedColumnsClauseRole[Any]"
# "hybrid_property[str]" is incompatible with "ColumnsClauseRole"
# "hybrid_property[str]" is incompatible with "SQLCoreOperations[Any]"
# "hybrid_property[str]" is incompatible with "Type[Any]"
# "hybrid_property[str]" is incompatible with "Inspectable[_HasClauseElement]"
# "hybrid_property[str]" is incompatible with protocol "_HasClauseElement"
# "__clause_element__" is not present
# ...PylancereportArgumentType
# (property) full_name: hybrid_property[str]
```
### Error
```
# Copy the complete text of any errors received by the type checker(s).
Pylance reports:
No overloads for "select" match the provided argumentsPylancereportCallIssue
_selectable_constructors.py(448, 5): Overload 11 is the closest match
Argument of type "hybrid_property[str]" cannot be assigned to parameter "entities" of type "_ColumnsClauseArgument[Any]" in function "select"
Type "hybrid_property[str]" cannot be assigned to type "_ColumnsClauseArgument[Any]"
"hybrid_property[str]" is incompatible with "TypedColumnsClauseRole[Any]"
"hybrid_property[str]" is incompatible with "ColumnsClauseRole"
"hybrid_property[str]" is incompatible with "SQLCoreOperations[Any]"
"hybrid_property[str]" is incompatible with "Type[Any]"
"hybrid_property[str]" is incompatible with "Inspectable[_HasClauseElement]"
"hybrid_property[str]" is incompatible with protocol "_HasClauseElement"
"__clause_element__" is not present
...PylancereportArgumentType
(property) full_name: hybrid_property[str]
```
### Versions
- OS: macOS Monterey 12.6.5 (21G531)
- Python: 3.10.9
- SQLAlchemy: 2.0.7
- Type checker (eg: pyright (Pylance v2024.2.2)):
### Additional context
_No response_
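For what it's worth, the typing pattern involved can be reproduced with a minimal stdlib descriptor that overloads `__get__`, so class-level access is typed differently from instance-level access — roughly what newer SQLAlchemy releases do for `hybrid_property` (illustrative sketch only, not SQLAlchemy's actual code; upgrading may already resolve the Pylance error):

```python
from __future__ import annotations
from typing import Any, Callable, Generic, TypeVar, overload

T = TypeVar("T")

class hybrid(Generic[T]):
    """Minimal hybrid-property sketch: instance access returns T,
    class access returns the descriptor, which a checker can type
    as an expression-like object instead of a plain property."""

    def __init__(self, fget: Callable[[Any], T]) -> None:
        self.fget = fget

    @overload
    def __get__(self, obj: None, owner: type) -> hybrid[T]: ...
    @overload
    def __get__(self, obj: Any, owner: type) -> T: ...
    def __get__(self, obj, owner):
        if obj is None:
            return self  # class-level access: yield the descriptor itself
        return self.fget(obj)  # instance-level access: compute the value

class User:
    def __init__(self, first: str, last: str) -> None:
        self.first_name = first
        self.last_name = last

    @hybrid
    def full_name(self) -> str:
        return f"{self.first_name} {self.last_name}"
```

With the class-level return type annotated this way, a checker sees `User.full_name` as something `select`-compatible rather than a bare property.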
|
closed
|
2024-02-24T07:35:58Z
|
2024-02-24T09:01:27Z
|
https://github.com/sqlalchemy/sqlalchemy/issues/11053
|
[
"typing"
] |
imeckr
| 1
|
gunthercox/ChatterBot
|
machine-learning
| 2,245
|
Changing the input method to voice
|
Hi, a total newbie here.
I'm trying to get the chatbot to recognize voice input, but all it returns is `None`.
I read that you can create an input adapter, but I don't have any idea where to start. It was originally in Portuguese; the comments below are translated to English, with identifiers and user-facing strings kept as-is.
My code so far is:
```python
import pyttsx3
import speech_recognition as sr
from chatterbot import ChatBot
from chatterbot.trainers import ListTrainer


class ENGSM:
    ISO_639_1 = 'en_core_web_sm'


engine = pyttsx3.init('sapi5')
voices = engine.getProperty('voices')
engine.setProperty('voice', voices[0].id)


def speak(text):
    engine.say(text)
    engine.runAndWait()


def ouvir_microfone():
    # Enable the user's microphone
    microfone = sr.Recognizer()
    # Use the microphone
    with sr.Microphone() as source:
        # Run a noise-reduction pass on the sound
        microfone.adjust_for_ambient_noise(source)
        # Prompt the user to say something
        print("Diga alguma coisa ")
        # Store what was said in a variable
        audio = microfone.listen(source)
    frase = ""  # fix: was unbound when recognition failed
    try:
        # Pass the audio to the pattern-recognition algorithm
        frase = microfone.recognize_google(audio, language='pt-BR')
        # Print the recognized phrase
        print("Você disse: " + frase)
    except sr.UnknownValueError:  # fix: was the typo sr.UnkownValueError
        # The speech pattern was not recognized
        print("Não entendi")
        speak("Não te entendi")
    return frase


speak('Carregando sua assistente pessoal')
print('Carregando sua assistente pessoal')
speak("Olá! Diga qual nome você quer dar para mim ")
ia_name = ouvir_microfone()  # fix: was wrapped in a list, so str() printed brackets
speak("O nome que você escolheu para mim foi " + ia_name)

bot = ChatBot(ia_name, tagger_language=ENGSM,
              logic_adapters=['chatterbot.logic.MathematicalEvaluation'],
              # note: input adapters were removed in ChatterBot 1.0,
              # so this kwarg may be ignored by current versions
              input_adapter="chatterbot.input.VariableInputTypeAdapter")

trainer = ListTrainer(bot)
trainer.train(["Oi", "Olá", "Qual seu nome", "meu nome é " + ia_name])

while True:
    try:
        speak("Oi, eu sou a I A " + ia_name + ". Qual o seu comando?")
        comando = ouvir_microfone()
        print("Sua solicitação foi: ", comando)
        speak("Sua solicitação foi: " + comando)
        bot_resposta = bot.get_response(comando)
        print(bot_resposta)
        speak(str(bot_resposta))
    except (KeyboardInterrupt, EOFError, SystemExit):
        break
```
|
closed
|
2022-04-10T12:46:02Z
|
2022-04-11T09:41:12Z
|
https://github.com/gunthercox/ChatterBot/issues/2245
|
[] |
NoronhaT
| 0
|
falconry/falcon
|
api
| 1,999
|
`parse_query_string()`: change default value of `csv` to False
|
Some URL encoders don't encode commas as %2C.
In such cases, the Falcon query parser creates an array instead of a string.
If the query string contains "ABC,ABC" instead of "ABC%2CABC", then fetching it with
`msg = falcon.uri.parse_query_string(req.query_string).get("message")`
yields an array instead of a string:
`msg = ['ABC', 'ABC']`
which is incorrect.
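For comparison, the standard-library parser shows the behavior the reporter expects: a comma inside a value is just data, and a value only becomes a list when the key repeats (stdlib sketch; per the issue title, Falcon itself later made `csv=False` the default):

```python
from urllib.parse import parse_qs

# A comma in a value is kept as-is — never treated as a list separator:
params = parse_qs("message=ABC,ABC")
assert params["message"] == ["ABC,ABC"]

# A value only becomes multi-element when the key itself repeats:
params = parse_qs("message=ABC&message=ABC")
assert params["message"] == ["ABC", "ABC"]
```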
|
closed
|
2022-01-01T09:07:51Z
|
2023-12-26T16:51:02Z
|
https://github.com/falconry/falcon/issues/1999
|
[
"breaking-change",
"question"
] |
tahseenjamal
| 1
|
pytest-dev/pytest-django
|
pytest
| 1,075
|
Capture messages from Django messages framework
|
Similar to how pytest shows the captured stdout and log calls, it would be amazing to get the messages from the [Django messages framework](https://docs.djangoproject.com/en/4.2/ref/contrib/messages/). I'm not sure whether this is realistically doable, but I imagine something like the following:
```
---------------------------------------- Captured log setup ----------------------------------------
INFO auth:auth_signals.py:36 login user=TeamMember, ip=None
--------------------------------------- Captured stdout call ---------------------------------------
{'Content-Type': 'text/html; charset=utf-8', 'X-Frame-Options': 'DENY', 'Vary': 'Cookie, Accept-Language, origin', 'Content-Length': '31904', 'Content-Language': 'de', 'X-Content-Type-Options': 'nosniff', 'Referrer-Policy': 'same-origin'}
---------------------------------------- Captured log call -----------------------------------------
DEBUG some_module:58 Some debug message
WARNING some_other_module:106 Something went wrong
---------------------------------------- Captured Django messages call -----------------------------
SUCCESS "Hello user, this action worked as expected"
WARNING "Hello user, this looks a bit weird"
ERROR "Unexpected error happened, please contact an administrator"
```
This would help a lot to debug specific problems which were not logged explicitly.
If somebody has an idea how something like this could be achieved, I would be happy to contribute this feature.
Probably we would somehow need to pass this information from the Django request to pytest, so maybe a special middleware during testing?
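One possible shape for the "special middleware during testing" idea — a hypothetical recorder that copies each message into a list a plugin could later print as its own captured section (sketch only: the class name is made up, and a real version would read `django.contrib.messages.get_messages(request)` rather than the private `_messages` attribute used here):

```python
class MessageRecorder:
    """Test-only, Django-middleware-shaped recorder: after the view runs,
    copy every message into a class-level list that a pytest plugin
    could print as a 'Captured Django messages' section."""

    captured = []

    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        response = self.get_response(request)
        # Hypothetical: real Django code would use get_messages(request)
        for message in getattr(request, "_messages", []) or []:
            MessageRecorder.captured.append(str(message))
        return response
```

A plugin would then clear `captured` per test and attach its contents to the test report on failure.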
|
closed
|
2023-10-10T13:30:56Z
|
2023-10-10T19:38:34Z
|
https://github.com/pytest-dev/pytest-django/issues/1075
|
[] |
timobrembeck
| 1
|
ets-labs/python-dependency-injector
|
asyncio
| 383
|
Check if required container dependencies are provided, aka containers.check_dependencies(instance)
|
Need to implement a checker for a container's required dependencies.
Ok -- very good. Yes -- I was already trying to do what you suggested with overrides. Now with the defaults on the individual Dependency it's cleaner. Checking dependencies would be nice, but I guess that by design you are able to provide them "incrementally", so requiring them all up front would break other use cases. Perhaps a `containers.checkDependencies( instance )` would be nice at some point. Thanks.
I guess we are all set as far as I'm concerned on this issue. Closing, and thank you!
_Originally posted by @shaunc in https://github.com/ets-labs/python-dependency-injector/issues/336#issuecomment-770101562_
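The requested semantics can be sketched in a few lines (toy classes, not the library's implementation — newer `python-dependency-injector` releases appear to ship a real `check_dependencies()`, so check the current docs):

```python
class Dependency:
    """Placeholder provider: undefined until something is provided."""
    def __init__(self, provided=None):
        self.provided = provided

class Container:
    def __init__(self, **providers):
        self.providers = providers

    def check_dependencies(self):
        """Raise if any Dependency provider is still undefined."""
        missing = [name for name, p in self.providers.items()
                   if isinstance(p, Dependency) and p.provided is None]
        if missing:
            raise RuntimeError("Undefined dependencies: " + ", ".join(missing))

container = Container(db=Dependency("sqlite"), cache=Dependency())
```

Calling the check up front fails fast on `cache`, while still allowing dependencies to be provided incrementally before the check runs.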
|
closed
|
2021-01-30T00:19:51Z
|
2021-02-15T19:23:28Z
|
https://github.com/ets-labs/python-dependency-injector/issues/383
|
[
"feature"
] |
rmk135
| 2
|
pytest-dev/pytest-xdist
|
pytest
| 876
|
What's the reason for ensuring 2 tests per node?
|
https://github.com/pytest-dev/pytest-xdist/blob/master/src/xdist/scheduler/load.py#L266
The comment doesn't seem to explain why we can't always do round robin.
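The usual rationale (hedged — the code comment doesn't spell it out) is pipelining: with at least two pending tests queued per worker, a worker that finishes a test can start the next one immediately while the finished-test report makes the round trip to the controller, which then refills the queue. With pure round robin and one test per node, every worker would idle for one round trip between tests. A toy sketch of the refill loop (illustrative, not xdist's code):

```python
from collections import deque

def refill(pending, node_queues, per_node=2):
    """Top each node's local queue up to `per_node` pending tests."""
    for queue in node_queues:
        while len(queue) < per_node and pending:
            queue.append(pending.popleft())

# Three workers, ten pending tests: each worker gets a two-deep pipeline.
pending = deque(range(10))
nodes = [deque(), deque(), deque()]
refill(pending, nodes)
```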
|
closed
|
2023-02-10T23:36:28Z
|
2023-03-01T19:55:39Z
|
https://github.com/pytest-dev/pytest-xdist/issues/876
|
[] |
PrincipalsOffice
| 10
|
graphql-python/graphene
|
graphql
| 707
|
DataLoader pattern with SQL parent/child foreign key relationships
|
closed
|
2018-04-06T00:41:13Z
|
2018-04-06T00:41:17Z
|
https://github.com/graphql-python/graphene/issues/707
|
[] |
bwells
| 0
|
|
huggingface/transformers
|
python
| 36,564
|
Add support for StableAdamW optimizer in Trainer
|
### Feature request
StableAdamW is an optimizer first introduced in [Stable and low-precision training for large-scale vision-language models](https://arxiv.org/pdf/2304.13013) — a hybrid of AdamW and Adafactor that leads to more stable training. Most notably, it was used in the [modernBERT paper](https://arxiv.org/pdf/2412.13663):
> StableAdamW’s learning rate clipping outperformed standard gradient clipping on downstream tasks and led to more stable training
It would be great if this were available as an optimizer in `Trainer`!
### Motivation
More models in the future may use StableAdamW because of its success in training modernBERT, and having it as an option in `Trainer` (as `optim` in `TrainingArguments`) would be convenient.
### Your contribution
I'm interested in contributing! The modernBERT paper uses the implementation from [optimi](https://github.com/warner-benjamin/optimi), which can be added as an import. I'd love to submit a PR.
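As a rough illustration of what the optimizer does, here is the update rule as I read it from the paper — AdamW plus a per-tensor learning-rate clip by the RMS of g²/v̂ — sketched on plain Python floats (`stable_adamw_step` is my name; this is not Trainer-ready code, and in the meantime one could pass an optimi optimizer to `Trainer` via its `optimizers=(optimizer, scheduler)` argument):

```python
import math

def stable_adamw_step(params, grads, m, v, t, lr=0.05, b1=0.9, b2=0.99,
                      eps=1e-8, wd=0.0):
    """One StableAdamW step on a flat list of floats (simplified sketch
    of the rule from Wortsman et al. 2023, as I read the paper)."""
    m = [b1 * mi + (1 - b1) * g for mi, g in zip(m, grads)]
    v = [b2 * vi + (1 - b2) * g * g for vi, g in zip(v, grads)]
    mhat = [mi / (1 - b1 ** t) for mi in m]          # bias-corrected moments
    vhat = [vi / (1 - b2 ** t) for vi in v]
    # the "stable" part: clip the learning rate by the RMS of g^2 / v_hat
    rms = math.sqrt(sum(g * g / max(vh, eps) for g, vh in zip(grads, vhat))
                    / len(grads))
    eta = lr / max(1.0, rms)
    params = [p - eta * (mh / (math.sqrt(vh) + eps) + wd * p)
              for p, mh, vh in zip(params, mhat, vhat)]
    return params, m, v
```

The clip replaces global gradient clipping: large, noisy updates shrink the effective learning rate instead of being truncated.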
|
open
|
2025-03-05T15:14:19Z
|
2025-03-06T10:38:17Z
|
https://github.com/huggingface/transformers/issues/36564
|
[
"Feature request"
] |
capemox
| 2
|